Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>For example, I have a 50 GiB PV/PVC granted to one Pod, and I just want to check how much of the storage is used.</p>
<p>The way I am doing it now is to set up a <strong>busybox</strong> pod that mounts the same PVC, then exec into the <strong>busybox</strong> pod and run <code>df -h</code> to check the storage.</p>
<p>I just want to know if there is a more efficient way to do the same thing.</p>
| Vampire_D | <p>Depending on how often you need to do this you might look at the <code>df-pv</code> plugin for <code>kubectl</code> <a href="https://github.com/yashbhutwala/kubectl-df-pv" rel="noreferrer">https://github.com/yashbhutwala/kubectl-df-pv</a></p>
<p>It does exactly what you are asking across all the pvs in a namespace or cluster. Once you've got it installed, just run <code>kubectl df-pv</code> and you are set.</p>
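<p>A quick sketch of getting it running, assuming you use the <code>krew</code> plugin manager (the project README also describes other install methods):</p>
<pre><code># install the plugin (assumes krew is already set up)
kubectl krew install df-pv

# show disk usage for PVs, optionally limited to one namespace
kubectl df-pv
kubectl df-pv -n my-namespace
</code></pre>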
| Kav Latiolais |
<p>I have purchased 3 VPSes with IP addresses like</p>
<pre><code> 172.168.1.146
72.168.43.245
98.156.56.123
</code></pre>
<p>I have got an ingress controller.</p>
<p>I have installed MetalLB, which load-balances traffic and sends it to any of the IP addresses.</p>
<p>Now how do I provide a static IP to my kubernetes cluster endpoint?</p>
<p>Should I purchase a virtual IP and configure it in MetalLB? If so, how?</p>
<p>or</p>
<p>Should I configure keepalived on all of the above nodes and configure a virtual IP? In this case, can I configure the same virtual IP on all the nodes? Will there not be an IP conflict? If one of the nodes dies, will the virtual IP automatically be assigned to another node which is alive? If so, what would be the usual duration for automatically assigning the IP address to another node which is alive?</p>
| jeril | <p>MetalLB is currently in beta. Read about the Project <a href="https://metallb.universe.tf/concepts/maturity/" rel="nofollow noreferrer">maturity</a> and make sure you inform yourself by reading the official documentation thoroughly.</p>
<p>MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">Installation</a> instructions.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.</p>
<p>Given the following 3-node Kubernetes cluster (the external IP is added as an example):</p>
<pre><code>$ kubectl get node
NAME STATUS ROLES EXTERNAL-IP
host-1 Ready master 203.0.113.1
host-2 Ready node 203.0.113.2
host-3 Ready node 203.0.113.3
</code></pre>
<p>After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.10-203.0.113.15
</code></pre>
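<p>Once the pool is in place, you can check that the ingress-nginx Service actually received one of these addresses; a quick check (the namespace and Service name below are assumptions, adjust them to your install):</p>
<pre><code># the EXTERNAL-IP column should show an address from the pool, e.g. 203.0.113.10
kubectl get svc -n ingress-nginx ingress-nginx
</code></pre>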
| Kailash |
<p>I am working through "learn kubernetes the hard way" and am at the "bootstrapping the etcd cluster" step: <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md</a></p>
<p>I have run into a command that causes a timeout:</p>
<pre><code>somersbmatthews@controller-0:~$ { sudo systemctl daemon-reload; sudo systemctl enable etcd; sudo systemctl start etcd; }
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xe" for details.
</code></pre>
<p>Here I follow the above recommendations:</p>
<p>This is the first thing the CLI asked me to check:</p>
<pre><code>somersbmatthews@controller-0:~$ systemctl status etcd.service
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: activating (start) since Wed 2020-12-02 03:15:05 UTC; 34s ago
Docs: https://github.com/coreos
Main PID: 49251 (etcd)
Tasks: 8 (limit: 9544)
Memory: 10.2M
CGroup: /system.slice/etcd.service
└─49251 /usr/local/bin/etcd --name controller-0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file>
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 is starting a new election at term 570
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 became candidate at term 571
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 571
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 is starting a new election at term 571
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 became candidate at term 572
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 572
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
</code></pre>
<p>The second thing the CLI asked me to check:</p>
<pre><code>somersbmatthews@controller-0:~$ journalctl -xe
-- A stop job for unit etcd.service has finished.
--
-- The job identifier is 3597 and the job result is done.
Dec 02 03:05:32 controller-0 systemd[1]: Starting etcd...
-- Subject: A start job for unit etcd.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit etcd.service has begun execution.
--
-- The job identifier is 3597.
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Dec 02 03:05:32 controller-0 etcd[48861]: etcd Version: 3.4.10
Dec 02 03:05:32 controller-0 etcd[48861]: Git SHA: 18dfb9cca
Dec 02 03:05:32 controller-0 etcd[48861]: Go Version: go1.12.17
Dec 02 03:05:32 controller-0 etcd[48861]: Go OS/Arch: linux/amd64
Dec 02 03:05:32 controller-0 etcd[48861]: setting maximum number of CPUs to 2, total number of available CPUs is 2
Dec 02 03:05:32 controller-0 etcd[48861]: the server is already initialized as member before, starting as etcd member...
Dec 02 03:05:32 controller-0 etcd[48861]: peerTLS: cert = /etc/etcd/kubernetes.pem, key = /etc/etcd/kubernetes-key.pem, trusted-ca = /etc/etcd/ca.pem, cli>
Dec 02 03:05:32 controller-0 etcd[48861]: name = controller-0
Dec 02 03:05:32 controller-0 etcd[48861]: data dir = /var/lib/etcd
Dec 02 03:05:32 controller-0 etcd[48861]: member dir = /var/lib/etcd/member
Dec 02 03:05:32 controller-0 etcd[48861]: heartbeat = 100ms
Dec 02 03:05:32 controller-0 etcd[48861]: election = 1000ms
Dec 02 03:05:32 controller-0 etcd[48861]: snapshot count = 100000
Dec 02 03:05:32 controller-0 etcd[48861]: advertise client URLs = https://10.240.0.10:2379
Dec 02 03:05:32 controller-0 etcd[48861]: initial advertise peer URLs = https://10.240.0.10:2380
Dec 02 03:05:32 controller-0 etcd[48861]: initial cluster =
Dec 02 03:05:32 controller-0 etcd[48861]: restarting member f98dc20bce6225a0 in cluster 3e7cc799faffb625 at commit index 3
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=()
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 became follower at term 183
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: newRaft f98dc20bce6225a0 [peers: [], term: 183, commit: 3, applied: 0, lastindex: >
Dec 02 03:05:32 controller-0 etcd[48861]: simple token is not cryptographically signed
Dec 02 03:05:32 controller-0 etcd[48861]: starting server... [version: 3.4.10, cluster version: to_be_decided]
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=(4203990652121993521)
Dec 02 03:05:32 controller-0 etcd[48861]: added member 3a57933972cb5131 [https://10.240.0.12:2380] to cluster 3e7cc799faffb625
Dec 02 03:05:32 controller-0 etcd[48861]: starting peer 3a57933972cb5131...
Dec 02 03:05:32 controller-0 etcd[48861]: started HTTP pipelining with peer 3a57933972cb5131
Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer)
Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer)
Dec 02 03:05:32 controller-0 etcd[48861]: started peer 3a57933972cb5131
somersbmatthews@controller-0:~$ journalctl -xe
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 is starting a new election at term 224
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 became candidate at term 225
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 225
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 is starting a new election at term 225
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 became candidate at term 226
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 226
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 is starting a new election at term 226
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 became candidate at term 227
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 227
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:35 controller-0 etcd[48861]: publish error: etcdserver: request timed out
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 is starting a new election at term 227
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 became candidate at term 228
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 228
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 is starting a new election at term 228
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 became candidate at term 229
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 229
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 is starting a new election at term 229
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 became candidate at term 230
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 230
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 is starting a new election at term 230
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 became candidate at term 231
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 231
</code></pre>
<p>So I redo the step that I think should allow the traffic that is being refused above, <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules</a>:</p>
<pre><code>somersbmatthews@controller-0:~$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
> --allow tcp,udp,icmp \
> --network kubernetes-the-hard-way \
> --source-ranges 10.240.0.0/24,10.200.0.0/16
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/k8s-hard-way-2571/global/firewalls/kubernetes-the-hard-way-allow-internal' already exists
</code></pre>
<p>but I'm still getting the timeout errors above.</p>
<p>Any help is appreciated, thanks :)</p>
| Somers Matthews | <p>It looks like you would need to change the source ranges of the VPC network with the <a href="https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/update" rel="nofollow noreferrer">update</a> subcommand instead; try:</p>
<pre><code>$ gcloud compute firewall-rules update kubernetes-the-hard-way-allow-internal --source-ranges 10.240.0.0/24,10.200.0.0/16
</code></pre>
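<p>After updating, you can confirm that the rule actually carries both ranges before retrying the etcd bootstrap, for example:</p>
<pre><code># the sourceRanges field in the output should list both 10.240.0.0/24 and 10.200.0.0/16
$ gcloud compute firewall-rules describe kubernetes-the-hard-way-allow-internal
</code></pre>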
| Arnau C. |
<p>The Kafka Elasticsearch connector "confluentinc-kafka-connect-elasticsearch-5.5.0" doesn't work on-prem:</p>
<pre><code>"java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet\n\tat io.searchbox.client.AbstractJestClient.<init>(AbstractJestClient.java:38)\n\tat io.searchbox.client.http.JestHttpClient.<init>(JestHttpClient.java:43)\n\tat io.searchbox.client.JestClientFactory.getObject(JestClientFactory.java:51)\n\tat io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:149)\n\tat io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:141)\n\tat io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:122)\n\tat io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:51)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:305)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"
</code></pre>
<p>I'm also using the MSSQL connector and S3 connector plugins in the same path; they work, but the Elasticsearch plugin gives a NoClassDefFound error. This is my folder structure in the worker:</p>
<pre><code>[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls
confluentinc-kafka-connect-elasticsearch-5.5.0 confluentinc-kafka-connect-s3-5.5.0 debezium-connector-sqlserver kafka-connect-shell-sink-5.1.0
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls -l
total 16
drwxrwxr-x 2 root root 4096 May 25 22:15 confluentinc-kafka-connect-elasticsearch-5.5.0
drwxrwxr-x 5 root root 4096 May 15 02:26 confluentinc-kafka-connect-s3-5.5.0
drwxrwxr-x 2 root root 4096 May 15 02:26 debezium-connector-sqlserver
drwxrwxr-x 4 root root 4096 May 15 02:26 kafka-connect-shell-sink-5.1.0
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls debezium-connector-sqlserver
debezium-api-1.1.1.Final.jar debezium-connector-sqlserver-1.1.1.Final.jar debezium-core-1.1.1.Final.jar mssql-jdbc-7.2.2.jre8.jar
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls confluentinc-kafka-connect-s3-5.5.0
assets etc lib manifest.json
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ ls confluentinc-kafka-connect-elasticsearch-5.5.0 -l
total 8356
-rw-r--r-- 1 root root 17558 May 25 11:53 common-utils-5.5.0.jar
-rw-r--r-- 1 root root 263965 May 25 11:53 commons-codec-1.9.jar
-rw-r--r-- 1 root root 61829 May 25 11:53 commons-logging-1.2.jar
-rw-r--r-- 1 root root 79845 May 25 19:34 compress-lzf-1.0.3.jar
-rw-r--r-- 1 root root 241622 May 25 11:53 gson-2.8.5.jar
-rw-r--r-- 1 root root 2329410 May 25 19:34 guava-18.0.jar
-rw-r--r-- 1 root root 1140290 May 25 19:34 hppc-0.7.1.jar
-rw-r--r-- 1 root root 179335 May 25 11:53 httpasyncclient-4.1.3.jar
-rw-r--r-- 1 root root 747794 May 25 11:53 httpclient-4.5.3.jar
-rw-r--r-- 1 root root 323824 May 25 11:53 httpcore-4.4.6.jar
-rw-r--r-- 1 root root 356644 May 25 11:53 httpcore-nio-4.4.6.jar
-rw-r--r-- 1 root root 280996 May 25 19:34 jackson-core-2.8.2.jar
-rw-r--r-- 1 root root 22191 May 25 11:53 jest-6.3.1.jar
-rw-r--r-- 1 root root 276130 May 25 11:53 jest-common-6.3.1.jar
-rw-r--r-- 1 root root 621992 May 25 19:34 joda-time-2.8.2.jar
-rw-r--r-- 1 root root 62226 May 25 19:34 jsr166e-1.1.0.jar
-rw-r--r-- 1 root root 83179 May 25 11:53 kafka-connect-elasticsearch-5.5.0.jar
-rw-r--r-- 1 root root 1330394 May 25 19:34 netty-3.10.5.Final.jar
-rw-r--r-- 1 root root 41139 May 25 11:53 slf4j-api-1.7.26.jar
-rw-r--r-- 1 root root 49754 May 25 19:34 t-digest-3.0.jar
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$
</code></pre>
<p>I've read in some posts that there are missing jar files / dependencies in the Elasticsearch connector; I added them as you can see above, but no luck.</p>
<p>This is my connector config:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: "elastic-files-connector"
  labels:
    strimzi.io/cluster: mssql-minio-connect-cluster
spec:
  class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  config:
    connection.url: "https://escluster-es-http.dev-kik.io:9200"
    connection.username: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.username}"
    connection.password: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.password}"
    flush.timeout.ms: 10000
    max.buffered.events: 20000
    batch.size: 2000
    topics: filesql1.dbo.Files
    tasks.max: '1'
    type.name: "_doc"
    max.request.size: "536870912"
    key.converter: io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    value.converter: io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    schema.compatibility: NONE
    errors.tolerance: all
    errors.deadletterqueue.topic.name: "dlq_filesql1.dbo.Files"
    errors.deadletterqueue.context.headers.enable: "true"
    errors.log.enable: "true"
    behavior.on.null.values: "ignore"
    errors.retry.delay.max.ms: 60000
    errors.retry.timeout: 300000
    behavior.on.malformed.documents: warn
</code></pre>
<p>I changed the username/password to plaintext; no luck.
I tried both http and https for the Elasticsearch connection; no luck.</p>
<p>This is my elasticsearch srv information:</p>
<pre><code>devadmin@vdi-mk2-ubn:~/kafka$ kubectl get svc -n elastic-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elastic-webhook-server ClusterIP 10.104.95.105 <none> 443/TCP 21h
escluster-es-default ClusterIP None <none> <none> 8h
escluster-es-http LoadBalancer 10.108.69.136 192.168.215.35 9200:31214/TCP 8h
escluster-es-transport ClusterIP None <none> 9300/TCP 8h
kibana-kb-http LoadBalancer 10.102.81.206 192.168.215.34 5601:31315/TCP 20h
devadmin@vdi-mk2-ubn:~/kafka$
</code></pre>
<p>I can connect to the Elasticsearch service from the Kafka Connect worker both ways:</p>
<pre><code>[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://10.108.69.136:9200"
{
"name" : "escluster-es-default-0",
"cluster_name" : "escluster",
"cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://escluster-es-http.dev-kik.io:9200"
{
"name" : "escluster-es-default-0",
"cluster_name" : "escluster",
"cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$
</code></pre>
<p>No matter what I do, the exception never changes at all. I don't know what else I can try. My brain is burning; I'm about to go crazy.</p>
<p>Am I missing anything, or could you please advise how you run this connector on-prem on Kubernetes?</p>
<p>Thanks & regards</p>
| Tireli Efe | <p>I'm using <code>kafka_2.12-2.5.0</code> and I had the same problem. I noticed that the guava jar is missing in $KAFKA_HOME/libs with respect to Kafka <code>2.4.0</code>. As a workaround, I copied the jar manually (<code>guava-20.0.jar</code>) from the previous Kafka distribution and everything worked.</p>
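<p>For a containerized Connect worker like the one in the question, the same workaround roughly means putting the jar on the worker's classpath; a sketch, assuming <code>$KAFKA_HOME</code> points at the Kafka install inside the image (paths and version are assumptions, adapt to your image, e.g. in a custom Dockerfile layer):</p>
<pre><code># fetch guava 20.0 from Maven Central and drop it into Kafka's libs directory
# (assumed location; adjust to where your Connect image keeps its libraries)
curl -fL -o "$KAFKA_HOME/libs/guava-20.0.jar" \
  https://repo1.maven.org/maven2/com/google/guava/guava/20.0/guava-20.0.jar
</code></pre>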
| Massimo Callisto |
<p>I have this pod definition file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-docker-soc
spec:
  containers:
  - name: ubuntu-docker-soc
    image: rajendarre/ubuntu-docker
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    volumeMounts:
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  restartPolicy: Never
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
</code></pre>
<p>I have created the pod successfully,
but the command <code>kubectl get pv</code> is not showing any volumes.
As I have the volume dockersock, it should show up as a volume.
Please let me know if my understanding is correct?</p>
| Rajendar Talatam | <p>A PersistentVolume is a <code>provisioned</code> cluster resource, whereas a local file or directory that you mount from the underlying node directly into your pod is not a cluster resource that requires provisioning. Therefore, in your example, <code>kubectl get pv</code> will not show you anything.</p>
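<p>In other words, something only shows up under <code>kubectl get pv</code> if a PersistentVolume object was created for it; a minimal sketch of a hostPath-backed PV, purely for illustration (not something you would normally do for the Docker socket):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv   # this name is what "kubectl get pv" would list
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/example       # assumed path on the node
</code></pre>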
<p>Here's a good <a href="https://medium.com/avmconsulting-blog/persistent-volumes-pv-and-claims-pvc-in-kubernetes-bd76923a61f6" rel="nofollow noreferrer">discussion</a> about PV/PVC on major cloud providers.</p>
| gohm'c |
<p>When I try to deploy a new image to Kubernetes, I get this error:</p>
<blockquote>
<p><code>**unable to decode "K8sDeploy.yaml": no kind "Deployment" is registered for version "apps/v1"** </code></p>
</blockquote>
<p>This error began when I updated the Kubernetes version; here is my version info:</p>
<pre><code>Client Version: v1.19.2
Server Version: v1.16.13
</code></pre>
<p><a href="https://i.stack.imgur.com/2uRGJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2uRGJ.jpg" alt="enter image description here" /></a></p>
<p>I also tried to build from my localhost and it does work, but from Jenkins it doesn't.</p>
<p>Does somebody know how to solve this?</p>
| Gabriel Aguiar | <p>I updated my Kubernetes version from 1.5 to 1.9. When I force the kubectl command it does work, just from Jenkins it doesn't. As you requested, here is my k8sdeploy.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: cbbox
  labels:
    app: cbbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cbbox
  template:
    metadata:
      labels:
        app: cbbox
    spec:
      containers:
      - image: myregistryrepository
        name: cbbox
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: SECRET_DB_IP
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_IP
        - name: SECRET_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_PASSWORD
        - name: SECRET_DB_USER
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_USER
        - name: SECRET_LDAP_DOMAIN
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_DOMAIN
        - name: SECRET_LDAP_URLS
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_URLS
        - name: SECRET_LDAP_BASE_DN
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_BASE_DN
        - name: SECRET_TIME_ZONE
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_TIME_ZONE
      imagePullSecrets:
      - name: acrcredentials
---
apiVersion: v1
kind: Service
metadata:
  name: cbbox
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: cbbox
</code></pre>
| Gabriel Aguiar |
<p>If I rollback a deployment to a previous version as the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-to-a-previous-revision" rel="nofollow noreferrer">docs specify</a>:</p>
<pre><code>kubectl rollout undo deployment.v1.apps/nginx-deployment
kubectl rollout status deployment.v1.apps/nginx-deployment
</code></pre>
<p>Say we start at version A, update the deployment to version B, and use rollout undo to go back to version A. Is there any way to roll forward and get back to version B?</p>
| jjbskir | <p>The newest version should be in your history.</p>
<pre><code>kubectl rollout history deployment.v1.apps/nginx-deployment
</code></pre>
<p>Find the newest revision and plug it in below as <code>newest-version</code>:</p>
<pre><code>kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=<newest-version>
</code></pre>
| Casper Dijkstra |
<p>I am relatively new to Kubernetes, and my current task at work is to debug pod evictions.
I'm trying to replicate the behaviour on a local k8s cluster in Minikube.
So far I just cannot get any pods to be evicted.</p>
<p>Can you help me trigger this mechanism?</p>
| Diogo Cardoso | <p>The eviction of pods is managed by the QoS classes (Quality of Service of pods).</p>
<p>There are 3 categories:</p>
<ul>
<li>Guaranteed (limits = requests for CPU and RAM) - not evictable</li>
<li>Burstable</li>
<li>BestEffort</li>
</ul>
<p>If you want to test this mechanism, you can scale up a pod that consumes a lot of memory or CPU, and before that launch your example pods with different requests and limits to test this behavior. This only applies to eviction, so your pods must already be started before the CPU/memory load.</p>
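<p>For example, a BestEffort pod (no requests or limits at all) that allocates a lot of memory is a good eviction candidate under node memory pressure; a rough sketch, assuming the commonly used <code>polinux/stress</code> image:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: memory-hog        # BestEffort: no resources section, so it is evicted first
spec:
  containers:
  - name: stress
    image: polinux/stress # assumed image; any memory-hungry workload works
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "1G", "--vm-hang", "0"]
</code></pre>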
<p>Afterwards, if you want to test the scheduling mechanism at launch time, you can configure a priorityClassName to schedule a pod even if the cluster is full.</p>
<p>For example, if your cluster is full you can't schedule a new pod, because your pod doesn't have sufficient priority.</p>
<p>If you still want to schedule a pod despite that, you can add the priorityClassName system-node-critical or create your own PriorityClass; one of the pods with a lower priority will be evicted and your pod will be launched.</p>
| breizh5729 |
<p>I have set up a Google Cloud Platform kubernetes cluster (and Container Registry) with source code on GitHub. Source code is divided into folders with separate Dockerfiles for each microservice.</p>
<p>I want to set up CI/CD using GitHub actions.</p>
<p>As far as I understand, the <a href="https://github.com/actions/starter-workflows/blob/d9236ebe5585b1efd5732a29ea126807279ccd56/ci/google.yml" rel="nofollow noreferrer">default GKE workflow</a> will connect to gcloud using secrets, build the images and push them to Container Registry. And then perform an update.</p>
<h2>My questions</h2>
<ul>
<li>How is the deployment performed?</li>
<li>What is <strong>kustomize</strong> for?</li>
<li>Do I have to configure anything on gcloud other than the <strong>GKE key / token</strong>?</li>
<li>Suppose I want to update multiple docker images. Will it suffice to build multiple images and push them? Like below (a little bit simplified for clarity), or do I have to also modify the <strong>Deploy</strong> job:</li>
</ul>
<pre><code>- name: Build
  run: |-
    docker build -t "gcr.io/$PROJECT_ID/$IMAGE_1:$GITHUB_SHA" service1/.
    docker build -t "gcr.io/$PROJECT_ID/$IMAGE_2:$GITHUB_SHA" service2/.
    docker build -t "gcr.io/$PROJECT_ID/$IMAGE_3:$GITHUB_SHA" service3/.

- name: Publish
  run: |-
    docker push "gcr.io/$PROJECT_ID/$IMAGE_1:$GITHUB_SHA"
    docker push "gcr.io/$PROJECT_ID/$IMAGE_2:$GITHUB_SHA"
    docker push "gcr.io/$PROJECT_ID/$IMAGE_3:$GITHUB_SHA"
</code></pre>
<hr />
<p>This is the deploy fragment from the GKE workflow:</p>
<pre><code># Deploy the Docker image to the GKE cluster
- name: Deploy
  run: |-
    ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
    ./kustomize build . | kubectl apply -f -
    kubectl rollout status deployment/$DEPLOYMENT_NAME
    kubectl get services -o wide
</code></pre>
| Piotr Kolecki | <p>I just wanted to share my experience with GKE after posting this question and then implementing the GitHub action.</p>
<blockquote>
<p>How is the deployment performed?</p>
</blockquote>
<p>Basically, the workflow sets up a connection to GKE via <strong>gcloud</strong> CLI (this also sets up <strong>kubectl</strong> context).<br />
After the connection is established and the correct cluster targeted you are free to do whatever you like.</p>
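<p>Under the hood that connection step is roughly equivalent to the following two gcloud calls (the key file and variable names here are placeholders):</p>
<pre><code># authenticate with the service-account key stored as a GitHub secret
gcloud auth activate-service-account --key-file=key.json

# write the kubectl context/credentials for the target cluster
gcloud container clusters get-credentials "$GKE_CLUSTER" --zone "$GKE_ZONE" --project "$PROJECT_ID"
</code></pre>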
<blockquote>
<p>Do I have to configure on gcloud anything else than GKE key / token</p>
</blockquote>
<p>Nothing else is required. Just remember to hash it correctly and store it in a secret on GitHub.</p>
<blockquote>
<p>Suppose I want to update multiple docker images...</p>
</blockquote>
<p>Doing it the way it is in a question is absolutely valid and fully functional.</p>
<blockquote>
<p>...or do I have to also modify the Deploy job</p>
</blockquote>
<p>I decided to change the Deploy a little bit.</p>
<pre><code># Deploy update to services
- name: Deploy
  run: |-
    kubectl set image deployment dep1 dep1="gcr.io/$PROJECT_ID/$IMAGE_1:$GITHUB_SHA"
    kubectl set image deployment dep2 dep2="gcr.io/$PROJECT_ID/$IMAGE_2:$GITHUB_SHA"
</code></pre>
<p>This way I don't have to use <strong>Kustomize</strong>, which I am not familiar with.<br />
If you have the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">update strategy</a> set to RollingUpdate <em>(which I believe is the default)</em>, the change in image tag will trigger the rolling update (other strategies may work as well). But to use this approach you have to use the same image tag when building the Docker images and when deploying them using the code above. Using the <a href="https://docs.github.com/en/github/getting-started-with-github/github-glossary#commit-id" rel="nofollow noreferrer">$GITHUB_SHA</a> will provide a distinct hash for a commit that can be used to differentiate Docker images.</p>
<p>This may not be the most elegant solution, but I am sure you can come up with a better one (like obtaining the release tag), because this is just a variable.</p>
| Piotr Kolecki |
<p>I upgraded Kubernetes to 1.19.1. Then the Ingress deployment gives this warning:</p>
<p>Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/msrs-ingress created</p>
<p>I have changed to the correct new version of the Ingress API (v1beta1 to v1), but now I can't install it again because of an admission rule:</p>
<p>Error from server: error when creating "disabled/my-ingress-prod-v2.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: rejecting admission review because the request does not contains an Ingress resource but networking.k8s.io/v1, Resource=ingresses with name my-ingress2 in namespace my-pro</p>
<p>Actually, I changed my-ingress2 like this:</p>
<p>Before:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
</code></pre>
<p>After:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
</code></pre>
<p>How can I find the correct way to install the ingress rules? I don't want to disable the admission webhook with</p>
<pre><code>kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
</code></pre>
| alparslan | <p>Here is the pull request which fixed it:</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/pull/6187" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/pull/6187</a></p>
<p>You just need to wait for a new release. You can track the progress here:</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/projects/43#card-45661384" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/projects/43#card-45661384</a></p>
| CheGuevara_cz |
<p>I'm new to Kubernetes and am trying to set up a network policy to protect my API.</p>
<p>Here is my NetworkPolicy:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: api-network-policy
namespace: api
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: api
- namespaceSelector:
matchLabels:
name: backend
- podSelector:
matchLabels:
rule: database
</code></pre>
<p>In my design, all pods in the namespace "api" allow ingress only from namespace:api, namespace:backend, and pods with the label rule: database.
However, when I add a test namespace and send a request to the pods in namespace:api, it doesn't deny that request.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
namespace: test
spec:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: test
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: test-service
namespace: test
spec:
type: NodePort
selector:
app: test
ports:
- port: 5000
targetPort: 5000
nodePort: 32100
</code></pre>
<p>my ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-backend-service
namespace: backend
labels:
rule: ingress
annotations:
kubernetes.io/ingress.class: 'nginx'
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /api/?(.*)
pathType: Prefix
backend:
service:
name: chatbot-server
port:
number: 5000
</code></pre>
<p>one of my api:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-worker-deployment
namespace: api
spec:
replicas: 1
selector:
matchLabels:
api: redis-worker
template:
metadata:
labels:
api: redis-worker
spec:
containers:
- name: redis-worker
image: redis-worker
env:
- name: REDIS_HOST
value: redis
- name: REDIS_PORT
value: "6379"
resources:
requests:
memory: "32Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: redis-worker-service
namespace: api
labels:
rule: api
spec:
selector:
api: redis-worker
ports:
- port: 5000
targetPort: 5000
</code></pre>
<p>my namespace:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: test
---
apiVersion: v1
kind: Namespace
metadata:
name: backend
---
apiVersion: v1
kind: Namespace
metadata:
name: api
</code></pre>
<p>my code in the test pod</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, url_for, request, jsonify
import requests
import config
app = Flask(__name__)
@app.route('/', methods=['GET', 'POST'])
def hello():
x = requests.get("http://redis-worker-service.api:5000").json()
print(x)
return x
if __name__ == '__main__':
app.run(host=config.HOST, port=config.PORT, debug=config.DEBUG)
</code></pre>
<p>When I go to http://myminikubeip:32100, the request should be denied, but it isn't.</p>
| Abbott | <p>Hi all, I made a stupid mistake. I forgot to set up a network plugin for Minikube; see <a href="https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/" rel="nofollow noreferrer">Use Cilium for NetworkPolicy</a>.</p>
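<p>For reference, recreating the Minikube cluster with a NetworkPolicy-capable CNI looks roughly like this (the flag value is an assumption, check <code>minikube start --help</code> for your version):</p>
<pre><code># start minikube with a CNI that enforces NetworkPolicy, e.g. Cilium or Calico
minikube delete
minikube start --cni=cilium
</code></pre>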
<p>Also, I didn't set any egress rules, so all egress was denied.</p>
<p>fixed one:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: api-network-policy
namespace: api
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
purpose: api
- namespaceSelector:
matchLabels:
purpose: backend
- podSelector:
matchLabels:
rule: database
egress:
- {}
</code></pre>
<p>Also, set labels for the namespaces as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: test
---
apiVersion: v1
kind: Namespace
metadata:
name: backend
labels:
purpose: backend
---
apiVersion: v1
kind: Namespace
metadata:
name: api
labels:
purpose: api
</code></pre>
<p>I'm sorry I post such a stupid question and I hope others can learn something from my mistakes.. I'm so sorry</p>
<p><a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">helpful link for network-policy</a></p>
| Abbott |
<p>I am experimenting with and learning Rancher/Kubernetes. In a very short eBook I read this:</p>
<p>"In a Kubernetes Cluster, it can be desirable to have persistent storage available for applications to use. As we do not have a Kubernetes Cloud Provider enabled in this cluster, we will be deploying the nfs-server-provisioner which will run an NFS server inside of our Kubernetes cluster for persistent storage.</p>
<p><strong>This is not a production-ready solution by any means, but helps to illustrate the persistent storage constructs.</strong>"</p>
<p>I configured nfs-server-provisioner in Rancher and everything works as expected. But here is the question.</p>
<p>For my "production" homelab, I prepared 5 bare-metal servers and installed Rancher on top of Kubernetes; I also created an RKE2 cluster with:</p>
<ul>
<li>etcd Node</li>
<li>control plane Node</li>
<li>worker Node 1</li>
<li>worker Node 2</li>
</ul>
<p>I don't use AWS, Azure or any cloud solutions.</p>
<p>What would be a "production-ready solution" for my stack? And why exactly is "nfs-server-provisioner" not a "production-ready solution"?</p>
| Tristate | <p>Without seeing the full text this is only a guess, but based on that quote only using nfs-server-provider isn't providing "true" and reliable persistence.</p>
<p>nfs-server-provider launches the NFS server within the cluster, which means its data is also within the Kubernetes storage system. There's no real persistence there: instead, the persistence, availability and security of NFS-based persistent volumes depend on how the nfs-server-provider stores the data. You definitely lose production-readiness if the served NFS data is stored by the provider in a way that is not highly available - say, on a local hostPath on each node. Then again, if nfs-server-provider <em>is</em> using a reliable storage class, why not cut the overhead and use <em>that</em> storage class directly for all persistent volumes? This might be what your quoted text refers to.</p>
<p>I'd also like to note (at least at the side), that using NFS as a storage class when the NFS server resides inside the same cluster might potentially mean asking for trouble. If provisioning of nfs-server-provider on nodes fails for some reason (even a trivial one, like not being able to fetch the image) you'd lose access to NFS based persistent volumes, sending all pods relying on NFS volumes on crashloops. This is however true of Longhorn, OpenEBS and other cluster-residing storage classes, too.</p>
<p>Making a production ready setup would require you to at least configure the nfs-server-provider itself to use a production-grade storage backend or use a highly available external NFS.</p>
<p>Also note that for production grade you should have at least two control plane and three etcd nodes instead of just one and one (never use an even number of etcd nodes!). One node can run multiple roles, so with your equipment I'd probably go for two nodes running both control plane and etcd, two "pure" worker nodes and one node doing all three. The last isn't exactly recommended, but in a homelab environment it would give you more workers when testing with pod replicas.</p>
| TLii |
<p>I have a cronjob which runs once every midnight. However, one day I deployed a wrong version of it, and consequently the pod it created soon failed.</p>
<p>So the problem is that when I delete the failed pod, the cronjob immediately recreates it. How can I stop it? Anyhow, its image is already broken, so however many times it recreates the pod, it will keep failing.</p>
<p>My question is: "How can I delete a failed pod created by a cronjob?"</p>
<p>P.S. I'm using rancher but I'm not sure my problem is related to it.</p>
| 0xF4D3C0D3 | <p><code>my question is "How can I delete failed pod created by cronjob?"</code></p>
<p>You can use <code>ttlSecondsAfterFinished</code> to control how long you want to keep <strong>either a Complete or a Failed Job</strong> in your cluster.</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  ...
spec:
  schedule: ...
  ...
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 43200   <-- example 12 hrs. 0 to clean immediately.
      template:
        ...
</code></pre>
<p>Another thing to note: this is a K8s v1.21 beta feature at the time of this writing.</p>
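<p>For an immediate one-off cleanup of pods that are already in the Failed phase, something like this also works (run it in the CronJob's namespace):</p>
<pre><code>kubectl delete pod --field-selector=status.phase=Failed
# or delete the failed Job itself, which removes its pods as well
kubectl delete job <failed-job-name>
</code></pre>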
| gohm'c |
<p><a href="https://www.fluentd.org/" rel="nofollow noreferrer">Fluentd</a> in Docker cannot tail from my log file</p>
<p>My input in <code>/var/log/logf/a.log</code>:</p>
<pre><code>test 1st log
test 2nd log
test 3rd log
</code></pre>
<p>and my config in <code>/opt/app/conf/fluent.conf</code>:</p>
<pre><code><source>
@type tail
path /var/log/logf/a.log
tag test
read_from_head true
<parse>
@type none
message_key test
</parse>
</source>
<match test>
@type stdout
</match>
</code></pre>
<p>and my <code>Dockerfile</code> is <code>/opt/app/Dockerfile</code>:</p>
<pre><code>FROM fluent/fluentd:v1.11-debian-1
USER root
COPY ./conf/fluent.conf /fluentd/etc
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "3.5.2"]
USER fluent
</code></pre>
<p>I run my <code>Dockerfile</code>:</p>
<pre><code>$ sudo docker build -t log-app .
$ sudo docker run -d --name logging log-app:latest
$ sudo docker logs -f logging
</code></pre>
<p>and the result I got is stuck; I don't know why:</p>
<pre><code>2020-10-26 10:24:58 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2020-10-26 10:24:58 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '3.5.2'
2020-10-26 10:24:58 +0000 [info]: gem 'fluentd' version '1.11.4'
2020-10-26 10:24:58 +0000 [warn]: 'pos_file PATH' parameter is not set to a 'tail' source.
2020-10-26 10:24:58 +0000 [warn]: this parameter is highly recommended to save the position to resume tailing.
2020-10-26 10:24:58 +0000 [info]: using configuration file: <ROOT>
<source>
@type tail
path "/var/log/logf/a.log"
tag "test"
read_from_head true
<parse>
@type "none"
message_key "test"
unmatched_lines
</parse>
</source>
<match test>
@type stdout
</match>
</ROOT>
2020-10-26 10:24:58 +0000 [info]: starting fluentd-1.11.4 pid=6 ruby="2.6.6"
2020-10-26 10:24:58 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii- 8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2020-10-26 10:24:59 +0000 [info]: adding match pattern="test" type="stdout"
2020-10-26 10:24:59 +0000 [info]: adding source type="tail"
2020-10-26 10:24:59 +0000 [warn]: #0 'pos_file PATH' parameter is not set to a 'tail' source.
2020-10-26 10:24:59 +0000 [warn]: #0 this parameter is highly recommended to save the position to resume tailing.
2020-10-26 10:24:59 +0000 [info]: #0 starting fluentd worker pid=15 ppid=6 worker=0
2020-10-26 10:24:59 +0000 [info]: #0 fluentd worker is now running worker=0
</code></pre>
<p>I think this is a permission problem, but I'm not sure because <a href="https://www.fluentd.org/" rel="nofollow noreferrer">Fluentd</a> does not throw an error. Can you guys help me solve this problem?</p>
<hr />
<p><strong>[SOLVED]</strong> Completely solved by Mr. karan shah's explanation.</p>
<p>I solved it with docker-compose by mounting the volume, as below:</p>
<p>In the file <code>/opt/app/docker-compose.yaml</code>:</p>
<pre><code>version: '2'
services:
  fluentd:
    build: .
    container_name: fl-logging
    volumes:
      - "./conf/:/fluentd/etc:ro"
      - "/var/log/logf:/var/log/logf"
</code></pre>
<p>and run the docker compose</p>
<pre><code> $ sudo docker-compose up -d --build
</code></pre>
| Ganang Wahyu W | <p>The issue is that you have not mounted the local log files into the Fluentd container for them to be accessible.</p>
<p>Use a command like below.</p>
<p><code>sudo docker run -d --name logging -v PATHTOYOURLOGFILE:/var/log/logf/ log-app:latest</code></p>
<p>Read more about volumes <a href="https://docs.docker.com/storage/volumes/" rel="nofollow noreferrer">here</a>.</p>
<p>You can also use a docker-compose file like below</p>
<pre><code>version: '2.2'
services:
  fluentd:
    build: ./opt/app/
    container_name: fl01
    volumes:
      - "/opt/app/conf/:/fluentd/etc/:ro"
      - "PATHTOYOURLOGFILE:/var/log/logf/"
    networks:
      - elastic
    ports:
      - "9880:9880"
networks:
  elastic:
    driver: bridge
</code></pre>
| karan shah |
<p>I did kubeadm init using the following command (I am trying to set up Kubernetes on RHEL 7.6):</p>
<pre><code>kubeadm init --apiserver-advertise-address=15.217.230.99 --pod-network-cidr=15.232.10.195/27
</code></pre>
<p>I want to use the Calico network. Since I can't use the 192.168.0.0/16 network, I had to wget the calico.yaml from <a href="https://docs.projectcalico.org/v3.9/manifests/calico.yaml" rel="nofollow noreferrer">https://docs.projectcalico.org/v3.9/manifests/calico.yaml</a> and then modify CALICO_IPV4POOL_CIDR to have the value 15.232.10.195/27. (First thing is, I don't know if I am doing it correctly here; I am very new to Kubernetes and trying to set up my first ever cluster.)
When I try to apply the file using the following command (as a sudo user):</p>
<pre><code>kubectl apply -f ./calico.yaml
</code></pre>
<p>I get the following error:</p>
<pre><code>unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
unable to recognize "./calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
</code></pre>
<p>My API server runs on port 6443. That is what I see in the kubeadm join command generated by kubeadm init.</p>
<p>Can someone please point out where I am making mistakes?
Is it OK to use any other subnet with the Calico network than 192.168.0.0/16? I can't use that one since it is already being used in our network.</p>
<p>I also want to join Windows nodes in addition to Linux nodes to my cluster. Is the Calico network a correct approach, or is something else recommended instead? I would like to know before I initialize the network on my cluster so that I can do the right things.</p>
<p>Thanks</p>
| Andy Johnson | <p>Follow the below steps to overcome this issue:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
| Anbazhagan |
<p>Is there a way to allow access to all APIs for an existing node pool in GKE? When creating a node pool, you can choose it, but I can’t find a way to change it.</p>
| pat | <p>To change the API Access scope on a running GKE Cluster, you can create a new node pool with your desired scope, migrate the workload, and then delete the old node pool. In that way, the cluster will be available all the time.</p>
<p><a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes</a></p>
| Hernan |
<p>While mounting my EBS volume to the kubernetes cluster I was getting this error :</p>
<pre><code> Warning FailedMount 64s kubelet Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition
</code></pre>
<p>Below are my SC, PV, PVC, and Deployment files</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ebs-pv
  labels:
    type: ebs-pv
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0221ed06914dbc8fd
    fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: gitea
  name: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      volumes:
        - name: ebs-volume
          persistentVolumeClaim:
            claimName: ebs-pvc
      containers:
        - image: gitea/gitea:latest
          name: gitea
          volumeMounts:
            - mountPath: "/data"
              name: ebs-volume
</code></pre>
<p>These are my PV and PVC, which I believe are bound correctly:</p>
<pre><code> NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/ebs-pv 1Gi RWO Retain Bound default/ebs-pvc standard 18m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ebs-pvc Bound ebs-pv 1Gi RWO standard 18m
</code></pre>
<p>This is my storage class</p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard kubernetes.io/aws-ebs Retain Immediate false 145m
</code></pre>
<p>This is my pod description</p>
<pre><code>Name: gitea-bb86dd6b8-6264h
Namespace: default
Priority: 0
Node: worker01/172.31.91.105
Start Time: Fri, 04 Feb 2022 12:36:15 +0000
Labels: app=gitea
pod-template-hash=bb86dd6b8
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/gitea-bb86dd6b8
Containers:
gitea:
Container ID:
Image: gitea/gitea:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/data from ebs-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rq86p (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ebs-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ebs-pvc
ReadOnly: false
kube-api-access-rq86p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned default/gitea-bb86dd6b8-6264h to worker01
Warning FailedMount 4m47s (x2 over 16m) kubelet Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[kube-api-access-rq86p ebs-volume]: timed out waiting for the condition
Warning FailedMount 19s (x7 over 18m) kubelet Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition
</code></pre>
<p>This is my ebs-volume the last one which I have connected to the master node on which I am performing operations right now...</p>
<pre><code>NAME FSTYPE LABEL UUID MOUNTPOINT
loop0 squashfs /snap/core18/2253
loop1 squashfs /snap/snapd/14066
loop2 squashfs /snap/amazon-ssm-agent/4046
xvda
└─xvda1 ext4 cloudimg-rootfs c1ce24a2-4987-4450-ae15-62eb028ff1cd /
xvdf ext4 36609bbf-3248-41f1-84c3-777eb1d6f364
</code></pre>
<p>The cluster I have created manually on the AWS ubuntu18 instances, there are 2 worker nodes and 1 master node all on Ubuntu18 instances running on AWS.</p>
<p>Below are the commands which I have used to create the EBS volume.</p>
<pre><code>aws ec2 create-volume --availability-zone=us-east-1c --size=10 --volume-type=gp2
aws ec2 attach-volume --device /dev/xvdf --instance-id <MASTER INSTANCE ID> --volume-id <MY VOLUME ID>
sudo mkfs -t ext4 /dev/xvdf
</code></pre>
<p>After this the container was successfully created and attached, so I don't think there will be a problem in this part.</p>
<p>I have not done one thing which I don't know if it is necessary or not is the below part</p>
<pre><code>The cluster also needs to have the flag --cloud-provider=aws enabled on the kubelet, api-server, and the controller-manager during the cluster’s creation
</code></pre>
<p>This thing I found on one of the blogs but at that moment my cluster was already set-up so I didn't do it but if it is a problem then please notify me and also please give some guidance about how to do it.</p>
<p>I have used Flannel as my network plugin while creating the cluster.</p>
<p>I don't think I left out any information but if there is something additional you want to know please ask.</p>
<p>Thank you in advance!</p>
| XANDER_015 | <p><code>This is my ebs-volume the last one which I have connected to the master node...</code></p>
<p>A pod that wishes to mount this volume must run on the same node the volume is currently attached to. In the scenario you described, it is currently attached to your Ubuntu-based master node, so you need to run the pod on that node in order to mount it. Otherwise, you need to release it from the master node (detach it from the underlying EC2 instance) and re-deploy your PVC/PV/Pod so that they settle down on a worker node instead of the master node.</p>
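<p>A minimal sketch of the second option, assuming you attach the volume to the worker node <code>worker01</code> from your pod description (the worker instance ID is a placeholder) and then pin the pod to that node:</p>
<pre><code># detach the volume from the master's EC2 instance, then attach it to the worker
aws ec2 detach-volume --volume-id vol-0221ed06914dbc8fd
aws ec2 attach-volume --device /dev/xvdf --instance-id <WORKER01 INSTANCE ID> --volume-id vol-0221ed06914dbc8fd
</code></pre>
<pre><code># in the Deployment's pod spec: schedule the pod on the node the volume is attached to
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker01
</code></pre>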
| gohm'c |
<p>We have set up a private registry using Nexus on a Kubernetes cluster. We expose our registry on a cluster IP to have a dedicated IP, and we are able to pull and push using Docker. When I set up Docker credentials for the private registry using secrets, I am getting the error below:</p>
<p><code>Failed to pull image "ip:port/repository/ydocker-repo/apps:tag": rpc error: code = Unknown desc = Error response from daemon: Get http://ip:port/v2/repository/docker/app/manifests/1.0: no basic auth credentials</code></p>
<p>I have set up a service account and I am still getting the same error.</p>
<p>What am I doing wrong here?</p>
<p>Below is my deployment code:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: xyz
spec:
selector:
matchLabels:
app: xyz
replicas: 3
template:
metadata:
labels:
app: xyz
spec:
containers:
- name: yuwee-app-server
image: ip:port/repository/ydocker-repo/apps:tag
imagePullPolicy: "Always"
stdin: true
tty: true
ports:
- containerPort: port-number
imagePullPolicy: Always
imagePullSecrets:
- name: myregistrykey
restartPolicy: Always
serviceAccountName: default
</code></pre>
<p>Does someone have any idea how to set up registry secrets for a ClusterIP-exposed registry?</p>
| Dharmendra Jha | <p>So I found out the issue: my deployment is inside a namespace, but I had created the secret inside the default namespace, whereas it should be inside the deployment's namespace. After creating the secret there, it works as I expected.</p>
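<p>As a sketch, assuming the deployment lives in a namespace called <code>my-namespace</code> (a hypothetical name), the pull secret referenced as <code>myregistrykey</code> can be created in that namespace like this:</p>
<pre><code>kubectl create secret docker-registry myregistrykey \
  --docker-server=ip:port \
  --docker-username=<user> \
  --docker-password=<password> \
  --namespace=my-namespace
</code></pre>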
| Dharmendra Jha |
<pre><code>minikube start
--extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy
--addons=pod-security-policy
</code></pre>
<p>we have a default namespace in which the nginx service account does not have the rights to launch the nginx container</p>
<p>when creating a pod, use the command</p>
<pre><code>kubectl run nginx --image=nginx -n default --as system:serviceaccount:default:nginx-sa
</code></pre>
<p>as a result, we get an error</p>
<pre><code> Error: container has runAsNonRoot and image will run as root (pod: "nginx_default(49e939b0-d238-4e04-a122-43f4cfabea22)", container: nginx)
</code></pre>
<p>As I understand it, it is necessary to write a PSP that the nginx-sa service account is allowed to use, but I do not understand how to write it correctly for a specific service account.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-sa
namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nginx-sa-role
namespace: default
rules:
- apiGroups: ["extensions", "apps",""]
resources: [ "deployments","pods" ]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nginx-sa-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
roleRef:
kind: Role
name: nginx-sa-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
| Iceforest | <p><code>...but I do not understand how to write it correctly for a specific service account</code></p>
<p>After you get your special psp ready for your nginx, you can grant your nginx-sa to use the special psp like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: role-to-use-special-psp
rules:
- apiGroups:
- policy
resourceNames:
- special-psp-for-nginx
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bind-to-role
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: role-to-use-special-psp
subjects:
- kind: ServiceAccount
name: nginx-sa
namespace: default
</code></pre>
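<p>For completeness, a minimal sketch of what <code>special-psp-for-nginx</code> itself could look like, assuming you are fine with the nginx image running as root (which is what triggers the original error); treat it as a starting point rather than a hardened policy:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: special-psp-for-nginx
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny        # allows the container to run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
</code></pre>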
| gohm'c |
<p>I have a Kubernetes system in Azure and used the following instructions to install fluentd, Elasticsearch and Kibana: <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch</a> I am able to see my pod logs in Kibana, but when I send a log of more than 16k characters it just gets split.</p>
<p>If I send 35k characters, it is split into 3 logs.</p>
<p>How can I increase the limit for 1 log? I want to be able to see the 36k chars in one log.</p>
<p><a href="https://i.stack.imgur.com/ewocJ.png" rel="nofollow noreferrer">image here</a></p>
| DevFromI | <p><a href="https://github.com/fluent-plugins-nursery/fluent-plugin-concat" rel="nofollow noreferrer">https://github.com/fluent-plugins-nursery/fluent-plugin-concat</a></p>
<p>It did the job and combined the split records into one log. It solves Docker's maximum log line size (16KB), i.e. long lines in my container logs being split into multiple lines. Because the maximum size of a message is 16KB, a message of 85KB previously ended up as 6 messages in different chunks.</p>
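<p>For reference, a commonly used <code>fluent-plugin-concat</code> filter for this case is sketched below. It assumes the Docker json-file log driver, where a complete record ends with a newline and the 16KB partial chunks do not; place it before the Elasticsearch output in the fluentd config and check the option names against the plugin README for your version:</p>
<pre><code><filter kubernetes.**>
  @type concat
  key log                      # field holding the container log line
  use_first_timestamp true
  multiline_end_regexp /\n$/   # a chunk ending with a newline completes the record
  separator ""
</filter>
</code></pre>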
| DevFromI |
<p>My overall goal is to create MySQL users (despite <code>root</code>) automatically after the deployment in Kubernetes.</p>
<p>I found the following resources:<br />
<a href="https://stackoverflow.com/questions/64946194/how-to-create-mysql-users-and-database-during-deployment-of-mysql-in-kubernetes">How to create mysql users and database during deployment of mysql in kubernetes?</a><br />
<a href="https://stackoverflow.com/questions/50373869/add-another-user-to-mysql-in-kubernetes">Add another user to MySQL in Kubernetes</a></p>
<p>People suggested that <code>.sql</code> scripts can be mounted to <code>docker-entrypoint-initdb.d</code> with a ConfigMap to create these users. In order to do that, I have to put the password of these users in this script in plain text. This is a potential security issue. Thus, I want to store MySQL usernames and passwords as Kubernetes Secrets.</p>
<p>This is my ConfigMap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
labels:
app: mysql-image-db
data:
initdb.sql: |-
CREATE USER <user>@'%' IDENTIFIED BY <password>;
</code></pre>
<p>How can I access the associated Kubernetes secrets within this ConfigMap?</p>
| Marcel Gohsen | <p>I am finally able to provide a solution to my own question. Since PjoterS made me aware that you can mount Secrets into a Pod as a volume, I came up with following solution.</p>
<p>This is the ConfigMap for the user creation scipt:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-init-script
labels:
app: mysql-image-db
data:
init-user.sh: |-
#!/bin/bash
sleep 30s
mysql -u root -p"$(cat /etc/mysql/credentials/root_password)" -e \
"CREATE USER '$(cat /etc/mysql/credentials/user_1)'@'%' IDENTIFIED BY '$(cat /etc/mysql/credentials/password_1)';"
</code></pre>
<p>To made this work, I needed to mount the ConfigMap and the Secret as Volumes of my Deployment and added a <code>postStart</code> lifecycle hook to execute the user creation script.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-image-db
spec:
selector:
matchLabels:
app: mysql-image-db
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-image-db
spec:
containers:
- image: mysql:8.0
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: root_password
name: mysql-user-credentials
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-volume
mountPath: /var/lib/mysql
- name: mysql-config-volume
mountPath: /etc/mysql/conf.d
- name: mysql-init-script-volume
mountPath: /etc/mysql/init
- name: mysql-credentials-volume
mountPath: /etc/mysql/credentials
lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "/etc/mysql/init/init-user.sh"]
volumes:
- name: mysql-persistent-volume
persistentVolumeClaim:
claimName: mysql-volume-claim
- name: mysql-config-volume
configMap:
name: mysql-config
- name: mysql-init-script-volume
configMap:
name: mysql-init-script
defaultMode: 0777
- name: mysql-credentials-volume
secret:
secretName: mysql-user-credentials
</code></pre>
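<p>For reference, a minimal sketch of the <code>mysql-user-credentials</code> Secret that the script reads from (the key names match the file paths used in <code>init-user.sh</code>; the values are placeholders):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysql-user-credentials
type: Opaque
stringData:
  root_password: "replace-me"
  user_1: "appuser"
  password_1: "replace-me-too"
</code></pre>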
| Marcel Gohsen |
<p>I am using Docker Desktop version 3.6.0 which has Kubernetes 1.21.3.</p>
<p>I am following this tutorial to get started on Istio</p>
<p><a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/getting-started/</a></p>
<p>Istio is properly installed as per the instructions.</p>
<p>Now whenever i try to apply the Istio configuration</p>
<p>by issuing the command <code>kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml</code>.</p>
<p>I get the following error</p>
<pre><code>unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3"
</code></pre>
<p>I checked on the internet and found that the Gateway and VirtualService resources are missing.</p>
<p>If I run <code>kubectl get crd</code>, I get "No resources found".</p>
<p>Content of bookinfo-gateway.yaml:</p>
<pre><code> apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
</code></pre>
| saurav | <p>The CRDs for Istio should be installed as part of the istioctl install process; I'd recommend re-running the install if you don't have them available.</p>
<pre><code>>>> ~/.istioctl/bin/istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
</code></pre>
<p>kubectl get po -n istio-system should look like</p>
<pre><code>>>> kubectl get po -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-7ddb45fcdf-ctnp5 1/1 Running 0 3m20s
istio-ingressgateway-f7cdcd7dc-zdqhg 1/1 Running 0 3m20s
istiod-788ff675dd-9p75l 1/1 Running 0 3m32s
</code></pre>
<p>Otherwise your initial install has gone wrong somewhere.</p>
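<p>You can also verify that the Istio CRDs (which provide the Gateway and VirtualService kinds) were registered, for example:</p>
<pre><code>kubectl get crd | grep 'istio.io'
kubectl api-resources | grep -E 'Gateway|VirtualService'
</code></pre>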
| lukerobertson96 |
<p>I'm using the busybox image in my pod. I'm trying to curl another pod, but "curl is not found". How do I fix it?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: front
name: front
spec:
containers:
- image: busybox
name: front
command:
- /bin/sh
- -c
- sleep 1d
</code></pre>
<p>this cmd:</p>
<pre><code>k exec -it front -- sh
curl service-anotherpod:80 -> 'curl not found'
</code></pre>
| ERJAN | <p><code>busybox</code> is a single-binary program, so you can't install additional programs into it. You can either use <code>wget</code>, or use a different variant of busybox like <a href="https://github.com/progrium/busybox" rel="nofollow noreferrer">progrium</a>, which comes with a package manager that allows you to run <code>opkg-install curl</code>.</p>
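<p>For example, with the stock busybox image you could test the other service with the built-in <code>wget</code> instead of curl:</p>
<pre><code>k exec -it front -- sh
wget -qO- http://service-anotherpod:80
</code></pre>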
| gohm'c |
<p>I'm trying to apply a Terraform resource (helm_release) to k8s and the apply command failed halfway through.</p>
<p>I checked the pod issue; now I need to update some values in the local chart.</p>
<p>Now I'm in a dilemma, where I can't apply the helm_release as the names are in use, and I can't destroy the helm_release since it is not created.</p>
<p>Seems to me the only option is to manually delete the k8s resources that were created by the helm_release chart?</p>
<p>Here is the terraform for helm_release:</p>
<pre><code>cat nginx-arm64.tf
resource "helm_release" "nginx-ingress" {
name = "nginx-ingress"
chart = "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz"
}
</code></pre>
<p>BTW: I need to use the local chart as the official chart does not support the ARM64 architecture.
Thanks,</p>
<p>Edit #1:</p>
<p>Here is the list of Helm releases -> there is no nginx ingress</p>
<pre><code>/data/terraform/k8s$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager default 1 2021-12-08 20:57:38.979176622 +0000 UTC deployed cert-manager-v1.5.0 v1.5.0
/data/terraform/k8s$
</code></pre>
<p>Here is the describe pod output:</p>
<pre><code>$ k describe pod/nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Name: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Namespace: default
Priority: 0
Node: ocifreevmalways/10.0.0.189
Start Time: Wed, 08 Dec 2021 11:11:59 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress-controller
helm.sh/chart=nginx-ingress-controller-9.0.9
pod-template-hash=99cddc76b
Annotations: <none>
Status: Running
IP: 10.244.0.22
IPs:
IP: 10.244.0.22
Controlled By: ReplicaSet/nginx-ingress-nginx-ingress-controller-99cddc76b
Containers:
controller:
Container ID: docker://0b75f5f68ef35dfb7dc5b90f9d1c249fad692855159f4e969324fc4e2ee61654
Image: docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1
Image ID: docker-pullable://rancher/nginx-ingress-controller@sha256:177fb5dc79adcd16cb6c15d6c42cef31988b116cb148845893b6b954d7d593bc
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=default/nginx-ingress-nginx-ingress-controller-default-backend
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--configmap=default/nginx-ingress-nginx-ingress-controller
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 08 Dec 2021 22:02:15 +0000
Finished: Wed, 08 Dec 2021 22:02:15 +0000
Ready: False
Restart Count: 132
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzqqn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wzqqn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 8m38s (x132 over 10h) kubelet Container image "docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1" already present on machine
Warning BackOff 3m39s (x3201 over 10h) kubelet Back-off restarting failed container
</code></pre>
<p>The terraform state list shows nothing:</p>
<pre><code>/data/terraform/k8s$ t state list
/data/terraform/k8s$
</code></pre>
<p>Though the terraform.tfstate.backup shows the nginx ingress (I guess that I did run the destroy command in between?):</p>
<pre><code>/data/terraform/k8s$ cat terraform.tfstate.backup
{
"version": 4,
"terraform_version": "1.0.11",
"serial": 28,
"lineage": "30e74aa5-9631-f82f-61a2-7bdbd97c2276",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "helm_release",
"name": "nginx-ingress",
"provider": "provider[\"registry.terraform.io/hashicorp/helm\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"atomic": false,
"chart": "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz",
"cleanup_on_fail": false,
"create_namespace": false,
"dependency_update": false,
"description": null,
"devel": null,
"disable_crd_hooks": false,
"disable_openapi_validation": false,
"disable_webhooks": false,
"force_update": false,
"id": "nginx-ingress",
"keyring": null,
"lint": false,
"manifest": null,
"max_history": 0,
"metadata": [
{
"app_version": "1.1.0",
"chart": "nginx-ingress-controller",
"name": "nginx-ingress",
"namespace": "default",
"revision": 1,
"values": "{}",
"version": "9.0.9"
}
],
"name": "nginx-ingress",
"namespace": "default",
"postrender": [],
"recreate_pods": false,
"render_subchart_notes": true,
"replace": false,
"repository": null,
"repository_ca_file": null,
"repository_cert_file": null,
"repository_key_file": null,
"repository_password": null,
"repository_username": null,
"reset_values": false,
"reuse_values": false,
"set": [],
"set_sensitive": [],
"skip_crds": false,
"status": "failed",
"timeout": 300,
"values": null,
"verify": false,
"version": "9.0.9",
"wait": true,
"wait_for_jobs": false
},
"sensitive_attributes": [],
"private": "bnVsbA=="
}
]
}
]
}
</code></pre>
<p>When I try to apply in the same directory, it prompts the error again:</p>
<pre><code>Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
helm_release.nginx-ingress: Creating...
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
</code></pre>
<p>Please share your thoughts. Thanks.</p>
<p>Edit2:</p>
<p>The DEBUG logs show some more clues:</p>
<pre><code>2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Release validated: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Done: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.119Z [WARN] Provider "registry.terraform.io/hashicorp/helm" produced an invalid plan for helm_release.nginx-ingress, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .cleanup_on_fail: planned value cty.False for a non-computed attribute
- .create_namespace: planned value cty.False for a non-computed attribute
- .verify: planned value cty.False for a non-computed attribute
- .recreate_pods: planned value cty.False for a non-computed attribute
- .render_subchart_notes: planned value cty.True for a non-computed attribute
- .replace: planned value cty.False for a non-computed attribute
- .reset_values: planned value cty.False for a non-computed attribute
- .disable_crd_hooks: planned value cty.False for a non-computed attribute
- .lint: planned value cty.False for a non-computed attribute
- .namespace: planned value cty.StringVal("default") for a non-computed attribute
- .skip_crds: planned value cty.False for a non-computed attribute
- .disable_webhooks: planned value cty.False for a non-computed attribute
- .force_update: planned value cty.False for a non-computed attribute
- .timeout: planned value cty.NumberIntVal(300) for a non-computed attribute
- .reuse_values: planned value cty.False for a non-computed attribute
- .dependency_update: planned value cty.False for a non-computed attribute
- .disable_openapi_validation: planned value cty.False for a non-computed attribute
- .atomic: planned value cty.False for a non-computed attribute
- .wait: planned value cty.True for a non-computed attribute
- .max_history: planned value cty.NumberIntVal(0) for a non-computed attribute
- .wait_for_jobs: planned value cty.False for a non-computed attribute
helm_release.nginx-ingress: Creating...
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [DEBUG] helm_release.nginx-ingress: applying the planned Create change
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] setting computed for "metadata" from ComputedKeys: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Started: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting helm configuration: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration start: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] Using kubeconfig: /home/ubuntu/.kube/config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [INFO] Successfully initialized kubernetes config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration success: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting chart: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Preparing for installation: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 ---[ values.yaml ]-----------------------------------
{}: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Installing chart: timestamp=2021-12-09T04:30:14.125Z
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
│
╵
2021-12-09T04:30:14.158Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/helm/2.4.1/linux_arm64/terraform-provider-helm_v2.4.1_x5 pid=558800
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin exited
</code></pre>
| ozmhsh | <p>You don't have to manually delete all the resources using <code>kubectl</code>. Under the hood the Terraform Helm provider still uses Helm. So if you run <code>helm list -A</code> you will see all the Helm releases on your cluster, including the <code>nginx-ingress</code> release. Deleting the release is then done via <code>helm uninstall nginx-ingress -n REPLACE_WITH_YOUR_NAMESPACE</code>.</p>
<p>Before re-running <code>terraform apply</code> do check if the Helm release is still in your Terraform state via <code>terraform state list</code> (run this from the same directory as where you run <code>terraform apply</code> from). If you don't see <code>helm_release.nginx-ingress</code> in that list then it is not in your Terraform state and you can just rerun your <code>terraform apply</code>. Else you have to delete it via <code>terraform state rm helm_release.nginx-ingress</code> and then you can run <code>terraform apply</code> again.</p>
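<p>Putting those steps together as a sketch (the release name and <code>default</code> namespace are taken from your tfstate backup; skip any step that does not apply):</p>
<pre><code>helm list -A --all                                 # look for the release, including failed ones
helm uninstall nginx-ingress -n default            # remove the leftover release if Helm knows about it
terraform state list                               # check whether Terraform still tracks it
terraform state rm helm_release.nginx-ingress      # only if it is listed
terraform apply
</code></pre>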
| avinashpancham |
<p>We all know the use of an Ingress in a Kubernetes cluster as a solution for path/context/URL based routing to multiple services hosted on that cluster. However, my requirement is a bit different. I have 2 clusters, hosted in the EU and US regions. The clusters host different sets of applications: app1 and app2 in the US cluster, app3 and app4 in the EU cluster. Now I need an ingress-type router sitting outside my clusters so that I have a common entry point for all the applications, i.e. <a href="http://www.example.com/us" rel="nofollow noreferrer">www.example.com/us</a> for the US cluster and <a href="http://www.example.com/eu" rel="nofollow noreferrer">www.example.com/eu</a> for the EU cluster, with the corresponding Ingress in each cluster then routing per application. Alternatively,
<a href="http://www.example.com/app1" rel="nofollow noreferrer">www.example.com/app1</a>, /app2, /app3 etc. would be routed to the correct cluster based on the application, and the Ingress there would forward to the correct service in that cluster.</p>
<p>I have my custom domain <a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a>, but what solution or product do I use in my network layer to implement this context/path based routing to the 2 clusters? The DNS team said it can't be achieved at the DNS level, and other load balancers use round-robin-style algorithms, which would require all 4 apps on both clusters; that is not feasible for me. Can anyone suggest what to do here? My Kubernetes clusters are in IBM Cloud. I can't have redundant services but need a common entry point for both clusters.</p>
| Nidhi Goel | <p>You can have a look at the <a href="https://cloud.ibm.com/catalog/services/api-gateway" rel="nofollow noreferrer">IBM Cloud API Gateway</a> service.</p>
<p>For this example I was logged in to <a href="http://cloud.ibm.com/" rel="nofollow noreferrer">cloud.ibm.com</a> with my IBM ID, all steps were done within the UI. I have created an instance of the API Gateway service with the Lite plan (free of charge).</p>
<p>To give it a try go to <a href="https://cloud.ibm.com/catalog/services/api-gateway" rel="nofollow noreferrer">IBM Cloud API Gateway</a>, choose a meaningful name for your instance of the API Gateway service and hit the "Create" button.</p>
<p>As a next step find the button "Create API Proxy", hit it and in the next screen choose a name for your API, for instance USA, specify the base path /us and copy the URL of your US based application.</p>
<p>Repeat this step and choose a different name (for instance EU), this time specify the base path /eu and copy the URL of your EU based application.</p>
<p>Now you have a setup that is very close to what you have been looking for. Both API Proxies share the same default domain and the requests for path '/us' are routed to the US based application and the requests for path '/eu' are routed to the EU based application.</p>
<p>To replace the default domain with your custom domain you need to navigate to "API Management" -> "Custom Domains" and provide the details.</p>
<p>I hope this helps. I tried the setup with an <a href="https://cloud.ibm.com/cloudfoundry/overview" rel="nofollow noreferrer">IBM Cloud Cloud Foundry</a> sample app as the application for EU and an <a href="https://cloud.ibm.com/codeengine/overview" rel="nofollow noreferrer">IBM Cloud Code Engine</a> application as the US based app and for this simple test everything worked as expected with the default domain of my API Gateway instance.</p>
<p>Please comment if this also works in your case where your apps are running on <a href="https://cloud.ibm.com/kubernetes/overview" rel="nofollow noreferrer">Kubernetes</a> cluster within IBM Cloud.</p>
| habercde |
<p>I am a junior developer studying eks.</p>
<p>When you create eks, you can see that the IAM user used to create it is granted as system:master.</p>
<p>I can't find how system:master is specified</p>
<p>I can't see the contents of IAM in the generated aws-auth configmap of Kubernetes.</p>
<p>Can you find out which part to look for?</p>
<p>Started out of curiosity and still looking:(</p>
<p>please help i've been looking all day</p>
| bob | <p><code>I can't find how system:master is specified</code></p>
<p>system:masters is a logical group <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/rbac/escalation_check.go#L38" rel="nofollow noreferrer">defined</a> in kubernetes source. This is not something created by EKS or yourself.</p>
<p><code>see the contents of IAM in the generated aws-auth configmap</code></p>
<p>kubectl get configmap aws-auth --namespace kube-system --output yaml</p>
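<p>As far as I know, the cluster creator's system:masters binding is applied by EKS outside of this ConfigMap, which is why you don't see it there. Entries you add yourself look roughly like this (the account ID, ARN and username are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/some-admin
      username: some-admin
      groups:
        - system:masters
</code></pre>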
<p>Try this <a href="https://medium.com/the-programmer/aws-eks-fundamentals-core-components-for-absolute-beginners-part1-9b16e19cedb3" rel="nofollow noreferrer">beginner guide</a>.</p>
| gohm'c |
<p>We have a setup with external-DNS to create and bind dns entries based on service annotations.</p>
<p>For example we have a service for the alertmanager like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: prometheus-kube-prometheus-alertmanager
namespace: prometheus
labels:
...
heritage: Helm
prometheus-monitor-https: 'true'
release: prometheus
self-monitor: 'true'
annotations:
external-dns.alpha.kubernetes.io/hostname: alertmanager.ourdomain.com
external-dns.alpha.kubernetes.io/ttl: '60'
spec:
ports:
- name: web
protocol: TCP
port: 80
targetPort: 9093
nodePort: 31126
selector:
alertmanager: prometheus-kube-prometheus-alertmanager
app.kubernetes.io/name: alertmanager
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Cluster
</code></pre>
<p>(abbreviated)</p>
<p>I want to use the blackbox exporter with the data from the annotations, so we don't have to add the monitoring manually here, but rather rely on Kubernetes to provide the information about what to monitor.</p>
<p>For that I wrote a ServiceMonitor, but it doesn't match the services and never calls the blackbox exporter.</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: blackbox-exporter-monitor-https-external
namespace: prometheus
spec:
namespaceSelector:
any: true
selector:
matchLabels:
prometheus-monitor-https: any
targetLabels:
- environment
- instance
endpoints:
- metricRelabelings:
- sourceLabels: [__meta_kubernetes_service_annotation_external_dns_alpha_kubernetes_io_hostname]
targetLabel: __param_target
replacement: "https://$1"
- sourceLabels: [__param_target]
targetLabel: instance
- targetLabel: __param_scheme
replacement: https
- targetLabel: __address__
replacement: prometheus-blackbox-exporter:9115
path: /probe
params:
debug:
- "true"
module:
- "http_2xx"
</code></pre>
<p>I am not seeing why it shouldn't match the service. Do you have any hints?</p>
| Patrick Cornelissen | <p>The service has label <code>prometheus-monitor-https: 'true'</code>, while the ServiceMonitor has a <code>selector.matchLabels</code> of <code>prometheus-monitor-https: any</code>.</p>
<p>If you change this such that the <code>selector.matchLabels</code> of the ServiceMonitor equals <code>prometheus-monitor-https: 'true'</code>, then I think it should work. The matchLabels looks for expected matches of the label key, value pair.</p>
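<p>In other words, the selector block would become (quoting the value so it matches the Service label exactly):</p>
<pre><code>  selector:
    matchLabels:
      prometheus-monitor-https: "true"
</code></pre>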
<p>Also I see that you wrote <code>namespaceSelector</code> is <code>any: true</code>. It is good to know that the namespaceSelector works in a different way. It expects the labels of the namespace it should find the resource in. In your case it will look for a namespace that has the label <code>any: true</code>. But I think you actually want to select all namespaces, which is equal to not specifying a namespaceSelector at all.</p>
| avinashpancham |
<blockquote>
<p>prometheus-prometheus-kube-prometheus-prometheus-0 0/2 Terminating 0 4s
alertmanager-prometheus-kube-prometheus-alertmanager-0 0/2 Terminating 0 10s</p>
</blockquote>
<p>After updating the EKS cluster from 1.15 to 1.16, everything works fine except these two pods; they keep terminating and are unable to initialise. Hence, Prometheus monitoring does not work. I am getting the errors below while describing the pods.</p>
<pre><code>Error: failed to start container "prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:362: creating new parent process caused: container_linux.go:1941: running lstat on namespace path "/proc/29271/ns/ipc" caused: lstat /proc/29271/ns/ipc: no such file or directory: unknown
Error: failed to start container "config-reloader": Error response from daemon: cannot join network of a non running container: 7e139521980afd13dad0162d6859352b0b2c855773d6d4062ee3e2f7f822a0b3
Error: cannot find volume "config" to mount into container "config-reloader"
Error: cannot find volume "config" to mount into container "prometheus"
</code></pre>
<p>here is my yaml file for the deployment:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
creationTimestamp: "2021-04-30T16:39:14Z"
deletionGracePeriodSeconds: 600
deletionTimestamp: "2021-04-30T16:49:14Z"
generateName: prometheus-prometheus-kube-prometheus-prometheus-
labels:
app: prometheus
app.kubernetes.io/instance: prometheus-kube-prometheus-prometheus
app.kubernetes.io/managed-by: prometheus-operator
app.kubernetes.io/name: prometheus
app.kubernetes.io/version: 2.26.0
controller-revision-hash: prometheus-prometheus-kube-prometheus-prometheus-56d9fcf57
operator.prometheus.io/name: prometheus-kube-prometheus-prometheus
operator.prometheus.io/shard: "0"
prometheus: prometheus-kube-prometheus-prometheus
statefulset.kubernetes.io/pod-name: prometheus-prometheus-kube-prometheus-prometheus-0
name: prometheus-prometheus-kube-prometheus-prometheus-0
namespace: mo
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: prometheus-prometheus-kube-prometheus-prometheus
uid: 326a09f2-319c-449d-904a-1dd0019c6d80
resourceVersion: "9337443"
selfLink: /api/v1/namespaces/monitoring/pods/prometheus-prometheus-kube-prometheus-prometheus-0
uid: e2be062f-749d-488e-a6cc-42ef1396851b
spec:
containers:
- args:
- --web.console.templates=/etc/prometheus/consoles
- --web.console.libraries=/etc/prometheus/console_libraries
- --config.file=/etc/prometheus/config_out/prometheus.env.yaml
- --storage.tsdb.path=/prometheus
- --storage.tsdb.retention.time=10d
- --web.enable-lifecycle
- --storage.tsdb.no-lockfile
- --web.external-url=http://prometheus-kube-prometheus-prometheus.monitoring:9090
- --web.route-prefix=/
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
name: prometheus
ports:
- containerPort: 9090
name: web
protocol: TCP
readinessProbe:
failureThreshold: 120
httpGet:
path: /-/ready
port: web
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config_out
name: config-out
readOnly: true
- mountPath: /etc/prometheus/certs
name: tls-assets
readOnly: true
- mountPath: /prometheus
name: prometheus-prometheus-kube-prometheus-prometheus-db
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
- args:
- --listen-address=:8080
- --reload-url=http://localhost:9090/-/reload
- --config-file=/etc/prometheus/config/prometheus.yaml.gz
- --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
- --watched-dir=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
command:
- /bin/prometheus-config-reloader
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SHARD
value: "0"
image: quay.io/prometheus-operator/prometheus-config-reloader:v0.47.0
imagePullPolicy: IfNotPresent
name: config-reloader
ports:
- containerPort: 8080
name: reloader-web
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config
name: config
- mountPath: /etc/prometheus/config_out
name: config-out
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: prometheus-prometheus-kube-prometheus-prometheus-0
nodeName: ip-10-1-49-45.ec2.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccount: prometheus-kube-prometheus-prometheus
serviceAccountName: prometheus-kube-prometheus-prometheus
subdomain: prometheus-operated
terminationGracePeriodSeconds: 600
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: config
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus
- name: tls-assets
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus-tls-assets
- emptyDir: {}
name: config-out
- configMap:
defaultMode: 420
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- emptyDir: {}
name: prometheus-prometheus-kube-prometheus-prometheus-db
- name: prometheus-kube-prometheus-prometheus-token-mh66q
secret:
defaultMode: 420
secretName: prometheus-kube-prometheus-prometheus-token-mh66q
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-30T16:39:14Z"
status: "True"
type: PodScheduled
phase: Pending
qosClass: Burstable
</code></pre>
| kru | <p>If someone needs to know the answer: in my case (the situation above) there were 2 Prometheus operators running in different namespaces, one in the default namespace and another in the monitoring namespace. I removed the one from the default namespace and that resolved my pod crashing issue.</p>
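<p>A quick way to check for duplicate operators (a sketch; adjust to however yours were installed):</p>
<pre><code>kubectl get pods --all-namespaces | grep prometheus-operator
helm list --all-namespaces | grep -i prometheus
</code></pre>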
| kru |
<p>I am running Spark 3.1.1 on Kubernetes 1.19. Once a job finishes, the executor pods get cleaned up, but the driver pod remains in Completed state. How do I clean up the driver pod once it has completed? Is there a configuration option to set?</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-job-0e85ea790d5c9f8d-driver 0/1 Completed 0 2d20h
my-job-8c1d4f79128ccb50-driver 0/1 Completed 0 43h
my-job-c87bfb7912969cc5-driver 0/1 Completed 0 43h
</code></pre>
| Shivaji Mutkule | <p>Concerning the initial question "Spark on Kubernetes driver pod cleanup", it seems that there is no way to pass, at spark-submit time, a TTL parameter to Kubernetes to avoid completed driver pods never being removed.</p>
<p>From Spark documentation:
<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a>
<em>When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.</em></p>
<p>It is not very clear what actually performs this 'eventual garbage collection'.</p>
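<p>As a workaround outside of Spark itself, completed driver pods can be removed manually or from a cron job. A hedged sketch, assuming the default <code>spark-role=driver</code> label that Spark on Kubernetes puts on driver pods:</p>
<pre><code>kubectl delete pod -n <namespace> \
  -l spark-role=driver \
  --field-selector=status.phase=Succeeded
</code></pre>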
| user14392764 |
<p>I recently installed Kubernetes on VMware and configured a few pods; while configuring those pods, it automatically used the IP of the VM. I was able to access the application at that time, but then I rebooted the VM and the machine which hosts the VM. During this, the IP of the VM got changed, I guess, and now I am getting the error below when using the command <code>kubectl get pod -n <namespaceName></code>:</p>
<pre><code>userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
</code></pre>
<p><code>kubectl cluster-info</code> as well as other related commands gives the same output.
In the VMware Workstation settings, we are using the network adapter that shares the host's IP address. We are not sure if that has any impact.</p>
<p>We also tried to add the entries below to /etc/hosts, but it is not working.</p>
<pre><code>127.0.0.1 localhost \n
192.168.214.136 localhost \n
127.0.1.1 ubuntu
</code></pre>
<p>I expect to get the pods running again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround so that the pods get back to a running state.</p>
| Vicky | <p>If you use minikube sometimes all you need is just to restart minikube.</p>
<p>Run:
<code>minikube start</code></p>
| Karina Titov |
<p>We are using K8S in a managed Azure environment, Minikube in Ubuntu and a Rancher cluster built on on-prem machines and in general, our deployments take up to about 30 seconds to pull containers, run up and be ready. However, my latest attempt to create a deployment (on-prem) takes upwards of a minute and sometimes longer. It is a small web service which is very similar to our other deployments. The only (obvious) difference is the use of a startup probe and a liveness probe, although some of our other services do have probes, they are different though.</p>
<p>After removing Octopus deploy from the equation by extracting the yaml it was running and using kubectl, as soon as the (single) pod starts, I start reading the logs and as expected, the startup and liveness probes are called very quickly. Startup succeeds and the cluster starts calling the live probe, which also succeeds. However, if I use <code>kubectl describe</code> on the pod, it shows Initialized and PodScheduled as True but ContainersReady (there is one container) and Ready are both false for around a minute. I can't see what would cause this other than probe failures but these are logged as successful.</p>
<p>They eventually start and work OK but I don't know why they take so long.</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: 'redirect-files-deployments-28775'
labels:
Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
OtherOctopusLabels
spec:
replicas: 1
selector:
matchLabels:
Octopus.Kubernetes.DeploymentName: 'redirect-files-deployments-28775'
template:
metadata:
labels:
Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
OtherOctopusLabels
spec:
containers:
- name: redirect-files
image: ourregistry.azurecr.io/microservices.redirectfiles:1.0.34
ports:
- name: http
containerPort: 80
protocol: TCP
env:
- removed connection strings etc
livenessProbe:
httpGet:
path: /api/version
port: 80
scheme: HTTP
successThreshold: 1
startupProbe:
httpGet:
path: /healthcheck
port: 80
scheme: HTTP
httpHeaders:
- name: X-SS-Authorisation
value: asdkjlkwe098sad0akkrweklkrew
initialDelaySeconds: 5
timeoutSeconds: 5
imagePullSecrets:
- name: octopus-feedcred-feeds-azure-container-registry
</code></pre>
| Luke Briner | <p>So the cause was the startup and/or liveness probes. When I removed them, the deployment time went from over a minute to 18 seconds, despite the logs proving that the probes were called successfully very quickly after containers were started.</p>
<p>At least I now have something more concrete to look for.</p>
| Luke Briner |
<p>I'm trying to run minikube with Hyper-V without opening an Administrator PowerShell.
Is there any way?
I'm doing this:</p>
<pre><code>choco install minikube
minikube.exe start --vm-driver "hyperv"
</code></pre>
<p>If I try to launch minikube start from a normal powershell it gives me this message:</p>
<pre><code>X hyperv does not appear to be installed
</code></pre>
| Manuel Castro | <p>To launch minikube from a non-admin PowerShell, you need to add the non-admin user to the "Hyper-V Administrators" group.</p>
<p>Open PowerShell with administrator rights and run the command below to add the current user to the
"Hyper-V Administrators" group. You need to sign out and sign back in for the change to take effect.</p>
<pre><code>Add-LocalGroupMember -Group "Hyper-V Administrators" -Member [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
</code></pre>
| Lim Sze Seong |
<p>How do you change the IP address of the master or any worker node?</p>
<p>I have experimented with:</p>
<pre><code>kubeadm init --control-plane-endpoint=cluster-endpoint --apiserver-advertise-address=<x.x.x.x>
</code></pre>
<p>And then I guess I need the new config with the right certificate:</p>
<pre><code>sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
<p>I tried the following, suggested by Hajed.Kh:</p>
<p>Changed ip address in:</p>
<pre><code>etcd.yaml (contained ip)
kube-apiserver.yaml (contained ip)
kube-controller-manager.yaml (not this one?)
kube-scheduler.yaml (not this one?)
</code></pre>
<p>But I still get the same ip address in:</p>
<pre><code>sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
| Chris G. | <p>The <code>--apiserver-advertise-address</code> flag is located in the api-server manifest file, and all Kubernetes control plane component manifests are located in <code>/etc/kubernetes/manifests/</code>. These are static pod manifests watched by the kubelet, so once you change and save a file the component is redeployed almost instantly:</p>
<pre><code>etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
</code></pre>
<p>For the worker nodes, I think the change will be picked up automatically as long as the kubelet is connected to the api-server.</p>
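<p>A quick way to list every static pod manifest that still references the old address (replace the placeholder with your old IP):</p>
<pre><code>sudo grep -rl '<OLD_IP>' /etc/kubernetes/manifests/
</code></pre>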
| Hajed.Kh |
<p>Whenever I set up a Rancher Kubernetes cluster with RKE, the cluster sets up perfectly. However, I'm getting the following warning message:</p>
<pre><code>WARN[0011] [reconcile] host [host.example.com] is a control plane node without reachable Kubernetes API endpoint in the cluster
WARN[0011] [reconcile] no control plane node with reachable Kubernetes API endpoint in the cluster found
</code></pre>
<p><i>(in the above message, the <code>host.example.com</code> is a placeholder for my actual host name, this message is given for each controlplane host specified in the cluster.yml)</i></p>
<p>How can I modify the RKE <code>cluster.yml</code> file or any other setting to avoid this warning?</p>
| Maxim Masiutin | <p>I don't believe you can suppress this warning since as you indicate in your comments, the warning is valid on the first <code>rke up</code> command. It is only a warning, and a valid one at that, even though your configuration appears to have a handle on that. If you are worried about the logs, you could perhaps have your log aggregation tool ignore the warning if it is in close proximity to the initial <code>rke up</code> command, or even filter it out. However, I would think twice about filtering blindly on it as it would indicate a potential issue (if, for example, you thought the control plane containers were already running).</p>
| Foghorn |
<p>I have a deployment that creates pods running a container on a specific port. I am looking to give each pod a unique port and to expose it outside the cluster on that unique port. I tried using a Service, but that creates one port for all the pods and acts as a load balancer.</p>
<p>deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: name-deployment
labels:
app: name
spec:
replicas: 3
selector:
matchLabels:
app: name
template:
metadata:
labels:
app: name
spec:
containers:
- name: name
image: image
ports:
- containerPort: 25566
</code></pre>
| Ferskfisk | <p>It's not possible, as all pods under one Deployment will have the same configuration, including exposed ports. Creating different Deployments and setting up custom scaling logic would help you here.</p>
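<p>For illustration, a hedged sketch of one such per-instance pair: a single-replica Deployment plus a NodePort Service pinned to its own external port (the instance name and the 30566 node port are made up for the example):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: name-instance-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: name-instance-1
  template:
    metadata:
      labels:
        app: name-instance-1
    spec:
      containers:
      - name: name
        image: image
        ports:
        - containerPort: 25566
---
apiVersion: v1
kind: Service
metadata:
  name: name-instance-1
spec:
  type: NodePort
  selector:
    app: name-instance-1
  ports:
  - port: 25566
    targetPort: 25566
    nodePort: 30566   # must be unique per instance and within the cluster's NodePort range
</code></pre>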
| Rushikesh |
<p>We are running an AKS Kubernetes cluster on Azure. I'm using the "<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>" and "<a href="https://cert-manager.io" rel="nofollow noreferrer">cert-manager</a>" for routing and certificate generation (through Let's Encrypt). I followed the basic setup advice from the Microsoft documentation: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-tls</a></p>
<p>When visiting our page in a web-browser, we notice nothing out of the ordinary at first - HTTPS was working fine. The browser can validate the Let's Encrypt certificate. However, we noticed later on that the ingress controller actually serves two certificates (one of which has a common name: "Kubernetes Ingress Controller Fake Certificate" and an Alternative name: "ingress.local"): <a href="https://www.ssllabs.com/ssltest/analyze.html?d=test-aks-ingress.switzerlandnorth.cloudapp.azure.com&hideResults=on" rel="nofollow noreferrer">https://www.ssllabs.com/ssltest/analyze.html?d=test-aks-ingress.switzerlandnorth.cloudapp.azure.com&hideResults=on</a></p>
<p>Long story short - yesterday, I tried everything from re-installing the Nginx-ingress and cert-manager to starting a new Azure Kubernetes Service from scratch, but every time I end up in the same situation.</p>
<p>I have read many of the discussions from people experiencing similar problems. Typically, they are a bit different though, as they don't actually see a valid certificate at all. I confirmed that we are using the production Let's Encrypt <code>ClusterIssuer</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: ***@***
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
podTemplate:
spec:
nodeSelector:
"kubernetes.io/os": linux
</code></pre>
<p>I also created a new test-app with test-ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-world-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- test-aks-ingress.switzerlandnorth.cloudapp.azure.com
secretName: tls-secret
rules:
- host: test-aks-ingress.switzerlandnorth.cloudapp.azure.com
http:
paths:
- backend:
serviceName: aks-helloworld-one
servicePort: 80
path: /hello-world-one(/|$)(.*)
</code></pre>
<p>From my understanding there is usually some issue with the <code>secret</code> for the people that have reported this previously. Here I assume that the <code>ClusterIssuer</code> will generate the relevant certificates and store them in <code>tls-secret</code>, which has been generated automatically:</p>
<pre><code>Name: tls-secret
Namespace: test
Labels: <none>
Annotations: cert-manager.io/alt-names: test-aks-ingress.switzerlandnorth.cloudapp.azure.com
cert-manager.io/certificate-name: tls-secret
cert-manager.io/common-name: test-aks-ingress.switzerlandnorth.cloudapp.azure.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group: cert-manager.io
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 3530 bytes
tls.key: 1675 bytes
</code></pre>
<p>Maybe what I am still confused about is the different secrets / certificates at play here. The <code>cert-manager</code> operates in the <code>cert-manager</code> namespace and creates a <code>letsencrypt</code> secret there, while my test-setup is running everything else in a <code>test</code> namespace (including the ingress controller).</p>
<p><strong>[UPDATE]</strong>
But what is the actual problem here? Everything "just works" in a normal browser, right? Unfortunately, the real problem is that connections do not work for a specific client application, which may not have SNI support.</p>
<p>Is there a way to not have a default certificate? How would I change the configuration here to provide the "Let's Encrypt" signed certificate by default - is that possible?</p>
| Chris | <p>It is expected behavior. By default, the ingress controller creates a self-signed certificate whose CN indicates it is a fake one. This certificate is used when a request doesn't match any rule defined in an Ingress. So when you access this URL from a browser, it returns the correct certificate, but with <code>openssl s_client</code> without the servername field, the request doesn't match the rule defined in the Ingress, goes to the default backend, and returns the self-signed certificate.</p>
<p>You can also specify a default certificate for the ingress controller. Refer to <a href="https://github.com/kubernetes/ingress-nginx/issues/4674" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/4674</a> for more details.</p>
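<p>With ingress-nginx this is typically done by starting the controller with the <code>--default-ssl-certificate</code> flag; a sketch using the namespace and secret from your Ingress above (clients without SNI would then receive that certificate instead of the fake one):</p>
<pre><code># argument on the nginx-ingress-controller container
--default-ssl-certificate=test/tls-secret
</code></pre>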
| Rushikesh |
<p>I have 2 backend applications running on the same cluster on GKE: applications A and B. A has 1 pod and B has 2 pods. A is exposed to the outside world and receives the client IP address, which it then sends to B in an HTTP request header.</p>
<p>B has a Kubernetes service object that is configured like that.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-{{ .Values.component_name }}
namespace: {{ include "namespace" .}}
spec:
ports:
- port: 80
targetPort: {{.Values.app_port}}
protocol: TCP
selector:
app: pod-{{ .Values.component_name }}
type: ClusterIP
</code></pre>
<p>In that configuration, the HTTP requests from A are equally balanced between the 2 pods of application B, but when I add <code>sessionAffinity: ClientIP</code> to the configuration, every HTTP request is sent to the same B pod, even though I thought it should be a round-robin type of interaction.</p>
<p>To be clear, I have the IP address stored in the header X-Forwarded-For, so the service should look at it to decide which B pod to send the request to, as the documentation says <a href="https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws</a></p>
<p>In my test I tried to create as much load as possible on one of the B pods and to reach the second pod, without any success. I made sure that I had different IPs in my headers and that it wasn't because of some sort of proxy in my environment. The IPs had not previously been used for testing, so it is not because of already existing stickiness.</p>
<p>I am stuck now because I don't know how to test it further and have been reading the docs and probably missing something. My guess was that sessionAffinity disables load balancing for the ClusterIP type, but this seems highly unlikely...</p>
<p>My questions are :</p>
<p>Is the behaviour I am observing normal? What am I doing wrong?</p>
<p>This might help to understand if it is still unclear what I'm trying to say : <a href="https://stackoverflow.com/a/59109265/12298812">https://stackoverflow.com/a/59109265/12298812</a></p>
<p>EDIT: I did test on the upstream client and I saw at least some of the requests reach the second pod of B, but this load test was performed from the same IP for every request. So this time I should have seen only one pod get the traffic...</p>
| Marc-Antoine Caron | <p>The behaviour suggests that the X-Forwarded-For header is not respected by a ClusterIP service.</p>
<p>To be sure, I would suggest load testing from the upstream client service which consumes the service above and seeing what kind of behaviour you get. Chances are you will see the same incorrect behaviour there, which will affect scaling your service.</p>
<p>That said, using session affinity for an internal service is highly unusual, as client IP addresses do not vary that much. Session affinity limits the scaling ability of your application. Typically you use memcached or redis as a session store, which is likely to be more scalable than session-affinity-based solutions.</p>
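<p>One quick way to run such a probe from inside the cluster is sketched below — the service name (<code>svc-b</code>) and the spoofed header values are placeholders; the loop simply fires a batch of requests so you can see which backend pod answers:</p>
<pre><code>kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  sh -c 'for i in $(seq 1 20); do \
           curl -s -H "X-Forwarded-For: 203.0.113.$i" http://svc-b/; echo; \
         done'
</code></pre>
<p>If every response comes from the same pod regardless of the header value, it confirms the affinity is keyed on the actual source IP of the connection rather than on the header.</p>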
| Parth Mehta |
<p>This is one that works when using single tail input</p>
<pre><code>inputs: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
filters: |
[FILTER]
Name kubernetes
Match *
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
[FILTER]
Name Lua
Match kube.*
code function dummy_filter(a,b,c)local n=c;n["dummy"]="dummy";return 2,b,n end
call dummy_filter
[FILTER]
Name parser
Match kube.*
Key_Name log
Parser tomcat_parser
Preserve_Key On
Reserve_Data On
[FILTER]
Name Lua
Match kube.*
code function dummy_filter1(a,b,c)local n=c;n["dummy1"]="dummy1";return 2,b,n end
call dummy_filter1
customParsers: |
[PARSER]
Format regex
Name tomcat_parser
Regex ^(?<apptime>[0-9-a-zA-Z]+\s[0-9:\.]+)\s+(?<level>[a-zA-Z]+)\s+\[(?<thread>[a-zA-Z]+)\]\s+(?<applog>.*$)
outputs: |
[OUTPUT]
Name cloudwatch_logs
Match kube.*
Region ${region}
Log_Group_Name /myapps/logs
Log_Stream_Prefix my
Auto_Create_Group On
net.keepalive Off
</code></pre>
<p>And this doesn't work. final output in /myapps/tomcatlogs has data from all the 3 remaining filters except from the kubernetes.</p>
<pre><code>inputs: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
[INPUT]
Name tail
Tag tomcat.*
Path /var/log/containers/tomcat*.log. (checked even *.log doesn't work)
Parser docker
filters: |
[FILTER]
Name kubernetes
Match *
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
[FILTER]
Name Lua
Match tomcat.*
code function dummy_filter(a,b,c)local n=c;n["dummy"]="dummy";return 2,b,n end
call dummy_filter
[FILTER]
Name parser
Match tomcat.*
Key_Name log
Parser tomcat_parser
Preserve_Key On
Reserve_Data On
[FILTER]
Name Lua
Match tomcat.*
code function dummy_filter1(a,b,c)local n=c;n["dummy1"]="dummy1";return 2,b,n end
call dummy_filter1
customParsers: |
[PARSER]
Format regex
Name tomcat_parser
Regex ^(?<apptime>[0-9-a-zA-Z]+\s[0-9:\.]+)\s+(?<level>[a-zA-Z]+)\s+\[(?<thread>[a-zA-Z]+)\]\s+(?<applog>.*$)
outputs: |
[OUTPUT]
Name cloudwatch_logs
Match kube.*
Region ${region}
Log_Group_Name /myapps/logs
Log_Stream_Prefix my
Auto_Create_Group On
net.keepalive Off
[OUTPUT]
Name cloudwatch_logs
Match tomcat.*
Region ${region}
Log_Group_Name /myapps/tomcatlogs
Log_Stream_Prefix my
Auto_Create_Group On
net.keepalive Off
</code></pre>
<p>I don't like the existing solution, as non-tomcat logs also get evaluated in the tomcat filter.
Any guidance will be appreciated.</p>
| S Kr | <p>Your tomcat input tags the records as tomcat.*, which means they will be managed as:</p>
<pre><code>tomcat.var.log.containers.tomcat*.log
</code></pre>
<p>And your kubernetes filter has</p>
<pre><code>Kube_Tag_Prefix kube.var.log.containers.
</code></pre>
<p>So the tomcat-tagged records won't fit the kube tag prefix, and the filter won't be able to correctly parse the log file names.
You can set <code>Log_Level debug</code> for fluent-bit inside the [SERVICE] section. This will give you more detailed information about what is happening.</p>
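<p>A possible adjustment, following the tag-prefix logic above (only a sketch — not verified against your exact chart values layout): add a second kubernetes filter matched on the tomcat tag with a matching <code>Kube_Tag_Prefix</code>, and turn on debug logging wherever your chart exposes the [SERVICE] section:</p>
<pre><code>[FILTER]
    Name                kubernetes
    Match               tomcat.*
    Kube_URL            https://kubernetes.default.svc:443
    Kube_Tag_Prefix     tomcat.var.log.containers.
    Merge_Log           On
    Merge_Log_Key       log_processed

[SERVICE]
    Log_Level           debug
</code></pre>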
<p>Hope this helps!</p>
| Jose David Palacio |
<p>I spinned up k8 cluster using Kesley KTHW (<a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/</a>) in GCP.</p>
<p>Trying to do some exercise on this link => <a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/b.multi_container_pods.md" rel="nofollow noreferrer">https://github.com/dgkanatsios/CKAD-exercises/blob/master/b.multi_container_pods.md</a> and my external dns resolution fails from the pod.</p>
<p><strong>Version:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kubectl version Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>Pod fails to resolve google.com:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kubectl exec -ti dnsutils -- nslookup google
Server: 10.32.0.10
Address: 10.32.0.10#53
** server can't find google.us-west1-c.c.test.internal: SERVFAIL
command terminated with exit code 1
</code></pre>
<p><strong>Pod's /etc/resolv.conf:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created
sshanmugagani@MSI:~/cka/skk8/practise-1$ k exec -it dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local us-west1-c.c.test.internal c.test.internal google.internal
nameserver 10.32.0.10
options ndots:5
</code></pre>
<p><strong>Getting worker node where pod runs:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kgp -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dnsutils 1/1 Running 0 60s 10.200.0.65 worker-0 <none> <none> multi 0/2 Completed 0 12h 10.200.0.53 worker-0 <none> <none>
</code></pre>
<p><strong>Worker node resolves:</strong></p>
<pre><code>sshanmugagani@worker-0:~$ nslookup google.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: google.com
Address: 74.125.20.101
Name: google.com
Address: 74.125.20.100
Name: google.com
Address: 74.125.20.139
Name: google.com
Address: 74.125.20.138
Name: google.com
Address: 74.125.20.113
Name: google.com
Address: 74.125.20.102
Name: google.com
Address: 2607:f8b0:400e:c09::65
Name: google.com
Address: 2607:f8b0:400e:c09::8a
Name: google.com
Address: 2607:f8b0:400e:c09::8b
Name: google.com
Address: 2607:f8b0:400e:c09::71
</code></pre>
<p><strong>Coredns:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kgp $ks NAME READY STATUS RESTARTS AGE coredns-5677dc4cdb-cfl2j 1/1 Running 1 11h coredns-5677dc4cdb-xqm44 1/1 Running 1 11h
</code></pre>
<p><strong>Coredns logs:</strong></p>
<pre><code>sshanmugagani@MSI:~/cka/skk8/practise-1$ kubectl logs coredns-5677dc4cdb-cfl2j $ks .:53 [INFO] plugin/reload: Running configuration MD5 = fbb756dad13bce75afc40db627b38529 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [ERROR] plugin/errors: 2 2953017454530458158.338294255644342916. HINFO: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 google.com.us-west1-c.c.test.internal. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 google.com. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 google.com. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 kube-dns. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.us-west1-c.c.test.internal. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.c.test.internal. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.c.test.internal. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.goo. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.us-west1-c.c.test.internal. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.c.test.internal. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.goo. AAAA: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.goo. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.us-west1-c.c.test.internal. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.c.test.internal. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.goo. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 neverssl.com.us-west1-c.c.test.internal. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 google.com. A: plugin/loop: no next plugin found [ERROR] plugin/errors: 2 google. A: plugin/loop: no next plugin found
</code></pre>
| sshanmugagani | <p>Got this issue fixed by:</p>
<ul>
<li>Loading the kernel module: <code>modprobe br_netfilter</code></li>
<li>Adding the <code>--masquerade-all</code> flag to the kube-proxy start-up file</li>
</ul>
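<p>For completeness, a sketch of how those two changes might be made persistent — the kube-proxy unit path and existing flags below are assumptions based on a typical Kubernetes the Hard Way worker setup:</p>
<pre><code># load br_netfilter on every boot
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe br_netfilter

# append the flag to the kube-proxy systemd unit (path/flags assumed), e.g.
# /etc/systemd/system/kube-proxy.service:
#   ExecStart=/usr/local/bin/kube-proxy \
#     --config=/var/lib/kube-proxy/kube-proxy-config.yaml \
#     --masquerade-all
sudo systemctl daemon-reload && sudo systemctl restart kube-proxy
</code></pre>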
| sshanmugagani |
<p>In Kubernetes I use the Traefik ingress controller. My Kubernetes cluster is Bare Metal.</p>
<p>I have two services listening on ports 8080 and 8082. These two services are tied to one deployment. The deployment has an application that listens on these two ports for different tasks.</p>
<p>Can traffic be routed to these two ports through the same entrypoint, or is this not recommended?</p>
| Maksim | <p>I'm not familiar with kubernetes, so excuse me if I misunderstood the question.</p>
<p>I'm running traefik with a single entry point on port 443 in front of multiple docker-compose services. That's no problem whatsoever. However, traefik needs to know which service the client wants to reach. This is done by specifying different host rules for these services. <a href="https://doc.traefik.io/traefik/routing/routers/#rule" rel="nofollow noreferrer">see</a></p>
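<p>To translate that idea to Kubernetes, a minimal sketch with Traefik's IngressRoute CRD is shown below — the host names, entry point name and service names are placeholders, and it assumes the two ports are exposed as two Services in front of the same deployment, as in the question:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-routes
spec:
  entryPoints:
    - websecure              # the single shared entrypoint
  routes:
    - match: Host(`api.example.com`)
      kind: Rule
      services:
        - name: app-svc-8080   # Service in front of port 8080
          port: 8080
    - match: Host(`admin.example.com`)
      kind: Rule
      services:
        - name: app-svc-8082   # Service in front of port 8082
          port: 8082
</code></pre>
<p>Both routers share the same entrypoint; the Host (or a PathPrefix) rule is what tells Traefik which backing Service — and therefore which container port — to forward to.</p>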
| Deffa |
<p>I have a statefulset from mongo with 2 volumemounts:</p>
<pre><code> volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-config-storage
mountPath: /data/configdb
</code></pre>
<p>I want to know how to add the second volume in volumeClaimTemplates:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "sc-infra"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
</code></pre>
| Thiago Oliveira | <p>Just append an additional claim to your <code>volumeClaimTemplates</code>. Example:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "sc-infra"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
- metadata: # <-- append another claim
name: mongo-config-storage
spec:
storageClassName: sc-infra
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
</code></pre>
| gohm'c |
<p>I have a PHP Laravel project, I have <code>Dockerfile</code>:</p>
<pre><code>FROM php:7.4-fpm
COPY . /myapp
WORKDIR /myapp
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN curl -sS https://getcomposer.org/installer | php
RUN php composer.phar update
EXPOSE 8000
CMD [ "php", "artisan", "serve", "--host", "0.0.0.0" ]
</code></pre>
<p>I build the image with <code>docker build -t laravel-app .</code> and run it with <code>docker run -d -p 8000:8000 --name backend app</code>, on http://localhost:8000 I can access the api correctly.</p>
<hr />
<h1>The issue:</h1>
<p>I am trying to use Kubernetes for this project, I've written a <code>laravel-deployment.yaml</code> file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deployment
spec:
replicas: 1
selector:
matchLabels:
app: backend-laravel
template:
metadata:
labels:
app: backend-laravel
spec:
containers:
- name: laravel-app
image: laravel-app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
</code></pre>
<p>When I try to deploy it with <code>kubectl apply -f laravel-deployment.yaml</code>, the deployment is successful and the pod is created, but I can't access it at http://localhost:8000.
What I previously did:</p>
<ul>
<li>I've set docker to point to minikube with <code>eval $(minikube docker-env)</code></li>
<li>Create the service <code>kubectl expose -f laravel-deployment.yaml --port=8000 --target-port=8000</code></li>
</ul>
| Hamza Ince | <p><code>...can't access with http://localhost:8000 What I previously did</code></p>
<p>You can access http://localhost:8000 with <code>kubectl port-forward <backend-deployment-xxxx-xxxx> 8000:8000</code>. You can also expose as <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">NodePort or LoadBalancer</a> where port-forward will not be required.</p>
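<p>As a sketch of the NodePort route on minikube (the service name below is just illustrative):</p>
<pre><code># expose the deployment on a NodePort
kubectl expose deployment backend-deployment --name=backend-svc \
  --type=NodePort --port=8000 --target-port=8000

# minikube prints a reachable URL for the NodePort service
minikube service backend-svc --url
</code></pre>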
| gohm'c |
<p>I have Kubernetes 1.17.5 and Istio 1.6.8 installed with demo profile.</p>
<p>And here is my test setup [nginx-ingress-controller] -> [proxy<->ServiceA] -> [proxy<->ServiceB]</p>
<ul>
<li>Proxies for serviceA and serviceB are auto-injected by Istio (istio-injection=enabled)</li>
<li>Nginx ingress controller does not have tracing enabled and has no envoy proxy as a sidecar</li>
<li>ServiceA passes tracing headers down to ServiceB</li>
<li>I'm trying to trace calls from ServiceA to ServiceB and do not care about Ingress->ServiceA span at the moment</li>
</ul>
<p>When I'm sending requests to ingress controller I can see that ServiceA receives all required tracing headers from the proxy</p>
<pre><code>x-b3-traceid: d9bab9b4cdc8d0a7772e27bb7d15332f
x-request-id: 60e82827a270070cfbda38c6f30f478a
x-envoy-internal: true
x-b3-spanid: 772e27bb7d15332f
x-b3-sampled: 0
x-forwarded-proto: http
</code></pre>
<p>Problem is <strong>x-b3-sampled</strong> is always set to 0 and no spans/traces are getting pushed to Jaeger</p>
<p>Few things I've tried</p>
<ol>
<li>I've added Gateway and VirtualService to ServiceA to expose it through Istio ingressgateway. When I send traffic through ingressgateway everything works as expected. I can see traces [ingress-gateway]->[ServiceA]->[ServiceB] in the JaegerUI</li>
<li>I've also tried to install Istio with custom config and play with tracing related parameters with no luck.</li>
</ol>
<p>Here is the config I've tried to use</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
enableTracing: true
defaultConfig:
tracing:
sampling: 100
addonComponents:
tracing:
enabled: true
grafana:
enabled: false
istiocoredns:
enabled: false
kiali:
enabled: false
prometheus:
enabled: false
values:
tracing:
enabled: true
pilot:
traceSampling: 100
</code></pre>
| arkadi4 | <p>After few days of digging I've figured it out. Problem is in the format of the <code>x-request-id</code> header that nginx ingress controller uses.</p>
<p>Envoy proxy expects it to be a UUID (e.g. <code>x-request-id: 3e21578f-cd04-9246-aa50-67188d790051</code>), but the ingress controller passes it as a non-formatted random string (<code>x-request-id: 60e82827a270070cfbda38c6f30f478a</code>). When I pass a properly formatted x-request-id header in the request to the ingress controller, it gets passed down to the Envoy proxy and the request gets sampled as expected. I also tried removing the
x-request-id header from the request between the ingress controller and ServiceA with a simple EnvoyFilter, and that also works as expected: the Envoy proxy generates a new x-request-id and the request gets traced.</p>
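<p>As a quick way to reproduce the first workaround, you could send a request through the ingress with a well-formed UUID yourself (the host and path below are placeholders):</p>
<pre><code>curl -s -H "x-request-id: $(uuidgen | tr 'A-Z' 'a-z')" \
  http://my-ingress.example.com/service-a/some-path
</code></pre>
<p>With a valid UUID in place, the sidecar should honour the header and the trace should show up in Jaeger (given the 100% sampling configured above).</p>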
| arkadi4 |
<p>A pod can be created by Deployment or ReplicaSet or DaemonSet, if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod? Would it be erroneous once I have done that?</p>
<p>Brief Question:
Is <code>kubectl apply -f xxx.yml</code> the silver bullet for all pod update?</p>
| Steve Wu | <p><code>...if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod?</code></p>
<p>Since the pod spec is part of the controller spec (e.g. Deployment, DaemonSet), to update the container spec you naturally start with the controller spec. Also, a running pod is largely immutable; there isn't much you can change directly unless you do a replace - which is what the controller is already doing for you.</p>
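<p>A typical flow, assuming a Deployment named <code>my-deployment</code> defined in <code>deployment.yaml</code> (names are placeholders):</p>
<pre><code># option 1: edit the container spec in the manifest, then re-apply it
kubectl apply -f deployment.yaml

# option 2: patch just the image imperatively
kubectl set image deployment/my-deployment my-container=my-image:2.0

# watch the controller roll the pods over to the new spec
kubectl rollout status deployment/my-deployment
</code></pre>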
| gohm'c |
<p>I want to mock a response from the Python Kubernetes client. Below code of my Kubernetes service:</p>
<pre class="lang-py prettyprint-override"><code>import os
from kubernetes.client.rest import ApiException
from kubernetes import client
from kubernetes.config import load_config
from exceptions.logs_not_found_exceptions import LogsNotFound
import logging
log = logging.getLogger("services/kubernetes_service.py")
class KubernetesService:
def __init__(self):
super().__init__()
if os.getenv("DISABLE_KUBERNETES_CONFIG") == "False":
load_config()
self.api_instance = client.CoreV1Api()
def get_namespaces(self):
try:
api_response = self.api_instance.list_namespace()
dict_response = api_response.to_dict()
namespaces = []
for item in dict_response['items']:
namespaces.append(item['metadata']['name'])
log.info(f"Retrieved the namespaces: {namespaces}")
return namespaces
except ApiException as e:
raise e
</code></pre>
<p>When I want to mock this using mock.patch I'm getting a ModuleNotFoundError. Below code of my Test class</p>
<pre class="lang-py prettyprint-override"><code>import os
from unittest import mock
from tests.unit_tests import utils
from services.kubernetes_service import KubernetesService
class TestKubernetesService:
@mock.patch.dict(os.environ, {"DISABLE_KUBERNETES_CONFIG": "True"})
def test_get_namespaces(self):
self.service = KubernetesService()
print(self.service.api_instance)
with mock.patch('services.kubernetes_service.KubernetesService.api_instance.list_namespace',
return_value=utils.kubernetes_namespaces_response()):
actual_result = self.service.get_namespaces()
assert actual_result == ['default', 'kube-node-lease', 'kube-public', 'kube-system']
</code></pre>
<p>When I edit the path in <code>mock.patch</code> from <code>services.kubernetes_service.KubernetesService.api_instance.list_namespace</code> to <code>services.kubernetes_service.KubernetesService.get_namespaces</code> it successfully mocks the return value I put in. But I want to mock the response of the line <code>self.api_instance.list_namespace()</code> in the KubernetesService class.</p>
<p>Someone an idea?</p>
| Lucas Scheepers | <p>In your example, you try to patch an instance attribute of the class (<code>api_instance</code>). This cannot be done by just referencing it from the class, as it is not a class attribute - you need an instance instead.</p>
<p>There are generally two standard methods to mock an instance attribute:</p>
<ul>
<li>mock the whole class, in which case the <code>return_value</code> attribute on the mocked class will be a mock that replaces any instance of the class and can therefore be used for mocking instance attributes</li>
<li>mock a concrete instance using <a href="https://docs.python.org/3/library/unittest.mock.html#patch-object" rel="nofollow noreferrer">mock.patch.object</a> or similar - this requires that you have access to the instance in your test</li>
</ul>
<p>Mocking the whole class is not an option in your case, as you need to use the functionality of the class, but as you have access to the instance via <code>self.service</code>, you can use <code>patch.object</code>:</p>
<pre class="lang-py prettyprint-override"><code> def test_get_namespaces(self):
self.service = KubernetesService()
with mock.patch.object(self.service.api_instance, 'list_namespace',
return_value=utils.kubernetes_namespaces_response()):
actual_result = self.service.get_namespaces()
assert actual_result == ['default', 'kube-node-lease', 'kube-public', 'kube-system']
</code></pre>
| MrBean Bremen |
<p>I am trying to deploy elasticsearch and kibana to kubernetes using <a href="https://github.com/elastic/helm-charts" rel="nofollow noreferrer">this chart</a> and getting this error inside the kibana container, therefore ingress returns 503 error and container is never ready.</p>
<p>Error:</p>
<pre><code>[2022-11-08T12:30:53.321+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.112.130.148:42748, Remote: 10.96.237.95:9200
</code></pre>
<p>Ip adress 10.96.237.95 is a valid elasticsearch service address, and port is right.</p>
<p>When i am doing curl to elasticsearch from inside the kibana container, it successfully returns a response.</p>
<p>Am i missing something in my configurations?</p>
<p><strong>Chart version: 7.17.3</strong></p>
<p>Values for elasticsearch chart:</p>
<pre><code>clusterName: "elasticsearch"
nodeGroup: "master"
createCert: false
roles:
master: "true"
data: "true"
ingest: "true"
ml: "true"
transform: "true"
remote_cluster_client: "true"
protocol: https
replicas: 2
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
imageTag: "7.17.3"
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: password
- name: ELASTIC_USERNAME
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: username
clusterHealthCheckParams: "wait_for_status=green&timeout=20s"
antiAffinity: "soft"
resources:
requests:
cpu: "100m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "1Gi"
esJavaOpts: "-Xms512m -Xmx512m"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
secretMounts:
- name: elastic-certificates
secretName: elastic-certificates
path: /usr/share/elasticsearch/config/certs
</code></pre>
<p>Values for kibana chart:</p>
<pre><code>elasticSearchHosts: "https://elasticsearch-master:9200"
extraEnvs:
- name: ELASTICSEARCH_USERNAME
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: username
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: password
- name: KIBANA_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: encryption-key
key: encryption_key
kibanaConfig:
kibana.yml: |
server.ssl:
enabled: true
key: /usr/share/kibana/config/certs/elastic-certificate.pem
certificate: /usr/share/kibana/config/certs/elastic-certificate.pem
xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
elasticsearch.ssl:
certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
verificationMode: certificate
protocol: https
secretMounts:
- name: elastic-certificate-pem
secretName: elastic-certificate-pem
path: /usr/share/kibana/config/certs
imageTag: "7.17.3"
ingress:
enabled: true
ingressClassName: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-issuer
kubernetes.io/ingress.allow-http: 'false'
paths:
- path: /
pathType: Prefix
backend:
serviceName: kibana
servicePort: 5601
hosts:
- host: mydomain.com
paths:
- path: /
pathType: Prefix
backend:
serviceName: kibana
servicePort: 5601
tls:
- hosts:
- mydomain.com
secretName: mydomain.com
</code></pre>
<p>UPD: tried it with other image version (8.4.1), nothing has changed, i am getting the same error. By the way, logstash is successfully shipping logs to this elasticsearch instance, so i think problem is in kibana.</p>
| A735 | <p>Figured it out. It was a complete pain in the ass. I hope these tips will help others:</p>
<ol>
<li><code>xpack.security.http.ssl.enabled</code> should be set to false. I can't find another way around it, but if you do i'd be glad to hear any advices. As i see it, you don't need security for http layer since kibana connects to elastic via transport layer (correct me if i am wrong). Therefore <code>xpack.security.transport.ssl.enabled</code> should be still set to true, but <code>xpack.security.http.ssl.enabled</code> should be set to false. (don't forget to change your <code>protocol</code> field for readinessProbe to http, and also change protocol for elasticsearch in kibana chart to http.</li>
<li><code>ELASTIC_USERNAME</code> env variable is pointless in elasticsearch chart, only password is used, user is always <code>elastic</code></li>
<li><code>ELASTICSEARCH_USERNAME</code> in kibana chart should be actually set to <code>kibana_systems</code> user with according password for that user</li>
</ol>
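<p>A rough sketch of the adjusted values, under the assumptions in the list above (key names copied from the values in the question; the secret handling for <code>kibana_system</code> is only illustrative):</p>
<pre><code># elasticsearch values (fragment)
protocol: http
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.http.ssl.enabled: false

# kibana values (fragment)
elasticSearchHosts: "http://elasticsearch-master:9200"
extraEnvs:
  - name: ELASTICSEARCH_USERNAME
    value: kibana_system
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kibana-system-creds   # hypothetical secret holding the kibana_system password
        key: password
</code></pre>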
| A735 |
<p>I have a private helm repo using apache, after migrating to helm3 I cannot install/search charts anymore.</p>
<p>Using helm v3</p>
<pre><code>helm repo list
NAME URL
mas http://localhost:8080/charts/
helm search repo mas/devops-openshift
No results found
</code></pre>
<p>Using helm 2.*</p>
<pre><code>helm search -r mas/devops-openshift
NAME CHART VERSION APP VERSION DESCRIPTION
mas/devops-openshift 7.0.0 Devops (OpenShift)
</code></pre>
<p>Same happens when using "helm install" command, it cannot find the charts.</p>
<p>I guess it could be something related to the helm repo index file. Maybe helmv3 is expecting a different structure? But same happen when generating index file from helmv3.</p>
| João Paulo Karol Nunes | <p>Thanks all for the answers, but I've found the issue.
My repository was using development versions of the charts, so I had something like 1.0.0-pre.dev (Semantic Versioning 2.0.0).
By default Helm 3 does not look at non-production (pre-release) charts.
You have to set the <code>--devel</code> flag, something like:
<code>helm search repo mas/devops-openshift --devel</code></p>
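<p>The same flag applies to the other commands that resolve chart versions; for example (the release name is a placeholder):</p>
<pre><code># list all pre-release versions of the chart
helm search repo mas/devops-openshift --devel --versions

# install a pre-release version
helm install my-release mas/devops-openshift --devel
</code></pre>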
| João Paulo Karol Nunes |
<p>I have a situation where my process (pid 1) in a pod sometimes is not responding to the SIGTERM signal due to it has entered a really bad state. This means that it will not be killed by k8s after the prestop hook is run until the grace period is over (which has to be long in my case for other reasons).</p>
<p>In the prestop I'm able to detect this state fairly accurate but the problem is then how to initiate a forceful kill of the process. I cannot for instance <code>kill -9 1</code> because that is not allowed on pid 1.</p>
<p>So the question is if there are other options to do that. If I have read documentation correctly, k8s does not care about the exit code from the prestop hook (otherwise that could have been an option to indicate back to k8s that it should force kill the process directly without waiting for the grace period).
Changing the grace period dynamically for this specific pod when this happens does not possible either.
Any ideas?</p>
| Magnus Håkansson | <p>You can package your image with <a href="https://github.com/krallin/tini" rel="nofollow noreferrer">tini</a> and let tini spawn your application. <code>tini</code> will not miss SIGTERM, and it ensures stale child processes are removed during termination. <code>tini</code> is also what <code>docker</code> incorporates via the <code>--init</code> flag. There is no need for a <code>preStop</code> hook to manually terminate the process in this case.</p>
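<p>A minimal Dockerfile sketch following the tini README pattern (the base image and application command are placeholders):</p>
<pre><code>FROM my-app-base:latest               # hypothetical base image
ENV TINI_VERSION=v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# tini becomes PID 1, forwards signals and reaps children
ENTRYPOINT ["/tini", "--"]
CMD ["/app/run"]                      # hypothetical application start command
</code></pre>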
| gohm'c |
<p>I have mysql deployment.yaml, in which i have some data at <strong>mysql/data</strong> directory which is in local. I want mount this data in deployment. After referring blogs some recommend <strong>hostpath</strong>. Since we are not doing ssh into nodes, i cant recommend or use <strong>hostpath</strong>. is there any way i can do with pv with EBS or any other way ? Thanks in advance.</p>
| Akshay Awate | <p>Here's a minimum spec using EKS default gp2 storage class to get you started.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
clusterIP: None
selector:
app: mysql
ports:
- port: 3306
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: mysql
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0.28
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
volumeMounts:
- name: mysql
mountPath: /var/lib/mysql
volumes:
- name: mysql
persistentVolumeClaim:
claimName: mysql
</code></pre>
<p>You can connect to the mysql via <code>kubectl run -it --rm --image=mysql:8.0.28 --restart=Never mysql-client -- mysql -h mysql -ppassword</code></p>
| gohm'c |
<p>I was going through the official <a href="https://github.com/tektoncd/pipeline/blob/master/docs/tutorial.md" rel="nofollow noreferrer">Tekton documentation</a> where it deploys an image to Kubernetes using <code>kubectl</code> standard deployment object <a href="https://github.com/GoogleContainerTools/skaffold/blob/master/examples/microservices/leeroy-web/kubernetes/deployment.yaml" rel="nofollow noreferrer">manifest</a> . However I am trying use Tekton pipeline as CI/CD to deploy in a knative service which should either use <code>knctl</code> or <code>kubectl</code> with a knative service yaml instead of Deployment yaml of <a href="https://github.com/knative/serving/blob/master/docs/spec/spec.md" rel="nofollow noreferrer">knative serving spec</a></p>
<p>Something like </p>
<pre><code>apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
name: helloworld-go # The name of the app
namespace: default # The namespace the app will use
spec:
template:
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
env:
- name: TARGET # The environment variable printed out by the sample app
value: "Go Sample v1"
</code></pre>
<p>How do I take advantage of Tekton in this case. If I were to install images to any random Kubenetes cluster, I could any other CI/CD tool along with Deployment manifest. I believe Tekton is supposed replace Knative build to make it easy.</p>
| Neil | <p>There is no one way you could approach deploying a Knative Service with Tekton, but a couple key things to consider are as follows. These instructions assume proper installation of both Tekton/Knative Serving on a Kubernetes cluster.</p>
<h3>ServiceAccount permissions to create a Knative Service</h3>
<p>Tekton allows the use of Kubernetes ServiceAccounts to assign permissions (e.g. creating a Knative service) so that a TaskRun or PipelineRun (i.e. the execution of a Tekton CI/CD process) is able to create and update certain resources. In this case, the creation of a Knative Service.</p>
<p>In order to create a ServiceAccount with permissions to create and update a Knative Service, you would need to create a role that is similar to what is shown below:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: create-knative-service
namespace: default
rules:
# Create and update Knative Service
- apiGroups: ["serving.knative.dev"]
resources: ["services"]
verbs: ["get", "create", "update"]
</code></pre>
<p>The Role above allows for the creation and updating of Knative Services.</p>
<p>The Role can be associated with a ServiceAccount via a RoleBinding, which assumes the creation of a ServiceAccount:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: ServiceAccount
apiVersion: v1
metadata:
name: tekton-sa
namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: create-knative-service-binding
subjects:
- kind: ServiceAccount
name: tekton-sa
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: create-knative-service
</code></pre>
<p>Once the Role, RoleBinding, and ServiceAccount are available, you can use this ServiceAccount during a TaskRun or PipelineRun depending on how you are deploying a Knative Service. The ServiceAccount to use can be specified via the <a href="https://github.com/tektoncd/pipeline/blob/master/docs/pipelineruns.md#configuring-a-pipelinerun" rel="nofollow noreferrer">ServiceAccountName property</a> of a TaskRun or PipelineRun.</p>
<p>If you are using the Tekton cli (<code>tkn</code>), you can use <code>-s</code> flag to specify what ServiceAccount to use when starting a PipelineRun or TaskRun:</p>
<pre><code>tkn pipeline start knative-service-deploy -s tekton-sa
</code></pre>
<p><strong>NOTE:</strong> The Role and RoleBinding examples are for a specific case that only allows deployment of the Knative Service to the same namespace where the TaskRun/PipelineRun is executing.</p>
<h3>What step image to use to deploy the Knative Service</h3>
<p>To deploy the Knative Service via a TaskRun or PipelineRun, you will need to create a Task with a Step that deploys the Knative Service. There is no one tool that you could use in this case as you mention as far as using <code>kubectl</code>, <code>kn</code>, or any other tool to deploy to Kubernetes.</p>
<p>The important things for Tekton to know is the Task definition, the image used by the Step, and the command to run to deploy the Knative Service. An example using <code>kn</code> is shown below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: deploy-knative-service
spec:
# Step to create Knative Service from built image using kn. Replaces service if it already exists.
steps:
- name: kn
image: "gcr.io/knative-releases/knative.dev/client/cmd/kn:latest"
command: ["/ko-app/kn"]
args: ["service",
"create", "knative-service",
"--image", "gcr.io/knative-samples/helloworld-go:latest",
"--force"]
</code></pre>
<p>In the example above, a Task is defined with one Step called <code>kn</code>. The Step uses the official <code>kn</code> image; specifies to run the <code>kn</code> root command; and then passes arguments to the <code>kn</code> command to create a Knative Service named <code>knative-service</code>, use an image for the Knative Service named <code>gcr.io/knative-samples/helloworld-go</code>, and the <code>--force</code> flag that updates the Knative Service if it already exists.</p>
<p><strong>EDIT: Adding kubectl Example</strong></p>
<p>The question asks about using <code>kubectl</code>, so I am adding in an example of using <code>kubectl</code> in a Tekton Task to deploy a Knative Service from a YAML file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: deploy-knative-service
spec:
# Step to create Knative Service from YAML file using kubectl.
steps:
- name: kubectl
image: "bitnami/kubectl"
command: ["kubectl"]
args: ["apply", "-f", "https://raw.githubusercontent.com/danielhelfand/lab-knative-serving/master/knative/service.yaml"]
</code></pre>
<h3>Summary</h3>
<p>There is no exact way to answer this question, but hopefully the examples and main points help with considerations around permissions for a PipelineRun or TaskRun as well as how to use certain tools with Tekton.</p>
| dhelfand |
<p>Somewhere along the way I messed up installation of the kubectl, now I want to reinstall the kubectl and kubelet
But when I checked the syslog I am still gettting the following error:</p>
<pre><code>systemd[133926]: kubelet.service: Failed to execute command: No such file or directory
systemd[133926]: kubelet.service: Failed at step EXEC spawning /usr/bin/kubelet: No such file or directory
systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
systemd[1]: kubelet.service: Failed with result 'exit-code'.
</code></pre>
<p>How do i get rid of this error</p>
| tinashe.chipomho | <p>It's not a good idea to remove the kubelet on its own, as it is the main component of the node. If you need to remove the kubelet, it's preferable and easier to remove the node itself and add it back - i.e. scale in and then scale out.</p>
<p>kubectl can be removed by uninstalling the package.</p>
<p>For CentOS:</p>
<pre><code>yum remove -y kubectl
</code></pre>
<p>Check and run the equivalent command for your specific OS distribution.</p>
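<p>Since the syslog in the question looks like a Debian/Ubuntu-style node, one possible way to put the missing <code>/usr/bin/kubelet</code> binary back via the package manager (assuming the packages came from the Kubernetes apt repository) would be:</p>
<pre><code>sudo apt-get update
sudo apt-get install --reinstall kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>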
| Santhosh Kumar |
<p>I have a rails app that is deployed on K8S. Inside my web app, there is a cronjob thats running every day at 8pm and it takes 6 hours to finish. I noticed <code>OOMkilled</code> error occurs after a few hours from cronjob started. I also increased memory of a pod but the error still happened.</p>
<p>This is my <code>yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: sync-data
spec:
schedule: "0 20 * * *" # At 20:00:00pm every day
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
jobTemplate:
spec:
ttlSecondsAfterFinished: 100
template:
spec:
serviceAccountName: sync-data
containers:
- name: sync-data
resources:
requests:
memory: 2024Mi # OOMKilled
cpu: 1000m
limits:
memory: 2024Mi # OOMKilled
cpu: 1000m
image: xxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/path
imagePullPolicy: IfNotPresent
command:
- "/bin/sh"
- "-c"
- |
rake xxx:yyyy # Will take ~6 hours to finish
restartPolicy: Never
</code></pre>
<p>Are there any best practices to run long consuming cronjob on K8S?
Any help is welcome!</p>
| Tran B. V. Son | <p>OOM Killed can happen for 2 reasons.</p>
<ol>
<li><p>Your pod is taking more memory than the limit specified. In that case, you need to increase the limit obviously.</p>
</li>
<li><p>If all the pods in the node are taking more memory than they have requested then Kubernetes will kill some pods to free up space. In that case, you can give higher priority to this pod.</p>
</li>
</ol>
<p>You should have monitoring in place to actually determine the reason for this. Proper monitoring will show you which pods are performing as expected and which are not. You could also use node selectors for long-running pods and set a priority class so that non-cron pods are evicted first.</p>
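<p>A minimal sketch of that priority-class idea (the names and the value are arbitrary placeholders):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: nightly-sync-priority
value: 1000000            # higher value = higher priority
globalDefault: false
description: "Priority for the long-running sync-data cron job"
---
# then reference it in the CronJob's pod template
spec:
  jobTemplate:
    spec:
      template:
        spec:
          priorityClassName: nightly-sync-priority
</code></pre>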
| Saurabh Nigam |
<p>I am new to Kubernetes.</p>
<p>I found some errors while using google-cloud-storage.</p>
<p>The problem is,
when I specify GCLOUD_PRIVATE_KEY directly in .yaml file,
I work nicely.</p>
<pre><code> - name: GCLOUD_PRIVATE_KEY
value: "-----BEGIN PRIVATE KEY-----\n(...)\n-----END PRIVATE KEY-----\n"
</code></pre>
<p>However, when I inject the variable to cluster.</p>
<p>AT terminal</p>
<pre><code>kubectl create secret generic gcloud-private-key --from-literal=GCLOUD_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n(...)\n-----END PRIVATE KEY-----\n"
</code></pre>
<p>AT .yaml</p>
<pre><code>- name: GCLOUD_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: gcloud-private-key
key: GCLOUD_PRIVATE_KEY
</code></pre>
<p>The error related to key occurs.
I even tried for escape notation just in case,</p>
<p>AT terminal</p>
<pre><code>kubectl create secret generic gcloud-private-key --from-literal=GCLOUD_PRIVATE_KEY='"-----BEGIN PRIVATE KEY-----\n(...)\n-----END PRIVATE KEY-----\n"'
</code></pre>
<p>However, it doesn't work as well! Can you let me know how I can fix it??</p>
| Jun | <p><code>kubectl create secret generic gcloud-private-key --from-literal=GCLOUD_PRIVATE_KEY='"...\n...</code></p>
<p>'\n', '"' are invalid character for TLS key when create from literal. You can load the key directly from the original file as-is:</p>
<p><code>kubectl create secret generic gcloud-private-key --from-literal GCLOUD_PRIVATE_KEY="$(cat <file>)"</code></p>
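<p>An equivalent option, assuming the key lives in a local PEM file (the file name is a placeholder):</p>
<pre><code>kubectl create secret generic gcloud-private-key \
  --from-file=GCLOUD_PRIVATE_KEY=./gcloud-key.pem
</code></pre>
<p>Either way, the secret value keeps real newlines, so the environment variable injected into the pod matches what you were pasting directly into the manifest.</p>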
| gohm'c |
<p>I'm following this <a href="https://phoenixnap.com/kb/how-to-install-jenkins-kubernetes" rel="nofollow noreferrer">Link</a> to setup Jenkins on Kubernetes cluster.</p>
<p>The environment information is mentioned below,</p>
<pre><code>Environment:-
On-Premise Physical Server
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-server Ready master 2d23h v1.19.16
node-server1 Ready worker1 2d23h v1.19.16
node-server2 Ready worker2 2d23h v1.19.16
node-server3 Ready worker3 2d23h v1.19.16
</code></pre>
<p>I have below <code>yaml</code> files.</p>
<pre><code>deploy-jenkins.yaml
sa-jenkins.yaml
service-jenkins.yaml
volume-jenkins.yaml
</code></pre>
<p><code>PersistentVolume</code> i want to use my <code>master</code> server local path, So in the <code>volume-jenkins.yaml</code> file I have updated <code>path</code> and <code>values</code> as below.</p>
<pre class="lang-yaml prettyprint-override"><code> local:
path: /home/linux-user/kubernetes/jenkins
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master-server
</code></pre>
<p>When i apply the <code>yaml</code> files, My jenkins pod remain in <code>pending</code> status always.</p>
<p>Jenkins Pod status:-</p>
<pre><code># kubectl get pods -n jenkins
NAME READY STATUS RESTARTS AGE
jenkins-69b8564b9f-gm48n 0/1 Pending 0 102m
</code></pre>
<p>Jenkins Pod describe Status:-</p>
<pre><code># kubectl describe pod jenkins-69b8564b9f-gm48n -n jenkins
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m45s (x68 over 104m) default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) had volume node affinity conflict.
</code></pre>
<p>PV describe details:-</p>
<pre><code># kubectl describe pv jenkins-pv -n jenkins
Name: jenkins-pv
Labels: type=local
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: jenkins/jenkins-pvc
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [master-server]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/linux-user/kubernetes/jenkins
Events: <none>
</code></pre>
<p>What is wrong with my <code>yaml</code> files? and let me know the way to solve the node conflict issue. Thanks in advance.</p>
| user4948798 | <p><code>...i want to use my master server local path</code></p>
<p>Add <code>nodeSelector</code> and <code>tolerations</code> to your deployment spec:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
containers:
- name: jenkins
...
</code></pre>
| gohm'c |
<p>I try to integrate vault and <em>gitlab</em>.</p>
<p><em>Vault</em> side is ok , and I try to locate vault in our <em>gitlab-ci.yaml</em> but I confused something.</p>
<p>Where is the location of <em>vault</em> in <em>yaml</em> ?</p>
<p>We use <em>gitlab ee</em> (community).</p>
<p>Our <em>yaml</em>:</p>
<pre><code>.kaniko-build:
stage: build
before_script:
- mkdir -p /kaniko/.docker
- |
cat <<EOF > /kaniko/.docker/config.json
{
"auths":{
"${CI_REGISTRY}":{
"auth":"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')"
},
"https://index.docker.io/v1/":{
"auth":"$(printf "%s:%s" "${DOCKERHUB_USERNAME}" "${DOCKERHUB_PASSWORD}" | base64 | tr -d '\n')"
}
}
}
EOF
- cat /kaniko/.docker/config.json
script:
- >-
/kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${DOCKERFILE_PATH}"
--destination "${CI_REGISTRY_IMAGE}:${CI_PIPELINE_IID}"
--destination "${CI_REGISTRY_IMAGE}:latest"
--cache
- echo $(date) $(date)
image:
name: gcr.io/kaniko-project/executor:v1.8.0-debug
entrypoint: [""]
test-build:
extends: .kaniko-build
when: manual
variables:
DOCKERFILE_PATH: "devops/test/Dockerfile"
rules:
- if: $CI_COMMIT_BRANCH
exists:
- devops/test/Dockerfile
interruptible: true
</code></pre>
| Sinankoylu | <p>If you've not already done so, you first need to configure vault for jwt authentication.</p>
<pre><code>vault auth enable -path=jwt/gitlab jwt
</code></pre>
<p>Then configure the new jwt auth with a token validation endpoint that references your gitlab instance.</p>
<pre><code>vault write auth/jwt/gitlab/config \
jwks_url="https://gitlab.example.com/-/jwks" \
bound_issuer="gitlab.example.com"
</code></pre>
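<p>Between configuring the auth method and logging in from the pipeline, a role has to exist for the <code>role=SOME_ROLE_NAME</code> used below. A rough sketch (the policy name, project ID and TTL are placeholders):</p>
<pre><code>vault write auth/jwt/gitlab/role/SOME_ROLE_NAME \
    role_type=jwt \
    policies=my-project-policy \
    token_explicit_max_ttl=60 \
    user_claim=user_email \
    bound_claims='{"project_id":"42"}'
</code></pre>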
<p>Now in your gitlab-ci.yml, login to vault.</p>
<pre><code>- export VAULT_ADDR="https://gitlab.example.com"
- export VAULT_TOKEN="$(vault write -field=token auth/jwt/gitlab/login role=SOME_ROLE_NAME jwt=$CI_JOB_JWT)"
</code></pre>
<p>Next in your gitlab-ci.yml, retrieve the secret.</p>
<pre><code>- export EXAMPLE_SECRET="$(vault kv get -field=EXAMPLE_SECRET_KEY kv-v2/example/secret/path)"
</code></pre>
<p>This is all covered in more detail in the official GitLab docs <a href="https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/" rel="nofollow noreferrer">here</a></p>
| LiveByTheCode |
<p>I'm trying to run a PySpark Job using Kubernetes. Both the main script and the py-files are hosted on Google Cloud storage.
If I launch the Job using the standalone resource manager:</p>
<pre><code>spark-submit \
--master local \
--deploy-mode client \
--repositories "http://central.maven.org/maven2/" \
--packages "org.postgresql:postgresql:42.2.2" \
--py-files https://storage.googleapis.com/foo/some_dependencies.zip \
https://storage.googleapis.com/foo/script.py some args
</code></pre>
<p>It works fine.
But if I try the same using Kubernetes:</p>
<pre><code>spark-submit \
--master k8s://https://xx.xx.xx.xx \
--deploy-mode cluster \
--conf spark.kubernetes.container.image=gcr.io/my-spark-image \
--repositories "http://central.maven.org/maven2/" \
--packages "org.postgresql:postgresql:42.2.2" \
--py-files https://storage.googleapis.com/foo/some_dependencies.zip \
https://storage.googleapis.com/foo/script.py some args
</code></pre>
<p>Then the main script runs, but it can't find the modules in the dependencies files.
I know I can copy all the files inside the Docker image but I would prefer doing it this way.</p>
<p>Is this possible? Am I missing something?</p>
<p>Thanks</p>
| pacuna | <p>So the idea behind the k8s scheduler is to put absolutely everything in the container.</p>
<p>So your CI/CD would build a Dockerfile with the Apache Spark kubernetes Docker as its base and then have a zipped copy of your python repo and driver python script inside the docker image. Like this:</p>
<pre><code>$ bin/spark-submit \
--master k8s://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
--py-files local:///path/to/repo/in/container/pyspark-repo.zip \
--conf spark.kubernetes.container.image=pyspark-repo-docker-image:1.0.0 \
local:///path/to/repo/in/container/pyspark-driver.py
</code></pre>
<p>Your <code>spark.kubernetes.container.image</code> should be your full application complete with a</p>
<ul>
<li>zip of the repo for <code>--py-files</code> (ex: repo.zip)</li>
<li>your <code>requirements.txt</code> installed to the container's version of python (done in your repo's Dockerfile)</li>
<li>driver script (ex: driver.py)</li>
</ul>
| Jason Hatton |
<p>I have cassandra operator installed and I setup cassandra datacenter/cluster with 3 nodes.
I have created sample keyspace, table and inserted the data. I see it has created 3 PVC's in my storage section. When I deleting the dataceneter its delete associated PVC's as well ,So when I setup same configuration Datacenter/cluster , its completely new , No earlier keyspace or tables.
How can I make them persistence for future use? I am using sample yaml from below
<a href="https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x" rel="nofollow noreferrer">https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x</a></p>
<p>I don't find any persistentVolumeClaim configuration in it , Its having storageConfig:
cassandraDataVolumeClaimSpec:
Is anyone came across such scenario?</p>
<p>Edit: Storage class details:</p>
<pre><code>allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
description: Provides RWO and RWX Filesystem volumes with Retain Policy
storageclass.kubernetes.io/is-default-class: "false"
name: ocs-storagecluster-cephfs-retain
parameters:
clusterID: openshift-storage
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
</code></pre>
<p>Here is Cassandra cluster YAML:</p>
<pre><code> apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
name: dc
generation: 2
spec:
size: 3
config:
cassandra-yaml:
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
jvm-options:
additional-jvm-opts:
- '-Ddse.system_distributed_replication_dc_names=dc1'
- '-Ddse.system_distributed_replication_per_dc=1'
initial_heap_size: 800M
max_heap_size: 800M
resources: {}
clusterName: cassandra
systemLoggerResources: {}
configBuilderResources: {}
serverVersion: 3.11.7
serverType: cassandra
storageConfig:
cassandraDataVolumeClaimSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: ocs-storagecluster-cephfs-retain
managementApiAuth:
insecure: {}
</code></pre>
<p>EDIT: PV Details:</p>
<pre><code>oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
creationTimestamp: "2022-02-23T20:52:54Z"
finalizers:
- kubernetes.io/pv-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/provisioned-by: {}
f:spec:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:claimRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:resourceVersion: {}
f:uid: {}
f:csi:
.: {}
f:controllerExpandSecretRef:
.: {}
f:name: {}
f:namespace: {}
f:driver: {}
f:nodeStageSecretRef:
.: {}
f:name: {}
f:namespace: {}
f:volumeAttributes:
.: {}
f:clusterID: {}
f:fsName: {}
f:storage.kubernetes.io/csiProvisionerIdentity: {}
f:subvolumeName: {}
f:volumeHandle: {}
f:persistentVolumeReclaimPolicy: {}
f:storageClassName: {}
f:volumeMode: {}
manager: csi-provisioner
operation: Update
time: "2022-02-23T20:52:54Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2022-02-23T20:52:54Z"
name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
resourceVersion: "51684941"
selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: server-data-cstone-cassandra-cstone-dc-default-sts-1
namespace: dv01-cornerstone
resourceVersion: "51684914"
uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7
csi:
controllerExpandSecretRef:
name: rook-csi-cephfs-provisioner
namespace: openshift-storage
driver: openshift-storage.cephfs.csi.ceph.com
nodeStageSecretRef:
name: rook-csi-cephfs-node
namespace: openshift-storage
volumeAttributes:
clusterID: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com
subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c
volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c
persistentVolumeReclaimPolicy: Retain
storageClassName: ocs-storagecluster-cephfs-retain
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
| Sanjay Bagal | <p>According to the spec:</p>
<blockquote>
<p>The storage configuration. This sets up a 100GB volume at /var/lib/cassandra
on each server pod. The user is left to create the server-storage storage
class by following these directions...
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd</a></p>
</blockquote>
<p>Before you deploy the Cassandra spec, first ensure your cluster already has the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#enabling_the_on_an_existing_cluster" rel="nofollow noreferrer">CSI driver</a> installed and working properly, then proceed to create the StorageClass the spec requires:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
type: pd-ssd
</code></pre>
<p>Re-deploy your Cassandra now should have the data disk retain upon deletion.</p>
| gohm'c |
<p>I have Prometheus installed on GCP, and i'm able to do a port-forward and access the Prometheus UI</p>
<p>Prometheus Pods, Events on GCP :</p>
<pre><code>Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-5ccfb68647-8fjrz 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
grafana-5ccfb68647-h7vbr 1/1 Running 0 5h24m 10.76.0.9 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-hw6d5 1/1 Running 0 5h24m 10.76.0.4 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-znjs6 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
prometheus-prometheus-0 2/2 Running 0 5h24m 10.76.0.10 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-1 2/2 Running 0 5h24m 10.76.0.7 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-2 2/2 Running 0 5h24m 10.76.0.11 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get endpoints -n monitoring
NAME ENDPOINTS AGE
grafana 10.76.0.9:3000 28h
grafana-lb 10.76.0.9:3000 54m
prometheus-lb 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 155m
prometheus-nodeport 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 149m
prometheus-operated 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 28h
prometheus-operator 10.76.0.4:8080 29h
</code></pre>
<p>I've created a NodePort service (port 30900), and also created a firewall rule allowing ingress to port 30900.</p>
<pre><code>Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get svc -n monitoring | grep prometheus-nodeport
prometheus-nodeport NodePort 10.80.7.195 <none> 9090:30900/TCP 146m
</code></pre>
<p>However, when I try to access http://<node_ip>:30900,
the URL is not accessible.
Also, telnet to the host/port is not working:</p>
<pre><code>Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
Request timeout for icmp_seq 0
</code></pre>
<p>Here is the yaml used to create the NodePort (in monitoring namespace)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: prometheus-nodeport
spec:
type: NodePort
ports:
- name: web
nodePort: 30900
port: 9090
protocol: TCP
targetPort: 9090
selector:
prometheus: prometheus
</code></pre>
<p>Any ideas on what the issue is?
How do I debug/resolve this?</p>
| Karan Alang | <blockquote>
<p>Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet <strong>10.76.0.11</strong>
30900 Trying 10.76.0.11...</p>
<p>Karans-MacBook-Pro:prometheus-yamls karanalang$ ping <strong>10.76.0.7</strong> PING
10.76.0.7 (10.76.0.7): 56 data bytes</p>
</blockquote>
<p>The IPs that you used above appear to be in the Pod CIDR range, judging from the Endpoints result in the question. These are not the <strong>worker node</strong> IPs (for a NodePort you need a node address, e.g. the EXTERNAL-IP column from <code>kubectl get nodes -o wide</code>). So you first need to check that you can reach one of the worker nodes over the network you are on now (home? VPN? internet?), and that the worker node actually has the port (30900) opened.</p>
| gohm'c |
<p>In our environment, our traffic arrives at our applications via a proxy, and this traffic is received & passed on by an nginx-ingress-controller.</p>
<p>At this nginx-ingress-controller, I'd like to do the following 3 things:</p>
<ul>
<li>Retrieve the "real" client's IP Address, so that we can use it for logging and so on, in our upstream applications.</li>
<li>Enforce rate-limiting based on the client's "real" IP Address, to ensure that we don't have bad actors trying to muck about with our applications</li>
<li>Only allow connections to our nginx-ingress-controller from the proxy server</li>
</ul>
<p>From all of my experiments, it seems like it's an either or scenario. I.e. Either I can retrieve the clients "real" IP address and use it for rate-limiting/pass it upstream for logging OR I can work with the Proxy server's connecting IP Address and enforce my whitelist.</p>
<p>It feels like it should be possible to do all three, but I just haven't managed to get it right yet.</p>
<p>We're running the controller on kubernetes, and I'm injecting all of the relevant config using a config map. Here are the settings that I'm injecting:</p>
<pre><code> 'proxy-real-ip-cidr': '173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32'
'use-forwarded-headers': 'true'
'forwarded-for-header': 'CF-Connecting-IP'
'server-tokens': 'false'
'proxy-body-size': '100M'
'http-snippet' : |
limit_req_zone $binary_remote_addr zone=perip:10m rate=20r/s;
'location-snippet' : |
limit_req zone=perip burst=40 nodelay;
limit_req_status 429;
limit_conn_status 429;
allow 173.245.48.0/20;
allow 103.21.244.0/22;
allow 103.22.200.0/22;
allow 103.31.4.0/22;
allow 141.101.64.0/18;
allow 108.162.192.0/18;
allow 190.93.240.0/20;
allow 188.114.96.0/20;
allow 197.234.240.0/22;
allow 198.41.128.0/17;
allow 162.158.0.0/15;
allow 104.16.0.0/12;
allow 172.64.0.0/13;
allow 131.0.72.0/22;
allow 2400:cb00::/32;
allow 2606:4700::/32;
allow 2803:f800::/32;
allow 2405:b500::/32;
allow 2405:8100::/32;
allow 2a06:98c0::/29;
allow 2c0f:f248::/32;
deny all;
</code></pre>
<p>Please let me know if you have any questions, or if I can explain any of the above a bit more clearly. Any and all help is greatly appreciated!</p>
<p>EDIT:
I've found a feature request on Kubernetes/ingress-nginx that deals with the exact mechanism that I'm looking for, but it doesn't seem to have ever been addressed: <a href="https://github.com/kubernetes/ingress-nginx/issues/2257" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/2257</a>
So if anybody knows of any workarounds, that would be greatly appreciated.</p>
| Duncan Gener8 | <p>So ultimately, I wasn't able to achieve the goal specifically laid out in my original post, but I was able to create a workaround.</p>
<p>Since our infrastructure is hosted in GCP on GKE, I was able to replace our existing Layer 4 load balancer with a Layer 7 Load Balancer, which allowed me to enforce the IP Address whitelisting by using a Cloud Armor policy, and leave the rate-limiting on the "real" client IPs up to nginx.</p>
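<p>For anyone wanting to reproduce this on GKE, here is a rough sketch (the policy and object names below are assumptions, not my exact resources): a <code>BackendConfig</code> referencing a Cloud Armor security policy, which you then attach to the ingress controller's Service via the <code>cloud.google.com/backend-config</code> annotation so the L7 load balancer's backend enforces the whitelist.</p>
<pre><code># Sketch only: "allow-only-proxy" is a hypothetical Cloud Armor policy
# containing allow rules for the proxy's IP ranges plus a default deny rule.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: proxy-whitelist-backendconfig
spec:
  securityPolicy:
    name: allow-only-proxy
</code></pre>
<p>With the whitelisting enforced at the edge this way, the nginx <code>limit_req</code> configuration from the question keeps rate-limiting on the real client IPs forwarded by the proxy.</p>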
| Duncan Gener8 |
<p>For some reasons, I cannot use the helm chart given <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a> inside my premise. Is there any reference how can we do this?</p>
| Saurav Bhagat | <p><strong>Yes, you can deploy JupyterHub without using Helm.</strong></p>
<p>Follow the tutorial on: <a href="https://github.com/jupyterhub/jupyterhub#installation" rel="nofollow noreferrer">Jupyterhub Github Installation page</a></p>
<p><strong>But,</strong></p>
<p><strong>The Helm installation was created to automate a long part of the installation process.</strong></p>
<ul>
<li>I know you can't reach external Helm repositories from your premises, but you can download the package manually and install it.</li>
<li>It will be much easier and faster than creating the whole setup manually.</li>
</ul>
<hr>
<p><strong>TL;DR:</strong> The only thing different <a href="https://zero-to-jupyterhub.readthedocs.io/en/stable/setup-jupyterhub.html" rel="nofollow noreferrer">From Documentation</a> will be this command: </p>
<pre><code>helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
</code></pre>
<p>Below is my full reproduction of the local installation.</p>
<pre class="lang-sh prettyprint-override"><code>user@minikube:~/jupyterhub$ openssl rand -hex 32
e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee
user@minikube:~/jupyterhub$ cat config.yaml
proxy:
secretToken: "e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee"
user@minikube:~/jupyterhub$ wget https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
2020-02-10 13:25:31 (60.0 MB/s) - ‘jupyterhub-0.8.2.tgz’ saved [27258/27258]
user@minikube:~/jupyterhub$ helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
Release "jhub" does not exist. Installing it now.
NAME: jhub
LAST DEPLOYED: Mon Feb 10 13:27:20 2020
NAMESPACE: jhub
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing JupyterHub!
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
user@minikube:~/jupyterhub$ k get all -n jhub
NAME READY STATUS RESTARTS AGE
pod/hub-68d9d97765-ffrz6 0/1 Pending 0 19m
pod/proxy-56694f6f87-4cbgj 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.96.150.230 <none> 8081/TCP 19m
service/proxy-api ClusterIP 10.96.115.44 <none> 8001/TCP 19m
service/proxy-public LoadBalancer 10.96.113.131 <pending> 80:31831/TCP,443:31970/TCP 19m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 0/1 1 0 19m
deployment.apps/proxy 1/1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-68d9d97765 1 1 0 19m
replicaset.apps/proxy-56694f6f87 1 1 1 19m
NAME READY AGE
statefulset.apps/user-placeholder 0/0 19m
</code></pre>
<p>If you have any problem in the process, just let me know.</p>
| Will R.O.F. |
<p>I wrote the query below to get the uptime for the microservices.</p>
<p><code>base_jvm_uptime_seconds{kubernetes_name="namepspce1"}</code></p>
<p>However, it returns multiple values, so Grafana returns "Only queries that return single series/table is supported". I am wondering how I can get the first value from the query result.</p>
<p>I tried <code>base_jvm_uptime_seconds{kubernetes_name="namepspce1"}[0]</code>, but it doesn't work.</p>
<p>Thanks!</p>
| user3464179 | <p>I suggest you first inspect the label values of these multiple time series by running the query in the Prometheus Graph console.
Then you'll need to decide which one you want to display. Picking a random first series usually isn't the best idea.
But you can always do <code>topk(1,query)</code> if it helps; with your metric that would be <code>topk(1, base_jvm_uptime_seconds{kubernetes_name="namepspce1"})</code>. Just turn the Instant mode on in the Grafana Query editor.</p>
| sskrlj |
<p>I am going to install kubernetes on my VPS servers. The VPS servers based on Ubuntu 18.04 server and I am using <a href="https://en.wikipedia.org/wiki/Uncomplicated_Firewall" rel="nofollow noreferrer">Uncomplicated Firewall</a>.</p>
<p>I have to <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">open several ports</a> on Ubuntu server, but one of them is marked with a wildcard:</p>
<pre><code>TCP Inbound 6443* Kubernetes API server All
</code></pre>
<p>How can I open a port with a wildcard? Would the following be correct?</p>
<pre><code>sudo ufw allow 6443*
</code></pre>
| softshipper | <p>The wildcard <code>*</code> in this case means that it could be <strong>any port</strong> that fits your needs (except, of course, ports already in use or reserved).</p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">In documentation</a>:</p>
<blockquote>
<p>Any port numbers marked with <code>*</code> are overridable, so you will need to ensure any custom ports you provide are also open.</p>
</blockquote>
<p>Open the port with: <code>sudo ufw allow 6443</code> and you are good to go.</p>
<hr />
<p>Also related to this question, UFW <em>does not accept</em> the wildcard for rules.</p>
<ul>
<li>You can specify one port: <code>ufw allow 6443</code></li>
<li>You can specify the service: <code>ufw allow ftp</code></li>
<li>You can specify a range: <code>ufw allow 1234:5678/tcp</code></li>
</ul>
| Will R.O.F. |
<p>We have a GKE cluster set up on google cloud platform.</p>
<p>We have an activity that requires 'bursts' of computing power. </p>
<p>Imagine that we usually do 100 computations an hour on average, then suddenly we need to be able to process 100,000 in less than two minutes. However, most of the time everything is close to idle.</p>
<p>We do not want to pay for idle servers 99% of the time, and want to scale clusters depending on actual use (no data persistence needed, servers can be deleted afterwards). I looked up the documentation available on Kubernetes regarding autoscaling, for adding more pods with <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a> and adding more nodes with <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler</a></p>
<p>However, it doesn't seem like any of these solutions would actually reduce our costs or improve performance, because they do not seem to scale past the GCP plan:</p>
<p>Imagine that we have a <a href="https://cloud.google.com/compute/all-pricing" rel="nofollow noreferrer">google plan</a> with 8 CPUs. My understanding is that if we add more nodes with cluster autoscaler, then instead of having e.g. 2 nodes using 4 CPUs each we will have 4 nodes using 2 CPUs each. But the total available computing power will still be 8 CPUs.
The same reasoning goes for HPA with more pods instead of more nodes.</p>
<p>If we have the 8 CPU payment plan but only use 4 of them, my understanding is we still get billed for 8 so scaling down is not really useful.</p>
<p>What we want is autoscaling to change our payment plan temporarily (imagine from n1-standard-8 to n1-standard-16) and get actual new computing power.</p>
<p>I can't believe we are the only ones with this use case but I cannot find any documentation on this anywhere! Did I misunderstand something ?</p>
| Xavier Burckel | <p><strong>TL;DR:</strong></p>
<ul>
<li>Create a small persistant node-pool </li>
<li>Create a powerfull node-pool that can be scaled to zero (and cease billing) while not in use.</li>
<li>Tools used:
<ul>
<li>GKE’s <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="noreferrer">Cluster Autoscaling</a>, <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer">Node selector</a>, <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="noreferrer">Anti-affinity rules</a> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">Taints and tolerations</a>.</li>
</ul></li>
</ul>
<hr>
<p><strong>GKE Pricing:</strong></p>
<ul>
<li>From <a href="https://cloud.google.com/kubernetes-engine/pricing" rel="noreferrer">GKE Pricing</a>:
<blockquote>
<p>Starting June 6, 2020, GKE will charge a cluster management fee of $0.10 per cluster per hour. The following conditions apply to the cluster management fee:</p>
<ul>
<li>One <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters" rel="noreferrer">zonal cluster</a> per billing account is <strong>free</strong>.</li>
<li>The fee is flat, irrespective of cluster size and topology.</li>
<li>Billing is computed on a <strong>per-second basis</strong> for each cluster. The total amount is rounded to the nearest cent, at the end of each month.</li>
</ul>
</blockquote></li>
<li><p>From <a href="https://cloud.google.com/kubernetes-engine/pricing#pricing_for_worker_nodes" rel="noreferrer">Pricing for Worker Nodes</a>:</p>
<blockquote>
<p>GKE uses Compute Engine instances for <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#nodes" rel="noreferrer">worker nodes in the cluster</a>. You are billed for each of those instances according to <a href="https://cloud.google.com/compute/pricing" rel="noreferrer">Compute Engine's pricing</a>, <strong>until the nodes are deleted</strong>. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.</p>
</blockquote></li>
<li><p>Enters, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="noreferrer">Cluster Autoscaler</a>:</p>
<blockquote>
<p>automatically resize your GKE cluster’s node pools based on the demands of your workloads. When demand is high, cluster autoscaler adds nodes to the node pool. When demand is low, cluster autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs.</p>
</blockquote></li>
</ul>
<hr>
<ul>
<li>Cluster Autoscaler cannot scale the entire cluster to zero, at least one node must always be available in the cluster to run system pods. </li>
<li><p>Since you already have a persistent workload, this wont be a problem, what we will do is create a new <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools" rel="noreferrer">node pool</a>:</p>
<blockquote>
<p>A node pool is a group of nodes within a cluster that all have the same configuration. Every cluster has at least one <em>default</em> node pool, but you can add other node pools as needed.</p>
</blockquote></li>
<li><p>For this example I'll create two node pools:</p>
<ul>
<li>A default node pool with a fixed size of one node with a small instance size (emulating the cluster you already have).</li>
<li>A second node pool with more compute power to run the jobs (I'll call it power-pool).
<ul>
<li>Choose the machine type with the power you need to run your AI Jobs, for this example I'll create a <code>n1-standard-8</code>.</li>
<li>This power-pool will have autoscaling set to allow max 4 nodes, minimum 0 nodes.</li>
<li>If you would like to add GPUs you can check this great guide: <a href="https://medium.com/google-cloud/scale-your-kubernetes-cluster-to-almost-zero-with-gke-autoscaler-9c78051cbf40" rel="noreferrer">Scale to almost zero + GPUs</a>.</li>
</ul></li>
</ul></li>
</ul>
<p><strong>Taints and Tolerations:</strong></p>
<ul>
<li>Only the jobs related to the AI workload will run on the power-pool, for that use a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer">node selector</a> in the job pods to make sure they run in the power-pool nodes.</li>
<li>Set an <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="noreferrer">anti-affinity</a> rule to ensure that two of your training pods cannot be scheduled on the same node (optimizing the price-performance ratio; this is optional depending on your workload).</li>
<li>Add a <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">taint</a> to the power-pool to avoid other workloads (and system resources) to be scheduled on the autoscalable pool.</li>
<li>Add the tolerations to the AI Jobs to let them run on those nodes.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li>Create the Cluster with the persistent default-pool:</li>
</ul>
<pre><code>PROJECT_ID="YOUR_PROJECT_ID"
GCP_ZONE="CLUSTER_ZONE"
GKE_CLUSTER_NAME="CLUSTER_NAME"
AUTOSCALE_POOL="power-pool"
gcloud container clusters create ${GKE_CLUSTER_NAME} \
--machine-type="n1-standard-1" \
--num-nodes=1 \
--zone=${GCP_ZONE} \
--project=${PROJECT_ID}
</code></pre>
<ul>
<li>Create the auto-scale pool:</li>
</ul>
<pre><code>gcloud container node-pools create ${AUTOSCALE_POOL} \
--cluster=${GKE_CLUSTER_NAME} \
--machine-type=n1-standard-8 \
--node-labels=load=on-demand \
--node-taints=reserved-pool=true:NoSchedule \
--enable-autoscaling \
--min-nodes=0 \
--max-nodes=4 \
--zone=${GCP_ZONE} \
--project=${PROJECT_ID}
</code></pre>
<ul>
<li><p>Note about parameters:</p>
<ul>
<li><code>--node-labels=load=on-demand</code>: Add a label to the nodes in the power pool to allow selecting them in our AI job using a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer">node selector</a>.</li>
<li><code>--node-taints=reserved-pool=true:NoSchedule</code>: Add a <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">taint</a> to the nodes to prevent any other workload from accidentally being scheduled in this node pool.</li>
</ul></li>
<li><p>Here you can see the two pools we created, the static pool with 1 node and the autoscalable pool with 0-4 nodes. </p></li>
</ul>
<p><a href="https://i.stack.imgur.com/1krf5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1krf5.png" alt="enter image description here"></a></p>
<p>Since we don't have workload running on the autoscalable node-pool, it shows 0 nodes running (and with no charge while there is no node in execution).</p>
<ul>
<li>Now we'll create a job that creates 4 parallel pods that run for 5 minutes.
<ul>
<li>This job will have the following parameters to differentiate from normal pods:</li>
<li><code>parallelism: 4</code>: to use all 4 nodes to enhance performance</li>
<li><code>nodeSelector.load: on-demand</code>: to assign to the nodes with that label.</li>
<li><code>podAntiAffinity</code>: to declare that we do not want two pods with the same label <code>app: greedy-job</code> running in the same node (optional).</li>
<li><code>tolerations:</code> to match the toleration to the taint that we attached to the nodes, so these pods are allowed to be scheduled in these nodes.</li>
</ul></li>
</ul>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: greedy-job
spec:
parallelism: 4
template:
metadata:
name: greedy-job
labels:
app: greedy-app
spec:
containers:
- name: busybox
image: busybox
args:
- sleep
- "300"
nodeSelector:
load: on-demand
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- greedy-app
topologyKey: "kubernetes.io/hostname"
tolerations:
- key: reserved-pool
operator: Equal
value: "true"
effect: NoSchedule
restartPolicy: OnFailure
</code></pre>
<ul>
<li>Now that our cluster is in standby we will use the job yaml we just created (I'll call it <code>greedyjob.yaml</code>). This job will run four processes that will run in parallel and that will complete after about 5 minutes.</li>
</ul>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 42m v1.14.10-gke.27
$ kubectl get pods
No resources found in default namespace.
$ kubectl apply -f greedyjob.yaml
job.batch/greedy-job created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 11s
greedy-job-72j8r 0/1 Pending 0 11s
greedy-job-9dfdt 0/1 Pending 0 11s
greedy-job-wqct9 0/1 Pending 0 11s
</code></pre>
<ul>
<li>Our job was applied, but is in pending, let's see what's going on in those pods:</li>
</ul>
<pre><code>$ kubectl describe pod greedy-job-2xbvx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 28s (x2 over 28s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
Normal TriggeredScaleUp 23s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 0->1 (max: 4)}]
</code></pre>
<ul>
<li>The pod can't be scheduled on the current node due to the rules we defined, this triggers a Scale Up routine on our power-pool. This is a very dynamic process, after 90 seconds the first node is up and running:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 93s
greedy-job-72j8r 0/1 ContainerCreating 0 93s
greedy-job-9dfdt 0/1 Pending 0 93s
greedy-job-wqct9 0/1 Pending 0 93s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 44m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 11s v1.14.10-gke.27
</code></pre>
<ul>
<li>Since we set pod anti-affinity rules, the second pod can't be scheduled on the node that was brought up and triggers the next scale up, take a look at the events on the second pod:</li>
</ul>
<pre><code>$ k describe pod greedy-job-2xbvx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal TriggeredScaleUp 2m45s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 0->1 (max: 4)}]
Warning FailedScheduling 93s (x3 over 2m50s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
Warning FailedScheduling 79s (x3 over 83s) default-scheduler 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate.
Normal TriggeredScaleUp 62s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 1->2 (max: 4)}]
Warning FailedScheduling 3s (x3 over 68s) default-scheduler 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
</code></pre>
<ul>
<li>The same process repeats until all requirements are satisfied:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 3m39s
greedy-job-72j8r 1/1 Running 0 3m39s
greedy-job-9dfdt 0/1 Pending 0 3m39s
greedy-job-wqct9 1/1 Running 0 3m39s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 46m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 2m16s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 28s v1.14.10-gke.27
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 5m19s
greedy-job-72j8r 1/1 Running 0 5m19s
greedy-job-9dfdt 1/1 Running 0 5m19s
greedy-job-wqct9 1/1 Running 0 5m19s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 48m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 63s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 4m8s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 2m20s v1.14.10-gke.27
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 1/1 Running 0 6m12s
greedy-job-72j8r 1/1 Running 0 6m12s
greedy-job-9dfdt 1/1 Running 0 6m12s
greedy-job-wqct9 1/1 Running 0 6m12s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 48m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 113s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 26s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 4m58s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 3m10s v1.14.10-gke.27
</code></pre>
<p><a href="https://i.stack.imgur.com/CRXUU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CRXUU.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/y7Bs1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/y7Bs1.png" alt="enter image description here"></a>
Here we can see that all nodes are now up and running (thus, being billed by second)</p>
<ul>
<li>Now all jobs are running, after a few minutes the jobs complete their tasks:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 1/1 Running 0 7m22s
greedy-job-72j8r 0/1 Completed 0 7m22s
greedy-job-9dfdt 1/1 Running 0 7m22s
greedy-job-wqct9 1/1 Running 0 7m22s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Completed 0 11m
greedy-job-72j8r 0/1 Completed 0 11m
greedy-job-9dfdt 0/1 Completed 0 11m
greedy-job-wqct9 0/1 Completed 0 11m
</code></pre>
<ul>
<li>Once the task is completed, the autoscaler starts downsizing the cluster.</li>
<li>You can learn more about the rules for this process here: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="noreferrer">GKE Cluster AutoScaler</a></li>
</ul>
<pre><code>$ while true; do kubectl get nodes ; sleep 60; done
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 54m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 7m26s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 5m59s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 10m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 8m43s v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 62m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 15m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 14m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 18m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q NotReady <none> 16m v1.14.10-gke.27
</code></pre>
<ul>
<li>Once conditions are met, the autoscaler flags the nodes as <code>NotReady</code> and starts removing them: </li>
</ul>
<pre><code>NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 64m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 NotReady <none> 17m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 16m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 20m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 65m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 NotReady <none> 18m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 17m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw NotReady <none> 21m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 66m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 18m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 67m v1.14.10-gke.27
</code></pre>
<hr>
<ul>
<li>Here is the confirmation that the nodes were removed from GKE and from the VMs (remember that every node is a Virtual Machine billed as Compute Engine):</li>
</ul>
<p>Compute Engine: (note that <code>gke-cluster-1-default-pool</code> is from another cluster, I added it to the screenshot to show you that there is no other node from cluster <code>gke-autoscale-to-zero</code> other than the default persistent one.)
<a href="https://i.stack.imgur.com/xSjiP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xSjiP.png" alt="enter image description here"></a></p>
<p>GKE:
<a href="https://i.stack.imgur.com/N47iH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N47iH.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Final Thoughts:</strong></p>
<blockquote>
<p>When scaling down, cluster autoscaler respects scheduling and eviction rules set on Pods. These restrictions can prevent a node from being deleted by the autoscaler. A node's deletion could be prevented if it contains a Pod that meets certain conditions (for example, a Pod that uses local storage or is not backed by a controller).
An application's <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work" rel="noreferrer">PodDisruptionBudget</a> can also prevent autoscaling; if deleting nodes would cause the budget to be exceeded, the cluster does not scale down.</p>
</blockquote>
<p>You can note that the process is really fast, in our example it took around 90 seconds to upscale a node and 5 minutes to finish downscaling a standby node, providing a HUGE improvement in your billing.</p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms" rel="noreferrer">Preemptible VMs</a> can reduce even further your billing, but you will have to consider the kind of workload you are running:</li>
</ul>
<blockquote>
<p><strong>Preemptible VMs</strong> are Compute Engine <a href="https://cloud.google.com/compute/docs/instances" rel="noreferrer">VM instances</a> that last a maximum of 24 hours and provide no availability guarantees. Preemptible VMs are <a href="https://cloud.google.com/compute/pricing" rel="noreferrer">priced lower</a> than standard Compute Engine VMs and offer the same <a href="https://cloud.google.com/compute/docs/machine-types" rel="noreferrer">machine types</a> and options.</p>
</blockquote>
<p>I know you are still considering the best architecture for your app.</p>
<p>Using <a href="https://cloud.google.com/appengine" rel="noreferrer">APP Engine</a> and <a href="https://cloud.google.com/ai-platform" rel="noreferrer">IA Platform</a> are optimal solutions as well, but since you are currently running your workload on GKE I wanted to show you an example as requested.</p>
<p>If you have any further questions let me know in the comments.</p>
| Will R.O.F. |
<p>I have my app running on EKS, which is using the istio-ingressgateway service as the load balancer, plus Knative Serving. I have added an ACM certificate to my ELB, but after patching the service with</p>
<pre><code>metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xx-xxxx-1:1234567890:certificate/xxxxxx-xxx-dddd-xxxx-xxxxxxxx"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
</code></pre>
<p>My domain does not open over HTTPS but works fine over HTTP, giving this error on HTTPS:</p>
<pre><code>< HTTP/1.1 408 REQUEST_TIMEOUT
HTTP/1.1 408 REQUEST_TIMEOUT
< Content-Length:0
Content-Length:0
< Connection: Close
Connection: Close
</code></pre>
| Akash Verma | <p>I hope your load balancer forwards the traffic from 443 to the backend target port (3190 in the case of Istio). Check your Istio Gateway file to see whether port 443 is mapped to the targets.</p>
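<p>As an illustration only (the names and hosts below are assumptions, not taken from the question), a Gateway that accepts the traffic the ELB forwards could look like this. Since ACM terminates TLS at the ELB and the backend protocol is plain http, the server entry the ELB ends up hitting can listen with protocol HTTP:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway            # hypothetical name
spec:
  selector:
    istio: ingressgateway     # binds to the istio-ingressgateway pods
  servers:
  - port:
      number: 80              # the port the ELB's 443 listener should be forwarded to
      name: http
      protocol: HTTP
    hosts:
    - "*"
</code></pre>
<p>The key check is that the ELB's 443 listener forwards to a port on the istio-ingressgateway Service that is actually backed by a server entry in your Gateway; otherwise nothing is listening for that traffic and requests can time out, as seen in the question.</p>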
| vijaykumar y |
<p>Bear in mind that I'm new to Kubernetes.</p>
<p>I'm trying to integrate our existing K8s cluster with GitLab. I have added the cluster to GitLab and I can see projects are fetched. However, under the Health tab I see that I need to install Prometheus.
<a href="https://i.stack.imgur.com/C0FEw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C0FEw.png" alt="enter image description here" /></a></p>
<p>After trying to install I get
<a href="https://i.stack.imgur.com/CAq5e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CAq5e.png" alt="enter image description here" /></a></p>
<p>On the cluster this is the error I get</p>
<pre><code>[user]$ kubectl describe pvc prometheus-prometheus-server -ngitlab-managed-apps
Name: prometheus-prometheus-server
Namespace: gitlab-managed-apps
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-9.5.2
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: prometheus-prometheus-server-78bdf8f5b7-dkctg
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 6s (x2 over 6s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>I tried both specifying a storage class and adding a persistent volume, to no avail; the error remains the same. I can't understand why the volume is not claimed.</p>
<p>This is how the added volume looks like</p>
<pre><code>[user]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
prometheus-prometheus-server 2Gi RWX Retain Available manual 17m
</code></pre>
<pre><code>kubectl describe pv prometheus-prometheus-server
Name: prometheus-prometheus-server
Labels: app=prometheus
chart=prometheus-9.5.2
component=server
heritage=Tiller
release=prometheus
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"app":"prometheus","chart":"prometheus-9.5.2","compone...
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /var/prometheus-server
HostPathType:
Events: <none>
</code></pre>
| CyberProdigy | <p>I'm not quite sure where I found the exact PVC definition for the GitLab-managed Prometheus, but I am sure that the PVC wants to claim a PV with a capacity of at least 8Gi and an access mode of <code>RWO</code>. You need to create a PersistentVolume that meets those requirements (currently you provide 2Gi and RWX).</p>
<p>As far as I have experienced, a PVC requesting RWO does not get bound to a PV that only offers RWX.</p>
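<p>As a sketch only (assuming the claim really asks for 8Gi / ReadWriteOnce and has no storage class set), a matching PV could look like the one below. Note that <code>storageClassName</code> must match as well: your current PV uses <code>manual</code>, which a PVC without a storage class will not bind to.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-prometheus-server
spec:
  capacity:
    storage: 8Gi                    # at least the size the PVC requests
  accessModes:
    - ReadWriteOnce                 # must include the mode the PVC requests
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""              # empty, to match a PVC with no storage class
  hostPath:                         # hostPath only as a placeholder backend for testing
    path: /var/prometheus-server
</code></pre>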
| Oliver |
<p>I have tested the app using minikube locally and it works. When I use the same Doeckerfile with deploymnt.yml, the pod returns to Error state with the below reason</p>
<p>Error: Cannot find module '/usr/src/app/server.js'</p>
<p>Dockerfile:</p>
<pre><code>FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>Deployment.yml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-app-dep
labels:
app: nodejs-app
spec:
replicas: 1
selector:
matchLabels:
app: nodejs-app
template:
metadata:
labels:
app: nodejs-app
spec:
serviceAccountName: opp-sa
imagePullSecrets:
- name: xxx
containers:
- name: nodejs-app
image: registry.xxxx.net/k8s_app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
</code></pre>
<p>Assuming it could be a problem with "node_modules", I ran "ls" on the WORKDIR inside the Dockerfile and it does show "node_modules". Does anyone know what else to check to resolve this issue?</p>
| techPM | <ul>
<li>Since I can't give you this level of suggestions on a comment I'm writing you a fully working example so you can compare to yours and check if there is something different.</li>
</ul>
<p><strong>Sources:</strong></p>
<ul>
<li>Your Dockerfile:</li>
</ul>
<pre><code>FROM node:13-alpine
WORKDIR /api
COPY package*.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
</code></pre>
<ul>
<li>Sample <code>package.json</code>:</li>
</ul>
<pre><code>{
"name": "docker_web_app",
"version": "1.0.0",
"description": "Node.js on Docker",
"author": "First Last <[email protected]>",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"dependencies": {
"express": "^4.16.1"
}
}
</code></pre>
<ul>
<li>sample <code>server.js</code>:</li>
</ul>
<pre><code>'use strict';
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
res.send('Hello World');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
</code></pre>
<ul>
<li>Build image:</li>
</ul>
<pre><code>$ ls
Dockerfile package.json server.js
$ docker build -t k8s_app .
...
Successfully built 2dfbfe9f6a2f
Successfully tagged k8s_app:latest
$ docker images k8s_app
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s_app latest 2dfbfe9f6a2f 4 minutes ago 118MB
</code></pre>
<ul>
<li>Your deployment sample + service for easy access (called <code>nodejs-app.yaml</code>):</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-app-dep
labels:
app: nodejs-app
spec:
replicas: 1
selector:
matchLabels:
app: nodejs-app
template:
metadata:
labels:
app: nodejs-app
spec:
containers:
- name: web-app
image: k8s_app
imagePullPolicy: Never
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: web-app-svc
spec:
type: NodePort
selector:
app: nodejs-app
ports:
- port: 8080
targetPort: 8080
</code></pre>
<p><strong>Note:</strong> I'm using the minikube docker registry for this example, that's why <code>imagePullPolicy: Never</code> is set.</p>
<hr>
<ul>
<li>Now I'll deploy it:</li>
</ul>
<pre><code>$ kubectl apply -f nodejs-app.yaml
deployment.apps/nodejs-app-dep created
service/web-app-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-app-dep-5d75f54c7d-mfw8x 1/1 Running 0 3s
</code></pre>
<ul>
<li>Whenever you need to troubleshoot inside a pod you can use <code>kubectl exec -it <pod_name> -- /bin/sh</code> (or <code>/bin/bash</code> depending on the base image.)</li>
</ul>
<pre><code>$ kubectl exec -it nodejs-app-dep-5d75f54c7d-mfw8x -- /bin/sh
/api # ls
Dockerfile node_modules package-lock.json package.json server.js
</code></pre>
<p>The pod is running and the files are in the <code>WORKDIR</code> folder as stated in the <code>Dockerfile</code>.</p>
<ul>
<li>Finally let's test accessing from outside the cluster:</li>
</ul>
<pre><code>$ minikube service list
|-------------|-------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------|--------------|-------------------------|
| default | web-app-svc | 8080 | http://172.17.0.2:31446 |
|-------------|-------------|--------------|-------------------------|
$ curl -i http://172.17.0.2:31446
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 11
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
Date: Thu, 14 May 2020 18:49:40 GMT
Connection: keep-alive
Hello World$
</code></pre>
<p>The Hello World is being served as desired.</p>
<p>To Summarize:</p>
<ol>
<li>I built the Docker image inside <code>minikube ssh</code> so it is cached.</li>
<li>Created the manifest containing the deployment pointing to the image, and added the service part to allow external access using NodePort.</li>
<li><code>NodePort</code> routes all traffic to the Minikube IP on the port assigned to the service (i.e. 31446) and delivers it to the pods matching the selector, listening on port 8080.</li>
</ol>
<hr>
<p><strong>A few pointers for troubleshooting:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe" rel="nofollow noreferrer"><code>kubectl describe</code></a> <code>pod <pod_name></code>: provides precious information when the pod status is in any kind of error.</li>
<li><code>kubectl exec</code> is great for troubleshooting inside the container as it's running; it's pretty similar to the <code>docker exec</code> command.</li>
<li>Review your code files to ensure there is no baked path in it.</li>
<li>Try using <code>WORKDIR /usr/src/app</code> instead of <code>/api</code> and see if you get a different result.</li>
<li>Try using a <a href="https://nodejs.org/fr/docs/guides/nodejs-docker-webapp/#dockerignore-file" rel="nofollow noreferrer"><code>.dockerignore</code></a> file with <code>node_modules</code> on it's content.</li>
</ul>
<p>Try out and let me know in the comments if you need further help</p>
| Will R.O.F. |
<p>I have the following Dockerfile that I have set up to use a new user rather than root for my nginx server. The nginx server is built upon a Red Hat UBI image.
The image builds fine; however, when I run the container I get the following error: nginx: [emerg] open() "/run/nginx.pid" failed (13: Permission denied)</p>
<p>Below is my dockerfile.</p>
<pre><code>USER root
RUN microdnf --setopt=tsflags=nodocs install -y nginx procps shadow-utils net-tools ca-certificates dirmngr gnupg wget vim\
&& microdnf clean all \
&& rpm -q procps-ng
ENV NGINX_USER="api-gatway" \
NGINXR_UID="8987" \
NGINX_GROUP="api-gatway" \
NGINX_GID="8987"
RUN set -ex; \
groupadd -r --gid "$NGINX_GID" "$NGINX_GROUP"; \
useradd -r --uid "$NGINXR_UID" --gid "$NGINX_GID" "$NGINX_USER"
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/lib/nginx/tmp /var/log/nginx \
&& chown -R api-gatway:api-gatway /var/lib/nginx /var/log/nginx \
&& chmod -R 755 /var/lib/nginx /var/log/nginx
EXPOSE 1080
USER api-gatway
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>When i build the image, it builds without any errors, but when i deploy on my K8 cluster using helm, it gives me the following errors.</p>
<pre><code>nginx: [emerg] open() "/run/nginx.pid" failed (13: Permission denied)
</code></pre>
<p>Here is my nginx.conf file that I have set up</p>
<pre><code>worker_processes 1;
error_log /tmp/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
server {
listen 1080;
server_name localhost 127.0.0.1;
access_log /tmp/access.log;
client_max_body_size 0;
set $allowOriginSite *;
proxy_pass_request_headers on;
proxy_pass_header Set-Cookie;
# External settings, do not remove
#ENV_ACCESS_LOG
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_set_header X-Forwarded-Proto $scheme;
location /search/ {
proxy_pass http://*******-svc:8983/***/;
}
location /auth/ {
proxy_pass http://********:8080;
}
location /mapbox {
rewrite ^/mapbox(.*)https://****$1 break;
}
}
}
</code></pre>
<p>How can I fix nginx: [emerg] open() "/var/run/nginx.pid" failed (13: Permission denied) and what have i done wrong in my configurations?</p>
| Bilal Yousaf | <p>UPDATE</p>
<p>In order to fix my "/var/run/nginx.pid" permission denied error.</p>
<p>I had to add nginx.pid permission errors inside my dockerfile for the new user to work.</p>
<p>Below are the changes i made in my dockerfile</p>
<pre><code>RUN touch /run/nginx.pid \
&& chown -R api-gatway:api-gatway /run/nginx.pid /cache/nginx
</code></pre>
| Bilal Yousaf |
<p>I have created a custom <strong>alpine</strong> image (alpine-audit) which includes a <em>jar</em> file in the <em><strong>/tmp</strong></em> directory. What I need is to use that alpine-audit image as the <strong>initContainers</strong> base image and copy that <em>jar</em> file that I've included, to a location where the Pod container can access.</p>
<p>My yaml file is like below</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
initContainers:
- name: install
image: my.private-artifactory.com/audit/alpine-audit:0.1.0
command: ['cp', '/tmp/auditlogger-1.0.0.jar', 'auditlogger-1.0.0.jar']
volumeMounts:
- name: workdir
mountPath: "/tmp"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
<p>I think there is some mistake in the <code>command</code> line.
I assumed the <em>initContainer</em> copies that jar file to the <em>emptyDir</em>, and then the nginx-based container can access that jar file via the <em>mountPath</em>.
But it does not even create the Pod. Can someone point out where it has gone wrong?</p>
| AnujAroshA | <p>When you mount a volume to a directory in a pod, that directory contains only the content of the volume. If you mount the <code>emptyDir</code> at /tmp in your <code>alpine-audit:0.1.0</code> container, the /tmp directory becomes empty. I would mount that volume on some other dir, like <code>/app</code>, then copy the <code>.jar</code> from /tmp to /app.</p>
<p>The container is probably not starting because the initContainer fails running the command.</p>
<p>Try this configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
initContainers:
- name: install
image: my.private-artifactory.com/audit/alpine-audit:0.1.0
command: ['cp', '/tmp/auditlogger-1.0.0.jar', '/app/auditlogger-1.0.0.jar'] # <--- copy from /tmp to new mounted EmptyDir
volumeMounts:
- name: workdir
mountPath: "/app" # <-- change the mount path not to overwrite your .jar
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
| Cloudziu |
<p>I was also encountering the error message when running <code>helm version</code> or <code>helm list</code></p>
<pre><code>kubectl port-forward -n kube-system tiller-deploy-xxxxxxxxxxxxxxx 44134
error: error upgrading connection: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-xxxxxxxxxxx"?
</code></pre>
<p>The root issue appears to be related to the GKE port-forwarding. Is the SSH key configurable anywhere? I can see this key being added to my metadata, but it is not part of the metadata for the GKE nodes.</p>
| turnupthechill | <ul>
<li><a href="https://github.com/helm/helm/issues/4286#issuecomment-401427287" rel="nofollow noreferrer">Under the hood</a> helm is initiating a short-lived <code>kubectl port-forward</code> to tiller.</li>
</ul>
<p>If it's not working, your issue is with that, not tiller:</p>
<ul>
<li><p>Kubectl port-forward relies on the cluster's <strong>master being able to talk to the nodes</strong> in the cluster. However, because the master isn't in the same Compute Engine network as your cluster's nodes, we rely on <strong>SSH tunnels</strong> to enable secure communication.</p></li>
<li><p>GKE saves an SSH public key file in your Compute Engine project metadata. All Compute Engine VMs using Google-provided images regularly check their project's common metadata and their instance's metadata for SSH keys to add to the VM's list of authorized users. GKE also adds a firewall rule to your Compute Engine network allowing SSH access from the master's IP address to each node in the cluster.</p></li>
</ul>
<p>If any of the above <code>kubectl</code> commands don't run, it's likely that the master is unable to open SSH tunnels with the nodes. Check for these potential causes:</p>
<ol>
<li><strong>The cluster doesn't have any nodes:</strong><br>
If you've scaled down the number of nodes in your cluster to zero, SSH tunnels won't work. </li>
</ol>
<p>To fix it, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-container-cluster" rel="nofollow noreferrer">resize your cluster</a> to have at least one node.</p>
<ol start="2">
<li><strong>Pods in the cluster have gotten stuck in a terminating state</strong> and have prevented nodes that no longer exist from being removed from the cluster:<br>
This is an issue that should only affect Kubernetes version 1.1, but could be caused by repeated resizing of the cluster.</li>
</ol>
<p>To fix it, <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete/" rel="nofollow noreferrer">delete the Pods</a> that have been in a terminating state for more than a few minutes. The old nodes are then removed from the master's API and replaced by the new nodes.</p>
<ol start="3">
<li><p>Your network's firewall rules don't allow for SSH access to the master.</p>
<p>All Compute Engine networks are created with a firewall rule called "default-allow-ssh" that allows SSH access from all IP addresses (requiring a valid private key, of course). GKE also inserts an SSH rule for each cluster of the form <code>gke-cluster-name-random-characters-ssh</code> that allows SSH access specifically from the cluster's master IP to the cluster's nodes. If neither of these rules exists, then the master will be unable to open SSH tunnels.</p></li>
</ol>
<p>To fix it, <a href="https://cloud.google.com/compute/docs/vpc/using-firewalls" rel="nofollow noreferrer">re-add a firewall rule</a> allowing access to VMs with the tag that's on all the cluster's nodes from the master's IP address.</p>
<ol start="4">
<li><p>Your project's common metadata entry for "ssh-keys" is full.</p>
<p>If the project's metadata entry named "ssh-keys" is close to the 32KiB size limit, then GKE isn't able to add its own SSH key to enable it to open SSH tunnels. You can see your project's metadata by running the following command:</p>
<pre><code>gcloud compute project-info describe [--project=PROJECT]
</code></pre>
<p>And then check the length of the list of ssh-keys.</p></li>
</ol>
<p>To fix it, <a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#project-wide" rel="nofollow noreferrer">delete some of the SSH keys</a> that are no longer needed.</p>
<ol start="5">
<li><p>You have set a metadata field with the key "ssh-keys" on the VMs in the cluster.</p>
<p>The node agent on VMs prefers per-instance ssh-keys to project-wide SSH keys, so if you've set any SSH keys specifically on the cluster's nodes, then the master's SSH key in the project metadata won't be respected by the nodes. To check, run <code>gcloud compute instances describe <VM-name></code> and look for an "ssh-keys" field in the metadata.</p></li>
</ol>
<p>To fix it, <a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#instance-only" rel="nofollow noreferrer">delete the per-instance SSH keys</a> from the instance metadata.</p>
<p>You can also find this troubleshooting steps at <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting" rel="nofollow noreferrer">GCP Kubernetes Engine Troubleshooting kubectl</a></p>
<hr>
<p><strong>Note:</strong> As a side note I also suggest trying <a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">Helm 3</a>:</p>
<p>It does not need Tiller Installation anymore, has a lot of new features and is supported on GKE.</p>
| Will R.O.F. |
<p>After spending a couple of hours I found that a relative path is not supported as a mount path in Kubernetes. I found the reference here: <a href="https://github.com/kubernetes/kubernetes/pull/48815/files" rel="nofollow noreferrer">mountPath should be absolute</a>. If that is the case, why doesn't it have the capability to use a relative path? Could anyone please explain a bit deeper?</p>
<pre class="lang-yaml prettyprint-override"><code> Example code:
apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
hostPath:
# directory location on host
# path: "./code" # this is not supporting
path: "/var/www/html/kubernetes/code" # this is supporting
# this field is optional
type: DirectoryOrCreate
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
<p>Under the <code>code</code> directory in the above code example, I just have an <code>index.html</code> page</p>
<p><strong>Screenshot of the project structure:</strong></p>
<p><a href="https://i.stack.imgur.com/OOTb9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OOTb9.png" alt="enter image description here"></a></p>
<p>If I use <code>path: "./code"</code> then the error shows like this: </p>
<pre><code>Error response from daemon: create ./code: "./code" includes invalid
characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
If you intended to pass a host directory, use absolute path.
</code></pre>
<p>Thanks in advance!! </p>
| Bablu Ahmed | <p><strong>I believe the source of your confusion is that you are running a single node Cluster</strong>, like Minikube.</p>
<ul>
<li>The process chain (roughly summarized) is:</li>
</ul>
<p><em>Kubectl > Kube-API(master) > Kubelet agent (node) > Pod creation (as specified on the yaml manifest).</em></p>
<p>In a single node cluster all these agents are on the same computer, that's why the files in <code>/var/www/html/kubernetes/code</code> were mounted to the pod.</p>
<ul>
<li>I'll clarify it with this example:
<ul>
<li>You have a cluster with 3 nodes.</li>
<li>You manage your nodes remotely, with kubectl from your notebook.</li>
</ul>
</li>
</ul>
<p>When you use <code>hostPath</code> the files must exist on the <code>node</code>, not on your notebook, because it's not the <code>kubectl</code> on your computer that will trigger the creation of the pod and mount the files/directories.</p>
<p>This is the job of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer"><code>kubelet</code></a> agent of the node that will create the pod and apply it's manifest. This is why you need to specify the full path of the file/dir you want to mount.</p>
<hr />
<p>According to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">PersistentVolumes</a> documentation:</p>
<blockquote>
<p>Kubernetes supports <code>hostPath</code> <strong>for development and testing on a single-node cluster</strong>. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.</p>
<p><strong>In a production cluster, you would not use hostPath</strong>. Instead a cluster administrator would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#storageclass-v1-storage" rel="nofollow noreferrer">StorageClasses</a> to set up <a href="https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes" rel="nofollow noreferrer">dynamic provisioning</a>.</p>
</blockquote>
<p>Watch out when using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> type, because:</p>
<ul>
<li>Pods with identical configuration (such as created from a podTemplate) may behave differently on different nodes due to different files on the nodes.</li>
<li>when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a <code>hostPath</code>.</li>
<li>the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a <a href="https://kubernetes.io/docs/user-guide/security-context" rel="nofollow noreferrer">privileged Container</a> or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</li>
</ul>
<p>If you have any question let me know in the comments.</p>
| Will R.O.F. |
<p>I would like to set the value of <code>terminationMessagePolicy</code> to <code>FallbackToLogsOnError</code> by default for all my pods.</p>
<p>Is there any way to do that?</p>
<p>I am running Kubernetes 1.21.</p>
| ITChap | <p><code>terminationMessagePolicy</code> is a field in the container spec; currently, besides setting it per container in your spec, there is no cluster-level setting that could change the default value (<code>File</code>).</p>
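<p>As an illustration (not from the original answer), setting it per container looks like this; the pod and image names are just placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    terminationMessagePolicy: FallbackToLogsOnError
</code></pre>
<p>For a Deployment, the field goes in the same place under each entry of <code>spec.template.spec.containers</code>.</p>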
| gohm'c |
<p>I'm trying to create a single control-plane cluster with kubeadm on 3 bare-metal nodes (1 master and 2 workers) running Debian 10 with Docker as the container runtime. Each node has an external IP and an internal IP.
I want the cluster to run on the internal network while still being accessible from the Internet.
I used this command for that (please correct me if something is wrong):</p>
<pre><code>kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-cert-extra-sans={public_DNS_name},10.10.0.1 --pod-network-cidr=192.168.0.0/16
</code></pre>
<p>I got:</p>
<pre><code>kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-k8s-master-0.public.dns Ready master 16h v1.18.2 10.10.0.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-8-amd64 docker://19.3.8
</code></pre>
<p>Init phase complete successfully and the cluster is accessible from the Internet. All pods are up and running except coredns that should be running after networking will be applied.</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
</code></pre>
<p>After networking applied, coredns pods still not ready:</p>
<pre><code>kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-75d56dfc47-g8g9g 0/1 CrashLoopBackOff 192 16h
kube-system calico-node-22gtx 1/1 Running 0 16h
kube-system coredns-66bff467f8-87vd8 0/1 Running 0 16h
kube-system coredns-66bff467f8-mv8d9 0/1 Running 0 16h
kube-system etcd-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-apiserver-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-controller-manager-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-proxy-lp6b8 1/1 Running 0 16h
kube-system kube-scheduler-dev-k8s-master-0 1/1 Running 0 16h
</code></pre>
<p>Some logs from failed pods:</p>
<pre><code>kubectl -n kube-system logs calico-kube-controllers-75d56dfc47-g8g9g
2020-04-22 08:24:55.853 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2020-04-22 08:24:55.855 [INFO][1] k8s.go 228: Using Calico IPAM
W0422 08:24:55.855525 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-04-22 08:24:55.856 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-04-22 08:25:05.857 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-04-22 08:25:05.857 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
</code></pre>
<p>coredns:</p>
<pre><code>[INFO] plugin/ready: Still waiting on: "kubernetes"
I0422 08:29:12.275344 1 trace.go:116] Trace[1050055850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.274382393 +0000 UTC m=+59491.429700922) (total time: 30.000897581s):
Trace[1050055850]: [30.000897581s] [30.000897581s] END
E0422 08:29:12.275388 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.276163 1 trace.go:116] Trace[188478428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.275499997 +0000 UTC m=+59491.430818380) (total time: 30.000606394s):
Trace[188478428]: [30.000606394s] [30.000606394s] END
E0422 08:29:12.276198 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.277424 1 trace.go:116] Trace[16697023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.276675998 +0000 UTC m=+59491.431994406) (total time: 30.000689778s):
Trace[16697023]: [30.000689778s] [30.000689778s] END
E0422 08:29:12.277452 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
</code></pre>
<p>Any thoughts what's wrong?</p>
| fish | <p>This answer is to call attention to @florin's suggestion:</p>
<blockquote>
<p>I've seen a similar behavior when I had multiple public interfaces on the node and calico selected the wrong one.</p>
<p>What I did is to set <strong>IP_AUTODETECTION_METHOD</strong> in the calico config.</p>
</blockquote>
<ul>
<li>From <a href="https://docs.projectcalico.org/reference/node/configuration" rel="nofollow noreferrer">Calico Configuration</a> on <code>IP_AUTODETECTION_METHOD</code>:</li>
</ul>
<blockquote>
<p>The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See IP Autodetection methods for details of the valid methods.</p>
</blockquote>
<p>Learn more Here: <a href="https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods" rel="nofollow noreferrer">https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods</a></p>
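<p>As a sketch only (not part of the original answer), the setting is typically applied as an environment variable on the <code>calico-node</code> DaemonSet; the interface name below is just a placeholder for whichever NIC carries your internal network:</p>
<pre><code># kubectl -n kube-system edit daemonset calico-node
# under the calico-node container env:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens4"   # or e.g. "can-reach=10.10.0.1"
</code></pre>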
| Will R.O.F. |
<p>We have a setup where we want to run 3 replicas of our image. Each replica will run on an independent node, with its corresponding pod inside it.
So to summarize, we will have 3 nodes, 3 separate JVMs and 3 corresponding pods.
Please provide the following details:</p>
<ol>
<li>Can we fix POD IP and hostName always?</li>
<li>Can the Node IP and hostname be same as machine IP and hostname?</li>
<li>Can the same Machine IP and hostname be made POD IP and hostname?</li>
</ol>
| Shreyas Holla P | <p><code>Can we fix POD IP and hostName always?</code></p>
<p>There is a <code>hostname</code> field in the Pod spec that you can use. Using a static IP for a Pod is possible if the CNI plugin that you use supports it. For example, Calico does <a href="https://projectcalico.docs.tigera.io/networking/use-specific-ip" rel="nofollow noreferrer">support</a> this use case. You need to check your CNI plugin's manual.</p>
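<p>A rough illustration only (the IP, names and image are placeholders; the annotation is Calico-specific and requires Calico IPAM, see the linked doc):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-0
  annotations:
    cni.projectcalico.org/ipAddrs: "[\"192.168.0.10\"]"
spec:
  hostname: app-0
  containers:
  - name: app
    image: my-app-image
</code></pre>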
<p><code>Can the Node IP and hostname be same as machine IP and hostname?</code></p>
<p>Yes.</p>
<p><code>Can the same Machine IP and hostname be made POD IP and hostname?</code></p>
<p>The Pod name/hostname is up to you to set, but the Pod IP is always allocated from the Pod CIDR, which is separate from the machine IP range.</p>
| gohm'c |
<p>We have a deployment with HPA (horizontal pod autoscaling) with maxReplicas set to 3. In our deployment.yaml we have a preStop hook defined, which calls an API exposed in the container, and a terminationGracePeriodSeconds of 60 seconds (the calls are internal to the cluster, so that should be sufficient).
Is it guaranteed that the preStop hook waits for the request to complete on all three pods?</p>
<pre><code> lifecycle:
    preStop:
      exec:
        command: [ "curl", "-XPOST", "http://$SERVICE_NAME:$PORT_NUMBER/exit-gracefully" ]
  terminationGracePeriodSeconds: 60
</code></pre>
| Reddy_73 | <p>No guaranteed as the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">official document</a> stated:</p>
<blockquote>
<p>The Pod's termination grace period countdown begins before the PreStop
hook is executed, so regardless of the outcome of the handler, the
container will <strong>eventually terminate within the Pod's termination grace
period</strong>.</p>
</blockquote>
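<p>As an illustrative sketch only (endpoint, port and timing are placeholders): <code>terminationGracePeriodSeconds</code> is a pod-level field, and since its countdown runs in parallel with the hook, it should comfortably exceed the time the hook may need:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60   # pod-level field, not per container
  containers:
  - name: app
    image: my-app
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "curl -XPOST http://localhost:8080/exit-gracefully"]
</code></pre>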
| gohm'c |
<p>I am new to Kubernetes and I need to do some customizations using the APIs. My aim is to get pods from all zones of the cluster, or from specific zones. Using the Kubernetes API and Golang I am able to get the list of Pods and the list of Nodes, but I am unable to find any interface which will give me the list of Zones, or one which will give me the list of Pods within a zone.</p>
<p>Need guidance on how I can achieve this.</p>
| Rajjy | <p>You can get info about a node's Zone and Region by reading its <code>labels</code>. You can check out the well-known labels here: <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone</a>.</p>
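<p>For example, to see the zone and region labels on your nodes (older clusters may still use the <code>failure-domain.beta.kubernetes.io/*</code> label names):</p>
<pre><code>kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
</code></pre>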
<p>From this information you can build a picture of which nodes are placed in which zones. Next you can <code>get pods</code> and filter the output by a field selector.
I found an example on GH: <a href="https://github.com/kubernetes/client-go/issues/410" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/410</a></p>
<pre class="lang-golang prettyprint-override"><code>nodeName := "my-node"
pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{
FieldSelector: "spec.nodeName=" + nodeName,
})
</code></pre>
| Cloudziu |
<p>Or in simple words, what is syntax of the <code>kubectl set image</code>?</p>
<p>Note:
kubectl provides set image command but it often confuses developers. I hope this question helps =)</p>
| Muhammad Ali | <p>We can use this format to easily remember how this command works:</p>
<pre><code>kubectl set image deployment <deployment-name> <container-name>=<image-name>:<image-version>
</code></pre>
<p>For example:</p>
<pre><code>kubectl set image deployment frontend simple-webapp=kodekloud/webapp-color:v1
</code></pre>
<p>Here in the above example, <code>frontend</code> is the deployment name, <code>simple-webapp</code> is the name of the container, <code>kodekloud/webapp-color</code> is the name of the image and finally, <code>v1</code> is the image version.</p>
<p>P.S.
If this helps, do upvote!</p>
<p>Wanna connect, feel free to share an invite at <a href="https://www.linkedin.com/in/alihussainia" rel="nofollow noreferrer">Linkedin</a></p>
| Muhammad Ali |
<p>I have minikube running and I am trying to list the keys on my ETCD. </p>
<p>I downloaded the latest <code>etcdctl</code> client from github:<br>
<a href="https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz" rel="nofollow noreferrer">https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz</a> </p>
<p>I tried to run it with the certificates from <code>/home/myuser/.minikube/certs</code>: </p>
<pre><code>./etcdctl --ca-file /home/myuser/.minikube/certs/ca.pem
--key-file /home/myuser/.minikube/certs/key.pem
--cert-file /home/myuser/.minikube/certs/cert.pem
--endpoints=https://10.240.0.23:2379 get /
</code></pre>
<p>I received an error: </p>
<blockquote>
<p>Error: client: etcd cluster is unavailable or misconfigured; error
#0: x509: certificate signed by unknown authority </p>
<p>error #0: x509: certificate signed by unknown authority</p>
</blockquote>
<p>Did I use the correct certificates?</p>
<p>I tried different certificates, like this:</p>
<pre><code>./etcdctl --ca-file /var/lib/minikube/certs/ca.crt
--key-file /var/lib/minikube/certs/apiserver-etcd-client.key
--cert-file /var/lib/minikube/certs/apiserver-etcd-client.crt
--endpoints=https://10.240.0.23:2379 get /
</code></pre>
<p>I received the same error as before.</p>
<p>Any idea what the problem is?</p>
| E235 | <p>For minikube the correct path for the etcd certificates is /var/lib/minikube/certs/etcd/, so the command will look like this:</p>
<pre><code># kubectl -n kube-system exec -it etcd-minikube -- sh -c "ETCDCTL_API=3 ETCDCTL_CACERT=/var/lib/minikube/certs/etcd/ca.crt ETCDCTL_CERT=/var/lib/minikube/certs/etcd/server.crt ETCDCTL_KEY=/var/lib/minikube/certs/etcd/server.key etcdctl endpoint health"
</code></pre>
| user14495228 |
<p>I am creating a POD file with multiple containers. One is a webserver container and another is my PostgreSQL container. Here is my pod file named <code>simple.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-01-01T16:28:15Z"
labels:
app: eltask
name: eltask
spec:
containers:
- name: el_web
command:
- ./entrypoints/entrypoint.sh
env:
- name: PATH
value: /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: RUBY_MAJOR
value: "2.7"
- name: BUNDLE_SILENCE_ROOT_WARNING
value: "1"
- name: BUNDLE_APP_CONFIG
value: /usr/local/bundle
- name: LANG
value: C.UTF-8
- name: RUBY_VERSION
value: 2.7.2
- name: RUBY_DOWNLOAD_SHA256
value: 1b95ab193cc8f5b5e59d2686cb3d5dcf1ddf2a86cb6950e0b4bdaae5040ec0d6
- name: GEM_HOME
value: /usr/local/bundle
image: docker.io/hmtanbir/elearniotask
ports:
- containerPort: 3000
hostPort: 3000
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /app
- name: el_db
image: docker.io/library/postgres:10-alpine3.13
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: PG_MAJOR
value: "10"
- name: PG_VERSION
value: "10.17"
- name: PGDATA
value: /var/lib/postgresql/data
- name: LANG
value: en_US.utf8
- name: PG_SHA256
value: 5af28071606c9cd82212c19ba584657a9d240e1c4c2da28fc1f3998a2754b26c
- name: POSTGRES_PASSWORD
value: password
args:
- postgres
command:
- docker-entrypoint.sh
ports:
- containerPort: 5432
hostPort: 9876
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /
dnsConfig: {}
restartPolicy: Never
status: {}
</code></pre>
<p>I am running the webserver container with port mapping <code>3000:3000</code> and the DB container with port mapping <code>9876:5432</code>.
But when I run the command with Podman, <code>podman play kube simple.yaml</code>, the DB container is listening on <code>127.0.0.0:9876</code>, yet the webserver can't connect to the DB server.</p>
<p><a href="https://i.stack.imgur.com/q4U9y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q4U9y.png" alt="enter image description here" /></a></p>
<p>My webserver DB config:</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=9876
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>
<p>If I run the webserver without Podman, the server can connect to the DB using port <code>9876</code>.</p>
<p>So why can't the webserver connect to the database container when it is running through Podman?</p>
| HM Tanbir | <p>For your web container to connect to the PostgreSQL container within the pod, use <code>ELTASK_DATABASE_PORT=5432</code> instead of <code>ELTASK_DATABASE_PORT=9876</code>. Containers in the same pod share a network namespace, so they reach each other over <code>localhost</code> on the container port (5432), not on the host-mapped port (9876).</p>
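<p>With the values from the question, the adjusted config would then look like this:</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=5432
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>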
| gohm'c |
<h2>Background and Context</h2>
<p>I am working on a Terraform project that has an end goal of an EKS cluster with the following properties:</p>
<ol>
<li>Private to the outside internet</li>
<li>Accessible via a bastion host</li>
<li>Uses worker groups</li>
<li>Resources (deployments, cron jobs, etc) configurable via the Terraform Kubernetes module</li>
</ol>
<p>To accomplish this, I've modified the Terraform EKS example slightly (code at bottom of the question). The problem I am encountering is that after SSH-ing into the bastion, I cannot ping the cluster, and any commands like <code>kubectl get pods</code> time out after about 60 seconds.</p>
<p>Here are the facts/things I know to be true:</p>
<ol>
<li>I have (for the time being) switched the cluster to a public cluster for testing purposes. Previously when I had <code>cluster_endpoint_public_access</code> set to <code>false</code> the <code>terraform apply</code> command would not even complete as it could not access the <code>/healthz</code> endpoint on the cluster.</li>
<li>The Bastion configuration works in the sense that the user data runs successfully and installs <code>kubectl</code> and the kubeconfig file</li>
<li>I am able to SSH into the bastion via my static IP (that's the <code>var.company_vpn_ips</code> in the code)</li>
<li>It's entirely possible this is fully a networking problem and not an EKS/Terraform problem as my understanding of how the VPC and its security groups fit into this picture is not entirely mature.</li>
</ol>
<h2>Code</h2>
<p>Here is the VPC configuration:</p>
<pre><code>locals {
vpc_name = "my-vpc"
vpc_cidr = "10.0.0.0/16"
public_subnet_cidr = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
private_subnet_cidr = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
# The definition of the VPC to create
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.2.0"
name = local.vpc_name
cidr = local.vpc_cidr
azs = data.aws_availability_zones.available.names
private_subnets = local.private_subnet_cidr
public_subnets = local.public_subnet_cidr
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
data "aws_availability_zones" "available" {}
</code></pre>
<p>Then the security groups I create for the cluster:</p>
<pre><code>resource "aws_security_group" "ssh_sg" {
name_prefix = "ssh-sg"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"10.0.0.0/8",
]
}
}
resource "aws_security_group" "all_worker_mgmt" {
name_prefix = "all_worker_management"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
]
}
}
</code></pre>
<p>Here's the cluster configuration:</p>
<pre><code>locals {
cluster_version = "1.21"
}
# Create the EKS resource that will setup the EKS cluster
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
# The name of the cluster to create
cluster_name = var.cluster_name
# Disable public access to the cluster API endpoint
cluster_endpoint_public_access = true
# Enable private access to the cluster API endpoint
cluster_endpoint_private_access = true
# The version of the cluster to create
cluster_version = local.cluster_version
# The VPC ID to create the cluster in
vpc_id = var.vpc_id
# The subnets to add the cluster to
subnets = var.private_subnets
# Default information on the workers
workers_group_defaults = {
root_volume_type = "gp2"
}
worker_additional_security_group_ids = [var.all_worker_mgmt_id]
# Specify the worker groups
worker_groups = [
{
# The name of this worker group
name = "default-workers"
# The instance type for this worker group
instance_type = var.eks_worker_instance_type
# The number of instances to raise up
asg_desired_capacity = var.eks_num_workers
asg_max_size = var.eks_num_workers
asg_min_size = var.eks_num_workers
# The security group IDs for these instances
additional_security_group_ids = [var.ssh_sg_id]
}
]
}
data "aws_eks_cluster" "cluster" {
name = module.eks_cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks_cluster.cluster_id
}
output "worker_iam_role_name" {
value = module.eks_cluster.worker_iam_role_name
}
</code></pre>
<p>And the finally the bastion:</p>
<pre><code>locals {
ami = "ami-0f19d220602031aed" # Amazon Linux 2 AMI (us-east-2)
instance_type = "t3.small"
key_name = "bastion-kp"
}
resource "aws_iam_instance_profile" "bastion" {
name = "bastion"
role = var.role_name
}
resource "aws_instance" "bastion" {
ami = local.ami
instance_type = local.instance_type
key_name = local.key_name
associate_public_ip_address = true
subnet_id = var.public_subnet
iam_instance_profile = aws_iam_instance_profile.bastion.name
security_groups = [aws_security_group.bastion-sg.id]
tags = {
Name = "K8s Bastion"
}
lifecycle {
ignore_changes = all
}
user_data = <<EOF
#! /bin/bash
# Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
# Install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
# Install AWS
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
aws --version
# Install aws-iam-authenticator
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
aws-iam-authenticator help
# Add the kube config file
mkdir ~/.kube
echo "${var.kubectl_config}" >> ~/.kube/config
EOF
}
resource "aws_security_group" "bastion-sg" {
name = "bastion-sg"
vpc_id = var.vpc_id
}
resource "aws_security_group_rule" "sg-rule-ssh" {
security_group_id = aws_security_group.bastion-sg.id
from_port = 22
protocol = "tcp"
to_port = 22
type = "ingress"
cidr_blocks = var.company_vpn_ips
depends_on = [aws_security_group.bastion-sg]
}
resource "aws_security_group_rule" "sg-rule-egress" {
security_group_id = aws_security_group.bastion-sg.id
type = "egress"
from_port = 0
protocol = "all"
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
depends_on = [aws_security_group.bastion-sg]
}
</code></pre>
<h2>Ask</h2>
<p>The most pressing issue for me is finding a way to interact with the cluster via the bastion so that the other part of the Terraform code can run (the resources to spin up in the cluster itself). I am also hoping to understand how to set up a private cluster when it ends up being inaccessible to the <code>terraform apply</code> command. Thank you in advance for any help you can provide!</p>
| Jimmy McDermott | <p>Look at how your node group communicates with the control plane: you need to attach the same cluster security group to your bastion host so that it can reach the control plane as well. You can find the SG ID in the EKS console, under the Networking tab.</p>
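<p>A minimal sketch of how that could look in Terraform, assuming the <code>cluster_security_group_id</code> output exposed by the terraform-aws-modules/eks module (the exact output name depends on the module version, and the bastion variable is hypothetical):</p>
<pre><code># pass the EKS cluster security group into the bastion module
module "bastion" {
  source                    = "./modules/bastion"
  cluster_security_group_id = module.eks_cluster.cluster_security_group_id
  # ...other variables...
}

# in the bastion instance, attach it alongside the bastion SG
resource "aws_instance" "bastion" {
  # ...existing arguments...
  vpc_security_group_ids = [
    aws_security_group.bastion-sg.id,
    var.cluster_security_group_id,
  ]
}
</code></pre>
<p>Note that the original instance uses <code>security_groups</code>; for instances in a VPC, <code>vpc_security_group_ids</code> is the argument that takes security group IDs.</p>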
| gohm'c |
<p>I just updated my EKS cluster from 1.15 to 1.16 and I couldn't get the deployments in my namespaces up and running. When I do <code>kubectl get po</code> to list my pods, they're all stuck in the CrashLoopBackOff state. I tried describing one pod and this is what I get in the events section:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 56m (x8 over 72m) kubelet Pulling image "xxxxxxx.dkr.ecr.us-west-2.amazonaws.com/xxx-xxxx-xxxx:master.697.7af45fff8e0"
Warning BackOff 75s (x299 over 66m) kubelet Back-off restarting failed container
</code></pre>
<p>kubernetes version -</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-e1a842", GitCommit:"e1a8424098604fa0ad8dd7b314b18d979c5c54dc", GitTreeState:"clean", BuildDate:"2021-07-31T01:19:13Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| DevopsinAfrica | <p>So the problem was that I was trying to deploy x86 containers on an ARM node instance. Everything worked once I changed the launch template image for my node group.</p>
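<p>Not part of the original answer, but a quick way to check the architecture of your nodes against your images is the well-known <code>kubernetes.io/arch</code> node label:</p>
<pre><code>kubectl get nodes -L kubernetes.io/arch
</code></pre>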
| DevopsinAfrica |
<p>I am following this tutorial: <a href="https://kubernetes.io/blog/2019/07/23/get-started-with-kubernetes-using-python/" rel="nofollow noreferrer">https://kubernetes.io/blog/2019/07/23/get-started-with-kubernetes-using-python/</a> on mac osx</p>
<p>I have completed all of the steps</p>
<pre><code>hello-python-6c7b478cf5-hxfvb 1/1 Running 0 114s
hello-python-6c7b478cf5-rczp9 1/1 Running 0 114s
hello-python-6c7b478cf5-snww5 1/1 Running 0 114s
hello-python-6c7b478cf5-wr8gf 1/1 Running 0 114s
</code></pre>
<p>I cannot visit localhost:6000 on my browser. I get an error:</p>
<pre><code>The web page at http://localhost:6000/ might be temporarily down or it may have moved permanently to a new web address.
</code></pre>
<p>But I can curl:</p>
<pre><code>app git:(master) ✗ curl localhost:6000
Hello from Python!%
</code></pre>
<ul>
<li><p>Why is this happening?</p>
</li>
<li><p>How to fix it?</p>
</li>
</ul>
| nz_21 | <p>If you are running this demo application on minikube, then minikube doesn't support an external IP for LoadBalancer services. You can check the pending status with this command: <code>kubectl get svc -o wide</code>.</p>
<p>Resolution:
The LoadBalancer service gets a NodePort assigned too, so you can access the service via
<code>$ minikube service my-loadbalancer-service-name</code> to open it in the browser, or add the <code>--url</code> flag to print the service URL to the terminal. For example:
<code>$ minikube service hello-python-service --url</code>
will output the service URL.</p>
| Shashi Kumar |
<p>I have an <code>env</code> variable called <code>app_conf_path</code> which points to a <code>\location\file.yaml</code>, which in turn contains all the values required for the application to work. The application needs this <code>app_conf_path</code>, which holds the location of <code>file.yaml</code>, in order to run. How can I create a <code>configmap</code> for this type of setup? Right now I have that <code>file.yaml</code> in a <code>persistentvolume</code> and the <code>env</code> variable pointing to that mount location. I came to know about <code>configmaps</code> only recently. Any help on this would be appreciated.</p>
| doc_noob | <blockquote>
<p>I have a <code>env</code> variable called <code>app_conf_path</code> which points to a <code>\location\file.yaml</code> which in turn contains all the values required for the application to work.The application needs this <code>app_conf_path</code> which has the location of <code>file.yaml</code> to run the application. How can i create a <code>configmap</code> for this type of setup?</p>
</blockquote>
<p>I'll begin talking about the concepts of ConfigMaps:</p>
<ul>
<li>ConfigMap is a dictionary of configuration settings. It consists of key-value pairs of strings.</li>
<li>ConfigMaps are useful to keep your code separate from configuration.</li>
<li>You can generate a configmap <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-directories" rel="nofollow noreferrer">from directories</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-literal-values" rel="nofollow noreferrer">from literal values</a> or what we want: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">from a file</a>.</li>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="nofollow noreferrer">ConfigMap</a> can be treated like a volume in kubernetes:
<blockquote>
<ul>
<li>The data stored in a <code>ConfigMap</code> object can be referenced in a volume of type <code>configMap</code> and then consumed by containerized applications running in a Pod.</li>
<li>When referencing a <code>configMap</code> object, you can simply provide its name in the volume to reference it. You can also customize the path to use for a specific entry in the ConfigMap</li>
</ul>
</blockquote></li>
</ul>
<p><strong>Creating a ConfigMap From File:</strong></p>
<ul>
<li>To create a configmap you run the command:</li>
</ul>
<p><code>kubectl create configmap <CONFIGMAP_NAME> --from-file=/location/file.yaml</code></p>
<ul>
<li>You can also add more than one file to a single configmap, just repeat the <code>--from-file</code> argument, example:</li>
</ul>
<pre><code>kubectl create configmap <CONFIGMAP_NAME> \
--from-file=path/db.properties \
--from-file=path/ui.properties
</code></pre>
<hr>
<blockquote>
<p>I want to stop mounting the <code>persistentvolume</code> which has this <code>file.yaml</code> and the <code>file.yaml</code> is a simple <code>yaml</code> file with details of <code>dbconnectionstrings</code> and <code>paths</code> for other <code>apps</code></p>
</blockquote>
<p>From the concepts we saw above, your intention to stop having to mount the file to a PV to serve the config file can be fully realized using a <code>ConfigMap</code>. </p>
<ul>
<li>I'd like to suggest you <a href="https://theithollow.com/2019/02/20/kubernetes-configmaps/" rel="nofollow noreferrer">The ITHollow ConfigMap Example</a>. I was going to use it here but your app is already built with a function to look for the configuration file outside. I'll leave this link so you can see how you could use a ConfigMap to other apps that needs external configuration and are not hardcoded to look for it in a specific file.</li>
</ul>
<hr>
<p><strong>Reproducible Example:</strong></p>
<ul>
<li><p>This will be a example to show you how to achieve the portion your question requires.</p>
<ul>
<li>It will be a simple <code>ubuntu</code> pod which has a config file mounted in <code>/tmp/file.yaml</code> and that file path will be a Env variable called <code>app_conf_path</code>.</li>
</ul></li>
<li><p>First, I'll create a file called <code>file.yaml</code> and add 3 values:</p></li>
</ul>
<pre><code>$ cat file.yaml
key1: value1
key2: value2
key3: value3
</code></pre>
<p><strong>NOTE:</strong> <em>the name <code>file.yaml</code> is not very common, I'm using it to emulate your environment, usually we use something like <code>app.properties</code> and it does not require any previous structure, just all values in a <code>key:value</code> pair form, like in my example.</em> </p>
<ul>
<li>Now we will create the configmap called <code>app.config</code> from the file <code>file.yaml</code>. The file is on the same folder I'm running the command, thus I don't have to specify the full path:</li>
</ul>
<pre><code>$ kubectl create configmap app.config --from-file=file.yaml
configmap/app.config created
</code></pre>
<p><em>The filename becomes the reference inside the configmap and will be used later</em>.</p>
<ul>
<li>Let's see the configmap we created:</li>
</ul>
<pre><code>$ kubectl describe configmap app.config
Name: app.config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
file.yaml:
----
key1: value1
key2: value2
key3: value3
Events: <none>
</code></pre>
<ul>
<li>Now your goal is to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">add the configmap data to a volume</a>, and add the ENV variable that points <code>app_conf_path</code> to <code>/tmp/file.yaml</code>, here is the <code>app-deploy.yaml</code> for that:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: ubuntu
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 3000; done;" ]
volumeMounts:
- name: config-volume
mountPath: /tmp
env:
- name: app_conf_path
value: "/tmp/file.yaml"
volumes:
- name: config-volume
configMap:
name: app.config
</code></pre>
<p><strong>NOTE:</strong> This is a very interesting step. We create a <code>volume</code> using the <code>configmap</code> and we set the location desired to <code>mount</code> that <code>volume</code>. Each section of the <code>configmap</code> will be a file inside that folder. Since we created it from only 1 file, it's the only file that will be mounted. We also set the <code>ENV name</code> you need with the <code>value</code> as the path to the file. </p>
<ul>
<li>Now let's apply it and open a shell inside the pod with <code>kubectl exec -it <POD_NAME> -- /bin/bash</code> to see our result:</li>
</ul>
<pre><code>$ kubectl apply -f app-deploy.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-68b5b69fc8-xxqpw 1/1 Running 0 3s
$ kubectl exec -it my-app-68b5b69fc8-xxqpw -- /bin/bash
root@my-app-68b5b69fc8-xxqpw:/# printenv | grep app_conf_path
app_conf_path=/tmp/file.yaml
root@my-app-68b5b69fc8-xxqpw:/# cat $app_conf_path
key1: value1
key2: value2
key3: value3
</code></pre>
<hr>
<p>Now we reached the goal of your request.</p>
<p>Inside the pod there is a configuration file called <code>file.yaml</code> with the configuration settings we used to generate the config file.</p>
<p>You don't have to worry about creating and maintaining the volume separately.</p>
<p>If you still have any question about it let me know in the comments.</p>
| Will R.O.F. |
<p>I am new to kubernetes.
So, I created a few pods.
Then I deleted all pods using</p>
<p><code>kubectl delete pods --all</code></p>
<p>But the output of <code>df -h</code> still shows disk space consumed by kubernetes.</p>
<pre class="lang-sh prettyprint-override"><code>Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 19G 175G 10% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.2M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/babfe080e5ec18297a219e65f99d6156fbd8b8651950a63052606ffebd7a618a/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0%
</code></pre>
<p>What are these mounts shown in <code>df -h</code>? How do I free up this space?</p>
<p>EDIT:</p>
<p>I noticed that pods are restarting after I delete them.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mylab-airflow-redis-0 1/1 Running 0 33m
mylab-airflow-postgresql-0 1/1 Running 0 34m
mylab-postgresql-0 1/1 Running 0 34m
mylab-keyclo-0 1/1 Running 0 34m
mylab-keycloak-postgres-0 1/1 Running 0 34m
mylab-airflow-scheduler-788f7f4dd6-ppg6v 2/2 Running 0 34m
mylab-airflow-worker-0 2/2 Running 0 34m
mylab-airflow-flower-6d8585794d-s2jzd 1/1 Running 0 34m
mylab-airflow-webserver-859766684b-w9zcm 1/1 Running 0 34m
mylab-5f7d84fcbc-59mkf 1/1 Running 0 34m
</code></pre>
<p><strong>Edited</strong></p>
<p>So I deleted the deployments.</p>
<pre><code>kubectl delete deployment --all
</code></pre>
<p>Now, there are no deployments.</p>
<pre><code>$ kubectl get deployment
No resources found in default namespace.
</code></pre>
<p>Then after, I stopped the cluster.</p>
<pre><code>systemctl stop k3s
</code></pre>
<p><strong>Disk space is still not released.</strong></p>
<p>Output of latest disk usage.</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 35G 160G 18% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.5M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/78568d4850964c9c7b8ca5df11bf532a477492119813094631641132aadd23a0/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/14d87054e0c7a2a86ae64be70a79f94e2d193bc4739d97e261e85041c160f3bc/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/0971fe44fc6f0f5c9e0b8c1a0e3279c20b3bc574e03d12607644e1e7d427ff65/rootfs
tmpfs 1.6G 4.0K 1.6G 1% /run/user/1000
</code></pre>
<p>Output of <code>ctr containers ls</code></p>
<pre><code># ctr container list
CONTAINER IMAGE RUNTIME
</code></pre>
| Saurav Pathak | <p>There is mandatory data that has to be maintained while a cluster is running (e.g. the default service account token mounts). These are released when you shut down the cluster (e.g. <code>systemctl stop k3s</code>), not just when you delete pods.</p>
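<p>Not part of the original answer, but assuming k3s was installed with the standard install script, it also ships a cleanup helper that stops everything and unmounts the leftover kubelet/containerd paths:</p>
<pre><code># see which kubelet/containerd mounts are still present
mount | grep -E 'kubelet|containerd'

# stop all k3s containers and unmount their paths (installed by the k3s install script)
sudo /usr/local/bin/k3s-killall.sh
</code></pre>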
| gohm'c |
<p>I'm trying to deploy my Apollo Server application to my GKE cluster. However, when I visit the static IP for my site I receive a 502 Bad Gateway error. I was able to get my client to deploy properly in a similar fashion so I'm not sure what I'm doing wrong. My deployment logs seem to show that the server started properly. However my ingress indicates that my service is unhealthy since it seems to be failing the health check.</p>
<p>Here is my deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: <DEPLOYMENT_NAME>
labels:
app: <DEPLOYMENT_NAME>
spec:
replicas: 1
selector:
matchLabels:
app: <POD_NAME>
template:
metadata:
name: <POD_NAME>
labels:
app: <POD_NAME>
spec:
serviceAccountName: <SERVICE_ACCOUNT_NAME>
containers:
- name: <CONTAINER_NAME>
image: <MY_IMAGE>
imagePullPolicy: Always
ports:
- containerPort: <CONTAINER_PORT>
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.17
command:
- '/cloud_sql_proxy'
- '-instances=<MY_PROJECT>:<MY_DB_INSTANCE>=tcp:<MY_DB_PORT>'
securityContext:
runAsNonRoot: true
</code></pre>
<p>My service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: <MY_SERVICE_NAME>
labels:
app: <MY_SERVICE_NAME>
annotations:
cloud.google.com/neg: '{"ingress": true}'
spec:
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: <CONTAINER_PORT>
selector:
app: <POD_NAME>
</code></pre>
<p>And my ingress.yml</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: <INGRESS_NAME>
annotations:
kubernetes.io/ingress.global-static-ip-name: <CLUSTER_NAME>
networking.gke.io/managed-certificates: <CLUSTER_NAME>
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: <SERVICE_NAME>
servicePort: 80
</code></pre>
<p>Any ideas what is causing this failure?</p>
| Jest Games | <p>With Apollo Server you need the health check to look at the correct endpoint. So add the following to your deployment.yml under the container.</p>
<pre><code>livenessProbe:
initialDelaySeconds: 30
periodSeconds: 30
httpGet:
path: '/.well-known/apollo/server-health'
port: <CONTAINER_PORT>
readinessProbe:
initialDelaySeconds: 30
periodSeconds: 30
httpGet:
path: '/.well-known/apollo/server-health'
port: <CONTAINER_PORT>
</code></pre>
| Jest Games |
<p>I'm using the <code>stable/prometheus</code> helm chart and I've configured a custom values file that further configures alertmanager for the chart deployment. I can install the chart via Helm 3 without any issues; however, there's one thing I'm not able to figure out. For the <code>Slack Receiver/slack_configs/api_url</code> I want to pass that value through the <code>set</code> command so I don't have to keep it hardcoded in the file. </p>
<p>I hope I'm on the right path, and here's what I'm thinking of running to access the value.</p>
<pre><code>helm install test-release stable/prometheus -f customALM.yml --set alertmanagerFiles.alertmanager.yml.receivers[0].api_url=https://hooks.slack.com/services/XXXXXXXXX/XXXXXXXXX/xxxxxxxxxxxxxxxxxxxxxxxxxxx
</code></pre>
<p><strong><code>customALM.yml</code></strong></p>
<pre><code>alertmanagerFiles:
alertmanager.yml:
route:
group_wait: 10s
group_interval: 5m
repeat_interval: 30m
receiver: "slack"
routes:
- receiver: "slack"
group_wait: 10s
match_re:
severity: error|warning
continue: true
receivers:
- name: "slack"
slack_configs:
- api_url: '[howDoIsetThisAtTheCLI?'
channel: 'someChannel'
text: "Text message template etc."
</code></pre>
<p><strong>Update 4/8:</strong> I'm making progress thanks to willro! I'm able to get a value inserted but it puts it at the root of the alertmanager block. I've tried a few different combinations to access receivers/slack_configs but no luck yet =/</p>
<pre><code>helm install test-release stable/prometheus -f customALM.yml --set alertmanagerFiles.api_url=PleaseInsertPrettyPlease --dry-run
</code></pre>
<p><strong>Update 4/9:</strong> I've decided to move the receivers block into a separate file that's encrypted and stored securely. </p>
| RomeNYRR | <blockquote>
<p>Running the command to change the URL after its been deployed is definitely an option that I want to have.</p>
</blockquote>
<p>I'd like to write this answer to give you this option!</p>
<ul>
<li><p>You can chain a few commands with <a href="https://www.grymoire.com/Unix/Sed.html" rel="nofollow noreferrer">SED</a> to edit data on that ConfigMap (it's very similar to what <code>kubectl edit</code> does!)</p></li>
<li><p>For that you will need to use the string deployed on the <code>customALM.yml</code>. For this example I set the parameter as <code>api_url: ChangeMeLater</code> before deploying.</p></li>
<li><p>Then I deployed the chart with <code>helm install test-release stable/prometheus -f customALM.yml</code></p></li>
<li><p>Lastly we run: </p></li>
</ul>
<pre><code>kubectl get cm <CONFIG_MAP_NAME> -o yaml | sed -e "s,<OLD_VALUE>,<NEW_VALUE>,g" | kubectl replace -f -
</code></pre>
<ul>
<li><p>Explaining what's going on:</p>
<ul>
<li><code>kubectl get cm <CONFIG_MAP_NAME> -o yaml |</code> = gets the deployed configmap in yaml format and pipe it to the next command</li>
<li><code>sed -e "s,<OLD_VALUE>,<NEW_VALUE>,g" |</code> = use <code>sed</code> to replace <code>old_value</code> with <code>new_value</code> and pipe it to the next command</li>
<li><code>kubectl replace -f -</code> = use the output from the last command and replace the object currently deployed with the same name.</li>
</ul></li>
<li><p>I'll leave an example here step by step to elucidate more:</p></li>
</ul>
<pre><code>$ helm install test-release stable/prometheus -f customALM.yml
Release "test-release" has been installed. Happy Helming!
...
$ kubectl get cm
NAME DATA AGE
test-release-prometheus-alertmanager 1 44m
test-release-prometheus-server 5 44m
$ kubectl get cm test-release-prometheus-alertmanager -o yaml
apiVersion: v1
data:
alertmanager.yml: |
global: {}
receivers:
- name: slack
slack_configs:
- api_url: ChangeMeLater
channel: someChannel
text: Text message template etc.
route:
group_interval: 5m
group_wait: 10s
receiver: slack
repeat_interval: 30m
routes:
- continue: true
group_wait: 10s
match_re:
severity: error|warning
receiver: slack
kind: ConfigMap
metadata:
creationTimestamp: "2020-04-10T13:41:15Z"
labels:
app: prometheus
chart: prometheus-11.0.6
component: alertmanager
heritage: Helm
release: test-release
name: test-release-prometheus-alertmanager
namespace: default
resourceVersion: "218148"
selfLink: /api/v1/namespaces/default/configmaps/test-release-prometheus-alertmanager
uid: 323fdd40-2f29-4cde-833c-c6300d5688c0
$ kubectl get cm test-release-prometheus-alertmanager -o yaml | sed -e "s,ChangeMeLater,theurl.com/any,g" | kubectl replace -f -
configmap/test-release-prometheus-alertmanager replaced
$ kubectl get cm test-release-prometheus-alertmanager -o yaml
apiVersion: v1
data:
alertmanager.yml: |
global: {}
receivers:
- name: slack
slack_configs:
- api_url: theurl.com/any
channel: someChannel
text: Text message template etc.
route:
group_interval: 5m
group_wait: 10s
receiver: slack
repeat_interval: 30m
routes:
- continue: true
group_wait: 10s
match_re:
severity: error|warning
receiver: slack
kind: ConfigMap
metadata:
creationTimestamp: "2020-04-10T13:41:15Z"
labels:
app: prometheus
chart: prometheus-11.0.6
component: alertmanager
heritage: Helm
release: test-release
name: test-release-prometheus-alertmanager
namespace: default
resourceVersion: "219507"
selfLink: /api/v1/namespaces/default/configmaps/test-release-prometheus-alertmanager
uid: 323fdd40-2f29-4cde-833c-c6300d5688c0
</code></pre>
<p>You can see that the command changed the <code>ChangeMeLater</code> for <code>theurl.com/any</code>.</p>
<p>I'm still thinking about your first option, but this is a good workaround to have in hand.</p>
<p>If you have any doubt let me know!</p>
| Will R.O.F. |
<p>We have several K8S clusters which we need to monitor from <strong>one</strong> operator cluster (cluster A).
We are using Prometheus on each cluster to monitor the cluster itself. Now, in addition, we want to monitor a specific API of an application which will tell us whether our cluster (according to our specific services) is functional or not. I'm not talking about monitoring the cluster itself; we want the operator to monitor 3 applications on each cluster (all 3 applications are deployed on all the monitored clusters).</p>
<blockquote>
<p>Cluster A (operator) should monitor service/apps on cluster B,C,D etc</p>
</blockquote>
<p>e.g. the operator cluster will call the deployed app in each cluster, like <code>host://app1/status</code>, to get a status of 0 or 1, save the status in some DB (maybe the Prometheus DB), and report it outside the cluster.</p>
<p>Currently after some search I found this option but maybe there is more which I dont khow</p>
<ol>
<li><p>Use blackbox exporter - <a href="https://github.com/prometheus/blackbox_exporter" rel="nofollow noreferrer">https://github.com/prometheus/blackbox_exporter</a></p>
</li>
<li><p>Create my own programs (in Golang) which will run like a cronjob in the operator cluster, using the Prometheus client library.</p>
</li>
</ol>
<p><a href="https://github.com/prometheus/client_golang" rel="nofollow noreferrer">https://github.com/prometheus/client_golang</a></p>
<p>I mean running a REST call and using the Prometheus API to store the status inside the Prometheus <code>tsdb</code> via Go code using "github.com/prometheus/client_golang/prometheus/promhttp", but I'm not sure how.</p>
<ol start="3">
<li>Federation ??</li>
</ol>
<p>In addition, in case I am able to collect all the data from the clusters into the operator cluster, how and where should I keep it? In the Prometheus TSDB? Some other way?</p>
<p><strong>What should be the best practice to support our case ?</strong> How should we do it ?</p>
| Beno Odr | <p>Ideally you would instrument your code and expose Prometheus-compatible metrics for whatever needs to be monitored. But there is something to be said for blackbox and/or 3rd-party monitoring/smoke testing.</p>
<p>The http module in Blackbox Exporter is probably what you want (I've used it similarly before). If that isn't flexible enough for the testing you need to do, I like to run custom testing scripts in Lambda that record the results in Cloudwatch (if running in AWS, otherwise use the equivalent in your environment). If you haven't done that before, there is a bit of a learning curve, but it is well worth the effort.</p>
<p>If the APIs are externally accessible, services like Pingdom and Site24x7 offer flexible testing options (for a price), and it is generally recommended to utilize a 3rd party for at least basic up-time testing for the cases where your entire environment goes down--along with all of your monitoring!</p>
<p>But, it does sound like you just want to do some basic blackbox style monitoring which the Blackbox Exporter would be well suited to. It will need a host to run on, and then you'll need to add a job for it to Prometheus' scrape config. Best practice is to use each host for a single purpose, so I'd provision a specific host for the purpose of running blackbox exporter (even if it is just another container in the cluster).</p>
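<p>For reference, a minimal sketch of a Prometheus scrape job for the Blackbox Exporter's <code>http_2xx</code> module (the target URLs and the exporter address are placeholders to adapt to your clusters):</p>
<pre><code>scrape_configs:
  - job_name: 'cluster-app-status'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://cluster-b.example.com/app1/status
          - https://cluster-c.example.com/app1/status
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115   # where the exporter runs
</code></pre>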
| Benjamin Isaacson |
<p>Folks, when running the following kubectl command:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: openvpn-data-claim
namespace: openvpn
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<pre><code>error: SchemaError(io.k8s.api.autoscaling.v1.Scale): invalid object doesn't have additional properties
</code></pre>
<p>kubectl version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.24", GitCommit:"39e41a8d6b7221b901a95d3af358dea6994b4a40", GitTreeState:"clean", BuildDate:"2020-02-29T01:24:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Cmag | <ul>
<li>This answer is an addition to @Cmag's answer, and my intention is to provide more insights about this issue to help the community.</li>
</ul>
<p>According to Kubernetes <strong><a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl" rel="nofollow noreferrer">Version Skew Policy</a>:</strong></p>
<blockquote>
<p><code>kubectl</code> is supported within one minor version (older or newer) of <code>kube-apiserver</code>.</p>
<p><strong>IF</strong> <code>kube-apiserver</code> is at <strong>1.15</strong>: <code>kubectl</code> is supported at <strong>1.16</strong>, <strong>1.15</strong>, and <strong>1.14</strong>.</p>
<p><strong>Note:</strong> If version skew exists between kube-apiserver instances in an HA cluster, for example <code>kube-apiserver</code> instances are at <strong>1.15</strong> and <strong>1.14</strong>, <code>kubectl</code> will support only <strong>1.15</strong> and <strong>1.14</strong> since any other versions would be more than one minor version skewed.</p>
</blockquote>
<ul>
<li>Each update of kubernetes has many components that are added, changed, moved, deprecated or removed. Here is the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md" rel="nofollow noreferrer">Kubernetes Changelog of version 1.15</a>.</li>
</ul>
<p><strong>Even running a much newer client versions may give you some issues</strong></p>
<ul>
<li>In K8s 1.10 the <code>kubectl run</code> had a default behavior of creating deployments:</li>
</ul>
<pre><code>❯ ./kubectl-110 run ubuntu --image=ubuntu
deployment.apps "ubuntu" created
</code></pre>
<ul>
<li>Starting with 1.12, <code>kubectl run</code> was <a href="https://github.com/kubernetes/kubernetes/pull/68132" rel="nofollow noreferrer">deprecated</a> for all generators except pods; here is an example with <strong>kubectl 1.16</strong>:</li>
</ul>
<pre><code>❯ ./kubectl-116 run ubuntu --image=ubuntu --dry-run
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/ubuntu created (dry run)
</code></pre>
<ul>
<li>Besides the warning, it still works as intended, but it <a href="https://github.com/kubernetes/kubernetes/pull/87077" rel="nofollow noreferrer">changed</a> in the K8s 1.18 client:</li>
</ul>
<pre><code>❯ ./kubectl-118 version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.24", GitCommit:"39e41a8d6b7221b901a95d3af358dea6994b4a40", GitTreeState:"clean", BuildDate:"2020-02-29T01:24:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl run --generator=deployment/apps.v1 ubuntu --image=ubuntu --dry-run=client
Flag --generator has been deprecated, has no effect and will be removed in the future.
pod/ubuntu created (dry run)
</code></pre>
<p>It ignored the flag and created only a pod. That flag is supported by kubernetes 1.15 as we saw in the test, but kubectl 1.18 had significant changes that did not allow running it.</p>
<ul>
<li>This is a simple example to illustrate the importance of following the skew policy on Kubernetes; it can save a lot of troubleshooting time in the future!</li>
</ul>
| Will R.O.F. |
<p>I start the app using yarn on my local machine, and configure a local nginx service listening on port 8083, with the yarn service listening on port 3000; when the url path starts with <code>/manage</code>, nginx forwards the http request to the backend service deployed in the remote kubernetes cluster. This is my local nginx forward config:</p>
<pre><code>server {
listen 8083;
server_name admin.reddwarf.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location ^~ /manage/ {
proxy_pass https://admin.example.top;
proxy_redirect off;
proxy_set_header Host https://admin.example.top;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
</code></pre>
<p>In the kubernetes cluster, I am using traefik to forward the http request; it routes by the http <code>Host</code> header. This is the traefik config:</p>
<pre><code> routes:
- kind: Rule
match: Host(`admin.example.top`) && PathPrefix(`/manage`)
priority: 2
services:
- name: dolphin-gateway
port: 8081
</code></pre>
<p>Now the http request gives a 404 not found error. I am sure the api path exists because I could invoke the api in the test tools. I think the request sent from the local debugging app carries the host <code>localhost:8083</code>, which Traefik cannot recognize correctly. What should I do to change the local machine's <code>Host</code> header to <code>admin.example.top</code> so that Traefik can recognize it? Or is my config mistaken? What should I do to make it work as expected? This is the local request demo:</p>
<pre><code>curl 'http://localhost:8083/manage/admin/user/login' \
-H 'Connection: keep-alive' \
-H 'sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="96", "Google Chrome";v="96"' \
-H 'Accept: application/json, text/plain, */*' \
-H 'DNT: 1' \
-H 'Content-Type: application/json;charset=UTF-8' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36' \
-H 'sec-ch-ua-platform: "macOS"' \
-H 'Origin: http://localhost:8083' \
-H 'Sec-Fetch-Site: same-origin' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Referer: http://localhost:8083/' \
-H 'Accept-Language: en,zh-CN;q=0.9,zh;q=0.8,zh-TW;q=0.7,fr;q=0.6' \
--data-raw '{"phone":"+8615623741658","password":"123"}' \
--compressed
</code></pre>
| Dolphin | <p>Try adding <code>-H 'Host: admin.example.top'</code> to the curl request. Then update your nginx config:</p>
<pre><code>server {
listen 8083;
server_name admin.example.top;
...
</code></pre>
<p>The server block that matches the <code>Host</code> header will process the request.</p>
| gohm'c |
<p>I am using a statefulset to deploy mongodb to kubernetes.</p>
<p>I have two pods called:</p>
<p>mongo-replica-0.mongo:27017 and mongo-replica-1.mongo:27017 (.mongo is added because of the kube service)</p>
<p>I am running this command from a kube job after the pods are started</p>
<pre class="lang-bash prettyprint-override"><code>mongo "mongodb://mongo-replica-0.mongo:27017" -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'mongo-replica-0.mongo:27017' }, { _id: 1, host: 'mongo-replica-1.mongo:27017' },] })"
</code></pre>
<p>I receive this error:</p>
<blockquote>
<p>"errmsg" : "The hosts mongo-replica-0.mongo:27017 and mongo-replica-1.mongo:27017 all map to this node in new configuration with {version: 1, term: 0} for replica set rs0</p>
</blockquote>
<p>How can I initiate my replicaset?</p>
| Brandon Kauffman | <p>I needed to set the service's IP to null and session affinity to null to make the service headless. When mongo originally tried to intercommunicate through the service, it saw the service IP and thought it was referencing itself. After the updates it succeeded.</p>
<p>Terraform setting:</p>
<pre><code>resource "kubernetes_service" "mongodb-service" {
metadata {
name = "mongo"
namespace = kubernetes_namespace.atlas-project.id
labels = {
"name" = "mongo"
}
}
spec {
selector = {
app = "mongo"
}
cluster_ip = null
session_affinity = null
port {
port = 27017
target_port = 27017
}
type = "LoadBalancer"
}
lifecycle {
ignore_changes = [spec[0].external_ips]
}
}
</code></pre>
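<p>As a side note, the per-pod DNS names used above (<code>mongo-replica-0.mongo</code>) come from the StatefulSet's headless governing Service. For anyone writing plain manifests instead of Terraform, a minimal sketch of such a headless Service (names and namespace assumed from the example above) would be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo                # must match the StatefulSet's serviceName
  namespace: atlas-project   # assumed namespace, matching the Terraform above
  labels:
    name: mongo
spec:
  clusterIP: None            # headless: DNS resolves directly to the pod IPs
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
</code></pre>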
| Brandon Kauffman |
<p>I have this deployment.yaml for kubernetes</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: smart-flats
labels:
app: smart-flats
spec:
type: NodePort
selector:
app: smart-flats
ports:
- protocol: TCP
port: 5000
name: http
---
apiVersion: v1
kind: ReplicationController
metadata:
name: smart-flats
spec:
replicas: 1
template:
metadata:
labels:
app: smart-flats
spec:
containers:
- name: smart-flats
image: sleezy/go-hello-world:<VERSION>
env:
- name: SECRETKEY
value: "${CONFIG_SECRETKEY}"
ports:
- containerPort: 5000
livenessProbe:
httpGet:
path: /health
port: 5000
initialDelaySeconds: 30
timeoutSeconds: 1
</code></pre>
<p>But when I try to push a new version of the app, <code>kubectl get pods</code> still shows the first one and no updated version. What should I do? I need to update the pod every time I push a new version. Thanks!</p>
| Sizor | <p>First off, <code>kind: Deployment</code> is recommended instead of <code>kind: ReplicationController</code>; see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">here</a> for details. Once you have updated the spec, you can update the image version with <code>kubectl set image deployment smart-flats smart-flats=sleezy/go-hello-world:<new version></code>. Your pods will automatically restart with the new image version.</p>
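<p>For reference, a minimal sketch of the same workload expressed as a <code>Deployment</code> (reusing the names, probe and placeholder image tag from the question) could look like the following; re-applying it with a new tag, or running <code>kubectl set image</code>, will then roll the pods:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-flats
spec:
  replicas: 1
  selector:              # required for apps/v1 Deployments
    matchLabels:
      app: smart-flats
  template:
    metadata:
      labels:
        app: smart-flats
    spec:
      containers:
      - name: smart-flats
        image: sleezy/go-hello-world:<VERSION>   # replace per release
        ports:
        - containerPort: 5000
        livenessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 30
          timeoutSeconds: 1
</code></pre>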
| gohm'c |
<p>I have followed the steps mentioned in this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">nginx for kubernetes</a> guide. To install this in <code>azure</code> I ran the following:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>I opened that file and under the section <code># Source: ingress-nginx/templates/controller-deployment.yaml</code> I could see the <code>resources</code>. Is there a way to override this and set the <code>cpu</code> and <code>memory</code> limit for that <code>ingress</code>? I would also like to know whether everything in there is customisable.</p>
| doc_noob | <blockquote>
<p>I would like to know whether everything in there is customizable.</p>
</blockquote>
<p>Almost everything is customizable, but keep in mind that you must know exactly what are you changing, otherwise it can break your ingress.</p>
<blockquote>
<p>Is there a way to override this and set the cpu and memory limit for that ingress?</p>
</blockquote>
<p>Aside for download and editing the file before deploying it, Here are three ways you can customize it on the run:</p>
<ol>
<li><p><strong><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#edit" rel="nofollow noreferrer">Kubectl Edit:</a></strong></p>
<ul>
<li>The edit command allows you to directly edit any API resource you can retrieve via the command line tools. </li>
<li>It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows.</li>
<li>You can edit multiple objects, although changes are applied one at a time.
Example:</li>
</ul></li>
</ol>
<pre><code>kubectl edit deployment ingress-nginx-controller -n ingress-nginx
</code></pre>
<p>This is the command that will open the deployment mentioned in the file. If you make an invalid change, it will not apply and will save to a temporary file, so use it with that in mind: if it's not applying, you changed something you shouldn't have, like the structure.</p>
<ol start="2">
<li><strong><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="nofollow noreferrer">Kubectl Patch</a> using a yaml file</strong>:
<ul>
<li>Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.</li>
<li>JSON and YAML formats are accepted.</li>
</ul></li>
</ol>
<p>Create a simple file called <code>patch-nginx.yaml</code> with the minimal following content (the parameter you wish to change and his structure):</p>
<pre><code>spec:
template:
spec:
containers:
- name: controller
resources:
requests:
cpu: 111m
memory: 99Mi
</code></pre>
<p>The command structure is: <code>kubectl patch <KIND> <OBJECT_NAME> -n <NAMESPACE> --patch "$(cat <FILE_TO_PATCH>)"</code> </p>
<p>Here is a full example:</p>
<pre><code>$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch "$(cat patch-nginx.yaml)"
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 111m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 99Mi
</code></pre>
<ol start="3">
<li><strong>Kubectl <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#alternate-forms-of-the-kubectl-patch-command" rel="nofollow noreferrer">Patch with JSON format</a></strong>:
<ul>
<li>This is the one-liner version and it follows the same structure as the yaml version, but we will pass the parameter in a json structure instead:</li>
</ul></li>
</ol>
<pre><code>$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch '{"spec":{"template":{"spec":{"containers":[{"name":"controller","resources":{"requests":{"cpu":"122m","memory":"88Mi"}}}]}}}}'
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 122m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 88Mi
</code></pre>
<p>If you have any doubts, let me know in the comments.</p>
| Will R.O.F. |
<p>I was trying to set up an elasticsearch cluster in AKS using the helm chart, but due to the log4j vulnerability I wanted to set it up with the option <code>-Dlog4j2.formatMsgNoLookups</code> set to <code>true</code>. I am getting an unknown flag error when I pass the arguments in helm commands.
Ref: <a href="https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16" rel="noreferrer">https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16</a></p>
<pre><code>helm upgrade elasticsearch elasticsearch --set imageTag=6.8.16 esJavaOpts "-Dlog4j2.formatMsgNoLookups=true"
Error: unknown shorthand flag: 'D' in -Dlog4j2.formatMsgNoLookups=true
</code></pre>
<p>I have also tried to add below in <code>values.yaml</code> file</p>
<pre><code>esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
log4j2.properties: |
-Dlog4j2.formatMsgNoLookups = true
</code></pre>
<p>But the values are not added to <code>/usr/share/elasticsearch/config/jvm.options</code>, <code>/usr/share/elasticsearch/config/log4j2.properties</code>, or the environment variables.</p>
| theG | <p>First of all, here's a good source of knowledge about mitigating <a href="https://xeraa.net/blog/2021_mitigate-log4j2-log4shell-elasticsearch/" rel="nofollow noreferrer">Log4j2 security issue</a> if this is the reason you reached here.</p>
<p>Here's how you can write your <code>values.yaml</code> for the Elasticsearch chart:</p>
<pre><code>esConfig:
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
</code></pre>
<p>A ConfigMap will be generated by Helm:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-master-config
...
data:
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
</code></pre>
<p>And the Log4j configuration will be mount to your Elasticsearch as:</p>
<pre><code>...
volumeMounts:
...
- name: esconfig
mountPath: /usr/share/elasticsearch/config/log4j2.properties
subPath: log4j2.properties
</code></pre>
<p><strong>Update:</strong> How to set and add multiple configuration files.</p>
<p>You can set up other ES configuration files in your <code>values.yaml</code>; all the files that you specify here will be part of the ConfigMap, and each of the files will be mounted at <code>/usr/share/elasticsearch/config/</code> in the Elasticsearch container. Example:</p>
<pre><code>esConfig:
elasticsearch.yml: |
node.master: true
node.data: true
log4j2.properties: |
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
jvm.options: |
# You can also place a comment here.
-Xmx1g -Xms1g -Dlog4j2.formatMsgNoLookups=true
roles.yml: |
click_admins:
run_as: [ 'clicks_watcher_1' ]
cluster: [ 'monitor' ]
indices:
- names: [ 'events-*' ]
privileges: [ 'read' ]
field_security:
grant: ['category', '@timestamp', 'message' ]
query: '{"match": {"category": "click"}}'
</code></pre>
<p><strong>ALL of the configurations above are for illustration only to demonstrate how to add multiple configuration files in the values.yaml. Please substitute these configurations with your own settings.</strong></p>
| gohm'c |
<p>We are using Rancher to set up clusters with Canal as the CNI. We decided to use Traefik as an Ingress Controller and wanted to create a NetworkPolicy. We disabled ProjectIsolation, and Traefik is running in the System project in the kube-system namespace.</p>
<p>I created this Policy:</p>
<pre class="lang-yaml prettyprint-override"><code># deny all ingress traffic
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny-all
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
---
# allow traefik
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: ingress-allow-traefik
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
namespace: kube-system
podSelector:
matchLabels:
app: traefik
---
# allow backnet
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: ingress-allow-backnet
spec:
podSelector: {}
ingress:
- from:
- ipBlock:
cidr: 10.0.0.0/24
- ipBlock:
cidr: 10.1.0.0/24
- ipBlock:
cidr: 10.2.0.0/24
- ipBlock:
cidr: 192.168.0.0/24
</code></pre>
<p>But somehow we can't get this to work. The connection times out and that's it. Is there a major problem with this policy? Is there something I didn't understand about NetworkPolicies?</p>
<p>Thanks in advance</p>
| mreiners | <p>I solved the Problem. It was a plain beginner mistake:</p>
<pre class="lang-yaml prettyprint-override"><code>- namespaceSelector:
matchLabels:
namespace: kube-system
</code></pre>
<p>I didn't add the <code>Label</code> <code>namespace: kube-system</code> to the <code>Namespace</code> <code>kube-system</code>.</p>
<p>After adding the Label it worked instantly.</p>
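<p>For completeness, this is roughly what the labeled <code>Namespace</code> object has to contain for the <code>namespaceSelector</code> above to match; on an existing cluster you would normally just add the label with <code>kubectl label namespace kube-system namespace=kube-system</code> instead of re-creating it:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    namespace: kube-system   # the label the NetworkPolicy's namespaceSelector matches on
</code></pre>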
| mreiners |
<p>I have a simple service and pod as described below, but the readiness probe fails, complaining about connection refused.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keystone-api
spec:
selector:
app: keystone
ports:
- protocol: TCP
port: 5000
targetPort: 5000
name: public
- protocol: TCP
port: 35357
targetPort: 35357
name: admin
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keystone
labels:
app: keystone
spec:
replicas: 1
selector:
matchLabels:
app: keystone
template:
metadata:
labels:
app: keystone
spec:
containers:
- name: keystone
image: openio/openstack-keystone
readinessProbe:
tcpSocket:
port: 5000
env:
- name: OS_IDENTITY_ADMIN_PASSWD
value: password
- name: IPADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 5000
name: public
- containerPort: 35357
name: admin
</code></pre>
<p>Error:</p>
<pre><code> Normal Pulled 37m kubelet, kind-pl Successfully pulled image "openio/openstack-keystone"
Normal Created 37m kubelet, kind-pl Created container keystone
Normal Started 37m kubelet, kind-pl Started container keystone
Warning Unhealthy 35m (x8 over 37m) kubelet, kind-pl Readiness probe failed: dial tcp 10.244.0.10:5000: connect: connection refused
</code></pre>
<p>This is how I launched the deployment and service:</p>
<pre><code>kubectl apply -f application.yaml --namespace=heat
</code></pre>
<p>What am I missing here? Service spec:</p>
<pre><code>spec:
clusterIP: 10.96.162.65
ports:
- name: public
port: 5000
protocol: TCP
targetPort: 5000
- name: admin
port: 35357
protocol: TCP
targetPort: 35357
selector:
app: keystone
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>From my VM:</p>
<pre><code>telnet 10.96.162.65 5000
Trying 10.96.162.65...
</code></pre>
<p>Kubectl describe pod logs:</p>
<pre><code>Namespace: heat
Priority: 0
Node: kind-control-plane/172.17.0.2
Start Time: Sun, 19 Apr 2020 16:04:36 +0530
Labels: app=keystone
pod-template-hash=8587f8dc76
Annotations: <none>
Status: Running
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Controlled By: ReplicaSet/keystone-8587f8dc76
Containers:
keystone:
Container ID: containerd://9888e62ac7df3f076bd542591a6413a0ef5b70be2c792bbf06e423b5dae89ca0
Image: openio/openstack-keystone
Image ID: docker.io/openio/openstack-keystone@sha256:62c8e36046ead4289ca4a6a49774bc589e638f46c0921f40703570ccda47a320
Ports: 5000/TCP, 35357/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sun, 19 Apr 2020 16:08:01 +0530
Ready: True
Restart Count: 0
Readiness: tcp-socket :5000 delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
OS_IDENTITY_ADMIN_PASSWD: password
IPADDR: (v1:status.podIP)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wf2bp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-wf2bp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wf2bp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
## Kubectl log podname logs:
10.244.0.10 - - [19/Apr/2020 11:14:33] "POST /v3/auth/tokens HTTP/1.1" 201 2161
2020-04-19 11:14:33.699 49 INFO keystone.common.wsgi [req-fc64c89f-724c-4838-bc34-3907a8f79041 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] GET http://10.244.0.10:35357/v3/services/heat
2020-04-19 11:14:33.705 49 WARNING keystone.common.wsgi [req-fc64c89f-724c-4838-bc34-3907a8f79041 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] Could not find service: heat.: ServiceNotFound: Could not find service: heat.
10.244.0.10 - - [19/Apr/2020 11:14:33] "GET /v3/services/heat HTTP/1.1" 404 90
2020-04-19 11:14:33.970 49 INFO keystone.common.wsgi [req-3589e675-8818-4b82-ad7d-c944d9e2a232 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] GET http://10.244.0.10:35357/v3/services?name=heat
10.244.0.10 - - [19/Apr/2020 11:14:34] "GET /v3/services?name=heat HTTP/1.1" 200 341
2020-04-19 11:14:34.210 49 INFO keystone.common.wsgi [req-492a3e9f-8892-4204-8ca9-c1465e28e709 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] POST http://10.244.0.10:35357/v3/endpoints
10.244.0.10 - - [19/Apr/2020 11:14:34] "POST /v3/endpoints HTTP/1.1" 201 360
10.244.0.10 - - [19/Apr/2020 11:14:38] "GET / HTTP/1.1" 300 267
2020-04-19 11:14:38.089 49 INFO keystone.common.wsgi [req-4c8952b3-7d5b-4ee3-9cf9-f736e1628448 - - - - -] POST http://10.244.0.10:35357/v3/auth/tokens
10.244.0.10 - - [19/Apr/2020 11:14:38] "POST /v3/auth/tokens HTTP/1.1" 201 2367
2020-04-19 11:14:38.737 49 INFO keystone.common.wsgi [req-ebd817f5-d473-4909-b04d-ff0e1d5badab - - - - -] POST http://10.244.0.10:35357/v3/auth/tokens
10.244.0.10 - - [19/Apr/2020 11:14:39] "POST /v3/auth/tokens HTTP/1.1" 201 2367
2020-04-19 11:14:39.635 49 INFO keystone.common.wsgi [req-b68139dc-c62f-4fd7-9cfc-e472a88b9022 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] GET http://10.244.0.10:35357/v3/services/heat
2020-04-19 11:14:39.640 49 WARNING keystone.common.wsgi [req-b68139dc-c62f-4fd7-9cfc-e472a88b9022 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] Could not find service: heat.: ServiceNotFound: Could not find service: heat.
10.244.0.10 - - [19/Apr/2020 11:14:39] "GET /v3/services/heat HTTP/1.1" 404 90
2020-04-19 11:14:39.814 49 INFO keystone.common.wsgi [req-6562f24f-f032-4150-86d9-951318918871 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] GET http://10.244.0.10:35357/v3/services?name=heat
10.244.0.10 - - [19/Apr/2020 11:14:39] "GET /v3/services?name=heat HTTP/1.1" 200 341
2020-04-19 11:14:40.043 49 INFO keystone.common.wsgi [req-6542d767-29bf-4c1a-bbd9-a81a72e106dc 411ecaea9d3241a88e86355ba22f7a0f 277a0fe02d174c47bae4d67e697be0a7 - default default] POST http://10.244.0.10:35357/v3/endpoints
10.244.0.10 - - [19/Apr/2020 11:14:40] "POST /v3/endpoints HTTP/1.1" 201 362
</code></pre>
</code></pre>
<p>Have manually created heat service:</p>
<pre><code>[root@keystone-8587f8dc76-rthmn /]# openstack service list
+----------------------------------+--------------+---------------+
| ID | Name | Type |
+----------------------------------+--------------+---------------+
| ec5ad9402b3b46599f3f8862e79429b3 | keystone | identity |
| 625d8b82a67d472981789f10ba37c381 | openio-swift | object-store |
| 415b33b5d45c48f6916d38f7b146953a | heat | orchestration |
+----------------------------------+--------------+---------------+
</code></pre>
| Saurabh Arora | <p><strong>TL;DR:</strong></p>
<p>I've made some tests; your docker image and deployment seem really fine. I was able to log into the pod, and it was running and listening on the port.</p>
<ul>
<li>The reason why your readiness probe was returning <code>Warning Unhealthy...: connection refused</code> was that <strong>the pod was not given enough time to start.</strong></li>
</ul>
<p>I edited your deployment with the following lines:</p>
<pre><code> readinessProbe:
tcpSocket:
port: 5000
initialDelaySeconds: 300
periodSeconds: 30
</code></pre>
<hr />
<p><strong>Explanation:</strong></p>
<ul>
<li>From <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">Configuring Probes</a> Documentation:</li>
</ul>
<blockquote>
<p><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</p>
<p><code>periodSeconds</code>: How often (in seconds) to perform the probe. Default to 10s. Minimum value is 1s.</p>
</blockquote>
<p><strong>NOTE:</strong> During my tests I noticed that the pod takes about 5 minutes to be running, way longer than the default 10s; that's why I set it to 300 seconds.</p>
<p>Meaning that after 5 minutes the pod was serving on port 5000.</p>
<p>Add the <code>initialDelaySeconds</code> line to your deployment and you should be fine.</p>
<hr />
<p><strong>Here is my Reproduction:</strong></p>
<ul>
<li>Edited Deployment:</li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keystone-api
spec:
selector:
app: keystone
ports:
- protocol: TCP
port: 5000
targetPort: 5000
name: public
- protocol: TCP
port: 35357
targetPort: 35357
name: admin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keystone
labels:
app: keystone
spec:
replicas: 1
selector:
matchLabels:
app: keystone
template:
metadata:
labels:
app: keystone
spec:
containers:
- name: keystone
image: openio/openstack-keystone
readinessProbe:
tcpSocket:
port: 5000
initialDelaySeconds: 300
periodSeconds: 30
env:
- name: OS_IDENTITY_ADMIN_PASSWD
value: password
- name: IPADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 5000
name: public
- containerPort: 35357
name: admin
</code></pre>
<ul>
<li>Create the resource and wait:</li>
</ul>
<pre><code>$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
keystone-7fd895cfb5-kqnnn 0/1 Running 0 3m28s
ubuntu 1/1 Running 0 113m
keystone-7fd895cfb5-kqnnn 1/1 Running 0 5m4s
</code></pre>
<ul>
<li>After 5min4s the container was running <code>1/1</code> and I <code>describe</code> the pod:</li>
</ul>
<pre><code>$ kubectl describe pod keystone-586b8948d5-c4lpq
Name: keystone-586b8948d5-c4lpq
Namespace: default
Priority: 0
Node: minikube/192.168.39.39
Start Time: Mon, 20 Apr 2020 15:02:24 +0000
Labels: app=keystone
pod-template-hash=586b8948d5
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/keystone-586b8948d5
Containers:
keystone:
Container ID: docker://8bc14d2b6868df6852967c4a68c997371006a5d83555c500d86060e48c549165
Image: openio/openstack-keystone
Image ID: docker-pullable://openio/openstack-keystone@sha256:62c8e36046ead4289ca4a6a49774bc589e638f46c0921f40703570ccda47a320
Ports: 5000/TCP, 35357/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 20 Apr 2020 15:02:26 +0000
Ready: True
Restart Count: 0
Readiness: tcp-socket :5000 delay=300s timeout=1s period=30s #success=1 #failure=3
Environment:
OS_IDENTITY_ADMIN_PASSWD: password
IPADDR: (v1:status.podIP)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kcw8c (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-kcw8c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kcw8c
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/keystone-586b8948d5-c4lpq to minikube
Normal Pulling 7m12s kubelet, minikube Pulling image "openio/openstack-keystone"
Normal Pulled 7m11s kubelet, minikube Successfully pulled image "openio/openstack-keystone"
Normal Created 7m11s kubelet, minikube Created container keystone
Normal Started 7m11s kubelet, minikube Started container keystone
</code></pre>
<p>As you can see now there is no error.</p>
<p>Let me know in the comments if you have any doubt.</p>
| Will R.O.F. |
<p>I can't connect to my kafka cluster from the outside. There seems to be a problem with the listeners and advertised listeners.</p>
<p>Any suggestions?</p>
<p><strong>When I try to connect from the outside on port 30092, then I always get a reference back to kafka-svc:9092</strong></p>
<ul>
<li>Cluster name: dev-docker-x02</li>
<li>How I test: default kafka for windows: .\bin\windows\kafka-topics.bat --list --bootstrap-server dev-docker-x02:30092</li>
<li>Requirement: use confluentinc/cp-kafka:5.4.0-1-ubi8</li>
</ul>
<p>My setup:</p>
<p><a href="https://i.stack.imgur.com/37LWw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/37LWw.jpg" alt="enter image description here"></a></p>
<p>My broker configuration (the problem seems to be in the (advertised) listeners):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-deploy
spec:
replicas: 1
selector:
matchLabels:
app: kafka-pod
template:
metadata:
labels:
app: kafka-pod
spec:
containers:
- name: kafka-ctr # Container name
image: confluentinc/cp-kafka:5.4.0-1-ubi8
ports:
- containerPort: 9092 # Port exposed by the container
env:
- name: KAFKA_BROKER_ID
value: "0"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-svc:2181
- name: KAFKA_LISTENERS
value: "LISTENER_INTERNAL://:9092,LISTENER_EXTERNAL://:30092"
- name: KAFKA_ADVERTISED_LISTENERS
value: "LISTENER_INTERNAL://kafka-svc:9092,LISTENER_EXTERNAL://dev-kube-x02:30092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "LISTENER_INTERNAL:PLAINTEXT,LISTENER_EXTERNAL:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "LISTENER_EXTERNAL"
- name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
value: "false"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
value: "100"
</code></pre>
| Dimitri Dewaele | <p>Kafka has a lot of components like Headless Services and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets" rel="nofollow noreferrer">Statefulsets</a>, and each one has a distinctive role.
For that reason I'd also suggest the usage of the <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/README.md#external-access" rel="nofollow noreferrer">Kafka Confluent Helm Chart</a>.</p>
<p>This guide is based on the helm chart since you mentioned you'd use it in the comments but the concepts here can be extended to any application that uses headless services and need external access.</p>
<p>For what you provided, I believe you are facing some difficulties because you are referencing a headless service externally, which will not work since the headless service does not have an internal operational IP.</p>
<blockquote>
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Service</a> is created alongside the StatefulSet. The created service will <strong>not</strong> be given a <code>clusterIP</code>, but will instead simply include a list of <code>Endpoints</code>.
These <code>Endpoints</code> are then used to generate instance-specific DNS records in the form of:
<code><StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local</code></p>
</blockquote>
<p>It creates a DNS name for each pod, e.g:</p>
<pre><code>[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
</code></pre>
<ul>
<li><p>This is what makes this services connect to each other inside the cluster.</p></li>
<li><p>You can't, therefore, expose <code>cp-kafka:9092</code> which is the headless service, also only used internally, as I explained above.</p></li>
<li>In order to get outside access <strong>you have to set the parameters <code>nodeport.enabled</code> to <code>true</code></strong> as stated here: <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/README.md#external-access" rel="nofollow noreferrer">External Access Parameters</a>.</li>
<li>It adds one service to each kafka-N pod during chart deployment.</li>
<li>Note that the service created has the selector <code>statefulset.kubernetes.io/pod-name: demo-cp-kafka-0</code>; this is how the service identifies the pod it is intended to connect to.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li><code>git clone https://github.com/confluentinc/cp-helm-charts.git</code></li>
<li>edit the file <code>cp-helm-charts/cp-kafka/values.yaml</code> changing the <code>nodeport</code> from <code>false</code> to <code>true</code> and change the ports as you'd like:</li>
</ul>
<pre><code>nodeport:
enabled: true
servicePort: 19092
firstListenerPort: 31090
</code></pre>
<ul>
<li>Deploy the chart:</li>
</ul>
<pre><code>$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
</code></pre>
<ul>
<li>My Node is on IP <code>35.226.189.123</code> and I'll connect to the <code>demo-cp-kafka-0-nodeport</code> nodeport service which is on port <code>31090</code>, now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use <code>kafka-client</code> pod to test:</li>
</ul>
<pre><code>user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
</code></pre>
<p>As you can see, I was able to access the kafka from outside.</p>
<ul>
<li>Using this method, the helm chart will create 1 external service for each replica you define.</li>
<li>If you need external access to Zookeeper it's not automatically provisioned like the kafka agent, but I'll leave a service model for you:</li>
</ul>
<p><code>zookeeper-external-0.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cp-zookeeper
pod: demo-cp-zookeeper-0
name: demo-cp-zookeeper-0-nodeport
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- name: external-broker
nodePort: 31181
port: 12181
protocol: TCP
targetPort: 31181
selector:
app: cp-zookeeper
statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
sessionAffinity: None
type: NodePort
</code></pre>
<ul>
<li>It will create a service for it:</li>
</ul>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
</code></pre>
<ul>
<li>Test it with your external IP:</li>
</ul>
<pre><code>pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
</code></pre>
<p>If you have any doubts, let me know in the comments!</p>
| Will R.O.F. |
<p>Faced the following issue:
I need to add a search domain on some pods to be able to communicate with a headless service. The Kubernetes documentation recommends setting a dnsConfig and configuring everything in it. That's what I did. There is also a limitation that only 6 search domains can be set.
Part of the manifest:</p>
<pre><code> spec:
hostname: search
dnsPolicy: ClusterFirst
dnsConfig:
searches:
- indexer.splunk.svc.cluster.local
containers:
- name: search
</code></pre>
<p>Unfortunately it has no effect, and the resolv.conf file on the targeted pod doesn't include this search domain:</p>
<pre><code>search splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal
nameserver 10.39.240.10
options ndots:5
</code></pre>
<p>After a quick look at this config I found that <strong>currently there are 6 search domains specified, and this is probably the reason why the new search domain is not added</strong>. You can add it manually and everything will work, but this isn't what I'm trying to achieve.</p>
<p>Do you have any ideas how to bypass this limitation?</p>
<p>P.S. Setting dnsPolicy to None is not an option, nor is adding prestart hooks to add my search zone.</p>
<pre><code>---
# Search-head deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: search
namespace: splunk
labels:
app: splunk
spec:
replicas: 1
selector:
matchLabels:
app: splunk
template:
metadata:
labels:
app: splunk
spec:
hostname: search
dnsPolicy: ClusterFirst
dnsConfig:
searches:
- indexer.splunk.svc.cluster.local
containers:
- name: search
image: splunk/splunk
env:
- name: SPLUNK_START_ARGS
value: "--accept-license"
- name: SPLUNK_PASSWORD
valueFrom:
secretKeyRef:
name: splunk-password
key: password
- name: SPLUNK_ROLE
value: splunk_search_head
- name: SPLUNK_SEARCH_HEAD_URL
value: search
- name: SPLUNK_INDEXER_URL # TODO: make this part dynamic.
value: indexer-0,indexer-1
ports:
- name: web
containerPort: 8000
- name: mgmt
containerPort: 8089
- name: kv
containerPort: 8191
volumeMounts:
- mountPath: /opt/splunk/var
name: sh-volume
volumes:
- name: sh-volume
persistentVolumeClaim:
claimName: sh-volume
</code></pre>
| DavidGreen55 | <p>According to <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">Pods DnsConfig Documentation</a>:</p>
<blockquote>
<p><code>searches</code>: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. <strong>Kubernetes allows for at most 6 search domains.</strong></p>
</blockquote>
<ul>
<li><p>Even though <a href="http://man7.org/linux/man-pages/man5/resolv.conf.5.html" rel="nofollow noreferrer">resolv.conf docs</a> mention it accepts more than 6 search domains on latest versions, it's not yet possible to surpass this number of search domains through kubernetes deployment.</p></li>
<li><p>I created a workaround in which an <code>InitContainer</code> creates and mounts to the pod a new <code>resolv.conf</code>, and after the container is up it replaces the automatically generated one.
This way, if the container crashes or gets rebooted, the <code>resolv.conf</code> will always be reinforced.</p></li>
</ul>
<p><strong>nginx-emulating-your-splunk-deploy.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: search
namespace: default
labels:
app: splunk
spec:
replicas: 1
selector:
matchLabels:
app: splunk
template:
metadata:
labels:
app: splunk
spec:
hostname: search
initContainers:
- name: initdns
image: nginx
imagePullPolicy: IfNotPresent
command: ["/bin/bash","-c"]
args: ["echo -e \"nameserver 10.39.240.10\nsearch indexer.splunk.svc.cluster.local splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal\noptions ndots:5\n \" > /mnt/resolv.conf"]
volumeMounts:
- mountPath: /mnt
name: volmnt
containers:
- name: search
image: nginx
env:
- name: SPLUNK_START_ARGS
value: "--accept-license"
- name: SPLUNK_PASSWORD
value: password
- name: SPLUNK_ROLE
value: splunk_search_head
- name: SPLUNK_SEARCH_HEAD_URL
value: search
ports:
- name: web
containerPort: 8000
- name: mgmt
containerPort: 8089
- name: kv
containerPort: 8191
volumeMounts:
- mountPath: /mnt
name: volmnt
command: ["/bin/bash","-c"]
args: ["cp /mnt/resolv.conf /etc/resolv.conf ; nginx -g \"daemon off;\""]
volumes:
- name: volmnt
emptyDir: {}
</code></pre>
<ul>
<li>Remember to check the following fields and set according to your environment:
<ul>
<li><code>namespace</code>, <code>nameserver</code>, <code>container.image</code>, <code>container.args</code> </li>
</ul></li>
</ul>
<hr>
<ul>
<li><strong>Reproduction:</strong></li>
</ul>
<pre><code>$ kubectl apply -f search-head-splunk.yaml
deployment.apps/search created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
search-64b6fb5854-shm2x 1/1 Running 0 5m14sa
$ kubectl exec -it search-64b6fb5854-shm2x -- cat /etc/resolv.conf
nameserver 10.39.240.10
search indexer.splunk.svc.cluster.local splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal
options ndots:5
</code></pre>
<p>You can see that the resolv.conf stays as configured, please reproduce in your environment and let me know if you find any problem.</p>
<hr>
<p><strong>EDIT 1:</strong></p>
<ul>
<li>The above scenario is designed for an environment where you need more than 6 search domains.</li>
<li><p>We have to hardcode the DNS server, but the <code>kube-dns</code> service sticks with the same IP during the cluster lifespan, and sometimes even after cluster recreation; it depends on the network configuration.</p></li>
<li><p>If you need 6 or less domains you can just change <code>dnsPolicy</code> to <code>None</code> and skip the <code>InitContainer</code>:</p></li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: search
namespace: splunk
labels:
app: splunk
spec:
replicas: 1
selector:
matchLabels:
app: splunk
template:
metadata:
labels:
app: splunk
spec:
hostname: search
dnsPolicy: "None"
dnsConfig:
nameservers:
- 10.39.240.10
searches:
- indexer.splunk.svc.cluster.local
- splunk.svc.cluster.local
- us-east4-c.c.'project-id'.internal
- c.'project-id'.internal
- svc.cluster.local
- cluster.local
options:
- name: ndots
- value: "5"
containers:
- name: search
image: splunk/splunk
...
{{{the rest of your config}}}
</code></pre>
| Will R.O.F. |
<p>I have deployed Elasticsearch, Kibana and Enterprise Search to my local Kubernetes Cluster <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestrating-elastic-stack-applications.html" rel="nofollow noreferrer">via this official guide</a> and they are working fine individually (and are connected to the Elasticsearch instance).</p>
<p>Now I wanted to setup Kibana to connect with Enterprise search like this:</p>
<p><a href="https://i.stack.imgur.com/fFQ8h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fFQ8h.png" alt="enter image description here" /></a></p>
<p>I tried it with localhost, but that obviously did not work in Kubernetes.
So I tried the service name inside Kubernetes, but now I am getting this error:</p>
<p><a href="https://i.stack.imgur.com/xJuoe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xJuoe.png" alt="enter image description here" /></a></p>
<p>The Log from Kubernetes is the following:</p>
<pre><code>{"type":"log","@timestamp":"2021-01-15T15:18:48Z","tags":["error","plugins","enterpriseSearch"],"pid":8,"message":"Could not perform access check to Enterprise Search: FetchError: request to https://enterprise-search-quickstart-ent-http.svc:3002/api/ent/v2/internal/client_config failed, reason: getaddrinfo ENOTFOUND enterprise-search-quickstart-ent-http.svc enterprise-search-quickstart-ent-http.svc:3002"}
</code></pre>
<p>So the question is: how do I configure my kibana <code>enterpriseSearch.host</code> so that it will work?</p>
<p>Here are my deployment yaml files:</p>
<pre><code># Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
spec:
version: 7.10.1
count: 1
elasticsearchRef:
name: quickstart
config:
enterpriseSearch.host: 'https://enterprise-search-quickstart-ent-http.svc:3002'
# Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
name: enterprise-search-quickstart
spec:
version: 7.10.1
count: 1
elasticsearchRef:
name: quickstart
config:
ent_search.external_url: https://localhost:3002
</code></pre>
| Daniel Habenicht | <p>I encountered quite the same issue, but on a development environment based on docker-compose.</p>
<p>I fixed it by setting the <code>ent_search.external_url</code> value to the same value as <code>enterpriseSearch.host</code>.</p>
<p>In your case, I guess, your 'Enterprise Search' deployment yaml file should look like this:</p>
<pre><code># Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
name: enterprise-search-quickstart
spec:
version: 7.10.1
count: 1
elasticsearchRef:
name: quickstart
config:
ent_search.external_url: 'https://enterprise-search-quickstart-ent-http.svc:3002'
</code></pre>
| Alexandre Chevallier |
<p>By following this document <a href="https://github.com/Azure/kubelogin/blob/master/README.md#user-principal-login-flow-non-interactive" rel="nofollow noreferrer">https://github.com/Azure/kubelogin/blob/master/README.md#user-principal-login-flow-non-interactive</a>,
I had enabled kubelogin auth for Azure Kubernetes Service. It didn't work as expected and now I want to disable kubelogin auth. But even for the new AKS clusters that I create with the option 'Azure AD auth with Kubernetes RBAC' enabled, when I get credentials</p>
<pre><code>az aks get-credentials --resource-group centralus-aks-01-rg --name aks-centralus-01
</code></pre>
<p>I see the below in the kube config file. It is still using kubelogin auth:</p>
<pre><code>users:
- name: clusterUser_centralus-aks-01-rg_aks-private-centralus-01
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- get-token
- --environment
- AzurePublicCloud
- --server-id
- 6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx0
- --client-id
- xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx0
- --tenant-id
- xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx0
- --login
- devicecode
command: kubelogin
env: null
interactiveMode: IfAvailable
provideClusterInfo: false
</code></pre>
<p>Can someone let me know how to disable kubelogin and get back to the regular auth provider when getting the credentials? So when I do <code>kubectl get nodes</code>, a new browser tab should open and I can enter the user and the code. I couldn't find any reference for disabling this.</p>
| medinibster | <p>See, <a href="https://github.com/Azure/AKS/issues/2728" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2728</a> - kubelogin is becoming mandatory, so you'll need to get used to the process.</p>
<p>Try to run "kubectl config unset clusters" to clear config, then you'll need to get-credentials and you'll be prompted to use your browser on running a kubectl command.</p>
| Sanners |
<p>I just installed the ingress controller in an AKS cluster using this deployment resource:</p>
<blockquote>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml</a></p>
</blockquote>
<p>specific for azure.</p>
<p>So far everything works fine the issue i am having is, i get this error on my certificate that :</p>
<blockquote>
<p>Kubernetes Ingress Controller Fake Certificate</p>
</blockquote>
<p>I know I followed all steps as I should, but I can't figure out why my certificate says that. I would appreciate it if anyone can help guide me to a possible fix for the issue.</p>
<p>issuer manifest</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
name: TargetPods-6dc98445c4-jr6pt
spec:
tls:
- hosts:
- test.domain.io
secretName: TargetPods-tls
rules:
- host: test.domain.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: TargetPod-6dc98445c4-jr6pt
port:
number: 80
</code></pre>
<p>Below is the result of <code>kubectl get secrets -n ingress-nginx</code>:</p>
<pre><code>> NAME TYPE DATA AGE
default-token-dh88n kubernetes.io/service-account-token 3 45h
ingress-nginx-admission Opaque 3 45h
ingress-nginx-admission-token-zls6p kubernetes.io/service-account-token 3 45h
ingress-nginx-token-kcvpf kubernetes.io/service-account-token 3 45h
</code></pre>
<p>Also the secrets from cert-manager, <code>kubectl get secrets -n cert-manager</code>:</p>
<pre><code>> NAME TYPE DATA AGE
cert-manager-cainjector-token-2m8nw kubernetes.io/service-account-token 3 46h
cert-manager-token-vghv5 kubernetes.io/service-account-token 3 46h
cert-manager-webhook-ca Opaque 3 46h
cert-manager-webhook-token-chz6v kubernetes.io/service-account-token 3 46h
default-token-w2jjm kubernetes.io/service-account-token 3 47h
letsencrypt-cluster-issuer Opaque 1 12h
letsencrypt-cluster-issuer-key Opaque 1 45h
</code></pre>
<p>Thanks in advance</p>
| Ribo01 | <p>You're seeing this as it is the default out of the box TLS certificate. You should replace this with your own certificate.</p>
<p>Here is some information in the <a href="https://github.com/kubernetes/ingress-nginx/blob/c6a8ad9a65485b1c4593266ab067dc33f3140c4f/docs/user-guide/tls.md#default-ssl-certificate" rel="nofollow noreferrer">documentation</a></p>
<p>You essentially want to create a TLS certificate (try <a href="https://shocksolution.com/2018/12/14/creating-kubernetes-secrets-using-tls-ssl-as-an-example/" rel="nofollow noreferrer">this</a> method if you are unfamiliar) and then add --default-ssl-certificate=default/XXXXX-tls in the nginx-controller deployment in your yaml. You can add this as an argument, search for "/nginx-ingress-controller" in your yaml and that'll take you to the relevant section.</p>
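<p>A minimal sketch of where that argument sits inside the controller Deployment's container spec (the secret reference <code>default/XXXXX-tls</code> is a placeholder for your own <code>namespace/secret-name</code>; keep the rest of the upstream args unchanged):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --default-ssl-certificate=default/XXXXX-tls   # namespace/secret-name of your TLS secret
        # ...the remaining args from the upstream deploy.yaml stay as they are
</code></pre>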
| Sanners |
<p>I have a very simple node.js app, server is listening on port <code>8282</code>:</p>
<pre><code>const http = require('http');
const os = require('os');
...
...
var server = http.createServer(handler);
server.listen(8282);
</code></pre>
<p>I have a <code>Dockerfile</code> for it:</p>
<pre><code>FROM node:12
COPY app.js /app.js
ENTRYPOINT ["node","app.js"]
</code></pre>
<p>Then I built the image <code>myreg/my-app:1.0</code> successfully & deployed to my k8s cluster (AWS EKS) with the following manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: my-ns
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: myreg/my-app:1.0
ports:
- containerPort: 8282
imagePullSecrets:
- name: my-reg-cred
---
apiVersion: v1
kind: Service
metadata:
name: my-svc
namespace: my-ns
spec:
ports:
- name: http
port: 8282
targetPort: 8282
selector:
app: my-app
</code></pre>
<p>I can see pods are running:</p>
<pre><code>kubectl --namespace=my-ns get po
NAME READY STATUS RESTARTS AGE
my-app-5477c9c798-5q4v4 1/1 Running 0 5m3s
</code></pre>
<p>Then I want to do <code>port-forwarding</code>, on my terminal:</p>
<pre><code>$kubectl --namespace=my-ns port-forward my-app-5477c9c798-5q4v4 8282
Forwarding from 127.0.0.1:8282 -> 8282
Forwarding from [::1]:8282 -> 8282
</code></pre>
<p>I open another terminal window, using curl to communicate with my pod:</p>
<pre><code>curl localhost:8282
curl: (52) Empty reply from server
</code></pre>
<p>On the other terminal window where port-forwarding is running:</p>
<pre><code>Handling connection for 8282
E0411 23:12:25.254291 45793 portforward.go:400] an error occurred forwarding 8282 -> 8282: error forwarding port 8282 to pod ca30fad7ea7100d684d1743573dea426caa9a333163ccbca395ed57eaa363061, uid : exit status 1: 2022/04/11 20:12:25 socat[1326] E connect(5, AF=2 127.0.0.1:8282, 16): Connection refused
</code></pre>
<p>Why is port forwarding failing in my implementation? What am I missing?</p>
| user842225 | <p>Your app is only <a href="https://nodejs.org/api/net.html#netcreateconnectionport-host-connectlistener" rel="nofollow noreferrer">listening on localhost</a>; try changing <code>server.listen(8282)</code> to <code>server.listen(8282,"0.0.0.0")</code>, rebuild and update your image, and restart your deployment.</p>
| gohm'c |