<p>I have a simple Node API app, which runs using MongoDB.
I want to deploy it using Kubernetes.
I want to keep the Node app and MongoDB in separate namespaces.
I have managed to run MongoDB in one namespace and the Node app in another namespace. The Node app connects to MongoDB using a fully-qualified domain name (<code>db.namespace.svc.cluster.local</code>).
But now I want to open my Node app to the public so that it can be accessed over the internet.
I have tried to create a NodePort service in the namespace where the Node app resides, but when I connect to the service on the NodePort using a browser, nothing happens. The service hostname also doesn't resolve. However, when I curl this NodePort service from a pod in the same namespace, it gives the correct output.
What is the issue? Please help.</p>
| Amarjeet | <p>I think I have figured out the problem.
The problem is with acloudguru's lab environment. They have opened port 30080 on the worker nodes, and I was using a NodePort service without providing a <code>nodePort</code> value, so it was opening a random port which was not allowed. When I use 30080 as the <code>nodePort</code> and then try to connect, the application works fine.</p>
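<p>For reference, a minimal NodePort Service that pins the port to 30080 could look like the sketch below. The service name, namespace, selector labels, and application port are placeholders, not values from the original setup:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node-app-service      # hypothetical name
  namespace: node-app-ns      # hypothetical namespace
spec:
  type: NodePort
  selector:
    app: node-app             # must match your pod labels
  ports:
  - port: 3000                # assumed application port
    targetPort: 3000
    nodePort: 30080           # the port opened by the lab environment
</code></pre>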
| Amarjeet |
<p>My use case is the following: I want to intercept calls to the LDAP server at 172.28.0.3:389 and forward them to 172.28.0.3:636 with TLS.</p>
<p>I have followed the steps of <a href="https://istio.io/v1.6/docs/tasks/traffic-management/egress/egress-tls-origination/" rel="nofollow noreferrer">egress TLS origination</a> and it works fine. Now I am trying to use the gateway, but unfortunately I am having problems setting up the ports. I have basically copied and pasted the setup from the <a href="https://istio.io/v1.6/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway" rel="nofollow noreferrer">documentation</a> and adapted the protocols from HTTP and HTTPS to TCP and the ports 80 and 443 to 389 and 636 respectively:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - ldap.host
  addresses:
  - 172.28.0.3
  ports:
  - number: 389
    name: tcp
    protocol: TCP
  - number: 636
    name: tcp-secure
    protocol: TCP
  resolution: STATIC
  endpoints:
  - address: 172.28.0.3
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 389 # I am not sure about this
      name: tcp-port-for-tls-origination
      protocol: TCP
    hosts:
    - ldap.host
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: cnn
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - ldap.host
  gateways:
  - istio-egressgateway
  - mesh
  tcp: # I AM NOT SURE ABOUT THIS PART
  - match:
    - gateways:
      - mesh
      port: 389
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: cnn
        port:
          number: 389
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 389
    route:
    - destination:
        host: ldap.host
        port:
          number: 636
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-edition-cnn-com
spec:
  host: ldap.host
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 636
      tls:
        mode: SIMPLE # initiates TLS for connections to ldap.host
</code></pre>
<p>I have the feeling that the problem is on the <code>VirtualService</code>, however I have tried many things but without success, any hint what might be the issue would be highly appreciated.</p>
| Learner | <p>Looking into this post and your previous post, it looks like you are interested in an external <a href="https://istio.io/v1.6/docs/concepts/security/#authentication" rel="nofollow noreferrer">custom authentication</a> provider which supports LDAP integration. For example, you can use <a href="https://www.keycloak.org/" rel="nofollow noreferrer">keycloak</a>, <a href="https://auth0.com/" rel="nofollow noreferrer">Auth0</a>, or <a href="https://developers.google.com/identity/protocols/oauth2/openid-connect" rel="nofollow noreferrer">Google Auth</a>.</p>
<p>This documentation shows an external authentication setup that can be <a href="https://www.keycloak.org/2018/02/keycloak-and-istio.html" rel="nofollow noreferrer">integrated with Istio</a>. Please note that the documentation may be outdated (02/2018).</p>
<hr />
<p>Here you can find <a href="https://serverfault.com/questions/1006234/how-to-implement-user-authentication-with-istio-along-with-ldap-or-other-compone">similar problem</a>:</p>
<blockquote>
<p>As far as I'm concerned LDAP is not working in <a href="https://github.com/istio/istio/issues/15972#issuecomment-517340074" rel="nofollow noreferrer">istio</a>.
Workaround here would be either <a href="https://www.keycloak.org/" rel="nofollow noreferrer">keycloak</a> or <a href="https://auth0.com/" rel="nofollow noreferrer">auth0</a>
You can integrate both of them with istio, but it's just for authentication, It won't work as LDAP itself, at least as far as I know.</p>
</blockquote>
<hr />
<p>You can also enable authentication with JSON Web Token (JWT) validation. Istio takes care of validating the JWT tokens in incoming user requests, so if you use Istio's JWT authentication feature, your application code doesn't need to bother with JWT token validation; Istio will do it for you. It does not handle JWT token generation, though: Istio will not generate the tokens for you. You have to have an authentication micro-service that generates the tokens.
<a href="https://stackoverflow.com/questions/60483362/how-to-integrate-openidconnect-with-istio">Here</a> is a thread on how to authenticate end users using JWT.</p>
| Mikołaj Głodziak |
<p>I have deployed this using Kubernetes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
      - name: voting-app
        image: kodekloud/examplevotingapp_vote:v1
        ports:
        - containerPort: 80
</code></pre>
<p>With this Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
  selector:
    name: voting-app-pod
    app: demo-voting-app
</code></pre>
<p>After checking if the Pods are running, I get a shell to the running container where I can see the source code and make changes using vi:</p>
<pre><code>kubectl exec -it podsname -- sh
</code></pre>
<p>The problem is whenever I destroy the deployment and bring it back I lose all my changes in the source code.</p>
<p>This is the application's Dockerfile, which I got from within the running container:</p>
<pre><code># Using official python runtime base image
FROM python:2.7-alpine
# Set the application directory
WORKDIR /app
# Install our requirements.txt
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# Copy our code from the current folder to /app inside the container
ADD . /app
# Make port 80 available for links and/or publish
EXPOSE 80
# Define our command to be run when launching the container
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:80", "--log-file", "-", "--access-logfile", "-", "--workers", "4", "--keep-alive", "0"]
</code></pre>
<p>So, how can I get this source code and generate a new image?</p>
| Kaio H. Cunha | <p>I've found a solution to my problem that's not common, I guess. I have to clarify that I'm not working in a production environment. I just want to "steal" the source code from an image, since I don't have access to the repository.</p>
<p>Here is what I did:</p>
<p>1. Created a folder and subfolder to organize the code.
2. Got a shell inside the pod's container:</p>
<pre><code>docker exec -it containername sh
</code></pre>
<p>or</p>
<pre><code>kubectl exec -it podsname -- sh
</code></pre>
<ol start="3">
<li>From the shell, edited what I wanted to edit in the source code.</li>
<li>Copied the content of the app folder IN THE CONTAINER to my local subfolder(called 'dados'):</li>
</ol>
<pre><code>cp -R /app /dados/
</code></pre>
<ol start="3">
<li>Created a Dockerfile in the folder that's based on the image I want to edit, then run a command to remove the /app folder, and finally, a command to copy the contents of my local folder to the new image I was going to build.</li>
</ol>
<pre><code>FROM kodekloud/examplevotingapp_vote:v1
RUN rm -rf /app/*.*
COPY ./dados/app /app
</code></pre>
<ol start="4">
<li>Created a new repository on DockerHub called 'myusername/my-voting-app' to receive my new image.</li>
<li>Built the new image:</li>
</ol>
<pre><code>docker build -t myusername/my-voting-app .
</code></pre>
<ol start="6">
<li>Tagged the image with my repo's name:</li>
</ol>
<pre><code>docker tag kaiohenricunha/my-voting-app kaiohenricunha/my-voting-app
</code></pre>
<ol start="7">
<li>Sent to dockerhub:</li>
</ol>
<pre><code>docker push kaiohenricunha/my-voting-app
</code></pre>
<ol start="8">
<li>Now on Kubernetes, changed my pod's container image to the new image I had built and deployed them.</li>
</ol>
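<p>For completeness, the change in the last step boils down to pointing the Deployment at the new image. This is only a fragment showing the relevant fields (the rest of the Deployment from the question stays unchanged), using the image pushed to Docker Hub above:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
spec:
  template:
    spec:
      containers:
      - name: voting-app
        image: kaiohenricunha/my-voting-app   # the image pushed to Docker Hub above
</code></pre>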
| Kaio H. Cunha |
<p>I am using the <a href="https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/values.yaml" rel="nofollow noreferrer">Gitlab helm chart</a> to install Gitlab on my cluster. I want to set <code>initialRootPassword</code> so that I can log in without doing <code>kubectl get secret</code>.</p>
<pre><code>## Initial root password for this GitLab installation
## Secret created according to doc/installation/secrets.md#initial-root-password
## If allowing shared-secrets generation, this is OPTIONAL.
initialRootPassword: {}
  # secret: RELEASE-gitlab-initial-root-password
  # key: password
</code></pre>
<p>The above block is a bit confusing. Can you please help me with this? Thanks.</p>
| Karthick Sundar | <p>The initialRootPassword refers to a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> object within kubernetes, so you must first create a secret within the same namespace as your gitlab instance and then point initialRootPassword to it.</p>
<p>For example, if you want the root password to be "password", first you need to base64 encode it</p>
<pre><code>$ echo -n "password"|base64
cGFzc3dvcmQ=
</code></pre>
<p>Then add it to Kubernetes:</p>
<pre><code># secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-root-password
data:
  password: cGFzc3dvcmQ=
</code></pre>
<pre><code>kubectl apply -f secret.yaml
</code></pre>
<p>There are other ways to create the secret, <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret" rel="nofollow noreferrer">see the docs</a> for more info on that.</p>
<p>You can then set the initialRootPassword</p>
<pre><code>initialRootPassword:
  secret: gitlab-root-password
  key: password
</code></pre>
<p>The key here refers to the name of the data key in the secret object.</p>
<p>An alternative is to use Gitlab <a href="https://docs.gitlab.com/charts/installation/command-line-options.html#basic-configuration" rel="nofollow noreferrer">default values</a> which allow you to create a secret object that will be used automatically without explicitly setting initialRootPassword</p>
<p>This example is taken from <a href="https://docs.gitlab.com/charts/installation/secrets.html#initial-root-password" rel="nofollow noreferrer">gitlab docs</a> (Replace <code><name></code> with the name of the release).</p>
<pre><code>kubectl create secret generic <name>-gitlab-initial-root-password --from-literal=password=$(head -c 512 /dev/urandom | LC_CTYPE=C tr -cd 'a-zA-Z0-9' | head -c 32)
</code></pre>
| Diddi Oskarsson |
<p>This is my DockerFile</p>
<pre><code># set base image (host OS)
FROM python:3.8
# set the working directory in the container
WORKDIR /code
# command to run on container start
RUN mkdir -p /tmp/xyz-agent
</code></pre>
<p>And when I execute the following command -
<code>docker -v build .</code> - the Docker image builds successfully and I don't get any error. This is the output -</p>
<pre><code>Step 1/3 : FROM python:3.8
3.8: Pulling from library/python
b9a857cbf04d: Already exists
d557ee20540b: Already exists
3b9ca4f00c2e: Already exists
667fd949ed93: Already exists
4ad46e8a18e5: Already exists
381aea9d4031: Pull complete
8a9e78e1993b: Pull complete
9eff4cbaa677: Pull complete
1addfed3cc19: Pull complete
Digest: sha256:fe08f4b7948acd9dae63f6de0871f79afa017dfad32d148770ff3a05d3c64363
Status: Downloaded newer image for python:3.8
---> b0358f6298cd
Step 2/3 : WORKDIR /code
---> Running in 486aaa8f33ad
Removing intermediate container 486aaa8f33ad
---> b798192954bd
Step 3/3 : CMD ls
---> Running in 831ef6e6996b
Removing intermediate container 831ef6e6996b
---> 36298963bfa5
Successfully built 36298963bfa5
</code></pre>
<p>But when I log in to the container using the terminal, I don't see the directory created.
The same goes for other commands as well: they don't throw an error, but they don't create anything either.
NOTE: I'm using Docker for Desktop with Kubernetes running.</p>
| bosari | <p>For creating a directory inside a container it is better to use the <code>RUN</code> command and specify the <code>-p</code> parameter for <code>mkdir</code> so that it creates the parent directories.</p>
<p>You can also try to build your container via a docker-compose.yml which contains:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3'
services:
python-app:
build: .
container_name: <your_python_container_name>
working_dir: /code
volumes:
- <path_on_host_for_share>:/code
stdin_open: true
tty: true
</code></pre>
<p>and build your container with <code>docker-compose build</code> and <code>docker-compose up</code> afterwards.</p>
| Mike |
<p>We have a small GKE cluster with 3 nodes (2 n1-s1 nodes and one n1-s2), let's call them A, B and C, running version "v1.14.10-gke.27".
Yesterday, after a performance problem with a MySQL pod, we started to dig into the reason for the problem and discovered a high load average on virtual machine nodes (A) and (B) ... (C) was created afterwards in order to move the DB pod onto it.</p>
<p>Well, in our checks (kubectl top nodes) and (kubectl -n MYNAMESPACE top pods), we saw that the CPU/memory used in the nodes was moderate, about 60% CPU and 70% memory.</p>
<p>Ok, so we did this test. We drained node A and restarted the virtual machine by doing:</p>
<pre><code>kubectl drain --ignore-daemonsets
gcloud compute ssh A
sudo reboot
</code></pre>
<p>After rebooting virtual machine node (A) and waiting about 15 minutes, we connected again and saw this:</p>
<pre><code>gcloud compute ssh A
top
</code></pre>
<p>show a load average about 1.0 (0.9 - 1.2) ... but this machines (1 core and 3.5GB RAM) has no POD inside.
I checked the machine about 30 minutes, and the core linux system for GKE was always about load average near 1.0</p>
<p>Why ?</p>
<p>Then I did another check. In the node (B), there was only a SFTP server (CPU ussage about 3 millis).
I did the same test:</p>
<pre><code>gcloud compute ssh B
top
</code></pre>
<p>And this is what showed:</p>
<pre><code>top - 19:02:48 up 45 days, 4:40, 1 user, load average: 1.00, 1.04, 1.09
Tasks: 130 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.4 us, 1.3 sy, 0.0 ni, 95.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3697.9 total, 1383.6 free, 626.3 used, 1688.1 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 2840.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1065 root 20 0 924936 117608 66164 S 1.7 3.1 1356:05 kubelet
1932 root 20 0 768776 82748 11676 S 1.0 2.2 382:32.65 ruby
1008 root 20 0 806080 90408 26644 S 0.7 2.4 818:40.25 dockerd
183 root 20 0 0 0 0 S 0.3 0.0 0:26.09 jbd2/sda1-8
1 root 20 0 164932 7212 4904 S 0.0 0.2 17:47.38 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.09 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
</code></pre>
<p>But:</p>
<pre><code>kubectl -n MYNAMESPACE top pods | grep sftp
sftp-7d7f58cd96-fw6tm 1m 11Mi
</code></pre>
<p>CPU usage is only 1m, and RAM 11MB.</p>
<p>Why is the load average so high?</p>
<p>I'm worried about this, since this load average could hurt the performance of the pods on the cluster nodes.</p>
<p>On the other side, I set up a self-managed test Kubernetes cluster at the office with Debian VM nodes, and a node there (2 cores, 4 GB RAM), while running pods for Zammad and Jira, shows this load average:
OFFICE KUBERNETES CLOUD</p>
<pre><code>ssh user@node02
top
top - 21:11:29 up 17 days, 6:04, 1 user, load average: 0,21, 0,37, 0,21
Tasks: 161 total, 2 running, 159 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2,4 us, 1,0 sy, 0,0 ni, 96,3 id, 0,3 wa, 0,0 hi, 0,0 si, 0,0 st
MiB Mem : 3946,8 total, 213,4 free, 3249,4 used, 483,9 buff/cache
MiB Swap: 0,0 total, 0,0 free, 0,0 used. 418,9 avail Mem
</code></pre>
<p>On the office's node the load average, while running pods, is about 0.21-0.4 ....
This is more realistic and closer to what it's expected to be.</p>
<p>Another problem is that when I connect by ssh to a GKE node (A, B or C), there are no tools for monitoring the hard drive / storage like iostat and similar, so I don't know why the base GKE nodes have such a high load average with no pods scheduled.</p>
<p>Today, at critical hour, this is the GKE cloud status:</p>
<pre><code>kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-n1-s1-A 241m 25% 1149Mi 43%
gke-n1-s1-B 81m 8% 1261Mi 47%
gke-n1-s2-C 411m 21% 1609Mi 28%
</code></pre>
<p>but a top in node B, shows</p>
<pre><code>top - 11:20:46 up 45 days, 20:58, 1 user, load average: 1.66, 1.25, 1.13
Tasks: 128 total, 1 running, 127 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6.0 us, 2.3 sy, 0.0 ni, 91.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3697.9 total, 1367.8 free, 629.6 used, 1700.6 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 2837.7 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1065 root 20 0 924936 117608 66164 S 3.3 3.1 1376:27 kubelet
1008 root 20 0 806080 90228 26644 S 1.3 2.4 829:21.65 dockerd
2590758 root 20 0 136340 29056 20908 S 0.7 0.8 18:38.56 kube-dns
443 root 20 0 36200 19736 5808 S 0.3 0.5 3:51.49 google_accounts
1932 root 20 0 764164 82748 11676 S 0.3 2.2 387:52.03 ruby
1 root 20 0 164932 7212 4904 S 0.0 0.2 18:03.44 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.09 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
7 root 20 0 0 0 0 S 0.0 0.0 14:55.03 ksoftirqd/0
</code></pre>
<p><strong>EDIT 1: FINALLY LAST TEST:</strong></p>
<p><strong>1.- Create a pool with 1 node</strong></p>
<pre><code>gcloud container node-pools create testpool --cluster MYCLUSTER --num-nodes=1 --machine-type=n1-standard-1
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
testpool n1-standard-1 100 1.14.10-gke.36
</code></pre>
<p><strong>2.- Drain the node and check node status</strong></p>
<pre><code>kubectl drain --ignore-daemonsets gke-MYCLUSTER-testpool-a84f3036-16lr
kubectl get nodes
gke-MYCLUSTER-testpool-a84f3036-16lr Ready,SchedulingDisabled <none> 2m3s v1.14.10-gke.36
</code></pre>
<p><strong>3.- Restart machine, wait and top</strong></p>
<pre><code>gcloud compute ssh gke-MYCLUSTER-testpool-a84f3036-16lr
sudo reboot
gcloud compute ssh gke-MYCLUSTER-testpool-a84f3036-16lr
top
top - 11:46:34 up 3 min, 1 user, load average: 1.24, 0.98, 0.44
Tasks: 104 total, 1 running, 103 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.1 us, 1.0 sy, 0.0 ni, 95.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3697.9 total, 2071.3 free, 492.8 used, 1133.9 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 2964.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1066 root 20 0 895804 99900 65136 S 2.1 2.6 0:04.28 kubelet
1786 root 20 0 417288 74176 11660 S 2.1 2.0 0:03.13 ruby
1009 root 20 0 812868 97168 26456 S 1.0 2.6 0:09.17 dockerd
1 root 20 0 99184 6960 4920 S 0.0 0.2 0:02.25 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
5 root 20 0 0 0 0 I 0.0 0.0 0:00.43 kworker/u2:0
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
7 root 20 0 0 0 0 S 0.0 0.0 0:00.08 ksoftirqd/0
8 root 20 0 0 0 0 I 0.0 0.0 0:00.20 rcu_sched
9 root 20 0 0 0 0 I 0.0 0.0 0:00.00 rcu_bh
10 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
11 root rt 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
14 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khungtaskd
16 root 20 0 0 0 0 S 0.0 0.0 0:00.00 oom_reaper
17 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 writeback
</code></pre>
<p><strong>1.24</strong> load average without any custom pods?</p>
<p><strong>EDIT 2</strong>
Thanks @willrof. I tried using "toolbox" and ran the "atop" and "iotop" commands. I see nothing abnormal, but the load average is about (1 - 1.2). As you can see, the CPU is doing "nothing" and the IO operations are near zero. Here are the results:</p>
<pre><code>iotop
Total DISK READ : 0.00 B/s | Total DISK WRITE : 0.00 B/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % systemd noresume noswap cros_efi
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
2591747 be/4 nobody 0.00 B/s 0.00 B/s 0.00 % 0.00 % monitor --source=kube-proxy:http://local~ng.googleapis.com/ --export-interval=120s
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]
3399685 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % sudo systemd-nspawn --directory=/var/lib~/resolv.conf:/etc/resolv.conf --user=root
6 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
7 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
8 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched]
9 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_bh]
10 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
11 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
12 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [cpuhp/0]
13 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kdevtmpfs]
14 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [netns]
15 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [khungtaskd]
16 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [oom_reaper]
17 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [writeback]
18 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kcompactd0]
19 be/7 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [khugepaged]
20 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [crypto]
21 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kintegrityd]
22 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kblockd]
23 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ata_sff]
24 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdogd]
2590745 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % containerd-shim -namespace moby -workdir~runtime-root /var/run/docker/runtime-runc
atop
PRC | sys 14h12m | user 41h11m | #proc 140 | #trun 1 | #tslpi 544 | #tslpu 1 | #zombie 0 | clones 118e5 | #exit 0 |
CPU | sys 2% | user 5% | irq 0% | idle 93% | wait 0% | steal 0% | guest 0% | curf 2.30GHz | curscal ?% |
CPL | avg1 1.17 | avg5 1.17 | avg15 1.17 | | csw 669768e4 | | intr 26835e5 | | numcpu 1 |
MEM | tot 3.6G | free 221.1M | cache 2.1G | buff 285.2M | slab 313.3M | shmem 2.2M | vmbal 0.0M | hptot 0.0M | hpuse 0.0M |
SWP | tot 0.0M | free 0.0M | | | | | | vmcom 6.4G | vmlim 1.8G |
PAG | scan 54250 | steal 37777 | stall 0 | | | | | swin 0 | swout 0 |
LVM | dm-0 | busy 0% | read 6747 | write 0 | KiB/r 36 | KiB/w 0 | MBr/s 0.0 | MBw/s 0.0 | avio 2.00 ms |
DSK | sda | busy 0% | read 19322 | write 5095e3 | KiB/r 37 | KiB/w 8 | MBr/s 0.0 | MBw/s 0.0 | avio 0.75 ms |
DSK | sdc | busy 0% | read 225 | write 325 | KiB/r 24 | KiB/w 13315 | MBr/s 0.0 | MBw/s 0.0 | avio 1.75 ms |
DSK | sdb | busy 0% | read 206 | write 514 | KiB/r 26 | KiB/w 10 | MBr/s 0.0 | MBw/s 0.0 | avio 0.93 ms |
NET | transport | tcpi 69466e3 | tcpo 68262e3 | udpi 135509 | udpo 135593 | tcpao 4116e3 | tcppo 2797e3 | tcprs 738077 | udpie 0 |
NET | network | ipi 222967e3 | ipo 216603e3 | ipfrw 1533e5 | deliv 6968e4 | | | icmpi 74445 | icmpo 6254 |
NET | vethf6a 0% | pcki 40168e3 | pcko 39391e3 | sp 10 Gbps | si 15 Kbps | so 43 Kbps | erri 0 | erro 0 | drpo 0 |
NET | veth046 0% | pcki 8800433 | pcko 9133058 | sp 10 Gbps | si 2 Kbps | so 4 Kbps | erri 0 | erro 0 | drpo 0 |
NET | vethe89 0% | pcki 10923 | pcko 23560 | sp 10 Gbps | si 0 Kbps | so 0 Kbps | erri 0 | erro 0 | drpo 0 |
NET | veth647 0% | pcki 2583709 | pcko 2845889 | sp 10 Gbps | si 0 Kbps | so 0 Kbps | erri 0 | erro 0 | drpo 0 |
NET | veth6be 0% | pcki 374054 | pcko 448480 | sp 10 Gbps | si 0 Kbps | so 0 Kbps | erri 0 | erro 0 | drpo 0 |
NET | eth0 ---- | pcki 12094e4 | pcko 11533e4 | sp 0 Mbps | si 103 Kbps | so 56 Kbps | erri 0 | erro 0 | drpo 0 |
NET | cbr0 ---- | pcki 98061e3 | pcko 92356e3 | sp 0 Mbps | si 36 Kbps | so 71 Kbps | erri 0 | erro 0 | drpo 0 |
NET | lo ---- | pcki 9076898 | pcko 9076898 | sp 0 Mbps | si 5 Kbps | so 5 Kbps | erri 0 | erro 0 | drpo 0 |
*** system and process activity since boot ***
</code></pre>
<p>Can anyone help me?</p>
<p>What can I do?</p>
<p>Is this behaviour normal on GKE nodes without pods?</p>
<p>Should I change to another Kubernetes provider?</p>
<p>Thanks in advance.</p>
| Juan Miguel | <p>After exchanging messages with Google support, this seems to be a problem with the stable release version of the Google VM image.</p>
<p>The last official stable version is v1.14.10-gke.36.</p>
<p>We have seen this poor load behaviour since v1.14.10-gke.27 (we didn't test earlier versions).</p>
<p>We are waiting for a response from the Google product engineers about this.
We checked the latest version available today, "1.16.9-gke.2", and the load average is normal at idle, about 0.15 and lower, but this is not a "stable" release.</p>
<p>If you create a cluster with the gcloud command, it gives you the latest "stable" version, which is "v1.14.10-gke.36" today, so everybody using "v1.14.10-gke.X" should have this problem.</p>
<p>The solution is ...</p>
<p>a) Wait for the official response from the Google product engineers.</p>
<p>b) Move / update to another version of the cluster / nodes (perhaps not a stable one).</p>
<p><strong>EDIT. 2020/06/24. Google's Response</strong></p>
<blockquote>
<p>1- I have informed your feedback to our GKE product engineering team
and they are able to reproduce the issue in gke version 1.14.10-gke-36
in cos and cos_containerd but the average load ubuntu and
ubuntu_containerd have lower average load. So, our GKE product
engineer suggested the quick workaround to upgrade the cluster and
node pool to 1.15. For the permanent fix our GKE product team is
working but I do not have any ETA to share as of now.</p>
<p>2- How to upgrade the cluster: for best practice I found a
document[1], in this way you can upgrade your cluster with zero
downtime. Please note while the master (cluster) is upgrade the
workload is not impacted but we will not be able to reach the api
server. So we can not deploy new workload or make any change or
monitor the status during the upgrade. But we can make the cluster to
regional cluster which has multiple master node. Also this document
suggested two way to upgrade nodepool Rolling update and Migration
with node pools. Now to address the PV and PVC, I have tested in my
project and found during the nodepool upgrade the PVC is not deleted
so as the PV is not deleted (though the reclaim policy defined as
Delete). But I would suggest to take the backup of your disk
(associated with PV) and recreate the PV with following the
document[2].</p>
<p>3- lastly why the 1.14.10-gke.36 is default version? The default
version is set and updated gradually and as of document[3] the last
time it was set to 1.14.10-gke-36 on May 13, and this can be change in
any next update. But we can define the gke cluster version manually.</p>
<p>Please let me know if you have query or feel like I have missed something here. And for 1.14.10-gke-36 issue update you can expect an update from me on Friday (June 26, 2020) 16:00 EDT time.</p>
<p>[1]-
<a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a></p>
<p>[2]-
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd</a></p>
<p>[3]-
<a href="https://cloud.google.com/kubernetes-engine/docs/release-notes#new_default_version" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/release-notes#new_default_version</a></p>
</blockquote>
| Juan Miguel |
<p>Can anyone help with this, please?</p>
<p>I have a mongo pod assigned with its service. I need to execute some commands while starting the container in the pod.</p>
<p>I found a small examples like this:</p>
<pre><code>command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
</code></pre>
<p>But I want to execute these commands while starting the pod:</p>
<pre><code>use ParkInDB
db.roles.insertMany( [ {name :"ROLE_USER"}, {name:"ROLE_MODERATOR"}, {name:"ROLE_ADMIN"} ])
</code></pre>
| benhsouna chaima | <p>You need to choose one of these solutions:</p>
<p>1- use an <em><strong>init-container</strong></em> in your deployment to run the commands or a script</p>
<p>2- use command and args in the deployment yaml</p>
<p>For the init-container approach, see this <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">page</a>; a minimal sketch is shown below.</p>
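<p>A minimal sketch of the init-container approach (option 1) could look like the following fragment, where an init container in the application Deployment seeds the database before the main container starts. The service name <code>mongodb-service</code>, the <code>mongo:4.4</code> image, and the app container names are assumptions, not values from the original setup:</p>
<pre><code>spec:
  template:
    spec:
      initContainers:
      - name: seed-roles                  # hypothetical name
        image: mongo:4.4                  # assumed image that ships the mongo shell
        command: ["/bin/sh", "-c"]
        args:
        - >
          mongo --host mongodb-service ParkInDB --eval
          'db.roles.insertMany([{name:"ROLE_USER"},{name:"ROLE_MODERATOR"},{name:"ROLE_ADMIN"}])'
      containers:
      - name: app                         # your main application container
        image: your-app-image             # placeholder
</code></pre>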
<p>For command and args, use this model in your deployment yaml file:</p>
<pre><code>- image:
  name:
  command: ["/bin/sh"]
  args: ["-c", "PUT_YOUR_COMMAND_HERE"]
</code></pre>
| SAEED mohassel |
<p>Below is my Pod manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-debian-container
spec:
  containers:
  - name: pi
    image: debian
    command: ["/bin/echo"]
    args: ["Hello, World."]
</code></pre>
<p>And below is the output of "describe" command for this Pod:</p>
<pre><code>C:\Users\so.user\Desktop>kubectl describe pod/pod-debian-container
Name: pod-debian-container
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 15 Feb 2021 21:47:43 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.21
IPs:
IP: 10.244.0.21
Containers:
pi:
Container ID: cri-o://f9081af183308f01bf1de6108b2c988e6bcd11ab2daedf983e99e1f4d862981c
Image: debian
Image ID: docker.io/library/debian@sha256:102ab2db1ad671545c0ace25463c4e3c45f9b15e319d3a00a1b2b085293c27fb
Port: <none>
Host Port: <none>
Command:
/bin/echo
Args:
Hello, World.
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 15 Feb 2021 21:56:49 +0530
Finished: Mon, 15 Feb 2021 21:56:49 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sxlc9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sxlc9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sxlc9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/pod-debian-container to minikube
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.1633901s
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.4271866s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.0252907s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.1897469s
Normal Started 14m (x4 over 15m) kubelet Started container pi
Normal Pulling 13m (x5 over 15m) kubelet Pulling image "debian"
Normal Created 13m (x5 over 15m) kubelet Created container pi
Normal Pulled 13m kubelet Successfully pulled image "debian" in 9.1170801s
Warning BackOff 5m25s (x31 over 15m) kubelet Back-off restarting failed container
Warning Failed 10s kubelet Error: ErrImagePull
</code></pre>
<p>And below is another output:</p>
<pre><code>C:\Users\so.user\Desktop>kubectl get pod,job,deploy,rs
NAME READY STATUS RESTARTS AGE
pod/pod-debian-container 0/1 CrashLoopBackOff 6 15m
</code></pre>
<p>Below are my questions:</p>
<ul>
<li>I can see that the Pod is running but the container inside it is crashing. I can't understand "why", because I see that the Debian image is successfully pulled</li>
<li>As you can see in the "kubectl get pod,job,deploy,rs" output, <code>RESTARTS</code> is equal to 6; is it the Pod which has restarted 6 times or is it the container?</li>
<li>Why did 6 restarts happen? I didn't mention anything in my spec</li>
</ul>
| pjj | <p>This looks like a liveness problem related to the CrashLoopBackOff. Have you considered taking a look at this <a href="https://managedkube.com/kubernetes/pod/failure/crashloopbackoff/k8sbot/troubleshooting/2019/02/12/pod-failure-crashloopbackoff.html" rel="nofollow noreferrer">blog post</a>? It explains very well how to debug the problem.</p>
| Jinja_dude |
<p>I have a headless service running with multiple replicas. When trying to verify client-side load balancing using round robin, I see that all requests end up in the same replica.
The client setup looks like the following:</p>
<pre><code>conn, err := grpc.Dial(
    address,
    grpc.WithInsecure(),
    grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
)
</code></pre>
<p>I've verified that there are multiple endpoints in the service. I also verified that the service resolves to these multiple IPs, but somehow the client connects only to the first pod in that list.
<code>MAX_CONNECTION_AGE</code> is set to 30s on the server side to ensure the client re-connects occasionally in case there has been a scale-up.
I've followed numerous articles on how to set this up and it just does not work. What am I missing?</p>
| Vardan Saakian | <p>The key was to explicitly use <code>dns:///</code> as a prefix to the target, despite the fact that it is stated as the default in the documentation.</p>
| Vardan Saakian |
<p>I have an EKS setup (v1.16) with 2 ASGs: one for compute ("c5.9xlarge") and the other for GPU ("p3.2xlarge").
Both are configured as Spot instances and set with desiredCapacity 0.</p>
<p>The K8s Cluster Autoscaler works as expected and scales out each ASG when necessary. The issue is that the newly created GPU instance is not recognized by the master, and running <code>kubectl get nodes</code> shows nothing for it.
I can see that the EC2 instance was in the Running state and I could also ssh into the machine.</p>
<p>I double-checked the labels and tags and compared them to the "compute" ones.
Both are configured almost identically; the only difference is that the GPU nodegroup has a few additional tags.</p>
<p>Since I'm using the eksctl tool (v0.35.0) and the compute nodeGroup vs. GPU nodeGroup is basically copy&paste, I can't figure out what the problem could be.</p>
<p>UPDATE:
After sshing into the instance, I could see the following error (/var/log/messages):</p>
<pre><code>failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
</code></pre>
<p>and the kubelet service crashed.</p>
<p>Could it be possible that my GPU node group uses the wrong AMI (amazon-eks-gpu-node-1.18-v20201211)?</p>
| Cowabunga | <p>As a simple workaround, you can use preBootstrapCommands in the eksctl yaml config file:</p>
<pre><code>- name: test-node-group
  preBootstrapCommands:
    - "sed -i 's/cgroupDriver:.*/cgroupDriver: cgroupfs/' /etc/eksctl/kubelet.yaml"
</code></pre>
| Wael Gaith |
<p>To start with - I am a bit of a newbie to Kubernetes and I might omit some fundamentals.</p>
<p>I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry.</p>
<p>The Python app connects to the following address:
<code>KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092')</code></p>
<p>kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka
  name: kafka-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: kafka
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.network/pipeline-network: "true"
        io.kompose.service: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_LISTENERS
          value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "LISTENER_INTERNAL"
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
          value: "0"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
          value: "1"
        - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: finnhub-streaming-data-pipeline-kafka:latest
        imagePullPolicy: Never
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","/kafka-setup-k8s.sh"]
        name: kafka-app
        ports:
        - containerPort: 9092
        - containerPort: 29092
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    app: kafka
  ports:
  - protocol: TCP
    name: firstport
    port: 9092
    targetPort: 9092
  - protocol: TCP
    name: secondport
    port: 29092
    targetPort: 29092
</code></pre>
<p>finnhub-producer.yaml (aka my Python app deployment):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: finnhubproducer
  name: finnhubproducer
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: finnhubproducer
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.network/pipeline-network: "true"
        io.kompose.service: finnhubproducer
    spec:
      containers:
      - env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_SERVER
          value: kafka-service.default.svc.cluster.local
        - name: KAFKA_TOPIC_NAME
          value: market
        image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest
        imagePullPolicy: Never
        name: finnhubproducer
        ports:
        - containerPort: 8001
        resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: finnhubproducer
  name: finnhubproducer
spec:
  ports:
  - name: "8001"
    port: 8001
    targetPort: 8001
  selector:
    io.kompose.service: finnhubproducer
status:
  loadBalancer: {}
</code></pre>
<p>pipeline-network-networkpolicy.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: pipeline-network
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          io.kompose.network/pipeline-network: "true"
  podSelector:
    matchLabels:
      io.kompose.network/pipeline-network: "true"
</code></pre>
<p>EDIT:
Dockerfile for Kafka image:</p>
<pre><code>FROM confluentinc/cp-kafka:6.2.0
COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh
</code></pre>
<p>kafka-setup-k8s.sh:</p>
<pre><code>
# blocks until kafka is reachable
kafka-topics --bootstrap-server localhost:29092 --list
echo -e 'Creating kafka topics'
kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1
echo -e 'Successfully created the following topics:'
kafka-topics --bootstrap-server localhost:29092 --list
</code></pre>
| RSK RSK | <p>I have managed to make it work by deleting the Service definitions from the manifests and running <code>kubectl expose deployment kafka-app</code> instead. The issue comes from Kompose labeling.</p>
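<p>For anyone hitting the same issue: the Service in the original manifest selects <code>app: kafka</code>, while the Kompose-generated pod template is labelled <code>io.kompose.service: kafka</code>, so the Service never gets any endpoints. Instead of <code>kubectl expose</code>, a Service whose selector matches the generated labels should also work; this is only a sketch based on the manifests above:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    io.kompose.service: kafka   # matches the pod template labels generated by Kompose
  ports:
  - name: firstport
    protocol: TCP
    port: 9092
    targetPort: 9092
  - name: secondport
    protocol: TCP
    port: 29092
    targetPort: 29092
</code></pre>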
| RSK RSK |
<p>I have a Python web service that collects data from frontend clients. Every few seconds, it creates a Pulsar producer on our topic and sends the collected data. I have also set up a dockerfile to build an image and am working on deploying it to our organization's Kubernetes cluster.</p>
<p>The Pulsar code relies on certificate and key .pem files for TLS authentication, which are loaded over file paths in the test code. However, if the .pem files are included in the built Docker image, it will result in an obvious compliance violation from the Twistlock scan on our Kubernetes instance.</p>
<p>I am pretty inexperienced with Docker, Kubernetes, and security with certificates in general. What would be the best way to store and load the .pem files for use with this web service?</p>
| user3093540 | <p>You can mount certificates in the Pod with a Kubernetes Secret.</p>
<p>First, you need to create a Kubernetes Secret.
(Copy your certificate to a machine where kubectl is configured for your Kubernetes cluster; for example, copy the mykey.pem file to the /opt/certs folder.)</p>
<pre><code>kubectl create secret generic mykey-pem --from-file=/opt/certs/
</code></pre>
<p>Confirm it was created correctly:</p>
<pre><code>kubectl describe secret mykey-pem
</code></pre>
<p>Mount your secret in your deployment (for example nginx deployment):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: "/etc/nginx/ssl"
          name: nginx-ssl
          readOnly: true
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-ssl
        secret:
          secretName: mykey-pem
      restartPolicy: Always
</code></pre>
<p>After that, the .pem files will be available inside the container and you don't need to include them in the Docker image.</p>
| Alex0M |
<p>I enabled <code>ingress</code> on <code>minikube</code></p>
<pre><code>C:\WINDOWS\system32>minikube addons enable ingress
- Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* The 'ingress' addon is enabled
</code></pre>
<p>But when I list it, I don't see it</p>
<pre><code>C:\WINDOWS\system32>minikube kubectl -- get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-px725 1/1 Running 0 13d
etcd-minikube 1/1 Running 0 13d
kube-apiserver-minikube 1/1 Running 6 13d
kube-controller-manager-minikube 1/1 Running 0 13d
kube-proxy-h7r79 1/1 Running 0 13d
kube-scheduler-minikube 1/1 Running 0 13d
storage-provisioner 1/1 Running 76 13d
</code></pre>
<p>Is the <code>ingress</code> not enabled? How can I check?</p>
| Manu Chadha | <p>I have recreated this situation and got the same result. After executing the command:</p>
<pre><code>minikube addons enable ingress
</code></pre>
<p>I have the same output as yours:</p>
<pre><code> - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* The 'ingress' addon is enabled
</code></pre>
<p>I also have the same output when I execute:</p>
<pre><code>minikube kubectl -- get pod -n kube-system
</code></pre>
<hr />
<p><strong>Solution:</strong>
First, you can list the namespaces with the command:</p>
<pre><code>minikube kubectl get namespaces
</code></pre>
<p>And your output should be as follows:</p>
<pre><code>NAME STATUS AGE
default Active 4m46s
ingress-nginx Active 2m28s
kube-node-lease Active 4m47s
kube-public Active 4m47s
kube-system Active 4m47s
</code></pre>
<p>The ingress should be in the <code>ingress-nginx</code> namespace. Execute:</p>
<pre><code>minikube kubectl -- get pods --namespace ingress-nginx
</code></pre>
<p>and then your output should be as follows:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-nqnvj 0/1 Completed 0 2m56s
ingress-nginx-admission-patch-62z9z 0/1 Completed 0 2m55s
ingress-nginx-controller-5d88495688-ssv5c 1/1 Running 0 2m56s
</code></pre>
<p><strong>Summary - your ingress controller should work, just in a different namespace.</strong></p>
| Mikołaj Głodziak |
<p>I have managed to install Prometheus and its adapter, and I want to use one of the pod metrics for autoscaling.</p>
<pre><code> kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . |grep "pods/http_request".
"name": "pods/http_request_duration_milliseconds_sum",
"name": "pods/http_request",
"name": "pods/http_request_duration_milliseconds",
"name": "pods/http_request_duration_milliseconds_count",
"name": "pods/http_request_in_flight",
</code></pre>
<p>Checking the API, I want to use <code>pods/http_request</code>, so I added it to my HPA configuration:</p>
<pre><code>---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 4
  maxReplicas: 8
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_request
      target:
        type: AverageValue
        averageValue: 200
</code></pre>
<p>After applying the yaml and checking the HPA status, it shows up as <code><unknown></code>:</p>
<pre><code>$ k apply -f app-hpa.yaml
$ k get hpa
NAME REFERENCE TARGETS
app Deployment/app 306214400/2000Mi, <unknown>/200 + 1 more...
</code></pre>
<p>But when using other pod metrics such as <code>pods/memory_usage_bytes</code> the value is properly detected</p>
<p>Is there a way to check the proper values for this metric? And how do I properly add it to my HPA configuration?</p>
<p>Reference <a href="https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/hpa.html" rel="nofollow noreferrer">https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/hpa.html</a></p>
| Ryan Clemente | <p>First, deploy the metrics server; it should be up and running.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p>Then, after a few seconds, the metrics server is deployed. Check the HPA; it should be resolved.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
.
.
kube-system metrics-server 1/1 1 1 34s
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ha-xxxx-deployment Deployment/xxxx-deployment 1%/5% 1 10 1 6h46m
</code></pre>
| Shahabaj S. Shaikh |
<p>The Kubernetes <code>vertical pod autoscaler</code> (which autoscales the memory and CPU resources of pods) necessitates a restart of the pod to be able to use the newly assigned resources, which might add a small window of unavailability.</p>
<p>My question is: if the pod's deployment is using a <code>rolling update</code>, would that ensure zero downtime and zero window of unavailability when the VPA recommendation is applied?</p>
<p>Thank you.</p>
| Mazen Ezzeddine | <p>From the <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p><strong>Rolling updates</strong> allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p>
</blockquote>
<p>In this documentation, you will find a very good rolling update overview:</p>
<blockquote>
<p>Rolling updates allow the following actions:</p>
<ul>
<li>Promote an application from one environment to another (via container image updates)</li>
<li>Rollback to previous versions</li>
<li>Continuous Integration and Continuous Delivery of applications with zero downtime</li>
</ul>
</blockquote>
<p>Here you can find information about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling update deployment</a>:</p>
<blockquote>
<p>The Deployment updates Pods in a rolling update fashion when <code>.spec.strategy.type==RollingUpdate</code>. You can specify <code>maxUnavailable</code> and <code>maxSurge</code> to control the rolling update process.</p>
</blockquote>
<p>Additionally, you can add another 2 fields: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">Max Unavailable</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="nofollow noreferrer">Max Surge</a>.</p>
<blockquote>
<p><code>.spec.strategy.rollingUpdate.maxUnavailable</code> is an optional field that specifies the maximum number of Pods that can be unavailable during the update process.</p>
</blockquote>
<blockquote>
<p><code>.spec.strategy.rollingUpdate.maxSurge</code> is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods.</p>
</blockquote>
<p>Now it's up to you how you set these values. Here are some options (a minimal example of the first option is shown after the list):</p>
<ul>
<li><strong>Deploy by adding a Pod, then remove an old one:</strong> <code>maxSurge</code> = 1, <code>maxUnavailable</code> = 0. With this configuration, Kubernetes will spin up an additional Pod, then stop an “old” one down.</li>
<li><strong>Deploy by removing a Pod, then add a new one:</strong> <code>maxSurge</code> = 0, <code>maxUnavailable</code> = 1. In that case, Kubernetes will first stop a Pod before starting up a new one.</li>
<li><strong>Deploy by updating pods as fast as possible:</strong> <code>maxSurge</code> = 1, <code>maxUnavailable</code> = 1. This configuration drastically reduce the time needed to switch between application versions, but combines the cons from both the previous ones.</li>
</ul>
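<p>As a minimal sketch, the first option from the list above would be expressed in a Deployment spec roughly like this (only the strategy-related fields are shown; the replica count is an arbitrary placeholder):</p>
<pre><code>spec:
  replicas: 3                # assumed replica count
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # add one new Pod first...
      maxUnavailable: 0      # ...then remove an old one, keeping capacity
</code></pre>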
<p>See also:</p>
<ul>
<li><a href="https://www.exoscale.com/syslog/kubernetes-zero-downtime-deployment/" rel="nofollow noreferrer">good article about zero downtime</a></li>
<li><a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">guide with examples</a></li>
</ul>
| Mikołaj Głodziak |
<p>I would like to set the value of <code>terminationMessagePolicy</code> to <code>FallbackToLogsOnError</code> by default for all my pods.</p>
<p>Is there any way to do that?</p>
<p>I am running Kubernetes 1.21.</p>
| ITChap | <p>Community wiki answer to summarise the topic.</p>
<p>The answer provided by <a href="https://stackoverflow.com/users/14704799/gohmc">gohm'c</a> is good. It is not possible to change this value from the cluster level. You can find more information about it <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/_print/#customizing-the-termination-message" rel="nofollow noreferrer">in the official documentation</a>:</p>
<blockquote>
<p>Moreover, users can set the <code>terminationMessagePolicy</code> field of a Container for further customization. This field defaults to "<code>File</code>" which means the termination messages are retrieved only from the termination message file. By setting the <code>terminationMessagePolicy</code> to "<code>FallbackToLogsOnError</code>", you can tell Kubernetes to use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller.</p>
</blockquote>
<p>See also this page about <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core" rel="nofollow noreferrer">Container v1 core API</a> for 1.21 version. You can find there information about <code>terminationMessagePolicy</code>:</p>
<blockquote>
<p>Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.</p>
</blockquote>
<p>This can be done only from the Container level.</p>
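<p>For illustration, setting it per container looks roughly like the sketch below (the pod and image names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # hypothetical name
spec:
  containers:
  - name: app
    image: your-app-image                    # placeholder
    terminationMessagePolicy: FallbackToLogsOnError
</code></pre>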
| Mikołaj Głodziak |
<p>Hello fellow K8s users,</p>
<p>I'm trying to understand whether there is a base request and/or limit per pod/container in k8s as it is today, or if you know of a planned change in the future regarding that.</p>
<p>I've seen this answer:
<a href="https://stackoverflow.com/questions/57041111/what-is-the-default-memory-allocated-for-a-pod">What is the default memory allocated for a pod</a>
stating that there isn't any, at least for Google's implementation of k8s, and I'd like to know for sure whether that also holds for the current state of k8s in on-prem deployments.</p>
<p>Are there any base request or limit values for a container/pod?</p>
<p>EDIT:
Also, is there a way k8s could predict a container's memory request from the application's language or from environment variables set for the deployment (like from a Java container's RUN command or env: JVM_OPTS -Xms1G -Xmx1G)?</p>
| Noam Yizraeli | <p>There isn't a default limit or request.
In order to configure default resources you should create a LimitRange resource as described here: <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html#dev-viewing-limit-ranges" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html#dev-viewing-limit-ranges</a></p>
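<p>A minimal LimitRange sketch that sets default requests and limits for containers in a namespace could look like this (the resource name, namespace, and values are arbitrary placeholders):</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # hypothetical name
  namespace: my-namespace       # target namespace
spec:
  limits:
  - type: Container
    defaultRequest:             # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:                    # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
</code></pre>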
<p>If you want every new project to be created with certain resources limits you can modify the default project template as described here: <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/managing_projects.html#modifying-the-template-for-new-projects" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.11/admin_guide/managing_projects.html#modifying-the-template-for-new-projects</a></p>
<p>This doesn't change in 4.6 either; only the implementation of how to modify the LimitRange or default project template changes. (The methodology is exactly the same.)</p>
<p>As for your question about predicting the resources of applications, there are some players around this issue. I've only heard about Turbonomic, which can even change your Deployments' resources automatically based on utilization and maybe also some custom metrics.</p>
| Stav Bernaz |
<p>I need to create a deployment descriptor "A" yaml in which I can find the endpoint IP address of a pod (that belongs to a deployment "B"). There is an option to use the Downward API, but I don't know if I can use it in that case.</p>
| Angel | <p>If I understand correctly, you want to map the <code>test.api.com</code> hostname to the IP address of a specific Pod.<br />
As <strong>@whites11</strong> rightly pointed out, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#with-selectors" rel="nofollow noreferrer">Headless Services with selectors</a>:</p>
<blockquote>
<p>For headless Services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return A records (IP addresses) that point directly to the Pods backing the Service.</p>
</blockquote>
<p>In this case, it may be difficult to properly configure the <code>/etc/hosts</code> file inside a Pod, but it is possible to configure the Kubernetes cluster DNS to achieve this goal.</p>
<p>If you are using <code>CoreDNS</code> as a DNS server, you can configure <code>CoreDNS</code> to map one domain (<code>test.api.com</code>) to another domain (headless service DNS name) by adding a <code>rewrite</code> rule.</p>
<p>I will provide an example to illustrate how it works.</p>
<hr />
<p>First, I prepared a sample <code>web</code> Pod with an associated <code>web</code> Headless Service:</p>
<pre><code># kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/web 1/1 Running 0 66m 10.32.0.2 kworker <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/web ClusterIP None <none> 80/TCP 65m run=web
</code></pre>
<p>We can check if the <code>web</code> headless Service returns A record (IP address) that points directly to the <code>web</code> Pod:</p>
<pre><code># kubectl exec -i -t dnsutils -- nslookup web.default.svc
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: web.default.svc.cluster.local
Address: 10.32.0.2
</code></pre>
<p>Next, we need to configure <code>CoreDNS</code> to map <code>test.api.com</code> -> <code>web.default.svc.cluster.local</code>.</p>
<p>Configuration of <code>CoreDNS</code> is stored in the <code>coredns</code> <code>ConfigMap</code> in the <code>kube-system</code> namespace. You can edit it using:</p>
<pre><code># kubectl edit cm coredns -n kube-system
</code></pre>
<p>Just add one <code>rewrite</code> rule, like in the example below:</p>
<pre><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
rewrite name test.api.com web.default.svc.cluster.local # mapping test.api.com to web.default.svc.cluster.local
...
</code></pre>
<p>To reload CoreDNS, we may delete <code>coredns</code> Pods (<code>coredns</code> is deployed as Deployment, so new Pods will be created)</p>
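<p>For example (assuming the default <code>k8s-app=kube-dns</code> label that kubeadm-based installations put on the CoreDNS Pods):</p>
<pre><code># kubectl delete pod -n kube-system -l k8s-app=kube-dns
</code></pre>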
<p>Finally, we can check how it works:</p>
<pre><code># kubectl exec -i -t dnsutils -- nslookup test.api.com
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: test.api.com
Address: 10.32.0.2
</code></pre>
<p>As you can see, the <code>test.api.com</code> domain also returns the IP address of the <code>web</code> Pod.</p>
<p>For more information on the <code>rewrite</code> plugin, see the <a href="https://coredns.io/plugins/rewrite/" rel="nofollow noreferrer">Coredns rewrite documentation</a>.</p>
| matt_j |
<p>With Kubernetes 1.22, the beta API for the <code>CustomResourceDefinition</code> <code>apiextensions.k8s.io/v1beta1</code> was removed and replaced with <code>apiextensions.k8s.io/v1</code>. While changing the CRDs, I have come to realize that my older controller (operator pattern, originally written for <code>v1alpha1</code>) still tries to list <code>apiextensions.k8s.io/v1alpha1</code> even though I have changed the CRD to <code>apiextensions.k8s.io/v1</code>.</p>
<p>I have read <a href="https://stackoverflow.com/questions/58481850/no-matches-for-kind-deployment-in-version-extensions-v1beta1">this source</a> and it states that for deployment, I should change the API version but my case is an extension of this since I don't have the controller for the new API.</p>
<p>Do I need to write a new controller for the new API version?</p>
| The_Lost_Avatar | <blockquote>
<p>Do I need to write a new controller for the new API version ?</p>
</blockquote>
<p>Unfortunately, it looks like it does. If you are unable to apply what is described in <a href="https://stackoverflow.com/questions/54778620/how-to-update-resources-after-customresourcedefinitions-changes">this similar question</a>, because you are using a custom controller then you need to create your own new controller (if you cannot change API inside it) that will work with the supported API. Look at <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers" rel="nofollow noreferrer">Custom controllers</a> page in the official documentation.</p>
<blockquote>
<p>I am not sure if the controller can manage the new API version. Even after changing the API version of the CRD to v1 from v1Alpha1, I get an error message stating that tha controller is trying to list CRD with API version v1alpha1.</p>
</blockquote>
<p>It looks like the controller has some bugs. There should be no problem referencing the new API, as described in the deprecation documentation quoted below:</p>
<blockquote>
<p>The <strong>v1.22</strong> release will stop serving the following deprecated API versions in favor of newer and more stable API versions:</p>
<ul>
<li>Ingress in the <strong>extensions/v1beta1</strong> API version will no longer be served</li>
<li>Migrate to use the <strong>networking.k8s.io/v1beta1</strong> API version, available since v1.14. Existing persisted data can be retrieved/updated via the new version.</li>
</ul>
</blockquote>
<blockquote>
<p>Kubernetes 1.16 is due to be released in September 2019, so be sure to audit your configuration and integrations now!</p>
<ul>
<li>Change YAML files to reference the newer APIs</li>
<li><strong>Update custom integrations and controllers to call the newer APIs</strong></li>
<li>Update third party tools (ingress controllers, continuous delivery systems) to call the newer APIs</li>
</ul>
</blockquote>
<p>See also <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="nofollow noreferrer">Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know</a>.</p>
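<p>As a quick sanity check before (re)writing the controller, you can verify which API versions the cluster actually serves and which versions your CRD itself defines, for example (the CRD name is a placeholder):</p>
<pre><code># list the apiextensions versions served by the API server
kubectl api-versions | grep apiextensions.k8s.io

# list the versions defined/served by a given CRD
kubectl get crd <your-crd-name> -o jsonpath='{.spec.versions[*].name}'
</code></pre>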
| Mikołaj Głodziak |
<p>I installed Istio 1.6, using <em><strong>istioctl install --set profile=demo</strong></em>.
But I could only see a couple of metrics related to Kubernetes nodes. I can see the following configuration related to Kubernetes nodes:</p>
<pre><code>kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
</code></pre>
<p>Do I need to install node exporter daemonset?</p>
<p>Thanks</p>
| Pragmatic | <p>You must have missed some step. I reproduced and it is looking good on my end.
<br></p>
<p><strong>Double check this steps:</strong></p>
<p>Verify that the Prometheus service is running in the cluster:</p>
<pre><code>$ kubectl -n istio-system get svc prometheus
</code></pre>
<p>Launch the Prometheus UI</p>
<pre><code>istioctl dashboard prometheus
</code></pre>
<p>Execute a Prometheus query (click <kbd>Execute</kbd>). E.g.:</p>
<pre><code>istio_requests_total
</code></pre>
<p>Generate some traffic against the product page:</p>
<pre><code>export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl http://$GATEWAY_URL/productpage
</code></pre>
<hr />
<p><strong>Edit:</strong> for node metrics</p>
<p>Yes, you are right: node exporter is not included.<br />
The fastest way to add it manually is using Helm (literally one line once Helm is prepared):</p>
<pre><code>// Install helm
curl -L https://git.io/get_helm.sh | bash
// Install tiller
helm init
// Deploy node-exporter
helm install stable/prometheus-node-exporter
// Launch prometheus
istioctl dashboard prometheus
// Or even better, grafana
istioctl dashboard grafana
</code></pre>
<p>If you are using grafana, you can import dashboard ID: 11074 for a fancy display of the data gathered from node exporter:</p>
<p><a href="https://i.stack.imgur.com/ALfhT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ALfhT.png" alt="enter image description here" /></a></p>
| Neo Anderson |
<h3>What I'm trying to achieve</h3>
<p>I'm trying to deploy an elixir (phoenix) application in a microk8s cluster namespace with TLS using let's encrypt. The cluster is hosted on an AWS EC2 instance.</p>
<h3>The problem I'm facing</h3>
<ul>
<li>The ingress is created in the namespace</li>
<li>ingress routes to the correct domain</li>
<li>the application is working and displayed on the given domain</li>
</ul>
<p><strong>The TLS secret is not being created in the namespace and a 'default' one is created</strong></p>
<p>The secrets after deploying both phoenix app and httpbin app:</p>
<pre class="lang-sh prettyprint-override"><code>me@me:~/Documents/kubernetes-test$ kubectl get secret -n production
NAME TYPE DATA AGE
default-token-jmgrg kubernetes.io/service-account-token 3 20m
httpbin-tls kubernetes.io/tls 2 81s
</code></pre>
<p><strong>The domain is insecure, i.e the TLS is not working.</strong></p>
<p>Logs from the ingress controller after applying the yml files:</p>
<pre><code>W0106 17:26:36.967036 6 controller.go:1192] Error getting SSL certificate "production/phoenix-app-tls": local SSL certificate production/phoenix-app-tls was not found. Using default certificate
W0106 17:26:46.445248 6 controller.go:1192] Error getting SSL certificate "production/phoenix-app-tls": local SSL certificate production/phoenix-app-tls was not found. Using default certificate
W0106 17:26:49.779680 6 controller.go:1192] Error getting SSL certificate "production/phoenix-app-tls": local SSL certificate production/phoenix-app-tls was not found. Using default certificate
I0106 17:26:56.431925 6 status.go:281] "updating Ingress status" namespace="production" ingress="phoenix-app-ingress" currentValue=[] newValue=[{IP:127.0.0.1 Hostname: Ports:[]}]
I0106 17:26:56.443405 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"production", Name:"phoenix-app-ingress", UID:"REDACTED", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1145907", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0106 17:26:56.443655 6 backend_ssl.go:46] Error obtaining X.509 certificate: no object matching key "production/phoenix-app-tls" in local store
W0106 17:26:56.443781 6 controller.go:1192] Error getting SSL certificate "production/phoenix-app-tls": local SSL certificate production/phoenix-app-tls was not found. Using default certificate
</code></pre>
<p>The description of the created ingress, note that here at the bottom it says <code>Successfully created Certificate "phoenix-app-tls" but the secret does not exist</code>:</p>
<pre class="lang-sh prettyprint-override"><code>me@me:~/Documents/kubernetes-test$ kubectl describe ing phoenix-app-ingress -n production
Name: phoenix-app-ingress
Labels: app=phoenix-app
Namespace: production
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
phoenix-app-tls terminates phoenix.sub.mydomain.com
Rules:
Host Path Backends
---- ---- --------
phoenix.sub.mydomain.com
/ phoenix-app-service-headless:8000 (REDACTED_IP:4000,REDACTED_IP:4000)
Annotations: cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/cors-allow-credentials: true
nginx.ingress.kubernetes.io/cors-allow-methods: GET, POST, OPTIONS
nginx.ingress.kubernetes.io/cors-allow-origin: *
nginx.ingress.kubernetes.io/enable-cors: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 29m cert-manager Successfully created Certificate "phoenix-app-tls"
Normal Sync 8m43s (x3 over 29m) nginx-ingress-controller Scheduled for sync
</code></pre>
<h3>Resources</h3>
<p>The deployment yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: phoenix-app
labels:
app: phoenix-app
spec:
replicas: 2
selector:
matchLabels:
app: phoenix-app
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: phoenix-app
spec:
containers:
- name: phoenix-app
image: REDACTED
imagePullPolicy: Always
command: ["./bin/hello", "start"]
lifecycle:
preStop:
exec:
command: ["./bin/hello", "stop"]
ports:
- containerPort: 4000
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
envFrom:
- configMapRef:
name: phoenix-app-config
- secretRef:
name: phoenix-app-secrets
imagePullSecrets:
- name: gitlab-pull-secret
</code></pre>
<p>The service yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: phoenix-app-service-headless
labels:
app: phoenix-app
spec:
clusterIP: None
selector:
app: phoenix-app
ports:
- name: http
port: 8000
targetPort: 4000 # The exposed port by the phoenix app
</code></pre>
<p>Note: I removed my actual domain</p>
<p>The ingress yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: phoenix-app-ingress
labels:
app: phoenix-app
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
cert-manager.io/cluster-issuer: "letsencrypt"
spec:
tls:
- hosts:
- "phoenix.sub.mydomain.com"
secretName: phoenix-app-tls
rules:
- host: "phoenix.sub.mydomain.com"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: phoenix-app-service-headless
port:
number: 8000 # Same port as in service.yml
</code></pre>
<h3>Tested with different service</h3>
<p>I deployed a sample service using httpbin (is not a headless service) and the TLS works fine in the same namespace. Here are the resources that I used to deploy it:</p>
<p>deplyoment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
labels:
app: httpbin
spec:
replicas: 1
selector:
matchLabels:
app: httpbin
version: v1
template:
metadata:
labels:
app: httpbin
version: v1
spec:
containers:
- image: docker.io/kennethreitz/httpbin
imagePullPolicy: Always
name: httpbin
ports:
- containerPort: 80
</code></pre>
<p>The service yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: httpbin
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
targetPort: 80
selector:
app: httpbin
</code></pre>
<p>The ingress yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: httpbin
labels:
app: httpbin
annotations:
cert-manager.io/cluster-issuer: "letsencrypt"
spec:
tls:
- hosts:
- "httpbin.sub.mydomain.com"
secretName: httpbin-tls
rules:
- host: "httpbin.sub.mydomain.com" # This is a subdomain we want to route these requests to
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: httpbin
port:
number: 8000
</code></pre>
<p>My best guess is that it has something to do with the fact that the service is headless, but I have no clue as to how I can resolve the issue.</p>
| Beefcake | <p>I found out that you can actually check for certificates with kubectl:
<code>kubectl get certificate -n production</code></p>
<p>The status of this certificate was READY = FALSE.</p>
<p>I checked the description:
<code>kubectl describe certificate <certificate_name> -n production</code></p>
<p>At the bottom it said:
Too many certificates have been created in the last 164 hours for this exact domain.</p>
<p>I just changed the domain and voila! It works.</p>
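<p>As a side note: to avoid hitting the Let's Encrypt rate limit again while testing, a common approach (sketched here; the issuer name and e-mail are placeholders, and your cert-manager version may use a different API version) is to point a separate ClusterIssuer at the Let's Encrypt staging endpoint and reference it from the Ingress until everything works:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging             # placeholder name
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder e-mail
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre>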
| Beefcake |
<p>I have an issue when OpenShift project deployed with autoscaler configuration like this:</p>
<ul>
<li>Min Pods = 10</li>
<li>Max Pods = 15</li>
</ul>
<p>I can see that the deployer immediately creates 5 pods and <em>TcpDiscoveryKubernetesIpFinder</em> creates not one grid, but multiple grids with the same <em>igniteInstanceName</em>.</p>
<p><strong>This issue could be is solved by this workaround</strong></p>
<p>I changed autoscaler configuration to start with ONE pod:</p>
<ul>
<li>Min Pods = 1</li>
<li>Max Pods = 15</li>
</ul>
<p>And then scale up to 10 pods (or replicas=10):</p>
<ul>
<li>Min Pods = 10</li>
<li>Max Pods = 15</li>
</ul>
<p>It looks like <em>TcpDiscoveryKubernetesIpFinder</em> is not locking when it reads data from the Kubernetes service that maintains the list of IP addresses of all project pods.
So when multiple pods are started simultaneously, it causes multiple grids to be created.
But when ONE pod is started and a grid with this pod is created, newly autoscaled pods join this existing grid.</p>
<p>PS No issues with ports 47100 or 47500, comms and discovery is working.</p>
| Valeri Shibaev | <p>OP confirmed in the comment, that the problem is resolved:</p>
<blockquote>
<p>Thank you, let me know when TcpDiscoveryKubernetesIpFinder early adoption fix will be available. For now I've switched my Openshift micro-service IgniteConfiguration#discoverySpi to TcpDiscoveryJdbcIpFinder - which solved this issue (as it has this kind of lock, transactionIsolation=READ_COMMITTED).</p>
</blockquote>
<p>You can read more about <a href="https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinder.html" rel="nofollow noreferrer">TcpDiscoveryJdbcIpFinder - here</a>.</p>
| Mikołaj Głodziak |
<p>I have been working on creating an application which can perform verification tests on the deployed Istio components in the kube-cluster. The constraint in my case is that I have to run this application as a pod inside Kubernetes and I cannot give the application's pod the cluster-admin role so that it can do all the operations. I have to create a restricted <code>ClusterRole</code> that provides just enough access so that the application can list and get all the required deployed Istio resources (the reason for creating a cluster role is that when Istio is deployed it creates both namespace-level and cluster-level resources). Currently my application won't run at all if I use my restricted <code>ClusterRole</code>, and it outputs the error</p>
<pre><code>Error: failed to fetch istiod pod, error: pods is forbidden: User "system:serviceaccount:istio-system:istio-deployment-verification-sa" cannot list resource "pods" in API group "" in the namespace "istio-system"
</code></pre>
<p>Above error doesn't make sense as I have explicitly mentioned the core api group in my <code>ClusterRole</code> and also mentioned pods as a resource in the <code>resourceType</code> child of my <code>ClusterRole</code> definition.</p>
<p><strong>Clusterrole.yaml</strong></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.clusterrole.name }}
namespace: {{ .Values.clusterrole.clusterrolens}}
rules:
- apiGroups:
- "rbac.authorization.k8s.io"
- "" #enabling access to core API
- "networking.istio.io"
- "install.istio.io"
- "autoscaling"
- "apps"
- "admissionregistration.k8s.io"
- "policy"
- "apiextensions.k8s.io"
resources:
- "clusterroles"
- "clusterolebindings"
- "serviceaccounts"
- "roles"
- "rolebindings"
- "horizontalpodautoscalers"
- "configmaps"
- "deployments"
- "mutatingwebhookconfigurations"
- "poddisruptionbudgets"
- "envoyfilters"
- "validatingwebhookconfigurations"
- "pods"
- "wasmplugins"
- "destinationrules"
- "envoyfilters"
- "gateways"
- "serviceentries"
- "sidecars"
- "virtualservices"
- "workloadentries"
- "workloadgroups"
- "authorizationpolicies"
- "peerauthentications"
- "requestauthentications"
- "telemetries"
- "istiooperators"
resourceNames:
- "istiod-istio-system"
- "istio-reader-istio-system"
- "istio-reader-service-account"
- "istiod-service-account"
- "wasmplugins.extensions.istio.io"
- "destinationrules.networking.istio.io"
- "envoyfilters.networking.istio.io"
- "gateways.networking.istio.io"
- "serviceentries.networking.istio.io"
- "sidecars.networking.istio.io"
- "virtualservices.networking.istio.io"
- "workloadentries.networking.istio.io"
- "workloadgroups.networking.istio.io"
- "authorizationpolicies.security.istio.io"
- "peerauthentications.security.istio.io"
- "requestauthentications.security.istio.io"
- "telemetries.telemetry.istio.io"
- "istiooperators.install.istio.io"
- "istiod"
- "istiod-clusterrole-istio-system"
- "istiod-gateway-controller-istio-system"
- "istiod-clusterrole-istio-system"
- "istiod-gateway-controller-istio-system"
- "istio"
- "istio-sidecar-injector"
- "istio-reader-clusterrole-istio-system"
- "stats-filter-1.10"
- "tcp-stats-filter-1.10"
- "stats-filter-1.11"
- "tcp-stats-filter-1.11"
- "stats-filter-1.12"
- "tcp-stats-filter-1.12"
- "istio-validator-istio-system"
- "istio-ingressgateway-microservices"
- "istio-ingressgateway-microservices-sds"
- "istio-ingressgateway-microservices-service-account"
- "istio-ingressgateway-public"
- "istio-ingressgateway-public-sds"
- "istio-ingressgateway-public-service-account"
verbs:
- get
- list
</code></pre>
<p>The application I have built leverages the <code>istioctl</code> Docker container published by Istio on Docker Hub. <a href="https://hub.docker.com/r/istio/istioctl/tags" rel="nofollow noreferrer">Link</a>.</p>
<p>I want to understand what changes are required in the above <code>ClusterRole</code> definition so that I can perform the get and list operations for the pods in the namespace.</p>
<p>I would also like to understand whether it is possible that the error I am getting refers to some other resource in the cluster.</p>
<p>Cluster information:</p>
<pre><code>Kubernetes version: 1.20
Istioctl docker image version: 1.12.2
Istio version: 1.12.1
</code></pre>
| Kunal Malhotra | <p>As OP mentioned in the comment problem is resolved after my suggestion:</p>
<blockquote>
<p>Please run the command <code>kubectl auth can-i list pods --namespace istio-system --as system:serviceaccount:istio-system:istio-deployment-verification-sa</code> and attach result to the question. Look also <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">here</a></p>
</blockquote>
<p>OP has confirmed that problem is resolved:</p>
<blockquote>
<p>thanx for the above command using above I was finally able to nail down the issue and found the issue to be with first resourceName and second we need to mention core api in the api group before any other. Thank you issue is resolved now.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I'm trying to deploy a vue.js application on a k8s cluster using docker and docker-compose. I'm also using a nginx ingress controller.</p>
<p>I'm using a configmap to load my custom nginx conf, according to this :</p>
<p><a href="https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations" rel="nofollow noreferrer">https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations</a></p>
<p>As a matter of fact my application loads properly, but refreshing a page other than the homepage results in a 404 error. And that's just the same if I try and access any given page by its URL.</p>
<p>What am I doing wrong ?</p>
<p>I'm using kubectl in command line to deploy.</p>
<p>Here's my Dockerfile :</p>
<pre><code># build environment
FROM node:12.2.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./[project folder]/ .
RUN npm config set unsafe-perm true
RUN npm install --silent
RUN npm install @vue/[email protected] -g
RUN npm run build
# production environment
FROM nginx:1.16.0-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>My ingress.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: foo-ingress-lb
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
cert-manager.io/issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- www.foo.com
# This assumes tls-secret exists and the SSL
# certificate contains a CN for foo.bar.com
secretName: tls-secret
rules:
- host: www.foo.com
http:
paths:
- path: /.*
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>My configmap.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vue-config
labels:
app: vue-config
data:
default.conf:
server {
listen 8080 default;
root /var/www/app;
location / {
try_files $uri $uri/ /index.html;
}
}
</code></pre>
<p>service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- port: 80
name: http-front
protocol: TCP
targetPort: 80
selector:
app: my-app
</code></pre>
<p>And finally my deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: "my-deployment"
spec:
replicas: 1
selector:
matchLabels:
app: "my-app"
template:
metadata:
labels:
app: "my-app"
spec:
containers:
- name: "vue-nginx"
image: XXX/nginx:alpine
volumeMounts:
- mountPath: /var/www/app
name: html-files
- mountPath: /etc/nginx/conf.d
name: config
ports:
- containerPort: 8080
imagePullPolicy: "Always"
- name: "vue"
        image: XXX/my_image:latest
volumeMounts:
- mountPath: /var/www/app
name: html-files
ports:
- containerPort: 8080
imagePullPolicy: "Always"
imagePullSecrets:
- name: registry-secret
restartPolicy: Always
volumes:
- name: html-files
emptyDir: {}
- name: config
configMap:
name: vue-config
</code></pre>
<p>kubectl logs -n kube-system nginx-ingress-xxx</p>
<pre><code>195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET / HTTP/2.0" 200 3114 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 684 0.019 [default-maorie-service-front-80] [] 100.64.0.205:80 3114 0.016 200 eaa454a87cf4cee8929f15f3ecd75dcc
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/app.8b722d7e.css HTTP/2.0" 200 11488 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 85 0.001 [default-maorie-service-front-80] [] 100.64.0.205:80 11488 0.000 200 6cce6ff53f0b3b57807eef9df4ab1e2d
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-vendors.2072d5c4.css HTTP/2.0" 200 398846 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 36 0.005 [default-maorie-service-front-80] [] 100.64.0.205:80 398846 0.004 200 ea59b05209b2d7e910ac380ceda13b3f
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/app.147dc57f.js HTTP/2.0" 200 46213 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.002 [default-maorie-service-front-80] [] 100.64.0.205:80 46213 0.000 200 cc6b44751229b2ef4a279defae770da5
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-vendors.ed6dc4c7.js HTTP/2.0" 200 590498 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 36 0.004 [default-maorie-service-front-80] [] 100.64.0.205:80 590498 0.004 200 49e3731caaf832ec21e669affa6c722d
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-2678b26c.8577e149.css HTTP/2.0" 200 2478 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.002 [default-maorie-service-front-80] [] 100.64.0.205:80 2478 0.000 200 001a3ce8a18e6c9b8433f84ddd7a0412
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-2470e996.066be083.css HTTP/2.0" 200 72983 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 94 0.006 [default-maorie-service-front-80] [] 100.64.0.205:80 72983 0.008 200 092515d8d6804324d24fc3fabad87eba
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-30fff3f3.e2b55839.css HTTP/2.0" 200 20814 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.001 [default-maorie-service-front-80] [] 100.64.0.205:80 20814 0.000 200 f9f78eb5b9b1963a06d386a1c9421189
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-3db1ab7a.0a0e84c4.css HTTP/2.0" 200 3329 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.001 [default-maorie-service-front-80] [] 100.64.0.205:80 3329 0.000 200 d66e57d023158381d0eb7b4ce0fcf4c1
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-7ac8b24c.353e933b.css HTTP/2.0" 200 10170 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.003 [default-maorie-service-front-80] [] 100.64.0.205:80 10170 0.000 200 fcb3655b95599822f79587650ca0c017
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-2678b26c.e69fb49a.js HTTP/2.0" 200 103310 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 103310 0.008 200 3423ac43407db755c1a23bca65ca8a0e
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/canvg.a381dd7b.js HTTP/2.0" 200 143368 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 31 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 143368 0.008 200 61d6e047f66dc9b36c836c1b49d2452d
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-3db1ab7a.6fc5dc72.js HTTP/2.0" 200 8157 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 8157 0.008 200 90231ff6f00b168861f10511ab58ae29
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-6e83591c.163e5349.js HTTP/2.0" 200 22685 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 22685 0.008 200 7d56be2022473cc6055cf8101090fdb7
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-7ac8b24c.1a4727cd.js HTTP/2.0" 200 37637 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 37637 0.008 200 e38c234d67700b4f4e2ffe4125bed445
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /css/chunk-b09da666.b0ea57ae.css HTTP/2.0" 200 1414 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 1414 0.008 200 e8b8c016069e4c59c929b61e8ba5502b
195.154.69.132 - - [21/Jan/2021:09:23:06 +0000] "GET /js/chunk-b09da666.e5df996b.js HTTP/2.0" 200 1228 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 38 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 1228 0.008 200 4964cb095baf55cebed49bbcc3fe1af2
195.154.69.132 - - [21/Jan/2021:09:23:08 +0000] "GET /js/chunk-30fff3f3.f6defc09.js HTTP/2.0" 200 3386928 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 1.634 [default-maorie-service-front-80] [] 100.64.0.205:80 3386928 1.632 200 4ed3d78d4d72be629deaf579670168ff
195.154.69.132 - - [21/Jan/2021:09:23:08 +0000] "GET /img/logo.956610d4.png HTTP/2.0" 200 6620 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 75 0.355 [default-maorie-service-front-80] [] 100.64.0.205:80 6620 0.352 200 c16abf3b959c147ab469b73fd548dc95
195.154.69.132 - - [21/Jan/2021:09:23:13 +0000] "GET /img/icons/favicon-32x32.png HTTP/2.0" 200 1690 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 34 0.583 [default-maorie-service-front-80] [] 100.64.0.205:80 1690 0.584 200 f7c2c72bebd294c07f174fa91c4fd40f
195.154.69.132 - - [21/Jan/2021:09:23:15 +0000] "GET /js/pdfmake.b17ba0e4.js HTTP/2.0" 200 2127948 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 32 8.697 [default-maorie-service-front-80] [] 100.64.0.205:80 2127948 8.696 200 edb7e05dd9d87cfc91bc5986b80ff8a8
195.154.69.132 - - [21/Jan/2021:09:23:20 +0000] "GET /js/xlsx.a4e6cbf1.js HTTP/2.0" 200 924857 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 30 13.529 [default-maorie-service-front-80] [] 100.64.0.205:80 924857 13.528 200 8ffa5bb28d9d255e69021ccce35a4dfe
195.154.69.132 - - [21/Jan/2021:09:23:20 +0000] "GET /js/chunk-2470e996.bc9a0d30.js HTTP/2.0" 200 9615968 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 37 13.992 [default-maorie-service-front-80] [] 100.64.0.205:80 9615968 13.992 200 6e42aeb8a644afa385aa8a166fbb5860
195.154.69.132 - - [21/Jan/2021:09:23:31 +0000] "GET /js/chunk-2470e996.bc9a0d30.js HTTP/2.0" 200 9615968 "https://dev.maorie.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 36 10.464 [default-maorie-service-front-80] [] 100.64.0.205:80 9615968 10.464 200 f8eb813e1091ed422525611f61a17d68
195.154.69.132 - - [21/Jan/2021:09:35:01 +0000] "GET /service-worker.js HTTP/2.0" 200 1069 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0" 919 0.004 [default-maorie-service-front-80] [] 100.64.0.205:80 1069 0.004 200 a03dd0f950c451a8016823c28a958dae
195.154.69.132 - - [21/Jan/2021:09:35:01 +0000] "GET /precache-manifest.2a722082efdadd279fa63223d8219496.js HTTP/2.0" 200 3232 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0" 51 0.009 [default-maorie-service-front-80] [] 100.64.0.205:80 3232 0.008 200 e8cdcd5bc52b8a9a8777de4a1e680f1d
195.154.69.132 - - [21/Jan/2021:09:35:05 +0000] "GET /service-worker.js HTTP/2.0" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0" 50 0.002 [default-maorie-service-front-80] [] 100.64.0.205:80 0 0.000 304 c0e319cf30abfffd9dc13da6bad8453c
193.32.164.26 - - [21/Jan/2021:10:06:33 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.021 [] [] - - - - 5de24d60404270018011b04db4194bd4
193.32.164.26 - - [21/Jan/2021:10:06:33 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.019 [] [] - - - - 860c19f03a627f6de82bd538fc0c68f1
195.154.69.132 - - [21/Jan/2021:10:20:14 +0000] "\xAA\xAA\xAA\xAAUUUUUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAAUUUU\xAA\xAA\xAA\xAA" 400 150 "-" "-" 0 0.015 [] [] - - - - a604c37ad900da39307e364e55f4db90
195.154.69.132 - - [21/Jan/2021:10:25:19 +0000] "GET /service-worker.js HTTP/2.0" 200 1069 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0" 914 0.015 [default-maorie-service-front-80] [] 100.64.0.205:80 1069 0.016 200 5856773011e3f4887361fd864e7ca3cc
195.154.69.132 - - [21/Jan/2021:10:25:24 +0000] "GET /service-worker.js HTTP/2.0" 200 1069 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0" 13 0.001 [default-maorie-service-front-80] [] 100.64.0.205:80 1069 0.000 200 a2157ea238ab87fa87f59262a4023076
5.188.210.227 - - [21/Jan/2021:10:30:35 +0000] "\x05\x01\x00" 400 150 "-" "-" 0 0.622 [] [] - - - - b42d10d3b0579c8c55d3febbebc1da59
5.188.210.227 - - [21/Jan/2021:10:31:47 +0000] "\x04\x01\x00P\x05\xBC\xD2\xE3\x00" 400 150 "-" "-" 0 0.380 [] [] - - - - 77536a3304f9d249198240df300dec18
195.154.69.132 - - [21/Jan/2021:10:44:04 +0000] "GET / HTTP/1.1" 200 3114 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 255 0.013 [default-maorie-service-front-80] [] 100.64.0.205:80 3114 0.012 200 2e568e4805e1bfd0ccdfd8e91e5807c3
157.245.176.143 - - [21/Jan/2021:10:54:31 +0000] "SSTP_DUPLEX_POST /sra_{BA195980-CD49-458b-9E23-C84EE0ADCD75}/ HTTP/1.1" 400 150 "-" "-" 192 0.000 [] [] - - - - e28b66acd059bf0d8c9264395c119aad
E0121 11:00:21.036598 6 leaderelection.go:357] Failed to update lock: etcdserver: request timed out
195.154.69.132 - - [21/Jan/2021:11:21:48 +0000] "GET / HTTP/1.1" 308 164 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 255 0.000 [default-maorie-service-front-80] [] - - - - 5cf25e7eba89ae4a9c06d81532951824
18.136.126.138 - - [21/Jan/2021:11:46:06 +0000] "\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03\x8F^\xE5@p\xB3\xA4\xA7H\xB1`\xF7\x9FZ\xF7=|\xA6\x82" 400 150 "-" "-" 0 0.494 [] [] - - - - 83a6709f7eaa820c4f9829979de617a4
195.154.69.132 - - [21/Jan/2021:12:56:23 +0000] "CONNECT ip.ws.126.net:443 HTTP/1.1" 400 150 "-" "-" 0 0.913 [] [] - - - - 1ddfb9204b6efa687d05a138f213bef2
222.186.136.150 - - [21/Jan/2021:13:15:16 +0000] "CONNECT ip.ws.126.net:443 HTTP/1.1" 400 150 "-" "-" 0 0.257 [] [] - - - - 87c08e487d0c9a7e786685c8dd1a589b
64.227.97.195 - - [21/Jan/2021:13:18:32 +0000] "\x00\x0E8\x97\xAB\xB2\xBB\xBA\xB1\x1D\x90\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.152 [] [] - - - - 4d89d6868e909990414d125fe5e3862d
167.71.102.181 - - [21/Jan/2021:13:19:52 +0000] "\x00\x0E8f:d5\xBE\xC3\xBC_\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.086 [] [] - - - - efbe674f606c5bf20e6903da0ae39855
I0121 14:15:35.862702 6 controller.go:144] "Configuration changes detected, backend reload required"
I0121 14:15:35.865621 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"maorie-ingress-lb", UID:"a5ad31d8-c913-4139-a721-2a5c0f7119d3", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5627714245", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0121 14:15:36.262759 6 controller.go:161] "Backend successfully reloaded"
I0121 14:15:36.265762 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"nginx-ingress-29787", UID:"07845f40-87d4-40ce-83d1-01a3d37011c1", APIVersion:"v1", ResourceVersion:"4834374834", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
</code></pre>
| Brice Le Roux | <p>As said in the comments, I finally realised the nginx configmap was not needed, as the docker image embedded an nginx on its own.</p>
<p>Removing the configmap instructions in deployment.yaml did the trick !</p>
| Brice Le Roux |
<p>I'm working on a POC for getting a Spark cluster set up to use Kubernetes for resource management using AKS (Azure Kubernetes Service). I'm using spark-submit to submit pyspark applications to k8s in cluster mode and I've been successful in getting applications to run fine.</p>
<p>I got Azure file share set up to store application scripts and Persistent Volume and a Persistent Volume Claim pointing to this file share to allow Spark to access the scripts from Kubernetes. This works fine for applications that don't write any output, like the pi.py example given in the spark source code, but writing any kind of outputs fails in this setup. I tried running a script to get wordcounts and the line</p>
<pre><code>wordCounts.saveAsTextFile(f"./output/counts")
</code></pre>
<p>causes an exception where wordCounts is an rdd.</p>
<pre><code>Traceback (most recent call last):
File "/opt/spark/work-dir/wordcount2.py", line 14, in <module>
wordCounts.saveAsTextFile(f"./output/counts")
File "/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1570, in saveAsTextFile
File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o65.saveAsTextFile.
: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/spark/work-dir/output/counts': Operation not permitted
</code></pre>
<p>The directory "counts" has been created from the spark application just fine, so it seems like it has required permissions, but this subsequent <code>chmod</code> that spark tries to perform internally fails. I haven't been able to figure out the cause and what exact configuration I'm missing in my commands that's causing this. Any help would be greatly appreciated.</p>
<p>The <code>kubectl</code> version I'm using is</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"881d4a5a3c0f4036c714cfb601b377c4c72de543", GitTreeState:"clean", BuildDate:"2021-10-21T05:13:01Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The spark version is 2.4.5 and the command I'm using is</p>
<pre><code><SPARK_PATH>/bin/spark-submit --master k8s://<HOST>:443 \
--deploy-mode cluster \
--name spark-pi3 \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=docker.io/datamechanics/spark:2.4.5-hadoop-3.1.0-java-8-scala-2.11-python-3.7-dm14 \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.azure-fileshare-pvc.options.claimName=azure-fileshare-pvc \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.azure-fileshare-pvc.mount.path=/opt/spark/work-dir \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.azure-fileshare-pvc.options.claimName=azure-fileshare-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.azure-fileshare-pvc.mount.path=/opt/spark/work-dir \
--verbose /opt/spark/work-dir/wordcount2.py
</code></pre>
<p>The PV and PVC are pretty basic. The PV yml is:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: azure-fileshare-pv
labels:
usage: azure-fileshare-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
azureFile:
secretName: azure-storage-secret
shareName: dssparktestfs
readOnly: false
secretNamespace: spark-operator
</code></pre>
<p>The PVC yml is:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: azure-fileshare-pvc
# Set this annotation to NOT let Kubernetes automatically create
# a persistent volume for this volume claim.
annotations:
volume.beta.kubernetes.io/storage-class: ""
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
selector:
# To make sure we match the claim with the exact volume, match the label
matchLabels:
usage: azure-fileshare-pv
</code></pre>
<p>Let me know if more info is needed.</p>
| Maaverik | <blockquote>
<p>The owner and user are root.</p>
</blockquote>
<p>It looks like you've mounted your volume as root. Your problem:</p>
<pre><code>chmod: changing permissions of '/opt/spark/work-dir/output/counts': Operation not permitted
</code></pre>
<p>is due to the fact that you are trying to change the permissions of a file that you are not the owner of. So you need to change the owner of the file first.</p>
<p>The easiest solution is to run <code>chown</code> on the resource you want to access. However, this is often not feasible, as it can lead to privilege escalation, and the image itself may block this possibility. In this case you can create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a>.</p>
<blockquote>
<p>A security context defines privilege and access control settings for a Pod or Container. Security context settings include, but are not limited to:</p>
<ul>
<li><p>Discretionary Access Control: Permission to access an object, like a file, is based on <a href="https://wiki.archlinux.org/index.php/users_and_groups" rel="nofollow noreferrer">user ID (UID) and group ID (GID)</a>.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux" rel="nofollow noreferrer">Security Enhanced Linux (SELinux)</a>: Objects are assigned security labels.</p>
</li>
<li><p>Running as privileged or unprivileged.</p>
</li>
<li><p><a href="https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/" rel="nofollow noreferrer">Linux Capabilities</a>: Give a process some privileges, but not all the privileges of the root user.
</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/apparmor/" rel="nofollow noreferrer">AppArmor</a>: Use program profiles to restrict the capabilities of individual programs.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/seccomp/" rel="nofollow noreferrer">Seccomp</a>: Filter a process's system calls.</p>
</li>
<li><p>AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the <a href="https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt" rel="nofollow noreferrer"><code>no_new_privs</code></a> flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has <code>CAP_SYS_ADMIN</code>.</p>
</li>
<li><p>readOnlyRootFilesystem: Mounts the container's root filesystem as read-only.</p>
</li>
</ul>
<p>The above bullets are not a complete set of security context settings -- please see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#securitycontext-v1-core" rel="nofollow noreferrer">SecurityContext</a> for a comprehensive list.</p>
<p>For more information about security mechanisms in Linux, see <a href="https://www.linux.com/learn/overview-linux-kernel-security-features" rel="nofollow noreferrer">Overview of Linux Kernel Security Features</a></p>
</blockquote>
<p>You can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">Configure volume permission and ownership change policy for Pods</a>.</p>
<blockquote>
<p>By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the <code>fsGroup</code> specified in a Pod's <code>securityContext</code> when that volume is mounted. For large volumes, checking and changing ownership and permissions can take a lot of time, slowing Pod startup. You can use the <code>fsGroupChangePolicy</code> field inside a <code>securityContext</code> to control the way that Kubernetes checks and manages ownership and permissions for a volume.</p>
</blockquote>
<p>Here is an example:</p>
<pre class="lang-yaml prettyprint-override"><code>securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
</code></pre>
<p>See also <a href="https://stackoverflow.com/questions/51200115/chown-changing-ownership-of-data-db-operation-not-permitted/51203031#51203031">this similar question</a>.</p>
| Mikołaj Głodziak |
<p>I want to create alerts in Grafana for my Kubernetes clusters.
I have configured Prometheus, Node Exporter, Kube-Metrics and Alertmanager in my k8s cluster.
I want to set up alerting on unschedulable or failed pods.</p>
<ol>
<li>Cause of unschedulable or failed pods</li>
<li>Generating an alert after a while</li>
<li>Creating another alert to notify us when pods fail.</li>
</ol>
<p>Can you guide me on how to achieve this?</p>
| sayali_bhavsar | <p>Based on the comment from <a href="https://stackoverflow.com/users/8803619/suresh-vishnoi" title="14,017 reputation">Suresh Vishnoi</a>:</p>
<blockquote>
<p>it might be helpful <a href="https://awesome-prometheus-alerts.grep.to/rules.html#kubernetes" rel="noreferrer">awesome-prometheus-alerts.grep.to/rules.html#kubernetes</a></p>
</blockquote>
<p>yes, this could be very helpful. On this site you can find templates for <a href="https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-17" rel="noreferrer">failed pods (not healthy)</a>:</p>
<blockquote>
<p>Pod has been in a non-ready state for longer than 15 minutes.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code> - alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n V
</code></pre>
<p>or for <a href="https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-18" rel="noreferrer">crash looping</a>:</p>
<blockquote>
<p>Pod {{ $labels.pod }} is crash looping</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code> - alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
</code></pre>
<p>See also <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/" rel="noreferrer">this good guide about monitoring kubernetes cluster with Prometheus</a>:</p>
<blockquote>
<p>The Kubernetes API and the <a href="https://github.com/kubernetes/kube-state-metrics" rel="noreferrer">kube-state-metrics</a> (which natively uses prometheus metrics) <strong>solve part of this problem</strong> by exposing Kubernetes internal data, such as the number of desired / running replicas in a deployment, unschedulable nodes, etc.</p>
<p>Prometheus is a good fit for microservices because you just need to <strong>expose a metrics port</strong>, and don’t need to add too much complexity or run additional services. Often, the service itself is already presenting a HTTP interface, and the developer just needs to add an additional path like <code>/metrics</code>.</p>
</blockquote>
<p>When it comes to unschedulable nodes, you can use the metric <code>kube_node_spec_unschedulable</code>. It is described <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="noreferrer">here</a> or <a href="https://www.sumologic.com/blog/kubernetes-monitoring/" rel="noreferrer">here</a>:
<code>kube_node_spec_unschedulable</code> - Whether a node can schedule new pods or not.</p>
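<p>As a minimal sketch in the same template style (the duration and severity are illustrative), an alert on unschedulable nodes could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>  - alert: KubernetesNodeUnschedulable
    expr: kube_node_spec_unschedulable > 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: Kubernetes node unschedulable (instance {{ $labels.instance }})
      description: "Node {{ $labels.node }} is marked as unschedulable\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
</code></pre>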
<p>Look also at <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="noreferrer">this guide</a>.
Basically, you need to find the metric you want to monitor and set it appropriately in Prometheus. Alternatively, you can use templates, as I showed at the beginning of the answer.</p>
| Mikołaj Głodziak |
<p>I want to make several different deployments that will run the same containers with specific startup arguments for every deployment. I use HELM to template yaml files and do not produce copy paste. As far as I understand, I can pass different deploy arguments either using HELM templates or by creating several ConfigMaps and specifying one of them at a startup. I don't know what to choose, can someone please tell me which of these approaches is better and why?</p>
| parean | <p>With Helm, I like to think of the Chart as the function and the values.yaml as the arguments. If you are selectively including and excluding certain ConfigMaps, you are changing the function and will have to have a separate Helm Chart for each deployment. It will also complicate things like updating, since you'll have to update each Helm Chart individually.</p>
<p>Unless there was some strange extenuating circumstance I would just create different values.yaml files for the different deployments. An added benefit is that when you list your Helm releases, you can easily bring up the .yaml file for any of them to load the config.</p>
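<p>As a minimal sketch (the chart path, value keys and file names are hypothetical, Helm 3 syntax), the same chart can simply be released several times with different argument sets:</p>
<pre><code># values-app-a.yaml
startupArgs:
  - "--mode=a"

# values-app-b.yaml
startupArgs:
  - "--mode=b"

# two releases of the same chart, differing only in their values
helm install app-a ./mychart -f values-app-a.yaml
helm install app-b ./mychart -f values-app-b.yaml
</code></pre>
<p>Inside the chart, the Deployment template would then pass <code>.Values.startupArgs</code> to the container's <code>args</code>.</p>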
| Chandler |
<p>This problem was encountered during the installation of K8S 1.16.4.
The prompt says it lacks the dependency kubernetes-cni 0.7.5.</p>
<p>But if you install kubernetes-cni 0.7.5 directly using YUM, kubelet 1.18 will be installed automatically.
Complete info:</p>
<pre><code>[root@k8s-node-2 yum.repos.d]# yum install -y kubelet-1.16.4-0
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubelet.x86_64 0:1.16.4-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.7.5 for package: kubelet-1.16.4-0.x86_64
Package kubernetes-cni is obsoleted by kubelet, but obsoleting package does not provide for requirements
--> Finished Dependency Resolution
Error: Package: kubelet-1.16.4-0.x86_64 (kubernetes)
Requires: kubernetes-cni >= 0.7.5
Available: kubernetes-cni-0.3.0.1-0.07a8a2.x86_64 (kubernetes)
kubernetes-cni = 0.3.0.1-0.07a8a2
Available: kubernetes-cni-0.5.1-0.x86_64 (kubernetes)
kubernetes-cni = 0.5.1-0
Available: kubernetes-cni-0.5.1-1.x86_64 (kubernetes)
kubernetes-cni = 0.5.1-1
Available: kubernetes-cni-0.6.0-0.x86_64 (kubernetes)
kubernetes-cni = 0.6.0-0
Available: kubernetes-cni-0.7.5-0.x86_64 (kubernetes)
kubernetes-cni = 0.7.5-0
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
</code></pre>
| wang.wenzheng | <p>Same issue we have met today.<br />
i found the yum repo was updated on 6/21, so i suppose this is bug of yum repo.<br />
fixed it by remove cache dir of yum, and copy it from an old server which has been installed k8s already.</p>
<pre><code>rm -rf /var/cache/yum/x86_64/7/kubernetes
scp x.x.x.x:/var/cache/yum/x86_64/7/kubernetes /var/cache/yum/x86_64/7/
</code></pre>
| wilala |
<p>I am trying to figure out <strong>how to get pod labels into the metric tags from kubelet metrics using prometheus-stack</strong>. In our environment, we need to hash pod names (due to length limitations) so our app name, env, and unit name are saved in pod labels.</p>
<p>We are using prometheus-stack (helm installation) to collect metrics from kubelet (<code>/metrics</code>, <code>/metrics/cadvisor</code>) and due to the lack of pod labels in metrics tags, it's difficult to know which metric belongs to which application.</p>
<p>Prometheus-stack is using <code>sd_kubernetes_config</code> with endpoint rule to collect kubelet metrics, where <code>__meta</code> tags for pod labels cannot be used. Is there another way how to get that labels in metric tags?</p>
<p>I also tried to collect pod_labels metric using <code>kubeStateMetrics</code>, where I can get metric that contains pod labels, but I cannot figure out how to display both metrics in a way that metric from cadvisor will show its value and metric from <code>kubeStateMetrics</code> will be used to display its labels (in Prometheus graph).</p>
<p>Thanks for any advice.</p>
| Jiří Peták | <p>As far as I know you can really use filtering metrics <a href="https://stackoverflow.com/questions/60067654/prometheus-filtering-based-on-labels">based on pod labels</a>. Look at the original answer:</p>
<blockquote>
<p>You can use <code>+</code> operator to join metrics. Here, <code>group_left()</code> will include the extra label: <code>label_source</code> from the right metric <code>kube_pod_labels</code>. The metric you're joining is forced to zero ( i.e. <code>0 * kube_pod_labels</code> ) so that it doesn't affect the result of first metric.</p>
</blockquote>
<pre><code>(
kube_pod_info{namespace="test"}
)
+ on(namespace) group_left(label_source)
(
0 * kube_pod_labels
)
</code></pre>
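<p>For the kubelet/cadvisor metrics the same join pattern can be used directly. A sketch (assuming the pods carry <code>app</code> and <code>env</code> labels, which kube-state-metrics exposes as <code>label_app</code> and <code>label_env</code>):</p>
<pre><code>sum by (namespace, pod) (container_memory_working_set_bytes{container!=""})
* on (namespace, pod) group_left(label_app, label_env)
kube_pod_labels
</code></pre>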
<p>Admittedly, the resulting queries can get long and nasty. However, it is an effective method in Prometheus to create what you expect, which is kubelet metrics with pod labels.</p>
<p>Look also at <a href="https://ypereirareis.github.io/blog/2020/02/21/how-to-join-prometheus-metrics-by-label-with-promql/" rel="nofollow noreferrer">this guide</a>, which describes how to join Prometheus metrics by label with PromQL, and <a href="https://grafana.com/blog/2021/08/04/how-to-use-promql-joins-for-more-effective-queries-of-prometheus-metrics-at-scale/" rel="nofollow noreferrer">this other guide</a> - How to use PromQL joins for more effective queries of Prometheus metrics at scale.</p>
| Mikołaj Głodziak |
<p>We want to provide a cluster for our customers with pre-installed applications and therefore want to give the customer all rights except on the namespaces provided by us and the system namespaces, such as "kube-system", so that they cannot see the sensitive information in secrets or break anything there. We have already tested with OPA, but unfortunately you can't intercept GET requests there, which means the secrets would still be viewable.
It also doesn't work with RBAC because you can't deny access to a particular namespace there.</p>
<p>Is there a way to achieve this?</p>
<p>Thanks and best regards</p>
<p>Vedat</p>
 | Vedat | <p>I solved the problem by giving the user two ClusterRoles: one that only has permissions on namespaces, and one that has permissions on everything. I bound the namespace-only ClusterRole with a ClusterRoleBinding, and the all-permissions ClusterRole with a RoleBinding.
So that the user also has permissions on the namespaces they create dynamically, they need a RoleBinding to the ClusterRole that is allowed to do everything in each of those namespaces.</p>
<p>To do this automatically, I use the tool Argo-Events, which triggers a RoleBinding deployment on a namespace creation event.
With OPA I prevent the user from changing or deleting namespaces.</p>
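<p>A minimal sketch of that pattern (the names <code>customer</code>, <code>namespace-manager</code> and <code>customer-ns-1</code> are hypothetical, and the built-in <code>cluster-admin</code> ClusterRole stands in for the "allowed to do everything" role):</p>
<pre><code># ClusterRole that only allows working with namespaces, bound cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-manager
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: customer-namespace-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: customer
---
# RoleBinding that grants full rights only inside one customer namespace;
# the Argo-Events automation would create one of these per newly created namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-admin
  namespace: customer-ns-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: customer
</code></pre>
<p>Because the second binding is a namespaced RoleBinding, the broad ClusterRole only takes effect inside that one namespace, which keeps the system and provider namespaces out of reach.</p>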
| Vedat |
<p>Attempting to use Google's Oauth Proxy service and Grafana's Auth Proxy configuration, but Grafana still displays login form.</p>
<p>Google login dialog is displayed as expected, but once authenticated it is expected that the user is then authenticated by Grafana.</p>
<p>Setup:
Kubernetes (AWS/EKS)
Oauth Proxy enabled for ingress-nginx</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: oauth2-proxy
namespace: monitoring
spec:
rules:
- host: grafana.*domain*
http:
paths:
- backend:
serviceName: oauth2-proxy
servicePort: 4180
path: /oauth2
...
tls:
- hosts:
- grafana.*domain*
</code></pre>
<p>Grafana ingress:</p>
<pre><code>...
Rules:
Host Path Backends
---- ---- --------
grafana.*domain*
/ prometheus-operator-grafana:80 (192.168.2.91:3000)
Annotations: helm.fluxcd.io/antecedent: monitoring:helmrelease/prometheus-operator
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
</code></pre>
<p>Grafana.ini: as per <a href="https://grafana.com/docs/grafana/latest/auth/auth-proxy/" rel="noreferrer">https://grafana.com/docs/grafana/latest/auth/auth-proxy/</a></p>
<pre><code>[analytics]
check_for_updates = true
[auth]
oauth_auto_login = true
signout_redirect_url = https://grafana.*domain*
[auth.proxy]
auto_sign_up = true
enable_login_token = false
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
headers = Name:X-WEBAUTH-NAME Email:X-WEBAUTH-EMAIL Groups:X-WEBAUTH-GROUPS
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/data
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
domain = *domain*
root_url = https://grafana.*domain*
[users]
allow_sign_up = false
auto_assign_org = true
auto_assign_org_role = Admin
</code></pre>
<p>User is prompted for Google Authentication as desired.
However Grafana still presents login dialog, despite presence of</p>
<pre><code>[auth]
oauth_auto_login = true
</code></pre>
<p>Log from Nginx</p>
<pre><code>192.168.77.87 - xxxxxxxxxxxxxxxxx [2020/06/24 15:59:52] [AuthSuccess] Authenticated via OAuth2: Session{email:xxxxxxxxxn@domain user:nnnnnnnnnnnnnnnnnnnn PreferredUsername: token:true id_token:true created:2020-06-24 15:59:52.238393221 +0000 UTC m=+106369.587921725 expires:2020-06-24 16:59:51 +0000 UTC refresh_token:true}
192.168.77.87 - - [2020/06/24 15:59:52] grafana.domain GET - "/oauth2/callback?state=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:/&code=xxxxxxxxxxxxxxxxxxxxxxxxx&scope=email%20profile%20https://www.googleapis.com/auth/userinfo.profile%20https://www.googleapis.com/auth/userinfo.email%20openid&authuser=0&hd=domain&prompt=consent" HTTP/1.1 "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" 302 24 0.181
</code></pre>
<p>Log from Oauth Proxy:</p>
<pre><code>192.168.17.214 - - [24/Jun/2020:15:59:52 +0000] "GET /oauth2/auth HTTP/1.1" 202 0 "https://accounts.google.com/signin/oauth/oauthchooseaccount?access_type=offline&acr_values&approval_prompt=force&client_id=client_id&redirect_uri=https%3A%2F%2Fgrafana.domain%2Foauth2%2Fcallback&response_type=code&scope=profile%20email&state=xxxxxxxxxxxxxxxxxxxxxxxxxx&flowName=GeneralOAuthFlow" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" 1217 0.004 [monitoring-oauth2-proxy-4180] [] 192.168.44.224:4180 0 0.004 202 xxxxxxxxxxxxxxxxxxxxxxxxxx
192.168.6.127 - - [24/Jun/2020:15:59:52 +0000] "GET /login HTTP/2.0" 202 0 "https://accounts.google.com/signin/oauth/oauthchooseaccount?access_type=offline&acr_values&approval_prompt=force&client_id=client_id&redirect_uri=https%3A%2F%2Fgrafana.domain%2Foauth2%2Fcallback&response_type=code&scope=profile%20email&state=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxQg&flowName=GeneralOAuthFlow" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" 0 0.052 [monitoring-prometheus-operator-grafana-80] [] 54.76.77.91:443 0 0.048 202 xxxxxxxxxxxxxxxxxxxxxxxxxxxx
192.168.6.127 - - [24/Jun/2020:15:59:52 +0000] "GET /login HTTP/2.0" 200 6822 "https://accounts.google.com/signin/oauth/oauthchooseaccount?access_type=offline&acr_values&approval_prompt=force&client_id=client_id&redirect_uri=https%3A%2F%2Fgrafana.dmoain%2Foauth2%2Fcallback&response_type=code&scope=profile%20email&state=xxxxxxxxxxxxxxxxxxxxxxxxx&flowName=GeneralOAuthFlow" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" 32 0.056 [monitoring-prometheus-operator-grafana-80] [] 192.168.2.91:3000 27042 0.008 200 xxxxxxxxxxxxxxxxxxxxxxxxxxx
</code></pre>
<p>Note the instance can be made to work via auth.google; however, that requires the secret to be held in grafana.ini, whereas the other clients are secured at ingress-nginx.</p>
<p>Q. What should the Oauth callback be set to for Grafana Oauth Proxy?</p>
 | Andrew Pickin | <p>So the issue is that although the user is authenticated as far as Google (the OAuth provider) and Oauth-Proxy are concerned, this is not reflected in the user experience as far as Grafana (the upstream application) is concerned.</p>
<p>So both Oauth-Proxy and Google (the OAuth provider) were configured correctly. Namely:</p>
<ol>
<li>Oauth-Proxy requires <code>--set-xauthrequest</code> as part of the command
line options.</li>
<li>Google's redirect url to be <code>https://<host>/oauth2/callback</code>. Note this is different than when using Grafana's built in proxy.</li>
</ol>
<p>The problem is that the authenticated user's details were either not getting back to Grafana or were not being acknowledged. There is a great deal of inadequate documentation on this, partly from Grafana and partly from the Ingress Nginx Controller. This had nothing to do with Oauth-Proxy, as has been speculated.</p>
<p>My solution is as follows:</p>
<pre><code> grafana.ini:
auth:
oauth_auto_login: true
signout_redirect_url: "https://<host>/oauth2/sign_out"
auth.proxy:
enabled: true
header_name: X-Email
header_property: email
auto_sign_up: true
users:
allow_sign_up: false
auto_assign_org: true
auto_assign_org_role: Viewer
</code></pre>
<p>This is not intended to be complete but it covers the main options regarding authentication and user profiles.</p>
<p>Grafana Ingress also requires the following annotations:</p>
<pre><code> kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $user $upstream_http_x_auth_request_user;
auth_request_set $email $upstream_http_x_auth_request_email;
proxy_set_header X-User $user;
proxy_set_header X-Email $email;
</code></pre>
<p>There may be a way of specifying the 4 snippet lines with a single annotation, but I was not able to find it.</p>
<p>The above has a number of advantages over Grafana's built in proxy:</p>
<ol>
<li>It enables multiple applications to be configured with the common
Authentication backend. This gives a single source of truth for the secrets.</li>
<li>Bad actors are stopped at the Reverse Proxy, not the application.</li>
<li>This solution works with Prometheus Operator without the need to place secrets within your code, there currently being an issue with the prometheus operator not working correctly with environment variables (set by secrets).</li>
</ol>
| Andrew Pickin |
<p>If I have two pods running on different namespaces, and there is netpol already setup and cannot be modified, how would I approach the POD to POD communication making the ingress and egress possible again without modifying the existing object?</p>
 | user9356263 | <p>User <a href="https://stackoverflow.com/users/1153938/vorsprung" title="30,068 reputation">Vorsprung</a> has explained this well in the comment:</p>
<blockquote>
<p>The netpolicy that is already there probably does a general ingres / egress block. If you add another policy that specifically allows the POD to POD you need then it will, in this case override the general policy. See <a href="https://serverfault.com/questions/951958">https://serverfault.com/questions/951958</a></p>
</blockquote>
<p>Yes, you can add another network policy according to your needs and everything should work. It doesn't matter in which order you apply your rules.</p>
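<p>As an illustration only, a sketch of such an additional policy (all namespace and label names below are assumptions, not taken from your cluster):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-other-namespace
  namespace: app-a                # namespace of the pod receiving the traffic
spec:
  podSelector:
    matchLabels:
      app: backend                # target pods in app-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: app-b   # namespace of the client pod
      podSelector:
        matchLabels:
          app: frontend           # client pods in app-b
</code></pre>
<p>Because network policies are additive, this allow rule works alongside the existing deny-style policy without modifying it; if the client's namespace also has a restrictive policy, a matching egress rule may be needed there as well.</p>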
<p>Also look <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">here</a>; you can find many <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">Kubernetes network policy recipes</a>.</p>
| Mikołaj Głodziak |
<p>I deployed a simple test <code>ingress</code> and an <code>externalName</code> <code>service</code> using <code>kustomize</code>.
The deployment works and I get the expected results, but when <code>describing</code> the <code>test-ingress</code> it shows the error: <code><error: endpoints "test-external-service" not found></code>.
It seems like a k8s bug. It shows this error, but everything is working fine.</p>
<p>Here is my deployment:</p>
<p><code>kustomization.yaml</code>:</p>
<pre class="lang-yml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform
resources:
- test-ingress.yaml
- test-service.yaml
generatorOptions:
disableNameSuffixHash: true
</code></pre>
<p><code>test-service.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: test-external-service
namespace: platform
spec:
type: ExternalName
externalName: "some-working-external-elasticsearch-service"
</code></pre>
<p><code>test-ingress.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx-external
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_cache_bypass $http_upgrade;
spec:
rules:
- host: testapi.mydomain.com
http:
paths:
- path: /
backend:
serviceName: test-external-service
servicePort: 9200
</code></pre>
<p>Here, I connected the external service to a working <code>elasticsearch</code> server. When browsing to <code>testapi.mydomain.com</code> ("mydomain" was replaced with our real domain of course), I'm getting the well known expected <code>elasticsearch</code> results:</p>
<pre class="lang-json prettyprint-override"><code>{
"name" : "73b40a031651",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Xck-u_EFQ0uDHJ1MAho4mQ",
"version" : {
"number" : "7.10.1",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
"build_date" : "2020-12-05T01:00:33.671820Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
</code></pre>
<p>So everything is working. But when describing the <code>test-ingress</code>, there is the following error:</p>
<p><code>test-external-service:9200 (<error: endpoints "test-external-service" not found>)</code></p>
<p>What is this error? Why am I getting it even though everything is working properly? What am I missing here?</p>
| TomG | <p>This is how the <code>kubectl describe ingress</code> command works.<br />
The <code>kubectl describe ingress</code> command calls the <a href="https://github.com/kubernetes/kubectl/blob/ac49920c0ccb0dd0899d5300fc43713ee2dfcdc9/pkg/describe/describe.go#L2623-L2672" rel="nofollow noreferrer">describeIngressV1beta1</a> function, which calls the <a href="https://github.com/kubernetes/kubectl/blob/ac49920c0ccb0dd0899d5300fc43713ee2dfcdc9/pkg/describe/describe.go#L2507-L2531" rel="nofollow noreferrer">describeBackendV1beta1</a> function to describe the backend.</p>
<p>As can be found in the <a href="https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go#L2507-L2531" rel="nofollow noreferrer">source code</a>, the <a href="https://github.com/kubernetes/kubectl/blob/ac49920c0ccb0dd0899d5300fc43713ee2dfcdc9/pkg/describe/describe.go#L2507-L2531" rel="nofollow noreferrer">describeBackendV1beta1</a> function looks up the endpoints associated with the backend services, if it doesn't find appropriate endpoints, it generate an error message (as in your example):</p>
<pre><code>func (i *IngressDescriber) describeBackendV1beta1(ns string, backend *networkingv1beta1.IngressBackend) string {
endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{})
if err != nil {
return fmt.Sprintf("<error: %v>", err)
}
...
</code></pre>
<p>In the <a href="https://docs.openshift.com/dedicated/3/dev_guide/integrating_external_services.html#mysql-define-service-using-fqdn" rel="nofollow noreferrer">Integrating External Services</a> documentation, you can find that <code>ExternalName</code> services do not have any defined endpoints:</p>
<blockquote>
<p>ExternalName services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.</p>
</blockquote>
| matt_j |
<p>after reading <a href="https://alexandrev.medium.com/how-to-change-the-names-of-your-metrics-in-prometheus-b78497efb5de" rel="nofollow noreferrer">this</a> article I'm trying to cleanup metrics going out of Spark 3.0.1. Here is my servicemonitor.yml file:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: spark3
release: prometheus
name: spark3-servicemonitor
spec:
endpoints:
- interval: 5s
port: spark-ui
path: /metrics/prometheus
relabelings:
# Rename metrics
- sourceLabels: [__name__]
targetLabel: __name__
regex: 'metrics_spark_driver_.+_StreamingMetrics_([a-zA-Z_]{1,})_Value'
replacement: 'spark_driver_$1'
namespaceSelector:
matchNames:
- default
selector:
matchLabels:
spark-version: "3"
</code></pre>
<p>I expect the following transformation:
<code>metrics_spark_driver_whateverappid_StreamingMetrics_streaming_lastCompletedBatch_totalDelay_Value</code> -> <code>spark_driver_streaming_lastCompletedBatch_totalDelay</code>; however, the relabelling does not seem to work. Could you please assist me on this subject?</p>
| eugen-fried | <p>The <code>relabelings</code> must be named <code>metricRelabelings</code> according to the <a href="https://github.com/prometheus-operator/prometheus-operator/blob/552d86a2713f6653146674f3598955a375b93099/bundle.yaml#L10224" rel="noreferrer">spec</a>. Note that the yaml format of the ServiceMonitors does not use the same key names as the corresponding prometheus config (but it is still valid yaml).</p>
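<p>A sketch of the corrected endpoint, keeping the regex and replacement from the question (only the key name changes; <code>metricRelabelings</code> are applied to the scraped samples, where <code>__name__</code> is available):</p>
<pre><code>  endpoints:
  - interval: 5s
    port: spark-ui
    path: /metrics/prometheus
    metricRelabelings:
    - sourceLabels: [__name__]
      targetLabel: __name__
      regex: 'metrics_spark_driver_.+_StreamingMetrics_([a-zA-Z_]{1,})_Value'
      replacement: 'spark_driver_$1'
</code></pre>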
| Jens Baitinger |
<p>I'm deploying a sidecar container in a kubernetes deployment.</p>
<p>The issue is that the pod sometimes is getting restarted many times because the main container (container1) is not ready at all.</p>
<p>The deployment is similar to this one, but the sidecar container cannot properly reach container1 when it is not ready. I think that's the reason why the pod is getting restarted many times.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: webserver
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: container1
image: image1
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
- name: sidecar-container
image: busybox
command: ["sh","-c","while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
</code></pre>
<p>My question is simple. Is there any way to make the busybox container wait for container1 until it is ready?</p>
 | X T | <p>In my case, to resolve it faster, I just included a sleep before executing the code so that the main container has enough time to become ready.</p>
<pre><code>time.Sleep(8 * time.Second)
</code></pre>
<p>That's not the best solution, but it resolves the issue.</p>
| X T |
<p>When I install Moloch with Helm on my Kubernetes system (2 nodes named minikube and minikube-02), I get this error. Why, and how can we resolve it?</p>
<p>Warning Failed 7s (x6 over 49s) kubelet Error: secret "passive-interface" not found</p>
<p>Note: I see that "passive-interface" is referenced in this file: "https://github.com/sealingtech/EDCOP-MOLOCH/blob/master/moloch/templates/moloch-viewer.yaml"
<a href="https://i.stack.imgur.com/4ZG2g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ZG2g.png" alt="enter image description here" /></a></p>
 | John | <p>I solved this problem at this step, so you can use it for Moloch:</p>
<pre><code>$ kubectl create secret generic passive-interface --from-literal='interface=neverforget'
</code></pre>
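<p>If you prefer a declarative manifest, an equivalent Secret could look roughly like this (the value is taken from the command above; add a <code>namespace</code> field if Moloch runs outside the default namespace):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: passive-interface
type: Opaque
stringData:
  interface: neverforget   # stored base64-encoded under .data once applied
</code></pre>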
<p><a href="https://i.stack.imgur.com/AiDaE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AiDaE.png" alt="enter image description here" /></a></p>
| John |
<p>For example, I have an internal docker registry for my kube cluster, hosted on <code>internal-docker-registry.io:5000</code>.</p>
<p>When a pod in the kube cluster pulls the image <code>busybox</code>, I don't want it to pull from Docker Hub (<code>docker.io</code>); instead I want it to pull from <code>internal-docker-registry.io:5000</code>.</p>
<p><em>Note that I cannot change the image name to <code>internal-docker-registry.io:5000/busybox</code> since I don't own the spec and there are too many of these kinds of third-party images in my cluster.</em></p>
 | jack2684 | <p>I have posted a community wiki answer to summarize the topic:</p>
<p>As <a href="https://stackoverflow.com/users/10008173/david-maze" title="76,777 reputation">David Maze</a> well mentioned in the comment:</p>
<blockquote>
<p><strong>there is no "image registry search list"</strong>; an image name without a registry <em>always</em> uses <code>docker.io</code>, to avoid surprises where <em>e.g.</em> <code>FROM ubuntu</code> means something different depending on local configuration.</p>
</blockquote>
<p>So if you can't change the image name to internal-docker-registry.io:5000/busybox then unfortunately you don't have the option to get the image from your private registry (according to David's comment).</p>
<p>See also similar questions:</p>
<p><a href="https://stackoverflow.com/questions/33054369/how-to-change-the-default-docker-registry-from-docker-io-to-my-private-registry">How to change the default docker registry from docker.io to my private registry?</a></p>
<p><a href="https://stackoverflow.com/questions/66026479/how-to-change-default-k8s-cluster-registry">How to change default K8s cluster registry?</a></p>
| Mikołaj Głodziak |
<p>I know how to use RBAC with X.509 certificates to identify a user of <code>kubectl</code> and restrict them (using <code>Role</code> and <code>RoleBinding</code>) from creating pods of any kind in a namespace. However, I don't know how I can prevent them from putting specific labels on a pod (or any resource) they're trying to create.</p>
<p>What I want to do is something like:</p>
<ol>
<li>Create a <code>NetworkPolicy</code> that only resources in other namespaces with the label <code>group: cross-ns</code> are allowed to reach a resource in the <code>special-namespace</code></li>
<li>Have a user who cannot create pods or other resources with the label <code>group: cross-ns</code></li>
<li>Have another user who <em>can</em> create resources with the label <code>group: cross-ns</code></li>
</ol>
<p>Is this possible?</p>
| Don Rhummy | <p>You can use the Kubernetes-native policy engine called <a href="https://kyverno.io/" rel="nofollow noreferrer">Kyverno</a>:</p>
<blockquote>
<p>Kyverno runs as a dynamic admission controller in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching policies to return results that enforce admission policies or reject requests.</p>
</blockquote>
<p>A Kyverno policy is a collection of rules that can be applied to the entire cluster (<code>ClusterPolicy</code>) or to the specific namespace (<code>Policy</code>).</p>
<hr />
<p>I will create an example to illustrate how it may work.</p>
<p>First, we need to install Kyverno; you have the option of installing it directly from the latest release manifest or using Helm (see: <a href="https://kyverno.io/docs/introduction/#quick-start" rel="nofollow noreferrer">Quick Start guide</a>):</p>
<pre><code>$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
</code></pre>
<p>After successful installation, let's create a simple <code>ClusterPolicy</code>:</p>
<pre><code>apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: labeling-policy
spec:
validationFailureAction: enforce
background: false
rules:
- name: deny-rule
match:
resources:
kinds:
- Pod
exclude:
clusterRoles:
- cluster-admin
preconditions:
- key: "{{request.object.metadata.labels.purpose}}"
operator: Equals
value: "*"
validate:
message: "Using purpose label is not allowed for you"
deny: {}
</code></pre>
<p>In the example above, only using the <code>cluster-admin</code> <code>ClusterRole</code> you can modify a Pod with a label <code>purpose</code>.</p>
<p>Suppose I have two users (<code>john</code> and <code>dave</code>), but only <code>john</code> is linked to the <code>cluster-admin</code> <code>ClusterRole</code> via <code>ClusterRoleBinding</code>:</p>
<pre><code>$ kubectl describe clusterrolebinding john-binding
Name: john-binding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
User john
</code></pre>
<p>Finally, we can test if it works as expected:</p>
<pre><code>$ kubectl run test-john --image=nginx --labels purpose=test --as john
pod/test-john created
$ kubectl run test-dave --image=nginx --labels purpose=test --as dave
Error from server: admission webhook "validate.kyverno.svc" denied the request:
resource Pod/default/test-dave was blocked due to the following policies
labeling-policy:
deny-rule: Using purpose label is not allowed for you
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
test-john 1/1 Running 0 32s purpose=test
</code></pre>
<p>More examples with detailed explanations can be found in the <a href="https://kyverno.io/docs/writing-policies/" rel="nofollow noreferrer">Kyverno Writing Policies documentation</a>.</p>
| matt_j |
<p>Is it possible to do dynamic routing with an nginx ingress controller? By dynamic I mean that, based on the URL, I need to strip and fetch a value from the URL and route based on that value. Let me know how, if it's possible. If it's not possible with the nginx controller, let me know any other way in which this is possible. Appreciate any help.</p>
 | Kowshhal | <p>Ingress controllers are based on <code>Ingress</code> objects. Kubernetes object definitions are static by nature (so we can version-control them).</p>
<p>From what I gathered in the comments, when a user requests <code>domain.com/foo</code> they will be redirected to their own instance of your app? You will need a source of truth to get the updated info.</p>
<p>I could see 2 ways of doing that:</p>
<ul>
<li>Edit the ingress object manually or programmatically (using Helm or some other templating software), as sketched after this list</li>
<li>Make a dedicated app using a persistent database and trigger redirections from there: <code>domain.com/*</code> -> <code>redirect app</code> -> <code>user app</code>. This way you can control your users list as you want.</li>
</ul>
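<p>A minimal sketch of the first option, assuming each user gets a path-based route to their own Service (the host, paths and service names are hypothetical and would be re-rendered by the templating tool whenever a user is added):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: per-user-routing
spec:
  ingressClassName: nginx
  rules:
  - host: domain.com
    http:
      paths:
      - path: /user-a              # one path entry per user
        pathType: Prefix
        backend:
          service:
            name: user-a-app
            port:
              number: 80
      - path: /user-b
        pathType: Prefix
        backend:
          service:
            name: user-b-app
            port:
              number: 80
</code></pre>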
<p>It depends if the end user will stay on <code>domain.com/user</code> or if they get redirected to another unique domain. I would need more info to discuss that.</p>
| LaurentZ |
<p>I'm new to Azure infrastructure and I'm trying to deploy Jenkins on AKS so that all of my Jenkins data is preserved if the container stops working, and I ran into a permissions issue with my newly created PVC.</p>
<p>I want to change the permissions for a specific folder and files in the PVC, and the "chmod" command appears to run but doesn't do anything; the permissions are still set to 777 instead of my wanted permissions.</p>
<p>I have noticed that the Storage Class default permissions value for dirs and files is 777, but I need some specific files to have other permissions.</p>
<p>Can I do this, or is there any other option to do this?</p>
| RandomGuy17 | <blockquote>
<p>I want to change the permissions for a specific folder and files in the PVC, and the "chmod" command appears to run but doesn't do anything; the permissions are still set to 777 instead of my wanted permissions.</p>
</blockquote>
<p>If you want to configure permissions in Kubernetes, you must use the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a>:</p>
<blockquote>
<p>A security context defines privilege and access control settings for a Pod or Container. Security context settings include, but are not limited to:</p>
<ul>
<li><p>Discretionary Access Control: Permission to access an object, like a file, is based on <a href="https://wiki.archlinux.org/index.php/users_and_groups" rel="nofollow noreferrer">user ID (UID) and group ID (GID)</a>.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux" rel="nofollow noreferrer">Security Enhanced Linux (SELinux)</a>: Objects are assigned security labels.</p>
</li>
<li><p>Running as privileged or unprivileged.</p>
</li>
<li><p><a href="https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/" rel="nofollow noreferrer">Linux Capabilities</a>: Give a process some privileges, but not all the privileges of the root user.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/apparmor/" rel="nofollow noreferrer">AppArmor</a>: Use program profiles to restrict the capabilities of individual programs.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/seccomp/" rel="nofollow noreferrer">Seccomp</a>: Filter a process's system calls.</p>
</li>
<li><p>AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the <a href="https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt" rel="nofollow noreferrer"><code>no_new_privs</code></a> flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has <code>CAP_SYS_ADMIN</code>.</p>
</li>
<li><p>readOnlyRootFilesystem: Mounts the container's root filesystem as read-only.</p>
</li>
</ul>
<p>The above bullets are not a complete set of security context settings -- please see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#securitycontext-v1-core" rel="nofollow noreferrer">SecurityContext</a> for a comprehensive list.</p>
<p>For more information about security mechanisms in Linux, see <a href="https://www.linux.com/learn/overview-linux-kernel-security-features" rel="nofollow noreferrer">Overview of Linux Kernel Security Features</a></p>
</blockquote>
<p>In your case, if you want to grant permissions for a specific object (e.g. a file), you can use <a href="https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/#8-run-containers-as-a-non-root-user" rel="nofollow noreferrer">Discretionary Access Control</a>:</p>
<blockquote>
<p><strong>Containers that run as root frequently have far more permissions than their workload requires which, in case of compromise, could help an attacker further their attack.</strong></p>
<p>Containers still rely on the traditional Unix security model (called <a href="https://www.linux.com/learn/overview-linux-kernel-security-features" rel="nofollow noreferrer">discretionary access control</a> or DAC) - everything is a file, and permissions are granted to users and groups.</p>
</blockquote>
<p>You can also <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">configure volume permission and ownership change policy for Pods</a>.</p>
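<p>As a minimal sketch (the IDs, image and paths below are assumptions for illustration, not a definitive setup), a Pod-level security context that asks Kubernetes to adjust group ownership and permissions of the mounted volume could look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  securityContext:
    runAsUser: 1000                         # run the Jenkins process as a non-root user
    runAsGroup: 1000
    fsGroup: 1000                           # volumes get this group and group-accessible permissions
    fsGroupChangePolicy: "OnRootMismatch"   # only re-chown when the volume root does not match
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    volumeMounts:
    - name: jenkins-home
      mountPath: /var/jenkins_home
  volumes:
  - name: jenkins-home
    persistentVolumeClaim:
      claimName: jenkins-pvc
</code></pre>
<p>Note that whether this takes effect depends on the volume plugin; for Azure Files, permissions are typically driven by the StorageClass <code>mountOptions</code> (<code>dir_mode</code>/<code>file_mode</code>) instead, which may be why everything shows up as 777.</p>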
<p>See also:</p>
<ul>
<li><a href="https://medium.com/kubernetes-tutorials/defining-privileges-and-access-control-settings-for-pods-and-containers-in-kubernetes-2cef08fc62b7" rel="nofollow noreferrer">Defining Privileges and Access Control Settings for Pods and Containers in Kubernetes</a></li>
<li><a href="https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/#8-run-containers-as-a-non-root-user" rel="nofollow noreferrer">11 Ways (Not) to Get Hacked</a></li>
</ul>
| Mikołaj Głodziak |
<p>I have a kubernetes cluster with 4 nodes. I have a pod deployed as a deployment, with 8 replicas. When I deployed this, kubernetes sometimes schedule 4 pods in node1, and the rest of the 4 pods in node2. In this case node3 and node4 don't have this container running (but other containers running there)</p>
<p>I do understand Pod affinity and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="noreferrer">anti-affinity</a>, where they have the <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure" rel="noreferrer">Zookeeper</a> example for pod anti-affinity, which is great. This would make sure that no 2 pods are deployed on the same node.</p>
<p>This is fine; however, my requirement is slightly different: I want to restrict the maximum number of pods k8s can deploy to one node, using something like node anti-affinity.</p>
<p>I need to make sure that no more than 3 instances of the same pod are deployed on a node in my above example. I thought of setting a memory/cpu limit on pods, but that seemed like a bad idea as I have nodes with different configurations. Is there any way to achieve this?</p>
<p>( Update 1 ) - I understand that my question wasn't clear enough. To clarify further, what I want is to limit the instances of a pod to a maximum of 3 per node for a particular deployment. For example, how do I tell k8s not to deploy more than 3 instances of the nginx pod per node? The restriction should only be applied to the nginx deployment and not other deployments.</p>
<p>( Update 2 ) - To further explain with a scenario.
A k8s cluster, with 4 worker nodes.
2 Deployments</p>
<ol>
<li>A nginx deployment -> replicas = 10</li>
<li>A custom user agent deployment -> Replicas 10</li>
</ol>
<p>Requirement - Hey kubernetes, I want to schedule 10 Pods of the "custom user agent" pod (Pod #2 in this example) across 4 nodes, but I want to make sure that each node has a maximum of 3 pods of the 'custom user agent'. For the 'nginx' pod, there shouldn't be any such restriction, which means I don't mind if k8s schedules 5 nginx pods on one node and the other 5 on a second node.</p>
 | zeroweb | <p>I myself didn't find official documentation for this, but I think you can use <code>podAntiAffinity</code> with the <code>preferredDuringSchedulingIgnoredDuringExecution</code> option. This will discourage k8s from placing identical pods on a single node, but if that is not possible it will still schedule them on the most eligible existing node. Official doc <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">here</a>.</p>
<pre><code>affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
name: deployment-name
topologyKey: kubernetes.io/hostname
weight: 100
</code></pre>
| rumputkering |
<p>After watching a few videos on RBAC (role based access control) on Kubernetes (of which <a href="https://www.youtube.com/watch?v=U67OwM-e9rQ" rel="nofollow noreferrer">this one</a> was the most transparent for me), I've followed the steps, however on k3s, not k8s as all the sources imply. From what I could gather (it's not working), the problem isn't with the actual role binding process, but rather with the x509 user cert, which isn't acknowledged by the API service.</p>
<blockquote>
<p>$ kubectl get pods --kubeconfig userkubeconfig</p>
<p>error: You must be logged in to the server (Unauthorized)</p>
</blockquote>
<p>This is also not documented in <a href="https://rancher.com/docs/k3s/latest/en/security/" rel="nofollow noreferrer">Rancher's wiki</a> on security for K3s (while it is documented for their k8s implementation and described for <a href="https://rancher.com/docs/rancher/v2.x/en/admin-settings/rbac/" rel="nofollow noreferrer">rancher 2.x</a> itself), so I'm not sure if it's a problem with my implementation, or a k3s <-> k8s thing.</p>
<pre><code>$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
</code></pre>
<hr />
<p><strong>With duplication of the process, my steps are as follows:</strong></p>
<ol>
<li>Get k3s ca certs</li>
</ol>
<p>This was described to be under <em>/etc/kubernetes/pki</em> (k8s), however based on <a href="https://www.reddit.com/r/kubernetes/comments/f69h3y/help_k3s_where_to_find_ca_certificate_files/" rel="nofollow noreferrer">this</a> it seems to be at <em>/var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key)</em>.</p>
<ol start="2">
<li>Gen user certs from ca certs</li>
</ol>
<pre><code>#generate user key
$ openssl genrsa -out user.key 2048
#generate signing request from ca
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
</code></pre>
<p>... all good:
<a href="https://i.stack.imgur.com/mBqwM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBqwM.png" alt="enter image description here" /></a></p>
<ol start="3">
<li>Creating kubeConfig file for user, based on the certs:</li>
</ol>
<pre><code># Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
</code></pre>
<ul>
<li>Created config file:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: <server-ca.crt base64-encoded>
server: https://<k3s masterIP>:6443
name: home-pi4
contexts:
- context:
cluster: home-pi4
user: user
namespace: rbac
name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
user:
client-certificate-data: <user.crt base64-encoded>
client-key-data: <user.key base64-encoded>
</code></pre>
<ol start="4">
<li>Setup role & roleBinding (within specified namespace 'rbac')</li>
</ol>
<ul>
<li>role</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: user-rbac
namespace: rbac
rules:
- apiGroups:
- "*"
resources:
- pods
verbs:
- get
- list
</code></pre>
<ul>
<li>roleBinding</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: user-rb
namespace: rbac
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: user-rbac
subjects:
apiGroup: rbac.authorization.k8s.io
kind: User
name: user
</code></pre>
<hr />
<p>After all of this, I get fun times of...</p>
<pre><code>$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>Any suggestions please?</p>
<blockquote>
<p>Apparently this <a href="https://stackoverflow.com/questions/59940927/k3s-create-user-with-client-certificate">stackOverflow question</a> presented a solution to the problem, but following the github feed, it came more-or-less down to the same approach followed here (unless I'm missing something)?</p>
</blockquote>
| Paul | <p>As we can find in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user" rel="nofollow noreferrer">Kubernetes Certificate Signing Requests documentation</a>:</p>
<blockquote>
<p>A few steps are required in order to get a normal user to be able to authenticate and invoke an API.</p>
</blockquote>
<p><br>I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user <code>john</code> as an example).</p>
<hr />
<p>First, create PKI private key and CSR:</p>
<pre><code># openssl genrsa -out john.key 2048
</code></pre>
<p><strong>NOTE:</strong> <code>CN</code> is the name of the user and <code>O</code> is the group that this user will belong to</p>
<pre><code># openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
</code></pre>
<p>Then create a <code>CertificateSigningRequest</code> and submit it to a Kubernetes Cluster via <code>kubectl</code>.</p>
<pre><code># cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
> name: john
> spec:
> groups:
> - system:authenticated
> request: $(cat john.csr | base64 | tr -d '\n')
> signerName: kubernetes.io/kube-apiserver-client
> usages:
> - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 39s kubernetes.io/kube-apiserver-client system:admin Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 52s kubernetes.io/kube-apiserver-client system:admin Approved,Issued
</code></pre>
<p>Export the issued certificate from the <code>CertificateSigningRequest</code>:</p>
<pre><code># kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
</code></pre>
<p>With the certificate created, we can define the <code>Role</code> and <code>RoleBinding</code> for this user to access Kubernetes cluster resources. I will use the <code>Role</code> and <code>RoleBinding</code> similar to yours.</p>
<pre><code># cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: john-role
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: john-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
</code></pre>
<p>The last step is to add this user into the kubeconfig file (see: <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#add-to-kubeconfig" rel="nofollow noreferrer">Add to kubeconfig</a>)</p>
<pre><code># kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
</code></pre>
<p>Finally, we can change the context to <code>john</code> and check if it works as expected.</p>
<pre><code># kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
</code></pre>
<p>As you can see, it works as expected (user <code>john</code> only has <code>get</code> and <code>list</code> permissions).</p>
| matt_j |
<p>I have created a Kubernetes cluster in GCP and on top of it I have configured Jenkins in Kubernetes using the URL below:
<a href="https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial" rel="nofollow noreferrer">https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial</a></p>
<p>I am able to run a Jenkins build with normal commands; it creates a pod and the build succeeds. When I try to change the image for a Maven or Golang build, I am unable to complete it. When I try to change it, the pod keeps terminating and recreating.</p>
<p><a href="https://i.stack.imgur.com/pmPxz.png" rel="nofollow noreferrer">jenkins kubernetes configuration</a></p>
<p><a href="https://i.stack.imgur.com/l90Xv.png" rel="nofollow noreferrer">Jenkins pod template</a></p>
<p><a href="https://i.stack.imgur.com/Zl88D.png" rel="nofollow noreferrer">Jenkins build</a></p>
 | sai manikanta | <p>We can add a pod template in the Jenkins pipeline script to pull our custom image and run it as an agent (slave) pod. Use the format below:</p>
<p><div class="snippet" data-lang="js" data-hide="true" data-console="false" data-babel="false">
<div class="snippet-code snippet-currently-hidden">
<pre class="snippet-code-html lang-html prettyprint-override"><code>pipeline {
agent {
kubernetes {
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: maven
image: maven:alpine
command:
- cat
tty: true
- name: busybox
image: busybox
command:
- cat
tty: true
"""
}
}
stages {
stage('Run maven') {
steps {
container('maven') {
sh 'mvn -version'
}
container('busybox') {
sh '/bin/busybox'
}
}
}
}
}</code></pre>
| sai manikanta |
<p>According to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">referrence</a>, two of the options <code>kube-apiserver</code> takes are <code>--bind-address</code> and <code>--advertise-address</code> It appears to me that they conflict each other.</p>
<p>What is/are the actual difference(s) between the two?</p>
<p>Is <code>--bind-address</code> the address that the <code>kube-apiserver</code> process will listen on?</p>
<p>Is <code>--advertise-address</code> the address that <code>kube-apiserver</code> will advertise as the address that it will be listening on? If so, how does it advertise? Does it do some kind of a broadcast over the network?</p>
| Iresh Dissanayaka | <p>According to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">reference-kube-apiserver</a> that you are referencing:</p>
<blockquote>
<p>--advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)</p>
</blockquote>
<p>Those parameters are configurable, but please keep in mind they should be specified during cluster bootstrapping.</p>
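<p>For example, with kubeadm both values can be set at bootstrap time through its configuration file; a rough sketch (the addresses below are hypothetical):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10   # becomes kube-apiserver --advertise-address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    bind-address: 0.0.0.0          # passed through as kube-apiserver --bind-address
</code></pre>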
<h3><a href="https://kubernetes.io/docs/concepts/security/controlling-access/#api-server-ports-and-ips" rel="nofollow noreferrer">API server ports and IP addresses</a></h3>
<ul>
<li>default “Secure port” is <code>6443</code>, but can be changed with the
<code>--secure-port</code> flag. As described in the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">documentation</a> - master node should expose secure port for other cluster components to communicate with the Kubernetes API server.</li>
<li>default IP is first non-localhost network interface, but can be
changed with the <code>--bind-address</code> flag.</li>
</ul>
<p>The above-mentioned parameters (<code>--secure-port</code> and <code>--bind-address</code>) allow you to configure the network interface and secure port for the Kubernetes API.
As stated before, if you don't specify any values:</p>
<blockquote>
<p>By default it would be the first non-localhost network interface and port 6443.</p>
</blockquote>
<p>Please note that:<br />
<code>--advertise-address</code> will be used by <code>kube-apiserver</code> to advertise this address to the kubernetes controllers which are responsible for preparing endpoints for <code>kubernetes.default.svc</code> (the core <code>Service</code> responsible for communication between internal applications and the API server). This Kubernetes Service VIP is configured for per-node load-balancing by kube-proxy.<br />
More information on <code>kubernetes.default.svc</code> and kubernetes controller can be found <a href="https://networkop.co.uk/post/2020-06-kubernetes-default/" rel="nofollow noreferrer">here</a>.</p>
<h3><a href="https://v1-16.docs.kubernetes.io/docs/concepts/architecture/master-node-communication/" rel="nofollow noreferrer">Cluster <-> Master communication</a></h3>
<blockquote>
<p>All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443)
The kubernetes service is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.</p>
</blockquote>
<blockquote>
<p>There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.</p>
</blockquote>
<p>Additionally, you can find out more about communication within the cluster by reading <a href="https://v1-16.docs.kubernetes.io/docs/concepts/architecture/master-node-communication/" rel="nofollow noreferrer">master-node-communication</a> and <a href="https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/" rel="nofollow noreferrer">control-plane-node-communication</a>.</p>
| matt_j |
<p><em>SOS</em>
I'm trying to deploy the ELK stack on my Kubernetes cluster.</p>
<p>Elasticsearch, Metricbeat, Filebeat and Kibana are running on Kubernetes, but in Kibana there are <code>no Filebeat index logs</code>.
Kibana is accessible: <strong>URL</strong><a href="http://logging.halykmart.online/" rel="nofollow noreferrer"> here</a>.
Only the <strong><a href="http://logging.halykmart.online/app/kibana#/management/elasticsearch/index_management/home?_g=()" rel="nofollow noreferrer">MetricBeat</a></strong> index is available.</p>
<p><a href="https://i.stack.imgur.com/TuDvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TuDvU.png" alt="enter image description here" /></a></p>
<p><strong>I don't know where the issue is; please help me figure it out.
Any ideas???</strong></p>
<p><strong>Pods</strong>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 21h
es-mono-0 1/1 Running 0 19h
filebeat-4446k 1/1 Running 0 11m
filebeat-fwb57 1/1 Running 0 11m
filebeat-mk5wl 1/1 Running 0 11m
filebeat-pm8xd 1/1 Running 0 11m
kibana-86d8ccc6bb-76bwq 1/1 Running 0 24h
logstash-deployment-8ffbcc994-bcw5n 1/1 Running 0 24h
metricbeat-4s5tx 1/1 Running 0 21h
metricbeat-sgf8h 1/1 Running 0 21h
metricbeat-tfv5d 1/1 Running 0 21h
metricbeat-z8rnm 1/1 Running 0 21h
</code></pre>
<p><strong>SVC</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch LoadBalancer 10.245.83.99 159.223.240.9 9200:31872/TCP,9300:30997/TCP 19h
kibana NodePort 10.245.229.75 <none> 5601:32040/TCP 24h
kibana-external LoadBalancer 10.245.184.232 <pending> 80:31646/TCP 24h
logstash-service ClusterIP 10.245.113.154 <none> 5044/TCP 24h
</code></pre>
<p><strong>Logstash logs <a href="https://github.com/shukurew/logs/blob/main/logs.logstash" rel="nofollow noreferrer">logstash (Raw)</a></strong></p>
<p><strong>filebeat <a href="https://github.com/shukurew/logs/blob/main/logs.filebeat-4446k" rel="nofollow noreferrer">logs (Raw)</a></strong></p>
<p><code>kibana.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elk
labels:
run: kibana
spec:
replicas: 1
selector:
matchLabels:
run: kibana
template:
metadata:
labels:
run: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:6.5.4
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch.elk:9200/
- name: XPACK_SECURITY_ENABLED
value: "true"
#- name: CLUSTER_NAME
# value: elasticsearch
#resources:
# limits:
# cpu: 1000m
# requests:
# cpu: 500m
ports:
- containerPort: 5601
name: http
protocol: TCP
#volumes:
# - name: logtrail-config
# configMap:
# name: logtrail-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elk
labels:
#service: kibana
run: kibana
spec:
type: NodePort
selector:
run: kibana
ports:
- port: 5601
targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana-external
spec:
type: LoadBalancer
selector:
app: kibana
ports:
- name: http
port: 80
targetPort: 5601
</code></pre>
<p><code>filebeat.yaml</code></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elk
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ['logstash-service:5044']
setup.kibana.host: "http://kibana.elk:5601"
setup.kibana.protocol: "http"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: elk
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
#- name: data
# mountPath: /usr/share/filebeat/data
subPath: filebeat/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
#- name: data
# persistentVolumeClaim:
# claimName: elk-pvc
---
</code></pre>
<p><code>Metricbeat.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: elk
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
setup.kibana:
host: "kibana.elk:5601"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
namespace: elk
labels:
k8s-app: metricbeat
data:
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
serviceAccountName: metricbeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.5.4
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: data
mountPath: /usr/share/metricbeat/data
subPath: metricbeat/
volumes:
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-daemonset-modules
- name: data
persistentVolumeClaim:
claimName: elk-pvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: elk
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
</code></pre>
<p><code>Logstash.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: elk
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch.elk:9200"]
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elk
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elk
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
</code></pre>
<p><strong>Full src files<a href="https://github.com/shukurew/ELK-Stack-kubernetes" rel="nofollow noreferrer">(GitHub)</a></strong></p>
| IDIf Dsd | <p>Try using FluentD for log transport. Example manifest:
<code>fluentd.yaml</code></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: elk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.elk.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
resources:
limits:
memory: 512Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
| Shukurillo Baikhanov |
<p><a href="https://github.com/oussamabouchikhi/udagram-microservices" rel="nofollow noreferrer">Github repo</a></p>
<p>After I configured the <code>kubectl</code> with the AWS EKS cluster, I deployed the services using these commands:</p>
<pre><code>kubectl apply -f env-configmap.yaml
kubectl apply -f env-secret.yaml
kubectl apply -f aws-secret.yaml
# this is repeated for all services
kubectl apply -f svcname-deploymant.yaml
kubectl apply -f svcname-service.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/WT9Bn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WT9Bn.png" alt="enter image description here" /></a></p>
<p>The other services ran successfully but the reverse proxy returned an error and when I investigated by running the command <code>kubectl describe pod reverseproxy...</code>
I got this info:</p>
<p><a href="https://pastebin.com/GaREMuyj" rel="nofollow noreferrer">https://pastebin.com/GaREMuyj</a></p>
<p><strong>[Edited]</strong></p>
<p>After running the command <code>kubectl logs -f reverseproxy-667b78569b-qg7p</code> I get this:
<a href="https://i.stack.imgur.com/ixk7s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ixk7s.png" alt="enter image description here" /></a></p>
| Oussama Bouchikhi | <p>As <a href="https://stackoverflow.com/users/10008173/david-maze" title="75,885 reputation">David Maze</a> very rightly pointed out, your problem is not reproducible. You haven't provided all the configuration files, for example. However, the error you received clearly tells about the problem:</p>
<pre><code>host not found in upstream "udagram-users: 8080" in /etc/nginx/nginx.conf:11
</code></pre>
<p>This error makes it clear that you are trying to connect to host <code>udagram-users: 8080</code> as defined in file <code>/etc/nginx/nginx.conf</code> on line 11.</p>
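<p>For reference, a hedged sketch of what a resolvable upstream block typically looks like in <code>nginx.conf</code> (the service name and port are taken from your error message; everything else is illustrative). Note there must be no space between the name and the port, and the name must be resolvable by DNS from the proxy's namespace:</p>
<pre><code># hypothetical excerpt of /etc/nginx/nginx.conf
upstream users {
    server udagram-users:8080;   # Kubernetes Service name + port, no stray space
}
</code></pre>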
<blockquote>
<p>And how can I solve it please?</p>
</blockquote>
<p>You need to check the connection. (It is also possible that you entered the wrong hostname or port in the config). You mentioned that you are using multiple subnets:</p>
<blockquote>
<p>it is using 5 subnets</p>
</blockquote>
<p>In such a situation, it is very likely that there is no connection because the individual components operate on different networks and will never be able to communicate with each other. If you run all your containers on one network, it should work. If, on the other hand, you want to use multiple subnets, you need to ensure container-to-container communication across multiple subnets.</p>
<p>See also this <a href="https://stackoverflow.com/questions/33639138/docker-networking-nginx-emerg-host-not-found-in-upstream">similar problem</a> with many possible solutions.</p>
| Mikołaj Głodziak |
<p>We are moving EC2-backed Jenkins to Amazon EKS[Elastic Kubernetes Service] & EFS[Elastic File System] backed Jenkins. I have deployed Jenkins in EKS machine and it's opening and working fine. But to run our pipeline we need to install Python and AWS CLI in the slave node. But we don't know where and how to install them. Any help would be highly appreciated.</p>
<p><a href="https://i.stack.imgur.com/nrwPD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nrwPD.png" alt="enter image description here" /></a></p>
| Deepakvg | <p>You can get the publicly available image and include it in your pipeline.</p>
<p>This is how I run it on my Jenkins:</p>
<pre><code>pipeline {
options {
ansiColor('xterm')
}
environment {
}
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:v1.9.0-debug
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 1000m
memory: 2048Mi
command:
- cat
tty: true
- name: aws-cli
image: public.ecr.aws/bitnami/aws-cli:2.4.25
resources:
requests:
cpu: 200m
memory: 400Mi
limits:
cpu: 1024m
memory: 2048Mi
command:
- cat
tty: true
securityContext:
runAsUser: 0
fsGroup: 0
'''
}
}
stages {
stage ('GitLab') {
steps {
echo 'Building....'
updateGitlabCommitStatus name: 'build', state: 'running'
}
}
stage ('Configure AWS Credentials') {
steps {
withCredentials([[
$class: 'AmazonWebServicesCredentialsBinding',
accessKeyVariable: 'AWS_ACCESS_KEY_ID',
credentialsId: AWSCRED, // ID of credentials in Jenkins
secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]){
container('aws-cli') {
sh '''
ls -lha
aws sts get-caller-identity
'''
}
}
}
post{
success{
echo "==== IAM Role assumed successfully ===="
}
failure{
echo "==== IAM Role failed to be assumed ===="
}
}
}
...
</code></pre>
| jazzlighthart |
<p>The RabbitMQ cluster operator does not work in Kubernetes.<br />
I have a Kubernetes 1.17.17 cluster of 3 nodes, deployed with Rancher.
Following these instructions I installed the RabbitMQ cluster-operator:
<a href="https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html" rel="nofollow noreferrer">https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html</a><br />
<code>kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"</code><br />
That part works fine! But then
I created this very simple configuration for the instance, according to the documentation:</p>
<pre><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: rabbitmq
namespace: test-rabbitmq
</code></pre>
<p>I get this error: <code>error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims</code></p>
<p>After that I checked:<br />
<code>kubectl get storageclasses</code><br />
and saw that there were no resources! So I added the following StorageClass:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Then I created a PV and a PVC:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-data-sigma
labels:
type: local
namespace: test-rabbitmq
annotations:
volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
storageClassName: local-storage
capacity:
storage: 3Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rabbitmq-data
namespace: test-rabbitmq
spec:
storageClassName: local-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3Gi
</code></pre>
<p>I end up getting an error on the volume claim, which is generated automatically:
<a href="https://i.stack.imgur.com/MhUST.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MhUST.png" alt="enter image description here" /></a></p>
<pre><code>FailedBinding no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>Please help me understand this problem!</p>
| Le0n | <p>You can configure <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a>, e.g. <strong>Dynamic NFS provisioning</strong> as described in this <a href="https://medium.com/@myte/kubernetes-nfs-and-dynamic-nfs-provisioning-97e2afb8b4a9" rel="nofollow noreferrer">article</a>, or you can manually create a <code>PersistentVolume</code> (this is <strong>NOT</strong> a recommended approach).</p>
<p>I really recommend configuring dynamic provisioning -
it will allow you to generate <code>PersistentVolumes</code> automatically.</p>
<h3>Manually creating PersistentVolume</h3>
<p>As I mentioned, it isn't a recommended approach, but it may be useful when you want to check something quickly without configuring additional components.</p>
<p>First you need to create <code>PersistentVolume</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
type: Directory
</code></pre>
<p>And then create the <code>/mnt/rabbitmq</code> directory on the node where the <code>rabbitmq-server-0</code> <code>Pod</code> will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the <code>Pod</code> will be running.</p>
<p>As a result you can see that the <code>PersistentVolumeClaim</code> was bound to the newly created <code>PersistentVolume</code> and the <code>rabbitmq-server-0</code> <code>Pod</code> was created successfully:</p>
<pre><code># kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 10Gi RWO Recycle Bound test-rabbitmq/persistence-rabbitmq-server-0 11m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-rabbitmq persistentvolumeclaim/persistence-rabbitmq-server-0 Bound pv 10Gi RWO 11m
# kubectl get pod -n test-rabbitmq
NAME READY STATUS RESTARTS AGE
rabbitmq-server-0 1/1 Running 0 11m
</code></pre>
| matt_j |
<p>I was creating a service to clean up cached/unused images from all the nodes in AKS clusters. The implementation uses "<strong>crictl rmi --prune</strong>", which deletes all the images that do not have an active pod/container running at the time of the trigger, i.e. all unused images in the cache.</p>
<p>However, I was asked what happens if it deletes images which are required by AKS/K8s that are currently not in use but might be needed in the future? And what if their ImagePullPolicy is set to Never?</p>
<p>So, I had a few questions for AKS/K8s experts:</p>
<ul>
<li>I want to know what images are used by AKS/K8s service and in what scenarios?</li>
<li>What is the default ImagePullPolicy for images AKS/K8s might need, and
how can I check if the default ImagePullPolicy was changed?</li>
<li>Is "crictl rmi --prune" a recommended way to clean up cached/unused images? If not,
what is the recommended way to clean up cached images from
cluster nodes?</li>
</ul>
<p>Apologies if these questions sound silly, still a rookie with K8s :)</p>
<p>Thanks and Regards,</p>
<p>Junaid.</p>
| SafiJunaid | <p>That's the right way of cleaning up old/cached images.
More details here:
<a href="https://github.com/Azure/AKS/issues/1421" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1421</a></p>
<p>You can also use a DaemonSet YAML file which can clean-up automatically:
<a href="https://gist.github.com/alexeldeib/2a02ccb3db02ddb828a9c1ef04f2b955" rel="nofollow noreferrer">https://gist.github.com/alexeldeib/2a02ccb3db02ddb828a9c1ef04f2b955</a></p>
<hr />
<p>AKS runs its images from the mcr.microsoft.com repository for all the pods running in kube-system; by default the imagePullPolicy for those pods is "IfNotPresent" and you can't edit/change the policy of the system pods.</p>
<hr />
<p>You can see all the pods and their image details in the kube-system namespace.</p>
<p><strong>Example:</strong></p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
<p>(Pick one of the pods from the above output; here I am checking a kube-proxy pod.)</p>
<pre><code>kubectl get pod kube-proxy-cwrqw -n kube-system -o yaml
</code></pre>
<p>(From the output you can search for the ImagePullPolicy attribute value.)</p>
| Shiva Patpi |
<p>I got really confused since I am new to Kubernetes.
Is there any difference between a Kubernetes Endpoint and a ClusterIP?</p>
| Amine Ben Amor | <p>An Endpoint in Kubernetes is just an IP address and port tied to some resource, and you rarely want to think about them at all as they are just used by other resources like Services. It is though a Kubernetes resource that can be found, listed and described.</p>
<p>You can list all Endpoint resources in the cluster with <code>kubectl get endpoints -A</code></p>
<p>A ClusterIP on the other hand is just an IP address on the internal Kubernetes network. This is where all pods communicate with each other through Services.</p>
<p>A Service is a way for network traffic to flow to pods. If you want to expose a pod only on the internal network for other pods to communicate with, you would set up a Service with the type ClusterIP, and use the name of the service as a DNS name. Inside pods on the network you can always call the service names instead of the actual IP addresses.</p>
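<p>For illustration, a minimal sketch of such a ClusterIP Service (the names, label and ports are made up):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-internal-svc      # other pods can reach it at "my-internal-svc"
spec:
  type: ClusterIP            # internal-only virtual IP
  selector:
    app: my-app              # pods with this label receive the traffic
  ports:
  - port: 80                 # Service port
    targetPort: 8080         # container port; Endpoints are created from matching pods
</code></pre>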
<p>If you want the pods to be exposed external to the Kubernetes cluster you have multiple ways to do this. You can create a Service with type NodePort which will open the same port on all cluster nodes and direct traffic to the pod through that port. You could also set up a Service with type LoadBalancer, but it is a little more complicated depending on where your cluster is living.</p>
| CrowDev |
<p><strong>Hello!</strong></p>
<p>I tried to create a playbook for deploying the "AWX Operator" and Kubernetes, following the installation manual <a href="https://computingforgeeks.com/how-to-install-ansible-awx-on-ubuntu-linux/#comment-8810" rel="nofollow noreferrer">Install AWX Operator</a>.</p>
<p>I have the command:</p>
<pre><code>export NAMESPACE=awx
kubectl create ns ${NAMESPACE}
</code></pre>
<p>I created tasks:</p>
<pre><code>- name: Echo export NAMESPACE awx
shell: "export NAMESPACE=awx"
environment:
NAMESPACE: awx
- name: my_env_var
shell: "kubectl create ns NAMESPACE"
</code></pre>
<p>But I get an error:</p>
<pre><code>fatal: [jitsi]: FAILED! => {"changed": true, "cmd": "kubectl create ns NAMESPACE", "delta": "0:00:00.957414", "end": "2021-10-22 13:25:16.822714", "msg": "non-zero return code", "rc": 1, "start": "2021-10-22 13:25:15.865300", "stderr": "The Namespace \"NAMESPACE\" is invalid: metadata.name: Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')", "stderr_lines": ["The Namespace \"NAMESPACE\" is invalid: metadata.name: Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')"], "stdout": "", "stdout_lines": []}
</code></pre>
<p>Could you please help me with advice? <strong>Thank you.</strong></p>
| Oleg | <p>You have everything written in this error :)</p>
<p><strong>There is a problem with the command</strong></p>
<pre class="lang-yaml prettyprint-override"><code>kubectl create ns NAMESPACE
</code></pre>
<p>You want to create a namespace called <code>NAMESPACE</code>, which is wrong. <strong>You cannot use capital letters in the name of the namespace.</strong> You can get a hint from this message:</p>
<pre class="lang-yaml prettyprint-override"><code>Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc'
</code></pre>
<p>How to solve it? You need to change this line:</p>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns NAMESPACE"
</code></pre>
<p>You need to set your namespace properly, without capital letters.</p>
<p><strong>Examples:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns my-namespace"
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns my-name"
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns whatever-you-want"
</code></pre>
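<p>As a side note, an exported variable does not persist between separate <code>shell</code> tasks anyway, so you could instead parameterise the namespace directly in the task - a minimal sketch, with the variable name being illustrative:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Create the awx namespace
  shell: "kubectl create ns {{ awx_namespace }}"
  vars:
    awx_namespace: awx
</code></pre>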
| Mikołaj Głodziak |
<p>I have recently had a problem with my local Kubernetes cluster. When I use the command <code>kubectl exec -it curl -- bash</code> to run some commands on the pod called 'curl', I get some errors:
<a href="https://i.stack.imgur.com/WQir6.png" rel="nofollow noreferrer">error info</a></p>
<p>And here are the nodes' info:
<a href="https://i.stack.imgur.com/yvO7A.png" rel="nofollow noreferrer">nodes info</a></p>
<p>The pod 'curl' is working nicely on datanode-2 and the kubelet is listening on port 10250, but I don't know why I got the error info. Here is the <code>kubectl describe po curl</code> output:
<a href="https://i.stack.imgur.com/N8Myo.png" rel="nofollow noreferrer">curl pod describe</a></p>
<p>And here are the pods in the kube-system namespace; the CNI is flannel:
<a href="https://i.stack.imgur.com/4bzPf.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>The same thing happens when running <code>kubectl exec</code> on other pods (same on datanode-1). How can I solve this?</p>
| Jesse Stutler | <p>This error might be related to communication between the kube-apiserver.service (on the control nodes) and the kubelet.service (port 10250 by default).</p>
<p>To troubleshoot, you might want to SSH into the control node and run:</p>
<pre><code>telnet hostname(workernode) 10250
telnet privateip(workernode) 10250
</code></pre>
<p>If both telnet tests fail, it might be related to the firewall on the worker nodes, so you should open port 10250 on the worker nodes. To check if the kubelet is listening on this port:</p>
<pre><code>lsof -i :10250
</code></pre>
<p>If the telnet test fails with the hostname or public IP but works with the private IP, you should add the following flag to the kube-apiserver unit service file (located at /etc/systemd/system/kube-apiserver.service):</p>
<pre><code>--kubelet-preferred-address-types InternalIP
</code></pre>
<p>Save it, and then just run:</p>
<pre><code>systemctl daemon-reload
systemctl restart kube-apiserver
</code></pre>
| Zahid |
<p>My cluster certificates have expired and now I cannot execute any kubectl commands.</p>
<pre><code>root@node1:~# kubectl get ns
Unable to connect to the server: x509: certificate has expired or is not yet valid
root@node1:~#
</code></pre>
<p>I have created this cluster using Kubespray; the kubeadm version is v1.16.3 and kubernetesVersion is v1.16.3.</p>
<pre><code>root@node1:~# kubeadm alpha certs check-expiration
failed to load existing certificate apiserver-etcd-client: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
root@node1:~#
</code></pre>
<p>I found that the apiserver-etcd-client.crt and apiserver-etcd-client.key files are missing from the /etc/kubernetes/pki directory.</p>
<pre><code>root@node1:/etc/kubernetes/pki# ls -ltr
total 72
-rw------- 1 root root 1679 Jan 24 2020 ca.key
-rw-r--r-- 1 root root 1025 Jan 24 2020 ca.crt
-rw-r----- 1 root root 1679 Jan 24 2020 apiserver.key.old
-rw-r----- 1 root root 1513 Jan 24 2020 apiserver.crt.old
-rw------- 1 root root 1679 Jan 24 2020 apiserver.key
-rw-r--r-- 1 root root 1513 Jan 24 2020 apiserver.crt
-rw------- 1 root root 1675 Jan 24 2020 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Jan 24 2020 apiserver-kubelet-client.crt
-rw-r----- 1 root root 1675 Jan 24 2020 apiserver-kubelet-client.key.old
-rw-r----- 1 root root 1099 Jan 24 2020 apiserver-kubelet-client.crt.old
-rw------- 1 root root 1679 Jan 24 2020 front-proxy-ca.key
-rw-r--r-- 1 root root 1038 Jan 24 2020 front-proxy-ca.crt
-rw-r----- 1 root root 1675 Jan 24 2020 front-proxy-client.key.old
-rw-r----- 1 root root 1058 Jan 24 2020 front-proxy-client.crt.old
-rw------- 1 root root 1675 Jan 24 2020 front-proxy-client.key
-rw-r--r-- 1 root root 1058 Jan 24 2020 front-proxy-client.crt
-rw------- 1 root root 451 Jan 24 2020 sa.pub
-rw------- 1 root root 1679 Jan 24 2020 sa.key
root@node1:/etc/kubernetes/pki#
</code></pre>
<p>I have tried the following commands, but nothing worked and they all show errors:</p>
<pre><code>#sudo kubeadm alpha certs renew all
#kubeadm alpha phase certs apiserver-etcd-client
#kubeadm alpha certs apiserver-etcd-client --config /etc/kubernetes/kubeadm-config.yaml
</code></pre>
<p>Kubespray command:</p>
<pre><code>#ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
</code></pre>
<p>The above command ended with the below error:</p>
<p>FAILED! => {"attempts": 5, "changed": true, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.058756", "end": "2021-02-05 13:32:51.656901", "msg": "non-zero return code", "rc": 1, "start": "2021-02-05 13:31:36.598145", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}</p>
<pre><code># cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: master1_IP
bindPort: 6443
certificateKey: xxx
nodeRegistration:
name: node1
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
criSocket: /var/run/dockershim.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: cluster.local
etcd:
external:
endpoints:
- https://master1:2379
- https://master2:2379
- https://master3:2379
caFile: /etc/ssl/etcd/ssl/ca.pem
certFile: /etc/ssl/etcd/ssl/node-node1.pem
keyFile: /etc/ssl/etcd/ssl/node-node1-key.pem
dns:
type: CoreDNS
imageRepository: docker.io/coredns
imageTag: 1.6.0
networking:
dnsDomain: cluster.local
serviceSubnet: IP/18
podSubnet: IP/18
kubernetesVersion: v1.16.3
controlPlaneEndpoint: master1_IP:6443
certificatesDir: /etc/kubernetes/ssl
imageRepository: gcr.io/google-containers
apiServer:
</code></pre>
| Ajeesh Kannan | <p>First you need to renew expired certificates, use <code>kubeadm</code> to do this:</p>
<pre><code>kubeadm alpha certs renew apiserver
kubeadm alpha certs renew apiserver-kubelet-client
kubeadm alpha certs renew front-proxy-client
</code></pre>
<p>Next generate new <code>kubeconfig</code> files:</p>
<pre><code>kubeadm alpha kubeconfig user --client-name kubernetes-admin --org system:masters > /etc/kubernetes/admin.conf
kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
# instead of $(hostname) you may need to pass the name of the master node as in "/etc/kubernetes/kubelet.conf" file.
kubeadm alpha kubeconfig user --client-name system:node:$(hostname) --org system:nodes > /etc/kubernetes/kubelet.conf
kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
</code></pre>
<p>Copy new <code>kubernetes-admin</code> <code>kubeconfig</code> file:</p>
<pre><code>cp /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
<p>Finally you need to restart: <code>kube-apiserver</code>, <code>kube-controller-manager</code> and <code>kube-scheduler</code>. You can use below commands or just restart master node:</p>
<pre><code>sudo kill -s SIGHUP $(pidof kube-apiserver)
sudo kill -s SIGHUP $(pidof kube-controller-manager)
sudo kill -s SIGHUP $(pidof kube-scheduler)
</code></pre>
<p>Additionally you can find more information on <a href="https://github.com/kubernetes/kubeadm/issues/581" rel="nofollow noreferrer">github</a> and <a href="https://github.com/kubernetes/kubeadm/issues/581#issuecomment-471575078" rel="nofollow noreferrer">this answer</a> may be of great help to you.</p>
| matt_j |
<p>I have Kubernetes Master listening on Internal Openstack network address 192.168.6.6:6443. This machine has a floating IP associated for ssh based access (x.x.x.x) from my home. SSH received on the floating IP is sent to the internal IP. But this does not work for 6443 forwarding.</p>
<p>How do I access the K8S API server from my home when I can access the floating IP associated with the K8S master but not the internal IP on which the API server is listening.</p>
<p>I know the method of copying the config file to my local machine, but the config file has the IP address on which the master is listening, and that IP is not accessible from outside OpenStack.</p>
<p>Thanks for any help</p>
| SNK | <p>I managed to solve this by reinstantiating the k8s cluster on OpenStack and providing the floating IP as the "--apiserver-cert-extra-sans" parameter to kubeadm.</p>
<p>kubeadm init --apiserver-cert-extra-sans=</p>
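<p>(The flag value above appears to have been stripped by the HTML rendering. A hedged sketch of the full command, with <code>x.x.x.x</code> standing in for the floating IP mentioned in the question:)</p>
<pre><code>kubeadm init --apiserver-cert-extra-sans=x.x.x.x
</code></pre>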
| SNK |
<p>I have two services, say <code>svcA</code> and <code>svcB</code> that may sit in different namespaces or even in different k8s clusters. I want to configure the services so that <code>svcA</code> can refer to <code>svcB</code> using some constant address, then deploy an Istio <strong>Service Entry</strong> object depending on the environment to route the request. I will use Helm to do the deployment, so using a condition to choose the object to deploy is easy.</p>
<p>If <code>svcB</code> is in a completely different cluster, it is just like any external server and is easy to configure.</p>
<p>But when it is in a different namespace on the same cluster, I just could not get the <strong>Service Entry</strong> work. Maybe I don't understand all the options it provides.</p>
<h2>Istio objects</h2>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: demo-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.alias
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<h2>Update</h2>
<p>After doing some random/crazy tests, I found that the <em>alias</em> domain name must end with a well-known suffix like <code>.com</code> or <code>.org</code>; arbitrary suffixes like <code>.svc</code> or <code>.alias</code> won't work.</p>
<p>If I update the above <strong>ServiceEntry</strong> object like this, my application works.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.com
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<p>I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.</p>
<p>Is it implicit knowledge that only domain names like <code>.com</code> and <code>.org</code> are valid? I have left school for a long time.</p>
| David S. | <p>I have posted a community wiki answer to summarize the topic and paste the explanation of the problem:</p>
<p>After doing some random/crazy tests, I found that the <em>alias</em> domain name must end with a well-known suffix like <code>.com</code> or <code>.org</code>; arbitrary suffixes like <code>.svc</code> or <code>.alias</code> won't work.</p>
<p>If I update the above <strong>ServiceEntry</strong> object like this, my application works.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.com
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<p>I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.</p>
<p>Is it implicit knowledge that only domain names like <code>.com</code> and <code>.org</code> are valid? I have left school for a long time.</p>
<p><strong>Explanation:</strong>
You can find the <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry" rel="nofollow noreferrer">ServiceEntry</a> requirements in the official documentation, which describes how to set this value properly:</p>
<blockquote>
<p>The hosts associated with the ServiceEntry. Could be a DNS name with wildcard prefix.</p>
<ol>
<li>The hosts field is used to select matching hosts in VirtualServices and DestinationRules.</li>
<li>For HTTP traffic the HTTP Host/Authority header will be matched against the hosts field.</li>
<li>For HTTPs or TLS traffic containing Server Name Indication (SNI), the SNI value will be matched against the hosts field.</li>
</ol>
<p><strong>NOTE 1:</strong> When resolution is set to type DNS and no endpoints are specified, the host field will be used as the DNS name of the endpoint to route traffic to.</p>
<p><strong>NOTE 2:</strong> If the hostname matches with the name of a service from another service registry such as Kubernetes that also supplies its own set of endpoints, the ServiceEntry will be treated as a decorator of the existing Kubernetes service. Properties in the service entry will be added to the Kubernetes service if applicable. Currently, the only the following additional properties will be considered by <code>istiod</code>:</p>
<ol>
<li>subjectAltNames: In addition to verifying the SANs of the service accounts associated with the pods of the service, the SANs specified here will also be verified.</li>
</ol>
</blockquote>
<p>Based on <a href="https://github.com/istio/istio/issues/13436" rel="nofollow noreferrer">this issue</a>, you don't have to use an FQDN in your hosts field, but you do need to set a proper value to select matching hosts in VirtualServices and DestinationRules.</p>
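<p>As a side note (not part of the original answer): the <code>resolution</code> field itself normally takes one of the enum values <code>NONE</code>, <code>STATIC</code> or <code>DNS</code>. A sketch of how the same intent is usually expressed, with the in-cluster name moved under <code>endpoints</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
  - svcB.com
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  location: MESH_INTERNAL
  resolution: DNS
  endpoints:
  - address: svcB.svcb-ns.svc.cluster.local   # the real in-cluster service
</code></pre>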
| Mikołaj Głodziak |
<p>I have deployed the Bitnami EFK helm chart on the K8s cluster.
<strong><a href="https://github.com/bitnami/charts/tree/master/bitnami/fluentd" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/fluentd</a></strong></p>
<p>All pods run fine, but Fluentd is not showing any logs. I don't know if I am missing something in the config. However, the cluster is restricted, and I don't know if that makes any difference. I deployed the same EFK on an unrestricted cluster with the same configuration and it works totally fine.</p>
<pre><code>kkot@ltp-str-00-0085:~/logging-int$ kk get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-elasticsearch-coordinating-only-5f5656cdd5-9d4lj 1/1 Running 0 6h34m
elasticsearch-elasticsearch-coordinating-only-5f5656cdd5-h6lbd 1/1 Running 0 6h34m
elasticsearch-elasticsearch-data-0 1/1 Running 0 6h34m
elasticsearch-elasticsearch-data-1 1/1 Running 0 6h34m
elasticsearch-elasticsearch-master-0 1/1 Running 0 6h34m
elasticsearch-elasticsearch-master-1 1/1 Running 0 6h34m
fluentd-0 1/1 Running 0 6h10m
fluentd-4glgs 1/1 Running 2 6h10m
fluentd-59tzz 1/1 Running 0 5h43m
fluentd-b8bc8 1/1 Running 2 6h10m
fluentd-qfdcs 1/1 Running 2 6h10m
fluentd-sf2hk 1/1 Running 2 6h10m
fluentd-trvwx 1/1 Running 0 95s
fluentd-tzqw8 1/1 Running 2 6h10m
kibana-656d55f94d-8qf8f 1/1 Running 0 6h28m
kkot@ltp-str-00-0085:~/logging-int$ kk logs fluentd-qfdcs
</code></pre>
<p>Error Log:</p>
<pre><code>2021-02-24 10:52:15 +0000 [warn]: #0 pattern not matched: "{\"log\":\"2021-02-24 10:52:13 +0000 [warn]: #0 pattern not matched: \\"{\\\\"log\\\\":\\\\"
</code></pre>
<p>Has anyone faced the same issue? Thanks</p>
| kishorK | <p>Could you please share what configuration your forwarder is using?</p>
<p>In the latest version of the chart (3.6.2) it will use the following by default:</p>
<pre><code> configMapFiles:
fluentd.conf: |
# Ignore fluentd own events
<match fluent.**>
@type null
</match>
@include fluentd-inputs.conf
@include fluentd-output.conf
{{- if .Values.metrics.enabled }}
@include metrics.conf
{{- end }}
fluentd-inputs.conf: |
# HTTP input for the liveness and readiness probes
<source>
@type http
port 9880
</source>
# Get the logs from the containers running in the node
<source>
@type tail
path /var/log/containers/*.log
# exclude Fluentd logs
exclude_path /var/log/containers/*fluentd*.log
pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
tag kubernetes.*
read_from_head true
<parse>
@type json
</parse>
</source>
# enrich with kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
</code></pre>
<p>By the error log you shared:</p>
<pre><code>2021-02-24 10:52:15 +0000 [warn]: #0 pattern not matched: "{\"log\":\"2021-02-24 10:52:13 +0000 [warn]: #0 pattern not matched: \\"{\\\\"log\\\\":\\\\"
</code></pre>
<p>I notice two things:</p>
<ul>
<li>The fluentd pods seem to be collecting their own logs, which shouldn't be happening because of:
<blockquote>
<pre><code> # exclude Fluentd logs
exclude_path /var/log/containers/*fluentd*.log
</code></pre>
</blockquote>
</li>
<li>The JSON logs are not being parsed although it is configured:
<blockquote>
<pre><code> <parse>
@type json
</parse>
</code></pre>
</blockquote>
</li>
</ul>
<p>Maybe you have omitted the configMapFiles in your <code>values.yaml</code>?</p>
| Miguel Ruiz |
<p>I have one question regarding the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage" rel="nofollow noreferrer">access log of envoy</a>:</p>
<ul>
<li>I use the field host: <code>%REQ(:AUTHORITY)%</code>, can I remove the port?</li>
</ul>
<p>Or is there another field which includes the <code>AUTHORITY</code> and doesn't include the port?</p>
| JME | <p>First, the <code>"%REQ(:AUTHORITY)%"</code> field does not contain any information about the port. Look at this <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#format-strings" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>Format strings are plain strings, specified using the <code>format</code> key. They may contain either command operators or other characters interpreted as a plain string. The access log formatter does not make any assumptions about a new line separator, so one has to specified as part of the format string. See the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-default-format" rel="nofollow noreferrer">default format</a> for an example.</p>
</blockquote>
<blockquote>
<p>If custom format string is not specified, Envoy uses the following default format:</p>
</blockquote>
<pre><code>[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
"%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n
</code></pre>
<blockquote>
<p>Example of the default Envoy access log format:</p>
</blockquote>
<pre><code>[2016-04-15T20:17:00.310Z] "POST /api/v1/locations HTTP/2" 204 - 154 0 226 100 "10.0.35.28"
"nsq2http" "cc21d9b0-cf5c-432b-8c7e-98aeb7988cd2" "locations" "tcp://10.0.2.1:80"
</code></pre>
<p>Field <code>"%REQ(:AUTHORITY)%"</code> shows value <code>"locations"</code> and field <code>"%UPSTREAM_HOST%"</code> shows <code>"tcp://10.0.2.1:80"</code>.</p>
<p>You can customise your log format based on format keys.</p>
<p><a href="https://blog.getambassador.io/understanding-envoy-proxy-and-ambassador-http-access-logs-fee7802a2ec5" rel="nofollow noreferrer">Here</a> you can find good article about understanding these logs. Field <code>"%REQ(:AUTHORITY)%"</code> is value of the <code>Host</code> (HTTP/1.1) or <code>Authority</code> (HTTP/2) header. Look at <a href="https://twitter.com/askmeegs/status/1157029140693995521/photo/1" rel="nofollow noreferrer">this picture</a> to better understand.</p>
<p>I suppose you want to edit the field <code>"%UPSTREAM_HOST%"</code> It is impossible to remove the port from this field. You can find documentation with description of these fields <a href="https://www.bookstack.cn/read/envoyproxy-1.14/c5ab90d69db4830d.md" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>%UPSTREAM_HOST%</p>
<p>Upstream host URL (e.g., <a href="https://www.envoyproxy.io/docs/envoy/v1.14.0/configuration/observability/tcp:/ip:port" rel="nofollow noreferrer">tcp://ip:port</a> for TCP connections).</p>
</blockquote>
<p>I haven't found any other field that returns just an IP address without a port.</p>
<hr />
<p><strong>Answering your question:</strong></p>
<blockquote>
<ul>
<li>I use the field host: <code>%REQ(:AUTHORITY)%</code> , can I remove the port ?</li>
</ul>
</blockquote>
<p>No, because this field does not return a port at all.</p>
<blockquote>
<p>is there another fields which include the AUTHORITY and doesnt include the port?</p>
</blockquote>
<p>You can use the <code>%REQ(:AUTHORITY)%</code> field without the <code>"%UPSTREAM_HOST%"</code> field by creating your own custom log format. As far as I know, it is impossible to log only the IP address without the port.</p>
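<p>For illustration, a minimal sketch of a custom access log format that keeps <code>%REQ(:AUTHORITY)%</code> and simply drops <code>%UPSTREAM_HOST%</code> (the file-logger wrapper around it is an assumption, not taken from your setup):</p>
<pre><code>access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      text_format: |
        [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% "%REQ(:AUTHORITY)%"
</code></pre>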
| Mikołaj Głodziak |
<p>Sometimes we face nginx vulnerabilities,
so we need to fix them inside ingress-nginx,
but the <code>docker build -t</code> of the image is too slow.
The reason is that the Dockerfile internally runs a compile and <code>make install</code> process.
What parameters can I add to make the docker build process faster?</p>
<p>Although the docker build output suggests adding the <code>-j</code> parameter to make to increase the number of threads and speed up the process,
there is no make-related parameter inside the Dockerfile,
and it is not a good idea to modify the Dockerfile directly.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/main/images/nginx/rootfs/Dockerfile" rel="nofollow noreferrer">Source</a> of the dockerfile.</p>
| auggie321 | <p>There is no single good solution for speeding up the building of a Docker image; it depends on a number of things. That is why I am posting a community wiki answer to present as many proposed solutions as possible, referring to various tutorials.</p>
<hr />
<p>There are a few tricks you can use to speed up building Docker images.
First, I will present the solution from <a href="https://cloud.google.com/build/docs/speeding-up-builds#using_a_cached_docker_image" rel="nofollow noreferrer">Google Cloud</a>:</p>
<blockquote>
<p>The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can specify the cached image by adding the <code>--cache-from</code> argument in your build config file, which will instruct Docker to build using that image as a cache source.</p>
</blockquote>
<p>You can read more here about <a href="https://docs.semaphoreci.com/ci-cd-environment/docker-layer-caching/" rel="nofollow noreferrer">Docker Layer Caching</a>.</p>
<p>Another way is to <a href="https://vsupalov.com/5-tips-to-speed-up-docker-build/#tip-2-structure-your-dockerfile-instructions-like-an-inverted-pyramid" rel="nofollow noreferrer">structure your Dockerfile instructions like an inverted pyramid</a>:</p>
<blockquote>
<p>Each instruction in your Dockerfile results in an image layer being created. Docker uses layers to reuse work, and save bandwidth. Layers are cached and don’t need to be recomputed if:</p>
<ul>
<li>All previous layers are unchanged.</li>
<li>In case of COPY instructions: the files/folders are unchanged.</li>
<li>In case of all other instructions: the command text is unchanged.</li>
</ul>
<p>To make good use of the Docker cache, it’s a good idea to try and put layers where lots of slow work needs to happen, but which change infrequently early in your Dockerfile, and put quickly-changing and fast layers last. The result is like an inverted pyramid.</p>
</blockquote>
<p>You can also <a href="https://vsupalov.com/5-tips-to-speed-up-docker-build/#tip-2-structure-your-dockerfile-instructions-like-an-inverted-pyramid" rel="nofollow noreferrer">Only copy files which are needed for the next step</a>.</p>
<p>Look at these great tutorials about speeding up your Docker image builds:</p>
<ul>
<li><a href="https://vsupalov.com/5-tips-to-speed-up-docker-build" rel="nofollow noreferrer">5 Tips to Speed up Your Docker Image Build</a></li>
<li><a href="https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/" rel="nofollow noreferrer">Speed Up Your Development Flow With These Dockerfile Best Practices</a></li>
<li>Six Ways to Build Docker Images Faster (Even in Seconds)</li>
</ul>
<p>At the end, I will present one more method, described here - <a href="https://tsh.io/blog/speed-up-your-docker-image-build/" rel="nofollow noreferrer">How to speed up your Docker image build?</a> You can use the BuildKit tool.</p>
<blockquote>
<p>With Docker 18.09, a new builder was released. It’s called Buildkit. It is not used by default, so most of us are still using the old one. The thing is, Buildkit is much faster, even for such simple images!</p>
</blockquote>
<blockquote>
<p>The difference is about 18 seconds on an image that builds in the 70s. That’s a lot, almost 33%!</p>
</blockquote>
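<p>For illustration, a hedged sketch of both tips applied to the ingress-nginx image build (the registry/image names are placeholders, and the build context assumes the Dockerfile location linked in the question):</p>
<pre><code># Tip 1: reuse a previously pushed image as a layer cache
# (with BuildKit the cache image must have been built with BUILDKIT_INLINE_CACHE=1)
docker build \
  --cache-from example.registry/ingress-nginx-base:latest \
  -t example.registry/ingress-nginx-base:patched \
  images/nginx/rootfs/

# Tip 2: run the same build with the faster BuildKit builder
DOCKER_BUILDKIT=1 docker build -t example.registry/ingress-nginx-base:patched images/nginx/rootfs/
</code></pre>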
<p>Hope it helps ;)</p>
| Mikołaj Głodziak |
<p>I'm using the SeleniumGrid in the most recent version <code>4.1.2</code> in a Kubernetes cluster.</p>
<p>In many cases (I would say in about half) when I execute a test through the grid, the node fails to kill the processes and does not go back to being idle. The container then keeps using one full CPU all the time until I kill it manually.</p>
<p>The log in the container is the following:</p>
<pre><code>10:51:34.781 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
10:51:35.680 INFO [NodeServer.lambda$createHandlers$2] - Node has been added
Starting ChromeDriver 98.0.4758.102 (273bf7ac8c909cde36982d27f66f3c70846a3718-refs/branch-heads/4758@{#1151}) on port 39592
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
[1C6h4r6o1m2e9D1r2i3v.e9r8 7w]a[sS EsVtEaRrEt]e:d bsiuncdc(e)s sffauillleyd.:
Cannot assign requested address (99)
11:08:24.970 WARN [SeleniumSpanExporter$1.lambda$export$0] - {"traceId": "99100300a4e6b4fe2afe5891b50def09","eventTime": 1646129304968456597,"eventName": "No slot matched the requested capabilities. ","attributes"
11:08:44.672 INFO [OsProcess.destroy] - Unable to drain process streams. Ignoring but the exception being swallowed follows.
org.apache.commons.exec.ExecuteException: The stop timeout of 2000 ms was exceeded (Exit value: -559038737)
at org.apache.commons.exec.PumpStreamHandler.stopThread(PumpStreamHandler.java:295)
at org.apache.commons.exec.PumpStreamHandler.stop(PumpStreamHandler.java:180)
at org.openqa.selenium.os.OsProcess.destroy(OsProcess.java:135)
at org.openqa.selenium.os.CommandLine.destroy(CommandLine.java:152)
at org.openqa.selenium.remote.service.DriverService.stop(DriverService.java:281)
at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply(DriverServiceSessionFactory.java:183)
at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply(DriverServiceSessionFactory.java:65)
at org.openqa.selenium.grid.node.local.SessionSlot.apply(SessionSlot.java:143)
at org.openqa.selenium.grid.node.local.LocalNode.newSession(LocalNode.java:314)
at org.openqa.selenium.grid.node.NewNodeSession.execute(NewNodeSession.java:52)
at org.openqa.selenium.remote.http.Route$TemplatizedRoute.handle(Route.java:192)
at org.openqa.selenium.remote.http.Route.execute(Route.java:68)
at org.openqa.selenium.grid.security.RequiresSecretFilter.lambda$apply$0(RequiresSecretFilter.java:64)
at org.openqa.selenium.remote.tracing.SpanWrappedHttpHandler.execute(SpanWrappedHttpHandler.java:86)
at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64)
at org.openqa.selenium.remote.http.Route$CombinedRoute.handle(Route.java:336)
at org.openqa.selenium.remote.http.Route.execute(Route.java:68)
at org.openqa.selenium.grid.node.Node.execute(Node.java:240)
at org.openqa.selenium.remote.http.Route$CombinedRoute.handle(Route.java:336)
at org.openqa.selenium.remote.http.Route.execute(Route.java:68)
at org.openqa.selenium.remote.AddWebDriverSpecHeaders.lambda$apply$0(AddWebDriverSpecHeaders.java:35)
at org.openqa.selenium.remote.ErrorFilter.lambda$apply$0(ErrorFilter.java:44)
at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64)
at org.openqa.selenium.remote.ErrorFilter.lambda$apply$0(ErrorFilter.java:44)
at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64)
at org.openqa.selenium.netty.server.SeleniumHandler.lambda$channelRead0$0(SeleniumHandler.java:44)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
11:08:44.673 ERROR [OsProcess.destroy] - Unable to kill process Process[pid=75, exitValue=143]
11:08:44.675 WARN [SeleniumSpanExporter$1.lambda$export$0] - {"traceId": "99100300a4e6b4fe2afe5891b50def09","eventTime": 1646129316638154262,"eventName": "exception","attributes": {"driver.url": "http:\u002f\u002f
</code></pre>
<p>Here's an excerpt from the Kubernetes manifest:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: selenium-node-chrome
image: selenium/node-chrome:latest
...
env:
- name: TZ
value: Europe/Berlin
- name: START_XVFB
value: "false"
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
value: "true"
- name: SE_NODE_MAX_SESSIONS
value: "1"
envFrom:
- configMapRef:
name: selenium-event-bus-config
...
volumeMounts:
- name: dshm
mountPath: /dev/shm
...
volumes:
- name: dshm
emptyDir:
medium: Memory
</code></pre>
<p>The <code>selenium-event-bus-config</code> contains the following vars:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
SE_EVENT_BUS_HOST: selenium-hub
SE_EVENT_BUS_PUBLISH_PORT: "4442"
SE_EVENT_BUS_SUBSCRIBE_PORT: "4443"
</code></pre>
<p>Did I misconfigure anything? Has anyone any idea how I can fix this?</p>
| Max N. | <p>If you don't need to use Xvfb you can remove it from your code and your problem will be resolved.</p>
<blockquote>
<p>Apparently the issue resolves when removing the <code>START_XVFB</code> parameter. With a node with only the timezone config I did not yet have the problem.</p>
</blockquote>
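<p>Concretely, that would mean dropping the <code>START_XVFB</code> entry from the node manifest shown in the question - a sketch based on that env block:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: TZ
  value: Europe/Berlin
# START_XVFB removed
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
  value: "true"
- name: SE_NODE_MAX_SESSIONS
  value: "1"
</code></pre>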
<p>As a workaround, you can try changing your driver, for example to <a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow noreferrer">Chromedriver</a>. You can read about the differences between them <a href="https://stackoverflow.com/questions/41460168/what-is-difference-between-xvfb-and-chromedriver-and-when-to-use-them">here</a>.</p>
<p>See also <a href="https://github.com/SeleniumHQ/docker-selenium/issues/1500" rel="nofollow noreferrer">this similar problem</a>.</p>
| Mikołaj Głodziak |
<p>Now I read this config in kubernetes:</p>
<pre><code> containers:
- name: canal-admin-stable
image: 'dolphinjiang/canal-admin:v1.1.5'
ports:
- name: http
containerPort: 8089
protocol: TCP
resources:
limits:
cpu: '2'
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: admin-conf
mountPath: /home/canal/conf/application.yml
subPath: application.yml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
</code></pre>
<p>The volume mount path (<code>/home/canal/conf/application.yml</code>) is already the full file path. I think Kubernetes only overwrites the file application.yml, so why still specify the subPath <code>application.yml</code>? Why not write it like this:</p>
<pre><code>volumeMounts:
- name: admin-conf
mountPath: /home/canal/conf/
subPath: application.yml
</code></pre>
| Dolphin | <p>I was using init-containers to pass a config file to the main container and discovered the reasoning behind mountPath and subPath, as it was confusing to me too at first. The mountPath is always the destination inside the Pod that a volume gets mounted to. However, I discovered that if your mountPath is a directory with other config files in it, then that directory will be overwritten and the only thing left in it will be the file you mounted.</p>
<p>I only needed to mount part of the volume, such as a single file in a volume, so I used subPath to specify the part to be mounted within the mountPath.</p>
<p>Sometimes using just a mountPath is fine, but I had to also use a subPath to preserve the other config files in the directory.</p>
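<p>For illustration, a minimal sketch of that single-file pattern (all names are made up): only <code>app.yaml</code> is replaced, while the other files already present in <code>/etc/app/</code> are preserved:</p>
<pre><code>containers:
- name: app
  volumeMounts:
  - name: app-config
    mountPath: /etc/app/app.yaml   # full destination path of the single file
    subPath: app.yaml              # mount just this key from the volume
volumes:
- name: app-config
  configMap:
    name: app-config
</code></pre>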
| Alan |
<p>I have a Deployment which runs a simple Apache server. I want to execute some commands after the service is up. I am not quite sure how much time the post-action commands are going to take. I have "timeoutSeconds" set to more than "periodSeconds".</p>
<p>Kubernets Version: 1.25</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: readiness
spec:
replicas: 1
selector:
matchLabels:
app: readiness
template:
metadata:
labels:
app: readiness
spec:
containers:
- image: sujeetkp/readiness:3.0
name: readiness
resources:
limits:
memory: "500M"
cpu: "1"
readinessProbe:
httpGet:
path: /health_monitor
port: 80
initialDelaySeconds: 20
timeoutSeconds: 10
failureThreshold: 20
periodSeconds: 10
livenessProbe:
httpGet:
path: /health_monitor
port: 80
initialDelaySeconds: 60
timeoutSeconds: 10
failureThreshold: 20
periodSeconds: 10
startupProbe:
exec:
command:
- /bin/sh
- -c
- |-
OUTPUT=$(curl -s -o /dev/null -w %{http_code} http://localhost:80/health_monitor)
if [ $? -eq 0 ] && [ $OUTPUT -ge 200 ] && [ $OUTPUT -lt 400 ]
then
echo "Success" >> /tmp/post_action_track
if [ ! -f /tmp/post_action_success ]
then
# Trigger Post Action
sleep 60
echo "Success" >> /tmp/post_action_success
fi
else
exit 1
fi
initialDelaySeconds: 20
timeoutSeconds: 80
failureThreshold: 20
periodSeconds: 10
</code></pre>
<p>When I run this code, I see very strange results.
As "periodSeconds" is 10 and my script has a sleep of 60 seconds, shouldn't the startup probe trigger at least 6 times? But it only triggers 2 times.
I am checking the contents of the files /tmp/post_action_success and /tmp/post_action_track to identify how many times the probe triggers (by counting the number of "Success" lines inside the files).</p>
<p>Question: If the previous instance of startup probe is running, then is the startupProbe triggered on top of it or not ? If yes, then why it triggered only twice in my case.</p>
<p>Another observation:
When I set below options</p>
<pre><code>initialDelaySeconds: 20
timeoutSeconds: 5
failureThreshold: 20
periodSeconds: 10
</code></pre>
<p>Then the content of the file /tmp/post_action_success shows sleep/timeoutSeconds (60/5) = 12 "Success" entries.
Can someone please explain how this works?</p>
| Sujeet Padhi | <p>I think the reason you see the probe being triggered twice is because of <code>timeoutSeconds: 80</code>. See <a href="https://stackoverflow.com/questions/63308717/what-is-the-role-of-timeoutseconds-in-kubernetes-liveness-readiness-probes">this question</a>. Also the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">official doc</a> is quite handy in explaining the other fields.</p>
<p>Perhaps you can set <code>initialDelaySeconds: 61</code> instead of using <code>sleep</code> in your script?</p>
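<p>A sketch of what that could look like, reusing the probe settings from the question (this is an illustration, not a tested configuration):</p>
<pre><code>startupProbe:
  httpGet:
    path: /health_monitor
    port: 80
  initialDelaySeconds: 61   # covers the wait instead of sleeping inside the probe command
  timeoutSeconds: 10
  failureThreshold: 20
  periodSeconds: 10
</code></pre>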
| adelmoradian |
<p>Let's assume I have a node labeled with the labels <code>myKey1: 2</code>, <code>myKey2: 5</code>, <code>myKey3: 3</code>, <code>myKey4: 6</code>. I now want to check if one of those labels has a value greater than 4 and if so schedule my workload on this node. For that I use the following <code>nodeAffinity</code> rule:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- name: wl1
image: myImage:latest
imagePullPolicy: IfNotPresent
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: myKey1
operator: Gt
values:
- 4
nodeSelectorTerms:
- matchExpressions:
- key: myKey2
operator: Gt
values:
- 4
nodeSelectorTerms:
- matchExpressions:
- key: myKey3
operator: Gt
values:
- 4
nodeSelectorTerms:
- matchExpressions:
- key: myKey4
operator: Gt
values:
- 4
</code></pre>
<p>I would instead love to use something shorter to be able to address a bunch of similar labels like e.g.</p>
<pre class="lang-yaml prettyprint-override"><code> affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: myKey*
operator: Gt
values:
- 4
</code></pre>
<p>so basically using a <code>key</code>-wildcard and the different checks connected via a logical <code>OR</code>. Is this possible or is there another solution to check the value of multiple similar labels?</p>
| Wolfson | <p>As <a href="https://stackoverflow.com/users/1909531/matthias-m" title="9,753 reputation">Matthias M</a> wrote in the comment:</p>
<blockquote>
<p>I would add an extra label to all nodes, which should match. I think that was the simplest solution. Was that also a solution for you? <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node</a></p>
</blockquote>
<p>In your situation, it will actually be easier to just add another key and check only one condition (see the sketch at the end of this answer). Alternatively, you can try to use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="nofollow noreferrer">set-based values</a>:</p>
<blockquote>
<p>Newer resources, such as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer"><code>Job</code></a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer"><code>ReplicaSet</code></a>, and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><code>DaemonSet</code></a>, support <em>set-based</em> requirements as well.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>selector:
matchLabels:
component: redis
matchExpressions:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
</code></pre>
<blockquote>
<p><code>matchLabels</code> is a map of <code>{key,value}</code> pairs. A single <code>{key,value}</code> in the <code>matchLabels</code> map is equivalent to an element of <code>matchExpressions</code>, whose <code>key</code> field is "key", the <code>operator</code> is "In", and the <code>values</code> array contains only "value". <code>matchExpressions</code> is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both <code>matchLabels</code> and <code>matchExpressions</code> are ANDed together -- they must all be satisfied in order to match.</p>
</blockquote>
<p>For more about it read also <a href="https://stackoverflow.com/questions/55002334/kubernetes-node-selector-regex">this question</a>.</p>
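<p>To illustrate the "extra label" approach, here is a minimal sketch (the label name <code>high-capacity</code> is only an example): label the matching nodes once, then select on that single key.</p>
<pre><code>kubectl label node <node-name> high-capacity=true
</code></pre>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: high-capacity
          operator: In
          values:
          - "true"
</code></pre>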
| Mikołaj Głodziak |
<p>Getting the following when trying to attach visual studio code to a WordPress pod (using the kubernetes extension):</p>
<p><code>An error occurred attaching to the container</code></p>
<p>With the following showing up in the terminal:</p>
<pre><code>[3 ms] Remote-Containers 0.166.0 in VS Code 1.55.0 (c185983a683d14c396952dd432459097bc7f757f).
[48 ms] Start: Resolving Remote
[51 ms] Start: Run: kubectl exec -it wp-test-2-wordpress-65859bfc97-qtr9w --namespace default --container wordpress -- /bin/sh -c VSCODE_REMOTE_CONTAINERS_SESSION='5875030e-bcef-47a6-ada5-7f69edb5d9091617678415893' /bin/sh
[56 ms] Start: Run in container: id -un
[279 ms] 1001
[279 ms] Unable to use a TTY - input is not a terminal or the right kind of file
id: cannot find name for user ID 1001
[279 ms] Exit code 1
[281 ms] Command in container failed: id -un
</code></pre>
<p>I have no such problem doing the exact same operation on any other helm release. Only with Bitnami WordPress helm releases.</p>
| Nomnom | <p>That is because the Bitnami WordPress image (version <code>9.0.0</code>) was migrated to a "non-root" user approach. From now on, both the container and the Apache daemon run as user <code>1001</code>.</p>
<p>You can find more information in the <a href="https://github.com/bitnami/charts/tree/master/bitnami/wordpress#to-900" rel="nofollow noreferrer">Bitnami WordPress documentation</a>:</p>
<blockquote>
<p>The Bitnami WordPress image was migrated to a "non-root" user approach. Previously the container ran as the root user and the Apache daemon was started as the daemon user. From now on, both the container and the Apache daemon run as user 1001. You can revert this behavior by setting the parameters securityContext.runAsUser, and securityContext.fsGroup to 0. Chart labels and Ingress configuration were also adapted to follow the Helm charts best practices.</p>
</blockquote>
<p>This problem occurs because running the <code>id -un</code> command inside the WordPress Pod causes error:</p>
<pre><code>$ kubectl exec -it my-1-wordpress-756c595c9c-497xr -- bash
I have no name!@my-1-wordpress-756c595c9c-497xr:/$ id -un
id: cannot find name for user ID 1001
</code></pre>
<p>As a workaround, you can run WordPress as a <code>root</code> by setting the parameters <code>securityContext.runAsUser</code>, and <code>securityContext.fsGroup</code> to <code>0</code> as described in the <a href="https://github.com/bitnami/charts/tree/master/bitnami/wordpress#to-900" rel="nofollow noreferrer">Bitnami WordPress documentation</a>.</p>
<p>For demonstration purposes, I only changed the <code>containerSecurityContext.runAsUser</code> parameter:</p>
<pre><code>$ helm install --set containerSecurityContext.runAsUser=0 my-1 bitnami/wordpress
</code></pre>
<p>Then we can check the output of the <code>id -un</code> command:</p>
<pre><code>$ kubectl exec -it my-1-wordpress-7c44f695ff-f9j9f -- bash
root@my-1-wordpress-7c44f695ff-f9j9f:/# id -un
root
</code></pre>
<p>As you can see, the <code>id -un</code> command doesn't cause any problems and therefore we can now successfully connect to the specific container.</p>
<p>I know this workaround is not ideal as there are many advantages to using non-root containers.
Unfortunately, in this case, I don't know of any other workarounds without modifying the Dockerfile.</p>
| matt_j |
<p>I have a test environment cluster with 1 master and two worker nodes; all the basic pods are up and running.</p>
<pre><code>root@master:~/pre-release# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-jn4pl 1/1 Running 0 23h
coredns-74ff55c5b-lz5pq 1/1 Running 0 23h
etcd-master 1/1 Running 0 23h
kube-apiserver-master 1/1 Running 0 23h
kube-controller-manager-master 1/1 Running 0 23h
kube-flannel-ds-c7czv 1/1 Running 0 150m
kube-flannel-ds-kz74g 1/1 Running 0 150m
kube-flannel-ds-pb4f2 1/1 Running 0 150m
kube-proxy-dbmjn 1/1 Running 0 23h
kube-proxy-kfrdd 1/1 Running 0 23h
kube-proxy-wj4rk 1/1 Running 0 23h
kube-scheduler-master 1/1 Running 0 23h
metrics-server-67fb68f54c-4hnt7 1/1 Running 0 9m
</code></pre>
<p>Next, when I check the pod logs for the metrics server, I don't see any error messages either:</p>
<pre><code>root@master:~/pre-release# kubectl -n kube-system logs -f metrics-server-67fb68f54c-4hnt7
I0330 09:53:15.286101 1 serving.go:325] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0330 09:53:15.767767 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0330 09:53:15.767790 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0330 09:53:15.767815 1 secure_serving.go:197] Serving securely on [::]:4443
I0330 09:53:15.767823 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0330 09:53:15.767835 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0330 09:53:15.767857 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0330 09:53:15.767865 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0330 09:53:15.767878 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0330 09:53:15.767897 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0330 09:53:15.867954 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0330 09:53:15.868014 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0330 09:53:15.868088 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
</code></pre>
<p>Then I verified the metrics APIService:</p>
<pre><code>root@master:~/pre-release# kubectl describe apiservice v1beta1.metrics.k8s.io
Name: v1beta1.metrics.k8s.io
Namespace:
Labels: k8s-app=metrics-server
Annotations: <none>
API Version: apiregistration.k8s.io/v1
Kind: APIService
Metadata:
Creation Timestamp: 2021-03-30T09:53:13Z
Resource Version: 126838
UID: 6da11b3f-87d5-4de4-92a0-463219b23301
Spec:
Group: metrics.k8s.io
Group Priority Minimum: 100
Insecure Skip TLS Verify: true
Service:
Name: metrics-server
Namespace: kube-system
Port: 443
Version: v1beta1
Version Priority: 100
Status:
Conditions:
Last Transition Time: 2021-03-30T09:53:13Z
Message: failing or missing response from https://10.108.112.196:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.112.196:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
</code></pre>
<p>Finally, the Available condition has status False, ending with the above error.</p>
<p>Here is the deployment spec file:</p>
<pre><code> spec:
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
</code></pre>
<p>kubectl top nodes</p>
<pre><code>root@master:~# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
</code></pre>
<p>I have still not been able to find the solution since yesterday; could you please help me with this?</p>
| Gowmi | <p>In this case, adding <code>hostNetwork:true</code> under <code>spec.template.spec</code> to the <code>metrics-server</code> Deployment may help.</p>
<pre><code>...
spec:
hostNetwork: true
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
...
</code></pre>
<p>As we can find in the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">Kubernetes Host namespaces documentation</a>:</p>
<blockquote>
<p>HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.</p>
</blockquote>
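<p>If you prefer not to edit the manifest by hand, a strategic merge patch along these lines should give the same result (a sketch, applied to the stock <code>metrics-server</code> Deployment in <code>kube-system</code>):</p>
<pre><code>kubectl -n kube-system patch deployment metrics-server \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
</code></pre>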
| matt_j |
<p>Is there any open-source tool or logic by which I can get the continuous status of K8s resources? For example, if I have 10 deployments running on my K8s cluster, I want to keep checking the current operational status of those deployments. Similarly, this should hold for other resources like ReplicaSets, StatefulSets, DaemonSets, etc.
Currently, I have logic that is dependent on a specific deployment, hence I am looking for something that can be generic for all deployments.</p>
| Imran | <p>As <strong>@rock'n rolla</strong> correctly pointed out, <code>kubectl get -w</code> is a really easy way to get basic information about the state of different Kubernetes resources.</p>
<p>If you need a more powerful tool that can quickly visualize cluster health, you can use the <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">Kubernetes Dashboard</a> or one of the many <a href="https://octopus.com/blog/alternative-kubernetes-dashboards" rel="nofollow noreferrer">alternatives</a>.<br />
Personally I like to use the <a href="https://github.com/indeedeng/k8dash#k8dash---kubernetes-dashboard" rel="nofollow noreferrer">k8dash - Kubernetes Dashboard</a> because it's really easy to install and we can fully manage the cluster via the <code>k8dash</code> web interface.</p>
<hr />
<p>I will describe how you can install and use <code>k8dash</code>.</p>
<p>As described in the <a href="https://github.com/indeedeng/k8dash#getting-started" rel="nofollow noreferrer">k8dash - Getting Started</a> documentation, you can deploy <code>k8dash</code> with the following command:</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/indeedeng/k8dash/master/kubernetes-k8dash.yaml
</code></pre>
<p>To access <code>k8dash</code>, you must make it publicly visible using <code>Ingress</code> (see: <a href="https://github.com/indeedeng/k8dash#getting-started" rel="nofollow noreferrer">Running k8dash with Ingress</a>) or <code>NodePort</code> (see: <a href="https://github.com/indeedeng/k8dash#running-k8dash-with-nodeport" rel="nofollow noreferrer">Running k8dash with NodePort</a>).</p>
<p>After exposing the dashboard, the easiest way to log into it is to create a dedicated service account (see: <a href="https://github.com/indeedeng/k8dash#service-account-token" rel="nofollow noreferrer">Service Account Token</a>).</p>
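<p>As a sketch, the service account route usually boils down to something like this (the account name is an example, and <code>cluster-admin</code> can be replaced with a more restrictive role; on clusters that no longer auto-create token secrets you can use <code>kubectl create token</code> instead):</p>
<pre><code>kubectl create serviceaccount k8dash-sa
kubectl create clusterrolebinding k8dash-sa --clusterrole=cluster-admin --serviceaccount=default:k8dash-sa
# print the token and paste it into the k8dash login screen
kubectl describe secret $(kubectl get secrets | grep k8dash-sa-token | awk '{print $1}')
</code></pre>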
<p>Using <code>k8dash</code> UI dashboard is pretty intuitive, you can monitor <code>Nodes</code>, <code>Pods</code>, <code>ReplicaSets</code>, <code>Deployments</code>, <code>StatefulSets</code> and more. Real time charts help quickly track down poorly performing resources.</p>
<p><a href="https://i.stack.imgur.com/wRBNP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wRBNP.png" alt="enter image description here" /></a></p>
| matt_j |
<p>I have a k8s cluster with 3 nodes.
With the kubectl command I enter a pod shell and make some file edits:</p>
<pre><code>kubectl exec --stdin --tty <pod-name> -- /bin/bash
</code></pre>
<p>At this point I have one pod with the correct edits and the other 2 replicas with the old file.
My question is:
Is there a kubectl command that, starting from a specific pod, overwrites the current replicas in the cluster to create n equal pods?</p>
<p>Hope to be clear</p>
<p>So many thanks in advance
Manuel</p>
| Manuel Santi | <p>You can use a kubectl plugin called: <a href="https://github.com/predatorray/kubectl-tmux-exec" rel="nofollow noreferrer">kubectl-tmux-exec</a>.<br />
All information on how to install and use this plugin can be found on GitHub: <a href="https://github.com/predatorray/kubectl-tmux-exec#installation" rel="nofollow noreferrer">predatorray/kubectl-tmux-exec</a>.</p>
<p>As described in the <a href="https://github.com/predatorray/kubectl-tmux-exec/wiki/How-to-Install-Dependencies" rel="nofollow noreferrer">How to Install Dependencies</a> documentation.</p>
<blockquote>
<p>The plugin needs the following programs:</p>
<ul>
<li>gnu-getopt(1)</li>
<li>tmux(1)</li>
</ul>
</blockquote>
<hr />
<p>I've created a simple example to illustrate you how it works.</p>
<p>Suppose I have a <code>web</code> <code>Deployment</code> and want to create a <code>sample-file</code> file inside all (3) replicas.</p>
<pre><code>$ kubectl get deployment,pods --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/web 3/3 3 3 19m app=web
NAME READY STATUS RESTARTS AGE LABELS
pod/web-96d5df5c8-5gn8x 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-95r4c 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-wc9k5 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
</code></pre>
<p>I have the <code>kubectl-tmux_exec</code> plugin installed, so I can use it:</p>
<pre><code>$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-tmux_exec
$ kubectl tmux-exec -l app=web bash
</code></pre>
<p>After running the above command, Tmux will be opened and we can modify multiple Pods simultaneously:</p>
<p><a href="https://i.stack.imgur.com/AzO5B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AzO5B.png" alt="enter image description here" /></a></p>
| matt_j |
<p>In particular, %CPU/R, %CPU/L, %MEM/R, and %MEM/L. While I'm at it, what are the units of the memory and CPU (non-percentage) columns?</p>
| Alan Robertson | <p>They are explained in the K9s release notes <a href="https://github.com/derailed/k9s/blob/master/change_logs/release_v0.13.4.md#cpumem-metrics" rel="noreferrer">here</a></p>
<ul>
<li>%CPU/R Percentage of requested cpu</li>
<li>%MEM/R Percentage of requested memory</li>
<li>%CPU/L Percentage of limited cpu</li>
<li>%MEM/L Percentage of limited memory</li>
</ul>
| Alan |
<p>For some reason, I cannot easily pull the k8s.gcr.io images on my server. The workaround is to fetch the images locally and then move them to my server. My question is: how do I get the list of image versions for all kubeadm cluster containers in advance?</p>
| ccd | <p>To print a list of images <code>kubeadm</code> will use, you can run the following command:</p>
<pre><code>$ kubeadm config images list
</code></pre>
<p>As an example you can see the images that will be used for my <code>kubeadm</code> <code>v1.20</code>:</p>
<pre><code>$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
</code></pre>
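<p>Since you plan to fetch the images locally and move them to the server, a rough sketch of that workflow could look like this (assuming Docker is the container runtime on both machines; with containerd you would import the archive with <code>ctr -n k8s.io images import</code> instead):</p>
<pre><code># on a machine that can reach k8s.gcr.io
kubeadm config images pull
docker save $(kubeadm config images list) -o kubeadm-images.tar
# copy kubeadm-images.tar to the target server (scp, USB, ...), then on the server:
docker load -i kubeadm-images.tar
</code></pre>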
| matt_j |
<p>I have created a database and an accompanying user for it, but it appears I can't do backups with that user, and neither can I add the backup role to that user.
Having checked the documentation, I added a user, but this time at the admin database level (<code>use admin</code>), and added the backup role for it.</p>
<p>However when I attempt to do a backup I am getting an error <em>Failed: error dumping metadata: error creating directory for metadata file /var/backups/...: mkdir /var/backups/...: permission denied</em></p>
<p>Steps
1.</p>
<pre><code> `kubectl -n <namespace> exec -it <pod_name> -- sh`
</code></pre>
<p>(mongo is running in kubernetes)</p>
<p>2.</p>
<pre><code>`use admin`
</code></pre>
<p>(switch to admin user)</p>
<p>3.</p>
<pre><code>db.createUser( {user: "backupuser", pwd: "abc123", roles: ["root", "userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase","backup"], mechanisms:["SCRAM-SHA-256"]})
</code></pre>
<ol start="4">
<li></li>
</ol>
<pre><code> `db.getUsers({ filter: { mechanisms: "SCRAM-SHA-256" } })`
</code></pre>
<p>(Verify if user exists)</p>
<p>5.</p>
<pre><code> mongodump -u backupuser -p abc123 --authenticationDatabase admin -d TESTDB --out /var/backups/dump-25-05-22 --gzip
</code></pre>
<p>Is it even possible to amend permissions for this user in such a case, or should I be looking somewhere else?
In the container it seems I can't do any permission updates (for the group), but the user already has all permissions on /var/backups:</p>
<pre><code>ls -la
total 8
drwxr-xr-x 2 root root 4096 Feb 18 2021 .
drwxr-xr-x 1 root root 4096 Feb 18 2021 ..
</code></pre>
<p>I am not convinced either that I should be going even this far. The backup should execute out of the box as per the user I added.</p>
<p>What exactly am I missing ?</p>
| Golide | <p>There is nothing to do on the MongoDB side. The OS user that is running the <code>mongodump</code> command doesn't have the required filesystem permission on the output directory. To check if that's the case, you can try <code>sudo chmod 777 -R /var/backups/</code> (as a quick test) before running <code>mongodump</code>.</p>
| Arnob Saha |
<p>In my project, I need to test whether Guaranteed application pods evict any dummy application pods that are running. How do I ensure that the application pods always have the highest priority?</p>
| Vrushali | <p>The answer provided by the <a href="https://stackoverflow.com/users/6309601/p">P....</a> is very good and useful. By <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">Pod Priority and Preemption</a> you can achieve what you are up to.</p>
<p>However, apart from that, you can use dedicated solutions, for example in the clouds. Look at the <a href="https://cloud.google.com/blog/products/gcp/get-the-most-out-of-google-kubernetes-engine-with-priority-and-preemption" rel="nofollow noreferrer">Google cloud example</a>:</p>
<blockquote>
<p>Before priority and preemption, Kubernetes pods were scheduled purely on a first-come-first-served basis, and ran to completion (or forever, in the case of pods created by something like a Deployment or StatefulSet). This meant less important workloads could block more important, later-arriving, workloads from running—not the desired effect. Priority and preemption solves this problem.</p>
<p>Priority and preemption is valuable in a number of scenarios. For example, imagine you want to cap autoscaling to a maximum cluster size to control costs, or you have clusters that you can’t grow in real-time (e.g., because they are on-premises and you need to buy and install additional hardware). Or you have high-priority cloud workloads that need to scale up faster than the cluster autoscaler can add nodes. In short, priority and preemption lead to better resource utilization, lower costs and better service levels for critical applications.</p>
</blockquote>
<p>Additional guides for other clouds:</p>
<ul>
<li><a href="https://cloud.ibm.com/docs/containers?topic=containers-pod_priority" rel="nofollow noreferrer">IBM cloud</a></li>
<li><a href="https://www.eksworkshop.com/intermediate/201_resource_management/pod-priority/" rel="nofollow noreferrer">AWS cloud</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler" rel="nofollow noreferrer">Azure cloud</a></li>
<li><a href="https://docs.openshift.com/container-platform/4.7/nodes/pods/nodes-pods-priority.html" rel="nofollow noreferrer">RedHat Openshift</a></li>
</ul>
<p>See also <a href="https://mohaamer5.medium.com/kubernetes-pod-priority-and-preemption-943c58aee07d" rel="nofollow noreferrer">this useful tutorial</a>.</p>
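<p>As a minimal sketch of what Pod Priority and Preemption looks like in practice (the class name and value are arbitrary examples): you create a <code>PriorityClass</code> once and reference it from your application Pods; with requests equal to limits the Pod is also in the Guaranteed QoS class.</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000           # higher value = higher priority
globalDefault: false
description: "Priority class for the application pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:            # requests == limits -> Guaranteed QoS
        cpu: "500m"
        memory: "256Mi"
</code></pre>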
| Mikołaj Głodziak |
<p>I have all my env vars in .env files.
They get automatically loaded when I open my shell-terminal.</p>
<p>I normally render shell environment variables into my target files with <code>envsubst</code>. similar to the example below.</p>
<h3>What I search is a solution where I can pass a <code>dotenv</code>-file as well my <code>template</code>-file to a script which outputs the rendered result.</h3>
<p>Something like this:</p>
<pre><code>aScript --input .env.production --template template-file.yml --output result.yml
</code></pre>
<p>I want to be able to parse different environment variables into my yaml. The output should be sealed via "Sealed secrets" and finally saved in the regarding kustomize folder</p>
<pre><code>envsub.sh .env.staging templates/secrets/backend-secrets.yml | kubeseal -o yaml > kustomize/overlays/staging
</code></pre>
<p>I hope you get the idea.</p>
<hr />
<p>example</p>
<p><code>.env.production</code>-file:</p>
<p>FOO=bar
PASSWROD=abc</p>
<p>content of <code>template-file.yml</code></p>
<pre><code>stringData:
foo: $FOO
password: $PASSWORD
</code></pre>
<p>Then running this:</p>
<pre><code>envsubst < template-file.yml > file-with-vars.yml
</code></pre>
<p>the result is:</p>
<pre><code>stringData:
foo: bar
password: abc
</code></pre>
<p>My approach so far does not work because Dotenv also supports <strong>different environments</strong> like <code>.env</code>, <code>.env.production</code>, <code>.env.staging</code> asf..</p>
| Jan | <p>What about:</p>
<pre><code>#!/bin/sh
# envsub - subsitute environment variables
env=$1
template=$2
sh -c "
. \"$env\"
cat <<EOF
$(cat "$template")
EOF"
</code></pre>
<p>Usage:</p>
<pre><code>./envsub .env.production template-file.yaml > result.yaml
</code></pre>
<ul>
<li>A here-doc with an unquoted delimiter (<code>EOF</code>) expands variables, whilst preserving quotes, backslashes, and other shell sequences.</li>
<li><code>sh -c</code> is used like <code>eval</code>, to expand the command substitution, then run that output through a here-doc.</li>
<li>Be aware that this extra level of indirection creates potential for code injection, if someone can modify the yaml file.</li>
</ul>
<p>For example, adding this:</p>
<pre><code>EOF
echo malicious commands
</code></pre>
<p>But it does get the result you want.</p>
| dan |
<p>We deployed our .NET Core API in Kubernetes. Our container is up and running. When the request tries to connect to the Oracle DB we get the exception “TNS could not resolve the connect identifier specified”.</p>
<p>The connection string is coded in the appsettings..json file and we have passed this file during the build stage in the Dockerfile.</p>
<p>How can we resolve this?</p>
| user15292536 | <p>Initially I had provided only the DB name and credentials in the connection string. Adding the Oracle server IP and port number to the connection string resolved the issue.</p>
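<p>For anyone hitting the same error, the working format looked roughly like the sketch below (the key name <code>OracleDb</code>, host, port and service name are placeholders, not the real values):</p>
<pre><code>{
  "ConnectionStrings": {
    "OracleDb": "Data Source=<oracle-host-or-ip>:1521/<service-name>;User Id=<user>;Password=<password>;"
  }
}
</code></pre>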
| user15292536 |
<p>I'm unable to launch new pods despite resources seemingly being available.</p>
<p>Judging from the below screenshot there should be room for about 40 new pods.</p>
<p><a href="https://i.stack.imgur.com/CHW8S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CHW8S.png" alt="enter image description here" /></a></p>
<p>And also judging from the following screenshot the nodes seems fairly underutilized</p>
<p><a href="https://i.stack.imgur.com/myRet.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/myRet.png" alt="enter image description here" /></a></p>
<p>However I'm currently facing the below error message</p>
<pre><code>0/3 nodes are available: 1 Insufficient cpu, 2 node(s) had volume node affinity conflict.
</code></pre>
<p>And last night it was the following</p>
<pre><code>0/3 nodes are available: 1 Too many pods, 2 node(s) had volume node affinity conflict.
</code></pre>
<p>Most of my services require very little memory and cpu. And therefore their resources are configured as seen below</p>
<pre><code>resources:
limits:
cpu: 100m
memory: 64Mi
requests:
cpu: 100m
memory: 32Mi
</code></pre>
<p>Why can't I deploy more pods? And how do I fix this?</p>
| user672009 | <p>Your problem is "volume node affinity conflict".</p>
<p>From <a href="https://stackoverflow.com/a/55514852">Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict</a>:</p>
<blockquote>
<p>The error "volume node affinity conflict" happens when the persistent volume claims that the pod is using are scheduled on different zones, rather than on one zone, and so the actual pod was not able to be scheduled because it cannot connect to the volume from another zone.</p>
</blockquote>
<p>First, try to investigate exactly where the problem is. You can find a <a href="https://www.datree.io/resources/kubernetes-troubleshooting-fixing-persistentvolumeclaims-error" rel="nofollow noreferrer">detailed guide here</a>. You will need commands like:</p>
<pre><code>kubectl get pv
kubectl describe pv
kubectl get pvc
kubectl describe pvc
</code></pre>
<p>Then you can delete the PV and PVC and move pods to the same zone along with the PV and PVC.</p>
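<p>If the volumes are dynamically provisioned, a common way to avoid this class of problem in the future is a topology-aware <code>StorageClass</code> with <code>volumeBindingMode: WaitForFirstConsumer</code>, so the volume is only created after the Pod has been scheduled and therefore ends up in the same zone. A rough sketch (the provisioner is just an example — use the one for your platform):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/aws-ebs   # example; replace with your cloud's provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>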
| Mikołaj Głodziak |
<p>Pod lifecycle is managed by Kubelet in data plane.</p>
<p>As per the definition: If the liveness probe fails, the kubelet kills the container</p>
<p>Pod is just a container with dedicated network namespace & IPC namespace with a sandbox container.</p>
<hr />
<p>Say, if the Pod is single app container Pod, then upon liveness failure:</p>
<ul>
<li>Does kubelet kill the Pod?</li>
</ul>
<p>or</p>
<ul>
<li>Does kubelet kill the container (only) within the Pod?</li>
</ul>
| overexchange | <p>The kubelet uses liveness probes to know when to restart a container (<strong>NOT</strong> the entire Pod). If the liveness probe fails, the kubelet kills the container, and then the container may be restarted, however it depends on its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>.</p>
<hr />
<p>I've created a simple example to demonstrate how it works.</p>
<p>First, I've created an <code>app-1</code> Pod with two containers (<code>web</code> and <code>db</code>).
The <code>web</code> container has a liveness probe configured, which always fails because the <code>/healthz</code> path is not configured.</p>
<pre><code>$ cat app-1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: app-1
name: app-1
spec:
containers:
- image: nginx
name: web
livenessProbe:
httpGet:
path: /healthz
port: 8080
httpHeaders:
- name: Custom-Header
value: Awesome
- image: postgres
name: db
env:
- name: POSTGRES_PASSWORD
value: example
</code></pre>
<p>After applying the above manifest and waiting some time, we can describe the <code>app-1</code> Pod to check that only the <code>web</code> container has been restarted and the <code>db</code> container is running without interruption:<br />
<strong>NOTE:</strong> I only provided important information from the <code>kubectl describe pod app-1</code> command, not the entire output.</p>
<pre><code>$ kubectl apply -f app-1.yml
pod/app-1 created
$ kubectl describe pod app-1
Name: app-1
...
Containers:
web:
...
Restart Count: 4 <--- Note that the "web" container was restarted 4 times
Liveness: http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
...
db:
...
Restart Count: 0 <--- Note that the "db" container works fine
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
...
Normal Killing 78s (x2 over 108s) kubelet Container web failed liveness probe, will be restarted
...
</code></pre>
<p>We can connect to the <code>db</code> container to see if it is running:<br />
<strong>NOTE:</strong> We can use the <code>db</code> container even while the <code>web</code> container is being restarted.</p>
<pre><code>$ kubectl exec -it app-1 -c db -- bash
root@app-1:/#
</code></pre>
<p>In contrast, after connecting to the <code>web</code> container, we can observe that the liveness probe restarts this container:</p>
<pre><code>$ kubectl exec -it app-1 -c web -- bash
root@app-1:/# command terminated with exit code 137
</code></pre>
| matt_j |
<p>I am migrating <strong>Certificate</strong> from <code>certmanager.k8s.io/v1alpha1</code> to <code>cert-manager.io/v1</code>, however, I am getting this error</p>
<blockquote>
<p>error validating data: ValidationError(Certificate.spec): unknown field "acme" in io.cert-manager.v1.Certificate.spec</p>
</blockquote>
<p>My manifest</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: myapp-com-tls
namespace: default
spec:
secretName: myapp-com-tls
issuerRef:
name: letsencrypt-myapp-issuer
commonName: '*.myapp.com'
dnsNames:
- myapp.com
acme:
config:
- dns01:
provider: google-dns
domains:
- '*.myapp.com'
- myapp.com
</code></pre>
<p>I know that there is no more <code>acme</code>, but how do I migrate to the newer version?</p>
| Rodrigo | <p>The <code>cert-manager.io/v1</code> API version separates the roles of certificate <a href="https://cert-manager.io/docs/concepts/issuer/" rel="nofollow noreferrer">issuers</a> and certificates.</p>
<p>Basically, you need to <a href="https://cert-manager.io/docs/configuration/" rel="nofollow noreferrer">configure</a> a certificate issuer among the supported ones, like <a href="https://cert-manager.io/docs/configuration/acme/" rel="nofollow noreferrer">ACME</a>.</p>
<p>This issuer can be later used to obtain a certificate.</p>
<p>Please consider reading this tutorial about a certificate obtained from ACME with DNS validation in the <a href="https://cert-manager.io/docs/tutorials/acme/dns-validation/#issuing-an-acme-certificate-using-dns-validation" rel="nofollow noreferrer">cert-manager.io documentation</a>.</p>
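<p>To give an idea of how your old <code>acme</code> stanza maps onto the new API, here is a rough sketch of an <code>Issuer</code> with a DNS-01 Google CloudDNS solver plus a <code>Certificate</code> that references it (the e-mail, project ID and secret names are placeholders — adjust them to your environment and verify the fields against the linked documentation):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-myapp-issuer
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                  # placeholder
    privateKeySecretRef:
      name: letsencrypt-myapp-issuer-key
    solvers:
    - dns01:
        cloudDNS:
          project: my-gcp-project            # placeholder
          serviceAccountSecretRef:
            name: clouddns-dns01-sa          # placeholder secret holding the GCP key
            key: key.json
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-com-tls
  namespace: default
spec:
  secretName: myapp-com-tls
  issuerRef:
    name: letsencrypt-myapp-issuer
  commonName: '*.myapp.com'
  dnsNames:
  - '*.myapp.com'
  - myapp.com
</code></pre>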
| jccampanero |
<p>I'm using the <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">Javascript Kubernetes Client</a> and I'm trying to read all resources from a custom resource definition. In particular I want to run <code>kubectl get prometheusrule</code> (prometheusrule is my CRD).</p>
<p>I couldn't find a way to do this yet. I can read resources like this:</p>
<pre><code>const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
k8sApi.listNamespacedPod('default').then((res) => {
res.body.items.forEach(pod => console.log(pod.metadata.name));
});
</code></pre>
<p>But it does not provide a method for reading CRDs.</p>
<p>I also tried</p>
<pre><code>const k8Client = k8s.KubernetesObjectApi.makeApiClient(kc);
k8Client.read({ kind: "service"}).then(res => console.log(res));
</code></pre>
<p>But this way I get the error <code>UnhandledPromiseRejectionWarning: Error: Unrecognized API version and kind: v1 service</code></p>
<p>Any idea how I can achieve this?</p>
| Jonas | <p>You can use the <a href="https://github.com/kubernetes-client/javascript/blob/master/src/gen/api/customObjectsApi.ts#L1491" rel="nofollow noreferrer">listNamespacedCustomObject</a> function. This function has four required arguments as described below:</p>
<ul>
<li><strong>group</strong> - the custom resource's group name</li>
<li><strong>version</strong> - the custom resource's version</li>
<li><strong>namespace</strong> - the custom resource's namespace</li>
<li><strong>plural</strong> - the custom resource's plural name.</li>
</ul>
<hr />
<p>I've created a sample script that lists all <code>PrometheusRules</code> to illustrate how it works:</p>
<pre><code>$ cat list_rules.js
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi)
k8sApi.listNamespacedCustomObject('monitoring.coreos.com','v1','default', 'prometheusrules').then((res) => {
res.body.items.forEach(rule => console.log(rule.metadata.name));
});
</code></pre>
<p>We can check if it works as expected:</p>
<pre><code>$ node list_rules.js
prometheus-kube-prometheus-alertmanager.rules
prometheus-kube-prometheus-etcd
prometheus-kube-prometheus-general.rules
prometheus-kube-prometheus-k8s.rules
...
</code></pre>
| matt_j |
<p>I have following configuration of a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: academy-backend-service
spec:
selector:
app: academy-backend-app
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30081
</code></pre>
<p>Behind this service there is a deployment that runs a Docker image of a Spring Boot application that exposes port 8081.
When I try to reach the application from a browser on http://localhost:30081 I don't get anything (not reachable). However, if I connect from inside the minikube cluster, the application is available on http://{serviceip}:8081.
Any clues as to what is not configured properly? I thought that nodePort would be enough.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
academy-backend-service NodePort 10.97.44.87 <none> 8081:30081/TCP 34m
</code></pre>
| Cosmin D | <p>Use <code>NodePorts</code> to expose the service nodePort on all nodes in the cluster.</p>
<p>From <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<blockquote>
<p>NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.</p>
</blockquote>
<p>If you want to expose your service outside the cluster, use <code>LoadBalancer</code> type:</p>
<blockquote>
<p>LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.</p>
</blockquote>
<p>or use ingress-controller which is a reverse-proxy that routs traffics from outside to your cluster:
<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
| Reda E. |
<p>I'm attempting to incorporate a git-sync sidecar container into my Airflow deployment YAML so my private GitHub repo gets synced to my Airflow Kubernetes env every time I make a change in the repo.</p>
<p>So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).
</p>
<p>I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo.</p>
<pre><code>INFO: detected pid 1, running init handler
I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"]
I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags"
I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags"
I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03"
I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master"
I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03"
I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"
</code></pre>
<p> </p>
<p><strong>However, despite there being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI.</strong></p>
<p>This is our scheduler containers/volumes yaml definition for reference. We have something similar for workers and webserver</p>
<pre><code> containers:
- name: airflow-scheduler
image: <redacted>
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: "AIRFLOW_SERVICE_NAME-env"
env:
<redacted>
resources:
requests:
memory: RESOURCE_MEMORY
cpu: RESOURCE_CPU
volumeMounts:
- name: scripts
mountPath: /home/airflow/scripts
- name: dags-data
mountPath: /opt/airflow/dags
subPath: dags
- name: dags-data
mountPath: /opt/airflow/plugins
subPath: plugins
- name: variables-pools
mountPath: /home/airflow/variables-pools/
- name: airflow-log-config
mountPath: /opt/airflow/config
command:
- "/usr/bin/dumb-init"
- "--"
args:
<redacted>
- name: git-sync
image: registry.k8s.io/git-sync/git-sync:v3.6.5
args:
- "-wait=60"
- "-repo=<repo>"
- "-branch=master"
- "-root=/opt/airflow/dags"
- "-username=<redacted>"
- "-password-file=/etc/git-secret/token"
volumeMounts:
- name: git-secret
mountPath: /etc/git-secret
readOnly: true
- name: dags-data
mountPath: /opt/airflow/dags
volumes:
- name: scripts
configMap:
name: AIRFLOW_SERVICE_NAME-scripts
defaultMode: 493
- name: dags-data
emptyDir: {}
- name: variables-pools
configMap:
name: AIRFLOW_SERVICE_NAME-variables-pools
defaultMode: 493
- name: airflow-log-config
configMap:
name: airflow-log-configmap
defaultMode: 493
- name: git-secret
secret:
secretName: github-token
</code></pre>
<p>What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!</p>
| jorgeavelar98 | <p>Your problem could be probably related to the directory structure you are defining across the different containers.</p>
<p>It is unclear in your question but, according to your containers definitions, your git repository should contain at least <code>dags</code> and <code>plugins</code> as top level directories:</p>
<pre><code>/
├─ dags/
├─ plugins/
</code></pre>
<p>This structure resembles a typical <code>airflow</code> folder structure: I assume, that is the one you configured.</p>
<p>Then, please, try using this slightly modified version of your Kubernetes configuration:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: airflow-scheduler
image: <redacted>
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: "AIRFLOW_SERVICE_NAME-env"
env:
<redacted>
resources:
requests:
memory: RESOURCE_MEMORY
cpu: RESOURCE_CPU
volumeMounts:
- name: scripts
mountPath: /home/airflow/scripts
- name: dags-data
mountPath: /opt/airflow/dags
subPath: dags
- name: dags-data
mountPath: /opt/airflow/plugins
subPath: plugins
- name: variables-pools
mountPath: /home/airflow/variables-pools/
- name: airflow-log-config
mountPath: /opt/airflow/config
command:
- "/usr/bin/dumb-init"
- "--"
args:
<redacted>
- name: git-sync
image: registry.k8s.io/git-sync/git-sync:v3.6.5
args:
- "-wait=60"
- "-repo=<repo>"
- "-branch=master"
- "-root=/opt/airflow"
- "-username=<redacted>"
- "-password-file=/etc/git-secret/token"
volumeMounts:
- name: git-secret
mountPath: /etc/git-secret
readOnly: true
- name: dags-data
mountPath: /opt
volumes:
- name: scripts
configMap:
name: AIRFLOW_SERVICE_NAME-scripts
defaultMode: 493
- name: dags-data
emptyDir: {}
- name: variables-pools
configMap:
name: AIRFLOW_SERVICE_NAME-variables-pools
defaultMode: 493
- name: airflow-log-config
configMap:
name: airflow-log-configmap
defaultMode: 493
- name: git-secret
secret:
secretName: github-token
</code></pre>
<p>Note that we basically changed the <code>root</code> argument of the <code>git-sync</code> container removing <code>/dags</code>.</p>
<p>If it doesn't work, please, try including and tweaking the value of the <a href="https://github.com/kubernetes/git-sync/tree/release-3.x#primary-flags" rel="nofollow noreferrer"><code>--dest</code></a> <code>git-sync</code> flag, I think it could be of help as well.</p>
| jccampanero |
<p>When I run:</p>
<pre><code>kubectl get pods --field-selector=status.phase=Running
</code></pre>
<p>I see:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
k8s-fbd7b 2/2 Running 0 5m5s
testm-45gfg 1/2 Error 0 22h
</code></pre>
<p>I don't understand why this command gives me pods that are in the Error status.
According to <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">K8S api</a>, there is no such thing <code>STATUS=Error</code>.</p>
<p>How can I get only the pods that are in this Error status?</p>
<p>When I run:</p>
<pre><code>kubectl get pods --field-selector=status.phase=Failed
</code></pre>
<p>It tells me that there are no pods in that status.</p>
| Slava | <p>Using the <code>kubectl get pods --field-selector=status.phase=Failed</code> command you can display all Pods in the <code>Failed</code> phase.</p>
<p><code>Failed</code> means that all containers in the Pod have terminated, and at least one container has terminated in failure (see: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="noreferrer">Pod phase</a>):</p>
<blockquote>
<p>Failed - All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.</p>
</blockquote>
<p>In your example, both Pods are in the <code>Running</code> phase because at least one container is still running in each of these Pods.:</p>
<blockquote>
<p>Running - The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.</p>
</blockquote>
<p>You can check the current phase of Pods using the following command:</p>
<pre><code>$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
</code></pre>
<p>Let's check how this command works:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS
app-1 1/2 Error
app-2 0/1 Error
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
app-1 Running
app-2 Failed
</code></pre>
<p>As you can see, only the <code>app-2</code> Pod is in the <code>Failed</code> phase. There is still one container running in the <code>app-1</code> Pod, so this Pod is in the <code>Running</code> phase.</p>
<p>To list all pods with the <code>Error</code> status, you can simply use:</p>
<pre><code>$ kubectl get pods -A | grep Error
default app-1 1/2 Error
default app-2 0/1 Error
</code></pre>
<p>Additionally, it's worth mentioning that you can check the state of all containers in Pods:</p>
<pre><code>$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state}{"\n"}{end}'
app-1 {"terminated":{"containerID":"containerd://f208e2a1ff08c5ce2acf3a33da05603c1947107e398d2f5fbf6f35d8b273ac71","exitCode":2,"finishedAt":"2021-08-11T14:07:21Z","reason":"Error","startedAt":"2021-08-11T14:07:21Z"}} {"running":{"startedAt":"2021-08-11T14:07:21Z"}}
app-2 {"terminated":{"containerID":"containerd://7a66cbbf73985efaaf348ec2f7a14d8e5bf22f891bd655c4b64692005eb0439b","exitCode":2,"finishedAt":"2021-08-11T14:08:50Z","reason":"Error","startedAt":"2021-08-11T14:08:50Z"}}
</code></pre>
| matt_j |
<p>Currently I create a docker image by using the build command inside minikube itself. But I need to know: is there any way to copy a docker image from my system's docker daemon to minikube's docker daemon environment? I don't want to use Docker Hub.</p>
| I.vignesh David | <p>Yes, you can point your terminal to use the docker daemon inside minikube by running this,</p>
<pre class="lang-sh prettyprint-override"><code>$ eval $(minikube docker-env)
</code></pre>
<p>Then you can build your own image,</p>
<pre class="lang-sh prettyprint-override"><code>docker build -t my_image .
</code></pre>
<p>For more info, see <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a></p>
| Jinesi Yelizati |
<p>I have a master node and it works fine. When I get nodes it shows me the master node.
Now I want to add a new master node with the following command:</p>
<pre><code>kubeadm join 45.82.137.112:8443 --token 61vi23.i1qy9k2hvqc9k8ib --discovery-token-ca-cert-hash sha256:40617af1ebd8893c1df42f2d26c5f18e05be91b4e2c9b69adbeab1edff7a51ab --control-plane --certificate-key 4aafd2369fa85eb2feeacd69a7d1cfe683771181e3ee781ce806905b74705fe8
</code></pre>
<p>45.82.137.112 is my HAProxy IP, and I copied this command after creating the first master node.
After this command I get the following error:</p>
<pre><code>[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
</code></pre>
<p>and my first master node also disappears and fails. Everything in master1 is OK until I use the join command for another master node.</p>
| aria hamedian | <p>bro,I've solved the problem,my kubeadm version is 1.20.1
this is my join command:
<code>kubeadm join 192.168.43.122:6444 \ --token 689yfz.w60ihod0js5zcina \ --discovery-token-ca-cert-hash sha256:532de9882f2b417515203dff99203d7d7f3dd00a88eb2e8f6cbf5ec998827537 \ --control-plane \ --certificate-key 8792f355dc22227029a091895adf9f84be6eea9e8e65f0da4ad510843e54fbcf \ --apiserver-advertise-address 192.168.43.123</code></p>
<p>I just add the flag
<code>--apiserver-advertise-address 192.168.43.123</code></p>
| 浮光掠影 |
<p>I am trying to run a .NET application on AKS. I am building the image using a Dockerfile and using a deployment file to deploy on AKS from the container registry. It fails, showing status CrashLoopBackOff.</p>
<p><a href="https://i.stack.imgur.com/ZQCpa.png" rel="nofollow noreferrer">docker file with user and group creation</a>
<a href="https://i.stack.imgur.com/eCZCJ.png" rel="nofollow noreferrer">security context details in deployment.yaml file</a></p>
| Shashank Karukonda | <p>I had to change the port to 8080 and it worked after that. It seems we cannot use port 80 for a non-root user, since binding to ports below 1024 normally requires root privileges (or the CAP_NET_BIND_SERVICE capability).</p>
<p>I made the following changes in the Dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
RUN addgroup --system --gid 1000 custom-group && adduser --system --uid 1000 --ingroup custom-group --shell /bin/sh customuser
USER 1000
WORKDIR /app
#Serve on port 8080, we cannot serve on port 80 with a custom user that is not root
ENV ASPNETCORE_URLS http://+:8080
EXPOSE 8080
</code></pre>
| Shashank Karukonda |
<p>I have set up <code>jenkins</code> on GKE using the official helm <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">chart</a>.</p>
<p>I have also created an <code>nginx-ingress</code> controller installation using helm and I am able to access jenkins via <code>https://112.222.111.22/jenkins</code> where <code>112.222.111.22</code> is the static IP I am passing to the load balancer.</p>
<p>I am also able to create jobs.</p>
<p>However, when I try to spin up inbound remote agent:</p>
<pre><code>▶ java -jar agent.jar -noCertificateCheck -jnlpUrl https://112.222.111.22/jenkins/computer/My%20Builder%203/slave-agent.jnlp -secret <some_secret>
...
WARNING: Connect timed out
Feb 28, 2020 5:57:18 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: https://112.222.111.22/jenkins/ provided port:50000 is not reachable
java.io.IOException: https://112.222.111.22/jenkins/ provided port:50000 is not reachable
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:303)
at hudson.remoting.Engine.innerRun(Engine.java:527)
at hudson.remoting.Engine.run(Engine.java:488)
</code></pre>
<p>Why is that?</p>
| pkaramol | <p>I had a similar issue. I solved it by enabling "Use WebSocket": Jenkins Slave/Agent > Configure > Launch Method > Use WebSocket (enable) > Save.</p>
| AbhiOps |
<p>env</p>
<pre><code>k8s v1.20
cri containerd
system centos7.9
</code></pre>
<p>shell</p>
<pre><code>kubeadm init --service-cidr=172.30.0.0/16 --pod-network-cidr=10.128.0.0/14 --cri-socket=/run/containerd/containerd.sock --image-repository=registry.aliyuncs.com
</code></pre>
<p>the error log</p>
<pre><code>[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
</code></pre>
<p><code>journalctl -xeu kubelet</code></p>
<pre><code>Mar 09 22:26:51 master1 kubelet[3179]: I0309 22:26:51.252245 3179 kubelet_node_status.go:71] Attempting to register node master1
Mar 09 22:26:51 master1 kubelet[3179]: E0309 22:26:51.252670 3179 kubelet_node_status.go:93] Unable to register node "master1" with API server: Post "https://192.168.10.1:6443/api/v1/nodes": dial tcp 192.168
Mar 09 22:26:54 master1 kubelet[3179]: E0309 22:26:54.374695 3179 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once,
Mar 09 22:26:54 master1 kubelet[3179]: E0309 22:26:54.394116 3179 kubelet.go:2184] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: c
Mar 09 22:26:55 master1 kubelet[3179]: I0309 22:26:55.230293 3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:55 master1 kubelet[3179]: E0309 22:26:55.247912 3179 kubelet.go:2264] nodes have not yet been read at least once, cannot construct node object
Mar 09 22:26:55 master1 kubelet[3179]: I0309 22:26:55.348020 3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:55 master1 kubelet[3179]: E0309 22:26:55.717348 3179 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.10.1:644
Mar 09 22:26:56 master1 kubelet[3179]: I0309 22:26:56.231699 3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.134170 3179 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.134249 3179 event.go:218] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6b
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.136426 3179 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"master1.166ab2b6
Mar 09 22:26:57 master1 kubelet[3179]: I0309 22:26:57.230722 3179 kubelet.go:449] kubelet nodes not sync
Mar 09 22:26:57 master1 kubelet[3179]: E0309 22:26:57.844788 3179 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.10.1:6443/apis/coordination.k8s.io/v1/namespa
</code></pre>
| xlovepython | <p><code>Kubelet</code> wasn't healthy and I couldn't deal with it.</p>
<p>I've created a new virtual machine and used the same steps, it worked.</p>
<p>Additionally,</p>
<p>I found the answer by changing the version of <code>containerd</code>:</p>
<ul>
<li>from <code>containerd</code>: <code>1.4.1</code>,</li>
<li>to <code>containerd</code>: <code>1.4.4</code></li>
</ul>
<p>It worked; only the old virtual machine had problems. I guess it could be a bug.</p>
| xlovepython |
<p>I have a CRON JOB in Azure k8s which triggers once a day. Based on the condition written inside this CRON JOB (image), I need to start another application (Pods) which will do some processing and then die.</p>
| Ashley Alex | <p>I'll describe a solution that requires running <code>kubectl</code> commands from within the <code>CronJob</code> Pod. If the image you're using to create the <code>CronJob</code> doesn't have the <code>kubectl</code> command, you'll need to install it.<br />
In short, you can use a Bash script that creates Deployment (or Pod) and then you can mount that script in a volume to the <code>CronJob</code> Pod.</p>
<p>Below, I will provide a detailed step-by-step explanation.</p>
<hr />
<p>This is a really simple bash script that we'll mount to the <code>CronJob</code> Pod:<br />
<strong>NOTE:</strong> You may need to modify this script as needed. If the image you're using to create the <code>CronJob</code> doesn't have the <code>kubectl</code> command installed, you can install it in this script.</p>
<pre><code>$ cat deploy-script.sh
#!/bin/bash
echo "Creaing 'web-app'"
kubectl create deployment web-app --image=nginx
sleep 10
echo "Deleting 'web-app'"
kubectl delete deployment web-app
</code></pre>
<p>We want to run this script from inside the Pod, so I converted it to <code>ConfigMap</code> which will allow us to mount this script in a volume (see: <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="nofollow noreferrer">Using ConfigMaps as files from a Pod</a>):</p>
<pre><code>$ cat deploy-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: deploy-script
data:
deployScript.sh: |
#!/bin/bash
echo "Creaing 'web-app'"
kubectl create deployment web-app --image=nginx
sleep 10
echo "Deleting 'web-app'"
kubectl delete deployment web-app
$ kubectl apply -f deploy-script-configmap.yml
configmap/deploy-script created
</code></pre>
<p>Then I created a separate <code>cron-user</code> ServiceAccount with the <code>edit</code> Role assigned and our <code>CronJob</code> will run under this ServiceAccount:</p>
<pre><code>$ cat cron-job.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cron-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cron-user-binding
subjects:
- kind: ServiceAccount
name: cron-user
namespace: default
roleRef:
kind: ClusterRole
name: edit
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: deploy-cron-job
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: cron-user
volumes:
- name: deploy-script
configMap:
name: deploy-script
containers:
- name: cron-job-1
image: bitnami/kubectl
command: ["bash", "/mnt/deployScript.sh"]
volumeMounts:
- name: deploy-script
mountPath: /mnt/
</code></pre>
<p>After applying the above manifest, the <code>deploy-cron-job</code> CronJob was created:</p>
<pre><code>$ kubectl apply -f cron-job.yaml
serviceaccount/cron-user created
clusterrolebinding.rbac.authorization.k8s.io/cron-user-binding created
cronjob.batch/deploy-cron-job created
$ kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE
deploy-cron-job */2 * * * * False 1
</code></pre>
<p>After a while, the <code>CronJob</code> Pod was created and as we can see from the logs, it created a <code>web-app</code> Deployment as expected:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-cron-job-1626426840-6w6s9 0/1 Completed 0 21s
$ kubectl logs -f deploy-cron-job-1626426840-6w6s9
Creaing 'web-app'
deployment.apps/web-app created
Deleting 'web-app'
deployment.apps "web-app" deleted
</code></pre>
| matt_j |
<p>I have implemented a simple website where user can log in/register via some existing service (e.g Google). After log in a user can manage his Jupyter notebook - open/delete. Basically a user has an account where he can access his notebook. The website and Jupyter notebook are containerized by Docker and organized by Kubernetes.</p>
<p>Now the problem is to tie a user's Google authentication to access to the Jupyter notebook without asking the user for the notebook token/password. It is a problem because the URL at which the Jupyter notebook container is running is known and accessible to anyone (so disabling the password is not an option).</p>
<p>I searched for a way to make the container running at a certain URL accessible only via a redirect from another website (in my case the user account page) but I haven't found one. I also considered displaying the Jupyter notebook token on the home screen after deploying the notebook for every user, with the option to set a password for the notebook, but it seems very annoying to ask the user for a Google password and straight afterwards for a password/token to the notebook.</p>
<p>So I would like to know if there is an efficient way to tie authentication via Google to access to the Jupyter notebook, while requiring only the Google authentication.</p>
| Prunus Cerasus | <p>I found a working solution to my problem.</p>
<p>I generated my own token with the Python library <code>secrets</code>. I passed this token as an environment variable (JUPYTER_TOKEN) to the container running the Jupyter notebook, so the notebook uses it as its default token. Now I can redirect the user from the website to the Jupyter notebook using the manually generated token.</p>
<p>Simply:</p>
<ol>
<li>Generate token</li>
<li>Deploy Jupyter notebook container with token as env variable</li>
<li>Redirect user to https://{JUPYTER}.com/?token={TOKEN}</li>
</ol>
<p>Where JUPYTER is the host where the Jupyter notebook container is running and TOKEN is the manually generated token. This way the user is redirected without having to type a password manually, and nobody on the internet can access the Jupyter notebook without knowing the token.</p>
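<p>A minimal sketch of this flow, assuming the <code>jupyter/base-notebook</code> image (which honours the <code>JUPYTER_TOKEN</code> environment variable) and a hypothetical deployment name <code>user-notebook</code>:</p>
<pre><code># generate a random token for this user's notebook
TOKEN=$(python3 -c 'import secrets; print(secrets.token_hex(32))')

# deploy the notebook with the token injected as JUPYTER_TOKEN
kubectl create deployment user-notebook --image=jupyter/base-notebook
kubectl set env deployment/user-notebook JUPYTER_TOKEN="$TOKEN"

# after Google login, redirect the user to https://<jupyter-host>/?token=<TOKEN>
</code></pre>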
| Prunus Cerasus |
<p>I want to use a service per pod, but I couldn't find how to run a service per pod of type NodePort.</p>
<p>I was trying to use that, but every service is created of type ClusterIP
<a href="https://github.com/metacontroller/metacontroller/tree/master/examples/service-per-pod" rel="nofollow noreferrer">https://github.com/metacontroller/metacontroller/tree/master/examples/service-per-pod</a></p>
<p>I need to create each pod together with a service of type NodePort.</p>
<p>Current configuration</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx
annotations:
service-per-pod-label: "statefulset.kubernetes.io/nginx"
service-per-pod-ports: "80:80"
spec:
selector:
matchLabels:
app: nginx
serviceName: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
terminationGracePeriodSeconds: 1
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
name: web
</code></pre>
| korallo | <p><code>ClusterIP</code> is the default type. Define the type <code>NodePort</code> in the spec section if you want to use that.</p>
<pre><code>spec:
type: NodePort
</code></pre>
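<p>For reference, a complete standalone Service of type <code>NodePort</code> targeting one of the StatefulSet pods could look like the sketch below; the service name, selector label and <code>nodePort</code> value are only examples, and if the Services are generated by the metacontroller hook, the generated template would presumably need the same <code>type</code> field:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-0
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: nginx-0
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
</code></pre>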
| Sakib Md Al Amin |
<p>I installed Harbor v2.0.1 in Kubernetes v1.18. Now when I log in to Harbor, it gives me this message:</p>
<pre><code>{"errors":[{"code":"FORBIDDEN","message":"CSRF token invalid"}]}
</code></pre>
<p>This is my Traefik 2.2.1 ingress config (this is the <a href="https://github.com/goharbor/harbor-helm/blob/1.3.0/templates/ingress/ingress.yaml#L24" rel="nofollow noreferrer">doc</a> I am referencing):</p>
<pre><code>spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host(`harbor-portal.dolphin.com`) && PathPrefix(`/c/`)
services:
- name: harbor-harbor-core
port: 80
- kind: Rule
match: Host(`harbor-portal.dolphin.com`)
services:
- name: harbor-harbor-portal
port: 80
</code></pre>
<p>The Harbor core logs only show ping success messages. Should I be using HTTPS? I am learning on a local machine; is HTTPS mandatory? I searched the internet and found very little about this. What should I do to make it work?</p>
<p><a href="https://i.stack.imgur.com/UUtRu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUtRu.png" alt="enter image description here" /></a></p>
<p>I read the source code and tried this in the Harbor core pod:</p>
<pre><code>harbor [ /harbor ]$ curl --insecure -w '%{http_code}' -d 'principal=harbor&password=Harbor123456' http://localhost:8080/c/login
{"errors":[{"code":"FORBIDDEN","message":"CSRF token invalid"}]}
</code></pre>
| Dolphin | <p>My expose type is nodePort. Modify the values.yaml file and change "externalURL: https" to "externalURL: http":</p>
<p>before: externalURL: <a href="https://10.240.11.10:30002" rel="nofollow noreferrer">https://10.240.11.10:30002</a><br />
after: externalURL: <a href="http://10.240.11.10:30002" rel="nofollow noreferrer">http://10.240.11.10:30002</a></p>
<p>and then reinstall Harbor.</p>
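<p>If Harbor was installed with the official Helm chart, the same change can also be applied as an upgrade instead of a full reinstall. A sketch, assuming the release is named <code>harbor</code> and uses the <code>goharbor/harbor</code> chart:</p>
<pre><code>helm upgrade harbor goharbor/harbor \
  --set expose.type=nodePort \
  --set externalURL=http://10.240.11.10:30002
</code></pre>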
| user296224 |
<p>I have an EKS Kubernetes cluster. High level the setup is:</p>
<p>a) There is an EC2 instance, lets call it "VM" or "Host"</p>
<p>b) In the VM, there is a POD running 2 containers: Side Car HAProxy Container + MyApp Container</p>
<p>What happens is that when external requests come, inside of HAProxy container, I can see that the source IP is the "Host" IP. As the Host has a single IP, there can be a maximum of 64K connections to HAProxy.</p>
<p>I'm curious to know how to workaround this problem as I want to be able to make like 256K connections per Host.</p>
| Luis Serrano | <p>I'm not sure if you understand the reason for the <code>64k</code> limit, so I'll try to explain it.</p>
<p>First, here is a good <a href="https://stackoverflow.com/a/2332756/13946204">answer</a> about the <code>64k</code> limitation.</p>
<p>Let's say that <code>HAProxy</code> (<code>192.168.100.100</code>) is listening on port <code>8080</code> and the free ports on the <code>Host</code> (<code>192.168.1.1</code>) are 1,353~65,353, so you have combinations of:</p>
<pre><code>source 192.168.1.1:1353~65353 → destination 192.168.100.100:8080
</code></pre>
<p>That is 64k <strong>simultaneous</strong> connections. I don't know how often the NAT table is updated, but after an update unused ports will be reused, so <strong>simultaneous</strong> is the important word here.</p>
<p>If your only problem is the limit of connections per IP, here are a couple of solutions:</p>
<ol>
<li>Run multiple <code>HAProxy</code> instances. Three containers increase the limit to 64,000 X 3 = 192,000</li>
<li>Listen on multiple ports in <code>HAProxy</code> (see <a href="https://stackoverflow.com/a/14388707/13946204">SO_REUSEPORT</a>). Three ports (<code>8080</code>, <code>8081</code>, <code>8082</code>) increase the maximum number of connections to 192,000</li>
</ol>
<p>The <code>Host</code> interface IP acts like a gateway for the Docker internal network, so I am not sure whether it is possible to assign multiple IPs to the <code>Host</code> or to <code>HAProxy</code>. At least I didn't find information about it.</p>
| rzlvmp |
<p>A question that I can’t seem to find a good answer to and am so confused as to what should be an easy answer. I guess you can say I can't see the forest for the trees.</p>
<p>I have N Pods that need read-only access to the same shared FILE SYSTEM (containing 20 GB of PDFs and HTML). I don't want to copy it to each Pod and create a Volume in Docker. How should I handle this? I don't see this as a stateful app where each pod gets its own changed data.</p>
<ul>
<li><p>Node-0</p>
<ul>
<li>Pod-0
<ul>
<li>Java-app - NEED READ ACCESS TO SHARED FILE SYSTEM</li>
</ul>
</li>
</ul>
</li>
<li><p>Node-1</p>
<ul>
<li>Pod-1
<ul>
<li>Java-app - NEED READ ACCESS TO SHARED FILE SYSTEM</li>
</ul>
</li>
</ul>
</li>
<li><p>Node-2</p>
<ul>
<li>Pod-2
<ul>
<li>Java-app - NEED READ ACCESS TO SHARED FILE SYSTEM</li>
</ul>
</li>
</ul>
</li>
</ul>
| user3008410 | <p>The question is quite open-ended. However having gone through this thought process recently, I can come up with two options:</p>
<ol>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><strong>hostPath</strong></a>: You mount this data (probably from NFS or some such) at <code>/data</code> (for example) on all your Kubernetes nodes. Then in Pod specification, you attach volume of type <code>hostPath</code> with <code>path: /data</code>.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs" rel="nofollow noreferrer"><strong>GlusterFS volume</strong></a>: This is the option we finally went for, for high availability and speed requirements. The same GlusterFS volume (having the PDFs, data, etc.) can be attached to multiple Pods.</p>
</li>
</ol>
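<p>To illustrate option 1, here is a minimal Pod sketch assuming the shared data is already present read-only at <code>/data</code> on every node; the image name and paths are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
  - name: java-app
    image: my-java-app:latest   # placeholder image
    volumeMounts:
    - name: shared-docs
      mountPath: /data
      readOnly: true
  volumes:
  - name: shared-docs
    hostPath:
      path: /data               # directory pre-populated on each node
      type: Directory
</code></pre>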
| seshadri_c |
<p>I have a fresh install of Kubernetes, running on an Amazon EC2.</p>
<p>kubectl get node reports that the node isn't ready after <code>kubeadm init</code>.</p>
<p>The first error in the logs is shown below.</p>
<pre><code>journalctl -u kubelet
error failed to read kubelet config file "/var/lib/kubelet/config.yaml"
</code></pre>
<p>Investigation of this error has not returned any results that apply to a fresh install; the existing suggestions assume the file is missing, has the wrong permissions, or is empty.</p>
<p>The file as it appears on my EC2 belongs to the same user and group as all its peers. It exists and is populated (876 bytes).</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
</code></pre>
<p>I have tried applying the flannel from this question: <a href="https://stackoverflow.com/questions/58024643/kubernetes-master-node-not-ready-state/58026895">Kubernetes master node not ready state</a></p>
<p>Additionally, I have tried applying Daemonsets from: <a href="https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/</a></p>
<p>Please let me know your thoughts, thank you.</p>
| Stephen Feyrer | <p>The underlying issues remain, as the master is not functional, but the Readiness issue is resolved.</p>
<p>These commands were applied from another question: <a href="https://stackoverflow.com/questions/44086826/kubeadm-master-node-never-ready">kubeadm: master node never ready</a></p>
<pre><code> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
</code></pre>
<p>Docker still shows the containers as paused.</p>
| Stephen Feyrer |
<p><strong>CONTEXT:</strong></p>
<p>I'm in the middle of planning a migration of kubernetes services from one cluster to another, the clusters are in separate GCP projects but need to be able to communicate across the clusters until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).</p>
<p>We run Anthos service mesh (v1.12) in GKE clusters.</p>
<p><strong>PROBLEM:</strong></p>
<p>I need to find a way to do the following:</p>
<ul>
<li>PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'</li>
<li>Running in the same cluster this resolves fine as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal kubernetes FQDN).</li>
<li>However, when I run PodA on the new cluster I need serviceA's hostname to actually resolve back to the internal load balancer on the other cluster, and not on its local cluster (and namespace), seen as serviceA is still running on the old cluster.</li>
</ul>
<p>I'm using an istio ServiceEntry resource to try and achieve this, as follows:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: serviceA
namespace: default
spec:
hosts:
- serviceA.default.svc.cluster.local
location: MESH_EXTERNAL
ports:
- number: 50051
name: grpc
protocol: GRPC
resolution: STATIC
endpoints:
- address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: resources
namespace: default
spec:
hosts:
- 'serviceA.default.svc.cluster.local'
gateways:
- mesh
http:
- timeout: 5s
route:
- destination:
host: serviceA.default.svc.cluster.local
</code></pre>
<p>This doesn't appear to work and I'm getting <code>Error: 14 UNAVAILABLE: upstream request timeout</code> errors on PodA running in the new cluster.</p>
<p>I can confirm that running <code>telnet</code> to the hostname from another pod on the mesh appears to work (i.e. I don't get a connection timeout or connection refused).</p>
<p>Is there a limitation on what you can use in the hosts on a <code>serviceentry</code>? Does it have to be a .com or .org address?</p>
<p>The only way I've got this to work properly is to use a hostAlias in PodA to add a hostfile entry for the hostname, but I really want to try and avoid doing this as it means making the same change in lots of files, I would rather try and use Istio's serviceentry to try and achieve this.</p>
<p>Any ideas/comments appreciated, thanks.</p>
| sc-leeds | <p>Fortunately I came across someone with a similar (but not identical) issue, and the answer in this <a href="https://stackoverflow.com/a/64622137/15383310">stackoverflow post</a> gave me the outline of what kubernetes (and istio) resources I needed to create.</p>
<p>I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.</p>
<p>The end result was this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: serviceA
namespace: default
spec:
type: ExternalName
externalName: serviceA.example.com
ports:
- name: grpc
protocol: TCP
port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: serviceA
namespace: default
spec:
hosts:
- serviceA.example.com
location: MESH_EXTERNAL
ports:
- number: 50051
name: grpc
protocol: TCP
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: serviceA
namespace: default
spec:
hosts:
- serviceA.default.svc.cluster.local
http:
- timeout: 5s
route:
- destination:
host: serviceA.default.svc.cluster.local
rewrite:
authority: serviceA.example.com
</code></pre>
| sc-leeds |
<p>I want to use Kong’s Capturing Group in an Ingress k8s object to perform a URI rewrite.
I want to implement the following logic:
https://kong_host:30000/service/audits/health -> (rewrite) https://kong_host:30000/service/audit/v1/health</p>
<p>Ingress resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: audits
annotations:
konghq.com/plugins: audits-rewrite
spec:
rules:
- http:
paths:
- path: /service/audits/(?<path>\\\S+)
backend:
serviceName: audits
servicePort: 8080
</code></pre>
<p>KongPlugin</p>
<pre><code>apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
name: audits-rewrite
config:
replace:
uri: /service/audit/v1/$(uri_captures["path"])
plugin: request-transformer
</code></pre>
<p>Thanks.</p>
| mohamed.Yassine.SOUBKI | <p>As pointed out in the documentation, you are not able to use the <code>v1beta1</code> ingress API version to capture groups in paths.</p>
<p><a href="https://docs.konghq.com/hub/kong-inc/request-transformer/#examples" rel="nofollow noreferrer">https://docs.konghq.com/hub/kong-inc/request-transformer/#examples</a></p>
<p>You need to upgrade your <code>k8s</code> cluster to version <code>1.19</code> or higher to use this feature.</p>
<p>I also had a similar problem and resolved it with the following configuration:</p>
<p>Ingress resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: audits
annotations:
konghq.com/plugins: audits-rewrite
spec:
rules:
- http:
paths:
- path: /service/audits/(.*)
backend:
serviceName: audits
servicePort: 8080
</code></pre>
<p>KongPlugin</p>
<pre><code>apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
name: audits-rewrite
config:
replace:
uri: /service/audit/v1/$(uri_captures[1])
plugin: request-transformer
</code></pre>
| Michał Lewndowski |
<p>I have this cron job running on kubernetes:</p>
<pre><code># cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: loadjob
spec:
schedule: "05 10 31 Mar *"
successfulJobsHistoryLimit: 3
jobTemplate:
spec:
template:
metadata: # Dictionary
name: apiaplication
labels: # Dictionary
product_id: myprod
annotations:
vault.security.banzaicloud.io/vault-role: #{KUBERNETES_NAMESPACE}#
prometheus.io/path: /metrics
prometheus.io/port: "80"
prometheus.io/scrape: "true"
spec:
containers:
- name: al-onetimejob
image: #{TIMELOAD_IMAGE_TAG}#
imagePullPolicy: Always
restartPolicy: OnFailure
imagePullSecrets:
- name: secret
</code></pre>
<p>In the above cron expression I have set it to today morning 10.05AM using cron syntax schedule: <code>05 10 31 Mar *</code> - but unfortunately when I checked after 10.05 my job (pod) was not running.</p>
<p>So I found it's not running as expected at 10.05 using the above expression. Can someone please help me to write the correct cron syntax? Any help would be appreciated. Thanks</p>
| Niranjan | <p>I think this should do the trick: <code>5 10 31 MAR FRI</code>.</p>
<p>I can recommend using a website like <a href="https://crontab.guru" rel="nofollow noreferrer">this</a> to compose your <code>cron</code> schedules.</p>
| Michał Lewndowski |
<p>I have an application that is deployed on a Kubernetes cluster. I access this application using a Rancher namespace; by specifying this namespace I can run "get pods" and see all the information.
Now I want to control this application with Helm. What do I need to do?
I have Helm installed on the same machine as my kubectl installation.</p>
| Rajesh | <p>If you want to "control" applications on Kubernetes cluster with Helm, you should start with <a href="https://helm.sh/docs/topics/charts/" rel="noreferrer">helm charts</a>. You can create some if one is not already available. Once you have chart(s), you can target the Kubernetes cluster with the cluster's <code>KUBECONFIG</code> file.</p>
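<p>If a chart does not exist yet, a skeleton can be generated and then edited to fit the application (the chart name here is just an example):</p>
<pre><code>helm create my-test-app
</code></pre>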
<p>If I had a Helm chart like <code>my-test-app</code> and a Kubernetes cluster called <code>my-dev-cluster</code>.</p>
<p>With Helm I can:</p>
<ul>
<li><p>deploy - <code>install</code></p>
<pre><code>helm install test1 my-test-app/ --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
</code></pre>
</li>
<li><p>update - <code>upgrade</code></p>
<pre><code>helm upgrade test1 my-test-app/ --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
</code></pre>
</li>
<li><p>remove - <code>uninstall</code></p>
<pre><code>helm uninstall test1 --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
</code></pre>
</li>
</ul>
<p>Where <code>my-dev-cluster.kubeconfig</code> is the kubeconfig file for my cluster in <code>~/.kubeconfigs</code> directory. Or you can set the path using <code>KUBECONFIG</code> environment variable.</p>
| seshadri_c |
<p>I am working on a Kubernetes - Elasticsearch deployment.</p>
<p>I have followed documentation provided by elastic.co (<a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html</a>)</p>
<p>My YAML file for elastic is below:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.8.0
nodeSets:
- name: default
count: 1
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
podTemplate:
spec:
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms2g -Xmx2g
resources:
requests:
memory: 4Gi
cpu: 0.5
limits:
memory: 4Gi
cpu: 2
EOF
</code></pre>
<p>But I am getting the below error when I describe the created pod:</p>
<pre><code>Name: quickstart-es-default-0
Namespace: default
Priority: 0
Node: <none>
Labels: common.k8s.elastic.co/type=elasticsearch
controller-revision-hash=quickstart-es-default-55759bb696
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/config-hash=178912897
elasticsearch.k8s.elastic.co/http-scheme=https
elasticsearch.k8s.elastic.co/node-data=true
elasticsearch.k8s.elastic.co/node-ingest=true
elasticsearch.k8s.elastic.co/node-master=true
elasticsearch.k8s.elastic.co/node-ml=true
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
elasticsearch.k8s.elastic.co/version=7.8.0
statefulset.kubernetes.io/pod-name=quickstart-es-default-0
Annotations: co.elastic.logs/module: elasticsearch
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/quickstart-es-default
Init Containers:
elastic-internal-init-filesystem:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Port: <none>
Host Port: <none>
Command:
bash
-c
/mnt/elastic-internal/scripts/prepare-fs.sh
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
/mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
sysctl:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Port: <none>
Host Port: <none>
Command:
sh
-c
sysctl -w vm.max_map_count=262144
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Containers:
elasticsearch:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 500m
memory: 4Gi
Readiness: exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
Environment:
ES_JAVA_OPTS: -Xms2g -Xmx2g
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
PROBE_PASSWORD_PATH: /mnt/elastic-internal/probe-user/elastic-internal-probe
PROBE_USERNAME: elastic-internal-probe
READINESS_PROBE_PROTOCOL: https
HEADLESS_SERVICE_NAME: quickstart-es-default
NSS_SDB_USE_CACHE: no
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-quickstart-es-default-0
ReadOnly: false
downward-api:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
elastic-internal-elasticsearch-bin-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-config:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-default-es-config
Optional: false
elastic-internal-elasticsearch-config-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-plugins-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-http-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-http-certs-internal
Optional: false
elastic-internal-probe-user:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-internal-users
Optional: false
elastic-internal-remote-certificate-authorities:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-remote-ca
Optional: false
elastic-internal-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-scripts
Optional: false
elastic-internal-transport-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-transport-certificates
Optional: false
elastic-internal-unicast-hosts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-unicast-hosts
Optional: false
elastic-internal-xpack-file-realm:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-xpack-file-realm
Optional: false
elasticsearch-logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling 20m (x3 over 21m) default-scheduler 0/2 nodes are available: 2 Insufficient memory.
</code></pre>
<p><strong>Question 2:</strong>
I have created two EC2 servers (t2.large), one master and one worker.
I am using a 300 GB HDD for both servers.</p>
<p><strong>I have following pv</strong></p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 200Gi RWO Retain Available
</code></pre>
<p>I am using the below code to create a claim for my elastic.</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.8.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: gp2
EOF
</code></pre>
<p><strong>Storage class:</strong>(I created and make it as default)</p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
gp2 (default) kubernetes.io/aws-ebs Delete Immediate false
</code></pre>
<p><strong>Kubectl get pv</strong></p>
<pre><code>Labels: <none>
Annotations: Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 200Gi
Node Affinity: <none>
</code></pre>
<p><strong>kubectl get pvc</strong></p>
<pre><code>Namespace: default
StorageClass: gp2
Status: Pending
Volume:
Labels: common.k8s.elastic.co/type=elasticsearch
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: quickstart-es-default-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 61s (x18 over 24m) persistentvolume-controller Failed to provision volume with StorageClass "gp2": Failed to get AWS Cloud Provider. GetCloudProvider returned <nil> instead
</code></pre>
<p><strong>But I am getting the below error:</strong>
running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims</p>
<p><strong>My volume is in Ec2 EBS</strong></p>
| Rajeev Uppala | <p>You need to create a PV and bind this PVC to that PV. Then you can configure your application to make use of the PVC.</p>
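<p>A minimal sketch of such a manually created PV; the <code>hostPath</code> location is only an example, and the <code>storageClassName</code> and capacity have to match what the PVC requests so the two can bind:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2          # must match the PVC's storageClassName
  hostPath:
    path: /mnt/es-data           # example path on the node
</code></pre>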
| aftab |
<p>I am trying to deploy the popular sock shop app in my AWS EKS cluster. I have successfully created the cluster but whenever I try to deploy to the cluster I keep getting this error.</p>
<pre><code>Error from server (Forbidden): <html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
-->
</body></html>
</code></pre>
<p>I have added my Jenkins user to my sudo group but I still get the same error. I also tried to run this command directly on my EC2 instance:</p>
<pre><code>kubectl create -f complete-demo.yaml
</code></pre>
<p>And it deployed. What am I getting wrong that keeps causing this error?</p>
<p>This is my Jenkins content</p>
<pre><code>#!/usr/bin/env groovy
pipeline {
agent any
environment {
AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
AWS_DEFAULT_REGION = "us-east-1"
}
stages {
stage("Deploy to EKS") {
steps {
script {
dir('deploy/kubernetes') {
sh "kubectl create -f complete-demo.yaml"
}
}
}
}
}
}
</code></pre>
<p>This is the github profile I am deploying from.</p>
<p><a href="https://github.com/Okeybukks/Altschool-Final-Cloud-Project" rel="nofollow noreferrer">https://github.com/Okeybukks/Altschool-Final-Cloud-Project</a></p>
| Achebe Peter | <p>Since you are using an <code>EC2</code> instance and some AWS credentials, please make sure that the user associated with those credentials is listed in the <code>aws-auth</code> ConfigMap located in the <code>kube-system</code> namespace.</p>
<p>If so, then you should run the <code>EKS</code> login command before you apply your manifest, to generate a proper <code>config</code> file, which is not present in the <code>Jenkins</code> workspace. So your pipeline should look like:</p>
<pre><code>#!/usr/bin/env groovy
pipeline {
agent any
environment {
AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
AWS_DEFAULT_REGION = "us-east-1"
}
stages {
stage("Deploy to EKS") {
steps {
script {
dir('deploy/kubernetes') {
sh """
aws eks update-kubeconfig --name my_cluster_name
kubectl create -f complete-demo.yaml
"""
}
}
}
}
}
}
</code></pre>
<p>Please refer to <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-cluster-connection/" rel="nofollow noreferrer">this</a> doc regarding login to <code>EKS</code>.</p>
| Michał Lewndowski |
<p>I have made an HA Kubernetes cluster. First I added a node and then joined the other node with the master role.
I basically did the multi-etcd setup. This worked fine for me. I did the failover testing, which also worked fine. Now the problem is that once I was done working, I drained and deleted the other node and then shut down the other machine (a VM on GCP). But then my kubectl commands don't work... Let me share the steps:</p>
<p>kubectl get node(when multi node is set up)</p>
<pre><code>NAME STATUS ROLES AGE VERSION
instance-1 Ready <none> 17d v1.15.1
instance-3 Ready <none> 25m v1.15.1
masternode Ready master 18d v1.16.0
</code></pre>
<p>kubectl get node ( when I shut down my other node)</p>
<pre><code>root@masternode:~# kubectl get nodes
The connection to the server k8smaster:6443 was refused - did you specify the right host or port?
</code></pre>
<p>Any clue?</p>
| user3403309 | <p>After rebooting the server you need to do the steps below:</p>
<ol>
<li><p>sudo -i</p>
</li>
<li><p>swapoff -a</p>
</li>
<li><p>exit</p>
</li>
<li><p>strace -eopenat kubectl version</p>
</li>
</ol>
| HenryDo |
<p>This is my very first post here and I am looking for some advice please.</p>
<p>I am learning Kubernetes and trying to get cloud code extension to deploy Kubernetes manifests on non-GKE cluster. Guestbook app can be deployed using cloud code extension to local K8 cluster(such as MiniKube or Docker-for-Desktop).</p>
<p>I have two other K8 clusters as below and I cannot deploy manifests via cloud code. I am not entirely sure if this is supposed to work or not as I couldn't find any docs or posts on this. Once the GCP free trial is finished, I would want to deploy my test apps on our local onprem K8 clusters via cloud code.</p>
<ol>
<li>3 node cluster running on CentOS VMs(built using kubeadm)</li>
<li>6 node cluster on GCP running on Ubuntu machines(free trial and built using Hightower way)</li>
</ol>
<p>Skaffold is installed locally on my Mac and my local $HOME/.kube/config has contexts and users set to access all 3 clusters.</p>
<pre><code>guestbook-1 kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
docker-desktop docker-desktop docker-desktop
* kubernetes-admin@kubernetes kubernetes kubernetes-admin
kubernetes-the-hard-way kubernetes-the-hard-way admin
</code></pre>
<p>Error:</p>
<pre><code> Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin@kubernetes
Loaded Skaffold defaults from \"/Users/testuser/.skaffold/config\"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
</code></pre>
<p>I have the Dockerfile and skaffold file in the path shown in the image and have authenticated the Google SDK in VS Code. Any help please?!</p>
<p><img src="https://i.stack.imgur.com/aGuuH.png" alt="enter image description here" /></p>
| P1222 | <p>I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and then running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.</p>
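<p>A quick sketch of those two commands, run from the directory that contains skaffold.yaml:</p>
<pre><code>rm skaffold.yaml
skaffold init
</code></pre>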
| P1222 |
<p>I first installed etcd, kube-apiserver and kubelet as systemd services. The services are running fine and listening on all required ports.</p>
<p>When I run kubectl cluster-info , I get below output</p>
<pre><code>Kubernetes master is running at http://localhost:8080
</code></pre>
<p>When I run kubectl get componentstatuses, then I get below output</p>
<pre><code>etcd-0 Healthy {"health": "true"}
</code></pre>
<p>But running kubectl get nodes , I get below error</p>
<pre><code>Error from server (ServerTimeout): the server cannot complete the requested operation at this time, try again later (get nodes)
</code></pre>
<p>Can anybody help me out with this?</p>
| Rehan Ch | <p>For the message:</p>
<pre><code>:~# k get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
--------
</code></pre>
<p>Modify the following files on all master nodes:</p>
<pre><code>$ sudo vim /etc/kubernetes/manifests/kube-scheduler.yaml
</code></pre>
<p>Comment or delete the line:</p>
<pre><code>- --port=0
</code></pre>
<p>in (spec->containers->command->kube-scheduler)</p>
<pre><code>$ sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
</code></pre>
<p>Comment or delete the line:</p>
<pre><code>- --port=0
</code></pre>
<p>in (spec->containers->command->kube-controller-manager)</p>
<p>Then restart kubelet service:</p>
<pre><code>$ sudo systemctl restart kubelet.service
</code></pre>
| Mustapha EL GHAZI |