<p>I am in the process of migrating a traditional nginx-fronted reverse proxy to a Kubernetes cluster with Traefik. The end goal is to move everything onto microservices but that might take a while. In the meantime, how do I create an IngressRoute CRD that routes to a legacy system hosted outside the cluster? This would be just a <a href="http://server:port" rel="nofollow noreferrer">http://server:port</a> kind of forward. I've combed through the docs but it seems Traefik v2.0 has removed support for custom backends and I'm not quite sure how dynamic configuration is supposed to be injected in Kubernetes without an IngressRoute CRD (that does not seem to support server:port definitions)? I might be completely off course here so appreciate any guidance on this.</p>
| ystan- | <p>Found the answer while solving an unrelated problem - turns out Traefik isn't involved in the equation at all - the IngressRoute should remain as-is while the standard Kubernetes service needs to use the type <code>ExternalName</code> instead of <code>ClusterIP</code>/<code>NodePort</code>/<code>LoadBalancer</code>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-name
spec:
  externalName: hostname-of-legacy-system
  ports:
    - port: port-serving-legacy-app
      protocol: TCP
      targetPort: port-serving-legacy-app
  selector:
    app: my-app-name
  type: ExternalName
</code></pre>
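<p>For reference, a matching Traefik v2 <code>IngressRoute</code> pointing at that service might look roughly like this (a sketch; the hostname is a placeholder and the port placeholder matches the one used above):</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`legacy.example.com`)
      kind: Rule
      services:
        - name: my-app-name
          port: port-serving-legacy-app
</code></pre>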
| ystan- |
<p>I have a containerized app that uses the Apache Ignite thick client. This uses bidirectional communication with an Ignite cluster outside of Kubernetes. When it connects to Ignite, it advertises its IP and port to Ignite. Ignite then connects back to the thick client for discovery/stats and other management tasks.</p>
<p>Obviously using default configs and deployment, the container will advertise its internal info.</p>
<p>In my previous container environment, <code>DC/OS</code>, I was able to read the host IP and the external random ports (the equivalent of the "random" nodePort of a NodePort service in Kubernetes) as environment variables, and the Ignite cluster was able to connect through the publicly advertised IP and ports. If the app was redeployed on another node, it would just get the new host and new random port, and Ignite would reconnect with the new info.</p>
<p>The host IP I was able to solve with:</p>
<pre><code>env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
</code></pre>
<p>I was hoping that the NodePort service would expose the target port, but the environment variables available are still only the internal ones.</p>
<p>Service env vars I get:</p>
<pre><code>MY_SERVICE_PORT_18080_TCP_PORT: 123456
MY_SERVICE_SERVICE_PORT: 123456
</code></pre>
<p>With Kubectl I get:</p>
<pre><code>kubectl get svc my-service
my-service NodePort xxx.xxx.xxx.xxx <none> 123456:30086/TCP 2m46s
</code></pre>
<p>So is there some way to make that <code>30086</code> port known to the container app?</p>
<p>One quick fix I can think of is to hardcode the targetPort in the service definition and hardcode the same port in the app's config. But this only works if you have 2-3 apps connecting to Ignite, and you still have to track which ports you have reserved for that purpose.</p>
<p>-- Update: I meant to say the random nodePort that is assigned by a service.</p>
| user432024 | <p>As Chris pointed out, an init container is the only way to do this... though it would have been nice for the Service API to expose it as an env var.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
        - name: private
          image: foo/bar
          volumeMounts:
            - name: config-data
              mountPath: /config
          ports:
            - containerPort: 18080
          env:
      initContainers:
        - name: config-data
          image: curlimages/curl
          command: ["sh", "-c", "TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; curl -k -H \"Authorization: Bearer $TOKEN\" https://kubernetes.default:443/api/v1/namespaces/default/services/my-app >> /config/service.json"]
          volumeMounts:
            - name: config-data
              mountPath: /config
      volumes:
        - name: config-data
          emptyDir: {}
</code></pre>
<p>For the init container I used the <code>curlimages/curl</code> image to provide the <code>curl</code> command.</p>
<p>In the command section above, make sure to replace <code>default</code> if you are using another namespace, and <code>my-app</code> with the name of your app. For my purposes I chose to just write the full JSON output to a file, since my app already parses JSON configs, has a JSON lib as a dependency, and can read the JSON as is.</p>
<p>You also need to give "view" access to the <code>default</code> service account (or whichever service account you are using) so it can call the API from inside the init container.</p>
<pre><code>kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=default:default \
--namespace=default
</code></pre>
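<p>As a hypothetical follow-up inside the app container, if the image has <code>jq</code> available, the nodePort can then be pulled out of the captured JSON like this (assuming the service port is 18080, as above):</p>
<pre><code># Extract the assigned nodePort from the captured service JSON
NODE_PORT=$(jq -r '.spec.ports[] | select(.port == 18080) | .nodePort' /config/service.json)
echo "Advertising external port ${NODE_PORT} to Ignite"
</code></pre>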
| user432024 |
<p>When I SSH onto a <strong>running</strong> Docker container in a Kubernetes cluster and run <code>os.getenv("HOSTNAME")</code> from within a python interpreter, I am able to see the name of the <strong>deployment</strong> being used.</p>
<p>But if I try and run <code>os.getenv("HOSTNAME")</code> in a script that gets run from the <code>Dockerfile</code>, the env-var is <code>null</code>.</p>
<p>Is this expected? Is there some workaround here?</p>
<p><strong>UPDATE</strong>: I tried to get the contents from <code>/etc/hostname</code> instead and to my surprise I got <code>debuerreotype</code>. After some googling I saw that that is the base Debian image in use and apparently <a href="https://github.com/debuerreotype/debuerreotype/blob/master/docker-run.sh#L50" rel="nofollow noreferrer">it passes that name as the hostname</a></p>
<ul>
<li>Opened an <a href="https://github.com/debuerreotype/debuerreotype/issues/160" rel="nofollow noreferrer">Issue</a> with them in the meantime</li>
<li>(I still don't understand why I get the correct value when I SSH into the container though)</li>
</ul>
| Felipe | <p>The problem was that I was running scripts in the <code>RUN</code> step of the <code>Dockerfile</code>.</p>
<p>This means that this code runs at <strong>image build time</strong>, so whatever Environment variables I was able to retrieve were those of the build-time environment.</p>
<p>I was able to correctly retrieve the env-vars when I did it inside the code that gets run at <strong>container run time</strong>, i.e. in the <code>ENTRYPOINT</code> command.</p>
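<p>As a minimal illustration (the script names are hypothetical):</p>
<pre><code># Runs at image build time: HOSTNAME here belongs to the build environment
RUN python /app/build_step.py

# Runs at container start time inside the pod: HOSTNAME is the pod's name
ENTRYPOINT ["python", "/app/start.py"]
</code></pre>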
| Felipe |
<p>I keep seeing <code>Back-off restarting failed container</code> when trying to use the official Docker image <a href="https://hub.docker.com/_/hello-world" rel="nofollow noreferrer">https://hub.docker.com/_/hello-world</a> to create any pod/deployment. When I switch to other images like <code>nginx</code>, everything works well.</p>
<p>How can I debug and fix this issue?</p>
<p>Below are the event logs after creating the pod with <code>kubectl</code>.</p>
<pre><code>root@ip-10-229-68-221:~# kubectl get event --watch
LAST SEEN TYPE REASON OBJECT MESSAGE
24s Normal Scheduled pod/helloworld-656898b9bb-98vrv Successfully assigned default/helloworld-656898b9bb-98vrv to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-98vrv Pulling image "hello-world:linux"
16s Normal Pulled pod/helloworld-656898b9bb-98vrv Successfully pulled image "hello-world:linux" in 7.371731633s
1s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
1s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
13s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
24s Normal Scheduled pod/helloworld-656898b9bb-sg6fs Successfully assigned default/helloworld-656898b9bb-sg6fs to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-sg6fs Pulling image "hello-world:linux"
13s Normal Pulled pod/helloworld-656898b9bb-sg6fs Successfully pulled image "hello-world:linux" in 9.661065021s
13s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
13s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
13s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
11s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
24s Normal Scheduled pod/helloworld-656898b9bb-vhhfm Successfully assigned default/helloworld-656898b9bb-vhhfm to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-vhhfm Pulling image "hello-world:linux"
18s Normal Pulled pod/helloworld-656898b9bb-vhhfm Successfully pulled image "hello-world:linux" in 5.17232683s
3s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
2s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
3s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
2s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-vhhfm
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-sg6fs
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-98vrv
24s Normal ScalingReplicaSet deployment/helloworld Scaled up replica set helloworld-656898b9bb to 3
79s Normal Killing pod/nginx Stopping container nginx
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
0s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
1s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
</code></pre>
| Chen Zhao | <p>The <code>hello-world</code> image is not a long-running process: it just outputs text and stops.</p>
<p>A Kubernetes Pod expects long-running processes by default, and if the process stops, Kubernetes automatically restarts the container.</p>
<p>This behavior is defined by the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer"><code>restartPolicy</code> parameter of a pod</a> and can have different values:</p>
<ul>
<li><code>Always</code>: always restart stopped containers (the default)</li>
<li><code>OnFailure</code>: restart containers that exited with a non-zero exit code</li>
<li><code>Never</code>: don't restart containers</li>
</ul>
<p>So in your case you should use one of the last two, as the container is expected to stop normally.</p>
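<p>Note that a Deployment's pod template only allows <code>restartPolicy: Always</code>, so for a run-to-completion image like this a bare Pod (or a Job) is the better fit. A minimal sketch:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  restartPolicy: OnFailure
  containers:
    - name: hello-world
      image: hello-world:linux
</code></pre>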
| zigarn |
<p>I'm using the <code>kubectl rollout</code> command to update my deployment. But since my project is a NodeJS project, <code>npm run start</code> will take some time (a few seconds) before the application is actually running. But Kubernetes will drop the old pods immediately after <code>npm run start</code> is executed.</p>
<p>For example,</p>
<pre><code>kubectl logs -f my-app
> my app start
> nest start
</code></pre>
<p>Kubernetes will drop the old pods now. However, it will take another 10 seconds until</p>
<pre><code>Application is running on: http://[::1]:5274
</code></pre>
<p>which means my service is actually up.</p>
<p>I'd like to know whether there is a way to modify this, e.g. by waiting some more time before Kubernetes drops the old pods.</p>
<p>My docker file:</p>
<pre><code>FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
RUN export NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
COPY . .
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]
</code></pre>
<p>Spec for my kubernetes yaml file:</p>
<pre><code>spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: image
          imagePullPolicy: Always
          resources:
            limits:
              memory: "8Gi"
              cpu: "10"
            requests:
              memory: "8Gi"
              cpu: "10"
          livenessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            initialDelaySeconds: 180
            periodSeconds: 80
            timeoutSeconds: 20
            failureThreshold: 2
          ports:
            - containerPort: 5274
            - containerPort: 5900
</code></pre>
| Zichzheng | <p>Use a startup probe on your container. <a href="https://docs.openshift.com/container-platform/4.11/applications/application-health.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/applications/application-health.html</a> . Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.</p>
<p>And during a deployment the scheduler counts non-ready pods as "unavailable" for things like the "maxUnavailable" setting of the deployment. Thus the scheduler won't keep shutting down working pods until new pods are ready for traffic. (<a href="https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html</a>)</p>
<p>As an additional benefit, services won't route traffic to non-ready pods, so they won't receive any traffic until the containers have passed their startup probes.</p>
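<p>For example, reusing the health endpoint already in your manifest, a startup probe could look roughly like this (the thresholds are illustrative):</p>
<pre><code>startupProbe:
  httpGet:
    path: /api/Health
    port: 5274
  periodSeconds: 5
  failureThreshold: 60   # allows up to ~5 minutes for the app to start
</code></pre>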
| David Ogren |
<p>I need help with making the XFS quotas work in Kubernetes on DigitalOcean.</p>
<p>My problem, essentially, is that the <code>xfs_quota</code> tool seems to work only when one also has access to the disk device, not just to the mounted volume. However, whatever I try, I can't seem to get access to both the device and the mount.</p>
<p>I tried both volume mounts and raw block volumes.</p>
<h2>Volume Mounts</h2>
<p>Here's my storage class:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: block-storage-retain-xfs-prjquota
provisioner: dobs.csi.digitalocean.com
parameters:
fsType: xfs
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
- prjquota
</code></pre>
<p>Then I claim a new volume and add it to a pod like this:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeClaimTemplates:
- metadata:
name: my-storage
namespace: my-namespace
spec:
accessModes:
- ReadWriteOnce
storageClassName: block-storage-retain-xfs-prjquota
resources:
requests:
storage: 1Gi
</code></pre>
<p>and mount it:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: my-storage
mountPath: "/var/www"
</code></pre>
<p>In the pod, everything gets mounted correctly, I have access to the volume (I can create contents in there) and the mount flag is set correctly:</p>
<pre class="lang-sh prettyprint-override"><code>$ mount | grep -i www
/dev/disk/by-id/scsi-0DO_Volume_pvc-650ccba6-3177-45b5-9ffb-0ac2a931fddc on /var/www type xfs (rw,relatime,attr2,inode64,prjquota)
</code></pre>
<p>However, the disk device is not available in the pod:</p>
<pre class="lang-sh prettyprint-override"><code>$ ls -la /dev/disk/by-id/scsi-0DO_Volume_pvc-650ccba6-3177-45b5-9ffb-0ac2a931fddc
ls: cannot access '/dev/disk/by-id/scsi-0DO_Volume_pvc-650ccba6-3177-45b5-9ffb-0ac2a931fddc': No such file or directory
</code></pre>
<p>(in fact, the whole <code>/dev/disk/</code> directory is not available)</p>
<p>According to my investigation, the lack of access to the device is what makes the XFS tools fail:</p>
<pre class="lang-sh prettyprint-override"><code>$ xfs_quota -x -c 'report -h' /var/www
xfs_quota: cannot setup path for mount /var/www: No such device or address
</code></pre>
<h2>Raw Block Volumes</h2>
<p>I also tried to switch to raw block volumes instead:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeClaimTemplates:
- metadata:
name: my-storage
namespace: my-namespace
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
storageClassName: block-storage-retain-xfs-prjquota
resources:
requests:
storage: 1Gi
</code></pre>
<p>and add it as:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeDevices:
- name: my-storage
devicePath: /dev/my-storage
</code></pre>
<p>That gives me the device, but for some reason I can't format it / mount it (neither XFS nor ext4 actually):</p>
<pre class="lang-sh prettyprint-override"><code>$ mkfs.xfs /dev/my-storage
mkfs.xfs: error - cannot set blocksize 512 on block device /dev/my-storage: Permission denied
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ mkfs.ext4 /dev/my-storage
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 18f07181-737c-4b68-a5fe-ccd7f2c50ff8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
$ mount /dev/my-storage /var/www
mount: /var/www: cannot mount /dev/my-storage read-only.
</code></pre>
<p>With <code>SYS_ADMIN</code> Linux capability, I can actually format it, but I'm still not able to mount it:</p>
<pre class="lang-sh prettyprint-override"><code>$ mkfs.xfs -f /dev/my-storage
meta-data=/dev/my-storage isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
$ mount /dev/my-storage /var/www
mount: /var/www: cannot mount /dev/my-storage read-only.
</code></pre>
<p>(Why is the disk device read only?)</p>
<h2>Raw Block Volume - With Partitions</h2>
<p>Ok, so I tried to create a partition and format that. Partition is created successfully, but I don't have access to the partition devices:</p>
<pre class="lang-sh prettyprint-override"><code>$ fdisk -l /dev/my-storage
Disk /dev/my-storage: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: Volume
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb4a24142
Device Boot Start End Sectors Size Id Type
/dev/my-storage1 2048 2097151 2095104 1023M 83 Linux
</code></pre>
<p>However, the <code>/dev/my-storage1</code> does not exist:</p>
<pre class="lang-sh prettyprint-override"><code>$ ls -la /dev/my-storage*
brw-rw---- 1 root disk 8, 48 Oct 25 14:42 /dev/my-storage
</code></pre>
<p>I tried running the container as <code>privileged</code> which gave me access to more devices in <code>/dev</code>, but then I didn't see my raw block volume device at all.</p>
<h2>What Next?</h2>
<p>As I see it, any of these would work for me:</p>
<ol>
<li>Getting access to the underlying block device for volume mounts.</li>
<li>Access to the partition device so that I can mount it.</li>
<li>Ability to mount the raw block volume (e.g. by making it not read-only, whatever it means?).</li>
<li>Making the <code>xfs_quota</code> tool NOT require the underlying device.</li>
</ol>
<p>I believe I made it work a few months ago using raw block volumes with partitions, but either I forgot how or something changed on DigitalOcean and I can't seem to be able to create and access partitions anymore.</p>
<p>Any help is hugely appreciated, thank you!</p>
| AleΕ‘ KrajnΓk | <p>Timo here from the Managed Kubernetes (DOKS) team at DigitalOcean.</p>
<p>What you are missing is the host system mount of the <code>/dev</code> directory. If you add both</p>
<pre><code>volumes:
  - name: device-dir
    hostPath:
      path: /dev
</code></pre>
<p>and</p>
<pre><code>volumeMounts:
  - name: device-dir
    mountPath: /dev
</code></pre>
<p>to the manifest at the right places, things should work as expected.</p>
| Timo Reimann |
<p>According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">Kubernetes documentation</a>,</p>
<blockquote>
<p>If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod.</p>
</blockquote>
<p>So I understand that Kubernetes won't redirect requests to the pod when the <code>readiness</code> probe fails.</p>
<p>In addition, does Kubernetes kill the pod? Or does it keep calling the <code>readiness</code> probe until the response is successful?</p>
| user11081980 | <h2>What are readiness probes for?</h2>
<p>Containers can use readiness probes to know whether the container being probed is <em>ready</em> to start receiving network traffic. If your container enters a state where it is still alive but cannot handle incoming network traffic (a common scenario during startup), you want the readiness probe to fail. That way, Kubernetes will not send network traffic to a container that isn't ready for it. If Kubernetes did prematurely send network traffic to the container, it could cause the load balancer (or router) to return a 502 error to the client and terminate the request; either that or the client would get a "connection refused" error message.</p>
<p>The name of the readiness probe conveys a semantic meaning. In effect, this probe answers the true-or-false question: "Is this container ready to receive network traffic?"</p>
<p>A readiness probe failing <em><strong>will not kill or restart the container</strong></em>.</p>
<h2>What are liveness probes for?</h2>
<p>A liveness probe sends a signal to Kubernetes that the container is either alive (passing) or dead (failing). If the container is alive, then Kubernetes does nothing because the current state is good. If the container is dead, then Kubernetes attempts to heal the application by restarting it.</p>
<p>The name <em>liveness</em> probe also expresses a semantic meaning. In effect, the probe answers the true-or-false question: "Is this container alive?"</p>
<p>A liveness probe failing <em><strong>will kill/restart a failed container</strong></em>.</p>
<h2>What are startup probes for?</h2>
<p>Kubernetes has a newer probe called the startup probe. This probe is useful for applications that are slow to start, and is a better alternative to increasing <code>initialDelaySeconds</code> on readiness or liveness probes. A startup probe gives an application time to become ready; combined with readiness and liveness probes, it can increase the application's availability.</p>
<p>Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after failureThreshold * periodSeconds (the total startup timeout) and restarted, subject to the pod's restartPolicy.</p>
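<p>Put together, the three probes on a container might look roughly like this (the image, endpoints, port, and thresholds are illustrative):</p>
<pre><code>containers:
  - name: app
    image: my-app:latest          # hypothetical image
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10           # up to ~5 minutes to start
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
</code></pre>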
| Highway of Life |
<p>I want Ingress to redirect a specific subdomain to one backend and all others to other backend. Basically, I want to define a rule something like the following:</p>
<blockquote>
<p>If subdomain is <code>foo.bar.com</code> then go to <code>s1</code>, for all other subdomains go to <code>s2</code></p>
</blockquote>
<p>When I define the rules as shown below in the Ingress spec, I get this exception at deployment:</p>
<pre><code>Error: UPGRADE FAILED: cannot re-use a name that is still in use
</code></pre>
<p>When I change <code>*.bar.com</code> to <code>demo.bar.com</code> it works, however.</p>
<p>Here's my Ingress resource spec:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: *.bar.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
</code></pre>
<p>Does anyone have an idea whether this is possible or not?</p>
| fyelci | <p>This is now possible in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards" rel="nofollow noreferrer">Kubernetes</a> with nginx:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/server-alias: www.foo.bar
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: foo-bar-ingress
  namespace: test
spec:
  rules:
  - host: 'foo.bar.com'
    http:
      paths:
      - backend:
          serviceName: specific-service
          servicePort: 8080
        path: /(.*)
        pathType: ImplementationSpecific
  - host: '*.bar.com'
    http:
      paths:
      - backend:
          serviceName: general-service
          servicePort: 80
        path: /(.*)
        pathType: ImplementationSpecific
</code></pre>
| Nathaniel Ford |
<p>I have a scenario where I need to push logs from applications running on an EKS cluster to separate CloudWatch log streams. I have followed the link below, which pushes all logs to CloudWatch using fluentd. But the issue is, it pushes logs to a single log stream only.</p>
<blockquote>
<p><a href="https://github.com/aws-samples/aws-workshop-for-kubernetes" rel="nofollow noreferrer">https://github.com/aws-samples/aws-workshop-for-kubernetes</a></p>
</blockquote>
<p>It also pushes all the logs under <code>/var/lib/docker/container/*.log</code>. How can I filter this to push only application-specific logs?</p>
| Divy | <p><a href="https://collectord.io" rel="nofollow noreferrer">Collectord</a> now supports AWS CloudWatch Logs (and S3/Athena/Glue). It gives you the flexibility to choose to which LogGroup and LogStream you want to forward the data (if the default does not work for you).</p>
<ul>
<li><a href="https://collectord.io/docs/cloudwatch/kubernetes/installation/" rel="nofollow noreferrer">Installation instructions for CloudWatch</a></li>
<li><a href="https://collectord.io/docs/cloudwatch/kubernetes/annotations/#override-loggroup-and-logstream" rel="nofollow noreferrer">How you can specify LogGroup and LogStream with annotations</a></li>
</ul>
<p>Highly recommend to read <a href="https://collectord.io/blog/2019-03-13-aws-centralized-logging-for-kubernetes/" rel="nofollow noreferrer">Setting up comprehensive centralized logging with AWS Services for Kubernetes</a></p>
| outcoldman |
<p>I'm creating a web application where I'm using Kubernetes. In my backend application I have a server that listens for socket connections on port 3000. I deployed my application (front and back) and it works fine: I can get data via HTTP requests. Now I want to establish a socket connection with my backend application, but I don't know which address and which port I have to use in my frontend application (or which configuration to do). I searched with my few keywords but I can't find a tutorial or documentation for this. If anyone has an idea, I would be thankful.</p>
| Sandukhan | <p>Each deployment (frontend and backend) should have its own service.</p>
<p>Ingress (web) traffic would be routed to the frontend service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: frontend-svc
spec:
selector:
app: frontend
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>In this example, your frontend application would talk to host: <code>backend-svc</code> on port <code>6379</code> for a backend connection.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: backend-svc
spec:
selector:
app: backend
ports:
- protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>Example API implementation:</p>
<pre><code>io.adapter(socketRedis({ host: 'backend-svc', port: '6379' }));
</code></pre>
| Highway of Life |
<p>I have created a custom Splunk forwarder image.</p>
<p>Image name: <code>vrathore/splunkuniversalforwarder</code></p>
<p>I have verified that the log is being pushed to the server. I am using a dummy log present on my host (<code>c/Users/var/log</code>) and run this Docker command:</p>
<pre><code>docker run --name splunkforwarder -d \
  -v /c/Users/var/log://var/log/messages \
  -p 8089:8089 -p 8088:8088 \
  -e SPLUNK_SERVER_HOST=splunk-prodtest-gsp.test.com:9997 \
  -e FORWARD_HOSTNAME=kubernetes \
  vrathore/splunkuniversalforwarder
</code></pre>
<p>Now I want to use the same image in a Kubernetes pod, where 2 containers will share their log folder with my Splunk forwarder image.</p>
<pre><code>spec:
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 10%
  replicas: 1
  template:
    metadata:
      name: %APP_FULL_NAME%-pod
      labels:
        appname: %APP_FULL_NAME%
        stage: %APP_ENV%
        component: app-kube-pod-object
    spec:
      containers:
        - name: %APP_FULL_NAME%-service
          image: %DOCKER_IMAGE%
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: %APP_CONFIG_MAP%
          command: ["catalina.sh", "run"]
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: %DOCKER_REPO_REGKEY%
  selector:
    matchLabels:
      appname: %APP_FULL_NAME%
      stage: %APP_ENV%
</code></pre>
<p>Kubernetes is new to me. How can I share the log folder between the containers?</p>
| gamechanger17 | <p>You need to define an emptyDir type volume and attach it to both containers. Assuming that the logs from the app are under <code>/var/log/myapp/</code> (I have added the second container as well)</p>
<pre><code>spec:
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 10%
  replicas: 1
  template:
    metadata:
      name: %APP_FULL_NAME%-pod
      labels:
        appname: %APP_FULL_NAME%
        stage: %APP_ENV%
        component: app-kube-pod-object
    spec:
      containers:
        - name: %APP_FULL_NAME%-service
          image: %DOCKER_IMAGE%
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: %APP_CONFIG_MAP%
          command: ["catalina.sh", "run"]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: logs
              mountPath: /var/log/myapp/
        - name: uf
          image: vrathore/splunkuniversalforwarder
          ...
          volumeMounts:
            - name: logs
              mountPath: /var/log/myapp/
      imagePullSecrets:
        - name: %DOCKER_REPO_REGKEY%
      volumes:
        - name: logs
          emptyDir: {}
  selector:
    matchLabels:
      appname: %APP_FULL_NAME%
      stage: %APP_ENV%
</code></pre>
<p>Also, I would recommend looking at an alternative solution: with Collectord and Monitoring Kubernetes/OpenShift you can tell Collectord where to look for logs, and you don't need to run a sidecar container (<a href="https://www.outcoldsolutions.com/docs/monitoring-kubernetes/v5/annotations/#application-logs" rel="nofollow noreferrer">https://www.outcoldsolutions.com/docs/monitoring-kubernetes/v5/annotations/#application-logs</a>); just one Collectord daemon will do the work.</p>
| outcoldman |
<p>In my k8s system I have an nginx ingress controller as a LoadBalancer; I access it through a DDNS address like hedehodo.ddns.net, and this forwards web traffic to another nginx port.
Now I deployed another nginx, which serves a Node.js app, but I cannot get the nginx ingress controller to forward requests on port 3000 to that other nginx.</p>
<p>here is the nginx ingress controller yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: hedehodo.ddns.net
    http:
      paths:
      - path: /
        backend:
          serviceName: my-nginx
          servicePort: 80
      - path: /
        backend:
          serviceName: helloapp-deployment
          servicePort: 3000
</code></pre>
<p>The helloapp deployment works as a LoadBalancer and I can access it at IP:3000.</p>
<p>Could anybody help me?</p>
| tuker | <p>Each host cannot share multiple duplicate paths, so in your example, the request to host: <code>hedehodo.ddns.net</code> will always map to the first service listed: <code>my-nginx:80</code>.</p>
<p>To use another service, you have to specify a different path. That path can use any service that you want. Your ingress should always point to a service, and that service can point to a deployment.</p>
<p>You should also use HTTPS by default for your ingress.</p>
<p>Ingress example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: my.example.net
http:
paths:
- path: /
backend:
serviceName: my-nginx
servicePort: 80
- path: /hello
backend:
serviceName: helloapp-svc
servicePort: 3000
</code></pre>
<p>Service example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
name: helloapp-svc
spec:
ports:
- port: 3000
name: app
protocol: TCP
targetPort: 3000
selector:
app: helloapp
type: NodePort
</code></pre>
<p>Deployment example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloapp
labels:
app: helloapp
spec:
replicas: 1
selector:
matchLabels:
app: helloapp
template:
metadata:
labels:
app: helloapp
spec:
containers:
- name: node
image: my-node-img:v1
ports:
- name: web
containerPort: 3000
</code></pre>
| Highway of Life |
<p>I have an issue with deploying an Angular app on minikube. I am not able to expose the running Angular container to the browser.</p>
<p>Below are my setup files.</p>
<h2>Minikube start command</h2>
<pre><code>$ minikube start --driver=docker
</code></pre>
<h2>Dockerfile</h2>
<pre><code>FROM node:10-alpine AS node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build-prod
FROM nginx:alpine
COPY --from=node /app/dist/shopping-wepapp /usr/share/nginx/html
</code></pre>
<h2>Deployment configuration file</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-cart
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shop-cart
  template:
    metadata:
      labels:
        app: shop-cart
        version: v1
    spec:
      containers:
        - name: shop-cart
          image: kavin1995/development:shop-cart-app-07-04-2020-14-00
          imagePullPolicy: Always
          ports:
            - containerPort: 80
</code></pre>
<h2>Service configuration file</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: shop-cart-service
spec:
  selector:
    app: shop-cart
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
</code></pre>
<h2>Port exposing command</h2>
<pre><code>$ minikube service shop-cart-service --url
</code></pre>
| Kavinkumar | <p>The problem seems to be the use of the <code>docker</code> driver: the minikube IP is a container IP and cannot be accessed outside of the docker host.</p>
<p>The only way to expose the nodePort outside would be to publish the port on the <strong>running</strong> minikube container (which only exposes these: <code>127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp</code>)</p>
<p>One way to do this would be (even if ugly):</p>
<pre class="lang-sh prettyprint-override"><code>CONTAINER_IP=`minikube ip`
SERVICE_NAME=shop-cart-service
SERVICE_NODE_PORT=`kubectl get service ${SERVICE_NAME} --output jsonpath='{.spec.ports[0].nodePort}'`
iptables -t nat -A DOCKER -p tcp --dport ${SERVICE_NODE_PORT} -j DNAT --to-destination ${CONTAINER_IP}:${SERVICE_NODE_PORT}
iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source ${CONTAINER_IP} --destination ${CONTAINER_IP} --dport ${SERVICE_NODE_PORT}
iptables -A DOCKER -j ACCEPT -p tcp --destination ${CONTAINER_IP} --dport ${SERVICE_NODE_PORT}
</code></pre>
| zigarn |
<p>I have to convert an existing <code>nodejs</code> application to run on the RedHat OpenShift container platform. Currently, the application is run as follows:</p>
<pre><code>node index.js $HOME/arg1.json $HOME/arg2.json
</code></pre>
<p>Using files for arguments is <strong>important</strong> for this application since the number of arguments are quite large.</p>
<p>How can I ensure that the container version is also run off of the configuration files?</p>
| cogitoergosum | <p>You mention in your first comment the requirement that filenames be specified at runtime. Something like this will work:</p>
<pre><code>ENV work /app
WORKDIR $work
COPY ./arg1.json ./arg2.json $work/
CMD ["node", "index.js", "./arg1.json", "./arg2.json"]
</code></pre>
<p>Runtime command:</p>
<pre><code>docker run -v $(pwd)/myarg1.json:/app/arg1.json -v $(pwd)/myarg2.json:/app/arg2.json <image>
</code></pre>
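<p>On OpenShift/Kubernetes the same idea can be expressed by mounting the argument files over the baked-in defaults, for example from a ConfigMap (a sketch; the ConfigMap and image names are illustrative):</p>
<pre><code>containers:
  - name: app
    image: my-node-app:latest      # hypothetical image
    volumeMounts:
      - name: args
        mountPath: /app/arg1.json
        subPath: arg1.json
      - name: args
        mountPath: /app/arg2.json
        subPath: arg2.json
volumes:
  - name: args
    configMap:
      name: app-args               # ConfigMap holding arg1.json and arg2.json keys
</code></pre>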
| Rondo |
<p>we recently updated our AKS cluster from 1.17.x to 1.19.x and recognised that the format of our custom application logs in <code>/var/lib/docker/containers</code> changed.</p>
<p>Before the update it looked like this:
<a href="https://i.stack.imgur.com/0GJio.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GJio.png" alt="old valid json format" /></a></p>
<p>Afterwards it looks like this:
<a href="https://i.stack.imgur.com/PW1LN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PW1LN.png" alt="new invalid json format" /></a></p>
<p>I can find some notes in the changelog that kubernetes changed from just text logs to structured logs (for system components) but I don't see how this correlates to how our log format changed.</p>
<p><a href="https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/#:%7E:text=In%20Kubernetes%201.19%2C%20we%20are,migrated%20to%20the%20structured%20format" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/#:~:text=In%20Kubernetes%201.19%2C%20we%20are,migrated%20to%20the%20structured%20format</a></p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/system-logs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/system-logs/</a></p>
<p>Is there a chance to still get valid json logs to <code>/var/lib/docker/containers</code> in AKS > 1.19.x?</p>
<p>Background:
We send our application logs to Splunk and don't use the Azure stack for log analysis. Our Splunk setup cannot parse that new log format as of now.</p>
| sschoebinger | <p>The format of the logs is defined by the container runtime. It seems that before you were parsing logs from the Docker container runtime, and now it is containerd (<a href="https://azure.microsoft.com/en-us/updates/azure-kubernetes-service-aks-support-for-containerd-runtime-is-in-preview/" rel="noreferrer">https://azure.microsoft.com/en-us/updates/azure-kubernetes-service-aks-support-for-containerd-runtime-is-in-preview/</a>).</p>
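<p>For illustration, the two line formats look roughly like this (sample lines, not taken from your cluster):</p>
<pre><code># Docker (json-file) log line:
{"log":"some message\n","stream":"stdout","time":"2021-03-01T10:00:00.000000000Z"}

# containerd (CRI) log line:
2021-03-01T10:00:00.000000000Z stdout F some message
</code></pre>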
<p>Based on the article - you can still choose moby (which is docker) as the container runtime.</p>
<p>To take that off your shoulders as well, you should look into using one of these (they will automatically detect the log format and container runtime for you):</p>
<ul>
<li><p>Splunk Connect for Kubernetes <a href="https://github.com/splunk/splunk-connect-for-kubernetes" rel="noreferrer">https://github.com/splunk/splunk-connect-for-kubernetes</a></p>
</li>
<li><p>Collectord <a href="https://www.outcoldsolutions.com" rel="noreferrer">https://www.outcoldsolutions.com</a></p>
</li>
</ul>
| outcoldman |
<p>I am running a dev Linux machine and setting up a local Kafka for development on Kubernetes (moving from docker-compose for learning and practicing purposes) with <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a>, and everything works fine, but I am now trying to map volumes from Kafka and Zookeeper to the host and I am only able to do so for the Kafka volume.
For Zookeeper I configure and map the data and log paths to a volume, but the internal directories are not being exposed on the host (which happens with the Kafka mapping); it only shows the data and log folders, but no content is actually present on the host, so restarting Zookeeper resets state.</p>
<p>I am wondering if there's a limitation or a different approach needed when using Kind and mapping multiple directories from different pods. What am I missing? Why are only the Kafka volumes successfully persisted on the host?</p>
<p>The full setup, with a readme on how to run it, is <a href="https://github.com/mmaia/kafka-local-kubernetes" rel="nofollow noreferrer">on GitHub</a> under the <code>pv-pvc-setup</code> folder.</p>
<p>Zookeeper meaningful configuration, Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      service: zookeeper
  strategy: {}
  template:
    metadata:
      labels:
        network/kafka-network: "true"
        service: zookeeper
    spec:
      containers:
        - env:
            - name: TZ
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_DATA_DIR
              value: "/var/lib/zookeeper/data"
            - name: ZOOKEEPER_LOG_DIR
              value: "/var/lib/zookeeper/log"
            - name: ZOOKEEPER_SERVER_ID
              value: "1"
          image: confluentinc/cp-zookeeper:7.0.1
          name: zookeeper
          ports:
            - containerPort: 2181
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data
      hostname: zookeeper
      restartPolicy: Always
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-pvc
</code></pre>
<p>Persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>Persistent volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper
</code></pre>
<p>kind-config:</p>
<pre><code>apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30092 # internal kafka nodeport
    hostPort: 9092 # port exposed on "host" machine for kafka
  - containerPort: 30081 # internal schema-registry nodeport
    hostPort: 8081 # port exposed on "host" machine for schema-registry
  extraMounts:
  - hostPath: ./tmp/kafka-data
    containerPath: /var/lib/kafka/data
    readOnly: false
    selinuxRelabel: false
    propagation: Bidirectional
  - hostPath: ./tmp/zookeeper-data
    containerPath: /var/lib/zookeeper
    readOnly: false
    selinuxRelabel: false
    propagation: Bidirectional
</code></pre>
<p>As I mentioned, the setup works; I am now just trying to make sure the relevant Kafka and Zookeeper volumes are mapped to persistent external storage (in this case a local disk).</p>
| groo | <p>I finally sorted it out. I had 2 main issues in my initial setup, which are now fixed.</p>
<p>Folders used to persist data on the local host need to be created beforehand so they have the same <code>uid:gid</code> as the one used to create the initial Kind cluster; if this is not in place, the folders will not have the data persisted properly.</p>
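<p>In practice that just means creating the host folders before running <code>kind create cluster</code>, e.g. (paths match the kind config below):</p>
<pre><code>mkdir -p ./tmp/kafka-data ./tmp/zookeeper-data/data ./tmp/zookeeper-data/log
</code></pre>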
<p>I created specific persistent volumes and persistent volume claims for each persisted Zookeeper folder (data and log) and configured those in the kind config. Here is the final kind config:</p>
<pre><code>apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30092 # internal kafka nodeport
    hostPort: 9092 # port exposed on "host" machine for kafka
  - containerPort: 30081 # internal schema-registry nodeport
    hostPort: 8081 # port exposed on "host" machine for schema-registry
  extraMounts:
  - hostPath: ./tmp/kafka-data
    containerPath: /var/lib/kafka/data
    readOnly: false
    selinuxRelabel: false
    propagation: Bidirectional
  - hostPath: ./tmp/zookeeper-data/data
    containerPath: /var/lib/zookeeper/data
    readOnly: false
    selinuxRelabel: false
    propagation: Bidirectional
  - hostPath: ./tmp/zookeeper-data/log
    containerPath: /var/lib/zookeeper/log
    readOnly: false
    selinuxRelabel: false
    propagation: Bidirectional
</code></pre>
<p>Full setup using persistent volumes and persistent volume claims is available in this repo with further instructions if you want to run it for fun. <a href="https://github.com/mmaia/kafka-local-kubernetes" rel="nofollow noreferrer">https://github.com/mmaia/kafka-local-kubernetes</a></p>
| groo |
<p>I'm trying to use a container that contains a Java tool to do some DB migrations on a MySQL database in a Kubernetes Job.</p>
<p>When I run the container locally in Docker (using a MySQL container in the same network), the tool runs as expected.
And if I create a Pod using the container and set the command arguments to point to the <code>mysql</code> service running in the same namespace, it does as well.</p>
<p>But if I convert that Pod spec into a Job, the created container can not connect to the MySQL service anymore for some reason.</p>
<p>The container is based on <code>amazoncorretto:8-al2-jdk</code> and just copies the JAR to <code>/opt/</code>.</p>
<p>The MySQL DB is available through the <code>mysql</code> service in the cluster:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl describe service mysql -n <namespace>
Name: mysql
Namespace: <namespace>
Labels: app=mysql
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP Families: <none>
IP: <ip>
IPs: <ip>
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: <ip>:3306
Session Affinity: None
Events: <none>
</code></pre>
<p>These are the specifications for the Pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: java-tool-pod
spec:
containers:
- name: javatool
image: <registry>/<image-name>:<version>
command: [ "/bin/sh" ]
args: [ "-x", "-c", "/usr/bin/java -jar /opt/<tool>.jar \"jdbc:mysql://mysql:3306/<db>\" -u <user> -p<password>" ]
imagePullSecrets:
- name: <secret>
</code></pre>
<p>Running the Container as a Pod:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl apply -f /tmp/as-pod.yaml -n <namespace>
pod/java-tool-pod created
$ kubectl logs pod/java-tool-pod -n <namespace>
+ /usr/bin/java -jar /opt/<tool>.jar jdbc:mysql://mysql:3306/<db> -u <user> -p<password>
DB Migration Tool
Database Schema, 3.30.0.3300024390, built Wed Jul 14 12:13:52 UTC 2021
Driver class: com.mysql.jdbc.Driver
INFO Flyway 3.2.1 by Boxfuse
INFO Database: jdbc:mysql://mysql:3306/<db> (MySQL 5.7)
INFO Validated 721 migrations (execution time 00:00.253s)
INFO Current version of schema `<db>`: 3.29.0.10859.10
WARN outOfOrder mode is active. Migration of schema `<db>` may not be reproducible.
INFO Schema `<db>` is up to date. No migration necessary.
</code></pre>
<p>These are the specifications for the Job:</p>
<pre class="lang-yaml prettyprint-override"><code>$ cat /tmp/as-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: javatool-job
spec:
template:
spec:
containers:
- name: javatool
image: <registry>/<image-name>:<version>
command: [ "/bin/sh" ]
args: [ "-x", "-c", "/usr/bin/java -jar /opt/<tool>.jar \"jdbc:mysql://mysql:3306/<db>\" -u <user -p<password>" ]
imagePullSecrets:
- name: <secret>
restartPolicy: Never
</code></pre>
<p>Running the container as a Job:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl apply -f /tmp/as-job.yaml -n <namespace>
job.batch/javatool-job created
$ kubectl logs job.batch/javatool-job -n <namespace>
+ /usr/bin/java -jar /opt/<tool>.jar jdbc:mysql://mysql:3306/<db> -u <user> -p<password>
DB Migration Tool
Database Schema, 3.30.0.3300024390, built Wed Jul 14 12:13:52 UTC 2021
Driver class: com.mysql.jdbc.Driver
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:983)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:339)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2252)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:173)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:164)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:153)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:119)
at com.nordija.itv.db.FlywayMigrationSchemaData.isNotFlywaySchemaVersion(FlywayMigrationSchemaData.java:58)
[...]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:214)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:298)
... 22 more
INFO Flyway 3.2.1 by Boxfuse
Unable to obtain Jdbc connection from DataSource
[...]
</code></pre>
<p>I haven't seen any significant differences in the containers being created.
The only thing I can think of is some kind of character encoding issue, but I don't see why that should only occur in a Pod that was created for a Job and not in one that was created directly.</p>
<p>Thanks in advance for any help with this issue!</p>
<p><strong>Edit:</strong> I forgot to mention that Istio is active on the Namespace, which turned out to be causing the issues.</p>
| wtfc63 | <p>The problem was that Istio doesn't play nice with Kubernetes Jobs (I forgot to mention that Istio is active on the Namespace, sorry).</p>
<p>Once I added a short delay (<code>sleep 5</code> before starting the Java tool), the connection could be established.</p>
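<p>In the Job spec that just means prefixing the command, roughly:</p>
<pre><code>args: [ "-x", "-c", "sleep 5; /usr/bin/java -jar /opt/<tool>.jar \"jdbc:mysql://mysql:3306/<db>\" -u <user> -p<password>" ]
</code></pre>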
<p>But then I had another issue: After the container terminated successfully, the Job would not be marked as completed.</p>
<p>And the reason was again Istio.
A Job's pod is considered complete only once <strong>all</strong> of its containers have terminated, and the Istio sidecar is a long-running container that doesn't terminate on its own.
After finding <a href="https://medium.com/redbox-techblog/handling-istio-sidecars-in-kubernetes-jobs-c392661c4af7" rel="nofollow noreferrer">this article</a>, I ended up integrating their <a href="https://github.com/redboxllc/scuttle" rel="nofollow noreferrer"><code>scuttle</code></a> tool into the container and now the Job can be completed successfully.</p>
| wtfc63 |
<p>I want to create a ReplicaSet with separate pods.</p>
<h3><code>Pods</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: user-pod
labels:
app: user-pod
spec:
containers:
- name: user-container
image: kia9372/store-user
</code></pre>
<h3><code>Replicaset</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: user-replicaset
labels:
app: user-replicaset
spec:
replicas: 3
selector:
matchLabels:
app: user-pod
</code></pre>
<p>But when I execute the following command, it throws the following error:</p>
<pre><code>kubectl create -f user-replicaset.yml
>error: error validating "user-replicaset.yml":
error validating data: ValidationError(ReplicaSet.spec.selector):
unknown field "app" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector;
if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p><strong>What's the problem? How can I solve this?</strong></p>
| Mr-Programer | <p>The <code>matchLabels</code> selector is not set up correctly: per the validation error, <code>app</code> ends up directly under <code>selector</code> instead of under <code>matchLabels</code>. The ReplicaSet spec is also missing the required pod <code>template</code> with matching labels.</p>
<p>Have a look at the docs for a proper setup:
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/</a></p>
| Aydin K. |
<p>I'm working on a project where I need to spawn 1 instance per user (customer).</p>
<p>I figured it makes sense to create some sort of manager to handle that and host it somewhere. Kubernetes seems like a good choice since it can be hosted virtually anywhere and it will automate a lot of things (e.g. ensuring instances keep running on failure).</p>
<p>All entities are in Python and have a corresponding Flask API.</p>
<pre><code> InstanceManager Instance (user1)
.-----------. .--------.
POST /instances/user3 --> | | ---------- | |---vol1
| | '--------'
| | -----.
'...........' \ Instance (user2)
\ .--------.
'- | |---vol2
'--------'
</code></pre>
<p>Now I can't seem to figure out how to translate this into Kubernetes</p>
<p>My thinking:</p>
<ul>
<li>Instance is a <code>StatefulSet</code> since I want the data to be maintained through restarts.</li>
<li>InstanceManager is a <code>Service</code> with a database attached to track user to instance IP (for health checks, etc).</li>
</ul>
<p>I'm pretty lost on how to make <code>InstanceManager</code> spawn a new instance on an incoming POST request. I did a lot of digging (Operators, namespaces, etc.) but nothing seems straightforward. Namely I don't seem to even be able to do that via <code>kubectl</code>. Am I thinking totally wrong on how Kubernetes works?</p>
| Pithikos | <p>I've done some progress and thought to share.</p>
<p>Essentially you need to interact with the Kubernetes REST API directly <strong>instead of applying a static yaml or using <code>kubectl</code></strong>, ideally with one of the numerous clients out there.</p>
<p>In our case there's two options:</p>
<ol>
<li>Create a namespace per user and then a service in that namespace</li>
<li>Create a new service with a unique name for each user</li>
</ol>
<p>The first approach seems more sensible since using namespaces gives a lot of other benefits (network control, resource allocation, etc.).</p>
<p>The service itself can be pointing to a <code>statefulset</code> or a <code>pod</code> depending on the situation.</p>
<p>There's another gotcha (and possibly more): namespaces, pod names, etc. all need to conform to RFC 1123, so for namespaces you can't simply use email addresses or even base64. You'll need to use something like <code>user-100</code> and keep a mapping table back to the actual user.</p>
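<p>As an illustration, a sketch of the per-user objects the manager could create through an API client — it reuses the <code>user-100</code> example name from above, while the service name, label and ports are placeholders:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: user-100              # mapped back to the real user in the manager's database
---
apiVersion: v1
kind: Service
metadata:
  name: instance
  namespace: user-100
spec:
  selector:
    app: instance             # selects the per-user statefulset/pod
  ports:
    - port: 80
      targetPort: 5000        # assumed Flask port
</code></pre>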
| Pithikos |
<p>Following this <a href="https://github.com/maciekrb/gcs-fuse-sample" rel="nofollow noreferrer">guide</a>,I am trying to run gcsfuse inside a pod in GKE. Below is the deployment manifest that I am using:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gcsfuse-test
spec:
replicas: 1
template:
metadata:
labels:
app: gcsfuse-test
spec:
containers:
- name: gcsfuse-test
image: gcr.io/project123/gcs-test-fuse:latest
securityContext:
privileged: true
capabilities:
add:
- SYS_ADMIN
lifecycle:
postStart:
exec:
command: ["mkdir", "/mnt"]
command: ["gcsfuse", "-o", "nonempty", "cloudsql-p8p", "/mnt"]
preStop:
exec:
command: ["fusermount", "-u", "/mnt"]
</code></pre>
<p>However, I would like to run gcsfuse without the privileged mode inside my GKE Cluster.
I think (because of questions like <a href="https://stackoverflow.com/questions/48407184/gcsfuse-on-datalab-vm-machine-error-fusermount-fuse-device-not-found-try-mo">these</a> on SO) it is possible to run the docker image with certain flags and there will be no need to run it in privileged mode.</p>
<p>Is there any way in GKE to run gcsfuse without running the container in privileged mode?</p>
| Amit Yadav | <p>Edit Apr 26, 2022: for a further developed repo derived from this answer, <strong>see <a href="https://github.com/samos123/gke-gcs-fuse-unprivileged" rel="nofollow noreferrer">https://github.com/samos123/gke-gcs-fuse-unprivileged</a></strong></p>
<p>Now it finally is possible to mount devices without <code>privileged: true</code> or <code>CAP_SYS_ADMIN</code>!</p>
<p>What you need is</p>
<ul>
<li><p>A Kubelet device manager which allows containers to have direct access to host devices in a secure way. The device manager explicitly exposes the configured devices via the Kubelet Device API. I used this hidden gem: <a href="https://gitlab.com/arm-research/smarter/smarter-device-manager" rel="nofollow noreferrer">https://gitlab.com/arm-research/smarter/smarter-device-manager</a>.</p>
</li>
<li><p>Define <a href="https://gitlab.com/arm-research/smarter/smarter-device-manager/-/blob/7eb7526956c59b9897a716922fd49566202b4e91/smarter-device-manager-configmap-rpi.yaml" rel="nofollow noreferrer">list of devices provided by the Device Manager</a> - add <code>/dev/YOUR_DEVICE_NAME</code> into this list, see example below.</p>
</li>
<li><p>Request a device via the Device Manager in the pod spec <code>resources.requests.smarter-devices/YOUR_DEVICE_NAME: 1</code></p>
</li>
</ul>
<p>I spent quite some time figuring this out, so I hope sharing the information here will save someone else the exploration.</p>
<p>I wrote up my detailed findings in the Kubernetes GitHub issue about /dev/fuse. See an <a href="https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805" rel="nofollow noreferrer">example setup in this comment</a> and more technical details above that one.</p>
<p>Examples from the comment linked above:</p>
<p><strong>Allow FUSE devices via Device Manager</strong>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: smarter-device-manager
namespace: device-manager
data:
conf.yaml: |
- devicematch: ^fuse$
nummaxdevices: 20
</code></pre>
<p><strong>Request /dev/fuse via Device Manager:</strong></p>
<pre><code># Pod spec:
resources:
limits:
smarter-devices/fuse: 1
memory: 512Mi
requests:
smarter-devices/fuse: 1
cpu: 10m
memory: 50Mi
</code></pre>
<p><strong>Device Manager as a DaemonSet</strong>:</p>
<pre><code># https://gitlab.com/arm-research/smarter/smarter-device-manager/-/blob/master/smarter-device-manager-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: smarter-device-manager
namespace: device-manager
labels:
name: smarter-device-manager
role: agent
spec:
selector:
matchLabels:
name: smarter-device-manager
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
name: smarter-device-manager
annotations:
node.kubernetes.io/bootstrap-checkpoint: "true"
spec:
## kubectl label node pike5 smarter-device-manager=enabled
# nodeSelector:
# smarter-device-manager : enabled
priorityClassName: "system-node-critical"
hostname: smarter-device-management
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: smarter-device-manager
image: registry.gitlab.com/arm-research/smarter/smarter-device-manager:v1.1.2
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
resources:
limits:
cpu: 100m
memory: 15Mi
requests:
cpu: 10m
memory: 15Mi
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: dev-dir
mountPath: /dev
- name: sys-dir
mountPath: /sys
- name: config
mountPath: /root/config
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
- name: dev-dir
hostPath:
path: /dev
- name: sys-dir
hostPath:
path: /sys
- name: config
configMap:
name: smarter-device-manager
</code></pre>
| Petrus Repo |
<p>I have a k8s cluster with an ingress nginx as a reverse proxy. I am using letsencrypt to generate TLS certificate</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: ******
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>Everything worked fine for months. Today,</p>
<pre><code>$ curl -v --verbose https://myurl
</code></pre>
<p>returns</p>
<pre><code>* Rebuilt URL to: https://myurl/
* Trying 51.103.58.**...
* TCP_NODELAY set
* Connected to myurl (51.103.58.**) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: certificate has expired
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
HTTPS-proxy has similar options --proxy-cacert and --proxy-insecure.
</code></pre>
<p>For 2 other people on my team, error is the same and I have the same error when I use Postman (expired certificate).</p>
<p>But for another one, we get no error :</p>
<pre><code>* Trying 51.103.58.**...
* TCP_NODELAY set
* Connected to myurl (51.103.58.**) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=myurl
* start date: Jul 24 07:15:13 2021 GMT
* expire date: Oct 22 07:15:11 2021 GMT
* subjectAltName: host "myurl" matched cert's "myurl"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd9be00d600)
> GET / HTTP/2
> Host: myurl
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< server: nginx/1.19.1
< date: Thu, 30 Sep 2021 16:11:23 GMT
< content-type: application/json; charset=utf-8
< content-length: 56
< vary: Origin, Accept-Encoding
< access-control-allow-credentials: true
< x-xss-protection: 1; mode=block
< x-frame-options: DENY
< strict-transport-security: max-age=15724800; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< etag: W/"38-3eQD3G7Y0vTkrLR+ExD2u5BSsMc"
<
* Connection #0 to host myurl left intact
{"started":"2021-09-30T13:30:30.912Z","uptime":9653.048}* Closing connection 0
</code></pre>
<p>When I use my web browser to go to the website, everything works fine and the certificate is presented as valid and for now, I get no error in prod or staging environment. (same error on staging)</p>
<p>Has anyone an explanation on this ?</p>
| soling | <p><em>Warning! Please plan OS upgrade path. The below advice should be applied only in emergency situation to quickly fix a critical system.</em></p>
<p>Your team missed an OS update or a <code>ca-certificates</code> package update.
The solution below works on old Debian/Ubuntu systems.</p>
<p>First check if you have offending DST Root CA X3 cert present:</p>
<pre><code># grep X3 /etc/ca-certificates.conf
mozilla/DST_Root_CA_X3.crt
</code></pre>
<p>Make sure the client OS has the proper ISRG Root X1 cert present too:</p>
<pre><code># grep X1 /etc/ca-certificates.conf
mozilla/ISRG_Root_X1.crt
</code></pre>
<p>This is going to disable X3:</p>
<pre><code># sed -i '/^mozilla\/DST_Root_CA_X3/s/^/!/' /etc/ca-certificates.conf && update-ca-certificates -f
</code></pre>
<p>Try <code>curl https://yourdomain</code> now, should pass.</p>
<p>Again, plan an upgrade please.</p>
| gertas |
<p>We have helm charts to deploy our application. We use a <code>configuration.json</code> file for application properties and load them to config map. But users typically use their own configuration file. </p>
<p>Default configuration.json file is packaged inside helm charts under data directoty. This file is read as </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4}}
</code></pre>
<p>And in values</p>
<pre><code>appConfigFile: data/configuration.json
</code></pre>
<p>If users install our charts directly from repository how can this configuration file be overriden? doing <code>--set appConfigFile=/path/to/custom.json</code> doen't populate config map.</p>
<p>If charts are untarred to a directory they can add the custom configuration file into charts directory and give the configuration file using <code>--set appConfigFile=customData/custom.json</code> works </p>
<p>Can file overrides be achieved when charts are deployed from repository directly?</p>
| Dheeraj Joshi | <p>Adding the custom configuration to a values file and executing <code>helm install</code> with the <code>-f</code> flag is a solution.</p>
<p>customValues.yaml</p>
<pre><code>overrideConfig: true
customConfig:
  # add your custom configuration keys here; the template renders them to JSON
  key: value
</code></pre>
<p>Config map yaml </p>
<pre><code>#If custom values file passed then overrideConfig variable will be set.
#So load configmap from customConfig variable
{{ if .Values.overrideConfig}}
app-config.json : |-
{{ toJson .Values.customConfig }}
{{ else }}
# Else load from the default configuration available in charts.
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4 }}
{{ end }}
</code></pre>
<p>If custom configuration is needed </p>
<pre><code>helm install -f customValues.yaml repo/chartName
</code></pre>
<p>Not sure if this is the perfect solution, but ended up taking this route.</p>
| Dheeraj Joshi |
<p>I have a Google Kubernetes cluster with its associated VM instance. How can I change the image of the disk used by that VM instance to Windows Server Edition? I know it's possible to create a VM instance from a Windows Server Image, but how do I change the current image of my existing VM instance to a Windows Server Image?</p>
<p>Thank you</p>
| Harry Stuart | <p>As of writing this answer (December 2018), Google Kubernetes Engine supports two operating systems:</p>
<ul>
<li>Container-Optimized OS (from Google) </li>
<li>Ubuntu</li>
</ul>
<p>More details about supported image types you can find <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-images" rel="nofollow noreferrer">here</a></p>
<p>To change the VM image, you need to update your configuration; this depends on how you create your kubernetes cluster. For example, you have the option to select the image when creating your cluster via the web UI (create cluster -> node pools -> customize -> advanced edit):</p>
<p><a href="https://i.stack.imgur.com/Q9zC0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q9zC0.png" alt="enter image description here"></a></p>
<p>If you are creating the cluster via configuration file you need to update the image you want to use in the config file and apply your changes, for example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: resource-reserver
spec:
containers:
- name: sleep-forever
image: <your-image-here>
</code></pre>
| Caner |
<p>Some kubernetes resources have a length limit of 63 characters (presumably because they need to be valid DNS names). Which are they?</p>
| AndreKR | <p>Those are <strong>at least</strong>:</p>
<ul>
<li>Pod names</li>
<li>Service names</li>
<li>The name part of a label key</li>
</ul>
<pre><code> labels:
example.com/environment: production <-- the string "environment"
</code></pre>
<ul>
<li>The label value</li>
</ul>
<pre><code> labels:
example.com/environment: production <-- the string "production"
</code></pre>
<ul>
<li>The name part of an annotation key</li>
</ul>
<pre><code> annotations:
example.com/image-registry: "https://hub.docker.com/" <-- the string "image-registry"
</code></pre>
<ul>
<li>CronJob names (including 11 automatically appended characters)</li>
</ul>
| AndreKR |
<p>When I browse my website from Chrome, it says that the certificate is invalid, and if I check the details, this is what I see:</p>
<pre><code>Issued to:
Common Name (CN) test.x.example.com
Organization (O) cert-manager
Organizational Unit (OU) <Not Part Of Certificate>
Issued by:
Common Name (CN) cert-manager.local
Organization (O) cert-manager
Organizational Unit (OU) <Not Part Of Certificate>
</code></pre>
<p>I don't understand what is going wrong. From cert-manager's output it would seem everything is going well:</p>
<pre><code>I1002 15:56:52.761583 1 start.go:76] cert-manager "level"=0 "msg"="starting controller" "git-commit"="95e8b7de" "version"="v0.9.1"
I1002 15:56:52.765337 1 controller.go:169] cert-manager/controller/build-context "level"=0 "msg"="configured acme dns01 nameservers" "nameservers"=["10.44.0.10:53"]
I1002 15:56:52.765777 1 controller.go:134] cert-manager/controller "level"=0 "msg"="starting leader election"
I1002 15:56:52.767133 1 leaderelection.go:235] attempting to acquire leader lease cert-manager/cert-manager-controller...
I1002 15:56:52.767946 1 metrics.go:203] cert-manager/metrics "level"=0 "msg"="listening for connections on" "address"="0.0.0.0:9402"
I1002 15:58:18.940473 1 leaderelection.go:245] successfully acquired lease cert-manager/cert-manager-controller
I1002 15:58:19.043002 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="challenges"
I1002 15:58:19.043050 1 base_controller.go:132] cert-manager/controller/challenges "level"=0 "msg"="starting control loop"
I1002 15:58:19.043104 1 controller.go:91] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificates-experimental"
I1002 15:58:19.043174 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="orders"
I1002 15:58:19.043200 1 base_controller.go:132] cert-manager/controller/orders "level"=0 "msg"="starting control loop"
I1002 15:58:19.043376 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="certificates"
I1002 15:58:19.043410 1 base_controller.go:132] cert-manager/controller/certificates "level"=0 "msg"="starting control loop"
I1002 15:58:19.043646 1 controller.go:91] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-ca"
I1002 15:58:19.044292 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="clusterissuers"
I1002 15:58:19.044459 1 base_controller.go:132] cert-manager/controller/clusterissuers "level"=0 "msg"="starting control loop"
I1002 15:58:19.044617 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="ingress-shim"
I1002 15:58:19.044742 1 base_controller.go:132] cert-manager/controller/ingress-shim "level"=0 "msg"="starting control loop"
I1002 15:58:19.044959 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="issuers"
I1002 15:58:19.045110 1 base_controller.go:132] cert-manager/controller/issuers "level"=0 "msg"="starting control loop"
E1002 15:58:19.082958 1 base_controller.go:91] cert-manager/controller/certificates/handleOwnedResource "msg"="error getting order referenced by resource" "error"="certificate.certmanager.k8s.io \"api-certificate\" not found" "related_resource_kind"="Certificate" "related_resource_name"="api-certificate" "related_resource_namespace"="staging" "resource_kind"="Order" "resource_name"="api-certificate-3031097725" "resource_namespace"="staging"
I1002 15:58:19.143501 1 base_controller.go:187] cert-manager/controller/orders "level"=0 "msg"="syncing item" "key"="staging/api-certificate-3031097725"
I1002 15:58:19.143602 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.143677 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.144011 1 sync.go:304] cert-manager/controller/orders "level"=0 "msg"="need to create challenges" "resource_kind"="Order" "resource_name"="api-certificate-3031097725" "resource_namespace"="staging" "number"=0
I1002 15:58:19.144043 1 logger.go:43] Calling GetOrder
I1002 15:58:19.144033 1 conditions.go:154] Setting lastTransitionTime for Certificate "cert-manager-webhook-webhook-tls" condition "Ready" to 2019-10-02 15:58:19.144027373 +0000 UTC m=+86.444394730
I1002 15:58:19.145112 1 conditions.go:154] Setting lastTransitionTime for Certificate "cert-manager-webhook-ca" condition "Ready" to 2019-10-02 15:58:19.145103359 +0000 UTC m=+86.445470721
I1002 15:58:19.145593 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="staging/api-certificate"
I1002 15:58:19.147411 1 issue.go:169] cert-manager/controller/certificates/certificates "level"=0 "msg"="Order is not in 'valid' state. Waiting for Order to transition before attempting to issue Certificate." "related_resource_kind"="Order" "related_resource_name"="api-certificate-3031097725" "related_resource_namespace"="staging"
I1002 15:58:19.148059 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.148099 1 base_controller.go:187] cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="staging/example-ingress"
I1002 15:58:19.148906 1 sync.go:71] cert-manager/controller/ingress-shim "level"=0 "msg"="not syncing ingress resource as it does not contain a \"certmanager.k8s.io/issuer\" or \"certmanager.k8s.io/cluster-issuer\" annotation" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="staging"
I1002 15:58:19.148925 1 base_controller.go:193] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="staging/example-ingress"
I1002 15:58:19.148133 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.148963 1 conditions.go:91] Setting lastTransitionTime for Issuer "cert-manager-webhook-selfsign" condition "Ready" to 2019-10-02 15:58:19.148956891 +0000 UTC m=+86.449324275
I1002 15:58:19.149567 1 setup.go:73] cert-manager/controller/issuers/setup "level"=0 "msg"="signing CA verified" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager" "resource_kind"="Issuer" "resource_name"="cert-manager-webhook-ca" "resource_namespace"="cert-manager"
I1002 15:58:19.149759 1 conditions.go:91] Setting lastTransitionTime for Issuer "cert-manager-webhook-ca" condition "Ready" to 2019-10-02 15:58:19.149752693 +0000 UTC m=+86.450120071
I1002 15:58:19.148155 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/letsencrypt-staging"
I1002 15:58:19.150457 1 setup.go:160] cert-manager/controller/issuers "level"=0 "msg"="skipping re-verifying ACME account as cached registration details look sufficient" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging" "resource_namespace"="default"
I1002 15:58:19.148177 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="staging/letsencrypt-staging-issuer"
I1002 15:58:19.148630 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="staging/api-certificate"
I1002 15:58:19.150669 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="default/letsencrypt-staging"
I1002 15:58:19.151696 1 setup.go:160] cert-manager/controller/issuers "level"=0 "msg"="skipping re-verifying ACME account as cached registration details look sufficient" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging-secret-key" "related_resource_namespace"="staging" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging-issuer" "resource_namespace"="staging"
I1002 15:58:19.151975 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="staging/letsencrypt-staging-issuer"
I1002 15:58:19.153763 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.156512 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.157047 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.157659 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.158671 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.158827 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.171562 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.172759 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.173387 1 setup.go:73] cert-manager/controller/issuers/setup "level"=0 "msg"="signing CA verified" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager" "resource_kind"="Issuer" "resource_name"="cert-manager-webhook-ca" "resource_namespace"="cert-manager"
I1002 15:58:19.173465 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.173562 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.174168 1 sync.go:329] cert-manager/controller/certificates/certificates "level"=0 "msg"="certificate scheduled for renewal" "duration_until_renewal"="6905h41m20.825882558s" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-webhook-tls" "related_resource_namespace"="cert-manager"
I1002 15:58:19.174487 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.175092 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175489 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175743 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175978 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.176791 1 sync.go:329] cert-manager/controller/certificates/certificates "level"=0 "msg"="certificate scheduled for renewal" "duration_until_renewal"="41945h41m15.823245228s" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager"
I1002 15:58:19.177118 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.807942 1 base_controller.go:193] cert-manager/controller/orders "level"=0 "msg"="finished processing work item" "key"="staging/api-certificate-3031097725"
</code></pre>
<p>Here is my configuration.</p>
<p><strong>Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- test.x.example.com
secretName: letsencrypt-staging-certificate-secret
rules:
- host: test.x.example.com
http:
paths:
- path: /
backend:
serviceName: example-frontend
servicePort: 80
</code></pre>
<p><strong>Issuer</strong></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-staging-issuer
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging-secret-key
# Enable the HTTP-01 challenge provider
solvers:
- http01: {}
</code></pre>
<p><strong>Certificate</strong></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: test-x-example-com
spec:
secretName: letsencrypt-staging-certificate-secret
issuerRef:
name: letsencrypt-staging-issuer
kind: Issuer
dnsNames:
- test.x.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- test.x.example.com
</code></pre>
<p>Additional details: the secrets are in the <code>staging</code> namespace, like everything else except cert manager which is in the <code>cert-manager</code> namespace. The cluster is deployed on GKE.</p>
<p>EDIT: I'm wondering if it's possible that I hit the limits of the production environment in Let's Encrypt and got blocked. Is it possible to verify that somewhere?</p>
| rubik | <p>I finally solved the issue mostly by editing the Certificate configuration. I also switched from an Issuer to a ClusterIssuer but that should not have any impact on this issue. I think the problem was ACME verification.</p>
<p>Here is my new ClusterIssuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging-issuer
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging-secret-key
# Enable the HTTP-01 challenge provider
http01: {}
</code></pre>
<p>and, more importantly, the new Certificate:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: test-x-example-com
spec:
secretName: letsencrypt-staging-certificate-secret
issuerRef:
name: letsencrypt-staging-issuer
kind: ClusterIssuer
dnsNames:
- test.x.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- test.x.example.com
</code></pre>
| rubik |
<p>First day helm user here. Trying to understand how to build common templates for k8s resources.
Let's say I have 10 cron jobs within a single chart, all of them differing only by args and names. Today 10 full job manifests exist and 95% of the manifest content is identical. I want to move the common part into a template and create 10 manifests where I provide specific values for args and names.</p>
<p>So I defined template _cron-job.yaml</p>
<pre><code> {{- define "common.cron-job"}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ include "costing-report.name" . }}-bom-{{ .Values.env }}
labels:
{{ include "costing-report.labels" . | indent 4 }}
spec:
schedule: "{{ .Values.cronjob.scheduleBom }}"
suspend: {{ .Values.cronjob.suspendBom }}
{{- with .Values.cronjob.concurrencyPolicy }}
concurrencyPolicy: {{ . }}
{{- end }}
{{- with .Values.cronjob.failedJobsHistoryLimit }}
failedJobsHistoryLimit: {{ . }}
{{- end }}
{{- with .Values.cronjob.successfulJobsHistoryLimit }}
successfulJobsHistoryLimit: {{ . }}
{{- end }}
jobTemplate:
metadata:
labels:
app.kubernetes.io/name: {{ include "costing-report.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
env:
- name: spring_profiles_active
value: "{{ .Values.env }}"
envFrom:
- configMapRef:
name: {{ include "costing-report.fullname" . }}
- secretRef:
name: {{ .Values.secrets.properties }}
restartPolicy: Never
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end -}}
</code></pre>
<p>and now I need to create a job manifest that overrides the name and args, job1.yaml:</p>
<pre><code>{{- template "common.cron-job" . -}}
??? override ???
name: {{ include "cost-report.name" . }}-job1-{{ .Values.env }}
jobTemplate:
spec:
template:
spec:
containers:
args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
</code></pre>
<p>Is there any way to do this? I didn't find this in the helm docs. I did find <a href="https://github.com/helm/charts/tree/master/incubator/common" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/common</a> but it didn't work and gave me an error.</p>
<p>Thanks.</p>
| Georgy Gobozov | <p>Solution found</p>
<p><strong>Option 1</strong>
Use the example from the helm GitHub repo: <a href="https://github.com/helm/charts/tree/master/incubator/common" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/common</a>.
The solution is based on YAML merging and values overrides. It is pretty flexible and allows you to define common templates and then use them to compose the final k8s manifests.</p>
<p><strong>Option 2</strong>
Define a common template and pass parameters with the desired values.
In my case it looks something like this.</p>
<p>_common.cronjob.yaml</p>
<pre><code>{{- define "common.cronjob" -}}
{{- $root := .root -}}
{{- $name := .name -}}
{{- $schedule := .schedule -}}
{{- $suspend := .suspend -}}
{{- $args := .args -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $name }}
labels:
{{ include "costing-report.labels" $root | indent 4 }}
spec:
schedule: {{ $schedule }}
suspend: {{ $suspend }}
{{- with $root.Values.cronjob.concurrencyPolicy }}
concurrencyPolicy: {{ . }}
{{- end }}
{{- with $root.Values.cronjob.failedJobsHistoryLimit }}
failedJobsHistoryLimit: {{ . }}
{{- end }}
{{- with $root.Values.cronjob.successfulJobsHistoryLimit }}
successfulJobsHistoryLimit: {{ . }}
{{- end }}
jobTemplate:
metadata:
labels:
app.kubernetes.io/name: {{ include "costing-report.name" $root }}
app.kubernetes.io/instance: {{ $root.Release.Name }}
spec:
template:
spec:
containers:
- name: {{ $root.Chart.Name }}
image: "{{ $root.Values.image.repository }}:{{ $root.Values.image.tag }}"
imagePullPolicy: {{ $root.Values.image.pullPolicy }}
args: {{ $args }}
env:
- name: spring_profiles_active
value: "{{ $root.Values.env }}"
envFrom:
- configMapRef:
name: {{ include "costing-report.fullname" $root }}
- secretRef:
name: {{ $root.Values.secrets.properties }}
restartPolicy: Never
{{- with $root.Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end -}}
</code></pre>
<p>Then create the job manifest(s) and define the values to pass to the common template.</p>
<p>bom-cronjob.yaml</p>
<pre><code>{{ $bucket := (printf "%s%s%s" "\"--bucket=ll-costing-report-" .Values.env "\"," )}}
{{ $email := (printf "%s%s%s" "\"--email=" .Values.configmap.service.email_bom "\"") }}
{{ $args := (list "\"--report=Bom\"," "\"--reportType=load\"," "\"--source=bamboorose\"," $bucket "\"--table=COSTING_BOM\"," "\"--ignoreLines=1\"," "\"--truncate=true\"," $email )}}
{{ $name := (printf "%s%s" "costing-report.name-bom-" .Values.env )}}
{{- template "common.cronjob" (dict "root" . "name" $name "schedule" .Values.cronjob.scheduleBom "suspend" .Values.cronjob.suspendBom "args" $args) -}}
</code></pre>
<p>The last line does the trick. The trick is that you can pass only a single argument to a template, so in my case it's a dictionary with all the values I need on the template side. You can omit defining the template variables and use the dict values right away. Please note that I pass the root context (scope) as "root" and prefix . with "root" inside the template.</p>
| Georgy Gobozov |
<p>I use <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.24.1" rel="noreferrer">nginx-ingress-controller:0.24.1</a> (<a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="noreferrer">Inspired by</a>)</p>
<p>I would like to set a DNS A record to LB IP address, so it would connect it to the Google cloud public bucket (<code>my-back-end-bucket</code>) that has public index.html in the root AND to the back-end by another url rule.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Service
apiVersion: v1
metadata:
name: google-storage-buckets-service
namespace: ingress-nginx
spec:
type: ExternalName
externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: proxy-assets-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /my.bucket.com
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: google-storage-buckets-service
servicePort: 443
- path: /c/
backend:
serviceName: hello-world-service
servicePort: 8080
</code></pre>
<p>When reaching <a href="https://my.ip.add.ress/c" rel="noreferrer">https://my.ip.add.ress/c</a> I get both outputs: <strong>Hello, world! bucket content.</strong></p>
<p><strong>"Hello, world!"</strong> from the <strong>hello-world-service</strong></p>
<p><strong>"bucket content"</strong> from the <strong>bucket</strong>'s index.html file</p>
<p>Question: how do I make it work so that <strong>ip/</strong> returns the bucket content
and <strong>ip/c</strong> returns the back-end response content?</p>
| ses | <p>You can split your ingress into two: one that defines <code>path: /*</code> with the necessary annotations, and another that defines <code>path: /c/</code>.</p>
<p>The problem with your single ingress is that the annotations you want to apply only to <code>path: /*</code> get applied to the other paths as well.</p>
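<p>A sketch of that split, reusing the names and annotations from the question — the second ingress name is an arbitrary placeholder:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /my.bucket.com
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: google-storage-buckets-service
          servicePort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend-ingress            # placeholder name
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /c/
        backend:
          serviceName: hello-world-service
          servicePort: 8080
</code></pre>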
| hers19 |
<p>Suppose you have deployed a service that uses certificates for TLS/HTTPS communication.</p>
<p>So, I need to deploy Java client containers which have to trust these certificates.</p>
<p>Nevertheless, Java looks into its truststores in order to check whether a certificate is valid.</p>
<p>As you can see, I'm not able to create an image using these certificates since they are unknown at build time.</p>
<p>I mean, I'm not able to create this kind of <code>Dockerfile</code> snippet, because <code>/var/run/secrets/kubernetes.io/certs/tls.crt</code> is not available at build time.</p>
<pre><code>RUN keytool -import -alias vault -storepass changeit -keystore truststore.jks -noprompt -trustcacerts -file /var/run/secrets/kubernetes.io/certs/tls.crt
</code></pre>
<p>So, how can I populate these truststores with these certificates when the containers/pods are deployed/started?</p>
<p>I hope I've explained it well.</p>
| Jordi | <p>RedHat has a tutorial on how to do this on OpenShift:</p>
<p><a href="https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/" rel="noreferrer">https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/</a></p>
<p>It uses OpenShift's built-in CA to actually generate and supply the certificate, so if you're using vanilla k8s you'll need to do that yourself, but once you have the certificate in a file on the pod, the method is exactly the same.</p>
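<p>On vanilla Kubernetes, a rough sketch of the same idea is an init container that builds the truststore at pod start-up — the secret name, images and mount paths below are assumptions, so adjust them to wherever your certificate actually comes from:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: java-client
spec:
  volumes:
    - name: tls-cert
      secret:
        secretName: my-service-tls            # assumed secret containing tls.crt
    - name: truststore
      emptyDir: {}
  initContainers:
    - name: build-truststore
      image: openjdk:11-jre                   # any image that ships keytool
      command:
        - sh
        - -c
        - >
          keytool -import -alias service -storepass changeit -noprompt
          -trustcacerts -keystore /truststore/truststore.jks -file /certs/tls.crt
      volumeMounts:
        - name: tls-cert
          mountPath: /certs
        - name: truststore
          mountPath: /truststore
  containers:
    - name: app
      image: my-registry/java-client:latest   # hypothetical application image
      volumeMounts:
        - name: truststore
          mountPath: /truststore              # point -Djavax.net.ssl.trustStore here
</code></pre>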
| James Roper |
<p>I'm trying to bootstrap the k8s cluster using weave as the CNI plugin; the cluster was originally configured without <code>--pod-cidr</code> and without the weave plugin.</p>
<pre><code>root@kube1:/etc/systemd/system/kubelet.service.d# kubectl get no
NAME STATUS ROLES AGE VERSION
kube1 Ready master 31m v1.18.2
kube2 Ready <none> 30m v1.18.2
kube3 Ready <none> 31m v1.18.2
root@kube1:/etc/systemd/system/kubelet.service.d#
</code></pre>
<p>So I have done the cleanup using the commands below:</p>
<pre><code> kubectl drain kube2 --delete-local-data --force --ignore-daemonsets
kubectl drain kube3 --delete-local-data --force --ignore-daemonsets
kubectl drain kube1 --delete-local-data --force --ignore-daemonsets
kubectl delete no kube1 kube2 kube3
kubeadm reset
curl -L git.io/weave -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
kubeadm reset
weave reset --force
rm /opt/cni/bin/weave-*
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
systemctl restart docker
</code></pre>
<p>I have ensured the weave bridge interface was deleted on all 3 nodes and re-initiated the cluster:</p>
<pre><code> kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.100.0.0/16
</code></pre>
<p>I have ensured the node CIDR was allocated to the worker nodes (pod CIDR 10.244.0.0/16) as below:</p>
<pre><code>root@kube1:/etc/systemd/system/kubelet.service.d# kubectl get no kube2 -o yaml|grep -i podCIDR|grep -i 24
podCIDR: 10.244.2.0/24
root@kube1:/etc/systemd/system/kubelet.service.d# kubectl get no kube3 -o yaml|grep -i podCIDR|grep -i 24
podCIDR: 10.244.1.0/24
</code></pre>
<p>After I created the weave pods, I was expecting to see the weave bridge interface IP in 10.244.*, but it seems it was created with the default weave configuration (10.32.0.1):</p>
<pre><code>root@kube2:/etc/kubernetes# ifconfig weave
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 10.47.255.255
inet6 fe80::187b:63ff:fe5c:a2ae prefixlen 64 scopeid 0x20<link>
ether 1a:7b:63:5c:a2:ae txqueuelen 1000 (Ethernet)
</code></pre>
<p>Is there anything I have missed in the cleanup, or is this the weave plugin's default behaviour?</p>
| JPNagarajan | <p>By default Weave Net uses its own IP allocator, which can be configured via the environment variable <code>IPALLOC_RANGE</code>. <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-changing-configuration-options" rel="nofollow noreferrer">Link to docs</a></p>
<p>If you change the CNI config on each node to use a different IPAM plugin, e.g. <a href="https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local" rel="nofollow noreferrer">"host-local"</a>,
you can probably get that to do exactly what you tried to do.</p>
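<p>For example, with the standard weave-net DaemonSet the allocation range can be set via an environment variable on the <code>weave</code> container so that it lines up with the cluster's pod CIDR — the image tag below is just a placeholder for whatever version you deploy:</p>
<pre><code># excerpt from the weave-net DaemonSet pod spec
containers:
  - name: weave
    image: weaveworks/weave-kube:2.6.0   # placeholder version
    env:
      - name: IPALLOC_RANGE
        value: 10.244.0.0/16             # match --pod-network-cidr
</code></pre>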
| Bryan |
<p>I have an anti-affinity rule that asks kubernetes to schedule pods from the same deployment onto <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node" rel="nofollow noreferrer">different nodes</a>, we've used it successfully for a long time.</p>
<pre><code>affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: application
operator: In
values:
- {{ $appName }}
- key: proc
operator: In
values:
- {{ $procName }}
</code></pre>
<p>I'm trying to update my pod affinity rules to be a strong preference instead of a hard requirement, so that we don't need to expand our cluster if a deployment needs more replicas than there are nodes available.</p>
<pre><code>affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
weight: 100
labelSelector:
matchExpressions:
- key: application
operator: In
values:
- {{ $appName }}
- key: proc
operator: In
values:
- {{ $procName }}
</code></pre>
<p>However, when I try applying the new rules, I get an unexpected error with the topologyKey:</p>
<pre><code>Error: Deployment.apps "core--web" is invalid:
[spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Required value: can not be empty,
spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Invalid value: "": name part must be non-empty,
spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Invalid value: "": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')]
</code></pre>
<p>The scheduler seems to be getting an empty string value for the topology key, even though all my nodes have a label for the specified key which match the regex:</p>
<pre><code>$ kubectl describe nodes | grep kubernetes.io/hostname
kubernetes.io/hostname=ip-10-x-x-x.ec2.internal
kubernetes.io/hostname=ip-10-x-x-x.ec2.internal
kubernetes.io/hostname=ip-10-x-x-x.ec2.internal
kubernetes.io/hostname=ip-10-x-x-x.ec2.internal
</code></pre>
<p>I didn't expect to see a problem like this from a simple change from required to preferred. What did I screw up to cause the topologyKey error?</p>
| Brad Koch | <p>There's a slight difference between the syntax of required and preferred, note the reference to <code>podAffinityTerm</code> in the error message path:</p>
<pre><code>spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
</code></pre>
<p>The correct syntax for preferred scheduling is:</p>
<pre><code>affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: application
operator: In
values:
- {{ $appName }}
- key: proc
operator: In
values:
- {{ $procName }}
</code></pre>
<p>Note that <code>weight</code> is a top level key, with a sibling of <code>podAffinityTerm</code> which contains the <code>topologyKey</code> and <code>labelSelector</code>.</p>
| Brad Koch |
<p>I have a situation where I would like to run two kubernetes clusters in the same AWS VPC sharing subnets. This seems to work okay except the weave CNI plugin seems to discover nodes in the other cluster. These nodes get rejected with "IP allocation was seeded by different peers" which makes sense. They are different clusters. Is there a way to keep weave from finding machines in alternate clusters. When I do <code>weave --local status ipam</code> and <code>weave --local status targets</code> I see the expected targets and ipams for each cluster.</p>
<p>Weave pods are in an infinite loop of connecting and rejecting nodes from alternate clusters. This is chewing up cpu and impacting the clusters. If I run <code>kube-utils</code> inside a weave pod it returns the correct nodes for each cluster. It seems kubernetes should know what peers are available, can I just have weave use the peers that the cluster knows about.</p>
<p>After further investigation I believe the issue is that I have scaled machines up and down for both clusters. IP addresses were re-used from one cluster to the next in the process. For instance Cluster A scaled down a node. Weave continues to attempt connections to the now lost node. Cluster B scales up and uses the ip that was used originally in Cluster A. Weave finds the node. This then made weave "discover" the other cluster nodes. Once it discovers one node from the other cluster, it discovers all the nodes. </p>
<p>I have upgraded from 2.4.0 to 2.4.1 to see if some fixes related to re-using ips mitigates this issue. </p>
| cchanley2003 | <p>There is a demo <a href="https://github.com/weaveworks-experiments/demo-weave-kube-hybrid" rel="nofollow noreferrer">here</a> where Weave Net is run across multiple clusters. This demo was shown in the keynote for KubeCon 2016.</p>
<p>The most important part is <a href="https://github.com/weaveworks-experiments/demo-weave-kube-hybrid/blob/master/weave-kube-join.yaml#L34" rel="nofollow noreferrer">here</a>
which stops subsequent clusters from forming their own cluster and hence rejecting connections from others.</p>
<pre><code>--ipalloc-init=observer
</code></pre>
<p>It's not a particularly clean solution, hacking around with the config, but it does work.</p>
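<p>For the Kubernetes addon, one way to pass that flag is an env entry on the <code>weave</code> container of the clusters that should join as observers — this sketch assumes your weave-kube launch script honours the <code>EXTRA_ARGS</code> environment variable, so check the version you deploy:</p>
<pre><code># excerpt from the weave-net DaemonSet on the joining cluster (sketch)
containers:
  - name: weave
    env:
      - name: EXTRA_ARGS                 # assumption: supported by your weave-kube launcher
        value: "--ipalloc-init=observer"
</code></pre>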
| Bryan |
<p>My understanding is that setting the <code>Service</code> type to <code>LoadBalancer</code> creates a new Azure Load Balancer and assigns an IP address to the <code>Service</code>. Does this mean that I can have multiple Services using port 80? If the app behind my <code>Service</code> (an ASP.NET Core app) can handle TLS and HTTPS why shouldn't I just use <code>LoadBalancer</code>'s for any <code>Service</code> I want to expose to the internet?</p>
<p>What is the advantage of using an <code>Ingress</code> if I don't care about TLS termination (You can let Cloudflare handle TLS termination)? If anything, it slows things down by adding an extra hop for every request.</p>
<h2>Update</h2>
<p>Some answers below mention that creating load balancers is costly. It should be noted that load balancers on Azure are free but they do charge for IP addresses, of which they give you five for free. So for small projects where you want to expose up to five IP addresses, it's essentially free. Any more than that and you may want to look at using <code>Ingress</code>.</p>
<p>Some answers also mention extra complexity if you don't use <code>Ingress</code>. I have already mentioned that Cloudflare can handle TLS termination for me. I've also discovered the <code>external-dns</code> Kubernetes project to create DNS entries in Cloudflare pointing at the load balancers IP address? It seems to me that cutting out <code>Ingress</code> reduces complexity as it's one less thing that I have to configure and manage. The choice of Ingress is also massive, it's likely that I'll pick the wrong one which will end up unmaintained after some time.</p>
| Muhammad Rehan Saeed | <p>There is a nice article <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">here</a> which describe the differences on Service(Load Balancer) and Ingress.</p>
<p>In summary, you can have multiple Services of type LoadBalancer in the cluster, where each application is exposed independently of the others. The main issue is that each Load Balancer added will increase the cost of your solution, and it does not have to be this way unless you strictly need it.</p>
<p>If multiple applications listen on port 80 inside their containers, there is no reason you also need to map them to port 80 on the host node. You can assign any port, because the Service will handle the dynamic port mappings for you.</p>
<p>The ingress is best in this scenario, because you can have one ingress listening on port 80, and route the traffic to the right service based on many variables, like:</p>
<ul>
<li>Domain</li>
<li>Url Path</li>
<li>Query String</li>
<li>And many other</li>
</ul>
<p>Ingress is not just for TLS termination; in simple terms it is a proxy/gateway that will control the routing to the right service, and TLS termination is just one of its features.</p>
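<p>For illustration, a minimal sketch of a single ingress fanning traffic out to two services by host and path — the hosts and service names are placeholders:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-service      # placeholder
          servicePort: 80
  - host: app2.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: app2-service      # placeholder
          servicePort: 80
</code></pre>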
| Diego Mendes |
<p>I am writing an Operator for Kubernetes using the Operator SDK and have a question about logging (I am still rather new to Go).</p>
<p>I have setup a logger in my controller that I have attached a UUID to as a Trace ID</p>
<pre><code>func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
myLog = log.FromContext(ctx).WithValues("traceID", uuid.NewUUID())
myCtx := log.IntoContext(ctx, myLog)
</code></pre>
<p>However I have a Library that is setup at the same time as the controllers and passed in via the struct.</p>
<pre><code>type MyReconciler struct {
MyBase
MyClient MyGreatClient
}
</code></pre>
<p>And each method on the client take the ctx as a parameter.</p>
<p>As I want to use the logger in the ctx as it contains my trace ID do I need to add this line to every method in the client or is there a better way to do it?</p>
<pre><code>func (mgc *MyGreatClient) SomethingExists(ctx context.Context, something string) bool {
myGreatClientLog = log.FromContext(ctx).WithName("MyGreatClient")
myGreatClientLog.Info("Checking Something Exists", "Something", something)
</code></pre>
<p>Is there a better way to achieve what I am wanting to do? It doesn't appear that <code>"sigs.k8s.io/controller-runtime/pkg/log"</code> (of type <code>logr.Logger</code>) supports just logging a context like zap.</p>
<pre><code> myappcontex.Logger(ctx).Info("did something awesome")
</code></pre>
<p>Any help on doing this the idiomatic way is appreciated</p>
| apr_1985 | <p>Not at all sure of this answer, and I too wonder why logging and logging sinks are built to be so complex (see https://dave.cheney.net/2015/11/05/lets-talk-about-logging, which I found referenced in logr https://pkg.go.dev/github.com/go-logr/[email protected]);
<p>This is how I logged in a generated <code>kubebuilder</code> operator controller</p>
<pre><code>log.Log.Info("Pod Image is set", "PodImageName", testOperator.Spec.PodImage)
</code></pre>
<p>Output:</p>
<pre><code>1.6611775636957748e+09 INFO Pod Image is set {"PodImageName": "alexcpn/run_server:1.2"}
</code></pre>
<p>and with this</p>
<pre><code>log.FromContext(ctx).Info("Pod Image is ", "PodImageName", testOperator.Spec.PodImage)
</code></pre>
<p>Output is:</p>
<pre><code>1.6611801111484244e+09 INFO Pod Image is {"controller": "testoperartor", "controllerGroup": "grpcapp.mytest.io", "controllerKind": "Testoperartor", "testoperartor": {"name":"testoperartor-sample","namespace":"default"}, "namespace": "default", "name": "testoperartor-sample", "reconcileID": "ffa3a957-c14f-4ec9-8cf9-767c38fc26ee", "PodImageName": "alexcpn/run_server:1.2"}
</code></pre>
<p>The controller uses the Go logr library.</p>
<p><code>All logging in controller-runtime is structured, using a set of interfaces defined by a package called logr (https://pkg.go.dev/github.com/go-logr/logr). The sub-package zap provides helpers for setting up logr backed by Zap (go.uber.org/zap) </code> <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink</a></p>
<p>And I can see that it sets Zap logging in main</p>
<pre><code>ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
</code></pre>
| Alex Punnen |
<p>I'm trying to run Ceph on (micro)K8s, which is running on my Ubuntu WSL distro.</p>
<p>OSD pods are not being created because no supported device is available (see the logs below).</p>
<p>Ceph is configured to <a href="https://rook.io/docs/rook/v1.5/ceph-common-issues.html#osd-pods-are-not-created-on-my-devices" rel="nofollow noreferrer">UseAllDevices</a>, and you can see it searching for one in the log below.</p>
<p>Ceph ideally wants an unformatted partition, which I created on my Windows host, but I'm unsure how to mount that as <code>/dev/sd{c}</code>, or whether I should try to create a new partition within WSL itself?</p>
<p>I don't know how to do either or if that is even the right approach.</p>
<p>Thanks in advance.</p>
<pre><code>2021-03-01 17:09:17.090302 W | inventory: skipping device "loop0". unsupported diskType loop
2021-03-01 17:09:17.112037 W | inventory: skipping device "loop1". unsupported diskType loop
2021-03-01 17:09:17.150605 W | inventory: skipping device "loop2". unsupported diskType loop
2021-03-01 17:09:17.173562 W | inventory: skipping device "loop3". unsupported diskType loop
2021-03-01 17:09:17.185464 W | inventory: skipping device "loop4". unsupported diskType loop
2021-03-01 17:09:17.209067 W | inventory: skipping device "loop5". unsupported diskType loop
2021-03-01 17:09:17.224485 W | inventory: skipping device "loop6". unsupported diskType loop
2021-03-01 17:09:17.246726 W | inventory: skipping device "loop7". unsupported diskType loop
2021-03-01 17:09:17.257490 W | inventory: skipping device "loop8". unsupported diskType loop
2021-03-01 17:09:17.272513 W | inventory: skipping device "loop9". unsupported diskType loop
2021-03-01 17:09:17.292126 W | inventory: skipping device "loop10". unsupported diskType loop
2021-03-01 17:09:17.301785 W | inventory: skipping device "loop11". unsupported diskType loop
2021-03-01 17:09:17.323591 W | inventory: skipping device "loop12". unsupported diskType loop
2021-03-01 17:09:17.327819 W | inventory: skipping device "loop13". diskType is empty
2021-03-01 17:09:20.140453 I | cephosd: skipping device "ram0": ["Insufficient space (<5GB)"].
2021-03-01 17:09:21.762693 I | cephosd: skipping device "ram1": ["Insufficient space (<5GB)"].
2021-03-01 17:09:23.759026 I | cephosd: skipping device "ram2": ["Insufficient space (<5GB)"].
2021-03-01 17:09:25.396302 I | cephosd: skipping device "ram3": ["Insufficient space (<5GB)"].
2021-03-01 17:09:26.512274 I | cephosd: skipping device "ram4": ["Insufficient space (<5GB)"].
2021-03-01 17:09:27.664515 I | cephosd: skipping device "ram5": ["Insufficient space (<5GB)"].
2021-03-01 17:09:28.854953 I | cephosd: skipping device "ram6": ["Insufficient space (<5GB)"].
2021-03-01 17:09:30.080786 I | cephosd: skipping device "ram7": ["Insufficient space (<5GB)"].
2021-03-01 17:09:31.407741 I | cephosd: skipping device "ram8": ["Insufficient space (<5GB)"].
2021-03-01 17:09:32.646524 I | cephosd: skipping device "ram9": ["Insufficient space (<5GB)"].
2021-03-01 17:09:33.856632 I | cephosd: skipping device "ram10": ["Insufficient space (<5GB)"].
2021-03-01 17:09:35.568848 I | cephosd: skipping device "ram11": ["Insufficient space (<5GB)"].
2021-03-01 17:09:36.766882 I | cephosd: skipping device "ram12": ["Insufficient space (<5GB)"].
2021-03-01 17:09:37.800115 I | cephosd: skipping device "ram13": ["Insufficient space (<5GB)"].
2021-03-01 17:09:38.895007 I | cephosd: skipping device "ram14": ["Insufficient space (<5GB)"].
2021-03-01 17:09:40.013397 I | cephosd: skipping device "ram15": ["Insufficient space (<5GB)"].
2021-03-01 17:09:40.013498 I | cephosd: skipping device "sda" because it contains a filesystem "ext4"
2021-03-01 17:09:40.013513 I | cephosd: skipping device "sdb" because it contains a filesystem "ext4"
2021-03-01 17:09:41.237145 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node
</code></pre>
<p>Original: <a href="https://superuser.com/questions/1630022/create-a-new-dev-sdb-c-d-on-wsl-for-ceph">https://superuser.com/questions/1630022/create-a-new-dev-sdb-c-d-on-wsl-for-ceph</a></p>
| a11hard | <p>It took me a couple of days to sort out, but to get this working you need to:</p>
<ul>
<li>Get the developer preview of Windows 10 so you have the --mount option for WSL</li>
<li>Create a VHDX on your Windows host. You can do this through Disk Management by creating a dynamic VHDX under the Actions menu.</li>
<li>Mount that VHDX and, voilà, the <code>/dev/sd{x}</code> device will be created. In my use case this allowed Ceph to create the OSDs using that disk.</li>
</ul>
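<p>For reference, a heavily hedged sketch of the command-line equivalent (it assumes the Hyper-V PowerShell module and a WSL build that supports <code>--mount</code>; paths, sizes and the disk number are placeholders, so adapt them to your setup):</p>
<pre><code># On the Windows host, in an elevated PowerShell session
New-VHD -Path C:\wsl\ceph-osd.vhdx -SizeBytes 20GB -Dynamic      # create the dynamic virtual disk
Mount-VHD -Path C:\wsl\ceph-osd.vhdx -Passthru | Get-Disk        # note the disk number it is attached as
wsl --mount \\.\PHYSICALDRIVE3 --bare                            # replace 3 with that number; --bare leaves it unformatted for Ceph
</code></pre>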
| a11hard |
<p>I'm installing pod network addon flannel to my cluster.</p>
<p>On the official kubernetes doc, the url to install flannel add-on is</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
</code></pre>
<p>and on the github repository wiki, the url is </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>So what is the difference? Which URL should I apply?</p>
<p>Thank you for the help.</p>
| Vincent | <p>They are the same document, but potentially different versions.
You can view the <a href="https://github.com/coreos/flannel/commits/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">version history</a>, and observe that <code>62e44c867a28</code> is (as of this answer) the second-most-recent version.</p>
<p>That particular change is described at <a href="https://github.com/coreos/flannel/pull/1162" rel="nofollow noreferrer">https://github.com/coreos/flannel/pull/1162</a> - basically updating definitions from a beta format to a newer format that is less likely to change going forward.</p>
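<p>If you want to see what actually differs before applying, a quick hedged check is to compare the two manifests, or at least the flannel image they pin:</p>
<pre><code>curl -s https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml | grep "image:" | sort -u
curl -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | grep "image:" | sort -u
</code></pre>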
| Bryan |
<p>My deployment pod was evicted due to memory consumption:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 1h kubelet, gke-XXX-default-pool-XXX The node was low on resource: memory. Container my-container was using 1700040Ki, which exceeds its request of 0.
Normal Killing 1h kubelet, gke-XXX-default-pool-XXX Killing container with id docker://my-container:Need to kill Pod
</code></pre>
<p>I tried to grant it more memory by adding the following to my deployment <code>yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
...
containers:
- name: my-container
image: my-container:latest
...
resources:
requests:
memory: "3Gi"
</code></pre>
<p>However, it failed to deploy:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x5 over 13s) default-scheduler 0/3 nodes are available: 3 Insufficient memory.
Normal NotTriggerScaleUp 0s cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>The deployment requests only one container.</p>
<p>I'm using <code>GKE</code> with autoscaling, the nodes in the default (and only) pool have 3.75 GB memory.</p>
<p>From trial and error, I found that the maximum memory I can request is "2Gi". Why can't I utilize the full 3.75 of a node with a single pod? Do I need nodes with bigger memory capacity?</p>
| Mugen | <p>Even though the node has 3.75 GB of total memory, it is very likely that the allocatable capacity is not the full 3.75 GB.</p>
<p>Kubernetes reserves some capacity for system services, to avoid containers consuming too many resources on the node and affecting the operation of those system services.</p>
<p>From the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>Kubernetes nodes can be scheduled to Capacity. Pods can consume all the available capacity on a node <strong>by default</strong>. This is an issue because nodes typically run quite a few system daemons that power the OS and Kubernetes itself. Unless resources are set aside for these system daemons, pods and system daemons compete for resources and lead to resource starvation issues on the node.</p>
</blockquote>
<p>Because you are using GKE, which does not use the defaults, running the following command will show how much <strong>allocatable</strong> resource you have on the node:</p>
<p><code>kubectl describe node [NODE_NAME] | grep Allocatable -B 4 -A 3</code></p>
<p>From the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable" rel="noreferrer">GKE docs</a>:</p>
<blockquote>
<p>Allocatable resources are calculated in the following way:</p>
<p>Allocatable = Capacity - Reserved - Eviction Threshold</p>
<p>For memory resources, GKE reserves the following:</p>
<ul>
<li>25% of the first 4GB of memory</li>
<li>20% of the next 4GB of memory (up to 8GB)</li>
<li>10% of the next 8GB of memory (up to 16GB)</li>
<li>6% of the next 112GB of memory (up to 128GB)</li>
<li>2% of any memory above 128GB</li>
</ul>
<p>GKE reserves an additional 100 MiB memory on each node for kubelet eviction.</p>
</blockquote>
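<p>As a rough, hedged calculation for a 3.75 GB node (numbers are approximate, and system pods consume part of the allocatable memory on top of this):</p>
<pre><code>Capacity:                         ~3.75 GB
Reserved (25% of the first 4 GB): ~0.94 GB
Eviction threshold:                0.10 GB (100 MiB)
Allocatable ≈ 3.75 - 0.94 - 0.10 ≈ 2.7 GB
</code></pre>
<p>That is why a single pod requesting 3Gi never fits on these nodes, while 2Gi does.</p>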
<p>As the error message suggests, scaling the cluster will not solve the problem, because each node's capacity is limited to X amount of memory and the POD needs more than that.</p>
| Diego Mendes |
<p>I have deployed some simple services as a proof of concept: an nginx web server patched with <a href="https://stackoverflow.com/a/8217856/735231">https://stackoverflow.com/a/8217856/735231</a> for high performance.</p>
<p>I also edited <code>/etc/nginx/conf.d/default.conf</code> so that the line <code>listen 80;</code> becomes <code>listen 80 http2;</code>.</p>
<p>I am using the Locust distributed load-testing tool, with a class that swaps the <code>requests</code> module for <code>hyper</code> in order to test HTTP/2 workloads. This may not be optimal in terms of performance, but I can spawn many locust workers, so it's not a huge concern.</p>
<p>For testing, I spawned a cluster on GKE of 5 machines, 2 vCPU, 4GB RAM each, installed Helm and the charts of these services (I can post them on a gist later if useful).</p>
<p>I tested Locust with min_time=0 and max_time=0 so that it spawned as many requests as possible; with 10 workers against a single nginx instance.</p>
<p>With 10 workers, 140 "clients" total, I get ~2.1k requests per second (RPS).</p>
<pre><code>10 workers, 260 clients: I get ~2.0k RPS
10 workers, 400 clients: ~2.0k RPS
</code></pre>
<p>Now, I try to scale horizontally: I spawn 5 nginx instances and get:</p>
<pre><code>10 workers, 140 clients: ~2.1k RPS
10 workers, 280 clients: ~2.1k RPS
20 workers, 140 clients: ~1.7k RPS
20 workers, 280 clients: ~1.9k RPS
20 workers, 400 clients: ~1.9k RPS
</code></pre>
<p>The resource usage is quite low, as shown by <code>kubectl top pod</code> (this is for 10 workers, 280 clients; nginx is not resource-limited, locust workers are limited to 1 CPU per pod):</p>
<pre><code>user@cloudshell:~ (project)$ kubectl top pod
NAME CPU(cores) MEMORY(bytes)
h2test-nginx-cc4d4c69f-4j267 34m 68Mi
h2test-nginx-cc4d4c69f-4t6k7 27m 68Mi
h2test-nginx-cc4d4c69f-l942r 30m 69Mi
h2test-nginx-cc4d4c69f-mfxf8 32m 68Mi
h2test-nginx-cc4d4c69f-p2jgs 45m 68Mi
lt-master-5f495d866c-k9tw2 3m 26Mi
lt-worker-6d8d87d6f6-cjldn 524m 32Mi
lt-worker-6d8d87d6f6-hcchj 518m 33Mi
lt-worker-6d8d87d6f6-hnq7l 500m 33Mi
lt-worker-6d8d87d6f6-kf9lj 403m 33Mi
lt-worker-6d8d87d6f6-kh7wt 438m 33Mi
lt-worker-6d8d87d6f6-lvt6j 559m 33Mi
lt-worker-6d8d87d6f6-sxxxm 503m 34Mi
lt-worker-6d8d87d6f6-xhmbj 500m 33Mi
lt-worker-6d8d87d6f6-zbq9v 431m 32Mi
lt-worker-6d8d87d6f6-zr85c 480m 33Mi
</code></pre>
<p>I ran this test on GKE for easier replication, but I have come to the same results in a private-cloud cluster.</p>
<p>Why does it seem that it does not matter how many instances I spawn of a service?</p>
<p><strong>UPDATE</strong>: As per the first answer, I'm adding information on the nodes and on what happens with a single Locust worker.</p>
<pre><code>1 worker, 1 clients: 22 RPS
1 worker, 2 clients: 45 RPS
1 worker, 4 clients: 90 RPS
1 worker, 8 clients: 174 RPS
1 worker, 16 clients: 360 RPS
32 clients: 490 RPS
40 clients: 480 RPS (this seems over max. sustainable clients per worker)
</code></pre>
<p>But above all, it seems that the root problem is that I'm at the limit of capacity:</p>
<pre><code>user@cloudshell:~ (project)$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-sc1-default-pool-cbbb35bb-0mk4 1903m 98% 695Mi 24%
gke-sc1-default-pool-cbbb35bb-9zgl 2017m 104% 727Mi 25%
gke-sc1-default-pool-cbbb35bb-b02k 1991m 103% 854Mi 30%
gke-sc1-default-pool-cbbb35bb-mmcs 2014m 104% 776Mi 27%
gke-sc1-default-pool-cbbb35bb-t6ch 1109m 57% 743Mi 26%
</code></pre>
| ssice | <p>If I understood correctly, you ran the load testing on the same cluster/nodes as your pods. This will definitely have an impact on the overall result; I would recommend you split the clients from the servers onto separate nodes so that one does not affect the other.</p>
<p>From the values you reported, it is clearly visible that the workers are consuming more CPU than the nginx servers.</p>
<p>You should check the following:</p>
<ul>
<li>The host CPU utilization: it might be under high pressure from context switches, because the number of threads is much higher than the number of CPUs available.</li>
<li>A network bottleneck: maybe you could try adding more nodes or increasing the worker capacity (SKU), and split clients from servers.</li>
<li>The clients do not have enough capacity to generate the load: you increased the threads, but the raw limits are the same.</li>
</ul>
<p>You should also test individual server capacity to validate the limit of each server, so you have a baseline to check whether the results are in line with the expected values.</p>
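<p>As a hedged sketch of the first suggestion, you could pin the Locust workers to a dedicated node pool so they stop competing with nginx for CPU (the pool name/label and image are assumptions, not taken from your setup):</p>
<pre><code># assumes a second GKE node pool named "loadgen"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lt-worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: lt-worker
  template:
    metadata:
      labels:
        app: lt-worker
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: loadgen   # keeps the load generators off the nginx nodes
      containers:
      - name: locust-worker
        image: locustio/locust                   # illustrative image
        resources:
          requests:
            cpu: "1"
</code></pre>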
| Diego Mendes |
<h2>Question</h2>
<p>Can I get nginx to call another microservice inside of AKS k8s prior to it routing to the requested api? - the goal being to speed up requests (fewer hops) and simplify build and deployment (fewer services).</p>
<h2>Explanation</h2>
<p>In our currently deployed Azure AKS (Kubernetes) cluster, we have an additional service I was hoping to replace with nginx. It's a routing microservice that calls out to an identity API prior to doing the routing.</p>
<p>The reason is a common one, I'd imagine: we receive some kind of authentication token via some pre-defined header(s) (the standard <code>Authorization</code> header, or sometimes some bespoke ones used for debug tokens and impersonation), we call from the routing API into the identity API with those pre-defined headers, and get a user identity object in return.</p>
<p>We then pass on this basic user identity object into the microservices so they have quick and easy access to the user and roles.</p>
<p>A brief explanation would be:</p>
<ul>
<li>Nginx receives a request, off-loads SSL and route to the requested service.</li>
<li>Routing API takes the authorization headers and makes a call to the Identity API.</li>
<li>Identity API validates the authorization information and returns either an authorization error (when auth fails), or a serialized user identity object.</li>
<li>Router API either returns there and then, for failure, or routes to the requested microservice (by cracking the request path), and attaches the user identity object as a header.</li>
<li>Requested microservice can then turn that user identity object into a Claims Principal in the case of .NET Core for example.</li>
</ul>
<p><a href="https://i.stack.imgur.com/C7vwo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7vwo.png" alt="k8s with our own Router API"></a></p>
<p>There are obviously options for merging the Router.API and the UserIdentity.API, but keeping the separation of concerns seems like a better move. I'd just like to remove the Router.API, in order to maintain that separation, but get nginx to do that work for me.</p>
| Kieron | <p>ProxyKit (<a href="https://github.com/damianh/ProxyKit" rel="nofollow noreferrer">https://github.com/damianh/ProxyKit</a>) could be a good alternative to nginx - it allows you to easily add custom logic to certain requests (for example I lookup API keys based on a tenant in URL) and you can cache the responses using CacheCow (see a recipe in ProxyKit source)</p>
| Jakub Konecki |
<p>We have a Couchbase cluster deployed on OpenShift. The Couchbase web console opens, but we are not able to add buckets to it. For that, we have to declare the buckets in the YAML file and then redeploy using the OpenShift operator.
Is this the general behavior, or can we add buckets without re-deploying the cluster?</p>
| Gurmeet | <p>Yes, that's the intended behavior. Much of the cluster management, like buckets and adding of nodes, is under the control of the operator.</p>
<p>It is possible to <a href="https://docs.couchbase.com/operator/current/couchbase-cluster-config.html#disablebucketmanagement" rel="nofollow noreferrer">disable bucket management if you have a need to</a>, but the intent is that you'll operate through kubectl.</p>
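<p>A hedged sketch of that setting (the field name is taken from the linked docs page; check the CRD of your exact operator version, and the rest of the spec is elided):</p>
<pre><code>apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: couchbase-cluster
spec:
  disableBucketManagement: true   # buckets are then no longer reconciled by the operator
  ...
</code></pre>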
| Matt Ingenthron |
<p>How does a Kubernetes pod get the IP rather than the container, given that the CNI plugin works at the container level?</p>
<p>How do all containers of the same pod share the same network stack?</p>
| Bharath Thiruveedula | <p>Containers use a feature of the kernel called a <strong><em>virtual network interface</em></strong>. The virtual network interface (let's name it <strong>veth0</strong>) is created and then assigned to a namespace. When a container is created, it is also assigned to a namespace; when multiple containers are created within the same namespace, only a single network interface veth0 will be created.</p>
<p>A POD is just the term used to specify a set of resources and features, one of them is the namespace and the containers running in it.</p>
<p>When you say the POD gets an IP, what actually gets the IP is the veth0 interface. Container apps see veth0 the same way applications outside a container see a single physical network card on a server.</p>
<p>CNI is just the technical specification on how it should work to make multiple network plugins work without changes to the platform. The process above should be the same to all network plugins.</p>
<p>There is a nice explanation in <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727" rel="nofollow noreferrer">this blog post</a></p>
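<p>A quick, hedged way to see this in practice (names are illustrative): run a pod with two containers and check that both report the same eth0 address:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: first
    image: busybox
    command: ["sleep", "3600"]
  - name: second
    image: busybox
    command: ["sleep", "3600"]
EOF

kubectl exec two-containers -c first  -- ip addr show eth0
kubectl exec two-containers -c second -- ip addr show eth0   # same IP: one veth0 per pod, shared by both containers
</code></pre>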
| Diego Mendes |
<p>I'm trying to create a user in a Kubernetes cluster.</p>
<p>I spun up 2 droplets on DigitalOcean using a Terraform script of mine.</p>
<p>Then I logged in to the master node droplet using <code>ssh</code>:</p>
<pre><code>doctl compute ssh droplet1
</code></pre>
<p>Following this, I created a new cluster and a namespace in it:</p>
<pre><code>kubectl create namespace thalasoft
</code></pre>
<p>I created a user role in the <code>role-deployment-manager.yml</code> file:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: thalasoft
name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>and executed the command:</p>
<pre><code>kubectl create -f role-deployment-manager.yml
</code></pre>
<p>I created a role grant in the <code>rolebinding-deployment-manager.yml</code> file:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: deployment-manager-binding
namespace: thalasoft
subjects:
- kind: User
name: stephane
apiGroup: ""
roleRef:
kind: Role
name: deployment-manager
apiGroup: ""
</code></pre>
<p>and executed the command:</p>
<pre><code>kubectl create -f rolebinding-deployment-manager.yml
</code></pre>
<p>Here is my terminal output:</p>
<pre><code>Last login: Wed Dec 19 10:48:48 2018 from 90.191.151.182
root@droplet1:~# kubectl create namespace thalasoft
namespace/thalasoft created
root@droplet1:~# vi role-deployment-manager.yml
root@droplet1:~# kubectl create -f role-deployment-manager.yml
role.rbac.authorization.k8s.io/deployment-manager created
root@droplet1:~# vi rolebinding-deployment-manager.yml
root@droplet1:~# kubectl create -f rolebinding-deployment-manager.yml
rolebinding.rbac.authorization.k8s.io/deployment-manager-binding created
root@droplet1:~#
</code></pre>
<p>Now I'd like to first create a user in the cluster, and then configure the client <code>kubectl</code> with this user so as to operate from my laptop and avoid logging in via <code>ssh</code> to the droplet.</p>
<p>I know I can configure a user in the <code>kubectl</code> client:</p>
<pre><code># Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
# Configure the client with a user credentials
cd;
kubectl config set-credentials stephane --client-certificate=.ssh/id_rsa.pub --client-key=.ssh/id_rsa
</code></pre>
<p>But this is only some client side configuration as I understand.</p>
<p>UPDATE: I could add user credentials with a certificate signed by the Kubernetes CA, running the following commands on the droplet hosting the Kubernetes master node:</p>
<pre><code># Create a private key
openssl genrsa -out .ssh/thalasoft.key 4096
# Create a certificate signing request
openssl req -new -key .ssh/thalasoft.key -out .ssh/thalasoft.csr -subj "/CN=stephane/O=thalasoft"
# Sign the certificate
export CA_LOCATION=/etc/kubernetes/pki/
openssl x509 -req -in .ssh/thalasoft.csr -CA $CA_LOCATION/ca.crt -CAkey $CA_LOCATION/ca.key -CAcreateserial -out .ssh/thalasoft.crt -days 1024
# Configure a cluster in the client
kubectl config set-cluster digital-ocean-cluster --server=https://${MASTER_IP}:6443 --insecure-skip-tls-verify=true
# Configure a user in the client
# Copy the key and the certificate to the client
scp -o "StrictHostKeyChecking no" [email protected]:.ssh/thalasoft.* .
# Configure the client with a user credentials
kubectl config set-credentials stephane --client-certificate=.ssh/thalasoft.crt --client-key=.ssh/thalasoft.key
# Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
</code></pre>
| Stephane | <blockquote>
<p>But this is only some client side configuration as I understand.</p>
<p>What command I should use to create the user ?</p>
</blockquote>
<p>Kubernetes doesn't provide user management. This is handled through x509 certificates that can be signed by your cluster CA.</p>
<p>First, you'll need to create a Key:</p>
<pre><code>openssl genrsa -out my-user.key 4096
</code></pre>
<p>Second, you'll need to create a signing request:</p>
<pre><code>openssl req -new -key my-user.key -out my-user.csr -subj "/CN=my-user/O=my-organisation"
</code></pre>
<p>Third, sign the certificate request:</p>
<pre><code>openssl x509 -req -in my-user.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out my-user.crt -days 500
</code></pre>
<p><code>ca.crt</code> and <code>ca.key</code> is the same cert/key provided by <code>kubeadm</code> or within your master configuration.</p>
<p>You can then give this signed certificate to your user, along with their key, and they can then configure access with:</p>
<pre><code>kubectl config set-credentials my-user --client-certificate=my-user.crt --client-key=my-user.key
kubectl config set-context my-k8s-cluster --cluster=cluster-name --namespace=whatever --user=my-user
</code></pre>
<p>Bitnami provide a great resource that explains all of this:</p>
<p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access</a></p>
| Rawkode |
<p>In Kubernetes, is it possible to add <code>hostPath</code> storage in a StatefulSet? If so, can someone help me with an example?</p>
| Dinesh | <p>Yes but it is definitely for testing purposes.</p>
<p>First you need to create as many Persistent Volumes as you need:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-001
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-002
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data02"
...
</code></pre>
<p>Afterwards, add this <code>volumeClaimTemplates</code> entry to your StatefulSet:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: my-hostpath-volume
spec:
storageClassName: manual
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
selector:
matchLabels:
type: local
</code></pre>
<p>Another solution is using the <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/docs/README.md" rel="noreferrer">hostpath dynamic provisioner</a>. You do not have to create the PVs in advance, but this remains a "proof-of-concept solution" as well, and you will have to build and deploy the provisioner in your cluster.</p>
| Jcs |
<p>I have one config file for kube-apiserver.</p>
<pre><code>KUBE_APISERVER_OPTS="
--logtostderr=true
--v=4
--etcd-servers=https://172.16.0.2:2379,https://172.16.0.3:2379
--bind-address=172.16.0.2
--secure-port=6443
--advertise-address=172.16.0.2
--allow-privileged=true
--service-cluster-ip-range=10.0.0.0/24
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction
--authorization-mode=RBAC,Node --enable-bootstrap-token-auth
--token-auth-file=/k8s/kubernetes/cfg/token.csv
--service-node-port-range=30000-50000
--tls-cert-file=/k8s/kubernetes/ssl/server.pem
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem
--client-ca-file=/k8s/kubernetes/ssl/ca.pem
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem
--etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
</code></pre>
<p>Then I add system unit like this:</p>
<pre><code>[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
</code></pre>
<p>But when I start up kube-apiserver, it shows me the following config error:</p>
<pre><code>error: invalid authentication config: parse error on line 1, column 82: extraneous or missing " in quoted-field
</code></pre>
<p>Could you tell me where my error is? Thanks very much!
<a href="https://i.stack.imgur.com/clRId.png" rel="nofollow noreferrer">error hints</a></p>
| jiang li | <p><code>systemd</code> environment files must be formatted correctly, which includes escaping the end of your lines if you wish to do multi line values:</p>
<pre><code>APISERVER=" \
--arg-1=2 \
--arg-2=3 \
--arg-3=4 \
"
</code></pre>
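<p>After fixing the quoting/escaping, a routine way to confirm systemd parses the file and the API server starts (nothing here is specific to this particular error):</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
journalctl -u kube-apiserver -f        # the parse error should no longer appear
</code></pre>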
| Rawkode |
<p>I have Tomcat as a Docker image.
I have 3 XML/property files needed to bring up the WAR in my Tomcat.
I need to write an init container which will: </p>
<ol>
<li>have a script [shell or python]</li>
<li>create a volume and mount it to the main container</li>
<li>copy the property files from my local system to the mounted
volume.</li>
<li>then finish, so that the app container starts after it.</li>
</ol>
<p>for example:
on my local I have the following:</p>
<pre><code>/work-dir tree
βββ bootstrap.properties
βββ index.html
βββ indexing_configuration.xml
βββ repository.xml
βββ wrapper.sh
</code></pre>
<p>The init container should run a script <code>wrapper.sh</code> to copy these<br>
files into the mounted volume on the app container,
which is <code>/usr/share/jack-configs/</code>.</p>
| Tuhin Subhra Mandal | <p>You have to create a volume and mount it on both containers. In the init container you run the script to copy the files to the mounted volume.</p>
<p>Instead of using a local file, I would suggest you use blob storage to copy your files over; it will make things much simpler.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">These</a> docs show how to do what you want.</p>
<p>An example YAML is the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
# These containers are run during pod initialization
initContainers:
- name: install
image: busybox
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://kubernetes.io
volumeMounts:
- name: workdir
mountPath: "/work-dir"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
<p>To accomplish what you want, you have to change the <code>command</code> in the init container to execute your script; this bit I leave for you to try.</p>
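<p>For the property-file case specifically, here is a hedged sketch (the ConfigMap name is an assumption, created e.g. with <code>kubectl create configmap app-configs --from-file=work-dir/</code>) that ships the files in a ConfigMap and has the init container copy them into the shared volume before Tomcat starts:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-config
spec:
  volumes:
  - name: configs          # shared between the init container and the app container
    emptyDir: {}
  - name: config-source    # assumption: ConfigMap holding bootstrap.properties, repository.xml, etc.
    configMap:
      name: app-configs
  initContainers:
  - name: copy-configs
    image: busybox
    command: ["sh", "-c", "cp /source/* /usr/share/jack-configs/"]
    volumeMounts:
    - name: config-source
      mountPath: /source
    - name: configs
      mountPath: /usr/share/jack-configs
  containers:
  - name: tomcat
    image: tomcat
    volumeMounts:
    - name: configs
      mountPath: /usr/share/jack-configs
</code></pre>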
<p>PS: If you really want to copy from a local (node) filesystem, you need to mount another volume to the init container and copy from one volume to the other.</p>
| Diego Mendes |
<p>I am fairly new to networkpolicies on Calico. I have created the following NetworkPolicy on my cluster:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: nginxnp-po
namespace: default
spec:
podSelector:
matchLabels:
run: nginxnp
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
acces: frontend
ports:
- port: 80
</code></pre>
<p>This is how I read it: All pods that have the selector <code>run=nginxnp</code> are only accessible on port 80 from every pod that has the selector <code>access=frontend</code>.</p>
<p>Here is my nginx pod (with a running nginx in it):</p>
<pre><code>$ kubectl get pods -l run=nginxnp
NAME READY STATUS RESTARTS AGE
nginxnp-9b49f4b8d-tkz6q 1/1 Running 0 36h
</code></pre>
<p>I created a busybox container like this:</p>
<pre><code>$ kubectl run busybox --image=busybox --restart=Never --labels=access=frontend -- sleep 3600
</code></pre>
<p>I can see that it matches the selector <code>access=frontend</code>:</p>
<pre><code>$ kubectl get pods -l access=frontend
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 6m30s
</code></pre>
<p>However when I exec into the busybox pod and try to wget the nginx pod, the connection is still refused.</p>
<p>I also tried setting an egress rule that allows the traffic the other way round, but this didn't do anything either. As I understood network policies: when no rule is set, nothing is blocked. Hence, when I set no egress rule, egress should not be blocked.</p>
<p>If I delete the networkpolicy it works. Any pointers are highly appreciated.</p>
| stiller_leser | <p>There is a typo in the NetworkPolicy template <code>acces: frontend</code> should be <code>access: frontend</code></p>
<pre><code> ingress:
- from:
- podSelector:
matchLabels:
acces: frontend
</code></pre>
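<p>With the label fixed, the ingress block becomes (only the typo is changed; everything else is the policy from the question):</p>
<pre><code>  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: frontend
</code></pre>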
| Diego Mendes |
<p>I have 3 Linux VMs on my MBP, and all 3 VMs can share the same disk on the MBP (I have no NFS). K8S can dispatch the docker images to K8S nodes. When I kill the process, it seems to be restarted on the <em>same</em> node. I am pretty sure the other node has the same docker image installed, and I guess it is limited by the .yaml file, which binds to the same PVC and PV on that node.</p>
<p>If so, how can I configure my .yaml file, especially for PV and PVC, so that when the process is killed, K8S can dispatch it from the node where the process got killed to the other node?</p>
<p>Thanks,
Derek</p>
| Derek Ma | <p>I don't believe this is possible. The PV is bound to a node, as that's where it exists; so if your pod has a PVC bound to that PV, it will always be scheduled on that node.</p>
<p>You'd need to use a different provider, such as Ceph/RBD, in order to maintain freedom of movement for PVs/PVCs.</p>
<p>Maybe Rook.io would be something useful for you to experiment with :)</p>
| Rawkode |
<p>In my K8s cluster, I have two Deployments in the same namespace. One Deployment is for the Postgres database and the other Deployment is for Tomcat. The Tomcat should be accessible from outside, hence I have configured a "NodePort" Service, and for internal communication I have created a "ClusterIP" Service exposing the port of Postgres (i.e. 5432). Once everything is deployed, I want the Tomcat pod to communicate with the Postgres pod. But when I "curl postgres-service:5432" from the Tomcat pod, I get a "Connection refused" message. Is there any misconfiguration?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: tomcat-service
namespace: application
labels:
app: application-tomcat
spec:
type: NodePort
ports:
- name: tomcat-port
targetPort: 80
port: 80
selector:
app: application-tomcat
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
namespace: application
labels:
app: application-postgres
spec:
ports:
- port: 5432
name: postgres-port
selector:
app: application-postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: application-tomcat-deployment
namespace: application
labels:
app: application-tomcat
spec:
replicas: 1
selector:
matchLabels:
app: application-tomcat
template:
metadata:
labels:
app: application-tomcat
spec:
containers:
- name: application-container
image: tomcat
command:
- sleep
- "infinity"
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: application
name: application-postgres-deployment
labels:
app: application-postgres
spec:
replicas: 1
selector:
matchLabels:
app: application-postgres
template:
metadata:
labels:
app: application-postgres
spec:
containers:
- name: postgres
image: postgres
command:
- sleep
- "infinity"
ports:
- containerPort: 5432
name: postgredb
</code></pre>
<p>Postgres pod is listening on port '5432' and database is running.</p>
<pre><code>Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN -
tcp6 0 0 :::5432 :::* LISTEN -
</code></pre>
<p>Resources in the Namespace</p>
<pre><code>$ kubectl get all -n application
NAME READY STATUS RESTARTS AGE
pod/application-postgres-deployment-694869cd5d-wrhzr 1/1 Running 0 9m9s
pod/application-tomcat-deployment-6db75ffb6d-ds8fr 1/1 Running 0 9m9s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-service ClusterIP 10.32.0.207 <none> 5432/TCP 9m9s
service/tomcat-service NodePort 10.32.0.59 <none> 80:31216/TCP 9m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/application-postgres-deployment 1/1 1 1 9m9s
deployment.apps/application-tomcat-deployment 1/1 1 1 9m9s
NAME DESIRED CURRENT READY AGE
replicaset.apps/application-postgres-deployment-694869cd5d 1 1 1 9m9s
replicaset.apps/application-tomcat-deployment-6db75ffb6d 1 1 1 9m9s
</code></pre>
| Dusty | <p>You have overridden the default <code>ENTRYPOINT</code> of the postgres image by specifying <code>deployment.spec.template.spec.containers[0].command</code>. So now the only process that runs inside the pod of the deployment <code>application-postgres-deployment</code> is <code>sleep infinity</code>, and postgres is not running. Removing the command field and adding either the <code>POSTGRES_PASSWORD</code> or the <code>POSTGRES_HOST_AUTH_METHOD=trust</code> environment variable should fix the issue. Use the following manifest for the postgres deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-postgres-deployment
  namespace: application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-postgres
  template:
    metadata:
      labels:
        app: application-postgres
spec:
containers:
- name: postgres
image: postgres
env:
- name: POSTGRES_PASSWORD
value: admin
ports:
- containerPort: 5432
name: postgredb
</code></pre>
<p>You have to set the env variable <code>POSTGRES_PASSWORD</code> or <code>POSTGRES_HOST_AUTH_METHOD=trust</code>. Without this, the pod will crashloop with the following error message:</p>
<pre><code>Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
See PostgreSQL documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
</code></pre>
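<p>To double-check connectivity once that is fixed, a hedged verification from inside the Tomcat pod (the pod name is a placeholder, and this relies on bash's <code>/dev/tcp</code> being available in the tomcat image):</p>
<pre><code>kubectl exec -n application <tomcat-pod-name> -- bash -c \
  'timeout 3 bash -c "cat < /dev/null > /dev/tcp/postgres-service/5432" && echo "port open" || echo "connection refused"'
</code></pre>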
| livinston |
<p>I have an application consisting of a frontend, a backend and a database.
At the moment the application is running on a Kubernetes cluster.
The frontend, backend and database each run in their own Pod, communicating via services.</p>
<p>My consideration is to put all these application parts (frontend, backend and DB) in one Pod, so I can make a Helm chart of it, and for every new customer I only have to change the values.</p>
<p>The question is whether this is a good solution or not to be recommended.</p>
| Raphael G. | <p>No, it is a bad idea, this is why:</p>
<ul>
<li>First, the DB is a stateful container. When you update any of the components, you have to take down all containers in the POD; even a small front-end update will take down everything and the application will be unavailable.</li>
<li>Say you run multiple replicas of this pod to avoid the issue mentioned above: this makes it extremely hard to scale the application, because every container gets scaled as a copy, when you most likely need only the FE or the BE to scale. Also, creating multiple replicas of a database, depending on how it replicates the data, can make it slower, and you have to consider backup and restore of the data in case of failures.</li>
<li>In the same example, multiple replicas will make the PODs consume more resources than you actually need.</li>
</ul>
<p>If you just want to deploy the resources without much customization, you could deploy them into separate namespaces, add policies to prevent one namespace from talking to another, and deploy the raw YAML there, only taking care to use config maps to load the different configuration for each.</p>
<p>If you want just a simple templating and deployment solution, you can use <a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">kustomize</a>.</p>
<p>If you want the complex setup and management provided by Helm, you could define all pods in the chart; an example is the <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">Prometheus</a> chart.</p>
| Diego Mendes |
<p>I'm trying to set up a Let's Encrypt SSL certificate using cert-manager.
I have successfully deployed cert-manager via Helm and am stuck at configuring <code>ingress.yaml</code>.</p>
<pre><code>$ sudo kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/master/docs/tutorials/quick-start/example/ingress.yaml
</code></pre>
<p>I've got this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: kuard
namespace: default
spec:
rules:
- host: example.example.com
http:
paths:
- backend:
serviceName: kuard
servicePort: 80
path: /
tls:
- hosts:
- example.example.com
secretName: quickstart-example-tls
</code></pre>
<p>So I just replaced hosts from example.com to my external IP and got this:</p>
<pre><code>A copy of your changes has been stored to "/tmp/kubectl-edit-qx3kw.yaml"
The Ingress "kuard" is invalid: spec.rules[0].host: Invalid value: must be a DNS name, not an IP address
</code></pre>
<p>Is there any way to set it up using just my external IP? I haven't yet chosen the domain name for my app and want to use just the plain IP for demoing and playing around.</p>
| Igniter | <p>No. You cannot use an IP address for the Ingress. To use an IP address, you'd need to configure it to point to your worker nodes and create a NodePort Service, which will allow you to browse to <code>http://IP:NODEPORT</code>.</p>
| Rawkode |
<p>We use device mapper storage driver.
This is probably more of a docker than k8s question.</p>
<p>Is there a way to determine, for example, where this mount is coming from?</p>
<blockquote>
<p>/opt/dsx/ibm-data-platform/docker/devicemapper/mnt/b1127f21d5fd96b2ac862624d80b928decc1d60b87ec64d98430f69f360d3cee/rootfs/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.39.x86_64/jre/lib/rt.jar</p>
</blockquote>
<p>You see <code>devicemapper/mnt/b1127f21d5fd96b2ac862624d80b928decc1d60b87ec64d98430f69f360d3cee</code> as part of the path.. </p>
<p>We use a commercial product that has 67 different images bundled in.
The particular mount above has a very old Java; we'd like to know which image that Docker mount is coming from.</p>
<p>Thanks!</p>
| Tagar | <p>You can ask Docker for a list of containers that have that volume mounted:</p>
<p><code>docker container ls --filter=volume=<name of volume></code></p>
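<p>If the volume filter turns up nothing (the path in the question looks like a devicemapper layer mount rather than a named volume), a hedged brute-force alternative is to grep the raw inspect output of every container for that hash:</p>
<pre><code>HASH=b1127f21d5fd96b2ac862624d80b928decc1d60b87ec64d98430f69f360d3cee
for c in $(docker ps -aq); do
  if docker inspect "$c" | grep -q "$HASH"; then
    docker inspect --format '{{.Name}} -> {{.Config.Image}}' "$c"
  fi
done
</code></pre>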
| Rawkode |
<p>When there's more than one search result from the <code>/</code> filter command, how do you navigate to the next item? Basically I'm looking for the F3 (next search result) equivalent in k9s. The commands listed <a href="https://k9scli.io/topics/commands/" rel="noreferrer">here</a> do not seem to include what I'm looking for...</p>
| Davita | <p>Narain's answer works when searching in a list of resources but when looking at yaml the interaction is slightly different:</p>
<ol>
<li>Press <code>/</code></li>
<li>Type search term, press enter</li>
<li>Press <code>n</code> to go to next, <code>shift-n</code> to go to previous match</li>
</ol>
<p>Edit: I realize now that this is shown in the header. I suppose this explanation is good for other blind bats like myself.</p>
<p><a href="https://i.stack.imgur.com/3av2P.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3av2P.png" alt="enter image description here" /></a></p>
| worldsayshi |
<p>I just deployed my Docker image to Azure AKS and created an nginx ingress controller. My image has the SSL certificate and handles SSL itself, so I need a passthrough route to my container.</p>
<p>When I navigate to <em><a href="https://just-poc.live" rel="nofollow noreferrer">https://just-poc.live</a></em>, the famous nginx 502 Bad Gateway page is displayed, as shown below.</p>
<p>Apparently, nginx couldn't find a route to send the HTTPS traffic to.</p>
<p>What should I do to make the nginx controller route the traffic to my <strong>socket-poc</strong> deployment?</p>
<p><a href="https://i.stack.imgur.com/HjfZ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjfZ0.png" alt="enter image description here" /></a></p>
<p><strong>nginx ingress controller</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-world-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /(.*)
pathType: Prefix
backend:
service:
name: socket-poc
port:
number: 8081
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: socket-poc
spec:
replicas: 1
selector:
matchLabels:
app: socket-poc
template:
metadata:
labels:
app: socket-poc
spec:
containers:
- name: socket-poc
image: myownacrrepo.azurecr.io/socket:8081
env:
- name: TOOLBAR_COLOR
value: "green"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 300m
memory: 512Mi
ports:
- containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: socket-poc
spec:
type: ClusterIP
ports:
- port: 8081
selector:
app: socket-poc
</code></pre>
<p><code>kubectl get services</code> displays below;</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aks-helloworld-one ClusterIP 10.0.34.79 <none> 80/TCP 57m
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.62 20.93.213.132 80:31262/TCP,443:30706/TCP 35m
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.177.29 <none> 443/TCP 35m
socket-poc ClusterIP 10.0.64.248 <none> 8081/TCP 69m
</code></pre>
<p><code>kubectl describe ingress hello-world-ingress</code> displays like this;</p>
<pre><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: hello-world-ingress
Namespace: ingress-basic
Address: 20.93.213.132
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/(.*) socket-poc:8081 (10.244.1.18:8081)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/ssl-passthrough: true
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 19m (x4 over 35m) nginx-ingress-controller Scheduled for sync
Normal Sync 19m (x4 over 35m) nginx-ingress-controller Scheduled for sync
</code></pre>
| killjoy | <p>The <code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code> annotation was missing. The 502 error is gone!</p>
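<p>For reference, a sketch of where that annotation goes in the Ingress from the question (only the annotations block is shown; the rest stays the same):</p>
<pre><code>  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # the missing line
</code></pre>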
| killjoy |
<p>I have a Couchbase cluster on K8s with operator 1.2, and today I see the following error continuously:</p>
<p>IP address seems to have changed. Unable to listen on 'ns_1@couchbase-cluster-couchbase-cluster-0001.couchbase-cluster-couchbase-cluster.default.svc'. (POSIX error code: 'nxdomain') (repeated 3 times)</p>
| Vishnu Gopal Singhal | <p>The βIP address changeβ message is an alert message generated by Couchbase Server. The server checks for this situation as follows: it tries to listen on a free port on the interface that is the nodeβs address. </p>
<p>It does this every 3 seconds. If the host name of the node canβt be resolved you get an nxdomain error which is the most common reason that users see this alert message. </p>
<p>However, the alert would also fire if the user stopped the server, renamed the host and restarted - a much more serious configuration error that we would want to alert the user to right away. Because this check runs every three seconds, if you have any flakiness in your DNS you are likely to see this alert message every now and then. </p>
<p>As long as the DNS glitch doesnβt persist for long (a few seconds) there probably wonβt be any adverse issues. However, it is an indication that you may want to take a look at your DNS to make sure itβs reliable enough to run a distributed system such as Couchbase Server against. In the worst case, DNS that is unavailable for a significant length of time could result in lack of availability or auto failover. </p>
<p>PS: Thanks to Dave Finlay, who actually answered this question for me.</p>
| deniswsrosa |
<p>I have 4 k8s pods, having set the replicas of the Deployment to 4.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
replicas: 4
...
</code></pre>
<p>The PODs will get items from a database and consume them; the items in the database have a column <code>class_name</code>.</p>
<p>Now I want each pod to get only one <code>class_name</code>'s items:
for example, <code>pod1</code> only gets items whose <code>class_name</code> equals <code>class_name_1</code>, and <code>pod2</code> only gets items whose <code>class_name</code> equals <code>class_name_2</code>...</p>
<p>So I want to pass a different <code>class_name</code> as an environment variable to different Deployment PODs. Can I define this in the Deployment's YAML file?</p>
<p>Or is there any other way to achieve my goal (something other than a Deployment in k8s)?</p>
| hxidkd | <p>I would not recommend this approach, but the closest thing to what you want is using a StatefulSet and using the pod name as the index.</p>
<p>When you deploy a stateful set, the pods will be named after their statefulset name, in the following sample:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kuard
labels:
app: kuard
spec:
type: NodePort
ports:
- port: 8080
name: web
selector:
app: kuard
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kuard
spec:
serviceName: "kuard"
replicas: 3
selector:
matchLabels:
app: kuard
template:
metadata:
labels:
app: kuard
spec:
containers:
- name: kuard
image: gcr.io/kuar-demo/kuard-amd64:1
ports:
- containerPort: 8080
name: web
</code></pre>
<p>The pods created by the statefulset will be named as:</p>
<pre><code>kuard-0
kuard-1
kuard-2
</code></pre>
<p>This way you could either name the StatefulSet according to the class, i.e. <code>class-name</code>, so the pod created will be <code>class-name-0</code> (replacing the <code>_</code> with <code>-</code>), or just strip the name to get the index at the end.</p>
<p>To get the name just read the environment variable <code>HOSTNAME</code></p>
<p>This naming is consistent, so you can make sure you always have 0, 1, 2, 3 after the name. And if the <code>2</code> goes down, it will be recreated.</p>
<p>Like I said, I would not recommend this approach, because you tie the infrastructure to your code, and you also can't scale (if needed), because each instance is unique and newly added instances would get new ids.</p>
<p>A better approach would be to use one Deployment for each class and pass the proper value as an environment variable.</p>
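<p>A hedged sketch of that recommendation (names and image are illustrative): one Deployment per class, differing only in the environment variable, which also keeps each consumer independently scalable:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-class-name-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer-class-name-1
  template:
    metadata:
      labels:
        app: consumer-class-name-1
    spec:
      containers:
      - name: consumer
        image: my-consumer:latest        # illustrative image
        env:
        - name: CLASS_NAME
          value: "class_name_1"          # class_name_2, ... in the other Deployments
</code></pre>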
| Diego Mendes |
<p>I want to create boilerplate for any k8s object.</p>
<p>For instance, the <code>deployment</code> object boilerplate can be generated by using <code>kubectl</code>: </p>
<p><code>kubectl run --dry-run -o yaml ...</code> </p>
<p>This will generate the yaml configuration file of the deployment object. I can redirect this to a file and modify the fields I need.</p>
<p>But how about objects other than Deployment? What about CronJob? Is there any way to generate a boilerplate config file for the CronJob object (or any other k8s object, for that matter)?</p>
| Tran Triet | <p>While <code>kubectl create object-type -o yaml</code> will give you the very basics, it doesn't normally cover much of the spec.</p>
<p>Instead, I prefer to fetch existing objects and modify:</p>
<p><code>kubectl get configmap configMapName -o yaml > configmap.yaml</code></p>
<p>Strip away everything you don't need, including generated fields; and you're good to go. This step probably requires a solid understanding of what to expect in each YAML.</p>
<p>EDIT://</p>
<p>I just realised there's <code>--export</code> when using this approach that strips generated fields for you :)</p>
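<p>For the CronJob case from the question, the create-with-dry-run route also works (flag syntax may vary slightly between kubectl versions):</p>
<pre><code>kubectl create cronjob my-job --image=busybox --schedule="*/5 * * * *" --dry-run=client -o yaml > cronjob.yaml
</code></pre>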
| Rawkode |
<p>So I'm trying to create a bash script to pass the IP array needed to make the inventory file for Ansible.
The official docs say that this is achieved through</p>
<pre><code>declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
</code></pre>
<p>However, in a bash script <code>CONFIG_FILE</code> is set up as a variable, so it stops the inventory file from being created, as the variable is not passed into the Python file.</p>
<p>I have tried the following to pass the variable to the Python file in an attempt to create the inventory file:</p>
<pre><code>declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=kubespray/inventory/mycluster/hosts.yaml
python3 kubespray/contrib/inventory_builder/inventory.py ${IPS[@]}
</code></pre>
<p>which results in</p>
<pre><code>DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node
DEBUG: adding host node3 to group kube-node
Traceback (most recent call last):
File "kubespray/contrib/inventory_builder/inventory.py", line 431, in <module>
sys.exit(main())
File "kubespray/contrib/inventory_builder/inventory.py", line 427, in main
KubesprayInventory(argv, CONFIG_FILE)
File "kubespray/contrib/inventory_builder/inventory.py", line 116, in __init__
self.write_config(self.config_file)
File "kubespray/contrib/inventory_builder/inventory.py", line 120, in write_config
with open(self.config_file, 'w') as f:
FileNotFoundError: [Errno 2] No such file or directory: './inventory/sample/hosts.yaml'
</code></pre>
<p>I have also tried</p>
<pre><code>declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=kubespray/inventory/mycluster/hosts.yaml
${CONFIG_FILE} python3 kubespray/contrib/inventory_builder/inventory.py ${IPS[@]}
</code></pre>
<p>which results in</p>
<pre><code>-bash: kubespray/inventory/mycluster/hosts.yaml: No such file or directory
</code></pre>
<p>which is understandable, but the idea is to have the file created by the Python script.</p>
<p>Is it possible to get the bash script working so that it performs the actions set out by</p>
<pre><code>CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
</code></pre>
<p>?</p>
<p><em><strong>UPDATE</strong></em></p>
<p>So after some tinkering I came to the conclusion that this is probably due to my pip virtual environment as my script tries to create one before running the commands so for more context</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
echo "setting up virtual environment"
sleep 2
sudo apt-get install python3-venv -y
python3 -m venv tutorial-env
source tutorial-env/bin/activate
echo "installing pip requirements"
sudo pip3 install -r kubespray/requirements.txt
cp -rfp kubespray/inventory/sample kubespray/inventory/mycluster
declare -a IPS=(IP1 IP2 IP3)
echo "${IPS[@]}"
CONFIG_FILE=kubespray/inventory/mycluster/hosts.yaml python3 kubespray/contrib/inventory_builder/inventory.py ${IPS[@]}
cat kubespray/inventory/mycluster/hosts.yaml
sudo ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "~/.ssh/id_rsa" --become-user="provision" kubespray/cluster.yml
</code></pre>
<p>if I remove the lines</p>
<pre><code>python3 -m venv tutorial-env
source tutorial-env/bin/activate
</code></pre>
<p>Then the script works as intended, well, to some degree, since ideally it should run in the virtual env. Apologies for the badly worded question.</p>
| user3700919 | <p>The issue comes from the <code>sudo</code> in your script.</p>
<p>You create a virtualenv and activate it for the local session. <br />
But then you change to root context to install the requirements, which are installed at system-level, not virtualenv level. <br />
Then you execute Python again in the virtualenv context, without the requirements installed there, and it fails.</p>
<p>This explains why it works without the virtualenv: you are then using the system-level installed requirements.</p>
<p>To solve the problem, just drop the <code>sudo</code> when you are working in the virtualenv, both to install the requirements and to execute Ansible.</p>
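<p>A hedged sketch of the relevant part of the script with the <code>sudo</code> dropped inside the venv (everything else unchanged):</p>
<pre><code>python3 -m venv tutorial-env
source tutorial-env/bin/activate
pip3 install -r kubespray/requirements.txt        # no sudo: installs into tutorial-env
declare -a IPS=(IP1 IP2 IP3)
CONFIG_FILE=kubespray/inventory/mycluster/hosts.yaml \
  python3 kubespray/contrib/inventory_builder/inventory.py "${IPS[@]}"
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml ... --become   # run from the venv; use --become instead of sudo
</code></pre>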
| zigarn |
<p>Is there a way to tell Kubernetes what pods to kill before or after a downscale? For example, suppose that I have 10 replicas and I want to downscale them to 5, but I want certain replicas to be alive and others to be killed after the downscale. Is that possible?</p>
| Matheus Melo | <p>While it's not possible to selectively choose which pod is killed, you can prevent what you're really concerned about, which is the killing of pods that are in the midst of processing tasks. This requires you do two things:</p>
<ol>
<li>Your application should be able to listen for and handle SIGTERM events, which Kubernetes sends to pods before it kills them. In your case, your app would handle SIGTERM by finishing any in-flight tasks then exiting.</li>
<li>You set the <code>terminationGracePeriodSeconds</code> on the pod to something greater than the longest time it takes for the longest task to be processed. Setting this property extends the period of time between k8s sending the SIGTERM (asking your application to finish up), and SIGKILL (forcefully terminating).</li>
</ol>
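<p>A minimal hedged sketch of the second point (the values and image are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      terminationGracePeriodSeconds: 300   # longer than your longest task
      containers:
      - name: worker
        image: my-worker:latest            # must trap SIGTERM, finish in-flight work, then exit
</code></pre>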
| Grant David Bachman |
<p>I have 2 questions, as I am currently trying to learn Minikube and now want to install it:</p>
<p>1- Which driver is preferable for Minikube (KVM or Docker)? Does one have some sort of advantage over the other?</p>
<p>2- Is it possible to install and run Minikube inside a VM managed by KVM?</p>
| usmangt87 | <p>1 - There is no "better" or "worse". Using Docker is the default and therefore the most widely supported option.
2 - Yes, it is possible to run Minikube inside a VM.</p>
| Stefan Papp |
<p>I'm trying a simple microservices app on a cloud Kubernetes cluster. This is the Ingress yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-nginx-nginx-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: auth-svc
port:
number: 5000
rules:
- host: "somehostname.xyz"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: auth-svc
port:
number: 5000
</code></pre>
<p><strong>The problem:</strong><br />
When I use this URL, I'm able to access the auth service: <code>http://somehostname.xyz:31840</code>. However, if I use <code>http://somehostname.xyz</code>, I get a <em>"This site canβt be reached somehostname.xyz refused to connect."</em> error.<br />
The auth service sends GET requests to other services too, and I'm able to see the response from those services if I use:<br />
<code>http://somehostname.xyz:31840/go</code> or <code>http://somehostname.xyz:31840/express</code>. But again, these work only if the nodeport <code>31840</code> is used.</p>
<p><strong>My questions:</strong></p>
<ul>
<li><p>What typically causes such a problem, where I can access the service
using the hostname and nodeport, but it won't work without supplying the
nodeport?</p>
</li>
<li><p>Is there a method to test this in a different way to figure out where
the problem is?</p>
</li>
<li><p>Is it a problem with the Ingress or Auth namespace? Is it a problem
with the hostname in Flask? Is it a problem with the Ingress
controller? How do I debug this?</p>
</li>
</ul>
<p>These are the results of <code>kubectl get all</code> and other commands.</p>
<pre><code>NAME READY STATUS RESTARTS
pod/auth-flask-58ccd5c94c-g257t 1/1 Running 0
pod/ingress-nginx-nginx-ingress-6677d54459-gtr42 1/1 Running 0
NAME TYPE EXTERNAL-IP PORT(S)
service/auth-svc ClusterIP <none> 5000/TCP
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
NAME READY UP-TO-DATE AVAILABLE
deployment.apps/auth-flask 1/1 1 1
deployment.apps/ingress-nginx-nginx-ingress 1/1 1 1
NAME DESIRED CURRENT READY
replicaset.apps/auth-flask-58ccd5c94c 1 1 1
replicaset.apps/ingress-nginx-nginx-ingress-6677d54459 1 1 1
NAME CLASS HOSTS ADDRESS PORTS
ingress-nginx-nginx-ingress <none> somehostname.xyz 172.xxx.xx.130 80
</code></pre>
<p>Describing ingress also seems normal.</p>
<pre><code>kubectl describe ingress ingress-nginx-nginx-ingress
Name: ingress-nginx-nginx-ingress
Namespace: default
Address: 172.xxx.xx.130
Default backend: auth-svc:5000 (10.x.xx.xxx:5000)
Rules:
Host Path Backends
---- ---- --------
somehostname.xyz
/ auth-svc:5000 (10.x.xx.xxx:5000)
Annotations: kubernetes.io/ingress.class: nginx
</code></pre>
<p>This is the code of Auth.</p>
<pre><code>import requests
from flask import Flask
app = Flask(__name__)
@app.route('/')
def indexPage():
return ' <!DOCTYPE html><html><head><meta charset="UTF-8" />\
<title>Microservice</title></head> \
<body><div style="text-align: center;">Welcome to the Auth page</div></body></html>'
@app.route('/go')
def getGoJson():
return requests.get('http://analytics-svc:8082/info').content
@app.route('/express')
def getNodeResponse():
return requests.get('http://node-svc:8085/express').content
if __name__ == '__main__':
app.run(debug=True, host="0.0.0.0")
</code></pre>
<p>and Auth's Dockerfile:</p>
<pre><code>FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
</code></pre>
<p>The part of docker-compose yaml for auth:</p>
<pre><code>version: "3.3"
services:
auth:
build: ./auth/
image: nav9/auth-flask:v1
ports:
- "5000:5000"
</code></pre>
<p>Auth's Kubernetes manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-flask
spec:
selector:
matchLabels:
any-name: auth-flask
template:
metadata:
labels:
any-name: auth-flask
spec:
containers:
- name: auth-name
image: nav9/auth-flask:v1
imagePullPolicy: Always
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: auth-svc
spec:
# type: ClusterIP
ports:
- targetPort: 5000
port: 5000
selector:
any-name: auth-flask
</code></pre>
| Nav | <p>The solution has three parts:</p>
<ol>
<li><p>Use <code>kubectl get all</code> to find out the running ingress service:</p>
<pre><code>NAME TYPE EXTERNAL-IP PORT(S)
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
</code></pre>
</li>
</ol>
<p>Copy the EXTERNAL-IP of the service (in this case 172.xxx.xx.130).</p>
<ol start="2">
<li><p>Add a DNS A record named <code>*.somehostname.xyz</code> for the cloud cluster, and use the IP address <code>172.xxx.xx.130</code>.</p>
</li>
<li><p>When accessing the hostname via the browser, make sure that <code>http</code> is used instead of <code>https</code>.</p>
</li>
</ol>
| Nav |
<p>How can I get a list of pods that are not linked to any service?</p>
<p>Let's say I have pods:</p>
<p><code>Svc1-green-xyz</code> and <code>svc1-blue-lmn</code></p>
<p>Service <code>svc1</code> is served by <code>svc1-green-xyz</code>, while <code>svc1-blue-lmn</code> is a prior version of the same service and is not used.</p>
<p><em>I want to select all such unused pods that are not serving any service and delete them. How can this be done? Is there a helm command that can be used?</em></p>
| lr-pal | <p>This is possible, but very hacky. Pods and services aren't really <em>linked</em>, so much as services use <em>selectors</em> to determine which pods they should target. What's really happening is that services keep track of a list of endpoints they need to forward traffic to. So, you could theoretically get a list of all endpoints for a service with <code>kubectl get endpoints</code>, filter based on IP address, and remove all pods whose IPs are not in that list. If you're going through all this, though, you're probably doing something wrong.</p>
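<p>A rough sketch of that approach with <code>kubectl</code> and plain shell (it assumes the default namespace and is meant as an illustration, not a hardened script):</p>
<pre><code># collect every pod IP currently referenced by any Service's Endpoints
kubectl get endpoints -o jsonpath='{.items[*].subsets[*].addresses[*].ip}' \
  | tr ' ' '\n' | sort -u > /tmp/endpoint-ips.txt

# print pods whose IP is not backing any Service
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}' \
  | while read -r name ip; do
      grep -qxF "$ip" /tmp/endpoint-ips.txt || echo "not backing any service: $name"
    done
</code></pre>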
| Grant David Bachman |
<p>Hi, I keep getting this error when using Ansible via Kubespray and I am wondering how to overcome it.</p>
<pre><code>
TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)] ********************************************************************************************************************************************************************************************************
task path: /home/dc/xcp-projects/kubespray/roles/bootstrap-os/tasks/main.yml:50
<192.168.10.55> (1, b'\x1b[1;31m==== AUTHENTICATING FOR org.freedesktop.hostname1.set-hostname ===\r\n\x1b[0mAuthentication is required to set the local host name.\r\nMultiple identities can be used for authentication:\r\n 1. test\r\n 2. provision\r\n 3. dc\r\nChoose identity to authenticate as (1-3): \r\n{"msg": "Command failed rc=1, out=, err=\\u001b[0;1;31mCould not set property: Connection timed out\\u001b[0m\\n", "failed": true, "invocation": {"module_args": {"name": "node3", "use": null}}}\r\n', b'Shared connection to 192.168.10.55 closed.\r\n')
<192.168.10.55> Failed to connect to the host via ssh: Shared connection to 192.168.10.55 closed.
<192.168.10.55> ESTABLISH SSH CONNECTION FOR USER: provision
<192.168.10.55> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="provision"' -o ConnectTimeout=10 -oStrictHostKeyChecking=no -o ControlPath=/home/dc/.ansible/cp/c6d70a0b7d 192.168.10.55 '/bin/sh -c '"'"'rm -f -r /home/provision/.ansible/tmp/ansible-tmp-1614373378.5434802-17760837116436/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.10.56> (0, b'', b'')
fatal: [node2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"name": "node2",
"use": null
}
},
"msg": "Command failed rc=1, out=, err=\u001b[0;1;31mCould not set property: Method call timed out\u001b[0m\n"
}
</code></pre>
<p>my inventory file is as follows</p>
<pre><code>all:
hosts:
node1:
ansible_host: 192.168.10.54
ip: 192.168.10.54
access_ip: 192.168.10.54
node2:
ansible_host: 192.168.10.56
ip: 192.168.10.56
access_ip: 192.168.10.56
node3:
ansible_host: 192.168.10.55
ip: 192.168.10.55
access_ip: 192.168.10.55
children:
kube-master:
hosts:
node1:
node2:
kube-node:
hosts:
node1:
node2:
node3:
etcd:
hosts:
node1:
node2:
node3:
k8s-cluster:
children:
kube-master:
kube-node:
calico-rr:
hosts: {}
</code></pre>
<p>I also have a file which provision the users in the following manner</p>
<pre><code>- name: Add a new user named provision
user:
name: provision
create_home: true
shell: /bin/bash
password: "{{ provision_password }}"
groups: sudo
append: yes
- name: Add a new user named dc
user:
name: dc
create_home: true
shell: /bin/bash
password: "{{ provision_password }}"
groups: sudo
append: yes
- name: Add provision user to the sudoers
copy:
dest: "/etc/sudoers.d/provision"
content: "provision ALL=(ALL) NOPASSWD: ALL"
- name: Add provision user to the sudoers
copy:
dest: "/etc/sudoers.d/dc"
content: "dc ALL=(ALL) NOPASSWD: ALL"
- name: Disable Root Login
lineinfile:
path: /etc/ssh/sshd_config
regexp: '^PermitRootLogin'
line: "PermitRootLogin no"
state: present
backup: yes
notify:
- Restart ssh
</code></pre>
<p>I have run the ansible command in the following manner</p>
<pre><code>ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" kubespray/cluster.yml -vvv
</code></pre>
<p>as well as</p>
<pre><code>ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" --become-user="provision" kubespray/cluster.yml -vv
</code></pre>
<p>Both yield the same error, and interestingly privilege escalation seems to succeed at earlier points.</p>
<p>after reading this article
<a href="https://askubuntu.com/questions/542397/change-default-user-for-authentication">https://askubuntu.com/questions/542397/change-default-user-for-authentication</a>
I have decided to add the users to the sudo group but the error still persists</p>
<p>Looking at the position in the main.yml file suggested by the error, it seems this is the code possibly causing the issue:</p>
<pre><code># Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
setup:
gather_subset: '!all'
filter: ansible_*
- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)
hostname:
name: "{{ inventory_hostname }}"
when:
- override_system_hostname
- ansible_os_family not in ['Suse', 'Flatcar Container Linux by Kinvolk', 'ClearLinux'] and not is_fedora_coreos
</code></pre>
<p>The OS of the hosts is Ubuntu 20.04.2 server.
Is there anything more I can do?</p>
| user3700919 | <p>From Kubespray documentation:</p>
<pre class="lang-sh prettyprint-override"><code># Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
</code></pre>
<p>As stated, the <code>--become</code> is mandatory: it allows privilege escalation for most of the system modifications (like setting the hostname) that Kubespray performs.</p>
<p>With <code>--user=provision</code> you're just setting the SSH user, but it will need privilege escalation anyway.
With <code>--become-user=provision</code> you're just saying that privilege escalation will escalate to the 'provision' user (but you would need <code>--become</code> to do the privilege escalation).
In both cases, unless the 'provision' user has root permissions (not sure putting it in the <code>root</code> group is enough), it won't be sufficient.</p>
<p>For the user 'provision' to be enough, you need to make sure that it can perform a <code>hostnamectl <some-new-host></code> without being asked for authentication.</p>
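<p>In other words, re-running your own command with <code>--become</code> added (everything else unchanged) should get past this step:</p>
<pre><code>ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml \
  --user="provision" --become --become-user=root \
  --ssh-extra-args="-oStrictHostKeyChecking=no" \
  --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" \
  kubespray/cluster.yml
</code></pre>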
| zigarn |
<p>I removed other versions of cert-manager. After that I installed the new version using Helm.</p>
<p>Installation works fine.</p>
<p>But when I use the command:</p>
<pre><code>$ kubectl get orders,challenges,clusterissuers
Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2
Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2
Error from server: request to convert CR from an invalid group/version: cert-manager.io/v1alpha2
</code></pre>
<p>The CRDs:</p>
<pre><code>Name: orders.acme.cert-manager.io
Namespace:
Labels: app=cert-manager
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cert-manager
app.kubernetes.io/version=v1.8.2
helm.sh/chart=cert-manager-v1.8.2
Annotations: cert-manager.io/inject-ca-from-secret: cert-manager/cert-manager-webhook-ca
meta.helm.sh/release-name: cert-manager
meta.helm.sh/release-namespace: cert-manager
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
....
Last Transition Time: 2022-06-24T15:25:23Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: 2022-06-24T15:25:23Z
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
</code></pre>
<p>I cannot find the CRs that are still using cert-manager.io/v1alpha2 and acme.cert-manager.io/v1alpha2.</p>
| Lucas Borges | <p>First of all, I suggest backing up all your objects (certs, orders, issuers, clusterissuers, etc., with <a href="https://velero.io/" rel="nofollow noreferrer">velero</a> for example)!</p>
<p><a href="https://cert-manager.io/docs/installation/upgrading/remove-deprecated-apis/" rel="nofollow noreferrer">Cert-manger documentation</a> suggests using the cert-manager cli as:</p>
<pre><code>cmctl upgrade migrate-api-version
</code></pre>
<p>You may need <code>--skip-stored-version-check</code> if you already tried to fix CRD api manually (<a href="https://github.com/cert-manager/cert-manager/issues/3944#issuecomment-848742996" rel="nofollow noreferrer">like described in this issue</a>):</p>
<pre><code>cmctl upgrade migrate-api-version --skip-stored-version-check
</code></pre>
<p>Finally, if it's still failing with the same message, <a href="https://cert-manager.io/docs/installation/helm/" rel="nofollow noreferrer">install</a> version 1.6.3 (if the CRDs were installed manually, upgrade them to 1.6.3 as well) and repeat the command <code>cmctl upgrade migrate-api-version</code>.</p>
<p>Lastly, you can upgrade minor versions one by one (1.7, 1.8, etc.) <a href="https://cert-manager.io/docs/installation/upgrading/" rel="nofollow noreferrer">as recommended</a>.</p>
| Zied |
<p>I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.</p>
<p>Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using <code>kops validate cluster</code>, it fails with the following error:</p>
<blockquote>
<p>unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host</p>
</blockquote>
<p>I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.</p>
| Kalpana Sundaram | <p>From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname <code>api.ucla.dt-api-k8s.com</code> is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).</p>
<p>A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer. </p>
| Grant David Bachman |
<p>I'm kinda new here, so please be gentle with me. </p>
<p>I've inherited an old (ish) kops install procedure using Ansible scripts, which has a specific version of the "kope.io" image within the Instance Group creation </p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: null
labels:
kops.k8s.io/cluster: {{ k8s_cluster_name }}
name: master-{{ vpc_region }}a
spec:
associatePublicIp: false
image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
machineType: "{{ master_instance_type }}"
maxSize: 1
minSize: 1
{% if use_spot %}
maxPrice: "{{ spot_price }}"
{% endif %}
nodeLabels:
kops.k8s.io/instancegroup: master-{{ vpc_region }}a
role: Master
subnets:
- {{ vpc_region }}a-private-subnet
</code></pre>
<p>As you can see the line <code>image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08</code> pins me to a specific k8s version. </p>
<p>I want to rebuild with a newer version, but I'm not sure if I still need to specify this image, and if I do which image should I use?</p>
<p>I'd like to at least update this to 1.9.11, but ideally I think I should be going to the newest stable version (1.13.0?), though I know a <strong>lot</strong> has changed since then, so it's likely things will break.</p>
<p>There is so much information from a Google search for this, but much of it is confusing, conflicting, or outdated. Any pointers much appreciated.</p>
| Steve Button | <p>According to <a href="https://github.com/kubernetes/kops/blob/master/docs/images.md" rel="nofollow noreferrer">kops documentation</a> you can specify an image and that will be used to provision the AMI that will build your instance group.</p>
<p>You can find out the latest <code>kope.io</code> images and their respective kubernetes versions at <a href="https://github.com/kubernetes/kops/blob/master/channels/stable" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/channels/stable</a></p>
<p>I'm not sure if you can work with different kope.io/k8s-x.xx versions than the ones you are provisioning, or if kops enforces the restrictions that are stated in the stable channel, but you can see that the different kope.io images should be configured to the different Kubernetes versions.</p>
<p>You should try your infrastructure in a test environment just to be safe and not lose data. You should keep in mind that if you need to use hostPath-based mountpoints, you should probably migrate those to the new cluster or use some sort of backup mechanism.</p>
<p>In any case, take a look at the <a href="https://github.com/kubernetes/kops#kubernetes-release-compatibility" rel="nofollow noreferrer">kops compatibility matrix</a> and see which kops version you should use for the upgrade you want. You may prefer to do upgrades to interim versions so that you can both upgrade the cluster and kops itself until you are up-to-date, in order to use procedures that have probably been more tested :)</p>
| ssice |
<p>I try to use the master api to update resources.</p>
<p>In 1.2 to update a deployment resource I'm doing <code>kubectl apply -f new updateddeployment.yaml</code></p>
<p>How to do the same action with the api? </p>
| ant31 | <p>This API is not really convincingly designed, since it forces us to reimplement such basic stuff at the client side...</p>
<p>Anyway, here is my attempt to reinvent the hexagonal wheel in Python...</p>
<h2>Python module kube_apply</h2>
<p>Usage is like <code>kube_apply.fromYaml(myStuff)</code></p>
<ul>
<li>can read strings or opened file streams (via lib Yaml)</li>
<li>handles yaml files with several concatenated objects</li>
<li>implementation is <em>rather braindead</em> and first attempts
to insert the resource. If this fails, it tries a patch,
and if this also fails, it <em>deletes</em> the resource and
inserts it anew.</li>
</ul>
<p>File: <code>kube_apply.py</code></p>
<pre><code>#!/usr/bin/python3
# coding: utf-8
# __________ ________________________________________________ #
# kube_apply - apply Yaml similar to kubectl apply -f file.yaml #
# #
# (C) 2019 Hermann Vosseler <[email protected]> #
# This is OpenSource software; licensed under Apache License v2+ #
# ############################################################### #
'''
Utility for the official Kubernetes python client: apply Yaml data.
While still limited to some degree, this utility attempts to provide
functionality similar to `kubectl apply -f`
- load and parse Yaml
- try to figure out the object type and API to use
- figure out if the resource already exists, in which case
  it needs to be patched or replaced altogether.
- otherwise just create a new resource.
Based on inspiration from `kubernetes/utils/create_from_yaml.py`
@since: 2/2019
@author: Ichthyostega
'''
import re
import yaml
import logging
import kubernetes.client
def runUsageExample():
''' demonstrate usage by creating a simple Pod through default client
'''
logging.basicConfig(level=logging.DEBUG)
#
# KUBECONFIG = '/path/to/special/kubecfg.yaml'
# import kubernetes.config
# client = kubernetes.config.new_client_from_config(config_file=KUBECONFIG)
# # --or alternatively--
# kubernetes.config.load_kube_config(config_file=KUBECONFIG)
fromYaml('''
kind: Pod
apiVersion: v1
metadata:
name: dummy-pod
labels:
blow: job
spec:
containers:
- name: sleepr
image: busybox
command:
- /bin/sh
- -c
- sleep 24000
''')
def fromYaml(rawData, client=None, **kwargs):
''' invoke the K8s API to create or replace an object given as YAML spec.
@param rawData: either a string or an opened input stream with a
YAML formatted spec, as you'd use for `kubectl apply -f`
@param client: (optional) preconfigured client environment to use for invocation
@param kwargs: (optional) further arguments to pass to the create/replace call
@return: response object from Kubernetes API call
'''
for obj in yaml.load_all(rawData):
createOrUpdateOrReplace(obj, client, **kwargs)
def createOrUpdateOrReplace(obj, client=None, **kwargs):
''' invoke the K8s API to create or replace a kubernetes object.
The first attempt is to create(insert) this object; when this is rejected because
of an existing object with same name, we attempt to patch this existing object.
As a last resort, if even the patch is rejected, we *delete* the existing object
and recreate from scratch.
@param obj: complete object specification, including API version and metadata.
@param client: (optional) preconfigured client environment to use for invocation
@param kwargs: (optional) further arguments to pass to the create/replace call
@return: response object from Kubernetes API call
'''
k8sApi = findK8sApi(obj, client)
try:
res = invokeApi(k8sApi, 'create', obj, **kwargs)
logging.debug('K8s: %s created -> uid=%s', describe(obj), res.metadata.uid)
except kubernetes.client.rest.ApiException as apiEx:
if apiEx.reason != 'Conflict': raise
try:
# asking for forgiveness...
res = invokeApi(k8sApi, 'patch', obj, **kwargs)
logging.debug('K8s: %s PATCHED -> uid=%s', describe(obj), res.metadata.uid)
except kubernetes.client.rest.ApiException as apiEx:
if apiEx.reason != 'Unprocessable Entity': raise
try:
# second attempt... delete the existing object and re-insert
logging.debug('K8s: replacing %s FAILED. Attempting deletion and recreation...', describe(obj))
res = invokeApi(k8sApi, 'delete', obj, **kwargs)
logging.debug('K8s: %s DELETED...', describe(obj))
res = invokeApi(k8sApi, 'create', obj, **kwargs)
logging.debug('K8s: %s CREATED -> uid=%s', describe(obj), res.metadata.uid)
except Exception as ex:
message = 'K8s: FAILURE updating %s. Exception: %s' % (describe(obj), ex)
logging.error(message)
raise RuntimeError(message)
return res
def patchObject(obj, client=None, **kwargs):
k8sApi = findK8sApi(obj, client)
try:
res = invokeApi(k8sApi, 'patch', obj, **kwargs)
logging.debug('K8s: %s PATCHED -> uid=%s', describe(obj), res.metadata.uid)
return res
except kubernetes.client.rest.ApiException as apiEx:
if apiEx.reason == 'Unprocessable Entity':
message = 'K8s: patch for %s rejected. Exception: %s' % (describe(obj), apiEx)
logging.error(message)
raise RuntimeError(message)
else:
raise
def deleteObject(obj, client=None, **kwargs):
k8sApi = findK8sApi(obj, client)
try:
res = invokeApi(k8sApi, 'delete', obj, **kwargs)
logging.debug('K8s: %s DELETED. uid was: %s', describe(obj), res.details and res.details.uid or '?')
return True
except kubernetes.client.rest.ApiException as apiEx:
if apiEx.reason == 'Not Found':
logging.warning('K8s: %s does not exist (anymore).', describe(obj))
return False
else:
message = 'K8s: deleting %s FAILED. Exception: %s' % (describe(obj), apiEx)
logging.error(message)
raise RuntimeError(message)
def findK8sApi(obj, client=None):
''' Investigate the object spec and lookup the corresponding API object
@param client: (optional) preconfigured client environment to use for invocation
        @return: a client instance wired to the appropriate API
'''
grp, _, ver = obj['apiVersion'].partition('/')
if ver == '':
ver = grp
grp = 'core'
# Strip 'k8s.io', camel-case-join dot separated parts. rbac.authorization.k8s.io -> RbacAuthorzation
grp = ''.join(part.capitalize() for part in grp.rsplit('.k8s.io', 1)[0].split('.'))
ver = ver.capitalize()
k8sApi = '%s%sApi' % (grp, ver)
return getattr(kubernetes.client, k8sApi)(client)
def invokeApi(k8sApi, action, obj, **args):
    ''' find a suitable function and perform the actual API invocation.
@param k8sApi: client object for the invocation, wired to correct API version
@param action: either 'create' (to inject a new objet) or 'replace','patch','delete'
@param obj: the full object spec to be passed into the API invocation
@param args: (optional) extraneous arguments to pass
@return: response object from Kubernetes API call
'''
# transform ActionType from Yaml into action_type for swagger API
kind = camel2snake(obj['kind'])
# determine namespace to place the object in, supply default
try: namespace = obj['metadata']['namespace']
except: namespace = 'default'
functionName = '%s_%s' %(action,kind)
if hasattr(k8sApi, functionName):
# namespace agnostic API
function = getattr(k8sApi, functionName)
else:
functionName = '%s_namespaced_%s' %(action,kind)
function = getattr(k8sApi, functionName)
args['namespace'] = namespace
if not 'create' in functionName:
args['name'] = obj['metadata']['name']
if 'delete' in functionName:
from kubernetes.client.models.v1_delete_options import V1DeleteOptions
obj = V1DeleteOptions()
return function(body=obj, **args)
def describe(obj):
return "%s '%s'" % (obj['kind'], obj['metadata']['name'])
def camel2snake(string):
string = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', string)
string = re.sub('([a-z0-9])([A-Z])', r'\1_\2', string).lower()
return string
if __name__=='__main__':
runUsageExample()
</code></pre>
| Ichthyo |
<p>I have a service running in Kubernetes and currently, there are two ways of making GET requests to the REST API.</p>
<p>The first is</p>
<pre><code>kubectl port-forward --namespace test service/test-svc 9090
</code></pre>
<p>and then running</p>
<pre><code>curl http://localhost:9090/sub/path \
-d param1=abcd \
-d param2=efgh \
-G
</code></pre>
<p>For the second one, we do a kubectl proxy:</p>
<pre><code>kubectl proxy --port=8080
</code></pre>
<p>followed by</p>
<pre><code>curl -lk 'http://127.0.0.1:8080/api/v1/namespaces/test/services/test-svc:9090/proxy/sub/path?param1=abcd&param2=efgh'
</code></pre>
<p>Both work nicely. However, my question is: How do we repeat one of these with the Python Kubernetes client (<a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">https://github.com/kubernetes-client/python</a>)?</p>
<p>Many thanks for your support in advance!</p>
<p><strong>Progress</strong></p>
<p>I found a solution that brings us closer to the desired result:</p>
<pre><code>from kubernetes import client, config
config.load_kube_config("~/.kube/config", context="my-context")
api_instance = client.CoreV1Api()
name = 'test-svc' # str | name of the ServiceProxyOptions
namespace = 'test' # str | object name and auth scope, such as for teams and projects
api_response = api_instance.api_client.call_api(
'/api/v1/namespaces/{namespace}/services/{name}/proxy/ping'.format(namespace=namespace, name=name), 'GET',
auth_settings = ['BearerToken'], response_type='json', _preload_content=False
)
print(api_response)
</code></pre>
<p>yet the result is</p>
<pre><code>(<urllib3.response.HTTPResponse object at 0x104529340>, 200, HTTPHeaderDict({'Audit-Id': '1ad9861c-f796-4e87-a16d-8328790c50c3', 'Cache-Control': 'no-cache, private', 'Content-Length': '16', 'Content-Type': 'application/json', 'Date': 'Thu, 27 Jan 2022 15:05:10 GMT', 'Server': 'uvicorn'}))
</code></pre>
<p>Whereas the desired output was</p>
<pre><code>{
"ping": "pong!"
}
</code></pre>
<p>Do you know how to extract it from here?</p>
| tobias | <p>This should be something which uses:</p>
<pre><code>from kubernetes.stream import portforward
</code></pre>
<p>To find which command maps to an API call in Python, you can use:</p>
<pre><code>kubectl -v 10 ...
</code></pre>
<p>For example:</p>
<pre><code>k -v 10 port-forward --namespace znc service/znc 1666
</code></pre>
<p>It spits out a lot of output; the most important output is the API calls it makes:</p>
<pre><code>POST https://myk8s:16443/api/v1/namespaces/znc/pods/znc-57647bb8d8-dcq6b/portforward 101 Switching Protocols in 123 milliseconds
</code></pre>
<p>This allows you to search the code of the python client. For example there is:</p>
<pre><code>core_v1.connect_get_namespaced_pod_portforward
</code></pre>
<p>However, using it is not so straightforward. Luckily, the maintainers include a great example of how to use the <a href="https://github.com/kubernetes-client/python/blob/b313b5e74f7cf222ccc39d8c5bf8a07502bd6db3/examples/pod_portforward.py" rel="nofollow noreferrer">portforward method</a>:</p>
<pre><code># Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Shows the functionality of portforward streaming using an nginx container.
"""
import select
import socket
import time
import six.moves.urllib.request as urllib_request
from kubernetes import config
from kubernetes.client import Configuration
from kubernetes.client.api import core_v1_api
from kubernetes.client.rest import ApiException
from kubernetes.stream import portforward
##############################################################################
# Kubernetes pod port forwarding works by directly providing a socket which
# the python application uses to send and receive data on. This is in contrast
# to the go client, which opens a local port that the go application then has
# to open to get a socket to transmit data.
#
# This simplifies the python application, there is not a local port to worry
# about if that port number is available. Nor does the python application have
# to then deal with opening this local port. The socket used to transmit data
# is immediately provided to the python application.
#
# Below also is an example of monkey patching the socket.create_connection
# function so that DNS names of the following formats will access kubernetes
# ports:
#
# <pod-name>.<namespace>.kubernetes
# <pod-name>.pod.<namespace>.kubernetes
# <service-name>.svc.<namespace>.kubernetes
# <service-name>.service.<namespace>.kubernetes
#
# These DNS name can be used to interact with pod ports using python libraries,
# such as urllib.request and http.client. For example:
#
# response = urllib.request.urlopen(
# 'https://metrics-server.service.kube-system.kubernetes/'
# )
#
##############################################################################
def portforward_commands(api_instance):
name = 'portforward-example'
resp = None
try:
resp = api_instance.read_namespaced_pod(name=name,
namespace='default')
except ApiException as e:
if e.status != 404:
print("Unknown error: %s" % e)
exit(1)
if not resp:
print("Pod %s does not exist. Creating it..." % name)
pod_manifest = {
'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {
'name': name
},
'spec': {
'containers': [{
'image': 'nginx',
'name': 'nginx',
}]
}
}
api_instance.create_namespaced_pod(body=pod_manifest,
namespace='default')
while True:
resp = api_instance.read_namespaced_pod(name=name,
namespace='default')
if resp.status.phase != 'Pending':
break
time.sleep(1)
print("Done.")
pf = portforward(
api_instance.connect_get_namespaced_pod_portforward,
name, 'default',
ports='80',
)
http = pf.socket(80)
http.setblocking(True)
http.sendall(b'GET / HTTP/1.1\r\n')
http.sendall(b'Host: 127.0.0.1\r\n')
http.sendall(b'Connection: close\r\n')
http.sendall(b'Accept: */*\r\n')
http.sendall(b'\r\n')
response = b''
while True:
select.select([http], [], [])
data = http.recv(1024)
if not data:
break
response += data
http.close()
print(response.decode('utf-8'))
error = pf.error(80)
if error is None:
print("No port forward errors on port 80.")
else:
print("Port 80 has the following error: %s" % error)
# Monkey patch socket.create_connection which is used by http.client and
# urllib.request. The same can be done with urllib3.util.connection.create_connection
# if the "requests" package is used.
socket_create_connection = socket.create_connection
def kubernetes_create_connection(address, *args, **kwargs):
dns_name = address[0]
if isinstance(dns_name, bytes):
dns_name = dns_name.decode()
dns_name = dns_name.split(".")
if dns_name[-1] != 'kubernetes':
return socket_create_connection(address, *args, **kwargs)
if len(dns_name) not in (3, 4):
raise RuntimeError("Unexpected kubernetes DNS name.")
namespace = dns_name[-2]
name = dns_name[0]
port = address[1]
if len(dns_name) == 4:
if dns_name[1] in ('svc', 'service'):
service = api_instance.read_namespaced_service(name, namespace)
for service_port in service.spec.ports:
if service_port.port == port:
port = service_port.target_port
break
else:
raise RuntimeError(
"Unable to find service port: %s" % port)
label_selector = []
for key, value in service.spec.selector.items():
label_selector.append("%s=%s" % (key, value))
pods = api_instance.list_namespaced_pod(
namespace, label_selector=",".join(label_selector)
)
if not pods.items:
raise RuntimeError("Unable to find service pods.")
name = pods.items[0].metadata.name
if isinstance(port, str):
for container in pods.items[0].spec.containers:
for container_port in container.ports:
if container_port.name == port:
port = container_port.container_port
break
else:
continue
break
else:
raise RuntimeError(
"Unable to find service port name: %s" % port)
elif dns_name[1] != 'pod':
raise RuntimeError(
"Unsupported resource type: %s" %
dns_name[1])
pf = portforward(api_instance.connect_get_namespaced_pod_portforward,
name, namespace, ports=str(port))
return pf.socket(port)
socket.create_connection = kubernetes_create_connection
# Access the nginx http server using the
# "<pod-name>.pod.<namespace>.kubernetes" dns name.
response = urllib_request.urlopen(
'http://%s.pod.default.kubernetes' % name)
html = response.read().decode('utf-8')
response.close()
print('Status Code: %s' % response.code)
print(html)
def main():
config.load_kube_config()
c = Configuration.get_default_copy()
c.assert_hostname = False
Configuration.set_default(c)
core_v1 = core_v1_api.CoreV1Api()
portforward_commands(core_v1)
if __name__ == '__main__':
main()
</code></pre>
| oz123 |
<p>I have written a few C++ services which have an MQTT client. Based on the message received on the MQTT topic, the C++ service takes some action, like sending an MQTT message to another topic or saving the message to the database, etc.</p>
<p>I have set up a few MQTT brokers in Docker and attached those MQTT brokers to an HA load balancer. All these MQTT brokers are also clustered.</p>
<p>So client 1, connected to broker-1 (through the load balancer), can send a message to client x connected to broker-x, due to the clustering of the MQTT brokers.</p>
<p>So, how can I set up the load balancer for my C++ services, with HA or similar load balancers?</p>
<p>Update: </p>
<p>In the case of HTTP / REST APIs, the request is handled by only one web application at any point in time. But in the case of MQTT, the message will be published, and if I run multiple instances of the same C++ service ABC, then all the instances will process that message. How should I make sure only one service processes the message? I want to establish high availability for the C++ service.</p>
| Kishore Chilakala | <p>This is not possible under MQTT 3.x. The reason is that prior to MQTT 5, every message is sent to every subscriber to that topic, making it very difficult to load balance correctly. <em>Subscribers would need to receive everything and then decide for themselves which messages to discard, leaving those for other subscribers</em>. It's one of the limitations of MQTT 3.x.</p>
<p>There are those who have worked around this by connecting their MQTT broker into an <a href="https://kafka.apache.org/" rel="nofollow noreferrer">Apache Kafka</a> cluster, routing all messages from MQTT to Kafka and then attaching their subscribers (like your c++ services) to Kafka instead of MQTT. Kafka supports the type of load balancing you are asking for.</p>
<hr>
<p><strong>This may be about to change with MQTT 5.0</strong>. There are still a lot of clients and brokers which don't support this. However if both your client and broker support MQTT version 5 then there is a new <sup>1</sup> concept of "<a href="http://mqtt.org/" rel="nofollow noreferrer">Shared Subscriptions</a>":</p>
<blockquote>
<ul>
<li><strong>Shared Subscriptions</strong> β If the message rate on a subscription is high, shared subscriptions can be used to load balance the messages across a number of receiving clients</li>
</ul>
</blockquote>
<p>You haven't stated your client library. But your first steps should be:</p>
<ul>
<li>investigate if both your broker and subscriber support MQTT 5</li>
<li>Check the API for your client to discover how to use subscriber groups</li>
</ul>
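<p>As an illustration only (a Python stand-in for your C++ client, using paho-mqtt and assuming your broker speaks MQTT 5), a shared subscription looks like this; every client subscribing with the same share group name receives only a portion of the messages:</p>
<pre><code>import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # only one member of the "workers" share group receives each message
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id="worker-1", protocol=mqtt.MQTTv5)
client.on_message = on_message
client.connect("broker.example.com", 1883)            # hypothetical broker address
client.subscribe("$share/workers/sensors/#", qos=1)   # "$share/<group>/<topic filter>"
client.loop_forever()
</code></pre>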
<hr>
<p><sup>1</sup> New to MQTT, Kafka already has it.</p>
| Philip Couling |
<p>I'm trying to setup kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to official AWS tutorial. Then I want to run a pod with some app which communicates with Postgres (RDS in different VPC). </p>
<p>But unfortunately the app doesn't connect to the database.</p>
<p>What I have:</p>
<ol>
<li>EKS cluster with its own VPC (CIDR: 192.168.0.0/16)</li>
<li>RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)</li>
<li>Peering connection initiated from the RDS VPC to the EKS VPC</li>
<li>Route table for 3 public subnets of EKS cluster is updated: route with destination 172.30.0.0/16 and target β peer connection from the step #3 is added.</li>
<li>Route table for the RDS is updated: route with destination 192.168.0.0/16 and target β peer connection from the step #3 is added.</li>
<li>The RDS security group is updated, new inbound rule is added: all traffic from 192.168.0.0/16 is allowed</li>
</ol>
<p>After all these steps I execute kubectl command:</p>
<pre><code>kubectl exec -it my-pod-app-6vkgm nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
</code></pre>
<p>Then I connect to one of the 3 nodes and execute a command:</p>
<pre><code>getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
</code></pre>
<p>What did I miss in the EKS setup in order to have access from pods to RDS?</p>
<p><strong>UPDATE:</strong></p>
<p>I tried to fix the problem by <code>Service</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
type: ExternalName
externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
</code></pre>
<p>So I created this service in EKS, and then tried to refer to <code>postgres-service</code> as DB URL instead of direct RDS host address.</p>
<p>This fix does not work :(</p>
| Alex Fruzenshtein | <p>Have you tried to enable "dns propagation" in the peering connection? It looks like you are not getting the internally routable DNS name. You can enable it by going into the settings for the peering connection and checking the box for DNS propagation. I generally do this with all of the peering connections that I control.</p>
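<p>If you prefer the CLI over the console checkbox, the same setting can be toggled with the AWS CLI (the peering connection ID below is a placeholder):</p>
<pre><code>aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
</code></pre>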
| donkeyx |
<p>In Helm Chart, one can define a postStart hook with parameters from the values.yaml file.</p>
<p>If the container dies and is replaced, or is upgraded, will postStart always be called with the same values on each start of a container?</p>
| user1015767 | <p>postStart and preStop are container lifecycle events, so as long as your deployment configuration is not changed, those hooks will be called with the same values on each restart.</p>
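<p>For illustration, a minimal Helm template sketch (the value name <code>postStartCommand</code> is an assumption; whatever values.yaml renders at deploy time is what runs every time a container for this pod starts, including restarts and upgraded replacements):</p>
<pre><code>containers:
  - name: app
    image: "{{ .Values.image }}"
    lifecycle:
      postStart:
        exec:
          # rendered once per release; re-executed on every container start
          command: ["/bin/sh", "-c", {{ .Values.postStartCommand | quote }}]
</code></pre>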
| Mesut |
<p>I am trying to set up Kubernetes in my local environment using Docker. I've built the necessary Docker image with this Dockerfile:</p>
<pre><code>FROM node:9.11.1
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
EXPOSE 3002
CMD [ "npm", "start" ]
</code></pre>
<p>I then pushed this image to my private docker repo on the google cloud repository. Now i can confirm that i can push and pull the image from the cloud repo, so i then built a docker-compose using that repo as the image source file:</p>
<pre><code>version: '3'
services:
redis:
image: redis
ports:
- 6379:6379
networks:
- my-network
mongodb:
image: mongo
ports:
- 27017:27017
volumes:
- ./db:/data/db
networks:
- my-network
my-test-app:
tty: true
image: gcr.io/my-test-app
ports:
- 3002:3002
depends_on:
- redis
- mongodb
networks:
- my-network
volumes:
- .:/usr/src/app
environment:
- REDIS_PORT=6379
- REDIS_HOST=redis
- DB_URI=mongodb://mongodb:27017/
command: bash -c "ls && npm install"
networks:
my-network:
driver: bridge
volumes:
mongodb:
</code></pre>
<p>Then finally building off of that i use Kubernetes kompose to generate my deployment file which looks like this: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.12.0 ()
creationTimestamp: null
labels:
io.kompose.service: my-test-app
name: my-test-app
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: my-test-app
spec:
imagePullSecrets:
- gcr-json-key
containers:
- args:
- bash
- -c
- ls && npm install
env:
- name: DB_URI
value: mongodb://mongodb:27017/
- name: REDIS_HOST
value: redis
- name: REDIS_PORT
value: "6379"
image: gcr.io/my-test-app
name: my-test-app
ports:
- containerPort: 3002
resources: {}
tty: true
volumeMounts:
- mountPath: /usr/src/app
name: my-test-app-claim0
restartPolicy: Always
volumes:
- name: my-test-app-claim0
persistentVolumeClaim:
claimName: my-test-app-claim0
status: {}
</code></pre>
<p>As you can see in the args section of my YAML, I am listing all the files in my directory <code>/usr/src/app</code>. However, in the logs the only file that appears is a single <code>package-lock.json</code> file, which causes the following install command to fail. This error does not occur when I use docker-compose to launch my app, so for some reason only my Kubernetes setup is having trouble. Also, I can confirm that my image does contain a <code>package.json</code> file by running an interactive shell. I'm unsure how to proceed, so any help would be appreciated!</p>
| Christofer Johnson | <p>You are mounting something else over <code>/usr/src/app</code> where <code>package.json</code> is supposed to be located. That hides all the files in there. Remove the <code>volumes</code> and <code>volumeMounts</code> sections.</p>
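<p>For clarity, a minimal sketch of the corrected container section (the same manifest, with the claim mount over <code>/usr/src/app</code> removed so the files baked into the image stay visible):</p>
<pre><code>containers:
  - name: my-test-app
    image: gcr.io/my-test-app
    ports:
      - containerPort: 3002
    # no volumeMounts entry for /usr/src/app
</code></pre>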
| kichik |
<p>We are running a single NodeJS instance in a Pod with a request of 1 CPU, and no limit. Upon load testing, we observed the following:</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
backend-deployment-5d6d4c978-5qvsh 3346m 103Mi
backend-deployment-5d6d4c978-94d2z 3206m 99Mi
</code></pre>
<p>If NodeJS is only running a single thread, how could it be consuming more than 1000m CPU, when running directly on a Node it would only utilize a single core? Is kubernetes somehow letting it borrow time across cores?</p>
| danthegoodman | <p>Although Node.js runs the main application code in a single thread, the Node.js runtime is multi-threaded. Node.js has an internal <a href="https://nodejs.org/en/docs/guides/dont-block-the-event-loop/" rel="nofollow noreferrer">worker pool</a> that is used to run background tasks, including I/O and certain CPU-intensive processing like crypto functions. In addition, if you use the <a href="https://nodejs.org/api/worker_threads.html" rel="nofollow noreferrer">worker_threads</a> facility (not to be confused with the worker pool), then you would be directly accessing additional threads in Node.js.</p>
| pvillela |
<p>We are running hazelcast in embedded mode and the application is running in kubernetes cluster. We are using Kubernetes API for discovery. </p>
<p>It was all working fine and now we just started using <code>envoy</code> as sidecar for SSL. Now for both <code>inbound</code> and <code>outbound</code> on TCP at <code>hazelcast</code> port <code>5701</code> we have enabled TLS in envoy but are yet to do changes for kubernetes API call. </p>
<p>Right now we are getting below Exception :</p>
<blockquote>
<p>"class":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","thread_name":"main","type":"log","data_version":2,"description":"[10.22.69.149]:5701
[dev] [3.9.4] Operation: [get] for kind: [Endpoints] with name:
[servicename] in namespace: [namespace]
failed.","stack_trace":"j.n.ssl.SSLException: Unrecognized SSL
message, plaintext connection?\n\tat
s.s.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)\n\tat
s.s.ssl.InputRecord.read(InputRecord.java:527)\n\tat
s.s.s.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)\n\tat
s.s.s.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)\n\tat
s.s.s.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)\n\tat
s.s.s.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)\n\tat
o.i.c.RealConnection.connectTls(RealConnection.java:281)\n\tat
o.i.c.RealConnection.establishProtocol(RealConnection.java:251)\n\tat
o.i.c.RealConnection.connect(RealConnection.java:151)\n\tat</p>
</blockquote>
<p>Can someone help with the overall changes which should be needed for Hazelcast k8s discovery using APIs with envoy as sidecar ?</p>
| user762421 | <p>You can find an example config below for how to deploy Hazelcast with Envoy sidecar and use it with mTLS. </p>
<p><a href="https://github.com/hazelcast/hazelcast-kubernetes/issues/118#issuecomment-553588983" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes/issues/118#issuecomment-553588983</a></p>
<p>If you want to achieve the same with an embedded architecture, you need to create a headless Kubernetes service besides your microservice's Kubernetes service. Then you need to give the headless service name to the hazelcast-kubernetes plugin's <strong>service-name</strong> parameter.</p>
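<p>A minimal sketch of such a headless service (the service name and the label selector are assumptions; the metadata name is what you would pass as <strong>service-name</strong> to the plugin):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hazelcast-headless
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: my-microservice   # must match the pods embedding Hazelcast
  ports:
    - name: hazelcast
      port: 5701
</code></pre>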
<p>You can find more info on hazelcast-kubernetes plugin <a href="https://github.com/hazelcast/hazelcast-kubernetes/blob/master/README.md" rel="nofollow noreferrer">README.md</a> file. </p>
<p><strong>EDIT:</strong>
Hazelcast-Istio-SpringBoot step-by-step guide can be found <a href="https://github.com/hazelcast-guides/hazelcast-istio" rel="nofollow noreferrer">here</a>. </p>
| Mesut |
<p>Having:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: example-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>And rolebinding:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: example-rolebinding
namespace: default
subjects:
- kind: User
name: example-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: example-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>How can I get the secret token?</p>
<pre><code>token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
</code></pre>
<p>But there is no secret for the user, only the "default-token-xxx".</p>
<p>Do I need to bind a service account, or is the token added to default-token-xxx?</p>
| Chris G. | <p>All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users, and a third subject: Groups. Kubernetes does not have objects (kinds) which represent normal user accounts. Normal users cannot be added to a cluster through an API call. Normal users are typically managed or authenticated through integrations with other authentication protocols such as LDAP, SAML, Azure Active Directory, Kerberos, etc. You can leverage an external identity provider like OIDC to authenticate through a token.</p>
<p>For Service Accounts, as you've correctly noticed, if you don't explicitly create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Kubernetes Service Account</a> in your namespace, you'll only have access to the default service account, which will be <code>default-token-<hash></code>.</p>
<p>A token is not automatically created for a "Normal User", but is automatically created for a Service Account. Service accounts are users managed by the Kubernetes API. They are bound to specific namespaces, and created automatically by the API server or manually through API calls. Service accounts are tied to a set of credentials stored as Secrets.</p>
<p>Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request:</p>
<ul>
<li>Username: a string which identifies the end user. Common values might be kube-admin or [email protected].</li>
<li>UID: a string which identifies the end user and attempts to be more consistent and unique than username.</li>
</ul>
<p>The subject of user authentication is easier to answer if we know which user authentication integration is being used.</p>
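<p>If a managed identity is acceptable in your case, here is a sketch of the service-account route (names reuse your example; on clusters before 1.24 a token Secret is created automatically for the service account, and on newer ones you can mint a token with <code>kubectl create token example-user</code>):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-user
  namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: example-user
    namespace: default
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>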
| Highway of Life |
<p>Various Kubernetes security recommendations tell you to avoid SSH into containers and ask to use kubectl instead. The prime reason quoted is the possibility of escaping to the underlying host resources via SSH into containers. So, I have following specific queries:</p>
<ol>
<li><p>Which features of kubectl prevent you from accessing host resources, and why does SSH carry more risk of accessing host resources compared to kubectl? How is kubectl more secure?</p></li>
<li><p>Can SSH skip the Pod Security policies and access/mount paths on the underlying host which are restricted in pod security policy?</p></li>
<li><p>If SSH into containers is unavoidable, how to secure it in the best possible way?</p></li>
</ol>
| Anuj | <p>If the reason is "you can escape via one and not the other", then I think it comes from somebody who doesn't understand the security mechanisms involved. There are other reasons to prefer <code>kubectl exec</code> over SSH, such as audit logging integrated with everything else in Kubernetes, and easy access revocation, but these are possible to get with SSH too. It's just more work.</p>
<ol>
<li><p>kubectl runs client-side. If there were features in it that would prevent you from escaping, you could just patch them out.</p></li>
<li><p>No, those are on the pod and handled by the underlying kernel. SSH would only get you a shell in the container, just like <code>kubectl exec</code> would.</p></li>
<li><p>Use public-key authentication, make sure to have a strategy for ensuring your software in the container is up-to-date. Think about how you're going to manage the <code>authorized_keys</code> file and revocation of compromised SSH keys there. Consider whether you should lock down access to the port SSH is running on with firewall rules.</p></li>
</ol>
| Tollef Fog Heen |
<p>I am trying to get an alert in Slack from Datadog when a Kubernetes cronjob fails.
I have a query that sends a pod failure notification when a pod is in CrashLoopBackOff, but I am not sure how to update the query for cronjob failures.</p>
<pre><code>max(last_10m):avg:kubernetes_state.container.status_report.count.waiting{reason:crashloopbackoff !pod_phase:succeeded} by {kube_cluster_name,kube_namespace,pod_name,kubernetes.io/cluster/cluster_name}.rollup(avg, 120) >= 1
</code></pre>
| Rajesh174u | <p>Can you use the <code>kubernetes_state.job.completion.failed</code> metric?</p>
<p>For example:</p>
<pre><code>max(last_5m):max:kubernetes_state.job.completion.failed{*} by {kube_cluster_name,kube_namespace,kube_cronjob} >= 1
</code></pre>
<p>Note that there are two different tags, <code>kube_cronjob</code>, which is the name of the cron job, and <code>kube_job</code> which is the name of the instance of the job.</p>
<p>See also <a href="https://docs.datadoghq.com/containers/kubernetes/data_collected/" rel="nofollow noreferrer">the docs</a>.</p>
| Björn Marschollek |
<p>I'm using Docker For Desktop with the built-in Kubernetes cluster. I have installed a <code>Pod</code> that serves resources over HTTP, but I'm not sure how to access it using my browser. I have the following <code>ServiceSpec</code> that correctly routes traffic to the <code>Pod</code>:</p>
<pre><code>spec:
clusterIP: 10.99.132.220
externalTrafficPolicy: Cluster
ports:
- name: myport
nodePort: 31534
port: 8037
protocol: TCP
targetPort: 80
type: LoadBalancer
</code></pre>
<p>And I can see it set up when I query it with <code>kubectl</code>:</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myservice LoadBalancer 10.99.132.220 localhost 8037:31534/TCP 1h
</code></pre>
<p>How do I reach this service using my browser?</p>
| Cory Klein | <p>That service will be available in your browser at <a href="http://localhost:8037" rel="noreferrer">http://localhost:8037</a></p>
<p>Note that the port <code>8037</code> corresponds to the <code>port</code> property on the <code>ServiceSpec</code> object.</p>
<p>If you are unable to reach the service at that URL, then it could be one of several things, including but not limited to:</p>
<ul>
<li>There is another <code>Service</code> in your cluster that has claimed that port. Either delete the other <code>Service</code>, or change the <code>port</code> property to an unclaimed port.</li>
<li>Your <code>Pod</code> is not running and ready. Check <code>kubectl get pods</code>.</li>
</ul>
| Cory Klein |
<p>I have a service running in the namespace istio-system, and I want to connect to it from a pod in a different namespace, say default. My cluster is running on minikube. How can I do this?
I tried myService.istio-system.svc.cluster.local, but it did not work, and I am not sure where this name is picked up from, i.e. from which configuration file. I know how this works in a normal Kubernetes cluster, but not in minikube.
Any help would be appreciated.</p>
<h3>Added by BMW</h3>
<p>The user in fact is asking a <a href="https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">XY Problem</a></p>
<p>Here is the real question, he put in comment.</p>
<p>I want to use the kubectl port-forwarding technique to forward traffic from the external world to a service running inside minikube, so that I can access it from outside. I am trying the command below:</p>
<pre><code>kubectl port-forward --address 0.0.0.0 svc/kiali.istio-system.svc.cluster.local 31000 31000
Error from server (NotFound): services "kiali.istio-system.svc.cluster.local" not found
</code></pre>
| Anurag Gupta | <p>Realized the user is asking an <a href="https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">XY Problem</a>.</p>
<p>I put the answer here:</p>
<pre><code>kubectl -n istio-system port-forward --address 0.0.0.0 svc/kiali 31000 31000
</code></pre>
<p>With <code>-n istio-system</code>, you can nominate the namespace you are working in, and you don't need to care about its domain-name suffix.</p>
<p>Here is the original answer, but still useful for some use cases.</p>
<p>Please reference this:<a href="https://i.stack.imgur.com/jhLGL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jhLGL.png" alt="enter image description here" /></a></p>
<p>So in your case, if cross namespace, you have to use below names:</p>
<pre><code><service_name>.<namespace>
<service_name>.<namespace>.svc
<service_name>.<namespace>.svc.cluster.local
</code></pre>
<p>svc is for service, pod is for pod.</p>
<p>If you need to double-check the last two parts, use <code>CoreDNS</code> as a sample and check its ConfigMap:</p>
<pre><code>master $ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
creationTimestamp: "2020-01-28T11:37:40Z"
name: coredns
namespace: kube-system
resourceVersion: "179"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 0ee90a0b-6c71-4dbf-ac8a-906a5b37ea4f
</code></pre>
<p>that's the configuration file for CoreDNS, and it sets <code>cluster.local</code> as part of the full DNS name.</p>
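<p>To quickly verify that the cross-namespace name resolves (a sketch using the <code>kiali</code> service from the question; <code>busybox:1.28</code> is the image commonly used for DNS debugging):</p>
<pre><code>kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kiali.istio-system.svc.cluster.local
</code></pre>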
| BMW |
<p>I have created a Kubernetes cluster with version 1.19.11. Here, the metrics-server should be installed by default. Now, when I hit the query below,</p>
<p>kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"</p>
<p>it return "Error from server (NotFound): the server could not find the requested resource".
So, Please help me to resolve the issues?.</p>
| bala n | <p>If you are using <code>minikube</code>, try starting with the addon enabled <code>--addons="metrics-server"</code>. For example:</p>
<pre><code>minikube start \
--kubernetes-version=v1.25.2 \
--addons="dashboard" \
--addons="metrics-server" \
--addons="ingress" \
--addons="ingress-dns" \
--feature-gates=EphemeralContainers=true
</code></pre>
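<p>If you are not on minikube, it is worth confirming that the metrics-server Deployment is actually present and serving (it is not installed by default on many cluster setups); a minimal check might be:</p>
<pre><code>kubectl -n kube-system get deployment metrics-server
kubectl top nodes   # should return per-node CPU/memory once the metrics API is up
</code></pre>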
| yucer |
<p>I have set up a private Docker registry inside my Kubernetes cluster. The deployment is as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: registry
labels:
app: registry
spec:
replicas: 1
selector:
matchLabels:
app: registry
template:
metadata:
labels:
app: registry
spec:
volumes:
- name: auth-vol
secret:
secretName: "registry-credentials"
containers:
- image: registry:2
name: registry
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_AUTH
value: "htpasswd"
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: "k8s_user"
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: "/auth/htpasswd"
ports:
- containerPort: 5000
volumeMounts:
- name: auth-vol
mountPath: /auth
</code></pre>
<p>I am routing using the following Ingress</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: registry-ingress
spec:
rules:
- host: "registry.<my domain>"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: registry
port:
number: 80
</code></pre>
<p>Externally I have a load balancer terminating SSL and then forwarding the request to the appropriate ingress port for HTTP traffic. From outside the network, I have no problems pushing/pulling from the registry. However from inside the network, I am getting the following error when I try to deploy something and run <code>kubectl describe pod <pod></code></p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26s default-scheduler Successfully assigned default/server-6df575c99c-ltwqr to k8s-root-default-pool-3de67
Normal BackOff 24s (x2 over 25s) kubelet Back-off pulling image "registry.<mydomain>/server:0.0.1"
Warning Failed 24s (x2 over 25s) kubelet Error: ImagePullBackOff
Normal Pulling 11s (x2 over 25s) kubelet Pulling image "registry.<mydomain>/server:0.0.1"
Warning Failed 11s (x2 over 25s) kubelet Failed to pull image "registry.<mydomain>/server:0.0.1": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.<mydomain>/v2/: x509: certificate is valid for haproxy-controller.default, not registry.<mydomain>.io
Warning Failed 11s (x2 over 25s) kubelet Error: ErrImagePull
</code></pre>
<p>It appears as though the request is hitting the HAProxy Ingress controller certificate rather than going to the outside world and hitting the load balancer's SSL certificate. Is there some better way I should be doing this?</p>
| echappy | <p>I figured this out. Previously I was using <code>kubectl expose deployment/registry</code> to automatically create the service. I found that if I create a NodePort service explicitly, it exposes the registry on a fixed port on all nodes:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: registry
spec:
type: NodePort
selector:
app: registry
ports:
- port: 5000
targetPort: 5000
nodePort: 32500
</code></pre>
<p>This then allowed me to use "localhost:32500" to access the registry on all nodes. I also had to update my deployment to pull the image from "localhost:32500".</p>
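<p>For completeness, pushing and referencing an image through that NodePort might look like this (the image name/tag are just the ones from the question, and the login step is needed because the registry above has htpasswd auth enabled):</p>
<pre><code>docker login localhost:32500
docker tag server:0.0.1 localhost:32500/server:0.0.1
docker push localhost:32500/server:0.0.1
# ...and in the Deployment spec:  image: localhost:32500/server:0.0.1
</code></pre>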
| echappy |
<p>I'm troubleshooting liveness probe failures. I'm able to extract specific entries from k8s events using this approach:</p>
<pre><code>k get events --sort-by=.metadata.creationTimestamp | grep Liveness
</code></pre>
<p>I'd like to get only the pod(s) that are causing the issue.<br>
I'm considering piping to cut, but I'm not sure which delimiter I should use to get the specific column.</p>
<p>Where can I find the delimiter related to that specific k8s resource (Events) used to print out the kubectl output?</p>
<p>Any other suggestion is appreciated</p>
<p>UPDATE
so far these are the best options (w/o using extra tools) satisfying my specific needs:</p>
<pre><code>k get events -o jsonpath='{range .items[*]}{.involvedObject.name}/{.involvedObject.namespace}: {.message}{"\n"}{end}' | grep Liveness
k get events -o custom-columns=POD:.involvedObject.name,NS:.involvedObject.namespace,MGG:.message | grep Liveness
</code></pre>
| Crixo | <p>there is a feature in kubernetes called <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath</a></p>
<p>validate if your jsonpath is correct with this online tool: <a href="https://jsonpath.com/" rel="nofollow noreferrer">https://jsonpath.com/</a></p>
<p>easily go through JSON keys with this online tool, so you needn't manually type the key names any more: <a href="http://jsonpathfinder.com/" rel="nofollow noreferrer">http://jsonpathfinder.com/</a></p>
<p>so your command will be:</p>
<pre><code>k get events --sort-by=.metadata.creationTimestamp -o jsonpath='{ .xxxxxx }'
</code></pre>
<p><a href="https://i.stack.imgur.com/GYtsY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GYtsY.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/PggIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PggIZ.png" alt="enter image description here"></a></p>
| BMW |
<p>I setup a standard GKE cluster including istio. In the logs I find errors among which e.g.:</p>
<pre><code>{
insertId: "xuxoovg5olythg"
logName: "projects/projectname/logs/stderr"
metadata: {β¦}
receiveTimestamp: "2019-01-03T11:39:10.996283280Z"
resource: {
labels: {
cluster_name: "standard-cluster-1"
container_name: "metadata-agent"
location: "europe-west4-a"
namespace_name: "kube-system"
pod_name: "metadata-agent-859mq"
project_id: "myprojectname"
}
type: "k8s_container"
}
severity: "ERROR"
textPayload: "W0103 11:39:06 7fc26c8bf700 api_server.cc:183 /healthz returning 500; unhealthy components: Pods
"
timestamp: "2019-01-03T11:39:06Z"
}
</code></pre>
<p>Is this a problem? What to do with this?</p>
| musicformellons | <p>Currently it is said to be working as intended, as a short-term solution. Possibly will be fixed in the future.</p>
<p>Source: <a href="https://github.com/Stackdriver/metadata-agent/issues/183#issuecomment-426406327" rel="nofollow noreferrer">https://github.com/Stackdriver/metadata-agent/issues/183#issuecomment-426406327</a></p>
| Jakub GocΕawski |
<p><a href="https://i.stack.imgur.com/4MX4U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4MX4U.png" alt="enter image description here" /></a><a href="https://stackoverflow.com/questions/64481963/unknown-flag-export-while-copying-secret-from-one-namespace-to-another-kubect">unknown flag: --export while copying secret from one namespace to another kubectl</a></p>
<p>I solved my problem with the above solution but I don't know why I got this error.</p>
<p>The above answer says the export option was deprecated in Kubernetes 1.14 and removed in 1.18, but I use 1.16.</p>
<p>Secondly, on GCP my cluster started to give a warning at the same time.</p>
<p>Why do I get this error when I should not? Am I looking in the wrong direction?</p>
<p><a href="https://i.stack.imgur.com/CRe8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CRe8Q.png" alt="enter image description here" /></a></p>
| rmznbyk 1 | <p>As of Kubernetes 1.14, --export is deprecated, and the feature was removed in 1.18. You can use get -o yaml without --export. For example, the following command exports the secret config:</p>
<pre><code>kubectl get secret <your secrets> --namespace <your namespace> -o yaml > output.yaml
</code></pre>
| Caner |
<p>I want to mount an Azure Shared Disk to multiple deployments/nodes based on this:
<a href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared</a></p>
<p>So, I created a shared disk in Azure Portal and when trying to mount it to deployments in Kubernetes I got an error:</p>
<blockquote>
<p>"Multi-Attach error for volume "azuredisk" Volume is already used by pod(s)..."</p>
</blockquote>
<p>Is it possible to use Shared Disk in Kubernetes? If so how?
Thanks for tips.</p>
| RemoPar | <p><strong><a href="https://azure.microsoft.com/en-us/blog/announcing-the-general-availability-of-azure-shared-disks-and-new-azure-disk-storage-enhancements/" rel="noreferrer">Yes, you can</a></strong>, and the capability is GA.</p>
<p>An Azure Shared Disk can be mounted as ReadWriteMany, which means you can mount it to multiple nodes and pods. It requires the <a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-csi#shared-disk" rel="noreferrer">Azure Disk CSI driver</a>, and the caveat is that currently only Raw Block volumes are supported, thus the application is responsible for managing the control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device. This means that you mount the raw block device (disk) to a pod container as a <code>volumeDevice</code> rather than a <code>volumeMount</code>.</p>
<p><a href="https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/deploy/example/sharedisk" rel="noreferrer">The documentation examples</a> mostly points to how to create a Storage Class to dynamically provision the static Azure Shared Disk, but I have also created one statically and mounted it to multiple pods on different nodes.</p>
<h3>Dynamically Provision Shared Azure Disk</h3>
<ol>
<li>Create Storage Class and PVC</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-csi
provisioner: disk.csi.azure.com
parameters:
skuname: Premium_LRS # Currently shared disk only available with premium SSD
maxShares: "2"
cachingMode: None # ReadOnly cache is not available for premium SSD with maxShares>1
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-azuredisk
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 256Gi # minimum size of shared disk is 256GB (P15)
volumeMode: Block
storageClassName: managed-csi
</code></pre>
<ol start="2">
<li>Create a deployment with 2 replicas and specify volumeDevices, devicePath in Spec</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: deployment-azuredisk
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
name: deployment-azuredisk
spec:
containers:
- name: deployment-azuredisk
image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
volumeDevices:
- name: azuredisk
devicePath: /dev/sdx
volumes:
- name: azuredisk
persistentVolumeClaim:
claimName: pvc-azuredisk
</code></pre>
<h3>Use a Statically Provisioned Azure Shared Disk</h3>
<p>Using an Azure Shared Disk that has been provisioned through ARM, Azure Portal, or through the Azure CLI.</p>
<ol>
<li>Define a PersistentVolume (PV) that references the DiskURI and DiskName:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: azuredisk-shared-block
spec:
capacity:
storage: "256Gi" # 256 is the minimum size allowed for shared disk
volumeMode: Block # PV and PVC volumeMode must be 'Block'
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
azureDisk:
kind: Managed
diskURI: /subscriptions/<subscription>/resourcegroups/<group>/providers/Microsoft.Compute/disks/<disk-name>
diskName: <disk-name>
cachingMode: None # Caching mode must be 'None'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-azuredisk-managed
spec:
resources:
requests:
storage: 256Gi
volumeMode: Block
accessModes:
- ReadWriteMany
volumeName: azuredisk-shared-block # The name of the PV (above)
</code></pre>
<p>Mounting this PVC is the same for both dynamically and statically provisioned shared disks. Reference the deployment above.</p>
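<p>Once the pods are running, a quick way to confirm the raw block device is attached (using the deployment name and device path from the example above) is:</p>
<pre><code>kubectl exec -it deploy/deployment-azuredisk -- ls -l /dev/sdx   # should show a block device
</code></pre>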
| Highway of Life |
<p>I'm new, and exploring ArgoCD, Helm, Grafana, and Prometheus. I found a cool repo (<a href="https://github.com/naturalett/continuous-delivery" rel="nofollow noreferrer">https://github.com/naturalett/continuous-delivery</a>) on how to deploy all the services using Argo, it's working great, but I'm unable to reach Grafana.</p>
<p>I'm running a k3s on a proxmox server. My proxmox server IP is 192.168.10.250, my k3s server is 192.168.10.137 and my grafana service is on</p>
<pre><code>service/kube-prometheus-stack-grafana ClusterIP 10.43.175.176
</code></pre>
<p>If I expose the service:</p>
<pre><code>kubectl expose service/kube-prometheus-stack-grafana --type=NodePort --target-port=3000 --name=grafana-ext -n argocd
</code></pre>
<p>ArgoCD will delete the NodePort. I assume it is because it's not on the manifest.</p>
<p>The port forward is not working also. I'm not able to reach the Grafana UI:</p>
<pre><code>kubectl port-forward service/kube-prometheus-stack-grafana -n argocd 9092:80
</code></pre>
<p>So my question is, how can I add the Service NodePort to the manifest? It's using helm charts, but how can I achieve it?</p>
| zoe08 | <p><em>(Posted the solution on behalf of the question author in order to move it to the answer space).</em></p>
<p>I was able to solve it by creating a yaml with the NodePort configuration and passing the yaml to the kustomization!</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: grafana-nodeport
namespace: argocd
labels:
release: kube-prometheus-stack
spec:
type: NodePort
selector:
app.kubernetes.io/name: grafana
ports:
- nodePort: 30009
protocol: TCP
port: 80
targetPort: 3000
</code></pre>
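<p>With that manifest committed (so Argo CD does not prune it) and applied, Grafana should be reachable on any node's IP at the fixed port, for example using the k3s node IP from the question:</p>
<pre><code>kubectl -n argocd get svc grafana-nodeport   # confirm port 30009 is assigned
curl -I http://192.168.10.137:30009/         # Grafana typically answers with a redirect to /login
</code></pre>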
| halfer |
<p>I have a working docker image which I am trying to now use on Kubernetes but as I try to run the deployment it never runs. It gets stuck in a crash loop error and I have no way of working out what the logs say because it exits so quickly. I've included my deployment yaml file to see if there is something obviously wrong. </p>
<p>Any help is appreciated.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: newapp
labels:
app: newapp
spec:
ports:
- port: 80
selector:
app: newapp
tier: frontend
type: LoadBalancer
---
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: newapp
labels:
app: newapp
spec:
selector:
matchLabels:
app: newapp
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: newapp
tier: frontend
spec:
containers:
- image: customwebimage
name: newapp
envFrom:
- configMapRef:
name: newapp-config
ports:
- containerPort: 80
imagePullSecrets:
- name: test123
</code></pre>
| Rutnet | <p>You can view the previous logs by adding <code>-p</code></p>
<p><code>kubectl logs -p pod-name</code></p>
<p>I'd delete the Deployment's Pod and try this with a new Pod, which will run 5 times before entering CrashLoopBackOff.</p>
<p>If the error isn't happening during container runtime, then you can describe the pod to check for scheduling / instantiation errors:</p>
<p><code>kubectl describe pod pod-name</code></p>
| Rawkode |
<p>I have a resource yaml file in a folder structure given below</p>
<blockquote>
<p>base</p>
<p>---- first.yaml</p>
<p>main</p>
<p>---- kustomization.yaml</p>
</blockquote>
<p>In kustomization.yaml I am referring the first.yaml as</p>
<blockquote>
<p>resources:</p>
<ul>
<li>../base/first.yaml</li>
</ul>
</blockquote>
<p>But I am getting the following error when I run <code>kubectl apply -f kustomization.yaml</code>:</p>
<pre><code>accumulating resources: accumulating resources from '../base/first.yaml': security; file '../base/first.yaml' is not in or below '../base'
</code></pre>
<p>How can I reference the first.yaml resource in the base folder from the kustomization in the main folder?</p>
| Kabilan R | <p>Kustomize cannot refer to individual resources in parent directories, it can only refer to resources in current or child directories, but it can refer to other Kustomize directories.</p>
<p>The following would be a valid configuration for what you have:</p>
<pre><code>.
βββ base
βΒ Β βββ main
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ resource.yaml
βΒ Β βββ stuff
βΒ Β βββ first.yaml
βΒ Β βββ kustomization.yaml
βββ cluster
βββ kustomization.yaml
</code></pre>
<p>Contents of <code>base/main/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
</code></pre>
<p>Contents of <code>base/stuff/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml
</code></pre>
<p>Contents of <code>cluster/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
- ../base/stuff
</code></pre>
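<p>You can then render or apply the top-level overlay (kustomize support has been built into kubectl since v1.14):</p>
<pre><code>kustomize build cluster/      # just render the manifests
kubectl apply -k cluster/     # or build and apply in one step
</code></pre>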
| Highway of Life |
<p>I'm running <code>kubectl create -f notRelevantToThisQuestion.yml</code></p>
<p>The response I get is:</p>
<blockquote>
<p>Error from server (NotFound): the server could not find the requested
resource</p>
</blockquote>
<p>Is there any way to determine which resource was requested that was not found?</p>
<p><code>kubectl get ns</code> returns</p>
<blockquote>
<p>NAME STATUS AGE<br> default Active 243d<br>
kube-public Active 243d<br> kube-system Active 243d<br></p>
</blockquote>
<p>This is not a cron job.<br>
Client version 1.9<br>
Server version 1.6</p>
<p>This is very similar to <a href="https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1">https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1</a> but my k8s cluster has been deployed correctly (everything's been working for almost a year, I'm adding a new pod now).</p>
| Glen Pierce | <p>To solve this, upgrade the client or downgrade the server so the versions are within the supported skew. In my case I had upgraded the server (new minikube) but forgot to upgrade the client (kubectl), and ended up with these versions.</p>
<pre><code>$ kubectl version --short
Client Version: v1.9.0
Server Version: v1.14.1
</code></pre>
<p>When I'd upgraded client version (in this case to 1.14.2) then everything started to work again.</p>
<p>Instructions on how to install (in your case, upgrade) the client are here: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl</a></p>
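<p>For example, fetching a client matching the server version above might look like this (Linux amd64 assumed):</p>
<pre><code>curl -LO https://dl.k8s.io/release/v1.14.2/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
kubectl version --short
</code></pre>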
| sobi3ch |
<p>I am trying to deploy my Jhipster (v5.5.0) project onto Kubernetes (v1.16.3), but the pod keeps failing with the below logs. Anyone have any ideas?</p>
<p>Here is my YAML that will create the deployment / pod. I have another YAML that creates the PV and PVC.</p>
<pre><code>kind: Deployment
metadata:
name: portal
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
template:
spec:
containers:
- name: portal
image: "portal"
resources:
limits:
cpu: "0.5"
memory: "2048Mi"
requests:
cpu: "0.1"
memory: "64Mi"
imagePullPolicy: IfNotPresent
workingDir: /
securityContext:
runAsNonRoot: true
runAsUser: 950
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
volumes:
- name: portal-db-vol01
persistentVolumeClaim:
claimName: portal-db-pvc-volume01
terminationGracePeriodSeconds: 15
</code></pre>
<p>Below are my logs:</p>
<pre><code>
org.h2.jdbc.JdbcSQLException: Error while creating file "/target" [90062-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.store.fs.FilePathDisk.createDirectory(FilePathDisk.java:274)
....
2020-02-05T21:59:14Z WARN 7 - [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is java.lang.RuntimeException: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.cloud.netflix.zuul.ZuulConfiguration$ZuulFilterConfiguration': Unsatisfied dependency expressed through field 'filters'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'noCachePreFilter' defined in URL [jar:file:/app.war!/WEB-INF/classes!/com/odp/filters/pre/NoCachePreFilter.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'appRepository': Cannot create inner bean '(inner bean)#248deced' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#248deced': Cannot resolve reference to bean 'entityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [com/odp/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.h2.jdbc.JdbcSQLException: Error while creating file "/target" [90062-196]
</code></pre>
| Mike K. | <p>Since the container <code>portal</code> in the pod runs as the non-root user <code>950</code></p>
<pre><code> securityContext:
runAsNonRoot: true
runAsUser: 950
</code></pre>
<p>Please confirm whether this user has permission to create a file/folder in the root directory <code>/</code>. Normally only root has this permission. The only writable folders for non-root users are typically under:</p>
<pre><code>Home directory (~/)
/tmp
/var/tmp
</code></pre>
| BMW |
<p>I see that normally a new image is created (that is, via a Dockerfile), but is it good practice to pass the cert through environment variables, with a script that picks it up and places it inside the container?</p>
<p>Another approach I see is to mount the certificate on a volume.</p>
<p>What would be the best approach to having a single image for all environments, just like with other software artifacts?</p>
<p>I find creating a new image for each environment or certificate renewal tedious, but if it has to be like this...</p>
| Julio | <p>Definitely do <strong>not</strong> bake certificates into the image.</p>
<p>Because you tagged your question with <code>azure-aks</code>, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.</p>
<ul>
<li>See the <a href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" rel="nofollow noreferrer">plugin project page on GitHub</a></li>
<li>See also this doc <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/getting-certs-and-keys/" rel="nofollow noreferrer">Getting Certificates and Keys using Azure Key Vault Provider</a></li>
<li>This doc is better, more thorough and worth going through even if you're not using the nginx ingress controller <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/ingress-tls/" rel="nofollow noreferrer">Enable NGINX Ingress Controller with TLS</a></li>
</ul>
<p>And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.</p>
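<p>As a rough sketch, on a recent AKS cluster the driver can also be enabled as a managed add-on (cluster and resource-group names are placeholders):</p>
<pre><code>az aks enable-addons \
  --addons azure-keyvault-secrets-provider \
  --name myAKSCluster \
  --resource-group myResourceGroup
</code></pre>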
| julie-ng |
<p>Should I use URLs at the root of my application like so:</p>
<pre><code>/ready
/live
</code></pre>
<p>Should they both be grouped together like so:</p>
<pre><code>/status/ready
/status/live
</code></pre>
<p>Should I use <a href="https://www.rfc-editor.org/rfc/rfc5785" rel="nofollow noreferrer">RFC5785</a> and put them under the <code>.well-known</code> sub-directory like so:</p>
<pre><code>/.well-known/status/ready
/.well-known/status/live
</code></pre>
<p>If I do this, my understanding is that I have to register the <code>status</code> assignment with the official <a href="https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml" rel="nofollow noreferrer">IANA</a> registry.</p>
<p>Or is there some other scheme? I'm looking for a common convention that people use.</p>
| Muhammad Rehan Saeed | <p>The Kubernetes docs use <code>/healthz</code>, which I'd say is advisable to follow; but you really can use whatever you want.</p>
<p>I believe <code>healthz</code> is used to keep it in line with <code>zpages</code>, which are described by OpenCensus:</p>
<p><a href="https://opencensus.io/zpages/" rel="nofollow noreferrer">https://opencensus.io/zpages/</a></p>
| Rawkode |
<p>I want to set the <code>scrape_interval</code> for Prometheus to 15 seconds. My config below doesn't work; there is an error in the last line. How should I configure the 15-second <code>scrape_interval</code>?</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: main
spec:
serviceAccountName: prometheus
replicas: 1
version: v1.7.1
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
team: frontend
ruleSelector:
matchLabels:
role: alert-rules
prometheus: rules
resources:
requests:
memory: 400Mi
scrape_interval: 15s ##Error in this line.
</code></pre>
<p>I got this error message when compiling the config above:</p>
<p><code>error: error validating "promethus.yml": error validating data: ValidationError(Prometheus): unknown field "scrape_interval" in com.coreos.monitoring.v1.Prometheus; if you choose to ignore these errors, turn validation off with --validate=false</code></p>
<p>Thanks!</p>
| kevin | <p><code>scrape_interval</code> is a parameter name in the Prometheus configuration file itself, not a field on the <code>Prometheus</code> object in k8s (which is read by prometheus-operator and used to generate the actual config).</p>
<p>You can see in the <a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#prometheusspec" rel="nofollow noreferrer">prometheus operator documentation</a> that the parameter you are looking for is <code>scrapeInterval</code>. Ensure correct indentation, this is supposed to be part of <code>spec:</code>.</p>
<p>Note that you do not have to change scrape interval globally. You can have per scrape target intervals defined in your ServiceMonitor objects.</p>
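<p>After applying the change, a quick way to confirm the field landed where expected on the live object (using the <code>main</code> Prometheus from the question; adjust the namespace if needed) is:</p>
<pre><code>kubectl get prometheus main -o jsonpath='{.spec.scrapeInterval}{"\n"}'
</code></pre>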
| bjakubski |
<p>My CoreDNS Corefile got corrupted somehow and now I need to regenerate it or reset it to its default installed value. How do I do that? I've tried copying and pasting a locally-saved version of the file via <code>kubectl edit cm coredns -n kube-system</code> but I get validation errors </p>
<pre><code>error: configmaps "coredns" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-suzaq.yaml"
error: Edit cancelled, no valid changes were saved.
</code></pre>
| user_78361084 | <p>When you directly edit the ConfigMap in place, this kind of validation error can occur.</p>
<p>What can you do?</p>
<blockquote>
<p>before you run anything, please take a backup:</p>
</blockquote>
<pre><code>kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
</code></pre>
<h3>Way #1: force-apply it</h3>
<pre><code>kubectl apply --force -f /tmp/kubectl-edit-suzaq.yaml
</code></pre>
<p>In most cases this will apply the latest settings successfully. If it fails, go through the error, update the file <code>/tmp/kubectl-edit-suzaq.yaml</code> and force-apply again.</p>
<h3>Way #2: delete and apply again</h3>
<pre><code>kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
# make a backup if you're not 100% sure the change will work
cp coredns.yaml coredns.yaml.orig
# update the change in coredns.yaml
# delete the coredns configmap (note it lives in kube-system)
kubectl -n kube-system delete configmap coredns
# apply new change
kubectl apply -f coredns.yaml
</code></pre>
<p>Be careful: the steps above will cause an outage. If you work in a production environment, you should back up all Kubernetes settings before making the change above.</p>
| BMW |
<p>I'm working on a project that uses the <code>klog</code> library for logging, and I want to be able to write the logs to a <strong>terminal</strong> while also writing them to a <strong>file</strong> for viewing after a reboot or downtime.</p>
<p>I tested with the following code, but it only writes to the file and does not also output to the terminal:</p>
<pre><code>package main
import (
"flag"
"k8s.io/klog/v2"
)
func init() {
    var fs flag.FlagSet
    klog.InitFlags(&fs)
    fs.Set("logtostderr", "false")
    fs.Set("log_file_max_size", "1000")
    fs.Set("log_file", "/home/test/workspace/test/test.log")
}
func main() {
defer klog.Flush()
klog.Info("test")
}
</code></pre>
<p>How can I get it? I really appreciate any help with this.</p>
| moluzhui | <p>If you read the <a href="https://pkg.go.dev/k8s.io/klog/v2#section-documentation" rel="nofollow noreferrer">Klog documentation</a> says that it supports a flag, <code>-alsologtostderr</code>:</p>
<blockquote>
<p><code>-alsologtostderr=false</code><br/>
Logs are written to standard error as well as to files.</p>
</blockquote>
<p>Seems like that would do you.</p>
| Nicholas Carey |
<p>I'd like to prepare multiple YAML files customizing the arguments of flannel (a DaemonSet) and run the flannel pod on each node with the YAML matching the condition expressed by a label. Can I label a worker node before it joins the Kubernetes master?</p>
| shigeru ishida | <p>You can specify <code>--node-labels</code> when your <code>kubelet</code> is starting, which will apply the labels to the node; but <strong>ONLY</strong> during registration.</p>
<p>This will not work if your <code>kubelet</code> is starting up and the node is already a member of the cluster.</p>
<p><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet" rel="nofollow noreferrer">Kubelet Docs</a></p>
| Rawkode |
<p>I have an init container and am running a command in it which takes a ton of parameters so I have something like </p>
<pre><code>command: ['sh', '-c', 'tool input1 input2 output
-action param1 -action param2 -action param3 -action param4 -action param5 -action param6 -action param7 -action param7 -action param8 -action param9 -action param10 ']
</code></pre>
<p>This greatly decreases the readability of the command. Is there some way I can improve this, like passing the arguments as a separate array?</p>
| The_Lost_Avatar | <p>Another tip is to generate the YAML via a command, which saves you time since you needn't write it manually.</p>
<pre><code>kubectl run nginx --image=nginx --generator=run-pod/v1 --dry-run -o yaml -- tool input1 input2 output -action param1 -action param2 -action param3 -action param4 -action param5 -action param6 -action param7 -action param7 -action param8 -action param9 -action param10
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
containers:
- args:
- tool
- input1
- input2
- output
- -action
- param1
- -action
- param2
- -action
- param3
- -action
- param4
- -action
- param5
- -action
- param6
- -action
- param7
- -action
- param7
- -action
- param8
- -action
- param9
- -action
- param10
image: nginx
name: nginx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>The <code>dry-run</code> command above generates the YAML for the args option; copy & paste it into the corresponding part of your original init container. It keeps things simple and takes less than a minute.</p>
<p>Don't forget that you still need to keep the command part:</p>
<pre><code> command: ["/bin/sh","-c"]
</code></pre>
| BMW |
<p>In a project I work on, scaling and orchestration are implemented using the technologies of a local cloud provider, with no Docker & Kubernetes. But the project has poor logging and monitoring, so I'd like to install Prometheus, Loki, and Grafana for metrics, logs, and visualisation respectively. Unfortunately, I've found no articles with instructions about using Prometheus without K8s.</p>
<p>But is it possible? If so, is it a good way? And how to do this? I also know that Prometheus & Loki can automatically detect services in the K8s to extract metrics and logs, but will the same work for a custom orchestration system?</p>
| AivanF. | <p>Can't comment about Loki, but Prometheus is definitely doable.
Prometheus supports a number of service discovery mechanisms, k8s being just on of them. If you look at the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">list of options</a> (the ones ending with _sd_config) you can see if your provider is there.
If it is not then a generic service discovery can be used. Maybe <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config" rel="nofollow noreferrer">DNS-based</a> discovery will work with your custom system? If not then with some glue code a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config" rel="nofollow noreferrer">file based service discovery</a> will almost certainly work.</p>
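<p>For example, a file-based target list that your orchestration glue code could maintain might look like this (paths and addresses are hypothetical; Prometheus picks up changes to such files automatically):</p>
<pre><code># referenced from prometheus.yml via a file_sd_configs entry
cat > /etc/prometheus/targets/my-service.json <<'EOF'
[
  { "targets": ["10.0.0.11:9100", "10.0.0.12:9100"], "labels": { "job": "node" } }
]
EOF
</code></pre>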
| bjakubski |
<p>My metrics are being scraped every 30 seconds, even though I've specified a 10s interval when defining my ServiceMonitor.</p>
<p>I've created a servicemonitor for my exporter that seems to be working well. I can see my exporter as a target, and view metrics on the /graph endpoint. However, when on the "targets" page, the "last scrape" is showing the interval as 30s (I refresh the page to see how high the # of seconds will go up to, it was 30). Sure enough, zooming in on the graph shows that metrics are coming in every 30 seconds.</p>
<p>I've set my servicemonitor's interval to 10s, which should override any other intervals. Why is it being ignored?</p>
<pre><code> endpoints:
- port: http-metrics
scheme: http
interval: 10s
</code></pre>
| Trevor Jordy | <p>First: double-check that you've changed the ServiceMonitor you actually meant to change, and that you are looking at scrapes coming from that ServiceMonitor.</p>
<p>Go to the web UI of your prometheus and select Status -> Configuration.
Now try to find part of the config that prometheus operator created (based on ServiceMonitor config). Probably looking by servicemonitor name will work - there should be a section with <code>job_name</code> containing your servicemonitor name.</p>
<p>Now look at the <code>scrape_interval</code> value in this section. If it is "30s" (or anything else that is not the expected "10s") and you are sure you're looking at the correct section then it means one of those things happened:</p>
<ul>
<li>your ServiceMonitor does not really contain "10s" - maybe it was not applied correctly? Verify the live object in your cluster (see the command after this list)</li>
<li>prometheus-operator did not update the Prometheus configuration - maybe it is not working? or is crashing? or just silently stopped working? It is quite safe just to restart the prometheus-operator pod, so it may be worth trying.</li>
<li>prometheus did not load the new config correctly? prometheus-operator updates a secret, and when it changes, a sidecar in the prometheus pod triggers a reload in prometheus. Maybe it didn't work? Look again in the Web UI in Status -> Runtime & Build information for "Configuration reload". Is it Successful? Does the "Last successful configuration reload" time roughly match your change to the ServiceMonitor? If it was not "Successful" then maybe some other change made the final Prometheus config incorrect and it is unable to load it?</li>
</ul>
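<p>To verify the live ServiceMonitor object mentioned in the first point above (names are placeholders):</p>
<pre><code>kubectl -n <namespace> get servicemonitor <name> -o jsonpath='{.spec.endpoints[*].interval}{"\n"}'
</code></pre>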
| bjakubski |
<p>I have one question about Grafana. How can I use the existing Prometheus DaemonSet on GKE for Grafana? I do not want to spin up one more Prometheus deployment just for Grafana. I came up with this question after I spun up the GKE cluster. I checked the <code>kube-system</code> namespace and it turns out there is a Prometheus DaemonSet already deployed. </p>
<pre><code>$ kubectl get daemonsets -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-to-sd 2 2 2 2 2 beta.kubernetes.io/os=linux 19d
</code></pre>
<p>and I would like to use this Prometheus</p>
<p>I have Grafana deployment with helm <code>stable/grafana</code></p>
<pre><code>$ kubectl get deploy -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
grafana 1/1 1 1 9m20s
</code></pre>
<p>Currently, I am using <code>stable/prometheus</code></p>
| Farkhod Sadykov | <p>prometheus-to-sd is not a Prometheus instance, but a component that allows getting data from Prometheus to GCP's stackdriver. More info here: <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd</a></p>
<p>If you'd like to have Prometheus you'll have to run it separately. (The <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">prometheus-operator helm chart</a> is able to deploy the whole monitoring stack to your GKE cluster easily, which may or may not be exactly what you need here.)</p>
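<p>As a sketch, installing that stack with Helm 3 might look like the following (note the chart has since moved to the prometheus-community repo as kube-prometheus-stack):</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
</code></pre>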
<p>Note that recent Grafana versions come with a Stackdriver datasource, which allows you to query Stackdriver directly from Grafana (if all the metrics you need are, or can be, in Stackdriver).</p>
| bjakubski |
<p>On <code>CentOS 7.4</code>, I have set up a Kubernetes master node, pulled down jenkins image and deployed it to the cluster defining the jenkins service on a NodePort as below.</p>
<p>I can curl the jenkins app from the worker or master nodes using the IP defined by the service. But, I can not access the Jenkins app (dashboard) from my browser (outside cluster) using the public IP of the master node.</p>
<pre><code>[administrator@abcdefgh ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
abcdefgh Ready master 19h v1.13.1
hgfedcba Ready <none> 19h v1.13.1
[administrator@abcdefgh ~]$ sudo docker pull jenkinsci/jenkins:2.154-alpine
[administrator@abcdefgh ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 days ago 80.2MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 days ago 181MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 days ago 79.6MB
jenkinsci/jenkins 2.154-alpine aa25058d8320 2 weeks ago 222MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[administrator@abcdefgh ~]$ ls -l
total 8
-rw------- 1 administrator administrator 678 Dec 18 06:12 jenkins-deployment.yaml
-rw------- 1 administrator administrator 410 Dec 18 06:11 jenkins-service.yaml
[administrator@abcdefgh ~]$ cat jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
name: jenkins-ui
spec:
type: NodePort
ports:
- protocol: TCP
port: 8080
targetPort: 8080
name: ui
selector:
app: jenkins-master
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-discovery
spec:
selector:
app: jenkins-master
ports:
- protocol: TCP
port: 50000
targetPort: 50000
name: jenkins-slaves
[administrator@abcdefgh ~]$ cat jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins-master
spec:
containers:
- image: jenkins/jenkins:2.154-alpine
name: jenkins
ports:
- containerPort: 8080
name: http-port
- containerPort: 50000
name: jnlp-port
env:
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
emptyDir: {}
[administrator@abcdefgh ~]$ kubectl create -f jenkins-service.yaml
service/jenkins-ui created
service/jenkins-discovery created
[administrator@abcdefgh ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery ClusterIP 10.98.--.-- <none> 50000/TCP 19h
jenkins-ui NodePort 10.97.--.-- <none> 8080:31587/TCP 19h
kubernetes ClusterIP 10.96.--.-- <none> 443/TCP 20h
[administrator@abcdefgh ~]$ kubectl create -f jenkins-deployment.yaml
deployment.extensions/jenkins created
[administrator@abcdefgh ~]$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 19h
[administrator@abcdefgh ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jenkins-6497cf9dd4-f9r5b 1/1 Running 0 19h
kube-system coredns-86c58d9df4-jfq5b 1/1 Running 0 20h
kube-system coredns-86c58d9df4-s4k6d 1/1 Running 0 20h
kube-system etcd-abcdefgh 1/1 Running 1 20h
kube-system kube-apiserver-abcdefgh 1/1 Running 1 20h
kube-system kube-controller-manager-abcdefgh 1/1 Running 5 20h
kube-system kube-flannel-ds-amd64-2w68w 1/1 Running 1 20h
kube-system kube-flannel-ds-amd64-6zl4g 1/1 Running 1 20h
kube-system kube-proxy-9r4xt 1/1 Running 1 20h
kube-system kube-proxy-s7fj2 1/1 Running 1 20h
kube-system kube-scheduler-abcdefgh 1/1 Running 8 20h
[administrator@abcdefgh ~]$ kubectl describe pod jenkins-6497cf9dd4-f9r5b
Name: jenkins-6497cf9dd4-f9r5b
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: hgfedcba/10.41.--.--
Start Time: Tue, 18 Dec 2018 06:32:50 -0800
Labels: app=jenkins-master
pod-template-hash=6497cf9dd4
Annotations: <none>
Status: Running
IP: 10.244.--.--
Controlled By: ReplicaSet/jenkins-6497cf9dd4
Containers:
jenkins:
Container ID: docker://55912512a7aa1f782784690b558d74001157f242a164288577a85901ecb5d152
Image: jenkins/jenkins:2.154-alpine
Image ID: docker-pullable://jenkins/jenkins@sha256:b222875a2b788f474db08f5f23f63369b0f94ed7754b8b32ac54b8b4d01a5847
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 18 Dec 2018 07:16:32 -0800
Ready: True
Restart Count: 0
Environment:
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wqph5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wqph5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wqph5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
[administrator@abcdefgh ~]$ kubectl describe svc jenkins-ui
Name: jenkins-ui
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=jenkins-master
Type: NodePort
IP: 10.97.--.--
Port: ui 8080/TCP
TargetPort: 8080/TCP
NodePort: ui 31587/TCP
Endpoints: 10.244.--.--:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Check if NodePort along with Kubernetes ports are open
[administrator@abcdefgh ~]$ sudo su root
[root@abcdefgh administrator]# systemctl start firewalld
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
Warning: ALREADY_ENABLED: 6443:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
Warning: ALREADY_ENABLED: 2379-2380:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
Warning: ALREADY_ENABLED: 10250:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
Warning: ALREADY_ENABLED: 10251:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
Warning: ALREADY_ENABLED: 10252:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
Warning: ALREADY_ENABLED: 10255:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=31587/tcp # NodePort of jenkins-ui service
Warning: ALREADY_ENABLED: 31587:tcp
success
[root@abcdefgh administrator]# firewall-cmd --reload
success
[administrator@abcdefgh ~]$ kubectl cluster-info
Kubernetes master is running at https://10.41.--.--:6443
KubeDNS is running at https://10.41.--.--:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[administrator@hgfedcba ~]$ curl 10.41.--.--:8080
curl: (7) Failed connect to 10.41.--.--:8080; Connection refused
# Successfully curl jenkins app using its service IP from the worker node
[administrator@hgfedcba ~]$ curl 10.97.--.--:8080
<!DOCTYPE html><html><head resURL="/static/5882d14a" data-rooturl="" data-resurl="/static/5882d14a">
<title>Dashboard [Jenkins]</title><link rel="stylesheet" ...
...
</code></pre>
<p>Would you know how to do that? Happy to provide additional logs. Also, I have installed jenkins from yum on another similar machine without any docker or kubernetes and it's possible to access it through 10.20.30.40:8080 in my browser so there is no provider firewall preventing me from doing that.</p>
| Robin | <p>Your Jenkins Service is of type <code>NodePort</code>. That means that a specific port number, on any node within your cluster, will deliver your Jenkins UI.</p>
<p>When you described your Service, you can see that the port assigned was <code>31587</code>.</p>
<p>You should be able to browse to <code>http://SOME_IP:31587</code></p>
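<p>A quick connectivity check from outside the cluster (using the master's address and the NodePort from the question) would be:</p>
<pre><code>curl -I http://10.41.--.--:31587/   # any node's IP works for a NodePort service
</code></pre>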
| Rawkode |
<p>I'm using the latest prometheus 2.21.0 and latest node-exporter</p>
<p>I'm trying to run the query below and getting <strong>no datapoints found</strong>, although both metrics <code>kube_pod_container_resource_limits_memory_bytes</code> and <code>node_memory_MemTotal_bytes</code> work independently and return data:</p>
<pre><code>(sum(kube_pod_container_resource_limits_memory_bytes) / :node_memory_MemTotal_bytes:sum)*100
</code></pre>
<p>So two questions</p>
<ol>
<li>I have never seen the syntax <code>:node_memory_MemTotal_bytes:sum</code> before - is it a valid Prometheus query?</li>
<li>If the syntax is correct, what is wrong with the query?</li>
</ol>
| DmitrySemenov | <ol>
<li>This is a convention widely used in Prometheus land. It means this metric is not one directly scraped from some target(s), but instead the result of a recording rule. This convention is described <a href="https://prometheus.io/docs/practices/rules/#naming-and-aggregation" rel="nofollow noreferrer">here</a>.</li>
<li>If the queries on both the left and right side return data individually, but after performing arithmetic on them you are left with no data, then it probably means the labels on them are not exactly the same. Execute them separately and compare the labels on the results. Assuming that <code>:node_memory_MemTotal_bytes:sum</code> does return data, then you'll probably have to add <code>sum</code> to the left-hand side too, to remove any remaining labels.</li>
</ol>
| bjakubski |