Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---
<p>Is it possible for an app running inside a pod to get details/metadata about its own pod? I know this can be easily achieved using the <a href="https://kubernetes.io/docs/concepts/workloads/pods/downward-api/" rel="nofollow noreferrer">downwardAPI</a> or <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">env variables</a>.</p>
<p>Is there a third way?</p>
<p>I want to find out the service account and namespace of the pod without using either of these approaches. Is it possible?</p>
| G13 | <p>Is there any particular reason to decouple container details? Bear in mind that the downward API is intended to provide details such as the service account.
Now, responding to your question, I envision two possible ways to achieve your objective: kubectl and the API server. I elaborated on the kubectl approach:</p>
<p><strong>Kubernetes client (kubectl)</strong></p>
<ul>
<li>Get information about your Kubernetes secret object</li>
</ul>
<blockquote>
<p>kubectl get secret --namespace={namespace}</p>
</blockquote>
<p>Following is a sample output:</p>
<pre><code>NAME TYPE DATA AGE
admin.registrykey kubernetes.io/dockercfg 1 1h
default-token-2mfqv kubernetes.io/service-account-token 3 1h
</code></pre>
<ul>
<li>Get details of the service account token</li>
</ul>
<blockquote>
<p>kubectl get secret default-token-2mfqv --namespace={namespace} -o yaml</p>
</blockquote>
<p>Following is a sample output, <strong>notice the annotations section</strong>:</p>
<pre><code>apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURVRENDQWppZ0F3SUJBZ0lKQUxSSHNCazBtblM4TUEwR0NTcUdTSWIzRFFFQkN3VUFNQjh4SFRBYkJnTlYKQkFNTUZERXlOeTR3TGpBdU1VQXhORGczTWprNU1EZ3lNQjRYRFRFM01ESXhOekF5TXpnd01sb1hEVEkzTURJeApOVEF5TXpnd01sb3dIekVkTUJzR0ExVUVBd3dVTVRJM0xqQXVNQzR4UURFME9EY3lPVGt3T0RJd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDMTUwZWxjRXZXUDBMVFZZK09jNTl4ZG9PUCtXb08Kd3BGNGRxaGpDSDdyZGtUcGVKSE1zeW0raU4wMWxBSjNsc2UvYjB0V2h5L1A5MVZpZmpjazFpaDBldDg0eUZLawpuQWFaNVF6clJxQjk2WGZ3VVVyUElZc0RjRlpzbnAwZUlZU0xJdEhSSHQ3dlY0R3hqbG1TLzlpMzBIcW5rTWJTCmtCbU0xWEp2ZXdjVkROdE55NUE3K1RhNmJWcmt5TlpPZFFjZTkzMk0yTGZ2bUFORzI2UTRtd0x1MlAxNnZGV3EKbkdDd055OVl3Y0k2YVhpQTFSVTNLdWR5d00zZzN2aU03UVMyMXRGbkh4RzJrcU5NNHVKdWZDYnNNZ1gwd1hNQgpuZWZzZ053K0p1b2VnZzFVcHd5RmQydjVyMEpQVkxBN0N1T1d6RzVtK0RrNWNlWExOaGVwMDhxUkFnTUJBQUdqCmdZNHdnWXN3SFFZRFZSME9CQllFRkxlV3ZDOThkZFJxQ2t0eGVla2t5bnY1aCtDSU1FOEdBMVVkSXdSSU1FYUEKRkxlV3ZDOThkZFJxQ2t0eGVla2t5bnY1aCtDSW9TT2tJVEFmTVIwd0d3WURWUVFEREJReE1qY3VNQzR3TGpGQQpNVFE0TnpJNU9UQTRNb0lKQUxSSHNCazBtblM4TUF3R0ExVWRFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHCk1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2Z1ZTdQcVFFR2NMUitjQ2tJMXdISHR2ei9tZWJmNndqUHBqN0oKamV5TG5aeWVZMUVZeEJDWEJEYk9BaU5BRTh6aWQrcm1Fd0w5NndtOGFweUVnbEN6aDhmU1ZoZ1dtYmZKSUNQQQpTTGdFZ1ZjOFJDQk5OdjUwWTQ4L0NXWXFZL2pjZkxYQ1VOdVU5RXhQd1BKRE9jNHhFOFg1NHZDekxzZUF3ZnQ0CmlBS0R0QzZmS0FMNXZQL3RRbHBya2FuVC9zcEVackNZV2IyZXlkRjV4U1NMKzNUbVJTeXgvUkczd1FTWEtCT3cKVGdjaWxJdFQ1WlAwQ0V2WHI1OFBMRXZKMVE1TGZ2Q0w0bkliTEEzMmVucUQ4UlZkM01VbkgxSnFpLzU4VktLQgo4SFpBb1V2bkl2SG5SNGVVbnAwMXFWVFpsS21Xc0JtbjV3MkxaS1FWMEIvVzlnSFAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbVJsWm1GMWJIUXRkRzlyWlc0dE1tMW1jWFlpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1dVlXMWxJam9pWkdWbVlYVnNkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJbVJtTkRReFl6WTVMV1kwWW1FdE1URmxOaTA0TVRVM0xUVXlOVFF3TURJeU5XSTFNeUlzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwa1pXWmhkV3gwT21SbFptRjFiSFFpZlEuWlN4MTNtY3JPcEwteVFmQWtTV05Ja0VELUIxeTNnckJTREg0Z1lwMnNkb2FZNXBSaWMxc3hXWjRDb0M0YVlnN3pzc09oWHk0NDc5VTh6RTVmVmZ3eFdCSXVWUDVoTEJwWTFHOWhlMzZzSkw1dEpjY2dqSVZhaTFZcHUtQld0dERkRFhnUVZXSHZtQmt0STVPaG1GMTFoWFNqd05VUDhYb2NNY1lKMzZUcFZxbkZCLUZaZ1RnN2h5eWdoclN4MnZTTThHNWhPMWlEdXFFbGlrNTUzQy1razVMTGFnc01DRVpkblBKM2tFb0dzX3hoTVVsaDc3OEkweTMwV3FwYW9uOHBLS1I1NjIzMjd6eTdXNGY0UnJhc3VPSGZwUGE3SVE5cU1ub21fcWxBcWxDQ3lXVEkyV3dxQ09xdnNHUmdNUHJjemc3WnYzLWlXRktBaVc3ZU5VYnVR
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: df441c69-f4ba-11e6-8157-525400225b53
creationTimestamp: 2017-02-17T02:43:33Z
name: default-token-2mfqv
namespace: default
resourceVersion: "37"
selfLink: /api/v1/namespaces/default/secrets/default-token-2mfqv
uid: df5f1109-f4ba-11e6-8157-525400225b53
type: kubernetes.io/service-account-token
</code></pre>
<p>There's a complete document <a href="https://www.ibm.com/docs/en/cloud-paks/cp-management/2.0.0?topic=apis-cli-tools-guide" rel="nofollow noreferrer">here</a>.</p>
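<p>For completeness, below is a minimal sketch of the API server approach mentioned above (not elaborated here): a pod can read its own namespace and service account token from the files Kubernetes mounts by default and query the API server directly. It assumes the default token mount is present, that the service account has RBAC permission to read pods, and that <code>$HOSTNAME</code> still equals the pod name (the default).</p>
<pre class="lang-bash prettyprint-override"><code># Namespace and token are mounted into every pod by default
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Ask the API server for this pod's own metadata
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME}"
</code></pre>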
| jmvcollaborator |
<p>I have a .NetCore C# project which performs an HTTP POST. The project is set up in Kubernetes and I've noticed the logs below:</p>
<pre><code>Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:45 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:46 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:47 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:48 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:49 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:50 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:51 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:52 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:53 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:54 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:55 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:56 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:33 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:34 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:35 +00:00".
</code></pre>
<p>After some initial research, it seems this is a common result of threadpool starvation. Accordingly, in November last year, I made the post asynchronous and also logged the Max threads and Available threads as follows for monitoring purposes:</p>
<pre><code>ThreadPool.GetMaxThreads(out int workerThreads, out int completionPortThreads);
ThreadPool.GetAvailableThreads(out int workerThreadAvailable, out int completionPortThreadsAvailable);
_log.Info(new { message = $"Max threads = {workerThreads} and Available threads = {workerThreadAvailable}" });
</code></pre>
<p>Consistently over the past few months, the logging shows: <em>Max threads = 32767 and Available threads = 32766</em>. That seems fine, however, I'm noticing the same Heartbeat error so am wondering if this really is a threadpool starvation issue. Might someone know what else is going on and if this error is actually a result of something else? Any investigation/resolution tips for this would be much appreciated!</p>
| ENV | <p>This is a resource issue, as @andy pointed out in his response.</p>
<p>According to the OP, the solution to this problem is to either increase the server's CPU capacity (scale vertically) or increase the number of instances of the app (scale horizontally).</p>
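<p>As a rough sketch of both options (the deployment name and resource values below are illustrative, not taken from the original post):</p>
<pre class="lang-bash prettyprint-override"><code># Scale horizontally: run more replicas of the app
kubectl scale deployment my-dotnet-app --replicas=5

# Scale vertically: raise the CPU request/limit on the deployment's containers
kubectl set resources deployment my-dotnet-app --requests=cpu=500m --limits=cpu=2
</code></pre>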
| Alex G |
<p><strong>EDIT: I hardcoded the fluentd service IP directly in my express app and it's working. How can I get it to work without hardcoding the IP?</strong></p>
<p>I have a couple of pods <strong>(nodejs + express server)</strong> running on a Kubernetes cluster.</p>
<p>I'd like send logs from my <strong>nodejs pods</strong> to a <strong>Fluentd DeamonSet</strong>.</p>
<p>But I'm getting this error :</p>
<p><code>Fluentd error Error: connect ECONNREFUSED 127.0.0.1:24224</code></p>
<p>I'm using <a href="https://github.com/fluent/fluent-logger-node" rel="nofollow noreferrer">https://github.com/fluent/fluent-logger-node</a> and my configuration is pretty simple:</p>
<pre><code>const logger = require('fluent-logger')
logger.configure('pptr', {
host: 'localhost',
port: 24224,
timeout: 3.0,
reconnectInterval: 600000
});
</code></pre>
<p>My fluentd conf file:</p>
<pre><code><source>
@type forward
port 24224
bind 0.0.0.0
</source>
# Ignore fluent logs
<label @FLUENT_LOG>
<match fluent.*>
@type null
</match>
</label>
<match pptr.**>
@type elasticsearch
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
type_name fluentd
logstash_format true
</match>
</code></pre>
<p>Here's the Fluentd DeamonSet config file:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: kube-system
labels:
k8s-app: fluentd-logging
version: v1
spec:
selector:
matchLabels:
k8s-app: fluentd-logging
version: v1
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
ports:
- containerPort: 24224
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "xxx"
- name: FLUENT_ELASTICSEARCH_PORT
value: "xxx"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "https"
# Option to configure elasticsearch plugin with self signed certs
# ================================================================
- name: FLUENT_ELASTICSEARCH_SSL_VERIFY
value: "true"
# Option to configure elasticsearch plugin with tls
# ================================================================
- name: FLUENT_ELASTICSEARCH_SSL_VERSION
value: "TLSv1_2"
# X-Pack Authentication
# =====================
- name: FLUENT_ELASTICSEARCH_USER
value: "xxx"
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: "xxx"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: config-volume
mountPath: /fluentd/etc/kubernetes.conf
subPath: kubernetes.conf
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: config-volume
configMap:
name: fluentd-conf
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>I also tried to deploy a service and expose port <strong>24224</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: fluentd
namespace: kube-system
labels:
app: fluentd
spec:
ports:
- name: "24224"
port: 24224
targetPort: 24224
selector:
k8s-app: fluentd-logging
status:
loadBalancer: {}
</code></pre>
<p>Finally my express app (deployment) is here:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: puppet
labels:
app: puppet
spec:
replicas: 5
selector:
matchLabels:
app: puppet
template:
metadata:
labels:
app: puppet
spec:
containers:
- name: puppet
image: myrepo/my-image
ports:
- containerPort: 8080
</code></pre>
<p><strong>EDIT: I hardcoded the fluentd service IP directly in my express app and it's working. How can I get it to work without hardcoding the IP?</strong></p>
| MichelDelpech | <p>Focusing on below parts of the question:</p>
<blockquote>
<p>I'd like send logs from my nodejs pods to a Fluentd DeamonSet.</p>
</blockquote>
<blockquote>
<p>EDIT: I hardcoded the fluentd service IP directly in my express app and it's working. How can I get it to work without hardcoding the IP?</p>
</blockquote>
<p>It looks like the communication between pods and the <code>fluentd</code> service is correct (hardcoding the IP works). The issue here is the way they can communicate with each other.</p>
<p>You can communicate with service <code>fluentd</code> by its name. For example (from the inside of a pod):</p>
<ul>
<li><code>curl fluentd:24224</code></li>
</ul>
<p><strong>You can communicate with a service by its name (like <code>fluentd</code>) only within the same namespace.</strong> If a service is in another namespace, you need to use its full DNS name. The template and an example follow:</p>
<ul>
<li>template: <code>service-name.namespace.svc.cluster.local</code></li>
<li>example: <code>fluentd.kube-system.svc.cluster.local</code></li>
</ul>
<p>You can also use a Service of type <code>ExternalName</code> to map the full DNS name of your service to a shorter version, like below:</p>
<hr />
<p>Assuming that (example):</p>
<ul>
<li>You have created a <code>nginx-namespace</code> namespace:
<ul>
<li><code>$ kubectl create namespace nginx-namespace</code></li>
</ul>
</li>
<li>You have an <code>nginx</code> <code>Deployment</code> inside the <code>nginx-namespace</code> and a service associated with it:
<ul>
<li><code>$ kubectl create deployment nginx --image=nginx --namespace=nginx-namespace</code></li>
<li><code>$ kubectl expose deployment nginx --port=80 --type=ClusterIP --namespace=nginx-namespace</code></li>
</ul>
</li>
<li>You want to communicate with <code>nginx</code> <code>Deployment</code> from another namespace (i.e. <code>default</code>)</li>
</ul>
<p>You have an option to communicate with above pod:</p>
<ul>
<li>By the IP address of a <code>Pod</code>
<ul>
<li><code>10.98.132.201</code></li>
</ul>
</li>
<li>By a (full) DNS service name
<ul>
<li><code>nginx.nginx-namespace.svc.cluster.local</code></li>
</ul>
</li>
<li>By an <code>ExternalName</code> type of service that points to a (full) DNS service name
<ul>
<li><code>nginx-service</code></li>
</ul>
</li>
</ul>
<p>The example of <code>ExternalName</code> type of service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: default # <- the same as the pod communicating with the service
spec:
type: ExternalName
externalName: nginx.nginx-namespace.svc.cluster.local
</code></pre>
<p>You can pass this information to the pod by either:</p>
<ul>
<li>Environment variable: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Kubernetes.io: Define environment variable container</a></li>
<li>ConfigMap: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Kubernetes.io: Configure pod configmap</a></li>
</ul>
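<p>As a minimal sketch of the environment-variable option: the Node.js app could read the Fluentd host from an environment variable instead of <code>localhost</code>. The variable name <code>FLUENTD_HOST</code> is hypothetical, the DNS name assumes the <code>fluentd</code> service lives in <code>kube-system</code> as in the manifests above, and the connectivity check assumes a shell and <code>nc</code> are available in the image:</p>
<pre class="lang-bash prettyprint-override"><code># Inject the full DNS name of the fluentd service into the app's deployment
kubectl set env deployment/puppet FLUENTD_HOST=fluentd.kube-system.svc.cluster.local

# Verify the name resolves and port 24224 is reachable from inside a pod
kubectl exec -it deploy/puppet -- sh -c 'nc -vz fluentd.kube-system.svc.cluster.local 24224'
</code></pre>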
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/44329470/12257134">Stackoverflow.com: Service located in another namespace</a></em></li>
</ul>
| Dawid Kruk |
<p>In most TCP client/server communications, the client uses a random general purpose port number for outgoing traffic. However, my client application, which is running inside a Kubernetes cluster, must use a specific port number for outgoing traffic; this is due to requirements by the server.</p>
<p>This normally works fine when the application is running externally, but when inside a Kubernetes cluster, the source port is modified somewhere along the way from the pod to the worker node (verified with tcpdump on worker node).</p>
<p>For context, I am using a LoadBalancer Service object. The cluster is running kube-proxy in Iptables mode.</p>
| anonemes11 | <p>So I found that I can achieve this by setting the <code>hostNetwork</code> field to <em>true</em> in the pod's spec.</p>
<p>Not an ideal solution but gets the job done.</p>
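<p>For reference, a minimal sketch of how that could be applied to an existing workload (the deployment name is illustrative); keep in mind that with <code>hostNetwork: true</code> the pod shares the node's network namespace and port space:</p>
<pre class="lang-bash prettyprint-override"><code># Patch the pod template to use the node's network namespace,
# so outgoing traffic keeps the source port chosen by the application
kubectl patch deployment my-client --type=merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
</code></pre>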
| anonemes11 |
<p>I'm trying to make a gRPC service <a href="https://thanos.io/tip/components/sidecar.md/" rel="nofollow noreferrer">(thanos sidecar)</a> externally accessible over a domain in my kubernetes cluster <a href="https://docs.k3s.io/networking#traefik-ingress-controller" rel="nofollow noreferrer">(k3s cluster)</a>. I am using Traefik as an ingress controller.</p>
<p>Any clues as to what I may be misconfiguring would be much appreciated. I am really unclear where the problem lies, be it in the NLB in amazon (do I need something specific for grpc or can I just use TCP & port 80/443?), the Traefik ingress or the service itself.</p>
<p>I have been unsuccessful in finding any errors from traefik logs or service misconfiguration.</p>
<h3>Environment</h3>
<p>The gRPC service is deployed in the cluster as a sidecar container of a Prometheus deployment. This is being deployed using the <a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> helm chart.</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl describe pod prometheus-monitoring-prometheus-0 -n monitoring
Name: prometheus-monitoring-prometheus-0
Namespace: monitoring
Priority: 0
Service Account: monitoring-prometheus
Node: k3s-node-1/12.345.678.910
Start Time: Wed, 26 Jul 2023 18:35:38 +0000
Labels: app.kubernetes.io/instance=monitoring-prometheus
app.kubernetes.io/managed-by=prometheus-operator
app.kubernetes.io/name=prometheus
...
prometheus=monitoring-prometheus
statefulset.kubernetes.io/pod-name=prometheus-monitoring-prometheus-0
Annotations: kubectl.kubernetes.io/default-container: prometheus
Status: Running
IP: 10.42.0.200
IPs:
IP: 10.42.0.200
Controlled By: StatefulSet/prometheus-monitoring-prometheus
...
Containers:
...
thanos-sidecar:
Container ID: containerd://bdc1bbfe53bf1ea260c47a44ab26110432388fe5592e037c83da5c6b6c5f696f
Image: quay.io/thanos/thanos:v0.31.0
Image ID: quay.io/thanos/thanos@sha256:e7d337d6ac24233f0f9314ec9830291789e16e2b480b9d353be02d05ce7f2a7e
Ports: 10902/TCP, 10901/TCP
Host Ports: 0/TCP, 0/TCP
Args:
sidecar
--prometheus.url=http://127.0.0.1:9090/
--prometheus.http-client={"tls_config": {"insecure_skip_verify":true}}
--grpc-address=:10901
--http-address=:10902
--objstore.config=$(OBJSTORE_CONFIG)
--tsdb.path=/prometheus
--log.level=info
--log.format=logfmt
State: Running
Started: Wed, 26 Jul 2023 18:35:41 +0000
Ready: True
Restart Count: 0
Environment:
OBJSTORE_CONFIG: <set to the key 'objstore.yml' in secret 'my-s3-bucket'> Optional: false
Mounts:
/prometheus from prometheus-monitoring-prometheus-db (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slz8t (ro)
...
</code></pre>
<p>The sidecar container is then exposed specifically using a service</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl describe svc monitoring-thanos-discovery -n monitoring
Name: monitoring-thanos-discovery
Namespace: monitoring
Labels: app=monitoring-thanos-discovery
app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/part-of=monitoring
app.kubernetes.io/version=47.2.0
chart=kube-prometheus-stack-47.2.0
heritage=Helm
release=monitoring
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: monitoring
traefik.ingress.kubernetes.io/service.serversscheme: h2c
Selector: app.kubernetes.io/name=prometheus,prometheus=monitoring-prometheus
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: grpc 10901/TCP
TargetPort: grpc/TCP
Endpoints: 10.42.0.200:10901
Port: http 10902/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.200:10902
Session Affinity: None
Events: <none>
</code></pre>
<p>I am using an Ingress (default) to create a TLS certificate for my domain and an IngressRoute (Traefik-specific) to expose the service via what <em>I believe</em> to be an HTTP/2-capable endpoint.</p>
<p>thanos-ingress-dummy.yaml</p>
<pre class="lang-yaml prettyprint-override"><code># We use this resource to get a certificate for the given domain (To use with ingressroute)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: thanos-discovery-ingress-dummy
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
rules:
- host: "thanos-gateway.monitoring.domain.com"
http:
paths:
- path: /cert-placeholder
pathType: Prefix
backend:
service:
name: monitoring-thanos-discovery
port:
name: grpc
tls:
- hosts:
- "thanos-gateway.monitoring.domain.com"
secretName: thanos-sidecar-grpc-tls
</code></pre>
<p>thanos-ingressroute.yaml</p>
<pre class="lang-yaml prettyprint-override"><code># We use IngressRoute to allow our grpc server to be reachable. (Supports grpc over http2)
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: thanos-discovery-ingress
spec:
entryPoints:
- websecure
routes:
- match: Host(`thanos-gateway.monitoring.domain.com`)
kind: Rule
services:
- name: monitoring-thanos-discovery
port: grpc
tls:
secretName: thanos-sidecar-grpc-tls
</code></pre>
<p>Here's a picture of what this should look like right now.</p>
<p><a href="https://i.stack.imgur.com/ot3jI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ot3jI.png" alt="Thanos querier talking to external sidecar " /></a></p>
<h3>Problem</h3>
<p>The gRPC service is not reachable from outside the cluster over the specified domain.</p>
<p>From within a container inside the cluster, I am able to communicate with the server using <a href="https://github.com/fullstorydev/grpcurl/issues/154#issuecomment-1209211177" rel="nofollow noreferrer">grpcurl</a> against the <code>monitoring-thanos-discovery</code> service using the internal cluster DNS.</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl exec -it debian-debug -- bash
root@debian-debug:/# grpcurl -plaintext monitoring-thanos-discovery.monitoring.svc.cluster.local:10901 grpc.health.v1.Health.Check
{
"status": "SERVING"
}
</code></pre>
<p>When I try the same from outside the cluster against the domain I have specified in the ingresses (thanos-gateway.monitoring.domain.com), I get the following.</p>
<pre class="lang-bash prettyprint-override"><code>$ grpcurl --plaintext thanos-gateway.monitoring.domain.com:443 list
Failed to list services: server does not support the reflection API
</code></pre>
<p>When I do a curl request against the endpoint, I can verify that the request is being handled by Traefik; however, an Internal Server Error response is given. Curling the HTTP endpoint results in a 404, which is expected given that I only specified <code>websecure</code> in my ingress. I had previously also had <code>web</code> specified in the ingress, with the same responses from grpcurl and curl as on port 443.</p>
<pre class="lang-bash prettyprint-override"><code>$ curl https://thanos-gateway.monitoring.domain.com
Internal Server Error
$ curl http://thanos-gateway.monitoring.domain.com
404 page not found
</code></pre>
| Beefcake | <p>To answer my own question, the issue was twofold.</p>
<ol>
<li>Calling grpcurl with <code>--plaintext</code> when the only available endpoint uses TLS results in the below response. Meaning, <code>--plaintext</code> should be left out of the command when you have configured your route to use TLS.</li>
</ol>
<blockquote>
<p>Failed to list services: server does not support the reflection API</p>
</blockquote>
<ol start="2">
<li>The IngressRoute configuration needed some polishing. <br />
I stumbled upon an <a href="https://stackoverflow.com/questions/74880981/how-to-create-traefik-ingressroute-out-of-traefik-configuration">unrelated Stack Overflow question</a> which led me to the correct way to set up the configuration. I am not sure which part of the changes made it work, but I believe the addition of namespace, scheme, and passHostHeader does the trick here.</li>
</ol>
<p>What I changed</p>
<ul>
<li><p>I do not need the "fake" ingress (thanos-ingress-dummy.yaml) because I already have a wildcard certificate for *.domain.com</p>
</li>
<li><p>I changed the domain to thanos-grpc.domain.com to use the already existing tls cert (otherwise the old approach of making a fake ingress would probably still work, but I haven't checked)</p>
</li>
</ul>
<p>The new thanos-ingressroute.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: thanos
namespace: monitoring
spec:
entryPoints:
- websecure
routes:
- match: Host(`thanos-grpc.domain.com`)
kind: Rule
services:
- name: monitoring-thanos-discovery
namespace: monitoring
port: 10901
scheme: h2c
passHostHeader: true
tls:
secretName: my-domain-wildcard-tls
</code></pre>
<p>This is the response I now get calling the configured domain.</p>
<pre class="lang-bash prettyprint-override"><code>$ grpcurl thanos-grpc.domain.com:443 list
grpc.health.v1.Health
grpc.reflection.v1alpha.ServerReflection
thanos.Exemplars
thanos.Metadata
thanos.Rules
thanos.Store
thanos.Targets
thanos.info.Info
</code></pre>
<p><strong>Note that I am not using the <code>--plaintext</code> flag anymore.</strong></p>
<p>If I use the <code>--plaintext</code> flag, I get the same old response <em>Failed to list services: server does not support the reflection API</em>.</p>
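<p>For completeness, the same health check used earlier from inside the cluster should now also work over TLS through Traefik (a sketch, assuming the health service shown in the listing above):</p>
<pre class="lang-bash prettyprint-override"><code># No --plaintext: TLS is terminated by Traefik on the websecure entrypoint
grpcurl thanos-grpc.domain.com:443 grpc.health.v1.Health.Check
</code></pre>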
| Beefcake |
<p>Are the Kubernetes version and the AKS version referring to the same versioning scheme?</p>
<p>For example, does AKS version 1.18.14 refer to the same thing as Kubernetes version 1.18.14?</p>
<p>Istio 1.7 supports Kubernetes 1.18.x only, so I need to know whether it will still be compatible with Istio 1.7 if we upgrade AKS to 1.19.x.</p>
| Vowneee | <p>Kubernetes version is equal to the AKS version of Kubernetes.</p>
<p>You can refer to the official documentation <a href="https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions" rel="nofollow noreferrer">here</a>.</p>
<p>This means that if Istio v1.7 is supported only on K8S v1.18.x, it will <strong>NOT</strong> work on AKS v1.19.x.</p>
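<p>A quick way to check which Kubernetes versions AKS currently offers in your region, and which version the cluster is actually running (a sketch; the location value is illustrative):</p>
<pre class="lang-bash prettyprint-override"><code># List Kubernetes versions available to AKS in a given region
az aks get-versions --location westeurope --output table

# Check the version the cluster is actually running
kubectl version
</code></pre>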
| Abhinav Thakur |
<p>A while back a <code>GKE</code> cluster got created which came with a <code>daemonset</code> of:</p>
<pre><code>kubectl get daemonsets --all-namespaces
...
kube-system prometheus-to-sd 6 6 6 3 6 beta.kubernetes.io/os=linux 355d
</code></pre>
<p>Can I delete this <code>daemonset</code> without issue?
What is it being used for?
What functionality would I be losing without it?</p>
| Chris Stryczynski | <h2><strong>TL;DR</strong></h2>
<p><strong>Even if you delete it, it will be back.</strong></p>
<hr>
<h2>A little bit more explanation</h2>
<p>Citing user @Yasen's explanation of what <code>prometheus-to-sd</code> is:</p>
<blockquote>
<p>prometheus-to-sd is a simple component that can scrape metrics stored in <a href="https://prometheus.io/docs/instrumenting/exposition_formats/" rel="nofollow noreferrer">prometheus text format</a> from one or multiple components and push them to the Stackdriver. Main requirement: k8s cluster should run on GCE or GKE.</p>
<p><a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd" rel="nofollow noreferrer">Github.com: Prometheus-to-sd</a></p>
</blockquote>
<p>Assuming that the command deleting this daemonset will be:</p>
<p><code>$ kubectl delete daemonset prometheus-to-sd --namespace=kube-system</code> </p>
<p><strong>Executing this command will indeed delete the daemonset but it will be back after a while.</strong></p>
<p>The <code>prometheus-to-sd</code> daemonset is managed by the <strong>Addon-Manager</strong>, which will recreate the deleted daemonset back to its original state.</p>
<p>Below is the part of the <code>prometheus-to-sd</code> daemonset <code>YAML</code> definition which states that this daemonset is managed by <code>addonmanager</code>: </p>
<pre><code> labels:
addonmanager.kubernetes.io/mode: Reconcile
</code></pre>
<p>You can read more about it by following: <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="nofollow noreferrer">Github.com: Kubernetes: addon-manager</a></p>
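<p>You can confirm this on your own cluster by looking at the daemonset's labels (a quick sketch):</p>
<pre class="lang-bash prettyprint-override"><code># The addonmanager.kubernetes.io/mode=Reconcile label means the Addon-Manager
# will recreate the daemonset if it is deleted or modified
kubectl get daemonset prometheus-to-sd -n kube-system --show-labels
</code></pre>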
<hr>
<p>Deleting this daemonset is strictly connected to the monitoring/logging solution you are using with your <code>GKE</code> cluster. There are 2 options: </p>
<ul>
<li>Stackdriver logging/monitoring</li>
<li>Legacy logging/monitoring </li>
</ul>
<h3>Stackdriver logging/monitoring</h3>
<p>You need to completely disable logging and monitoring of your <code>GKE</code> cluster to delete this daemonset. </p>
<p>You can do it by following a path: </p>
<p><code>GCP -> Kubernetes Engine -> Cluster -> Edit -> Kubernetes Engine Monitoring -> Set to disabled</code>. </p>
<p><a href="https://i.stack.imgur.com/q28q2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q28q2.png" alt="Disabling Stackdriver"></a></p>
<h3>Legacy logging/monitoring</h3>
<p>If you are using a legacy solution which is available to <code>GKE</code> version <code>1.14</code>, you need to disable the option of <code>Legacy Stackdriver Monitoring</code> by following the same path as above. </p>
<p><a href="https://i.stack.imgur.com/GHKed.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GHKed.png" alt="Disabling Legacy"></a></p>
<p>Let me know if you have any questions about that.</p>
| Dawid Kruk |
<p>I am creating a new Service of type LoadBalancer in Google Cloud.</p>
<p>For AWS or Azure, we can specify the connection idle timeout by providing an annotation in the metadata of service.yaml.</p>
<p>What is the similar annotation for Google Cloud?</p>
<p><strong>service.kubernetes.io/aws-load-balancer-connection-idle-timeout: "500"</strong></p>
| Rohit Aggarwal | <p><strong>There is no possibility to configure the <code>idle connection timeout</code> for a service type of <code>LoadBalancer</code> in <code>GKE</code>.</strong></p>
<blockquote>
<p>Google Cloud external TCP/UDP Network Load Balancing (after this referred to as Network Load Balancing) is a regional, non-proxied load balancer.</p>
<p> <em><a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">Cloud.google.com: External TCP/UDP LoadBalancer</a></em> </p>
</blockquote>
<p>As said above, the network load balancer does not perform any type of modifications on the path as it's not a proxy but a forwarding rule. <strong>It does not provide any timeout facility.</strong> </p>
<p>If you are having issues with <code>idle connections</code>, please check the whole route that the traffic takes to pinpoint where the issue could lie.</p>
<p>Please take a look on additional documentation: </p>
<ul>
<li><a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">Cloud.google.com: External TCP/UDP LoadBalancer</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service" rel="nofollow noreferrer">Cloud.google.com: Kubernetes engine: Services</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Services</a></li>
</ul>
| Dawid Kruk |
<p>I am running a Kubernetes cluster with <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a> configured as shown below:</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: ${ingress_http_port}
protocol: TCP
- containerPort: 443
hostPort: ${ingress_https_port}
protocol: TCP
networking:
kubeProxyMode: "ipvs"
</code></pre>
<p>The cluster is running inside the kind-control-plane docker container:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53d9511b8282 kindest/node:v1.21.1 "/usr/local/bin/entr…" 5 hours ago Up 5 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:41393->6443/tcp kind-control-plane
</code></pre>
<p>I have also successfully deployed a deployment running a Node.js application inside a pod, and I have already exposed a service to access the app through an ingress controller; everything works as expected:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: application-deployment
spec:
ports:
- name: http
port: 3000
protocol: TCP
selector:
app: application-deployment
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: application-deployment
spec:
rules:
- http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: application-deployment
port:
number: 3000
</code></pre>
<p>I am using the WebStorm IDE to develop the application running inside the pod, and I am trying to configure a remote debugger to connect to the application inside the Kind cluster. I know how to configure a debugger running inside a Docker container, but I don't know how to run a debugger inside a Kubernetes pod running in a Docker container.</p>
<p>I have already tried to configure it through WebStorm with the settings below:</p>
<p><a href="https://i.stack.imgur.com/q7cp8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q7cp8.png" alt="enter image description here" /></a></p>
<p>And these are the settings under the <strong>Docker container settings</strong> label:</p>
<p><a href="https://i.stack.imgur.com/CfGeB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CfGeB.png" alt="enter image description here" /></a></p>
<p>Any suggestions or workarounds in order to accomplish this would be more than appreciated.</p>
<p>Thank you in advance!</p>
| Charalarg | <p>Finally, I managed to connect the remote debugger by following the steps described below:</p>
<ol>
<li>Start the node process inside the pod with the <code>--inspect-brk</code> arg in order to be able to attach a debugger. (<code>E.g. node --inspect-brk --loader ts-node/esm src/server.ts</code>)</li>
<li>Then I forwarded the debug port from the pod to my local computer by running the command <code>kubectl port-forward deploy/application-deployment 9229:9229</code></li>
<li>Finally I created an <strong>Attach to Node.js/Chrome</strong> run/debug configuration on WebStorm instead of the Node.js configuration as I tried on the beginning and everything worked like a charm.</li>
</ol>
<p><a href="https://www.jetbrains.com/help/webstorm/running-and-debugging-node-js.html" rel="nofollow noreferrer">This</a> linked helped me configure the described solution.</p>
| Charalarg |
<p>I am currently trying to create an EFS for use within an EKS cluster. I've followed all the instructions, and everything seems to be working for the most part. However, when trying to apply the multiple_pods example deployment from <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver" rel="nofollow noreferrer">here</a>, the pods cannot successfully mount the file system. The PV and PVC are both bound and look good; however, the pods do not start and yield the following error message:</p>
<pre><code> Warning FailedMount 116s (x10 over 6m7s) kubelet, ip-192-168-42-94.eu-central-1.compute.internal MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-080b8b50:/" at "/var/lib/kubelet/pods/3f7c898d-c3de-42e7-84e5-bf3b56e691ea/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-080b8b50:/ /var/lib/kubelet/pods/3f7c898d-c3de-42e7-84e5-bf3b56e691ea/volumes/kubernetes.io~csi/efs-pv/mount
Output: Traceback (most recent call last):
File "/sbin/mount.efs", line 1375, in <module>
main()
File "/sbin/mount.efs", line 1355, in main
bootstrap_logging(config)
File "/sbin/mount.efs", line 1031, in bootstrap_logging
raw_level = config.get(CONFIG_SECTION, 'logging_level')
File "/lib64/python2.7/ConfigParser.py", line 607, in get
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'mount'
Warning FailedMount 110s (x2 over 4m4s) kubelet, ip-192-168-42-94.eu-central-1.compute.internal Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage default-token-d47s9]: timed out waiting for the condition
</code></pre>
<p>To me, the error looks like it may not be related to my configuration; however, as I expect AWS's example deployments to work, I doubt that. I'm neither familiar with Python's ConfigParser nor with EFS, so I can only guess what that error really means. Thanks a lot for any help!</p>
| Max Luchterhand | <p>I faced a similar problem.</p>
<p>It was fixed by updating the efs-csi-node daemonset image from <code>amazon/aws-efs-csi-driver:v0.3.0</code> to <code>amazon/aws-efs-csi-driver:latest</code>.</p>
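<p>A minimal sketch of how that image bump might be done in place (the container name <code>efs-plugin</code> is an assumption and may differ in your manifest):</p>
<pre class="lang-bash prettyprint-override"><code># Update the driver image on the daemonset and watch the rollout
kubectl set image daemonset/efs-csi-node efs-plugin=amazon/aws-efs-csi-driver:latest -n kube-system
kubectl rollout status daemonset/efs-csi-node -n kube-system
</code></pre>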
| Maxim Kruglov |
<p>I am a newbie and I may ask a stupid question, but I could not find answers on Kind or on Stack Overflow, so I dare to ask:</p>
<ul>
<li>I run kind (Kubernestes-in-Docker) on a Ubuntu machine, with 32GB memory and 120 GB disk.</li>
<li>I need to run a Cassandra cluster on this Kind cluster, and each node needs at least 0.5 CPU and 1GB memory.</li>
</ul>
<p>When I look at the node, it gives this:</p>
<pre><code>Capacity:
cpu: 8
ephemeral-storage: 114336932Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32757588Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 114336932Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32757588Ki
pods: 110
</code></pre>
<p>So in theory, there are more than enough resources to go around. However, when I try to deploy the Cassandra deployment, the first pod stays in a 'Pending' status because of a lack of resources. And indeed, the node resources look like this:</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
</code></pre>
<p>The node does not actually get access to the available resources: it stays limited to 10% of a CPU and 50MB of memory.</p>
<p>So, reading the exchange above and having read #887, I understand that I need to actually configure Docker on my host machine in order for Docker to allow the containers simulating the Kind nodes to grab more resources. But then... how can I give such parameters to Kind so that they are taken into account when creating the cluster?</p>
| Thierry Souche |
<p>Sorry for this post: I finally found out that the issue was related to the <code>storageclass</code> not being properly configured in the spec of the Cassandra cluster, and not related to the dimensioning of the nodes.</p>
<p>I changed the <code>cassandra-statefulset.yaml</code> file to indicate the 'standard' storageclass: this storageclass is provisioned by default on a KinD cluster since version 0.7, and it works fine.
<em>Since Cassandra is resource-hungry, and depending on the machine, you may have to increase the <code>timeout</code> parameters so that the pods are not considered faulty during the deployment of the Cassandra cluster. I had to increase the timeouts from 15s and 5s to 25s and 15s, respectively.</em></p>
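<p>A quick check that the default storage class exists before deploying (on recent KinD versions it should be named <code>standard</code> and marked as default):</p>
<pre class="lang-bash prettyprint-override"><code># Confirm a default storage class is present; Cassandra's volumeClaimTemplates
# must reference an existing class (here: standard)
kubectl get storageclass
</code></pre>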
<p>This topic should be closed.</p>
| Thierry Souche |
<p>I want to schedule a Kubernetes CronJob in my local timezone (GMT+7). Currently, when I schedule a CronJob in k8s, I need to schedule it in UTC, but I want to schedule it in my local timezone. As specified in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">Kubernetes documentation</a>, I need to change the timezone of the kube-controller-manager as follows:</p>
<blockquote>
<p>All CronJob schedule: times are based on the timezone of the
kube-controller-manager.</p>
<p>If your control plane runs the kube-controller-manager in Pods or bare
containers, the timezone set for the kube-controller-manager container
determines the timezone that the cron job controller uses.</p>
</blockquote>
<p>But I can't find a way to set the timezone for the kube-controller-manager. I'm using Kubernetes on-premise v1.17. I found the controller manager manifest file at /etc/kubernetes/manifests/kube-controller-manager.yaml, but I can't find a way or any documentation to change its timezone.</p>
| Teerakiat Chitawattanarat | <p>Now, if you use Kubernetes version 1.25 or later, you can use the <code>timeZone</code> field to specify a time zone for a CronJob.</p>
<pre><code>spec:
timeZone: "Asia/Bangkok"
schedule: "0 17 * * *"
</code></pre>
<p>But if your Kubernetes version is >= 1.21 and < 1.25, you can set the timezone inline in the schedule field.</p>
<pre><code>spec:
schedule: "CRON_TZ=Asia/Bangkok 0 17 * * *"
</code></pre>
<p>Official document: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones</a></p>
<p>TimeZone list: <a href="https://www.ibm.com/docs/en/cloudpakw3700/2.3.0.0?topic=SS6PD2_2.3.0/doc/psapsys_restapi/time_zone_list.html" rel="noreferrer">https://www.ibm.com/docs/en/cloudpakw3700/2.3.0.0?topic=SS6PD2_2.3.0/doc/psapsys_restapi/time_zone_list.html</a></p>
| Thanawat |
<p>When using the kubectl CLI in a Windows command prompt, I get a prompt to enter a username. That works fine, but when I press Enter after entering a username, the prompt for a password appears and then immediately acts as if I hit the Enter key, with no chance to enter the password. It looks like this; from the screen print you can see that I am using kubectl version 1.15.</p>
<p><a href="https://i.stack.imgur.com/9gZj8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9gZj8.png" alt="enter image description here"></a></p>
<p>If I try this using Git Bash, it behaves the same but responds with the error shown below</p>
<p><a href="https://i.stack.imgur.com/c83JD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c83JD.png" alt="enter image description here"></a></p>
<p>Same deal: the password prompt is not waiting for input.</p>
<p>Has anyone ever seen this, or does anyone have thoughts on how I can provide a username and password to kubectl without storing them as plain text in the config file?</p>
<p>Also, I am using a corporate Kubernetes cluster, so there is no option to move to a more current version or do anything else that would require admin access.</p>
| vscoder | <p>Posting this answer as community wiki with general guidelines for issues similar to this: </p>
<p><strong>TL;DR</strong></p>
<p><strong>The prompt for username and password is most probably caused by misconfigured <code>.kube/config</code>.</strong></p>
<p>As for: </p>
<blockquote>
<p>Anyone ever seen this or have any thoughts on how I can provide a username and password to kubectl without storing it a plain test in the config file?</p>
</blockquote>
<p>There are a lot of possibilities for authentication in Kubernetes. All of them have some advantages and disadvantages. Please take a look on below links:</p>
<ul>
<li><a href="https://medium.com/@etienne_24233/comparing-kubernetes-authentication-methods-6f538d834ca7" rel="nofollow noreferrer">Medium.com: Comparing kubernetes authentication methods</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes.io: Authentication</a></li>
</ul>
<hr>
<p>The prompt for username and password can appear when <code>.kube/config</code> file is misconfigured. I included one possible reason below: </p>
<p>Starting with correctly configured <code>.kube/config</code> for a <code>minikube</code> instance. </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: PATH_TO_SOMEWHERE/.minikube/ca.crt
server: https://172.17.0.3:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: PATH_TO_SOMEWHERE/client.crt
client-key: PATH_TO_SOMEWHERE/client.key
</code></pre>
<p>Issuing commands with above <code>.kube/config</code> should not prompt for user and password as below: </p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods
No resources found in default namespace.
</code></pre>
<p>Editing <code>.kube/config</code> and changing: </p>
<pre class="lang-yaml prettyprint-override"><code> user: minikube
</code></pre>
<p>to: </p>
<pre class="lang-yaml prettyprint-override"><code> user: not-minikube
</code></pre>
<p>Will lead to: </p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods
Please enter Username: minikube
Please enter Password:
</code></pre>
<p>Correctly configuring <code>.kube/config</code> is heavily dependent on a solution used (like <code>minikube</code>, <code>kubeadm</code> provisioned cluster, and a managed cluster like <code>GKE</code>). Please refer to official documentation of solution used. </p>
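<p>If the cluster offers certificate-based authentication, one way to avoid a plain-text password in <code>.kube/config</code> is to reference (or embed) client certificates instead; a minimal sketch with illustrative names and paths:</p>
<pre class="lang-bash prettyprint-override"><code># Store credentials as a client certificate/key pair instead of username/password
kubectl config set-credentials my-user --client-certificate=./client.crt --client-key=./client.key --embed-certs=true
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
</code></pre>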
| Dawid Kruk |
<p>So I've read a bunch of similar questions/issues on Stack Overflow and I understand them well enough, but I'm not sure what I am missing.</p>
<p>deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: dev-namespace
labels:
web: nginx
spec:
replicas: 2
selector:
matchLabels:
web: nginx
template:
metadata:
labels:
web: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 8080
</code></pre>
<p>service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
web: nginx
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>This is my <em>minikube ip</em>:</p>
<pre><code>$ minikube ip
192.168.49.2
</code></pre>
<p>This is the service</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service NodePort 10.104.139.228 <none> 80:30360/TCP 14
</code></pre>
<p>This is the deployment</p>
<pre><code>$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 14h
</code></pre>
<p>This is the pods</p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5b78696cc8-9fpmr 1/1 Running 0 14h 172.17.0.6 minikube <none> <none>
nginx-deployment-5b78696cc8-h4m72 1/1 Running 0 14h 172.17.0.4 minikube <none> <none>
</code></pre>
<p>This is the endpoints</p>
<pre><code>$ kubectl get endpoints
NAME ENDPOINTS AGE
nginx-service 172.17.0.4:8080,172.17.0.6:8080 14h
</code></pre>
<p>But when I try to <em>curl 10.104.139.228:30360</em> it just hangs. When I try to <em>curl 192.168.49.2:30360</em> I get the <em>Connection refused</em></p>
<p>I am sure that using <em>NodePort</em> means I need to use the <em>node</em> ip and that would be the server's local IP since I am using <em>minikube</em> and <em>control plane</em> and <em>worker</em> are in the same server.</p>
<p>What am I missing here? Please help, this is driving me crazy. I should mention that I am able to <em>kubectl exec -ti pod-name -- /bin/bash</em> and if I do a <em>curl localhost</em> I do get the famous response "Welcome to NGINX"</p>
| eljoeyjojo | <p>Never mind :/ I feel very foolish. I see that the mistake was the container ports: my <em>nginx</em> pods are listening on port 80, not port 8080.</p>
<p>For anyone out there, I updated my config files to this:</p>
<p>service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: busy-qa
spec:
type: NodePort
selector:
web: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: busy-qa
labels:
web: nginx
spec:
replicas: 2
selector:
matchLabels:
web: nginx
template:
metadata:
labels:
web: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Now when I <em>curl</em> I get the NGINX response</p>
<pre><code>$ curl 192.168.49.2:31168
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
</code></pre>
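<p>For anyone debugging a similar mismatch, a quick way to confirm which port the container is really serving on (a sketch; it assumes the stock nginx image with its default config and the <code>busy-qa</code> namespace from the manifests above):</p>
<pre class="lang-bash prettyprint-override"><code># The default nginx config contains the listen directive (port 80)
kubectl exec -n busy-qa deploy/nginx-deployment -- grep -R "listen" /etc/nginx/conf.d/
</code></pre>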
| eljoeyjojo |
<p>We have written a Java application (job) which reads some files from Azure Blob storage and writes the content to Azure Event Hub. This batch job runs at a scheduled interval.
We have deployed and scheduled the application as a Kubernetes CronJob. We record events with some details when files are moved from Blob storage to Event Hub, but those events are not showing up in Application Insights. However, we can see the events when we run locally from an IDE (Eclipse or IntelliJ).</p>
<p>Below is deployment yaml file</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: job-name-1.0.0.0
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
nodeSelector:
agentpool: agentpoolname
containers:
- name: job-name-0
image: opsregistry.azurecr.io/job-name:v1.0.0.0
imagePullPolicy: Always
command: ["java", "-jar","job-name-1.0.0.0.jar","$(connection_string)"]
env:
- name: connection_string
valueFrom:
configMapKeyRef:
name: job-configmap
key: connectionString
resources:
limits:
cpu: "15"
requests:
cpu: "0.5"
restartPolicy: Never
</code></pre>
<p>Below is the Java code used to write events to Azure Application Insights:</p>
<pre><code> TelemetryClient telemetry = new TelemetryClient();
telemetry.getContext().setInstrumentationKey(instrumentationKey);
telemetry.getContext().getCloud().setRole("CloudRoleName");
telemetry.trackTrace("SOME INFORMATION ABOUT JOB", SeverityLevel.Information);
</code></pre>
<p>Please note that we have deployed another Kafka stream job with the same code but deployed as <strong>kind: Deployment</strong> in the YAML file, and its events are flowing into Application Insights without the issue we are facing with <strong>kind: CronJob</strong>.</p>
<p>Are there any changes we have to make for cron jobs?</p>
<p>Thanks in advance.</p>
| chandu ram | <p>It is quite possible that the job is ending before TelemetryClient can flush the pending telemetry from its buffer. For a continuously running job (like your Kafka stream job in this case), it's not a problem, but for a scheduled job, execution ends and leaves pending telemetry behind. To fix this, add the line below at the end of the run to ensure pending telemetry is written to the channel before execution ends.</p>
<pre class="lang-java prettyprint-override"><code>// here 'telemetry' is the instance of TelemetryClient as per your shared code
telemetry.flush();
</code></pre>
| krishg |
<p>I have a Kubernetes cluster with 3 masters and 7 workers. I use Calico as the CNI. When I deploy Calico, the calico-kube-controllers-xxx pod fails because it cannot reach 10.96.0.1:443.</p>
<pre><code>2020-06-23 13:05:28.737 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"kubernetes"}
W0623 13:05:28.740128 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-06-23 13:05:28.742 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-06-23 13:05:38.742 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-06-23 13:05:38.742 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
</code></pre>
<p>this is the situation in the kube-system namespace:</p>
<pre><code>kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-77d6cbc65f-6bmjg 0/1 CrashLoopBackOff 56 4h33m
calico-node-94pkr 1/1 Running 0 36m
calico-node-d8vc4 1/1 Running 0 36m
calico-node-fgpd4 1/1 Running 0 37m
calico-node-jqgkp 1/1 Running 0 37m
calico-node-m9lds 1/1 Running 0 37m
calico-node-n5qmb 1/1 Running 0 37m
calico-node-t46jb 1/1 Running 0 36m
calico-node-w6xch 1/1 Running 0 38m
calico-node-xpz8k 1/1 Running 0 37m
calico-node-zbw4x 1/1 Running 0 36m
coredns-5644d7b6d9-ms7gv 0/1 Running 0 4h33m
coredns-5644d7b6d9-thwlz 0/1 Running 0 4h33m
kube-apiserver-k8s01 1/1 Running 7 34d
kube-apiserver-k8s02 1/1 Running 9 34d
kube-apiserver-k8s03 1/1 Running 7 34d
kube-controller-manager-k8s01 1/1 Running 7 34d
kube-controller-manager-k8s02 1/1 Running 9 34d
kube-controller-manager-k8s03 1/1 Running 8 34d
kube-proxy-9dppr 1/1 Running 3 4d
kube-proxy-9hhm9 1/1 Running 3 4d
kube-proxy-9svfk 1/1 Running 1 4d
kube-proxy-jctxm 1/1 Running 3 4d
kube-proxy-lsg7m 1/1 Running 3 4d
kube-proxy-m257r 1/1 Running 1 4d
kube-proxy-qtbbz 1/1 Running 2 4d
kube-proxy-v958j 1/1 Running 2 4d
kube-proxy-x97qx 1/1 Running 2 4d
kube-proxy-xjkjl 1/1 Running 3 4d
kube-scheduler-k8s01 1/1 Running 7 34d
kube-scheduler-k8s02 1/1 Running 9 34d
kube-scheduler-k8s03 1/1 Running 8 34d
</code></pre>
<p>Besides, CoreDNS also cannot reach the internal Kubernetes service.</p>
<p>Within a node, if I run <code>wget -S 10.96.0.1:443</code>, I receive a response.</p>
<pre><code>wget -S 10.96.0.1:443
--2020-06-23 13:12:12-- http://10.96.0.1:443/
Connecting to 10.96.0.1:443... connected.
HTTP request sent, awaiting response...
HTTP/1.0 400 Bad Request
2020-06-23 13:12:12 ERROR 400: Bad Request.
</code></pre>
<p>But, if I run <code>wget -S 10.96.0.1:443</code> in a pod, I receive a <strong>timeout error</strong>.</p>
<p>Also, I cannot ping nodes from pods.</p>
<p>The cluster pod CIDR is 192.168.0.0/16.</p>
| fsilletti | <p>I resolved it by recreating the cluster with a different pod CIDR.</p>
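<p>For reference, a minimal sketch of what "a different pod CIDR" can look like on a kubeadm-provisioned cluster; the CIDR value is illustrative and must not overlap the node or service networks, and Calico's <code>CALICO_IPV4POOL_CIDR</code> has to be set to match it:</p>
<pre class="lang-bash prettyprint-override"><code># Re-initialize the control plane with a non-overlapping pod network
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>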
| fsilletti |
<p>The problem: I have a spring boot service running on K8s. Generally API calls can be served by any pod of my service, but for a particular use case we have a requirement to propagate the call to all instances of the service.</p>
<p>A bit of googling led me to <a href="https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002" rel="nofollow noreferrer">https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002</a> where they suggest using</p>
<p><code>kubectl get endpoints cache -o yaml</code></p>
<p>and proceeding from there. This is fine for a human or a CLI environment, but how do I accomplish the same from within my Java service, aside from executing the above command via <code>Process</code> and parsing the output?</p>
<p>Essentially I want a way to do what the above command is doing but in a more java-friendly way.</p>
| hoodakaushal | <p>It seems like your Spring Boot service should be listening to a message queue. When one instance receives the specific HTTP request to the <code>/propagateme</code> endpoint, it publishes an event to the <code>Propagation topic</code>; all the other instances listening to that topic receive the message and perform the specific action.</p>
<p>See JMS <a href="https://spring.io/guides/gs/messaging-jms/" rel="nofollow noreferrer">https://spring.io/guides/gs/messaging-jms/</a></p>
| Nick Bonilla |
<p>I'm looking to use the Kubernetes Python client to delete a deployment, and then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function, something like the following:</p>
<pre><code>try:
# try to delete if exists
AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
# handle exception
# wait for all pods associated with deployment to be deleted.
for e in w.stream(
v1.list_namespaced_pod, namespace="default",
        label_selector="mylabel=my-value",
timeout_seconds=300):
pod_name = e['object'].metadata.name
print("pod_name", pod_name)
if e['type'] == 'DELETED':
w.stop()
break
</code></pre>
<p>However, I see two problems with this.</p>
<ol>
<li>If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.</li>
<li>Upon seeing events in the event stream for the pod activity, how do I know all the pods got deleted? It seems fragile to count them.</li>
</ol>
<p>I'm basically looking to replace the <code>kubectl delete --wait</code> functionality with a python script.</p>
<p>Thanks for any insights into this.</p>
| Joe J | <pre class="lang-py prettyprint-override"><code>import json
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()


def delete_pod(pod_name):
    return v1.delete_namespaced_pod(name=pod_name, namespace="default")


def delete_pod_if_exists(pod_name):
    # Keep issuing the delete until the API answers 404,
    # which means the pod is really gone.
    while True:
        try:
            delete_pod(pod_name)
        except ApiException as e:
            if json.loads(e.body)['code'] == 404:
                return
        time.sleep(1)  # avoid hammering the API server
</code></pre>
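<p>To also cover the second part of the question — waiting until <strong>all</strong> pods behind the deployment are gone without relying on counting watch events — a simple poll of <code>list_namespaced_pod</code> with the label selector avoids the "watch hangs when there is nothing left to delete" problem. This is only a sketch under the assumption that <code>v1</code> is the CoreV1Api client from the snippet above and that your pods carry a label such as <code>mylabel=my-value</code>, as in your watch example:</p>
<pre><code>import time

def wait_for_pods_deleted(label_selector, namespace="default", timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace=namespace,
                                      label_selector=label_selector)
        if not pods.items:          # nothing matches any more -> all gone
            return
        time.sleep(2)               # poll interval
    raise TimeoutError(f"pods with {label_selector} still present after {timeout}s")

wait_for_pods_deleted("mylabel=my-value")
</code></pre>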
| Chetan Jain |
<p>I have this json</p>
<pre><code>{'kind': 'Secret', 'foo': 'secret_value'}
</code></pre>
<p>How can I use this json to create a secret in Kubernetes?</p>
<p>I want to run the equivalent of</p>
<pre><code>kubectl create secret {'kind': 'Secret', 'foo': 'secret_value'}
</code></pre>
| jor2 | <p><code>kubectl create secret generic secret-name --from-file=./your-file.json</code></p>
<p>or</p>
<p><code>kubectl create secret generic secret-name --from-literal=foo=secret_value</code></p>
| Luke Briner |
<p>How can I edit the YAML file below to get all secrets, keys, and certificates in my Azure KeyVault, instead of using an array and writing every entry out by hand?</p>
<p>I'm only able to get the secret and key listed below, but I'd like to expose all the data stored in my AKV.</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: azure-kvname-podid
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
cloudName: "" # [OPTIONAL for Azure] if not provided, azure environment will default to AzurePublicCloud
objects: |
array:
- |
objectName: secret1
objectType: secret # object types: secret, key or cert
objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
- |
objectName: key1
objectType: key
objectVersion: ""
tenantId: "tid" # the tenant ID of the KeyVault
</code></pre>
<p><a href="https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/examples/v1alpha1_secretproviderclass_pod_identity.yaml" rel="nofollow noreferrer">reference1</a>
<a href="https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes" rel="nofollow noreferrer">reference2</a></p>
| Mahmoud | <p>I've used Azure CSI a bit, and there are pretty much 2 ways I know of.</p>
<p>A quick disclaimer, since it seems to be what you're asking for: there is no one-liner to get all your secrets from Azure KeyVault. If you expect a "select * from AKV" without specifying the IDs of those secrets/keys/certs, then the Secrets Store CSI driver will not be what you expect. You more or less have to write a fair-sized YAML file to make it work for ALL your Azure KeyVault secrets.<br />
That said, you can deploy a very large YAML file with 200 secrets using a single command if you want, which is mentioned below.</p>
<p>So with that out of the way, I'll go over pros/cons of the 2 methods I use, and give a sample of how they work.</p>
<h2><strong>Method 1</strong></h2>
<p><strong>Pros:</strong> Shorter YAML file, all AKV secrets are in one variable.</p>
<p><strong>Cons:</strong> All your AKV secrets end up in one variable, which, depending on your application, might not work. For example, this equates to one single volume mount, and the Pod would have access to ALL the secrets you tell it to connect to.</p>
<p><strong>How to implement:</strong>
Actually, the sample YAML you have is pretty much how to have multiple secrets. Just keep adding to the 'array' field with all the secrets you want Azure CSI to inject for you, and below is a modified example:</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: azure-kvname-podid # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret1
objectType: secret
- |
objectName: key1
objectType: key
- |
objectName: your_db_password # So this ID, matches the same ID in your Azure KeyVault (AKV)
objectType: secret # object types: secret, key or cert. There no other types for AKV.
- |
objectName: your_blob_storage_password # So this ID, matches the same ID in your Azure KeyVault (AKV)
objectType: secret # object types: secret, key or cert. There no other types for AKV.
- |
objectName: even_more_secrets_in_your_AKV # So this ID, matches the same ID in your Azure KeyVault (AKV)
objectType: secret # object types: secret, key or cert. There no other types for AKV.
tenantId: "tid" # the tenant ID of the KeyVault
</code></pre>
<h2>Method 2</h2>
<p><strong>Pros:</strong> Your secrets are broken up into individual variables, allowing flexibility to your deployment to select which ones get attached to which Pod(s)</p>
<p><strong>Cons:</strong> It's going to be a massively long YAML file, with a lot of duplicated fields. That said, it is still deployed with a single command for all the secrets, using <code>"kubectl apply -f <FILE_NAME>.yaml --namespace=<NAMESPACE>"</code></p>
<p><strong>How to implement:</strong>
It's pretty much copying/pasting what you had, just split up into multiple sections. Below is an example of 5 AKV secrets split into 5 individual variables that can be volume-mounted in your application:</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: akv-secret1 # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret1
objectType: secret
tenantId: "tid" # the tenant ID of the KeyVault
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: akv-secret2 # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret2
objectType: secret
tenantId: "tid" # the tenant ID of the KeyVault
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: akv-secret3 # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret3
objectType: secret
tenantId: "tid" # the tenant ID of the KeyVault
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: akv-secret4 # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret4
objectType: secret
tenantId: "tid" # the tenant ID of the KeyVault
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: akv-secret5 # This ID, is what you use in your Volume Mapping to reference this.
spec:
provider: azure
parameters:
usePodIdentity: "true"
keyvaultName: "kvname"
objects: |
array:
- |
objectName: secret5
objectType: secret
tenantId: "tid" # the tenant ID of the KeyVault
</code></pre>
| Brett Kingyens |
<p>I've been looking for a similar question for a while but I haven't found one. I have a remote Kubernetes cluster with one master and two workers. The versions installed are as follows:
Kubernetes: 1.15.1-0
Docker: 18.09.1-3.el7
I'm trying to deploy and expose a JAR file of a Spring project that has one REST endpoint.</p>
<p>Deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: microservices-deployment
labels:
app: microservices-deployment
spec:
replicas: 3
template:
metadata:
name: microservices-deployment
labels:
app: microservices-deployment
spec:
containers:
- name: microservices-deployment
image: **my_repo**/*repo_name*:latest
imagePullPolicy: Always
ports:
- containerPort: 8085
restartPolicy: Always
selector:
matchLabels:
app: microservices-deployment
</code></pre>
<p>service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: microservices-service
spec:
selector:
app: microservices-deployment
ports:
- port: 8085
targetPort: 8085
type: NodePort
</code></pre>
<p>my application.properties:</p>
<pre><code>server.port=8085
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM openjdk:8
ADD target/microservices.jar microservices.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "microservices.jar"]
</code></pre>
<p>It looks like my pods are ready and everything looks good, but I can't access the service I exposed even from the master's terminal.
Does anyone have any idea?
Thanks in advance.</p>
<p><em><strong>UPDATE</strong></em></p>
<p>I'm able to telnet from my master to port 30000 on my nodes (after I specified 30000 as my NodePort), as well as telnet to my pods on port 8085. When I try to telnet from the master to any other port on the nodes/pods I get connection refused, so I think that's a good start. Still, I'm unable to access the REST endpoint I specified, although it works on Docker locally:
docker run -p 8085:8085 <em>IMAGE_NAME</em></p>
| Yaakov Shami | <p>The problem was network-related. Accessing the endpoint from one of the workers did the trick. Thanks, all.</p>
| Yaakov Shami |
<p>I am new to Kubernetes and trying to learn but I am stuck with an error that I cannot find an explanation for. I am running Pods and Deployments in my cluster and they are running perfectly as shown in the CLI, but after a while they keep crashing and the Pods need to restart.</p>
<p>I did some research to fix my issue before posting here, and the way I understood it, I would have to create a Deployment so that the ReplicaSet manages my Pods' lifecycle instead of deploying Pods independently. But as you can see, the Pods in the Deployment are crashing as well.</p>
<p>kubectl get pods</p>
<pre><code>operator-5bf8c8484c-fcmnp 0/1 CrashLoopBackOff 9 34m
operator-5bf8c8484c-phptp 0/1 CrashLoopBackOff 9 34m
operator-5bf8c8484c-wh7hm 0/1 CrashLoopBackOff 9 34m
operator-pod 0/1 CrashLoopBackOff 12 49m
</code></pre>
<p>kubectl describe pods operator</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/operator-pod to workernode
Normal Created 30m (x5 over 34m) kubelet, workernode Created container operator-pod
Normal Started 30m (x5 over 34m) kubelet, workernode Started container operator-pod
Normal Pulled 29m (x6 over 34m) kubelet, workernode Container image "operator-api_1:java" already present on machine
Warning BackOff 4m5s (x101 over 33m) kubelet, workernode Back-off restarting failed container
</code></pre>
<p>deployment yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: operator
labels:
app: java
spec:
replicas: 3
selector:
matchLabels:
app: call
template:
metadata:
labels:
app: call
spec:
containers:
- name: operatorapi
image: operator-api_1:java
ports:
- containerPort: 80
</code></pre>
<p>Can someone help me out, how can I debug?</p>
| amin224 | <p>The most probable reason is that the process running in the container finishes its task and exits after a while; the kubelet then restarts the container, which eventually results in CrashLoopBackOff.</p>
<p>To solve this, check the process running in the container and make sure it stays alive. You can wrap the process in a loop inside the container, or override the container's command in the deployment.yaml so that it keeps running.</p>
<p>Here is a reference to help you understand and debug the reason for pod failure:
<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/</a></p>
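<p>If it helps while debugging, the same "reason for failure" information described on that page can also be read programmatically. Below is only a minimal sketch with the official Python client (it assumes kubeconfig access and reuses the <code>operator-pod</code> name from your output); the last termination state usually shows the exit code of the process that keeps finishing:</p>
<pre><code>from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="operator-pod", namespace="default")
for status in pod.status.container_statuses or []:
    terminated = status.last_state.terminated
    if terminated:
        print(status.name, "exit code:", terminated.exit_code,
              "reason:", terminated.reason)
</code></pre>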
| Özcan YILDIRIM |
<p>I am not able to connect to elasticsearch in kubernetes inside docker. My elasticsearch is accessed via kubernetes and I have an index called 'radius_ml_posts'. I am using elasticsearch's python library to connect to elasticsearch. When I run the whole process on my python IDE (Spyder), it works just fine. However, when I try to run it inside a docker container, I get connection issues. What am I missing? Below are my configs and code:</p>
<p>The <code>localhost:9200</code>:</p>
<pre><code>{
"name" : "elasticsearch-dev-client-6858c5f9dc-zbz8p",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "lJJbPJpJRaC1j7k5IGhj7g",
"version" : {
"number" : "6.7.0",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "8453f77",
"build_date" : "2019-03-21T15:32:29.844721Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
</code></pre>
<p>My python code to connect to elasticsearch host:</p>
<pre><code>def get_data_es(question):
es = Elasticsearch(hosts=[{"host": "elastic", "port": 9200}], connection_class=RequestsHttpConnection, max_retries=30,
retry_on_timeout=True, request_timeout=30)
#es = Elasticsearch(hosts='http://host.docker.internal:5000', connection_class=RequestsHttpConnection, max_retries=30, timeout=30)
doc = {'author': 'gunner','text': 'event', "timestamp": datetime.now()}
es.indices.refresh(index="radius_ml_posts")
res = es.index(index="radius_ml_posts", id = 1, body = doc)
res = es.search(index="radius_ml_posts", size = 30, body={ "query": {
"query_string": {
"default_field": "search_text",
"query": question
}
}
}
)
return res
</code></pre>
<p>My <code>docker-compose.yml</code> file:</p>
<pre><code>version: '2.2'
services:
elastic:
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.0
container_name: elastic
environment:
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9300:9300
- 9200:9200
networks:
- elastic
myimage:
image: myimage:myversion
ports:
- 5000:5000
expose:
- 5000
networks:
- elastic
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
</code></pre>
<p>My <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.7.4
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip3 install -U nltk
RUN python3 -m nltk.downloader all
RUN pip --default-timeout=100 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["main.py"]
</code></pre>
<p>The docker commands I am running stepwise:</p>
<ol>
<li><code>docker build -t myimage:myversion .</code></li>
<li><code>docker-compose up</code></li>
</ol>
<p>The error I am getting:</p>
<pre><code>myimage_1 | Traceback (most recent call last):
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
myimage_1 | response = self.full_dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
myimage_1 | rv = self.handle_user_exception(e)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
myimage_1 | reraise(exc_type, exc_value, tb)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
myimage_1 | raise value
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
myimage_1 | rv = self.dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
myimage_1 | return self.view_functions[rule.endpoint](**req.view_args)
myimage_1 | File "main.py", line 41, in launch_app
myimage_1 | ques = get_data_es(ques1)
myimage_1 | File "/app/Text_Cleaning.py", line 32, in get_data_es
myimage_1 | es.indices.refresh(index="radius_ml_posts")
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/client/utils.py", line 92, in _wrapped
myimage_1 | return func(*args, params=params, headers=headers, **kwargs)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/client/indices.py", line 42, in refresh
myimage_1 | "POST", _make_path(index, "_refresh"), params=params, headers=headers
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/transport.py", line 362, in perform_request
myimage_1 | timeout=timeout,
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/connection/http_requests.py", line 157, in perform_request
myimage_1 | raise ConnectionError("N/A", str(e), e)
myimage_1 | elasticsearch.exceptions.ConnectionError: ConnectionError(HTTPConnectionPool(host='elastic', port=9200): Max retries exceeded with url: /radius_ml_posts/_refresh (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f967a9b1710>: Failed to establish a new connection: [Errno -2] Name or service not known'))) caused by: ConnectionError(HTTPConnectionPool(host='elastic', port=9200): Max retries exceeded with url: /radius_ml_posts/_refresh (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f967a9b1710>: Failed to establish a new connection: [Errno -2] Name or service not known')))
</code></pre>
<p>Please help in fixing the issue.</p>
<p>Thanks in advance.</p>
| gunn gunner | <p>I fixed it by using the host as:</p>
<pre><code>host:"host.docker.internal"
</code></pre>
<p>Code change,</p>
<pre><code>es = Elasticsearch(hosts=[{"host": "host.docker.internal", "port": 9200}], connection_class=RequestsHttpConnection, max_retries=30,
retry_on_timeout=True, request_timeout=30)
</code></pre>
| gunn gunner |
<p>I have a kind:deployment file, and they are forcing the image to be defined down here in the "initContainers", but I can't get my image in my own registry to load. If I try to put</p>
<pre><code> imagePullSecrets:
- name: regcred
</code></pre>
<p>in line with the "image" down below, I get <code>error converting YAML to JSON: yaml: found character that cannot start any token</code>. And I get the same thing if I move it around to different spots. Any ideas how I can use imagePullSecrets here?</p>
<pre><code>spec:
template:
metadata:
spec:
initContainers:
- env:
- name: "BOOTSTRAP_DIRECTORY"
value: "/bootstrap-data"
image: "my-custom-registry.com/my-image:1.6.24-SNAPSHOT"
imagePullPolicy: "Always"
name: "bootstrap"
</code></pre>
| Mike K. | <p>Check if you are using tabs for indentation; YAML doesn't allow tabs; it requires spaces.</p>
<p>Also, you should put imagePullSecrets under the pod spec instead of under containers.</p>
<pre><code>spec:
template:
metadata:
spec:
imagePullSecrets:
- name: regcred
initContainers:
</code></pre>
| Hamid Ostadvali |
<p><strong>Goal</strong>: Prepare a <code>values.yaml</code> file for the <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq" rel="noreferrer">rabbitmq chart provided by bitnami</a>, such that the plugin <a href="https://github.com/noxdafox/rabbitmq-message-deduplication" rel="noreferrer">rabbitmq-message-deduplication</a> is ready and available after running <code>helm install ...</code></p>
<p><strong>Previous solution</strong>: Currently, I am using the <code>stable/rabbitmq-ha</code> chart with the following <code>values.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>extraPlugins: "rabbitmq_message_deduplication"
extraInitContainers:
- name: download-plugins
image: busybox
command: ["/bin/sh","-c"]
args: ["
wget
-O /opt/rabbitmq/plugins/elixir-1.8.2.ez/elixir-1.8.2.ez
https://github.com/noxdafox/rabbitmq-message-deduplication/releases/download/0.4.5/elixir-1.8.2.ez
--no-check-certificate
;
wget
-O /opt/rabbitmq/plugins/rabbitmq_message_deduplication-v3.8.4.ez/rabbitmq_message_deduplication-v3.8.4.ez
https://github.com/noxdafox/rabbitmq-message-deduplication/releases/download/0.4.5/rabbitmq_message_deduplication-v3.8.x_0.4.5.ez
--no-check-certificate
"]
volumeMounts:
# elixir is a dependency of the deduplication plugin
- name: elixir
mountPath: /opt/rabbitmq/plugins/elixir-1.8.2.ez
- name: deduplication-plugin
mountPath: /opt/rabbitmq/plugins/rabbitmq_message_deduplication-v3.8.4.ez
extraVolumes:
- name: elixir
emptyDir: {}
- name: deduplication-plugin
emptyDir: {}
extraVolumeMounts:
- name: elixir
mountPath: /opt/rabbitmq/plugins/elixir-1.8.2.ez
subPath: elixir-1.8.2.ez
- name: deduplication-plugin
mountPath: /opt/rabbitmq/plugins/rabbitmq_message_deduplication-v3.8.4.ez
subPath: rabbitmq_message_deduplication-v3.8.4.ez
</code></pre>
<p>This works A-OK. However, <code>stable/rabbitmq-ha</code> is going to disappear next month and so I'm migrating to <code>bitnami/rabbitmq</code>.</p>
<p><strong>Problem</strong>: <code>bitnami/rabbitmq</code> expects <code>values.yaml</code> in a different <a href="https://github.com/bitnami/charts/blob/master/bitnami/rabbitmq/values.yaml" rel="noreferrer">format</a> and I can't for the life of me figure out how I should set up a new <code>values.yaml</code> file to achieve the same result. I've tried messing around with <code>command</code>, <code>args</code> and <code>initContainers</code> but I just can't get it done...</p>
<p>P.S. I have a cluster running locally using minikube. I don't believe this is relevant, but putting this here just in case.</p>
<p><strong>UPDATE:</strong> Francisco's answer really helped. Somehow I missed that part of the documentation.</p>
<p>My new <code>.yaml</code> looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>communityPlugins: "https://github.com/noxdafox/rabbitmq-message-deduplication/releases/download/0.4.5/elixir-1.8.2.ez https://github.com/noxdafox/rabbitmq-message-deduplication/releases/download/0.4.5/rabbitmq_message_deduplication-v3.8.x_0.4.5.ez"
extraPlugins: "rabbitmq_message_deduplication"
</code></pre>
<p>It gets the plugin working just like I wanted, and with much less configuration. Good stuff.</p>
| igg | <p>Thanks for choosing our chart! Our <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq" rel="noreferrer">[bitnami/rabbitmq]</a> chart uses the parameter <code>communityPlugins</code> to install new plugins and <code>extraPlugins</code> to enable them. For example, to enable the <code>elixir</code> plugin you could try changing <code>values.yaml</code> to:</p>
<pre class="lang-yaml prettyprint-override"><code>communityPlugins: "https://github.com/noxdafox/rabbitmq-message-deduplication/releases/download/0.4.5/elixir-1.8.2.ez"
extraPlugins: "rabbitmq_auth_backend_ldap elixir"
</code></pre>
<p>For more information, please look into the <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq#plugins" rel="noreferrer">Plugin section</a> in our README and ask any more doubts if you need to!</p>
| Francisco De Paz Galan |
<p>I'm trying to create a Python client to connect and exec a command in a pod on an AKS cluster; however, when I try to connect I get a 401 Unauthorized error from my client. Has anyone experienced this problem with the API?</p>
<p><strong>API EXCEPTION MESSAGE:</strong></p>
<pre><code>kubernetes.client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'ba23c2b3-d65b-4200-b802-161300119860', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 21 Sep 2020 18:21:59 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
</code></pre>
<p><strong>Python Client API Kubernetes</strong></p>
<pre><code> from __future__ import print_function
import time
import kubernetes.client
import os
from kubernetes.stream import stream
from kubernetes.client.rest import ApiException
from pprint import pprint
name = input("Insira o POD name cadastrado")
namespace = input("namespace do POD cadastrado")
NomeAtuador = input("Insira o nome do atuador a ser gerado o arquivo de configuração")
configuration = kubernetes.client.Configuration()
#configuration.verify_ssl=False
#configuration.assert_hostname = False
configuration.api_key_prefix['authorization'] = 'Bearer'
configuration.api_key['authorization'] = 'MYTOKEN'
configuration.ssl_ca_cert= 'PATH TO CA.CRT'
configuration.host = "HOST_IP:443"
api_instance = kubernetes.client.CoreV1Api(
kubernetes.client.ApiClient(configuration))
exec_command = [
'/etc/openvpn/setup/newClientCert.sh',
(NomeAtuador),
'xxxxxxx']
resp = stream(api_instance.connect_post_namespaced_pod_exec(
(name), (namespace), command=exec_command,
stderr=True, stdin=True,
stdout=True, tty=True))
print("Response: " + resp)
</code></pre>
<p>I'm using <strong>Python 3.8.2</strong> and <strong>Kubernetes 1.16.13</strong></p>
| Lucas Bittencourt | <p>To solve my problem I added the following configuration to the cluster:</p>
<pre><code>kubectl create clusterrolebinding serviceaccounts-cluster-admin \
  --clusterrole=cluster-admin \
  --group=system:serviceaccounts
</code></pre>
| Lucas Bittencourt |
<p>I am trying to add a header to all requests answered by my service, using a Lua EnvoyFilter to do so. But the filter is not being applied to the sidecar proxy: when I do a configDump I don't find my filter, nor the header I added in the responses. I have manually labelled the pod and the deployment with app=gateway, and below is the filter I used. I can't seem to find anything helpful in the Istio docs nor in the EnvoyFilter docs. Can anyone please help if I have missed something here?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: response-interceptor
namespace: gateway
spec:
workloadSelector:
labels:
app: gateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: envoy.http_connection_manager
subFilter:
name: envoy.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
"@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
inlineCode: |
function envoy_on_response(response_handle)
response_handle:headers():add("X-Custom-Namespace", "worked");
end
</code></pre>
<p>Gateway over here is my service-name & is not the istio-ingress gateway.</p>
| Kush Trivedi | <p>This seems to be an occasional error with EnvoyFilters on Minikube; after deleting and re-applying the filter, it magically started to work.
<a href="https://github.com/istio/istio/issues/8261" rel="nofollow noreferrer">https://github.com/istio/istio/issues/8261</a>
<a href="https://github.com/istio/istio/issues/8616" rel="nofollow noreferrer">https://github.com/istio/istio/issues/8616</a></p>
| Kush Trivedi |
<p>I know this question has been asked many times, but always about Docker; this time it is about CRI-O.</p>
<pre><code>CentOS Linux release 7.6
CRI-O Version: 1.16.1
Kubernetes: v1.16.3
KubeAdm: v1.16.3
</code></pre>
<p>CoreDNS pods are in the Error/CrashLoopBackOff state, and audit.log shows that SELinux prevents CoreDNS from reading from /var/lib/kubelet/container_id/volumes/</p>
<pre><code>type=AVC msg=audit(1576203392.727:1431): avc: denied { read } for pid=15866 comm="coredns" name="Corefile" dev="dm-0" ino=35369330 scontext=system_u:system_r:container_t:s0:c307,c586 tcontext=system_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1576203392.727:1431): avc: denied { open } for pid=15866 comm="coredns" path="/etc/coredns/..2019_12_13_02_13_30.965446608/Corefile" dev="dm-0" ino=35369330 scontext=system_u:system_r:container_t:s0:c307,c586 tcontext=system_u:object_r:var_lib_t:s0 tclass=file permissive=1
type=AVC msg=audit(1576203393.049:1432): avc: denied { open } for pid=15866 comm="coredns" path="/var/run/secrets/kubernetes.io/serviceaccount/..2019_12_13_02_13_30.605147375/token" dev="tmpfs" ino=124481 scontext=system_u:system_r:container_t:s0:c307,c586 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file permissive=1
</code></pre>
<p>If I use Docker newer than 1.7 it works fine; I assume this may be related to the patch that mounts volumes with the z/Z option.</p>
<p>I can add a policy like the one below, but it will compromise security.</p>
<pre><code>module coredns 0.1;
require {
type tmpfs_t;
type container_t;
type var_lib_t;
class file { open read };
}
allow container_t tmpfs_t:file open;
allow container_t var_lib_t:file { open read };
</code></pre>
<p>Does a better solution exist? Something like Docker's behavior, with little effort and without compromising security.</p>
| Cyron | <p>I've looked into it and it seems that the problem lies in the <strong>kubelet version</strong>. Let me elaborate on that: </p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/83679" rel="nofollow noreferrer">SELinux Volumes not relabeled in 1.16</a> - this link provides more details about the issue. </p>
<p>I tried to reproduce this coredns issue on different versions of Kubernetes.</p>
<p>The issue appears on version 1.16 and newer. It seems to work properly with SELinux enabled on version 1.15.6 </p>
<p>For this to work you will need working CentOS and CRI-O environment.</p>
<p>CRI-O version: </p>
<pre><code>Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.16.2
RuntimeApiVersion: v1alpha1
</code></pre>
<p>To deploy this infrastructure I followed this site for the most part: <a href="https://kubevirt.io/2019/KubeVirt_k8s_crio_from_scratch.html" rel="nofollow noreferrer">KubeVirt</a> </p>
<h2>Kubernetes v1.15.7</h2>
<p><strong>Steps to reproduce</strong>: </p>
<ul>
<li>Disable SELinux and restart machine:
<ul>
<li><code>$ setenforce 0</code></li>
<li><code>$ sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config</code></li>
<li><code>$ reboot</code></li>
</ul></li>
<li>Check if SELinux is disabled by invoking command: <code>$ sestatus</code></li>
<li>Install packages with <code>$ yum install INSERT_PACKAGES_BELOW</code>
<ul>
<li>kubelet-1.15.7-0.x86_64</li>
<li>kubeadm-1.15.7-0.x86_64</li>
<li>kubectl-1.15.7-0.x86_64</li>
</ul></li>
<li>Initialize Kubernetes cluster with following command <code>$ kubeadm init --pod-network-cidr=10.244.0.0/16</code></li>
<li>Wait for cluster to initialize correctly and follow kubeadm instructions to connect to cluster</li>
<li>Apply Flannel CNI <code>$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml</code></li>
</ul>
<p>Check if coredns pods are running correctly with command:
<code>$ kubectl get pods -A</code></p>
<p>It should give similar output to that: </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-2c7lt 1/1 Running 2 7m59s
kube-system coredns-5c98db65d4-5dp9s 1/1 Running 2 7m59s
kube-system etcd-centos-kube-master 1/1 Running 2 7m20s
kube-system kube-apiserver-centos-kube-master 1/1 Running 2 7m4s
kube-system kube-controller-manager-centos-kube-master 1/1 Running 2 6m55s
kube-system kube-flannel-ds-amd64-mzh27 1/1 Running 2 7m14s
kube-system kube-proxy-bqll8 1/1 Running 2 7m58s
kube-system kube-scheduler-centos-kube-master 1/1 Running 2 6m58s
</code></pre>
<p>Coredns pods in kubernetes cluster with <strong>SELinux disabled</strong> are working properly. </p>
<p><strong>Enable SELinux</strong>:</p>
<p>From root account invoke commands to enable SELinux and restart the machine: </p>
<ul>
<li><code>$ setenforce 1</code></li>
<li><code>$ sed -i s/^SELINUX=.*$/SELINUX=enforcing/ /etc/selinux/config</code></li>
<li><code>$ reboot</code></li>
</ul>
<p>Check if coredns pods are running correctly. They should <strong>not get crashloopbackoff error</strong> when running:
<code>kubectl get pods -A</code></p>
<h2>Kubernetes v1.16.4</h2>
<p><strong>Steps to reproduce</strong>: </p>
<ul>
<li>Run <code>$ kubeadm reset</code> if coming from another another version</li>
<li>Remove old Kubernetes packages with <code>$ yum remove OLD_PACKAGES</code></li>
<li>Disable SELinux and restart machine:
<ul>
<li><code>$ setenforce 0</code></li>
<li><code>$ sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config</code></li>
<li><code>$ reboot</code></li>
</ul></li>
<li>Check if SELinux is disabled by invoking command: <code>$ sestatus</code></li>
<li>Install packages with <code>$ yum install INSERT_PACKAGES_BELOW</code>
<ul>
<li>kubelet-1.16.4-0.x86_64</li>
<li>kubeadm-1.16.4-0.x86_64</li>
<li>kubectl-1.16.4-0.x86_64</li>
</ul></li>
<li>Initialize Kubernetes cluster with following command <code>$ kubeadm init --pod-network-cidr=10.244.0.0/16</code></li>
<li>Wait for cluster to initialize correctly and follow kubeadm instructions to connect to cluster </li>
<li>Apply Flannel CNI <code>$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml</code></li>
</ul>
<p>Check if coredns pods are running correctly with command:
<code>$ kubectl get pods -A</code></p>
<p>It should give similar output to that: </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-fgbkl 1/1 Running 1 13m
kube-system coredns-5644d7b6d9-x6h4l 1/1 Running 1 13m
kube-system etcd-centos-kube-master 1/1 Running 1 12m
kube-system kube-apiserver-centos-kube-master 1/1 Running 1 12m
kube-system kube-controller-manager-centos-kube-master 1/1 Running 1 12m
kube-system kube-proxy-v52ls 1/1 Running 1 13m
kube-system kube-scheduler-centos-kube-master 1/1 Running 1 12m
</code></pre>
<p><strong>Enable SELinux</strong>:</p>
<p>From root account invoke commands to enable SELinux and restart the machine: </p>
<ul>
<li><code>$ setenforce 1</code></li>
<li><code>$ sed -i s/^SELINUX=.*$/SELINUX=enforcing/ /etc/selinux/config</code></li>
<li><code>$ reboot</code></li>
</ul>
<p>After reboot coredns pods <strong>should enter crashloopbackoff state</strong> as shown below: </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-fgbkl 0/1 CrashLoopBackOff 25 113m
kube-system coredns-5644d7b6d9-x6h4l 0/1 CrashLoopBackOff 25 113m
kube-system etcd-centos-kube-master 1/1 Running 1 112m
kube-system kube-apiserver-centos-kube-master 1/1 Running 1 112m
kube-system kube-controller-manager-centos-kube-master 1/1 Running 1 112m
kube-system kube-proxy-v52ls 1/1 Running 1 113m
kube-system kube-scheduler-centos-kube-master 1/1 Running 1 112m
</code></pre>
<p>Logs from the pod <code>coredns-5644d7b6d9-fgbkl</code> show: </p>
<pre><code>plugin/kubernetes: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
</code></pre>
| Dawid Kruk |
<p>I'm trying to monitor Kubernetes PVC disk usage. I need the amount of storage actually in use by a PersistentVolumeClaim. I found the command:</p>
<blockquote>
<p>kubectl get --raw /api/v1/persistentvolumeclaims</p>
</blockquote>
<p>Return:</p>
<pre><code>"status":{
"phase":"Bound",
"accessModes":[
"ReadWriteOnce"
],
"capacity":{
"storage":"1Gi"
}
}
</code></pre>
<p>But it only brings me the full capacity of the disk, and as I said I need the used one</p>
<p>Does anyone know which command could return this information to me?</p>
| Danilo Marquiori | <p>I don't have a definitive answer, but I hope this will help you. Also, I would be interested if someone has a better answer.</p>
<h2>Get current usage</h2>
<blockquote>
<p>The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed.</p>
<p>-- <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction:%7E:text=The%20PersistentVolume%20subsystem%20provides%20an%20API,provided%20from%20how%20it%20is%20consumed." rel="noreferrer">Persistent Volume | Kubernetes</a></p>
</blockquote>
<p>As stated in the Kubernetes documentation, PV (PersistentVolume) and PVC (PersistentVolumeClaim) are abstractions over storage. As such, I do not think you can inspect PV or PVC, but you can inspect the storage medium.</p>
<p>To get the usage, create a debugging pod which will use your PVC, from which you will check the usage. This should work depending on your storage provider.</p>
<pre class="lang-yaml prettyprint-override"><code># volume-size-debugger.yaml
kind: Pod
apiVersion: v1
metadata:
name: volume-size-debugger
spec:
volumes:
- name: debug-pv
persistentVolumeClaim:
claimName: <pvc-name>
containers:
- name: debugger
image: busybox
command: ["sleep", "3600"]
volumeMounts:
- mountPath: "/data"
name: debug-pv
</code></pre>
<p>Apply the above manifest with <code>kubectl apply -f volume-size-debugger.yaml</code>, and run a shell inside it with <code>kubectl exec -it volume-size-debugger -- sh</code>. Inside the shell run <code>du -sh /data</code> to get the usage in a human-readable format.</p>
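<p>If you want to script that one-off check instead of typing it interactively, here is a minimal sketch that execs <code>df -h</code> into the debugger pod with the official Python client — only an illustration, assuming the pod above is running in the <code>default</code> namespace with the PVC mounted at <code>/data</code>:</p>
<pre><code>from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()
v1 = client.CoreV1Api()

# Run "df -h /data" inside the debug pod and print its output
output = stream(v1.connect_get_namespaced_pod_exec,
                "volume-size-debugger", "default",
                command=["df", "-h", "/data"],
                stderr=True, stdin=False, stdout=True, tty=False)
print(output)
</code></pre>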
<h2>Monitoring</h2>
<p>As I am sure you have noticed, this is not especially useful for monitoring. It may be useful for a one-time check from time to time, but not for monitoring or low disk space alerts.</p>
<p>One way to set up monitoring would be to have a sidecar pod similar to ours above and gather the metrics from there. One such example seems to be the <a href="https://github.com/prometheus/node_exporter" rel="noreferrer">node_exporter</a>.</p>
<p>Another way would be to use <a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/" rel="noreferrer">CSI</a> (Container Storage Interface). I have not used CSI and do not know enough about it to really explain more. But here are a couple of related issues and related Kubernetes documentation:</p>
<ul>
<li><a href="https://github.com/prometheus-operator/prometheus-operator/issues/2359" rel="noreferrer">Monitoring Kubernetes PersistentVolumes - prometheus-operator</a></li>
<li><a href="https://github.com/digitalocean/csi-digitalocean/issues/134" rel="noreferrer">Volume stats missing - csi-digitalocean</a></li>
<li><a href="https://kubernetes.io/docs/concepts/storage/storage-capacity/" rel="noreferrer">Storage Capacity | Kubernetes</a></li>
</ul>
| touchmarine |
<p>I've encountered rather strange behavior in my Kubernetes cluster (1.18.20, Calico 3.14.2): when I attempt to upload a two-megabyte JSON file to a pod via curl through a <code>NodePort</code> service, the transmission is interrupted with <code>Recv failure: Connection reset by peer</code>.
A traffic capture shows that both client and server receive RST packets from the network, but neither of them sent one.
A binary file of the same size uploads successfully, but JSON is rejected regardless of the <code>Content-Type</code> specified.
File transfer between pods (using similar commands and the same file) proceeds smoothly.
Upload through the ingress (also configured using <code>NodePort</code>) fails too.
The size of the received fragment is always the same, approximately 850K.</p>
<p>I've used <code>nc -l 80*</code> instead of real service with the same outcome.</p>
<p>Apparently, <code>kube-proxy</code> doesn't like big JSON files.</p>
<p>Is it possible to send big JSON files to pod from external clients, or such a limit is hardcoded?</p>
<p><strong>UPD1</strong></p>
<p>Same behavior for fresh cluster (1.22.0, calico 3.20.0).</p>
<p><strong>UPD2</strong></p>
<p>The system rejects not every big JSON payload, but only a few percent of user uploads.
The payload is specially crafted by the client application: the first part of a multi-volume Zip archive is base64-encoded and encapsulated as a JSON file ('{ "data": "..." }').
The size of the fragment causing the connection break is about 640K.</p>
<p>It looks like an error in a filtering procedure inside <code>kube-proxy</code>.</p>
| Anemon | <p>Unfortunately, the source of the problem was a misconfiguration of the IDS/IPS.</p>
<p>Nothing to do with <strong>kube-proxy</strong>.</p>
| Anemon |
<p>I'm new to K8s and am currently using Minikube to play around with the platform. How do I configure a public (i.e. outside the cluster) port for the service? I followed the <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service" rel="noreferrer">nginx example</a>, and K8s service tutorials. In my case, I created the service like so:</p>
<pre><code>kubectl expose deployment/mysrv --type=NodePort --port=1234
</code></pre>
<p>The service's port is 1234 for anyone trying to access it from INSIDE the cluster. The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:</p>
<pre><code>kubectl describe service mysrv | grep NodePort
...
NodePort: <unset> 32387/TCP
# curl "http://`minikube ip`:32387/"
</code></pre>
<p>But I don't understand how, in a real cluster, the service could have a fixed world-accessible port. The nginx examples describe something about using the LoadBalancer service kind, but they don't even specify ports there...</p>
<p>Any ideas how to fix the external port for the entire service?</p>
| Sagi Mann | <blockquote>
<p>The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:</p>
</blockquote>
<p>When you create service object of type <code>NodePort</code> with a <code>$ kubectl expose</code> command you cannot choose your <code>NodePort</code> port. To choose a <code>NodePort</code> port you will need to create a <code>YAML</code> definition of it.</p>
<p><strong>You can manually specify the port in service object of type <code>Nodeport</code> with below example:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: example-nodeport
spec:
type: NodePort
selector:
app: hello # selector for deployment
ports:
- name: example-port
protocol: TCP
port: 1234 # CLUSTERIP PORT
targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
nodePort: 32222 # HERE!
</code></pre>
<p>You can apply above <code>YAML</code> definition by invoking command:
<code>$ kubectl apply -f FILE_NAME.yaml</code></p>
<p>Above service object will be created only if <code>nodePort</code> port is available to use.</p>
<blockquote>
<p>But I don't understand how, in a real cluster, the service could <strong>not</strong> have a fixed world-accessible port.</p>
</blockquote>
<p>In clusters managed by cloud providers (for example GKE) you can use a service object of type <code>LoadBalancer</code> which will have a fixed external IP and fixed port.</p>
<p>Clusters that have nodes with public IP's can use service object of type <code>NodePort</code> to direct traffic into the cluster.</p>
<p>In <code>minikube</code> environment you can use a service object of type <code>LoadBalancer</code> but it will have some caveats described in last paragraph.</p>
<h1><strong>A little bit of explanation:</strong></h1>
<h2><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a></h2>
<p><code>Nodeport</code> is exposing the service on each node IP at a static port. It allows external traffic to enter with the <code>NodePort</code> port. This port will be automatically assigned from range of <code>30000</code> to <code>32767</code>.</p>
<p>You can change the default <code>NodePort</code> port range by following <a href="http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range" rel="noreferrer">this manual</a>.</p>
<p>You can check what exactly happens when creating a service object of type <code>NodePort</code> by looking at this <a href="https://stackoverflow.com/a/54345488/12257134">answer</a>.</p>
<p>Imagine that:</p>
<ul>
<li>Your nodes have IP's:
<ul>
<li><code>192.168.0.100</code></li>
<li><code>192.168.0.101</code></li>
<li><code>192.168.0.102</code></li>
</ul>
</li>
<li>Your pods respond on port <code>50001</code> with <code>hello</code> and they have IP's:
<ul>
<li><code>10.244.1.10</code></li>
<li><code>10.244.1.11</code></li>
<li><code>10.244.1.12</code></li>
</ul>
</li>
<li>Your Services are:
<ul>
<li><code>NodePort</code> (port <code>32222</code>) with:
<ul>
<li><code>ClusterIP</code>:
<ul>
<li>IP: <code>10.96.0.100</code></li>
<li><code>port</code>:<code>7654</code></li>
<li><code>targetPort</code>:<code>50001</code></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>A word about <code>targetPort</code>: it defines the port on the <strong>pod</strong> on which the application, for example a web server, is listening.</p>
<p><strong>According to above example you will get <code>hello</code> response with:</strong></p>
<ul>
<li><code>NodeIP:NodePort</code> (all the pods could respond with <code>hello</code>):
<ul>
<li><code>192.168.0.100:32222</code></li>
<li><code>192.168.0.101:32222</code></li>
<li><code>192.168.0.102:32222</code></li>
</ul>
</li>
<li><code>ClusterIP:port</code> (all the pods could respond with <code>hello</code>):
<ul>
<li><code>10.0.96.100:7654</code></li>
</ul>
</li>
<li><code>PodIP:targetPort</code> (only the pod that request is sent to can respond with <code>hello</code>)
<ul>
<li><code>10.244.1.10:50001</code></li>
<li><code>10.244.1.11:50001</code></li>
<li><code>10.244.1.12:50001</code></li>
</ul>
</li>
</ul>
<p>You can check access with <code>curl</code> command as below:</p>
<p><code>$ curl http://NODE_IP:NODEPORT</code></p>
<hr />
<p><strong>In the example you mentioned:</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl expose deployment/mysrv --type=NodePort --port=1234
</code></pre>
<p>What will happen:</p>
<ul>
<li>It will assign a random port from range of <code>30000</code> to <code>32767</code> on your <code>minikube</code> instance directing traffic entering this port to pods.</li>
<li>Additionally it will create a <code>ClusterIP</code> with port of <code>1234</code></li>
</ul>
<p>In the example above there was no parameter <code>targetPort</code>. If <code>targetPort</code> is not provided it will be the same as <code>port</code> in the command.</p>
<p>Traffic entering a <code>NodePort</code> will be routed directly to pods and will not go to the <code>ClusterIP</code>.</p>
<p>From the <code>minikube</code> perspective a <code>NodePort</code> will be a port on your <code>minikube</code> instance. Its IP address will depend on the hypervisor used. Exposing it outside your local machine will be heavily dependent on the operating system.</p>
<hr />
<h2><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="noreferrer">LoadBalancer</a></h2>
<p>There is a difference between a service object of type <code>LoadBalancer</code>(1) and an external <code>LoadBalancer</code>(2):</p>
<ul>
<li>Service object of type <code>LoadBalancer</code>(1) allows to expose a service externally using a cloud provider’s <code>LoadBalancer</code>(2). It's a service within Kubernetes environment that through service controller can schedule a creation of external <code>LoadBalancer</code>(2).</li>
<li>External <code>LoadBalancer</code>(2) is a load balancer provided by cloud provider. It will operate at Layer 4.</li>
</ul>
<p>Example definition of service of type <code>LoadBalancer</code>(1):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: example-loadbalancer
spec:
type: LoadBalancer
selector:
app: hello
ports:
- port: 1234 # LOADBALANCER PORT
targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
nodePort: 32222 # PORT ON THE NODE
</code></pre>
<p>Applying above <code>YAML</code> will create a service of type <code>LoadBalancer</code>(1)</p>
<p>Take a specific look at:</p>
<pre class="lang-yaml prettyprint-override"><code> ports:
- port: 1234 # LOADBALANCER PORT
</code></pre>
<p>This definition will simultaneously:</p>
<ul>
<li>specify external <code>LoadBalancer</code>(2) <code>port</code> as 1234</li>
<li>specify <code>ClusterIP</code> <code>port</code> as 1234</li>
</ul>
<p>Imagine that:</p>
<ul>
<li>Your external <code>LoadBalancer</code>(2) have:
<ul>
<li><code>ExternalIP</code>: <code>34.88.255.5</code></li>
<li><code>port</code>:<code>7654</code></li>
</ul>
</li>
<li>Your nodes have IP's:
<ul>
<li><code>192.168.0.100</code></li>
<li><code>192.168.0.101</code></li>
<li><code>192.168.0.102</code></li>
</ul>
</li>
<li>Your pods respond on port <code>50001</code> with <code>hello</code> and they have IP's:
<ul>
<li><code>10.244.1.10</code></li>
<li><code>10.244.1.11</code></li>
<li><code>10.244.1.12</code></li>
</ul>
</li>
<li>Your Services are:
<ul>
<li><code>NodePort</code> (port <code>32222</code>) with:
<ul>
<li><code>ClusterIP</code>:
<ul>
<li>IP: <code>10.96.0.100</code></li>
<li><code>port</code>:<code>7654</code></li>
<li><code>targetPort</code>:<code>50001</code></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><strong>According to above example you will get <code>hello</code> response with:</strong></p>
<ul>
<li><code>ExternalIP</code>:<code>port</code> (all the pods could respond with <code>hello</code>):
<ul>
<li><code>34.88.255.5:7654</code></li>
</ul>
</li>
<li><code>NodeIP:NodePort</code> (all the pods could respond with <code>hello</code>):
<ul>
<li><code>192.168.0.100:32222</code></li>
<li><code>192.168.0.101:32222</code></li>
<li><code>192.168.0.102:32222</code></li>
</ul>
</li>
<li><code>ClusterIP:port</code> (all the pods could respond with <code>hello</code>):
<ul>
<li><code>10.0.96.100:7654</code></li>
</ul>
</li>
<li><code>PodIP:targetPort</code> (only the pod that request is sent to can respond with <code>hello</code>)
<ul>
<li><code>10.244.1.10:50001</code></li>
<li><code>10.244.1.11:50001</code></li>
<li><code>10.244.1.12:50001</code></li>
</ul>
</li>
</ul>
<p><code>ExternalIP</code> can be checked with command: <code>$ kubectl get services</code></p>
<p>Flow of the traffic:
Client -> <code>LoadBalancer:port</code>(2) -> <code>NodeIP:NodePort</code> -> <code>Pod:targetPort</code></p>
<h2><a href="https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/" rel="noreferrer">Minikube: LoadBalancer</a></h2>
<blockquote>
<p><strong>Note:</strong> This feature is only available for cloud providers or environments which support external load balancers.</p>
<p>-- <em><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">Kubernetes.io: Create external LoadBalancer</a></em> </p>
<p>On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the <code>LoadBalancer</code> type makes the Service accessible through the <code>minikube service</code> command.</p>
<p>-- <em><a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noreferrer">Kubernetes.io: Hello minikube</a></em> </p>
</blockquote>
<p><code>Minikube</code> can create service object of type <code>LoadBalancer</code>(1) but it will not create an external <code>LoadBalancer</code>(2).</p>
<p>The <code>ExternalIP</code> in command <code>$ kubectl get services</code> will have pending status.</p>
<p>To work around the fact that there is no external <code>LoadBalancer</code>(2), you can invoke <code>$ minikube tunnel</code>, which will create a route from the host to the <code>minikube</code> environment so that the <code>ClusterIP</code> CIDR can be accessed directly.</p>
| Dawid Kruk |
<p>I can't seem to understand why the pod manifest below isn't working if I remove spec.containers.command; the pod fails without the command.</p>
<p>I took this example from the official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">documentation</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: busybox
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
</code></pre>
| Viplove | <p>Because the <code>busybox</code> image doesn't run any long-lived process at start by itself. Containers are designed to run a single application and shut down when that application exits; if the image doesn't run anything, it will exit immediately. In Kubernetes, <code>spec.containers.command</code> overrides the container image's default command. You can try changing the manifest image to, for example, <code>image: nginx</code> and removing <code>spec.containers.command</code>, and it will run, because that image starts an Nginx server by default.</p>
| Cloudziu |
<p>I'm trying to add 2 different discovery clients depending on an environment variable. I want to do that for easier local development, without running a local k8s cluster.</p>
<pre><code> cloud:
retry:
initial-interval: 10
max-interval: 20
kubernetes:
discovery:
enabled: ${!CONSUL_ENABLED:true}
consul:
host: ${CONSUL_HOST:localhost}
port: ${CONSUL_PORT:8500}
discovery:
enabled: ${CONSUL_ENABLED:false}
fail-fast: true
instance-id: ${spring.application.name}-${server.port}-${instance-id}
health-check-path: /actuator/health
health-check-interval: 20s
</code></pre>
<p>My gradle file:</p>
<pre><code>implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes'
implementation 'org.springframework.cloud:spring-cloud-starter-consul-discovery'
</code></pre>
<p>But this config fails with error:</p>
<pre><code>Field registration in org.springframework.cloud.netflix.zuul.ZuulProxyAutoConfiguration required a single bean, but 2 were found:
- getRegistration: defined by method 'getRegistration' in class path resource [org/springframework/cloud/kubernetes/discovery/KubernetesDiscoveryClientAutoConfiguration.class]
- consulRegistration: defined by method 'consulRegistration' in class path resource [org/springframework/cloud/consul/serviceregistry/ConsulAutoServiceRegistrationAutoConfiguration.class]
</code></pre>
<p>Is there any way to do that?</p>
| Pixel | <p>OK, I found a way to do that: you must disable all of spring-cloud-kubernetes, not only discovery:</p>
<pre><code> cloud:
retry:
initial-interval: 10
max-interval: 20
kubernetes:
discovery:
enabled: ${K8S_ENABLED:true}
enabled: ${K8S_ENABLED:true}
consul:
host: ${CONSUL_HOST:localhost}
port: ${CONSUL_PORT:8500}
discovery:
enabled: ${CONSUL_ENABLED:false}
fail-fast: true
instance-id: ${spring.application.name}-${server.port}-${instance-id}
health-check-path: /actuator/health
health-check-interval: 20s
</code></pre>
<p>But is there a way to combine the K8S_ENABLED and CONSUL_ENABLED variables into one?</p>
| Pixel |
<p>I am trying to create a deployment out of my compose file using kompose, but whenever I try: </p>
<pre><code>kompose convert -f docker-compose.yaml
</code></pre>
<p>I get the error: </p>
<pre><code>Volume mount on the host "[file directory]" isn't supported - ignoring path on the host
</code></pre>
<p>I have tried a few different solutions to my issue, firstly trying to add <code>hostPath</code> to my <code>kompose convert</code> as well as using persistent volumes; however, neither works.</p>
<p>My kompose output file looks like this: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes emptyDir
kompose.version: 1.7.0 (HEAD)
creationTimestamp: null
labels:
io.kompose.service: es01
name: es01
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: es01
spec:
containers:
- env:
- name: COMPOSE_CONVERT_WINDOWS_PATHS
value: "1"
- name: COMPOSE_PROJECT_NAME
value: elastic_search_container
- name: ES_JAVA_OPTS
value: -Xms7g -Xmx7g
- name: discovery.type
value: single-node
- name: node.name
value: es01
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
name: es01
ports:
- containerPort: 9200
resources: {}
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: es01-empty0
restartPolicy: Always
volumes:
- emptyDir: {}
name: es01-empty0
status: {}
</code></pre>
<p>I am using kompose version 1.7.0 </p>
<p>My docker-compose file: </p>
<pre><code>version: '3'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
container_name: es01
environment:
- node.name=es01
- COMPOSE_PROJECT_NAME=elastic_search_container
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms7g -Xmx7g"
- COMPOSE_CONVERT_WINDOWS_PATHS=1
ulimits:
nproc: 3000
nofile: 65536
memlock: -1
volumes:
- /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- kafka_demo
</code></pre>
| James Ukilin | <p>You need to take a look at the warning you get: </p>
<pre><code>Volume mount on the host "[file directory]" isn't supported - ignoring path on the host
</code></pre>
<p>It happens when a volume in <code>docker-compose.yaml</code> is configured with a direct host path. </p>
<p>Example below: </p>
<pre><code>version: '3'
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- "./storage1:/test1"
- "./storage2:/test2"
redis:
image: "redis:alpine"
volumes:
storage1:
storage2:
</code></pre>
<h3>Persistent Volume Claim</h3>
<p>Take a look at this link: <a href="https://kompose.io/conversion/" rel="nofollow noreferrer">Conversion matrix</a>.
It describes how <code>kompose</code> converts Docker's volumes to Kubernetes ones. </p>
<p>Executing the conversion command without <code>--volumes</code> parameter:</p>
<p><code>$ kompose convert -f docker-compose.yml</code> </p>
<p>With <code>kompose</code> 1.19 the output will be: </p>
<pre class="lang-sh prettyprint-override"><code>WARN Volume mount on the host "SOME_PATH" isn't supported - ignoring path on the host
WARN Volume mount on the host "SOME_PATH" isn't supported - ignoring path on the host
INFO Kubernetes file "web-service.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "web-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "web-claim1-persistentvolumeclaim.yaml" created
</code></pre>
<p>The warning message means that you are explicitly telling <code>docker-compose</code> to create volumes with a direct host path. By default <code>kompose</code> will convert such a Docker volume to a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim" rel="nofollow noreferrer">Persistent Volume Claim</a>.</p>
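<p>For illustration, the default conversion typically emits one claim per named volume, roughly like the snippet below (the name and the storage size are assumptions; check the generated <code>*-persistentvolumeclaim.yaml</code> files for the real values):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web-claim0
  name: web-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
</code></pre>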
<h3>emptyDir</h3>
<p>Executing the conversion command with <code>--volumes emptyDir</code> parameter:</p>
<p><code>$ kompose convert -f docker-compose.yml --volumes emptyDir</code> </p>
<p>This will yield: </p>
<pre class="lang-sh prettyprint-override"><code>WARN Volume mount on the host "SOME_PATH" isn't supported - ignoring path on the host
WARN Volume mount on the host "SOME_PATH" isn't supported - ignoring path on the host
INFO Kubernetes file "web-service.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
</code></pre>
<p><code>kompose</code> will create an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> declaration inside web-deployment.yaml instead of creating separate PVC definitions as it does by default. </p>
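<p>The generated <code>volumes</code> section in <code>web-deployment.yaml</code> should then look roughly like this (the volume names follow kompose's usual naming and are an assumption here):</p>
<pre><code>      volumes:
      - emptyDir: {}
        name: web-empty0
      - emptyDir: {}
        name: web-empty1
</code></pre>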
<h3>hostPath</h3>
<p>Executing the conversion command with <code>--volumes hostPath</code> parameter:</p>
<p><code>$ kompose convert -f docker-compose.yml --volumes hostPath</code> </p>
<p>This will yield: </p>
<pre><code>INFO Kubernetes file "web-service.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
</code></pre>
<p>As you can see there is no warning about an unsupported path, because kompose created <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volumes explicitly, using the paths you provided in <code>docker-compose.yml</code>. </p>
<p>Take a look on <code>web-deployment.yaml</code> volume section:</p>
<pre><code> volumes:
- hostPath:
path: /LOCAL_PATH-/POD_PATH/storage1
name: web-hostpath0
- hostPath:
path: /LOCAL_PATH-/POD_PATH/storage2
name: web-hostpath1
</code></pre>
| Dawid Kruk |
<p>We've hired a security consultant to perform a pentest on our application's public IP (Kubernetes LoadBalancer) and write a report on our security flaws and the measures required to avoid them. Their report warned us that we have TCP Timestamp enabled, and from what I've read about the issue, it would allow an attacker to predict the boot time of the machine and thus potentially gain control over it.</p>
<p>I also read that TCP Timestamp is important for TCP performance and, most importantly, for Protection Against Wrapping Sequence.</p>
<p>But since we use Kubernetes over GKE with Nginx Ingress Controller being in front of it, I wonder if that <code>TCP Timestamp</code> thing really matters for that context. Should we even care? If so, does it really make my network vulnerable for the lack of Protection Against Wrapping sequence?</p>
<p>More information about TCP Timestamp on this other question:
<a href="https://stackoverflow.com/questions/7880383/what-benefit-is-conferred-by-tcp-timestamp">What benefit is conferred by TCP timestamp?</a></p>
| Mauricio | <p>According to RFC 1323 (TCP Extensions for High Performance) TCP Timestamp is used for two main mechanisms: </p>
<ul>
<li>PAWS (Protect Against Wrapped Sequence) </li>
<li>RTT (Round Trip Time)</li>
</ul>
<p><strong>PAWS</strong> is a defense mechanism for identifying and rejecting packets that arrive from an earlier wrap of the sequence-number space (data integrity). </p>
<p><strong>Round Trip Time</strong> is the time it takes a packet to reach the destination and for the acknowledgment to travel back to the device it originated from.</p>
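<p>As a side note, on a Linux host or node you can quickly check whether timestamps are currently enabled (1 means enabled):</p>
<pre class="lang-sh prettyprint-override"><code># Check the current setting (1 = enabled, 0 = disabled)
sysctl net.ipv4.tcp_timestamps
# Disabling it (not recommended, see below) would be:
# sysctl -w net.ipv4.tcp_timestamps=0
</code></pre>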
<p>What can happen when you disable TCP Timestamps: </p>
<ul>
<li>Turning off TCP Timestamps can result in performance issues because RTT measurement would stop working. </li>
<li>It will disable <a href="https://tejparkash.wordpress.com/2010/12/05/paws-tcp-sequence-number-wrapping-explained/" rel="nofollow noreferrer">PAWS</a>. </li>
<li>As the <a href="https://kc.mcafee.com/corporate/index?page=content&id=KB78776&locale=en_US" rel="nofollow noreferrer">McAfee</a> site says, disabling timestamps can allow denial-of-service attacks. </li>
</ul>
<p>As the previously mentioned McAfee site states: </p>
<blockquote>
<p>For these reasons, McAfee strongly recommends keeping this feature enabled and considers the vulnerability as low..</p>
<p>-- <a href="https://kc.mcafee.com/corporate/index?page=content&id=KB78776&locale=en_US" rel="nofollow noreferrer">McAfee</a></p>
</blockquote>
<p>Citation from another site: </p>
<blockquote>
<p>Vulnerabilities in TCP Timestamps Retrieval is a Low risk vulnerability that is one of the most frequently found on networks around the world. This issue has been around since at least 1990 but has proven either difficult to detect, difficult to resolve or prone to being overlooked entirely.</p>
<p>-- <a href="https://beyondsecurity.com/scan-pentest-network-vulnerabilities-tcp-timestamps-retrieval.html" rel="nofollow noreferrer">Beyond Security </a></p>
</blockquote>
<p>I would encourage you to look at this video: <a href="https://www.youtube.com/watch?v=bXXoz5-Z9h0" rel="nofollow noreferrer">HIP15-TALK:Exploiting TCP Timestamps</a>. </p>
<h3>What about GKE</h3>
<p>Getting information about the boot time (uptime in this case) can reveal which security patches are <strong>not</strong> applied to the cluster, which can lead to exploitation of those unpatched vulnerabilities. </p>
<p>The best way to approach that is to <strong>regularly update</strong> the existing cluster.
GKE implements 2 ways of doing that: </p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster" rel="nofollow noreferrer">Manual way </a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">Automatic way </a></li>
</ul>
<p>Even if an attacker knows the boot time of your machine, that knowledge is useless when the system is up to date and all the security patches are applied.
There is a dedicated site for Kubernetes Engine security bulletins: <a href="https://cloud.google.com/kubernetes-engine/docs/security-bulletins" rel="nofollow noreferrer">Security bulletins</a></p>
| Dawid Kruk |
<p>Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two <strong>load balancers</strong>: one for backend (in Spring boot) and another for frontend (in Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
</code></pre>
<p>The ingress above mentioned can make the frontend app communicate with the REST API made available by the backend app. However, I have to create <strong>sticky sessions</strong>, so that every user communicates with the same POD because of the authentication mechanism provided by the backend. To clarify, if one user authenticates in POD #1, the cookie will not be recognized by POD #2.</p>
<p>To overcome this issue, I read that the <strong>Nginx-ingress</strong> controller can deal with this situation, and I installed it through the steps available here: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a> using Helm. </p>
<p>You can find below the diagram for the architecture I'm trying to build:</p>
<p><a href="https://i.stack.imgur.com/iZeHX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iZeHX.png" alt="enter image description here"></a></p>
<p>With the following services (I will just paste one of the services, the other one is similar):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: LoadBalancer
</code></pre>
<p>And I declared the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
</code></pre>
<p>After that, I run <code>kubectl apply -f sample-nginx-ingress.yaml</code> to apply the ingress, it is created and its status is OK. However, when I access the URL that appears in "Endpoints" column, the browser can't connect to the URL.
Am I doing anything wrong?</p>
<h1>Edit 1</h1>
<p><strong>Updated service and ingress configurations</strong></p>
<p>After some help, I've managed to access the services through the Ingress Nginx. Below you have the configurations:</p>
<h2>Nginx Ingress</h2>
<p>The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- http:
paths:
- path: /rest/v1/
backend:
serviceName: sample-backend
servicePort: 8082
- path: /
backend:
serviceName: sample-frontend
servicePort: 80
</code></pre>
<h2>Services</h2>
<p>Also, the services shouldn't be of type "LoadBalancer" but "<strong>ClusterIP</strong>" as below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: ClusterIP
</code></pre>
<p>However, I still can't achieve sticky sessions in my Kubernetes cluster, since I'm still getting 403 and even the cookie name is not replaced, so I guess the annotations are not working as expected.</p>
| migueltaoliveira | <p>I looked into this matter and I have found a solution to your issue. </p>
<p><strong>To achieve sticky sessions for both paths you will need two Ingress definitions.</strong></p>
<p>I created an example configuration to show you the whole process: </p>
<p><strong>Steps to reproduce:</strong> </p>
<ul>
<li>Apply Ingress definitions </li>
<li>Create deployments</li>
<li>Create services</li>
<li>Create Ingresses </li>
<li>Test </li>
</ul>
<p>I assume that the cluster is provisioned and is working correctly. </p>
<h2>Apply Ingress definitions</h2>
<p>Follow this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">Ingress link</a> to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure. </p>
<p>Run the command below to apply all the mandatory prerequisites: </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<p>Run the command below to apply the generic configuration that creates a service: </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
</code></pre>
<h2>Create deployments</h2>
<p>Below are 2 example deployments to respond to the Ingress traffic on specific services: </p>
<p><strong>hello.yaml:</strong> </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
selector:
matchLabels:
app: hello
version: 1.0.0
replicas: 5
template:
metadata:
labels:
app: hello
version: 1.0.0
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:1.0"
env:
- name: "PORT"
value: "50001"
</code></pre>
<p>Apply this first deployment configuration by invoking command:</p>
<p><code>$ kubectl apply -f hello.yaml</code></p>
<p><strong>goodbye.yaml:</strong> </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: goodbye
spec:
selector:
matchLabels:
app: goodbye
version: 2.0.0
replicas: 5
template:
metadata:
labels:
app: goodbye
version: 2.0.0
spec:
containers:
- name: goodbye
image: "gcr.io/google-samples/hello-app:2.0"
env:
- name: "PORT"
value: "50001"
</code></pre>
<p>Apply this second deployment configuration by invoking command:</p>
<p><code>$ kubectl apply -f goodbye.yaml</code></p>
<p>Check if deployments configured pods correctly: </p>
<p><code>$ kubectl get deployments</code></p>
<p>It should show something like that: </p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
goodbye 5/5 5 5 2m19s
hello 5/5 5 5 4m57s
</code></pre>
<h2>Create services</h2>
<p>To connect to earlier created pods you will need to create services. Each service will be assigned to one deployment. Below are 2 services to accomplish that:</p>
<p><strong>hello-service.yaml:</strong> </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-service
spec:
type: NodePort
selector:
app: hello
version: 1.0.0
ports:
- name: hello-port
protocol: TCP
port: 50001
targetPort: 50001
</code></pre>
<p>Apply first service configuration by invoking command:</p>
<p><code>$ kubectl apply -f hello-service.yaml</code></p>
<p><strong>goodbye-service.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: goodbye-service
spec:
type: NodePort
selector:
app: goodbye
version: 2.0.0
ports:
- name: goodbye-port
protocol: TCP
port: 50001
targetPort: 50001
</code></pre>
<p>Apply second service configuration by invoking command:</p>
<p><code>$ kubectl apply -f goodbye-service.yaml</code></p>
<p><strong>Keep in mind that both configurations use type: <code>NodePort</code></strong></p>
<p>Check if services were created successfully: </p>
<p><code>$ kubectl get services</code> </p>
<p>Output should look like that:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
goodbye-service NodePort 10.0.5.131 <none> 50001:32210/TCP 3s
hello-service NodePort 10.0.8.13 <none> 50001:32118/TCP 8s
</code></pre>
<h2>Create Ingresses</h2>
<p>To achieve sticky sessions you will need to create 2 ingress definitions. </p>
<p>Definitions are provided below: </p>
<p><strong>hello-ingress.yaml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /
backend:
serviceName: hello-service
servicePort: hello-port
</code></pre>
<p><strong>goodbye-ingress.yaml:</strong> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: goodbye-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /v2/
backend:
serviceName: goodbye-service
servicePort: goodbye-port
</code></pre>
<p>Please change <code>DOMAIN.NAME</code> in both Ingresses to a value appropriate to your case.
I would advise looking at this <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="noreferrer">Ingress Sticky session</a> link.
Both Ingresses are configured for HTTP-only traffic. </p>
<p>Apply both of them invoking command: </p>
<p><code>$ kubectl apply -f hello-ingress.yaml</code></p>
<p><code>$ kubectl apply -f goodbye-ingress.yaml</code></p>
<p>Check if both configurations were applied: </p>
<p><code>$ kubectl get ingress</code></p>
<p>Output should be something like this: </p>
<pre class="lang-sh prettyprint-override"><code>NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress DOMAIN.NAME IP_ADDRESS 80 26m
hello-ingress DOMAIN.NAME IP_ADDRESS 80 26m
</code></pre>
<h2>Test</h2>
<p>Open your browser and go to <code>http://DOMAIN.NAME</code>
Output should be like this: </p>
<pre><code>Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb
</code></pre>
<p><code>Hostname: hello-549db57dfd-4h8fb</code> is the name of the pod. Refresh it a couple of times. </p>
<p>It should stay the same. </p>
<p>To check if another route is working go to <code>http://DOMAIN.NAME/v2/</code>
Output should be like this: </p>
<pre><code>Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg
</code></pre>
<p><code>Hostname: goodbye-7b5798f754-pbkbg</code> is the name of the pod. Refresh it a couple of times. </p>
<p>It should stay the same. </p>
<p>To ensure that the cookie is not changing, open developer tools (usually F12) and navigate to the cookies view. You can reload the page to check that it stays the same. </p>
<p><a href="https://i.stack.imgur.com/Odr0O.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Odr0O.png" alt="Cookies"></a></p>
| Dawid Kruk |
<p>I have the following command:</p>
<pre><code>kubectl delete -n vnamespace secret --field-selector type=my_secret
</code></pre>
<p>this works fine but delete all secrets. I want to delete secrets older than 24 hours. How can I add it?</p>
| Slava | <p>You can use <code>jq</code> to parse kubectl output and list all secrets older than 24h, and pass it to delete.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete secret -n vnamespace $(kubectl -n vnamespace get secrets --field-selector type=my_secret -o json | jq -r "[.items[] | {name: .metadata.name, startTime: .metadata.creationTimestamp | fromdate } | select(.startTime < (now | . - 86400))]" | jq -r ".[].name")
</code></pre>
| Cloudziu |
<p>I have an operator which runs a reconcile for some object changes. Now I want to add the ability to reconcile when a specific <code>configmap</code> changes (my operator <strong>isn't</strong> responsible for this CM, it just needs to listen to it and read it on changes...). From the docs I think I need to use <code>Owns(&corev1.Configmap{})</code>, but I'm not sure how to do it and provide a specific configmap name to watch.</p>
<p>How should I refer to the specific configmap <code>name: foo</code> in <code>namespace=bar</code>?</p>
<p><a href="https://sdk.operatorframework.io/docs/building-operators/golang/references/event-filtering/#using-predicates" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/golang/references/event-filtering/#using-predicates</a></p>
| Jenney | <p>I haven't used this specific operator framework, but the concepts are familiar. Create a predicate function like this and use it when you are creating a controller by passing it into the SDK's <code>WithEventFilter</code> function:</p>
<pre><code>func specificConfigMap(name, namespace string) predicate.Predicate {
return predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
      // controller-runtime exposes the updated object as e.ObjectNew
      configmap, ok := e.ObjectNew.(*corev1.ConfigMap)
      if !ok {
        // Not a ConfigMap (e.g. the primary resource); let the event through
        return true
      }
      return configmap.Name == name && configmap.Namespace == namespace
},
}
}
</code></pre>
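<p>Wiring it in could look roughly like the sketch below. Treat it as an outline: <code>MyReconciler</code> and <code>myv1.MyResource</code> are placeholders for whatever the operator actually reconciles, and <code>WithEventFilter</code> applies the predicate to every watched type, so the predicate has to tolerate objects that are not ConfigMaps. Also note that <code>Owns</code> only maps events for ConfigMaps that carry an owner reference to your resource; if the ConfigMap is not owned by it, you may need a <code>Watches(...)</code> with a custom handler instead.</p>
<pre><code>import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	// myv1 is a placeholder for your own API group/version package
)

// SetupWithManager registers the controller and attaches the predicate
// defined above to all watched types.
func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&myv1.MyResource{}).
		Owns(&corev1.ConfigMap{}).
		WithEventFilter(specificConfigMap("foo", "bar")).
		Complete(r)
}
</code></pre>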
| Clark McCauley |
<p>I am trying to copy files from a <strong>Kubernetes</strong> pod to a <strong>GCP Bucket</strong>.
I can get the path of my file, but I was wondering if I want to do this programmatically using python, how can I do this.</p>
<p>I get my buckets using <code>gcsfs</code>. How can I copy a file in my program without using <code>kubectl</code>?</p>
<p>Is there anyway to do this through python.</p>
| ashes999 | <p>I need to agree with the comment made by @anemyte:</p>
<blockquote>
<p>There is <a href="https://stackoverflow.com/questions/59703610/copy-file-from-pod-to-host-by-using-kubernetes-python-client">this question</a> about how to copy a file from a pod. You can download it and then use your code to upload it to the bucket.</p>
</blockquote>
<hr />
<p>I see 2 possible solutions to this question:</p>
<ul>
<li>Use <code>GCS Fuse</code> and Python code to copy the file from your <code>Pod</code> to <code>GCS</code> bucket</li>
<li>Use the Python library to connect to the GCS bucket without <code>gcsfuse</code></li>
</ul>
<hr />
<h3>Use <code>GCS Fuse</code> and Python code to copy the file from your <code>Pod</code> to <code>GCS</code> bucket</h3>
<p>Assuming that you have a <code>Pod</code> that was configured with <code>GCS Fuse</code> and it's working correctly you can use a following code snippet to copy the files (where in <code>dst</code> you pass the mounted directory of a bucket):</p>
<blockquote>
<pre class="lang-py prettyprint-override"><code>from shutil import copyfile
copyfile(src, dst)
</code></pre>
<p>-- <em><a href="https://stackoverflow.com/questions/123198/how-can-a-file-be-copied">Stackoverflow.com: Questions: 123198: How can a file be copied</a></em></p>
</blockquote>
<hr />
<h3>Use the Python library to connect to the GCS bucket without <code>GCS Fuse</code></h3>
<p>As pointed by community member @anemyte, you can use the Cloud Storage client libraries to programmatically address your question:</p>
<ul>
<li><em><a href="https://cloud.google.com/storage/docs/reference/libraries" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs: Reference: Libraries</a></em></li>
</ul>
<p>There is a Python code snippet that addresses the upload operation:</p>
<blockquote>
<pre class="lang-py prettyprint-override"><code>from google.cloud import storage
def upload_blob(bucket_name, source_file_name, destination_blob_name):
"""Uploads a file to the bucket."""
# The ID of your GCS bucket
# bucket_name = "your-bucket-name"
# The path to your file to upload
# source_file_name = "local/path/to/file"
# The ID of your GCS object
# destination_blob_name = "storage-object-name"
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
print(
"File {} uploaded to {}.".format(
source_file_name, destination_blob_name
)
)
</code></pre>
</blockquote>
<p>Please have in mind that you will need to have appropriate permissions to use the GCS bucket. You can read more about it by following below link:</p>
<ul>
<li><em><a href="https://cloud.google.com/storage/docs/uploading-objects#prereqs" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs Uploading objects: Prerequisites</a></em></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>You can also use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> as one of the ways to assign required permissions to your <code>Pod</code>.</p>
</blockquote>
<hr />
<h3>Additional resources:</h3>
<ul>
<li><em><a href="https://github.com/maciekrb/GCS-Fuse-sample" rel="nofollow noreferrer">Github.com: Maciekrb: Gcs fuse example</a></em></li>
<li><em><a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs: GCS Fuse</a></em></li>
</ul>
<p>It crossed my mind that you might want to use Python <strong>outside</strong> of the <code>Pod</code> (for example from your laptop) to copy the file from the <code>Pod</code> to the <code>GCS bucket</code>. In that case you could follow this example:</p>
<ul>
<li><em><a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_exec.py" rel="nofollow noreferrer">Github.com: Kubernetes client: Python: Examples: Pod exec.py</a></em></li>
</ul>
| Dawid Kruk |
<p>I want to connect two containers with each other ... I start with the creation of an overlay-network <code>mynet</code>:</p>
<pre><code>docker network create -d overlay mynet
</code></pre>
<p>After that, I've created the first service <code>activemq</code>:</p>
<pre><code>docker service create --name activemq -p 61616:61616 -p 8161:8161 --replicas 1 --network mynet rmohr/activemq
</code></pre>
<p>This starts and works perfectly fine, I also can access the WebUI http://localhost:8161/admin/</p>
<p>Now I want to start my service <a href="https://hub.docker.com/repository/docker/ni920/timeserviceplain" rel="nofollow noreferrer">TimeService</a> I have the following settings in the container:</p>
<pre><code>docker service create --name timeservice -p 7000:7000 --replicas 1 --network mynet ni920/timeserviceplain:latest
</code></pre>
<pre><code>java.naming.provider.url=tcp://localhost:61616
java.naming.user=admin
java.naming.password=admin
io.jexxa.rest.host=0.0.0.0
io.jexxa.rest.port=7000
</code></pre>
<p>So it should connect via <code>tcp://localhost:61616</code> with the <code>ActiveMQ</code> but it doesn't.</p>
<p>Do you guys have any clue what I should try? By the way, the communication works perfectly in a non-<code>Swarm</code> environment or in a <code>Kubernetes-Pod</code>.</p>
| Nico | <p>If you want your containers to communicate with each other, you can use their names and let the network driver resolve their IPs.</p>
<p>Here is the <a href="https://docs.docker.com/network/#network-driver-summary" rel="nofollow noreferrer">network driver summary</a> from docker docs:</p>
<blockquote>
<ul>
<li><strong>User-defined bridge networks</strong> are best when you need multiple containers to communicate on the same Docker host.</li>
<li><strong>Host networks</strong> are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.</li>
<li><strong>Overlay networks</strong> are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.</li>
<li><strong>Macvlan networks</strong> are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.</li>
<li><strong>Third-party network plugins</strong> allow you to integrate Docker with specialized network stacks.</li>
</ul>
</blockquote>
<hr />
<p>In your case, replace <strong>localhost</strong> with the service name <strong>activemq</strong>.</p>
<pre><code>java.naming.provider.url=tcp://activemq:61616
.
.
</code></pre>
| dr0plet35 |
<p>As part of writing logic for listing pods within a given k8s node I have the following API call:</p>
<pre><code>func ListRunningPodsByNodeName(kubeClient kubernetes.Interface, nodeName string) (*v1.PodList, error) {
return kubeClient.
CoreV1().
Pods("").
List(context.TODO(), metav1.ListOptions{
FieldSelector: "spec.nodeName=" + nodeName,
})
}
</code></pre>
<p>In order to test <strong>ListRunningPodsByNodeName</strong> using the fake client provided by k8s, I came up with the following test initialization:</p>
<pre><code>func TestListRunningPodsByNodeName(t *testing.T) {
// happy path
kubeClient := fake.NewSimpleClientset(&v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod1",
Namespace: "default",
Annotations: map[string]string{},
},
Spec: v1.PodSpec{
NodeName: "foo",
},
}, &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod2",
Namespace: "default",
Annotations: map[string]string{},
},
Spec: v1.PodSpec{
NodeName: "bar",
},
})
got, _ := ListRunningPodsByNodeName(kubeClient, "foo")
for i, pod := range got.Items {
fmt.Println(fmt.Sprintf("[%2d] %s", i, pod.GetName()))
}
t.Errorf("Error, expecting only one pod")
}
</code></pre>
<p>When debugging, I got both <strong>pod1</strong> and <strong>pod2</strong> Pods returned despite filtering by those running on the <strong>foo</strong> node. Using this same approach for filtering by certain metadata works like a charm, but I can't make it work when filtering by nodeName. Does anyone know why? I suspect it might be a limitation of the fake client's capabilities, but I'm not sure enough to open an issue yet.</p>
<p>Thanks in advance</p>
| Borja Tur | <p>The fake k8s client does not support filtering by field selector (see <a href="https://github.com/kubernetes/client-go/issues/326#issuecomment-412993326" rel="nofollow noreferrer">this comment</a>). When unit testing with the fake k8s client, it's best to assume that the k8s client will work as expected in the real world (return the correct pods based on your field selector query). In your test, provide the pods to the fake k8s client that <strong>your application</strong> expects and test your own logic, rather than also testing the query logic of the k8s client.</p>
<p>If it's absolutely critical that the fake client perform the filtering for you, you may be able to use the fake client reactors to inject this custom behavior into the fake client. It just means more boilerplate code.</p>
<blockquote>
<p>Anything non-generic (like field selection behavior) can be injected in your tests by adding reactors that deal with specific types of actions, use additional info in the action (in this case, ListAction#GetListRestrictions().Fields), and customize the data returned</p>
</blockquote>
<p>I haven't tested this at all but hopefully it gives you something to start with.</p>
<pre><code>client := fake.NewSimpleClientset()
client.AddReactor("*", "MyResource", func(action testing.Action) (handled bool, ret runtime.Object, err error) {
// Add custom filtering logic here
})
</code></pre>
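<p>For this particular case, a somewhat fuller (still untested) sketch of such a reactor could look like the following; the <code>spec.nodeName</code> field key mirrors the query in the question and the exact wiring is an assumption:</p>
<pre><code>import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/fake"
	k8stesting "k8s.io/client-go/testing"
)

// newFakeClientWithNodeNameFiltering returns a fake clientset whose "list pods"
// calls honor a spec.nodeName field selector by filtering the tracker contents.
func newFakeClientWithNodeNameFiltering(objects ...runtime.Object) *fake.Clientset {
	client := fake.NewSimpleClientset(objects...)
	client.PrependReactor("list", "pods", func(action k8stesting.Action) (bool, runtime.Object, error) {
		listAction := action.(k8stesting.ListAction)
		fieldSelector := listAction.GetListRestrictions().Fields
		// Pull everything from the tracker, then filter by spec.nodeName ourselves.
		obj, err := client.Tracker().List(
			schema.GroupVersionResource{Version: "v1", Resource: "pods"},
			schema.GroupVersionKind{Version: "v1", Kind: "Pod"},
			action.GetNamespace(),
		)
		if err != nil {
			return true, nil, err
		}
		pods := obj.(*v1.PodList)
		filtered := &v1.PodList{}
		for _, pod := range pods.Items {
			if fieldSelector.Matches(fields.Set{"spec.nodeName": pod.Spec.NodeName}) {
				filtered.Items = append(filtered.Items, pod)
			}
		}
		return true, filtered, nil
	})
	return client
}
</code></pre>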
| Clark McCauley |
<p>I have a container running in a GKE autopilot K8s cluster. I have the following in my deployment manifest (only relevant parts included):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
resources:
requests:
memory: "250Mi"
cpu: "512m"
</code></pre>
<p>So I've requested the minimum resources that <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="noreferrer">GKE autopilot allows for normal pods</a>. Note that I have not specified a <code>limits</code>.</p>
<p>However, having applied the manifest and looking at the yaml I see that it does not match what's in the manifest I applied:</p>
<pre><code> resources:
limits:
cpu: 750m
ephemeral-storage: 1Gi
memory: 768Mi
requests:
cpu: 750m
ephemeral-storage: 1Gi
memory: 768Mi
</code></pre>
<p>Any idea what's going on here? Why has GKE scaled up the resources. This is costing me more money unnecessarily?</p>
<p>Interestingly it was working as intended until recently. This behaviour only seemed to start in the past few days.</p>
| harryg | <p>If the resources that you've requested are following:</p>
<pre><code> memory: "250Mi"
cpu: "512m"
</code></pre>
<p>Then they are not compliant with the minimal amount of resources that <code>GKE Autopilot</code> will assign. Please take a look on the documentation:</p>
<blockquote>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAME</th>
<th>Normal Pods</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>250 mCPU</td>
</tr>
<tr>
<td>Memory</td>
<td><strong>512 MiB</strong></td>
</tr>
<tr>
<td>Ephemeral storage</td>
<td>10 MiB (per container)</td>
</tr>
</tbody>
</table>
</div>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Allowable resource ranges</a></em></p>
</blockquote>
<p><strong>As you can see the amount of memory you've requested was too small</strong> and that's why you saw the following message (and the manifest was modified to increase the <code>requests</code>/<code>limits</code>):</p>
<pre class="lang-sh prettyprint-override"><code>Warning: Autopilot increased resource requests for Deployment default/XYZ to meet requirements. See http://g.co/gke/autopilot-resources.
</code></pre>
<p>To fix that you will need to assign resources that are within the limits from the documentation I've linked above.</p>
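<p>As an illustration, a compliant request block could look like this (one possible combination; Autopilot also expects CPU in 250m increments, which is likely why the original 512m was rounded up to 750m):</p>
<pre><code>        resources:
          requests:
            memory: "512Mi"   # Autopilot minimum for normal Pods
            cpu: "250m"       # must be a multiple of 250m
</code></pre>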
| Dawid Kruk |
<p>I have deployed pods running nginx using helm, but when I do minikube service service_name, I see my service running on localhost as shown below.
<a href="https://i.stack.imgur.com/IEACP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IEACP.png" alt="enter image description here" /></a></p>
<p>I thought that you need to access the service via the cluster IP not localhost?</p>
<p>I tried to access it using the cluster ip with the port of the service, but it doesn't seem to work.</p>
<p>I also tried to run it again after stopping docker, but it seems that docker is required to start the kubernetes cluster.</p>
<p>I'm following this <a href="https://www.youtube.com/watch?v=vQX5nokoqrQ&t=1214s" rel="nofollow noreferrer">kubecon demo</a> , in the demo she can access it using the cluster ip just fine.</p>
| allen | <p>It seems that the problem is that the cluster was created using the default Docker driver.</p>
<p>Here's the thread where I found the solution: <a href="https://stackoverflow.com/questions/63600378/cant-access-minikube-service-using-nodeport-from-host-on-mac">Can't access minikube service using NodePort from host on Mac</a></p>
<p>I just needed to start the minikube cluster using VirtualBox as the driver.</p>
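<p>For reference, that boils down to recreating the cluster with the VirtualBox driver, roughly:</p>
<pre><code>minikube delete
minikube start --driver=virtualbox   # --vm-driver on older minikube versions
</code></pre>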
| allen |
<p>I have trouble with nginx ingress when using a condition.</p>
<p>I installed Nginx Ingress via helm install (tested with both nginx-ingress-1.16 and nginx-ingress-1.36).</p>
<p>I am trying to follow
<a href="https://stackoverflow.com/questions/59230411/ingress-nginx-redirect-from-www-to-https">ingress nginx redirect from www to https</a>
and set up a condition</p>
<p>like</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
if ( $host = "mydomain.co" ) {
rewrite ^ https://www.mydomain.co$uri permanent;
}
</code></pre>
<p>When I apply the ingress rule, the nginx ingress config reload starts failing:</p>
<pre><code>-------------------------------------------------------------------------------
W0602 07:35:36.244415 6 queue.go:130] requeuing vincent/demoheader-ingress, err
-------------------------------------------------------------------------------
Error: exit status 1
2020/06/02 07:35:36 [notice] 982#982: ModSecurity-nginx v1.0.0
2020/06/02 07:35:36 [emerg] 982#982: invalid condition "~" in /tmp/nginx-cfg971999838:530
nginx: [emerg] invalid condition "~" in /tmp/nginx-cfg971999838:530
nginx: configuration file /tmp/nginx-cfg971999838 test failed
</code></pre>
<p>My Full ingress rule</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demoheader-ingress
namespace: namespace
annotations:
kubernetes.io/ingress.class: nginx-temp
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
if ( $uri ~* ^/xx/(.*) ) {
rewrite ^ https://www.xxx.co permanent;
}
spec:
rules:
- host: mydomain
http:
paths:
- backend:
serviceName: header-headerv1
servicePort: 80
path: /
EOF
</code></pre>
<p>Any idea ?</p>
| Vincent Ngai | <p>OK, I know what happened here.</p>
<p>I encountered a weird issue with the k8s apply workflow.
While the official documentation tells you that you can apply an object with this method:</p>
<pre><code>cat <<EOF | kubectl apply -f -
xxx
yyy
eee
EOF
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></p>
<p>However for ingress rule , if you under a specific condition such like this</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
if ( $host = ^mydomain ) {
rewrite ^ https://www.mydomain$uri permanent;
}
</code></pre>
<p>you are unable to make nginx work again (the config reload never succeeds, and it affects the following config changes)</p>
<p>until you delete the ingress rule and re-apply it with
<code>kubectl apply -f the-ingress-file</code></p>
| Vincent Ngai |
<p>I'm doing a lab and can't understand this:</p>
<blockquote>
<p>Kubectl has support for auto-completion allowing you to discover the
available options. This is applied to the current terminal session
with source <(kubectl completion bash)</p>
</blockquote>
<p>The cmd:</p>
<pre><code>source <(kubectl completion bash)
</code></pre>
<p>sources-in what?</p>
| ERJAN | <ul>
<li><code>source</code> (synonym for <code>.</code>) is a bash built in command which executes the given file in the current shell environment</li>
<li><code><(command ...)</code> is process substitution - the output of the commands are passed as a file</li>
<li>bash completion is implemented with shell functions, which must be set in the current shell environment</li>
<li>You can view the code that's executed to set up the completion functions: <code>kubectl completion bash</code></li>
</ul>
| dan |
<p>After a few days of running Couchbase 6.5.1 in kubernetes the defined "CouchbaseCluster" resource disappears, resulting in the deletion of my couchbase pods.</p>
<p>After a bit of digging i found that the Admissions Operator pod logs contained continual updates to the CouchbaseCluster resource more than once per second:</p>
<pre><code>I1021 15:05:20.013984 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:20.061531 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:20.613922 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:20.620427 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:21.414017 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:21.460600 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:22.013887 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:22.060931 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:22.413665 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:22.420773 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:23.014797 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:23.023459 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:23.614544 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:23.661482 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:24.014503 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:24.021428 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:24.613723 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:24.639612 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:25.217866 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:25.223814 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:25.614774 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:25.662553 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:26.213481 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:26.221502 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:26.813576 1 admission.go:185] Mutating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
I1021 15:05:26.820181 1 admission.go:137] Validating resource: UPDATE couchbase.com/v2, Kind=CouchbaseCluster test/cb-example
</code></pre>
<p>This causes the generation number on the CouchbaseCluster type definition to climb rapidly. After just 15 mins it gets to 1500. I suspect that this behaviour is not normal and eventually the CouchbaseCluster resource is deleted by kubernetes.</p>
<p>This behaviour occurs with the most basic CouchbaseCluster definition:</p>
<pre><code>apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
name: cb-example
spec:
image: couchbase/server:6.5.1
security:
adminSecret: cb-auth
networking:
exposeAdminConsole: true
adminConsoleServices:
- data
buckets:
managed: true
servers:
- size: 3
name: all_services
services:
- data
- index
- query
- search
- eventing
- analytics
</code></pre>
<p>Are the operator admissions logs normal?</p>
<p>How do I debug it further?</p>
| Paul Clark | <p>The disappearing cluster turned out to be this issue:
<a href="https://forums.couchbase.com/t/multiple-couchbase-environments-in-one-kubernetes-cluster/28343" rel="nofollow noreferrer">https://forums.couchbase.com/t/multiple-couchbase-environments-in-one-kubernetes-cluster/28343</a></p>
| Paul Clark |
<p>I was trying readiness probe in kubernetes with a springboot app. After the app starts, lets say after 60 seconds I fire <code>ReadinessState.REFUSING_TRAFFIC</code> app event.</p>
<p>I use port-forward for kubernetes service(Cluster-Ip) and checked /actuator/health/readiness and see
<code>"status":"OUT_OF_SERVICE"</code> after 60 seconds.</p>
<p>I, then fire some GET/POST requests to service.</p>
<p>Expected:
Service unavailable message</p>
<p>Actual:
GET/POST endpoints return data as usual</p>
<p>Is this behavior expected? Please comment.</p>
<p>Sample liveness/readiness probe yaml</p>
<pre><code> livenessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: Authorization
value: Basic xxxxxxxxxxxxxx
path: /actuator/health/liveness
port: http
scheme: HTTP
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 10
name: sample-app
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: Authorization
value: Basic xxxxxxxxxxxxxx
path: /actuator/health/readiness
port: http
scheme: HTTP
initialDelaySeconds: 140
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 10
</code></pre>
| v47 | <p>This is expected behavior as:</p>
<ul>
<li><code>$ kubectl port-forward service/SERVICE_NAME LOCAL_PORT:TARGET_PORT</code></li>
</ul>
<p>is not considering the state of the <code>Pod</code> when doing a port forwarding (directly connects to a <code>Pod</code>).</p>
<hr />
<h3>Explanation</h3>
<p>There is already a great answer which pointed me on further investigation here:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/59941521/12257134">Stackoverflow.com: Answer: Does kubectl port-forward ignore loadBalance services?</a></em></li>
</ul>
<p>Let's assume that you have a <code>Deployment</code> with a <code>readinessProbe</code> (in this example probe will never succeed):</p>
<ul>
<li><code>$ kubectl get pods, svc</code> (redacted not needed part)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-64ff4d8749-7khht 0/1 Running 0 97m
pod/nginx-deployment-64ff4d8749-bklnf 0/1 Running 0 97m
pod/nginx-deployment-64ff4d8749-gsmml 0/1 Running 0 97m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx ClusterIP 10.32.31.105 <none> 80/TCP 97m
</code></pre>
<ul>
<li><code>$ kubectl describe endpoints nginx</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: <none>
NotReadyAddresses: 10.36.0.62,10.36.0.63,10.36.0.64 # <-- IMPORTANT
Ports:
Name Port Protocol
---- ---- --------
<unset> 80 TCP
Events: <none>
</code></pre>
<p>As you can see all of the <code>Pods</code> are not in <code>Ready</code> state and the <code>Service</code> will not send the traffic to it. This can be seen in a following scenario ( create a test <code>Pod</code> that will try to <code>curl</code> the <code>Service</code>):</p>
<ul>
<li><code>$ kubectl run -it --rm nginx-check --image=nginx -- /bin/bash</code></li>
<li><code>$ curl nginx.default.svc.cluster.local</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>curl: (7) Failed to connect to nginx.default.svc.cluster.local port 80: Connection refused
</code></pre>
<p>Using <code>kubectl port-forward</code>:</p>
<ul>
<li><code>$ kubectl port-forward service/nginx 8080:80</code></li>
<li><code>$ curl localhost:8080</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code><-- REDACTED -->
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<-- REDACTED -->
</code></pre>
<p>More light can be shed on why it happened by getting more verbose output from the command:</p>
<ul>
<li><code>kubectl port-forward service/nginx 8080:80 -v=6</code> (the number can be higher)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>I0606 21:29:24.986382 7556 loader.go:375] Config loaded from file: /SOME_PATH/.kube/config
I0606 21:29:25.041784 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/services/nginx 200 OK in 51 milliseconds
I0606 21:29:25.061334 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/pods?labelSelector=app%3Dnginx 200 OK in 18 milliseconds
I0606 21:29:25.098363 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/pods/nginx-deployment-64ff4d8749-7khht 200 OK in 18 milliseconds
I0606 21:29:25.164402 7556 round_trippers.go:444] POST https://API_IP/api/v1/namespaces/default/pods/nginx-deployment-64ff4d8749-7khht/portforward 101 Switching Protocols in 62 milliseconds
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
</code></pre>
<p>What happened:</p>
<ul>
<li><code>kubectl</code> requested the information about the <code>Service</code>: <code>nginx</code></li>
<li><code>kubectl</code> used the <code>selector</code> associated with the <code>Service</code> and looked for <code>Pods</code> with the same <code>selector</code> (<code>nginx</code>)</li>
<li><code>kubectl</code> chose a single <code>Pod</code> and port-forwarded to it.</li>
</ul>
<p>The <code>Nginx</code> welcome page showed up because the <code>port-forward</code> connected directly to a <code>Pod</code> and not to the <code>Service</code> (which would have excluded the not-ready <code>Pods</code>).</p>
<hr />
<p>Additional reference:</p>
<ul>
<li><p><em><a href="https://github.com/kubernetes/kubernetes/issues/15180" rel="nofollow noreferrer">Github.com: Kubernetes: Issues: kubectl port-forward should allow forwarding to a Service</a></em></p>
</li>
<li><p><code>$ kubectl port-forward --help</code></p>
</li>
</ul>
<blockquote>
<p># Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a <strong>pod selected by
the service</strong></p>
<blockquote>
<p><code>kubectl port-forward service/myservice 8443:https</code></p>
</blockquote>
</blockquote>
| Dawid Kruk |
<p>I've got an issue, I'm trying to install linkerd on my cluster, all is going well</p>
<p>I went exactly with this official README</p>
<pre class="lang-sh prettyprint-override"><code>https://linkerd.io/2.11/tasks/install-helm/
</code></pre>
<p>installed it via helm</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro-6% helm list -n default
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
linkerd2 default 1 2021-12-15 15:47:10.823551 +0100 CET deployed linkerd2-2.11.1 stable-2.11.1
</code></pre>
<p>linkerd itself works, and the <code>linkerd check</code> command as well</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro-6% linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
</code></pre>
<p>but when I try to install <code>viz</code> dashboard as described in the <a href="https://linkerd.io/2.11/getting-started/" rel="nofollow noreferrer">getting-started</a> page I run</p>
<pre class="lang-sh prettyprint-override"><code>linkerd viz install | kubectl apply -f -
</code></pre>
<p>and when going with</p>
<pre class="lang-sh prettyprint-override"><code>linkerd check
...
Status check results are √
Linkerd extensions checks
=========================
/ Running viz extension check
</code></pre>
<p>and it keeps on checking the viz extension, and when I run <code>linkerd dashboard</code> (deprecated, I know) it shows the same error</p>
<pre class="lang-sh prettyprint-override"><code>Waiting for linkerd-viz extension to become available
</code></pre>
<p>Anyone got any clue what I'm doing wrong? I've been stuck at this part for 2 hours and no one seems to have any answers.</p>
<p>Note: when I run <code>linkerd check</code> after installation of viz I get</p>
<pre class="lang-sh prettyprint-override"><code>
linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
‼ tap API service is running
FailedDiscoveryCheck: failing or missing response from https://10.190.101.142:8089/apis/tap.linkerd.io/v1alpha1: Get "https://10.190.101.142:8089/apis/tap.linkerd.io/v1alpha1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
see https://linkerd.io/2.11/checks/#l5d-tap-api for hints
‼ linkerd-viz pods are injected
could not find proxy container for grafana-8d54d5f6d-cv7q5 pod
see https://linkerd.io/2.11/checks/#l5d-viz-pods-injection for hints
√ viz extension pods are running
× viz extension proxies are healthy
No "linkerd-proxy" containers found in the "linkerd" namespace
see https://linkerd.io/2.11/checks/#l5d-viz-proxy-healthy for hints
</code></pre>
<hr />
<p>debugging</p>
| CptDolphin | <p>From your problem descripiton:</p>
<blockquote>
<p>‼ linkerd-viz pods are injected
could not find proxy container for grafana-8d54d5f6d-cv7q5 pod
see <a href="https://linkerd.io/2.11/checks/#l5d-viz-pods-injection" rel="nofollow noreferrer">https://linkerd.io/2.11/checks/#l5d-viz-pods-injection</a> for hints</p>
</blockquote>
<p>and:</p>
<blockquote>
<p>MacBook-Pro-6% helm list -n default</p>
</blockquote>
<p>I encountered a similar problem but with <code>flagger</code> pod rather than <code>grafana</code> pod (I didn't attempt to install <code>grafana</code> component like you did).</p>
<p>A side effect of my problem is this:</p>
<pre><code>$ linkerd viz dashboard
Waiting for linkerd-viz extension to become available
Waiting for linkerd-viz extension to become available
Waiting for linkerd-viz extension to become available
... ## repeating for 5 minutes or so before popping up the dashboard in browser.
</code></pre>
<p>The cause for my problem turned out to be that I installed the <code>viz</code> extension into the <code>linkerd</code> namespace. It should belong to the <code>linkerd-viz</code> namespace.</p>
<p>Looking at your original problem description, it seems that you installed the control plane into the <code>default</code> namespace (as opposed to the <code>linkerd</code> namespace.) While you can use any namespace you want, the control plane must be in a separate namespace from the <code>viz</code> extension. Details can be seen in the discussion I wrote here:</p>
<ul>
<li><a href="https://github.com/linkerd/website/issues/1309" rel="nofollow noreferrer">https://github.com/linkerd/website/issues/1309</a></li>
</ul>
| Vincent Yin |
<p>I want to understand what happens behind the scene if a liveness probe fails in kubernetes ?</p>
<p>Here is the context:</p>
<p>We are using a <em>Helm Chart</em> for deploying our application in the Kubernetes cluster.</p>
<p>We have a StatefulSet and a headless service. To initialize mTLS, we have created a <em>'job'</em> kind and in 'command' we are passing shell and Python scripts as arguments.</p>
<p>We have written a <em>'docker-entrypoint.sh'</em> inside <em>'docker image'</em> for some initialization work.</p>
<p>Inside statefulSet, we are passing a shell script as a command in <em>'livenessProbe'</em> which runs every 30 seconds.</p>
<p>I want to know, if my livenessProbe fails for any reason:
1. Does the Helm chart monitor this probe and restart the container, or is that K8s' responsibility?
2. Will my 'docker-entryPoint.sh' execute if the container is restarted?
3. Will the 'Job' execute at the time of the container restart?</p>
<p>How does Kubernetes handle a livenessProbe failure and what steps does it take?</p>
| Pushpendra | <p>To answer your question: liveness and readiness probes are checks (HTTP GET, TCP or exec calls) that the kubelet runs against your application Pod to decide whether it is healthy.
This is not related to Helm charts; the kubelet on the node is what monitors the probes.
Once the liveness probe fails (failureThreshold times in a row) the kubelet restarts the container, which runs its entrypoint again; a failing readiness probe does not restart the container, it only removes the Pod from the Service endpoints.
I would say these liveness probe failures can affect your app uptime, so use a rolling deployment and autoscale your pod counts to keep the app available.</p>
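<p>To see how Kubernetes reacts, the probe failures and the resulting restarts show up in the Pod's events and restart counter, for example:</p>
<pre><code>kubectl describe pod <pod-name>   # look for "Liveness probe failed" events
kubectl get pod <pod-name> -w     # the RESTARTS column increases once failureThreshold is hit
</code></pre>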
| Sunjay Jeffrish |
<p>We are using <code>kubernetes/ingress-nginx</code> for our Azure AKS instance. I have a URI that is 9kb long approximately (it contains a <code>post_logout_redirect_uri</code> and a very long <code>id_token_hint</code> for our Identity server, running in .Net core 2.2).</p>
<p>However, I cannot get past the ingress as nginx is rejecting the query with <code>414 URI Too Long</code>. I can see the request in the Nginx logs but not on the Identity server logs, so it is clearly getting bounced before.</p>
<p>I have tried to update the nginx configuration using config map, but without success. The settings are applied (and have helped me fix other issues before). However, in this case nothing I try seems to have worked. Here is the config map I'm using:</p>
<pre><code>apiVersion: v1
data:
http2-max-header-size: "64k"
http2-max-field-size: "32k"
proxy-body-size: "100m"
client-header-buffer-size: "64k"
large-client-header-buffers: "4 64k"
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: kube-system
</code></pre>
<p>Here are the ingress annotations for the Identity server:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress-name
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/send_timeout: "180"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "authorization,content-type"
nginx.ingress.kubernetes.io/proxy-body-size: 250m
nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
</code></pre>
<p>Finally, if I check the nginx config on the pod it does contain my updated values, in the global config section.</p>
<pre><code>...
keepalive_timeout 75s;
keepalive_requests 100;
client_body_temp_path /tmp/client-body;
fastcgi_temp_path /tmp/fastcgi-temp;
proxy_temp_path /tmp/proxy-temp;
ajp_temp_path /tmp/ajp-temp;
client_header_buffer_size 64k;
client_header_timeout 60s;
large_client_header_buffers 4 64k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 32k;
http2_max_header_size 64k;
http2_max_requests 1000;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 128;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
...
</code></pre>
<p>Any info or suggestions would be appreciated, thanks!</p>
| Tim Trewartha | <p>I also tried the following annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/large_client_header_buffers: 200m
nginx.ingress.kubernetes.io/proxy-body-size: 200m
</code></pre>
<p>They didn't help; what did help was the following snippet annotation added to the Ingress yaml:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
http2_max_header_size 256k;
http2_max_field_size 256k;
</code></pre>
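<p>For clarity on placement: <code>nginx.ingress.kubernetes.io/server-snippet</code> is a per-Ingress annotation, so presumably it sits alongside the other annotations from the question, something like this sketch (reusing the names from the question):</p>
<pre><code>metadata:
  name: example-ingress-name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/server-snippet: |
      http2_max_header_size 256k;
      http2_max_field_size 256k;
</code></pre>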
| Omer |
<p>I have setup a mongodb workload in Rancher (2.5.8)</p>
<p>I have setup a volume:
<a href="https://i.stack.imgur.com/PgCm4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PgCm4.png" alt="volume" /></a></p>
<p>The workload start fine if I have the containers set to scale to 1. So 1 container will start and all is fine.</p>
<p>However if I set the workload to have 2 or more containers, one container will start fine, but then the others fail to start.</p>
<p>Here is what my workload looks like if I set it to scale to 2. one container started and running fine, but the second (and third if I have its scale to 3) are failing.
<a href="https://i.stack.imgur.com/VbwjH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VbwjH.png" alt="enter image description here" /></a></p>
<p>If I remove the volume, then 2+ containers will all start up fine, but then data is only being stored within each container (and gets lost whenever I redeploy).</p>
<p>But if I have the volume set, then the data does store in the volume (host), but then can only start one container.</p>
<p>Thank you in advance for any suggestions</p>
<p>Jason</p>
| Jason | <p>Posting this community wiki answer to set a baseline and to hopefully show one possible reason that the <code>mongodb</code> is failing.</p>
<p>Feel free to edit/expand.</p>
<hr />
<p>As there is a lot of information missing from this question like how it was created, how the <code>mongodb</code> was provisioned and there is also lack of logs from the container, the actual issue could be hard to pinpoint.</p>
<p>Assuming that the <code>Deployment</code> was created with a following manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 1 # THEN SCALE TO 3
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo
imagePullPolicy: "Always"
ports:
- containerPort: 27017
volumeMounts:
- mountPath: /data/db
name: mongodb
volumes:
- name: mongodb
persistentVolumeClaim:
claimName: mongo-pvc
</code></pre>
<p>The part of the setup where the <code>Volume</code> is referenced could be different (for example <code>hostPath</code> can be used) but the premise of it was:</p>
<ul>
<li>If the <code>Pods</code> are physically referencing the same <code>data/db/mongod</code> it will go into <code>CrashLoopBackOff</code> state.</li>
</ul>
<p>Following on this topic:</p>
<ul>
<li><code>$ kubectl get pods</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
mongo-5d849bfd8f-8s26t 1/1 Running 0 45m
mongo-5d849bfd8f-l6dzb 0/1 CrashLoopBackOff 13 44m
mongo-5d849bfd8f-wgh6m 0/1 CrashLoopBackOff 13 44m
</code></pre>
<ul>
<li><code>$ kubectl logs mongo-5d849bfd8f-l6dzb</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code><-- REDACTED -->
{"t":{"$date":"2021-06-05T12:43:58.025+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}
<-- REDACTED -->
</code></pre>
<hr />
<p>Citing the O'Reilly site on the <code>mongodb</code> production setup:</p>
<blockquote>
<p>Specify an alternate directory to use as the data directory; the default is <code>/data/db/</code> (or, on Windows, <code>\data\db\</code> on the MongoDB binary’s volume). Each mongod process on a machine needs its own data directory, so if you are running three instances of <code>mongod</code> on one machine, you’ll need three separate data directories. When <code>mongod</code> starts up, it creates a <code>mongod.lock</code> file in its data directory, which prevents any other <code>mongod</code> process from using that directory. If you attempt to start another MongoDB server using the same data directory, it will give an error:</p>
<blockquote>
<pre><code>exception in initAndListen: DBPathInUse: Unable to lock the
lock file: \ data/db/mongod.lock (Resource temporarily unavailable).
Another mongod instance is already running on the
data/db directory,
\ terminating`
</code></pre>
</blockquote>
<p>-- <em><a href="https://www.oreilly.com/library/view/mongodb-the-definitive/9781491954454/ch21.html" rel="nofollow noreferrer">Oreilly.com: Library: View: Mongodb the definitive: Chapter 21</a></em></p>
</blockquote>
<hr />
<p>As an alternative approach you can use other means to provision <code>mongodb</code>, for example:</p>
<ul>
<li><em><a href="https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/" rel="nofollow noreferrer">Docs.mongodb.com: Kubernetes operator: Master: Tutorial: Deploy replica set</a></em> (I would check the configuration of <code>StorageClasses</code> here)</li>
<li><em><a href="https://bitnami.com/stack/mongodb/helm" rel="nofollow noreferrer">Bitnami.com: Stack: Mongodb: Helm</a></em></li>
</ul>
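<p>As a rough illustration of the "one data directory per <code>mongod</code>" rule, below is a hedged sketch of a <code>StatefulSet</code> with <code>volumeClaimTemplates</code> (names and sizes are assumptions, and a real replica set still needs its members configured, for example with the operator linked above). Each replica gets its own <code>PersistentVolumeClaim</code>, so the <code>mongod.lock</code> conflict from the logs cannot occur:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # a headless Service with this name is assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb          # matches the claim template below
          mountPath: /data/db
  volumeClaimTemplates:          # one PVC per replica instead of one shared claim
  - metadata:
      name: mongodb
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
</code></pre>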
| Dawid Kruk |
<p>I want to create a database for autotests in Kubernetes. I want to create an image (postg-my-app-v1) from the postgres image, add changelog files, and add Liquibase. When I deploy this image with Helm I just want to specify the container postg-my-app-v1, and it should start up a pod with the database and create the tables from the Liquibase changelog.</p>
<p>Currently I create the Dockerfile as below:</p>
<pre><code>FROM postgres
ADD /changelog /liquibase/changelog
</code></pre>
<p>I don't understand how to add Liquibase to this image. Or must I use Docker Compose? Or a Helm lifecycle postStart hook for Liquibase?</p>
| Tim | <pre><code>FROM docker-proxy.tcsbank.ru/liquibase/liquibase:3.10.x AS Liquibase
FROM docker-proxy.tcsbank.ru/postgres:9.6.12 AS Postgres
ENV POSTGRES_DB bpm
ENV POSTGRES_USER priest
ENV POSTGRES_PASSWORD Bpm_123
COPY --from=Liquibase /liquibase /liquibase
ENV JAVA_HOME /usr/local/openjdk-11
COPY --from=Liquibase $JAVA_HOME $JAVA_HOME
ENV LIQUIBASE_CHANGELOG /liquibase/changelog/
COPY /changelog $LIQUIBASE_CHANGELOG
COPY liquibase.sh /usr/local/bin/
COPY main.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/liquibase.sh && \
chmod +x /usr/local/bin/main.sh && \
ln -s /usr/local/bin/main.sh / && \
ln -s /usr/local/bin/liquibase.sh /
ENTRYPOINT ["main.sh"]
</code></pre>
<p>main.sh</p>
<pre><code>#!/bin/bash
bash liquibase.sh | awk '{print "liquiBase script: " $0}' &
bash docker-entrypoint.sh postgres
</code></pre>
<p>liquibase.sh</p>
<pre><code>#!/bin/bash
for COUNTER in {1..120}
do
sleep 1s
echo "check db $COUNTER times"
pg_isready
if [ $? -eq 0 ]
then
break
fi
done
echo "try execute liquibase"
bash liquibase/liquibase --url="jdbc:postgresql://localhost:5432/$POSTGRES_DB" --username=$POSTGRES_USER --password=$POSTGRES_PASSWORD --changeLogFile=/liquibase/changelog/changelog.xml update
</code></pre>
| Tim |
<p>I have been trying to deploy a test container, with the Dockerfile below, and when it is deployed on Kubernetes the Pod goes into CrashLoopBackOff. What could be the possible reasons for the failure?</p>
<p><em>Dockerfile for my test container:</em></p>
<pre><code>FROM docker.asdfasdf.com/alpine:3.14
RUN apk add netcat-openbsd
ENTRYPOINT ["/bin/sh", "-c", "nc -l 8080 &"]
</code></pre>
<p>State of the container when deployed on kubernetes (describe pod info)</p>
<pre><code>main:
Container ID: docker://d845d4fb4asdf78sdfasdf8asdfasdf9asdfasda7
Image: asdf.sdfad.com/xyz/test1:asdfasdfasdfasdf
Image ID: docker-pullable://asdf.sdfad.com/xyz/test1:asdfasdfasdfasdf@sha256:a643c0e227a27ds5d4e78a9b50b894e1c934d95f88ddsdswvbc
Ports: 8080/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 26 May 2022 00:01:55 +0100
Finished: Thu, 26 May 2022 00:01:55 +0100
Ready: False
Restart Count: 6
Limits:
memory: 1Gi
Requests:
cpu: 1
memory: 500Mi
Liveness: http-get http://:8080/sadf/healthcheck delay=30s timeout=1s period=15s #success=1 #failure=3
Readiness: http-get http://:8080/asdf/healthcheck delay=20s timeout=1s period=5s #success=1 #failure=3
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 10m (x2 over 10m) kubelet Created container main
Warning FailedPreStopHook 10m (x2 over 10m) kubelet Exec lifecycle hook ([sh -c rm heartbeat.txt && sleep 30]) for Container "main" in Pod "test1--avi--default-76b6d7c76b-s9jdt_test1-dev(asdfc-ava45-45ae-78d-fgghh7765532)" failed - error: rpc error: code = Unknown desc = container not running (main), message: ""
Normal Killing 10m (x2 over 10m) kubelet FailedPostStartHook
Warning FailedPostStartHook 10m (x2 over 10m) kubelet Exec lifecycle hook ([sh -c dirname heartbeat.txt | xargs mkdir -p && touch heartbeat.txt
]) for Container "main" in Pod "test1--avi--default-76b6d7c76b-s9jdt_test1-dev(asdfc-ava45-45ae-78d-fgghh7765532)" failed - error: rpc error: code = Unknown desc = container not running (main), message: ""
Normal Started 10m (x2 over 10m) kubelet Started container main
Warning BackOff 21s (x58 over 10m) kubelet Back-off restarting failed container
</code></pre>
| Avi | <p>Entrypoint is: <code>"nc -l 8080 &"</code></p>
<p><code>netcat</code> behavior is: handle request → print it into stdout → exit</p>
<p>So your first healthcheck, <code>http-get http://:8080/sadf/healthcheck delay=30s timeout=1s period=15s #success=1 #failure=3</code>, makes that first request, and after answering it <code>nc</code> exits, which is why the container keeps being restarted.</p>
<p>For example, you may set the entrypoint as:</p>
<pre><code>["/bin/sh", "-c", "while true; do nc -l 8080; echo 'restarting netcat'; done"]
</code></pre>
<p>The <code>while</code> loop will restart <code>nc</code> any time it exits.</p>
<p>Another (better) option is to use the <code>-k</code> (<code>--keep-open</code>) argument to keep <code>nc</code> listening after each connection:</p>
<pre><code>ENTRYPOINT ["nc", "-k", "-l", "8080"]
</code></pre>
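<p>Putting that fix back into the question's Dockerfile would look roughly like this (a sketch; note that the HTTP probes still expect a 200 response from the healthcheck path, which plain <code>nc</code> does not provide):</p>
<pre><code>FROM docker.asdfasdf.com/alpine:3.14
RUN apk add netcat-openbsd
# -k keeps nc listening after each connection instead of exiting
ENTRYPOINT ["nc", "-k", "-l", "8080"]
</code></pre>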
| rzlvmp |
<p>What I want to achieve is to run the simplest echo server using <code>python</code> and <code>tornado</code>, code here:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import tornado.ioloop
import tornado.web
from tornado.log import enable_pretty_logging
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.write("Hello, world")
def make_app():
return tornado.web.Application([
(r"/", MainHandler),
])
if __name__ == "__main__":
port = 8080
print ("Starting up echo server at port %d" % port)
enable_pretty_logging()
app = make_app()
app.listen(int(port))
tornado.ioloop.IOLoop.current().start()
</code></pre>
<p>I want to run it inside Docker, which runs inside a Kubernetes Pod.
I already achieved this, but only partly. I'm running the Kubernetes cluster on a physical machine in my local network. To run the cluster I used <code>minikube</code>.
Here are my configuration files, which I use to create the Kubernetes <code>Deployment</code> and <code>Service</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: testApp
spec:
replicas: 1
selector:
matchLabels:
app: testApp
template:
metadata:
labels:
app: testApp
spec:
containers:
- name: testApp-container
image: testApp-docker:latest
imagePullPolicy: Never
command: ["python3"]
args: ["-u", "echo_tornado.py"]
ports:
- containerPort: 5020
restartPolicy: Always
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: testApp-svc
labels:
app: testApp
spec:
type: NodePort
ports:
- port: 5020
targetPort: 5020
nodePort: 5020
selector:
app: testApp
</code></pre>
<p>What is the problem exactly? I can send a request to my echo server with <code>curl $NODE_IP:$NODE_PORT</code> and I get a response, but I also want to be able to do <code>curl localhost:$NODE_PORT</code> and, what is crucial for me, I must be able to do <code>curl $MY_LOCAL_MACHINE_IP:$NODE_PORT</code> from another machine inside the same local network.</p>
<p>Is it possible to achieve this? Should I use some kind of forwarding of my local IP and port to the node's IP?
Maybe I shouldn't use <code>ServiceType: NodePort</code> and should use <code>LoadBalancer</code> instead?
Maybe <code>minikube</code> is the problem and I should use a different tool?</p>
| AnDevi | <p><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube</a> is a tool that spawn your single node Kubernetes cluster for development purposes on your machine (PC, Laptop, Server, etc.).</p>
<p>It uses different <code>--drivers</code> to run Kubernetes (it can be deployed as <code>bare-metal</code>, in <code>docker</code>, in <code>virtualbox</code>, in <code>kvm</code>, etc.). This allows for isolation from host and other devices. <strong>It also means that there are differences when it comes to the networking part of this setup.</strong></p>
<p>There is a lot to cover when it comes to the networking part of Kubernetes. I encourage you to check the official docs:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li>
</ul>
<hr />
<p>As the <code>driver</code> used and the <code>OS</code> is unknown it could be hard to pinpoint the exact solution. <strong>Some</strong> of the pointers that could help you:</p>
<ul>
<li>When you are using a Linux distribution you can opt to use the driver <code>--none</code>. It will use <code>Docker</code> but everything (containers like <code>kube-apiserver</code>, <code>etcd</code>, etc.) will be provisioned directly on your host. With that you could run <code>$ curl $MY_LOCAL_MACHINE_IP:$NODE_PORT</code>. With <code>--driver=docker</code> everything I mentioned will be put in a <code>Docker</code> container.</li>
</ul>
<blockquote>
<p>A side note!</p>
<p><code>--driver=none</code> have some limitations/issues (for example decreased security). You can read more about it by following this documentation:</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/drivers/none/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Drivers: None</a></em></li>
</ul>
</blockquote>
<ul>
<li><p>As a workaround/<strong>temporary solution</strong> you can use previously mentioned (run on your host):</p>
<ul>
<li><code>$ kubectl port-forward svc/testApp-svc 5020:5020 --address 0.0.0.0</code> - this command will forward requests coming to your machine on port <code>5020</code> directly to the <code>Service: testApp-svc</code> on port <code>5020</code>.</li>
</ul>
</li>
<li><p>With another driver, like for example <code>Virtualbox</code>, you will need to consult its documentation for how to expose your <code>minikube</code> instance on the LAN.</p>
</li>
</ul>
<hr />
<p>Seeing this part of the question:</p>
<blockquote>
<p>(what is crucial for me) I must be able to do curl $MY_LOCAL_MACHINE_IP:$NODE_PORT from other machine inside same local network.</p>
</blockquote>
<p>It could be beneficial to add that there are other solutions outside of <code>minikube</code> that could provision your Kubernetes cluster. There are some differences between them and you would need to choose the one that suits your requirements the most. <strong>Some</strong> of them are:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">Kubeadm</a></em></li>
<li><em><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a></em></li>
<li><em><a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8S</a></em></li>
<li><em><a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/" rel="nofollow noreferrer">K3S</a></em></li>
<li><em><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes: The hard way</a></em></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>For assigning the <code>IP addresses</code> for a <code>LoadBalancer</code> type of <code>Service</code> you could look on <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">metallb</a> (when using above options).</p>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/65872990/how-to-route-traffic-from-pysical-servers-port-to-minikube-cluster">Stackoverflow.com: Questions: How to route traffic from physical servers port to minikube cluster</a></em></li>
<li><em><a href="https://kubernetes.io/docs/home/" rel="nofollow noreferrer">Kubernetes: Docs</a></em></li>
</ul>
| Dawid Kruk |
<p>We are running an application inside a cluster we created in GKE. We have created the required yamls (consisting of the Service and Deployment definitions). We recently decided to use Pod Topology Spread Constraints; for that I have added the following piece in my Deployment yaml file under the spec section:</p>
<pre><code> spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: node
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: foo-app
</code></pre>
<p>This change works as expected when I run the service inside a minikube cluster, while the same change does not work inside a GKE cluster. It throws an error:</p>
<pre><code>Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "topologySpreadConstraints" in io.k8s.api.core.v1.PodSpec
</code></pre>
<p>I searched a lot but could not find a satisfactory answer. Has anybody faced this problem? Please help me understand the problem and its resolution.</p>
<p>Thanks in Advance.</p>
| Vinay Verma | <p>I assume you are running <code>1.17.7-gke.17</code> on your GKE cluster. Unfortunately this is the latest version you can upgrade to, through the <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes-rapid" rel="nofollow noreferrer">rapid channel</a>, at the time of this post.</p>
<p><code>topologySpreadConstraints</code> is available in Kubernetes v1.18 <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">FEATURE STATE: [beta]</a></p>
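<p>Once the cluster is on 1.18 or newer, the stanza from the question should validate. A hedged sketch with a well-known topology key (<code>kubernetes.io/hostname</code> is an assumed substitute for the question's <code>topologyKey: node</code>, which only matches if the nodes really carry a label named <code>node</code>):</p>
<pre><code>   topologySpreadConstraints:
   - maxSkew: 1
     topologyKey: kubernetes.io/hostname   # spread across nodes
     whenUnsatisfiable: DoNotSchedule
     labelSelector:
       matchLabels:
         app: foo-app
</code></pre>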
| Neo Anderson |
<p>I'm working with microservice architecture using Azure AKS with Istio.</p>
<p>I configure all, and developers work with microservices to create the web platform, apis, etc.</p>
<p>But with this, I have a doubt. There is much yaml to configure for Istio and Kubernetes, e.g. <code>Ingress</code>, <code>VirtualService</code>, <code>Gateway</code> etc.</p>
<p>Is this configuration, part of the developer responsibility? should they create and configure this? or is these configuration files part of the responsibility for the DevOps team? so that developers only is responsible for creating nodejs project, and the DevOps team configure the nodejs project configuration to execute in k8s architecture?</p>
| mpanichella | <p>The whole point of Kubernetes is to help developers develop applications as fast as possible and not to get into the weeds of how the pods are deployed.</p>
<p>That being said, the developers are responsible for the applications and as mentioned here, should know the environment where their apps will be run.
It is up to the devops team to configure ingress, Istio, etc. Also (ideally), they should check the yamls if they were written by the developers. The developer should not worry about how many replica sets need to be there or any other K8s config.</p>
<p>That being said, it is always a good practice to standardize this process (who owns what) beforehand.</p>
| CyG |
<p>We have a Docker image repository on GitLab which is hosted on the internal network (repo.mycomapanydomain.io).</p>
<p>My K8s deployment is failing with a "name not resolved" error for repo.mycomapanydomain.io.</p>
<p>I tried updating the kube-dns config as below, but I still have the same error.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
    {"mycomapanydomain": ["10.131.0.4"]}
  upstreamNameservers: |
    ["10.131.0.4"]
</code></pre>
<p>How can I make my resolv.conf to have the Internal nameservers by default or K8 to resolve with my internal DNS IPs?</p>
| Dheeraj | <p>Editing <code>/etc/resolv.conf</code> either manually or automatically is discouraged as for:</p>
<blockquote>
<h3>Internal DNS and resolv.conf</h3>
<p>By default, most Linux distributions store DHCP information in <a href="http://man7.org/linux/man-pages/man5/resolv.conf.5.html" rel="nofollow noreferrer"><code>resolv.conf</code></a>. Compute Engine instances are configured to renew DHCP leases every 24 hours. For instances that are enabled for zonal DNS, the DHCP lease expires every hour. <strong>DHCP renewal overwrites this file, undoing any changes that you might have made.</strong> Instances using <a href="https://cloud.google.com/compute/docs/internal-dns#zonal-dns" rel="nofollow noreferrer">zonal DNS</a> have both zonal and global entries in the <code>resolv.conf</code> file.</p>
<p>-- <em><a href="https://cloud.google.com/compute/docs/internal-dns#resolv.conf" rel="nofollow noreferrer">Cloud.google.com: Compute: Docs: Internal DNS: resolv.conf</a></em></p>
</blockquote>
<p>Also:</p>
<blockquote>
<p><strong>Modifications on the boot disk of a node VM do not persist across node re-creations</strong>. Nodes are re-created during <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrade_nodes" rel="nofollow noreferrer">manual upgrade</a>, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">auto-upgrade</a>, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair" rel="nofollow noreferrer">auto-repair</a>, and <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler" rel="nofollow noreferrer">auto-scaling</a>. In addition, nodes are re-created when you enable a feature that requires node re-creation, such as <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods" rel="nofollow noreferrer">GKE sandbox</a>, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/intranode-visibility" rel="nofollow noreferrer">intranode visibility</a>, and <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-nodes" rel="nofollow noreferrer">shielded nodes</a>.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#modifications" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Node images: Modifications</a></em></p>
</blockquote>
<hr />
<p>As for:</p>
<blockquote>
<p>How can I make my <code>resolv.conf</code> to have the Internal nameservers by default or K8 to resolve with my internal DNS IPs?</p>
</blockquote>
<p>From the <code>GCP</code> and <code>GKE</code> perspective, you can use the <a href="https://cloud.google.com/dns" rel="nofollow noreferrer">Cloud DNS</a> to configure your <code>DNS</code> resolution in either way that:</p>
<ul>
<li>your whole <code>DOMAIN</code> is residing in <code>GCP</code> infrastructure (and you specify all the records).</li>
<li>your <code>DOMAIN</code> queries are forwarded to the DNS server of your choosing.</li>
</ul>
<p>You can create your <code>DNS</code> zone by following:</p>
<ul>
<li><code>GCP Cloud Console</code> (Web UI) -> <code>Network Services</code> -> <code>Cloud DNS</code> -> <code>Create zone</code>:</li>
</ul>
<p>Assuming that you want to forward your <code>DNS</code> queries to your internal <code>DNS</code> server residing in <code>GCP</code> your configuration should look similar to the one below:</p>
<p><a href="https://i.stack.imgur.com/Pycq1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pycq1.png" alt="DNS" /></a></p>
<blockquote>
<p>A side note!</p>
<ol>
<li>Remember to follow the "Destination DNS Servers" steps to allow the <code>DNS</code> queries to your <code>DNS</code> server.</li>
<li>Put the internal IP address of your <code>DNS</code> server where the black rectangle is placed.</li>
</ol>
</blockquote>
<p>After that your <code>GKE</code> cluster should be able to resolve the <code>DNS</code> queries of your <code>DOMAIN.NAME</code>.</p>
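<p>If you prefer the CLI over the console, a hedged equivalent of the forwarding zone above could look like the following (the zone name and VPC network name are placeholders; the domain and the <code>10.131.0.4</code> target come from the question):</p>
<pre><code>gcloud dns managed-zones create mycompany-forwarding-zone \
    --description="Forward mycomapanydomain.io to the internal DNS server" \
    --dns-name="mycomapanydomain.io." \
    --visibility=private \
    --networks=YOUR_VPC_NETWORK \
    --forwarding-targets=10.131.0.4
</code></pre>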
<hr />
<h3>Additional resources:</h3>
<p>I found an article that shows how you can create a <code>DNS</code> forwarding for your <code>GCP</code> instances:</p>
<ul>
<li><em><a href="https://medium.com/faun/dns-forwarding-zone-forwarding-and-dns-policy-in-gcp-640a34b15bca" rel="nofollow noreferrer">Medium.com: Faun: DNS forwarding zone and dns policy in GCP</a></em></li>
</ul>
| Dawid Kruk |
<p>With <code>go mod tidy</code> I've updated protobuf to github.com//golang/[email protected]<br>
my project uses github.com/ericchiang/[email protected]
I build binary and when I try to run it I'm getting this panic error:</p>
<pre><code>panic: mismatching message name: got k8s.io.kubernetes.pkg.watch.versioned.Event, want github.com/ericchiang.k8s.watch.versioned.Event
goroutine 1 [running]:
google.golang.org/protobuf/internal/impl.legacyLoadMessageDesc(0x1f8d6c0, 0x1b85dc0, 0x1ce794f, 0x2f, 0x0, 0x0)
/home/andriy/go/pkg/mod/google.golang.org/[email protected]/internal/impl/legacy_message.go:136 +0x882
google.golang.org/protobuf/internal/impl.legacyLoadMessageInfo(0x1f8d6c0, 0x1b85dc0, 0x1ce794f, 0x2f, 0x4f7b57)
/home/andriy/go/pkg/mod/google.golang.org/[email protected]/internal/impl/legacy_message.go:48 +0xbd
google.golang.org/protobuf/internal/impl.Export.LegacyMessageTypeOf(0x1f4f0a0, 0x0, 0x1ce794f, 0x2f, 0xc000399360, 0xc0000a00d0)
/home/andriy/go/pkg/mod/google.golang.org/[email protected]/internal/impl/legacy_export.go:35 +0xa5
github.com/golang/protobuf/proto.RegisterType(0x1f4f0a0, 0x0, 0x1ce794f, 0x2f)
/home/andriy/go/pkg/mod/github.com/golang/[email protected]/proto/registry.go:186 +0x4d
github.com/ericchiang/k8s/watch/versioned.init.0()
/home/andriy/go/pkg/mod/github.com/ericchiang/[email protected]/watch/versioned/generated.pb.go:70 +0x4b
</code></pre>
<p>is there anyway to fix this, or should I downgrade protobuf to v1.3.5</p>
| Andriy Tymkiv | <p>I met the same problem as you; you can go directly to the path shown in the error and change the name. Replace <code>github.com/ericchiang.k8s.watch.versioned.Event</code> in <code>generated.pb.go</code>'s <code>init()</code> function with <code>k8s.io.kubernetes.pkg.watch.versioned.Event</code>.</p>
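<p>The alternative raised in the question itself, pinning the pre-APIv2 protobuf line instead of editing generated code, would look roughly like this (v1.3.5 is the version mentioned in the question; whether it satisfies your other dependencies is not guaranteed):</p>
<pre><code>go get github.com/golang/[email protected]
go mod tidy
</code></pre>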
| yue |
<p>I have the following project:
<a href="https://github.com/ably/kafka-connect-ably" rel="nofollow noreferrer">https://github.com/ably/kafka-connect-ably</a></p>
<p>Running the dockerfile locally works perfectly well.
I have tried a few methods to get it working in k8s...</p>
<p>I have tried Kompose. This created the correct .yaml files, as well as a persistent volume properly linked with the mountpath "config", but when run I get the error:</p>
<pre><code>java.nio.file.NoSuchFileException: /config/docker-compose-worker-distributed.properties
</code></pre>
<p>Is there a way I can add the properties file to the persistent volume?
I have tried</p>
<pre><code>kubectl cp config/docker-compose-worker-distributed.properties connector:/
</code></pre>
<p>but I get:</p>
<pre><code>error: unable to upgrade connection: container not found ("connector")
</code></pre>
<p>I have also tried tagging the image with docker tag mycontainerreg.azure.io/name and then docker pushing, but that also fails, perhaps because not enough resources are allocated to it?</p>
<p>Mainly I just need the connector to port from Kafka (Apache) running in AKS to Ably, I understand I can add the connector to Confluent cloud if I purchase Enterprise but thats expensive!</p>
| Matt | <p>If anyone finds this: I ended up using Confluent and hosting it in AWS, which meant I could use the bring-your-own-connector / custom connector feature.
It is a very nice feature that lets you just upload a zip of your connector and they host it for you.</p>
<p>Not technically an answer to the question, but it means I don't need to find a solution any more.</p>
| Matt |
<p>I have an architectural question:
We have a Django project made of multiple apps. There is a core app that holds the main models used for the other sets of apps.
Then, we have a couple apps for user facing APIs. Lastly, we have some internal apps and tools used by developers only that are accessible in Admin UI as extended features.</p>
<p>Our deployment process is very monolithic. We use Kubernetes and we deploy the whole project as a whole. Meaning that if we only had changes in an internal app and we need that in production, we will build a new Docker image and deploy a new release with a new version tag incremented.</p>
<p>I'm not a big fan of this because change in internal tools shouldn't create a new release of the user facing applications.</p>
<p>I have been wondering if there is a way to split those deployments (maybe make them into a microservice architecture?). So we could deploy the user facing applications separate from the internal tools. I know I could build separate images, tags and everything for parts of the project but I'm not sure how they could communicate between each other if <code>internal_app_1</code> depends on the models of <code>core</code> app and potentially the <code>settings.py</code> and <code>manage.py</code> file as well.</p>
<p>Also because in Kubernetes, having to separate applications would mean to separate deployments with two servers running, so this means two separate Django projects isolated from each other but using the same database.</p>
<p>Has anyone worked with something similar or would like to suggest an alternative, if there's any?</p>
<p>Below is a tree example of how our project is structured at the moment:</p>
<pre class="lang-bash prettyprint-override"><code>├── core
| ├── models.py
| ├── views.py
| └── urls.py
├── userapi_1
| ├── views.py
| └── urls.py
├── userapi_2
| ├── views.py
| └── urls.py
├── insternal_app_1
| ├── templates
| | └── ...
| ├── models.py
| ├── views.py
| └── urls.py
├── manage.py
├── settings.py
└── Dockerfiles
├── Dockerfile.core
└── Dockerfile.internal_app_1
</code></pre>
| everspader | <p>Django and microservices? Yeah, maybe somewhere in the parallel universe.</p>
<p>Only one thing that I may recommend is to build two identical services like <code>django_container_internal</code> and <code>django_container_production</code>. In this case you will be able to release <code>internal tools</code> without stopping <code>production</code>.</p>
<p>If you want to prevent access to <code>production</code> functionality with <code>internal</code> endpoints you may deactivate <code>production</code> URLs by using <code>ENVs</code>. Usually Django project has common <code>config/urls.py</code> that aggregate all URL endpoints and looks like</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
url('core/api/v1/', include(core.urls)),
url('internal/api/v1/', include(internal_app_1.urls)),
url('user/api/v1/', include(userapi_1.urls))
...
]
</code></pre>
<p>For example you may add <code>IS_INTERNAL_TOOLS</code> environment variable and update <code>urls.py</code> like</p>
<pre class="lang-py prettyprint-override"><code>from os import environ
urlpatterns = [
url('core/api/v1/', include(core.urls)),
...
]
if environ.get('IS_INTERNAL_TOOLS', 'false').lower() in ('true', '1', 'yes'):
urlpatterns.append(url('insternal/api/v1/', include(insternal_app_1.urls)))
else:
urlpatterns.append(url('user/api/v1/', include(userapi_1.urls)))
</code></pre>
<ul>
<li><p>Pros:</p>
<ul>
<li>All models will be accessible at both services (only one common DAO => no double developers work to create models twice)</li>
<li>Functionality is separated so only necessary features are accessible</li>
<li>Easy to implement</li>
</ul>
</li>
<li><p>Cons:</p>
<ul>
<li>The whole source code is stored inside both containers even if half of it is not used</li>
<li>If you are using two separate databases for internal tools and the external API you have to create all tables in both of them (but it looks like that is not your case)</li>
<li>Because it is still a monolith, the <code>internal</code> and <code>production</code> parts depend heavily on the common <code>core</code>, and it is impossible to deploy only an updated core separately</li>
</ul>
</li>
</ul>
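<p>On the Kubernetes side the split could be as small as two near-identical <code>Deployments</code> that differ only in that flag. A hedged sketch (names and image are assumptions; the "production" manifest would set the variable to <code>"false"</code> or omit it):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-internal
  template:
    metadata:
      labels:
        app: django-internal
    spec:
      containers:
      - name: django
        image: registry.example.com/django-project:latest   # same image for both Deployments
        env:
        - name: IS_INTERNAL_TOOLS
          value: "true"   # the production Deployment sets "false" (or omits the variable)
</code></pre>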
| rzlvmp |
<p>Kubernetes v1.19 in AWS EKS</p>
<p>I'm trying to implement horizontal pod autoscaling in my EKS cluster, and am trying to mimic what we do now with ECS. With ECS, we do something similar to the following</p>
<ul>
<li>scale up when CPU >= 90% after 3 consecutive 1-min periods of sampling</li>
<li>scale down when CPU <= 60% after 5 consecutive 1-min periods of sampling</li>
<li>scale up when memory >= 85% after 3 consecutive 1-min periods of sampling</li>
<li>scale down when memory <= 70% after 5 consecutive 1-min periods of sampling</li>
</ul>
<p>I'm trying to use the <code>HorizontalPodAutoscaler</code> kind, and <code>helm create</code> gives me this template. (Note I modified it to suit my needs, but the <code>metrics</code> stanza remains.)</p>
<pre><code>{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "microserviceChart.Name" . }}
labels:
{{- include "microserviceChart.Name" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "microserviceChart.Name" . }}
minReplicas: {{ include "microserviceChart.minReplicas" . }}
maxReplicas: {{ include "microserviceChart.maxReplicas" . }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
</code></pre>
<p>However, how do I fit the scale up/down information shown in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">Horizontal Pod Autoscaling</a> in the above template, to match the behavior that I want?</p>
| Chris F | <p>The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed metrics (like <code>CPU</code> or <code>Memory</code>).</p>
<p>There is an official walkthrough focusing on <code>HPA</code> and it's scaling:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscale: Walkthrough</a></em></li>
</ul>
<hr />
<p>The algorithm that scales the amount of replicas is the following:</p>
<ul>
<li><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]</code></li>
</ul>
<p>An example (of already rendered) autoscaling can be implemented with a <code>YAML</code> manifest like below:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: HPA-NAME
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: DEPLOYMENT-NAME
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 75
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 75
</code></pre>
<blockquote>
<p><strong>A side note!</strong></p>
<p><code>HPA</code> will calculate both metrics and choose the one yielding the bigger <code>desiredReplicas</code>!</p>
</blockquote>
<p>Addressing a comment I wrote under the question:</p>
<blockquote>
<p>I think we misunderstood each other. It's perfectly okay to "scale up when CPU >= 90" but due to logic behind the formula I don't think it will be possible to say "scale down when CPU <=70". According to the formula it would be something in the midst of: scale up when CPU >= 90 and scale down when CPU =< 45.</p>
</blockquote>
<p><strong>This example could be misleading and not 100% true in all scenarios.</strong> Taking a look on following example:</p>
<ul>
<li><code>HPA</code> set to <code>averageUtilization</code> of <code>75%</code>.</li>
</ul>
<p>Quick calculations with some degree of approximation (default tolerance for <code>HPA</code> is <code>0.1</code>):</p>
<ul>
<li><code>2</code> replicas:
<ul>
<li><code>scale-up</code> (by <code>1</code>) should happen when: <code>currentMetricValue</code> is >=<code>80%</code>:
<ul>
<li><code>x = ceil[2 * (80/75)]</code>, <code>x = ceil[2,1(3)]</code>, <code>x = 3</code></li>
</ul>
</li>
<li><code>scale-down</code> (by <code>1</code>) should happen when <code>currentMetricValue</code> is <=<code>33%</code>:
<ul>
<li><code>x = ceil[2 * (33/75)]</code>, <code>x = ceil[0,88]</code>, <code>x = 1</code></li>
</ul>
</li>
</ul>
</li>
<li><code>8</code> replicas:
<ul>
<li><code>scale-up</code> (by <code>1</code>) should happen when <code>currentMetricValue</code> is >=<code>76%</code>:
<ul>
<li><code>x = ceil[8 * (76/75)]</code>, <code>x = ceil[8,10(6)]</code>, <code>x = 9</code></li>
</ul>
</li>
<li><code>scale-down</code> (by <code>1</code>) should happen when <code>currentMetricValue</code> is <=<code>64%</code>:
<ul>
<li><code>x = ceil[8 * (64/75)]</code>, <code>x = ceil[6,82(6)]</code>, <code>x = 7</code></li>
</ul>
</li>
</ul>
</li>
</ul>
<p>Following this example, having <code>8</code> replicas with their <code>currentMetricValue</code> at <code>55</code> (<code>desiredMetricValue</code> set to <code>75</code>) should <code>scale-down</code> to <code>6</code> replicas.</p>
<p>More information that describes the decision making of <code>HPA</code> (<strong>for example why it doesn't scale</strong>) can be found by running:</p>
<ul>
<li><code>$ kubectl describe hpa HPA-NAME</code></li>
</ul>
<pre><code>Name: nginx-scaler
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sun, 07 Mar 2021 22:48:58 +0100
Reference: Deployment/nginx-scaling
Metrics: ( current / target )
resource memory on pods (as a percentage of request): 5% (61903667200m) / 75%
resource cpu on pods (as a percentage of request): 79% (199m) / 75%
Min replicas: 1
Max replicas: 10
Deployment pods: 5 current / 5 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 4m48s (x4 over 5m3s) horizontal-pod-autoscaler did not receive metrics for any ready pods
Normal SuccessfulRescale 103s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 71s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 71s horizontal-pod-autoscaler New size: 5; reason: cpu resource utilization (percentage of request) above target
</code></pre>
<hr />
<p><code>HPA</code> scaling procedures can be modified by the changes introduced in Kubernetes version <code>1.18</code> and newer where the:</p>
<blockquote>
<h3>Support for configurable scaling behavior</h3>
<p>Starting from <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md" rel="noreferrer">v1.18</a> the <code>v2beta2</code> API allows scaling behavior to be configured through the HPA <code>behavior</code> field. Behaviors are specified separately for scaling up and down in <code>scaleUp</code> or <code>scaleDown</code> section under the <code>behavior</code> field. A stabilization window can be specified for both directions which prevents the flapping of the number of the replicas in the scaling target. Similarly specifying scaling policies controls the rate of change of replicas while scaling.</p>
<p><em><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscale: Support for configurable scaling behavior</a></em></p>
</blockquote>
<p>I'd reckon you could use the newly introduced fields like <code>behavior</code> and <code>stabilizationWindowSeconds</code> to tune the scaling to your specific needs.</p>
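<p>A hedged sketch of such a <code>behavior</code> stanza (the windows below only roughly approximate the ECS-style "N consecutive 1-minute periods" and are assumptions, not EKS defaults):</p>
<pre><code>  behavior:
    scaleUp:
      stabilizationWindowSeconds: 180   # ~3 minutes of sustained pressure before adding pods
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300   # ~5 minutes before removing pods
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60
</code></pre>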
<p>I also recommend consulting the <code>EKS</code> documentation for further reference, metric support and examples.</p>
| Dawid Kruk |
<p>I am trying to create an operator using operator-sdk.</p>
<p>I have installed opeator-sdk on my mac OS.</p>
<p>My Environment Details :</p>
<p>go version <strong>go1.15.12 darwin/amd64</strong></p>
<p>operator-sdk version: <strong>"v1.7.2", commit: "6db9787d4e9ff63f344e23bfa387133112bda56b", kubernetes version: "v1.19.4", go version: "go1.16.3", GOOS: "darwin", GOARCH: "amd64"</strong></p>
<p>I am trying to create an operator using command -</p>
<pre><code>operator-sdk init hello-operator
</code></pre>
<p>I have enabled GO111MODULE.</p>
<p>When I am trying to run opeator-sdk init , I am getting following error.</p>
<pre><code>Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/[email protected]
# container/list
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# crypto/internal/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode/utf8
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/race
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/apimachinery/pkg/selection
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# encoding
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode/utf16
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/nettrace
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# math/bits
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/internal/sys
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/unsafeheader
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# vendor/golang.org/x/crypto/internal/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# crypto/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# vendor/golang.org/x/crypto/cryptobyte/asn1
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# golang.org/x/sys/internal/unsafeheader
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/internal/atomic
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# google.golang.org/protobuf/internal/flags
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# github.com/google/go-cmp/cmp/internal/flags
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/utils/integer
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/utils/buffer
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/cpu
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/apimachinery/pkg/types
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# sync/atomic
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/cgo
compile: version "go1.15.6" does not match go tool version "go1.15.12"
Error: failed to initialize project: unable to scaffold with "base.go.kubebuilder.io/v3": exit status 2
FATA[0003] failed to initialize project: unable to scaffold with "base.go.kubebuilder.io/v3": exit status 2
</code></pre>
<p>Does anybody has any idea about this?</p>
<p>Thanks in advance.</p>
| Rajendra Gosavi | <p>The below commands show how you can scaffold and run an operator with the <code>operator-sdk</code> CLI tool. At the time of writing the latest version is v1.20.0. The script covers some of the pitfalls, such as setting the correct Go environment variables and installing gcc, which some operating systems need. I tried it with <code>Ubuntu 18.04.3 LTS (Bionic Beaver)</code>.</p>
<pre class="lang-sh prettyprint-override"><code>#golang
echo "--- Installing golang ---"
GOVERSION=1.17.9
GOTAR=go$GOVERSION.linux-amd64.tar.gz
wget https://dl.google.com/go/$GOTAR
sudo tar -xvf $GOTAR
rm $GOTAR
sudo mv go /usr/local/bin
#gcc (used by operator-sdk CLI)
echo "--- Installing gcc ---"
sudo apt update
sudo apt install -y build-essential
sudo apt-get install manpages-dev
#operator-sdk
echo "--- Installing operator-sdk ---"
curl -Lo ./operator-sdk https://github.com/operator-framework/operator-sdk/releases/download/v1.20.0/operator-sdk_linux_amd64
chmod +x ./operator-sdk
sudo mv ./operator-sdk /usr/local/bin/operator-sdk
#environment variables
export GOROOT=/usr/local/bin/go
export PATH=$GOROOT/bin:$PATH
#verify versions
go version
operator-sdk version
#scaffold and run the HelloWorld operator
sudo -s
mkdir hello-operator
chmod 777 hello-operator
cd hello-operator
operator-sdk init --domain example.com --repo github.com/example/memcached-operator
operator-sdk create api --group example --version v1alpha1 --kind HelloWorld --resource --controller
make manifests
make run
</code></pre>
| Kubus |
<p>I have a website deployed in Kubernetes with an ingress controller working.</p>
<p>I need to redirect my old subdomain to the new subdomain (old.example.com -> new.example.com).</p>
<p>I did some research and found that I have to use the annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target
</code></pre>
<p>my ingress file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rancher-ing
namespace: test
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/tls-acme: "true"
ingress.kubernetes.io/secure-backends: "true"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- new.example.com
secretName: new.example.com
rules:
- host: new.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rancher-logo-service
port:
number: 80
- host: old.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rancher-logo-service
port:
number: 80
</code></pre>
| gharbi.bdr | <p>I created a separate ingress config for the new URL
and updated the old config into this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rancher-ing-redirect
namespace: test
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/tls-acme: "true"
ingress.kubernetes.io/secure-backends: "true"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: https://new.example.com/
spec:
rules:
- host: old.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rancher-logo-service
port:
number: 80
</code></pre>
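<p>For reference, ingress-nginx also ships a dedicated redirect annotation that returns a <code>301</code> instead of proxying; a hedged sketch using the example hosts from the question:</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/permanent-redirect: https://new.example.com
</code></pre>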
| gharbi.bdr |
<p>I'm quite new to Kubernetes and I'm trying to set up a microk8s test environment on a VPS with CentOS.</p>
<p>What I did:</p>
<p>I set up the cluster, enabled the ingress and metallb</p>
<pre><code>microk8s enable ingress
microk8s enable metallb
</code></pre>
<p>Exposed the ingress-controller service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: ingress
namespace: ingress
spec:
type: LoadBalancer
selector:
name: nginx-ingress-microk8s
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443
</code></pre>
<p>Exposed an nginx deployment to test the ingress</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
run: nginx-deploy
template:
metadata:
labels:
run: nginx-deploy
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<p>This is the status of my cluster:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/hostpath-provisioner-5c65fbdb4f-m2xq6 1/1 Running 3 41h
kube-system pod/coredns-86f78bb79c-7p8bs 1/1 Running 3 41h
kube-system pod/calico-node-g4ws4 1/1 Running 6 42h
kube-system pod/calico-kube-controllers-847c8c99d-xhmd7 1/1 Running 4 42h
kube-system pod/metrics-server-8bbfb4bdb-ggvk7 1/1 Running 0 41h
kube-system pod/kubernetes-dashboard-7ffd448895-ktv8j 1/1 Running 0 41h
kube-system pod/dashboard-metrics-scraper-6c4568dc68-l4xmg 1/1 Running 0 41h
container-registry pod/registry-9b57d9df8-xjh8d 1/1 Running 0 38h
cert-manager pod/cert-manager-cainjector-5c6cb79446-vv5j2 1/1 Running 0 12h
cert-manager pod/cert-manager-794657589-srrmr 1/1 Running 0 12h
cert-manager pod/cert-manager-webhook-574c9758c9-9dwr6 1/1 Running 0 12h
metallb-system pod/speaker-9gjng 1/1 Running 0 97m
metallb-system pod/controller-559b68bfd8-trk5z 1/1 Running 0 97m
ingress pod/nginx-ingress-microk8s-controller-f6cdb 1/1 Running 0 65m
default pod/nginx-deploy-5797b88878-vgp7x 1/1 Running 0 20m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 42h
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 41h
kube-system service/metrics-server ClusterIP 10.152.183.243 <none> 443/TCP 41h
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.225 <none> 443/TCP 41h
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.109 <none> 8000/TCP 41h
container-registry service/registry NodePort 10.152.183.44 <none> 5000:32000/TCP 38h
cert-manager service/cert-manager ClusterIP 10.152.183.183 <none> 9402/TCP 12h
cert-manager service/cert-manager-webhook ClusterIP 10.152.183.99 <none> 443/TCP 12h
echoserver service/echoserver ClusterIP 10.152.183.110 <none> 80/TCP 72m
ingress service/ingress LoadBalancer 10.152.183.4 192.168.0.11 80:32617/TCP,443:31867/TCP 64m
default service/nginx-deploy ClusterIP 10.152.183.149 <none> 80/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 42h
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 97m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 65m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 41h
kube-system deployment.apps/coredns 1/1 1 1 41h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 42h
kube-system deployment.apps/metrics-server 1/1 1 1 41h
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 41h
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 41h
container-registry deployment.apps/registry 1/1 1 1 38h
cert-manager deployment.apps/cert-manager-cainjector 1/1 1 1 12h
cert-manager deployment.apps/cert-manager 1/1 1 1 12h
cert-manager deployment.apps/cert-manager-webhook 1/1 1 1 12h
metallb-system deployment.apps/controller 1/1 1 1 97m
default deployment.apps/nginx-deploy 1/1 1 1 20m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/hostpath-provisioner-5c65fbdb4f 1 1 1 41h
kube-system replicaset.apps/coredns-86f78bb79c 1 1 1 41h
kube-system replicaset.apps/calico-kube-controllers-847c8c99d 1 1 1 42h
kube-system replicaset.apps/metrics-server-8bbfb4bdb 1 1 1 41h
kube-system replicaset.apps/kubernetes-dashboard-7ffd448895 1 1 1 41h
kube-system replicaset.apps/dashboard-metrics-scraper-6c4568dc68 1 1 1 41h
container-registry replicaset.apps/registry-9b57d9df8 1 1 1 38h
cert-manager replicaset.apps/cert-manager-cainjector-5c6cb79446 1 1 1 12h
cert-manager replicaset.apps/cert-manager-794657589 1 1 1 12h
cert-manager replicaset.apps/cert-manager-webhook-574c9758c9 1 1 1 12h
metallb-system replicaset.apps/controller-559b68bfd8 1 1 1 97m
default replicaset.apps/nginx-deploy-5797b88878 1 1 1 20m
</code></pre>
<p>It looks like MetalLB works, as the ingress service received an IP from the pool I specified.
Now, when I try to deploy an Ingress to reach the nginx deployment, I don't get the ADDRESS:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress-nginx-deploy
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: nginx-deploy
          servicePort: 80
</code></pre>
<p>And here is the output of <code>kubectl get ingress</code>:</p>
<pre><code>NAMESPACE   NAME                   CLASS    HOSTS      ADDRESS   PORTS   AGE
default     ingress-nginx-deploy   <none>   test.com             80      13m
</code></pre>
<p>Any help would be really appreciated. Thank you!</p>
| Green | <p><strong>TL;DR</strong></p>
<p>There are some ways to fix your <code>Ingress</code> so that it would get the IP address.</p>
<p>You can <strong>either</strong>:</p>
<ul>
<li>Delete the <code>kubernetes.io/ingress.class: nginx</code> and add <code>ingressClassName: public</code> under <code>spec</code> section.</li>
<li>Use the newer example (<code>apiVersion</code>) from official documentation that by default will have assigned an <code>IngressClass</code>:
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></li>
</ul>
</li>
</ul>
<p>Example of <code>Ingress</code> resource that will fix your issue:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-nginx-deploy
spec:
ingressClassName: public
# above field is optional as microk8s default ingressclass will be assigned
rules:
- host: test.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy
port:
number: 80
</code></pre>
<p>You can read more about <code>IngressClass</code> by following official documentation:</p>
<ul>
<li><em><a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/" rel="nofollow noreferrer">Kubernetes.io: Blog: Improvements to the Ingress API in Kubernetes 1.18 </a></em></li>
</ul>
<p>I've included more explanation that should shed some additional light on this particular setup.</p>
<hr />
<p>After you apply above <code>Ingress</code> resource the output of:</p>
<ul>
<li><code>$ kubectl get ingress</code></li>
</ul>
<p>Will be following:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx-deploy public test.com 127.0.0.1 80 43s
</code></pre>
<p>As you can see, the <code>ADDRESS</code> contains <code>127.0.0.1</code>. That is because this particular <code>Ingress controller</code>, enabled by the addon, binds to ports <code>80</code> and <code>443</code> on your host (the <code>MicroK8s</code> node).</p>
<p>You can see it by running:</p>
<ul>
<li><code>$ sudo microk8s kubectl get daemonset -n ingress nginx-ingress-microk8s-controller -o yaml</code></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>Look for <code>hostPort</code> and <code>securityContext.capabilities</code>.</p>
</blockquote>
<p>The <code>Service</code> of type <code>LoadBalancer</code> created by you will work with your <code>Ingress controller</code> but it will not be displayed under <code>ADDRESS</code> in <code>$ kubectl get ingress</code>.</p>
<blockquote>
<p>A side note!</p>
<p>Please remember that in this particular setup you will need to connect to your <code>Ingress controller</code> with a <code>Header</code> <code>Host: test.com</code> unless you have DNS resolution configured to support your setup. Otherwise you will get a <code>404</code>.</p>
</blockquote>
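<p>A quick way to verify this from another machine on the LAN is to set the header explicitly (the IP below is the MetalLB address from the question's service listing):</p>
<pre><code>curl -H "Host: test.com" http://192.168.0.11/
</code></pre>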
<hr />
<p>Additional resource:</p>
<ul>
<li><em><a href="https://github.com/ubuntu/microk8s/issues/1794" rel="nofollow noreferrer">Github.com: Ubuntu: Microk8s: Ingress with MetalLb issues</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Configuration: Overview</a></em></li>
</ul>
| Dawid Kruk |
<p>I am currently trying to automatically scale a deployment when the readiness probe has failed for its currently running pods.</p>
<p>A pod is IDLE until a POST request is sent to it and while it is processing the request, it is not answering any other request.</p>
<p>To know when a processing is in progress, I created an endpoint returning TRUE if the pod is IDLE, FALSE otherwise.</p>
<p>I configure the readiness probe to query this endpoint to mark it as unavailable when a processing is in progress (and to mark it back to available when it is not processing anymore).</p>
<p>By default I have a limited pool of pods (like 5) that can answer requests.</p>
<p>But I still want to be able to send another POST with other parameters to trigger another processing when all my 5 pods are unavailable.</p>
<p>So, when the readiness probe fails for all pods, I want to scale the deployment in order to have other pods available to answer requests.</p>
<p>The issue here is that I did not find how to do such a thing with K8S or if this is even possible. Is there someone who could help me on this?</p>
<p>An alternative would be to create a 'watcher' pod that would watch all the readiness probes for a given deployment and, when they fail for all pods, the watcher would be in charge of scaling the deployment.</p>
<p>But this alternative implies development that I would like to avoid if it is natively possible to do in K8S.</p>
<p>Thank you :)</p>
| Gabi | <p>A readiness probe by itself shouldn't be able to scale a deployment. By default, the only thing it can do is remove the Pod's IP from the endpoints of all the services that match the Pod.</p>
<p>The only solution that comes to my mind is what you said: having a Horizontal Pod Autoscaler with custom metrics pointing to a Pod that keeps track of all the readiness probes.</p>
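<p>For illustration only (my sketch, not part of the original answer): such a <code>HorizontalPodAutoscaler</code> could look roughly like the manifest below. It assumes the <code>autoscaling/v2</code> API (use <code>autoscaling/v2beta2</code> on older clusters) and a metrics adapter (for example prometheus-adapter) that already exposes a hypothetical per-pod metric named <code>busy_workers</code>, reporting <code>1</code> while a pod is processing a request and <code>0</code> while it is idle:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker             # hypothetical name of the Deployment to scale
  minReplicas: 5
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: busy_workers   # hypothetical custom metric served by the metrics adapter
      target:
        type: AverageValue
        averageValue: 800m   # i.e. 0.8: add replicas before every existing pod is busy
</code></pre>
<p>With a target average below <code>1</code>, the autoscaler adds replicas as the pool approaches full utilisation, which is roughly the behaviour asked for in the question.</p>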
| shaki |
<p>I'm getting a "no healthy upstream" error when accessing Ambassador. The Pods/Services and the LoadBalancer all seem to be fine and healthy. Ambassador runs on top of AKS.</p>
<p>At the moment I have multiple services running in the Kubernetes cluster and each service has its own Mapping with its own prefix. Is it possible to point multiple k8s services to the same mapping so that I don't have too many prefixes, and all my k8s services end up under the same Ambassador prefix?</p>
<p>By default Ambassador is taking me through HTTPS, which is creating certificate issues. Although I will be bringing HTTPS in the near future, for now I'm just looking to prove the concept, so how can I disable HTTPS and run Ambassador as HTTP only?</p>
| Pavan Aleti | <ol>
<li><p>No healthy upstream typically means that, for whatever reason, Ambassador cannot find the service listed in the mapping. The first thing I usually do when I see this is to run <code>kubectl exec -it -n ambassador {my_ambassador_pod_name} -- sh</code> and try to <code>curl -v my-service</code> where "my-service" is the Kube DNS name of the service you are trying to hit. Depending on the response, it can give you some hints on why Ambassador is failing to see the service.</p>
</li>
<li><p>Mappings work on a 1-1 basis with services. If your goal, however, is to avoid prefix usage, there are other ways Ambassador can match to create routes. One common way I've seen is to use host-based routing (<a href="https://www.getambassador.io/docs/latest/topics/using/headers/host/" rel="nofollow noreferrer">https://www.getambassador.io/docs/latest/topics/using/headers/host/</a>) and create subdomains for either individual or logical sets of services.</p>
</li>
<li><p>AES defaults to redirecting to HTTPS, but this behavior can be overwritten by applying a host with insecure routing behavior. A very simple one that I commonly use is this:</p>
</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: getambassador.io/v2
kind: Host
metadata:
name: wildcard
namespace: ambassador
spec:
hostname: "*"
acmeProvider:
authority: none
requestPolicy:
insecure:
action: Route
selector:
matchLabels:
hostname: wildcard
</code></pre>
| Casey |
<p>I want to print a list of all my pods with the CPU requirements in a column</p>
<p>I'm pretty sure it's something like
<code>kubectl get pods 'spec.containers[].resources.limits.cpu'</code></p>
<p>Can someone please give me the correct syntax?</p>
| R. Doolan | <p>You can also use the below command to get the CPU limit. It is cleaner than using JSONPath output.</p>
<pre><code>kubectl get po -o custom-columns="Name:metadata.name,CPU-limit:spec.containers[*].resources.limits.cpu"
</code></pre>
<p>To order the output by CPU limit, add <code>--sort-by=".spec.containers[*].resources.limits.cpu"</code>.</p>
| Tarul Kinra |
<p>I am designing a web application where users can have trade bots running. So they will sign in, pay for membership then they will create a bot, enter the credentials and start the bot. The user can stop / start the trade bot.</p>
<p>I am trying to do this using kubernetes, so I will have everything running on kubernetes. I will create a namespace named bots and all bots for all clients will be running inside this bot namespace.</p>
<p>The stack is: Python (Django framework) + MySQL + AWS + Kubernetes.</p>
<p>Question: Is there a way to programmatically create a pod using Python? I want to integrate this with the application code, so when a user clicks on "create new bot" it will start a new pod running with all the parameters for the specific user.</p>
<p>Basically each pod will be a tenant. But a tenant can have multiple pods / bots.
So how do I do that? Is there any Kubernetes Python lib that does it? I did some online searching but didn't find anything.
Thanks</p>
| gabrielpasv | <p>As noted by Harsh Manvar, you can use the official Kubernetes Python client. Here is a short function that allows you to do it.</p>
<pre><code>import os
import time

from kubernetes import config
from kubernetes.client import Configuration
from kubernetes.client.api import core_v1_api
from kubernetes.client.rest import ApiException

# Load the in-cluster configuration (the code is expected to run inside a pod
# whose service account is allowed to create pods in the target namespace).
config.load_incluster_config()
try:
    c = Configuration().get_default_copy()
except AttributeError:
    c = Configuration()
    c.assert_hostname = False
Configuration.set_default(c)

core_v1 = core_v1_api.CoreV1Api()


def open_pod(cmd: list,
             pod_name: str,
             namespace: str = 'bots',
             image: str = f'{repository}:{tag}',           # 'repository' and 'tag' are placeholders to be defined by you
             restart_policy: str = 'Never',
             service_account_name: str = 'bots-service-account'):
    '''
    Launches a pod in the Kubernetes cluster that runs the given command.
    '''
    api_response = None
    # Check whether a pod with this name already exists.
    try:
        api_response = core_v1.read_namespaced_pod(name=pod_name,
                                                   namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print("Unknown error: %s" % e)
            exit(1)

    if not api_response:
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} does not exist. Creating it...')
        # Create the pod manifest as a plain dictionary.
        pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'labels': {
                    'bot': current_bot                      # placeholder: identifier of the bot/tenant
                },
                'name': pod_name
            },
            'spec': {
                'containers': [{
                    'image': image,
                    'name': 'container',
                    'args': cmd,
                    'env': [
                        {'name': 'env_variable', 'value': env_value},   # placeholder environment variable
                    ]
                }],
                # Together with the service account, 'imagePullSecrets' allows pulling from a private registry:
                # 'imagePullSecrets': [{'name': 'regcred'}],
                'restartPolicy': restart_policy,
                'serviceAccountName': service_account_name
            }
        }
        print(f'POD MANIFEST:\n{pod_manifest}')
        api_response = core_v1.create_namespaced_pod(body=pod_manifest, namespace=namespace)

        # Wait until the pod leaves the 'Pending' phase.
        while True:
            api_response = core_v1.read_namespaced_pod(name=pod_name,
                                                       namespace=namespace)
            if api_response.status.phase != 'Pending':
                break
            time.sleep(0.01)
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} in {namespace} created.')

    return pod_name
</code></pre>
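<p>A hypothetical call, assuming the placeholders above (<code>repository</code>, <code>tag</code>, <code>current_bot</code>, <code>env_value</code>) are defined, could then look like this:</p>
<pre><code># Launch a dedicated pod for one client's bot, passing its parameters as container args
open_pod(cmd=['python', 'bot.py', '--bot-id', '42'],
         pod_name='bot-42',
         namespace='bots')
</code></pre>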
<p>For further investigation, refer to the examples in the official github repo: <a href="https://github.com/kubernetes-client/python/tree/master/examples" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/tree/master/examples</a></p>
| tdimitch |
<p>I have in my home lab a default installation of Kubernetes with kube-router as the network provider. kube-router is, as default, set as the service proxy. I have not set an explicit service-cluster-ip-network in my kube-controller-manager, so kube-router should be assigning service cluster IPs only within the default 10.96.x.x/16 subnet. However, I am regularly getting service cluster IPs anywhere within the larger 10.x.x.x./8 subnet. I am at a loss where/why it's not remaining within 10.96.x.x. Ideas? Thanks!</p>
| Scott Balmos | <p><strong>TL;DR</strong></p>
<p>Your Kubernetes cluster is behaving correctly.</p>
<p><strong>By default (if not specified otherwise) using <code>kubeadm</code> to provision your cluster, the <code>--service-cidr</code> is set to <code>10.96.0.0/12</code></strong>.</p>
<p><code>ClusterIP</code> address like <code>10.110.15.13</code> would be included in the above mentioned network (<code>10.96.0.0/12</code>).</p>
<p>I've provided more explanation below:</p>
<hr />
<h3>Subnetting</h3>
<p>If you use one of the available online <a href="http://jodies.de/ipcalc?host=10.96.0.0&mask1=12&mask2=" rel="nofollow noreferrer">IP calculators</a> you will be seeing exact same situation like the one included below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>CIDR</th>
<th>10.96.0.0/12</th>
</tr>
</thead>
<tbody>
<tr>
<td>Subnet mask</td>
<td>255.240.0.0</td>
</tr>
<tr>
<td>Network address (first)</td>
<td>10.96.0.0</td>
</tr>
<tr>
<td>Broadcast address (last)</td>
<td>10.111.255.255</td>
</tr>
<tr>
<td>First useable address</td>
<td><strong>10.96.0.1</strong></td>
</tr>
<tr>
<td>Last useable address</td>
<td><strong>10.111.255.254</strong></td>
</tr>
<tr>
<td>Number of hosts allocatable</td>
<td>1048574</td>
</tr>
</tbody>
</table>
</div>
<p>From the above table you can see that the <code>Service IP</code> range would be the following:</p>
<ul>
<li><code>10.96.0.1</code>-<code>10.111.255.254</code></li>
</ul>
<p>This means IPs like <code>10.104.5.2</code> and <code>10.110.15.13</code> <strong>are in range</strong> of the above network.</p>
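<p>As a quick sanity check (an addition of mine, not part of the original answer), Python's standard <code>ipaddress</code> module confirms the same membership:</p>
<pre><code>from ipaddress import ip_address, ip_network

# The default service CIDR used by kubeadm
service_cidr = ip_network("10.96.0.0/12")

print(ip_address("10.104.5.2") in service_cidr)    # True
print(ip_address("10.110.15.13") in service_cidr)  # True
print(ip_address("10.200.0.1") in service_cidr)    # False: outside 10.96.0.0 - 10.111.255.255
</code></pre>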
<hr />
<h3>Kubernetes <code>--service-cidr</code></h3>
<p>As said earlier if you don't specify the <code>--service-cidr</code> when using <code>$ kubeadm init</code> it will be set to default <code>10.96.0.0/12</code>.</p>
<p>Following the official documentation of <code>kubeadm</code>:</p>
<blockquote>
<pre class="lang-sh prettyprint-override"><code>--service-cidr string Default: "10.96.0.0/12"
Use alternative range of IP address for service VIPs.
</code></pre>
<p>-- <em><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Setup tools: Kubeadm: Kubeadm init: Options</a></em></p>
</blockquote>
<p>If you provision the cluster without this parameter, you will be able to see it configured in the:</p>
<ul>
<li><code>kube-apiserver</code>:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system kube-apiserver-kubernetes-NODE_NAME -o yaml | grep "service-cluster-ip-range"
- --service-cluster-ip-range=10.96.0.0/12
</code></pre>
<ul>
<li><code>kube-controller-manager</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system kube-controller-manager-kubernetes-NODE_NAME -o yaml | grep "service-cluster-ip-range"
- --service-cluster-ip-range=10.96.0.0/12
</code></pre>
<hr />
<h3>Kube-router</h3>
<p>It's also explicitly stated in the <code>kube-router</code>'s source code:</p>
<blockquote>
<pre class="lang-golang prettyprint-override"><code>func NewKubeRouterConfig() *KubeRouterConfig {
return &KubeRouterConfig{
// SKIPPED
ClusterIPCIDR: "10.96.0.0/12",
// SKIPPED
}
}
</code></pre>
<p>-- <em><a href="https://github.com/cloudnativelabs/kube-router/blob/c14588535433e3c334fa4dd639168619f9a34210/pkg/options/options.go#L73" rel="nofollow noreferrer">Github.com: Cloudnativelabs: Kube-router: Pkg: Options: Options.go: Line 73</a></em></p>
</blockquote>
<blockquote>
<pre class="lang-golang prettyprint-override"><code> fs.StringVar(&s.ClusterIPCIDR, "service-cluster-ip-range", s.ClusterIPCIDR,
"CIDR value from which service cluster IPs are assigned. Default: 10.96.0.0/12")
</code></pre>
<p>-- <em><a href="https://github.com/cloudnativelabs/kube-router/blob/c14588535433e3c334fa4dd639168619f9a34210/pkg/options/options.go#L187" rel="nofollow noreferrer">Github.com: Cloudnativelabs: Kube-router: Pkg: Options: Options.go: Line 187</a></em></p>
</blockquote>
<p>It's also referenced in the <a href="https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md" rel="nofollow noreferrer">user guide</a>.</p>
| Dawid Kruk |
<p>I'm using Liquibase in a project and it is working fine so far.
<br/>I added a new changeset and it works well locally; once deployed, the container's state hangs with the following statement:
<strong>"liquibase: Waiting for changelog lock..."</strong>.
<br/><br/>
The resource limits of the deployment are not set.
<br/>The update of the "databasechangeloglock" table is not working, because the pod keeps locking it.
<br/>
How can I solve this?</p>
| maryam | <p>See <a href="https://stackoverflow.com/questions/30386274/is-there-a-liquibase-lock-timeout/30398337">other question here</a>. If the lock happens and the process exits unexpectedly, then the lock will stay there. </p>
<p>According to <a href="https://stackoverflow.com/a/19081612/13752392">this answer</a>, you can remove the lock by running SQL directly:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
</code></pre>
<blockquote>
<p>Note: Depending on your DB engine, you may need to use FALSE or 'f' instead of <code>0</code> for the <code>LOCKED</code> value.</p>
</blockquote>
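<p>If you have the Liquibase CLI available, the lock can usually also be cleared without raw SQL via the <code>releaseLocks</code> command (verify the exact command name against your Liquibase version):</p>
<pre><code>liquibase releaseLocks
</code></pre>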
<p>Per your question, if your process itself is creating a new lock and still failing every time, then most likely the process is exiting/failing for a different reason (or checking for the lock in the wrong order).</p>
<p>Another option is to consider the <a href="https://github.com/liquibase/liquibase-nochangeloglock" rel="noreferrer">Liquibase No ChangeLog Lock extension</a>.</p>
<blockquote>
<p>Note: This is probably a last resort. The extension could be an option if you were having more trouble with the changelog lock than getting any benefit (e.g. only running one instance of the app and don't really need locking). It is likely not the "best" solution, but is certainly an option depending on what you need. The README in the link says this too.</p>
</blockquote>
| aarowman |
<p>I am following the following tutorial to configure MongoDB as a stateful set within my Kubernetes cluster on GCP.</p>
<p><a href="https://codelabs.developers.google.com/codelabs/cloud-mongodb-statefulset/index.html?index=..%2F..index#0" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/cloud-mongodb-statefulset/index.html?index=..%2F..index#0</a></p>
<p>I am able to access the database using "kubectl exec -ti mongo-0 mongo" as shown in the tutorial.
However, my Node.js/Mongoose application is unable to connect to it, throwing the following error:</p>
<pre><code>MongoDB connection error: { MongooseError [MongooseServerSelectionError]: connect ECONNREFUSED 10.16.0.22:27017
at new MongooseServerSelectionError (/usr/src/app/node_modules/mongoose/lib/error/serverSelection.js:24:11)
at NativeConnection.Connection.openUri (/usr/src/app/node_modules/mongoose/lib/connection.js:823:32)
at Mongoose.connect (/usr/src/app/node_modules/mongoose/lib/index.js:333:15)
at Object.<anonymous> (/usr/src/app/app.js:6:10)
at Module._compile (internal/modules/cjs/loader.js:816:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:827:10)
at Module.load (internal/modules/cjs/loader.js:685:32)
at Function.Module._load (internal/modules/cjs/loader.js:620:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:877:12)
at internal/main/run_main_module.js:21:11
message: 'connect ECONNREFUSED 10.16.0.22:27017',
name: 'MongooseServerSelectionError',
reason:
TopologyDescription {
type: 'ReplicaSetNoPrimary',
setName: null,
maxSetVersion: null,
maxElectionId: null,
servers:
Map {
'mongo-0.mongo:27017' => [ServerDescription],
'mongo-1.mongo:27017' => [ServerDescription],
'mongo-2.mongo:27017' => [ServerDescription] },
stale: false,
compatible: true,
compatibilityError: null,
logicalSessionTimeoutMinutes: null,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
commonWireVersion: null },
[Symbol(mongoErrorContextSymbol)]: {} }
(node:29) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: connect ECONNREFUSED 10.16.0.22:27017
at new MongooseServerSelectionError (/usr/src/app/node_modules/mongoose/lib/error/serverSelection.js:24:11)
at NativeConnection.Connection.openUri (/usr/src/app/node_modules/mongoose/lib/connection.js:823:32)
at Mongoose.connect (/usr/src/app/node_modules/mongoose/lib/index.js:333:15)
at Object.<anonymous> (/usr/src/app/app.js:6:10)
at Module._compile (internal/modules/cjs/loader.js:816:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:827:10)
at Module.load (internal/modules/cjs/loader.js:685:32)
at Function.Module._load (internal/modules/cjs/loader.js:620:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:877:12)
at internal/main/run_main_module.js:21:11
(node:29) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:29) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
</code></pre>
<p>app.js connection code is as follows</p>
<pre><code>mongoose.connect(
process.env.MONGO_CONNECTION_STRING,
{
useUnifiedTopology: true,
useNewUrlParser: true,
}
);
mongoose.Promise = global.Promise;
let db = mongoose.connection;
db.on("error", console.error.bind(console, "MongoDB connection error:"));
</code></pre>
<p>And Environment variable is as follows in deployment file.</p>
<pre><code>env:
- name: MONGO_CONNECTION_STRING
value: "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/test"
</code></pre>
<p>Mongo pods status</p>
<pre><code>mongo-0 2/2 Running 0 8m35s
mongo-1 2/2 Running 0 7m49s
mongo-2 2/2 Running 0 6m54s
</code></pre>
<p>kubectl get statefulset</p>
<pre><code>NAME READY AGE
mongo 3/3 9m31s
</code></pre>
<p>Service</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongo ClusterIP None <none> 27017/TCP 27m
</code></pre>
<p>rs.config() output</p>
<pre><code>{
"_id" : "rs0",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "mongo-0.mongo:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "mongo-1.mongo:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "mongo-2.mongo:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : 60000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5f16a0f3671c091fe183af72")
}
}
</code></pre>
<p>Any help is appreciated.</p>
| Zain Saleem | <p>Figured it out</p>
<p>The tutorial is missing the step that initiates the replica set.</p>
<p>After connecting to the set using the command</p>
<pre><code>kubectl exec -ti mongo-0 mongo
</code></pre>
<p>Just run the following two commands:</p>
<pre><code>rs.initiate({_id: "rs0", version: 1, members: [
{ _id: 0, host : "mongo-0.mongo:27017" },
{ _id: 1, host : "mongo-1.mongo:27017" },
{ _id: 2, host : "mongo-2.mongo:27017" }
]});
rs.slaveOk()
</code></pre>
| Zain Saleem |
<p>I'm trying to deploy a NodeRED pod on my cluster, and have created a service and ingress for it so it can be accessed the same way as the rest of my cluster, under the same domain. However, when I try to access it via <code>host-name.com/nodered</code> I receive <code>Cannot GET /nodered</code>.</p>
<p>Following are the templates used and the <code>describe</code> output of all the involved components.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nodered-app-service
namespace: {{ kubernetes_namespace_name }}
spec:
ports:
- port: 1880
targetPort: 1880
selector:
app: nodered-service-pod
</code></pre>
<p>I have also tried with port:80 for the service, to no avail.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodered-service-deployment
namespace: {{ kubernetes_namespace_name }}
labels:
app: nodered-service-deployment
name: nodered-service-deployment
spec:
replicas: 1
selector:
matchLabels:
app: nodered-service-pod
template:
metadata:
labels:
app: nodered-service-pod
target: gateway
buildVersion: "{{ kubernetes_build_number }}"
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: nodered-service-account
automountServiceAccountToken: false
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
containers:
- name: nodered-service-statefulset
image: nodered/node-red:{{ nodered_service_version }}
imagePullPolicy: {{ kubernetes_image_pull_policy }}
readinessProbe:
httpGet:
path: /
port: 1880
initialDelaySeconds: 30
timeoutSeconds: 1
periodSeconds: 10
failureThreshold: 3
livenessProbe:
httpGet:
path: /
port: 1880
initialDelaySeconds: 30
timeoutSeconds: 1
periodSeconds: 10
failureThreshold: 3
securityContext:
allowPrivilegeEscalation: false
resources:
limits:
memory: "2048M"
cpu: "1000m"
requests:
memory: "500M"
cpu: "100m"
ports:
- containerPort: 1880
name: port-name
envFrom:
- configMapRef:
name: nodered-service-configmap
env:
- name: BUILD_TIME
value: "{{ kubernetes_build_time }}"
</code></pre>
<p>The <code>target: gateway</code> refers to the ingress controller</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nodered-ingress
namespace: {{ kubernetes_namespace_name }}
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: host-name.com
http:
paths:
- path: /nodered(/|$)(.*)
backend:
serviceName: nodered-app-service
servicePort: 1880
</code></pre>
<p>The following is what my <code>describe</code> commands show:</p>
<pre><code>Name: nodered-app-service
Namespace: nodered
Labels: <none>
Annotations: <none>
Selector: app=nodered-service-pod
Type: ClusterIP
IP: 55.3.145.249
Port: <unset> 1880/TCP
TargetPort: port-name/TCP
Endpoints: 10.7.0.79:1880
Session Affinity: None
Events: <none>
</code></pre>
<pre><code>Name: nodered-service-statefulset-6c678b7774-clx48
Namespace: nodered
Priority: 0
Node: aks-default-40441371-vmss000007/10.7.0.66
Start Time: Thu, 26 Aug 2021 14:23:33 +0200
Labels: app=nodered-service-pod
buildVersion=latest
pod-template-hash=6c678b7774
target=gateway
Annotations: <none>
Status: Running
IP: 10.7.0.79
IPs:
IP: 10.7.0.79
Controlled By: ReplicaSet/nodered-service-statefulset-6c678b7774
Containers:
nodered-service-statefulset:
Container ID: docker://a6f8c9d010feaee352bf219f85205222fa7070c72440c885b9cd52215c4c1042
Image: nodered/node-red:latest-12
Image ID: docker-pullable://nodered/node-red@sha256:f02ccb26aaca2b3ee9c8a452d9516c9546509690523627a33909af9cf1e93d1e
Port: 1880/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 26 Aug 2021 14:23:36 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 2048M
Requests:
cpu: 100m
memory: 500M
Liveness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
nodered-service-configmap ConfigMap Optional: false
Environment:
BUILD_TIME: 2021-08-26T12:23:06.219818+0000
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes: <none>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
<pre><code>Name: nodered-app-service
Namespace: nodered
Labels: <none>
Annotations: <none>
Selector: app=nodered-service-pod
Type: ClusterIP
IP: 55.3.145.249
Port: <unset> 1880/TCP
TargetPort: port-name/TCP
Endpoints: 10.7.0.79:1880
Session Affinity: None
Events: <none>
PS C:\Users\hid5tim> kubectl describe ingress -n nodered
Name: nodered-ingress
Namespace: nodered
Address: 10.7.31.254
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
host-name.com
/nodered(/|$)(.*) nodered-app-service:1880 (10.7.0.79:1880)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>The logs from the ingress controller are below. I've been on this issue for the last 24 hours or so and it's tearing me apart; the setup looks identical to other deployments I have that are functional. Could this be something wrong with the nodered image? I have checked and it does expose 1880.</p>
<pre><code>194.xx.xxx.x - [194.xx.xxx.x] - - [26/Aug/2021:10:40:12 +0000] "GET /nodered HTTP/1.1" 404 146 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" 871 0.008 [nodered-nodered-app-service-80] 10.7.0.68:1880 146 0.008 404
74887808fa2eb09fd4ed64061639991e
</code></pre>
| i-haidar | <p>As the comment by Andrew points out, I was using the rewrite annotation wrong; once I removed the <code>(/|$)(.*)</code> and specified the path type as <code>Prefix</code>, it worked.</p>
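<p>For reference, here is a sketch of what the corrected <code>Ingress</code> could look like (my reconstruction of the fix described above, written against the <code>networking.k8s.io/v1</code> API since that is where <code>pathType</code> is specified; adjust to the API version your cluster supports):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: my-namespace                  # placeholder for the templated namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: host-name.com
    http:
      paths:
      - path: /nodered                     # plain prefix, no regex capture groups
        pathType: Prefix
        backend:
          service:
            name: nodered-app-service
            port:
              number: 1880
</code></pre>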
| i-haidar |
<p>I'm trying to learn K8. I have a frontend app that is made using Angular. I'm serving it behind an NGINX proxy. I also have a backend that provides the functionality. These are my files:</p>
<p><strong>nginx.conf</strong></p>
<pre><code>http {
upstream gateway {
server sb-gateway:8081;
}
include /etc/nginx/mime.types;
server {
listen 80;
sendfile on;
default_type application/octet-stream;
location / {
root /usr/share/nginx/html;
try_files $uri $uri/ /index.html;
index index.html index.htm;
}
location /api/ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://gateway;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
</code></pre>
<p><strong>user.service.ts</strong></p>
<pre><code>...
return this.http.post('/api/login', {data})
...
</code></pre>
<p><strong>ui-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: sb-ui
name: sb-ui
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
nodePort: 30000
selector:
app: sb-ui
</code></pre>
<p>The backend service is called <code>sb-gateway</code> and I'm trying to route all requests that have <code>/api/</code> in them to the backend (as specified in NGINX). But I get this error when i try to hit a route from the frontend/pressing a button.</p>
<p><code>POST http://192.168.64.2:30000/api/login 404 (NOT FOUND)</code>.</p>
<p>I'm trying to run this on minikube. I have SSH'd into the frontend pod and tried to curl the backend service. It works. I just need to figure out how I am supposed to communicate from the frontend service to the backend service in Angular.</p>
<p>Any help would be greatly appreciated!</p>
| hrishikeshpaul | <p>Posting this answer as a community wiki to give more of a baseline on how you can allow/route the traffic to/in the Kubernetes cluster.</p>
<p>Feel free to expand it.</p>
<hr />
<p>Having in mind following statement:</p>
<blockquote>
<p>I'm trying to learn K8.</p>
</blockquote>
<p>Instead of using an <code>nginx</code> <code>Pod</code> that is acting as a proxy, you should be using <code>Ingress</code>.</p>
<p>In short <code>Ingress</code> can be described as:</p>
<blockquote>
<p>An API object that manages external access to the services in a cluster, typically HTTP.</p>
<p>Ingress may provide load balancing, SSL termination and name-based virtual hosting.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></p>
</blockquote>
<p>Through an <code>Ingress</code> object you can tell your <code>Ingress</code> controller (which in this example will be <code>nginx-ingress</code>) to have a specific configuration that will allow you to route the traffic accordingly.</p>
<hr />
<p>Having as an example following situation:</p>
<ul>
<li>You have your application that should receive request on a <code>/api</code> path</li>
<li>You have other applications that should receive all other request (in this example simple <code>nginx</code> <code>Pod</code>)</li>
</ul>
<p>Being specific to:</p>
<blockquote>
<p>I'm trying to run this on minikube.</p>
</blockquote>
<p><code>Minikube</code> has already built-in <code>Ingress</code> addon that can be enabled with a single command (there are some exceptions, please refer to the documentation of specific <code>--driver</code>):</p>
<ul>
<li><code>$ minikube addons enable ingress</code></li>
</ul>
<p>With above command you will spawn <code>nginx-ingress</code>.</p>
<p>Creating a setup to show how it could be configured:</p>
<ul>
<li><code>$ kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0</code></li>
<li><code>$ kubectl expose deployment hello --port=8080 --type=ClusterIP</code></li>
<li><code>$ kubectl create deployment nginx --image=nginx</code></li>
<li><code>$ kubectl expose deployment nginx --port=80 --type=ClusterIP</code></li>
</ul>
<p>The <code>Ingress</code> resource should look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
ingressClassName: "nginx"
rules:
- host:
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: hello
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
</code></pre>
<p>What above example of <code>Ingress</code> will do:</p>
<ul>
<li>Request with <code>/api</code>, <code>/api/</code>, <code>/api/xyz</code>, etc.:
<ul>
<li>route to <code>hello</code> <code>Service</code> with <code>hello</code> <code>Pod</code></li>
</ul>
</li>
<li>Request with <code>/</code>, <code>/xyz</code>, <code>/abc/xyz</code>, etc.:
<ul>
<li>route to <code>nginx</code> <code>Service</code> with <code>nginx</code> <code>Pod</code></li>
</ul>
</li>
</ul>
<blockquote>
<p>A side note!</p>
<p>If you need to add some specific configuration to your <code>Ingress</code> you can check this documentation:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: Nginx configuration: Annotations</a></em></li>
</ul>
</blockquote>
<p>You can see from the <code>hello</code> <code>Pod</code> logs:</p>
<pre class="lang-sh prettyprint-override"><code>2021/04/09 15:22:07 Server listening on port 8080
2021/04/09 15:50:14 Serving request: /api
2021/04/09 15:50:17 Serving request: /api
2021/04/09 15:50:19 Serving request: /api/asdasd
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
<li><em><a href="https://minikube.sigs.k8s.io/docs/commands/addons/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Commands: Addons</a></em></li>
<li><em><a href="https://stackoverflow.com/questions/66641786/nginx-installation-and-configuration/66691013#66691013">Stackoverflow.com: Questions: Nginx installation and configuration</a></em></li>
</ul>
| Dawid Kruk |
<p>I need to simulate the "NotReady" status for a node and tried to stop kubelet to achieve that. But I am getting the below error; it looks like this is not the right command for my k8s environment. I need this to verify the working of taints and tolerations.</p>
<p>systemctl stop kubelet.service</p>
<p>Failed to stop kubelet.service: Unit kubelet.service not loaded.</p>
| Ashok | <p>RKE is a K8s distribution that runs entirely within Docker containers as per <a href="https://rancher.com/docs/rke/latest/en/" rel="nofollow noreferrer">documentation</a>. That means that none of the K8s services are native Linux services. Try <code>docker ps</code>, and you'll find a container named <code>kubelet</code> running (among others).</p>
<p>So to stop kubelet service on RKE clusters, you'll need to stop the container:</p>
<pre><code>docker stop kubelet
</code></pre>
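<p>After the container is stopped, the node should flip to <code>NotReady</code> once the controller manager's node monitor grace period expires (40 seconds by default). A quick way to observe this, and to restore the node afterwards (assuming the same containerised setup):</p>
<pre><code># watch the node transition to NotReady (and back)
kubectl get nodes -w

# bring the kubelet container back afterwards
docker start kubelet
</code></pre>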
| seshadri_c |
<p>As far as I know, the default service account in Kubernetes should not have any permissions assigned. But I can still perform the following from a pod on my Docker Desktop k8s:</p>
<pre><code>APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/pods
</code></pre>
<p>How is that possible?</p>
<p>Furthermore, I discovered that each pod has a different value of the SA token (<code>cat /var/run/secrets/kubernetes.io/serviceaccount/token</code>), which also differs from the one returned by <code>kubectl describe secret default-token-cl9ds</code>.
Shouldn't it be the same?</p>
<p><strong>Update:</strong></p>
<pre><code>$ kubectl get rolebindings.rbac.authorization.k8s.io podviewerrolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"podviewerrolebinding","namespace":"default"},"roleRef":{"apiGroup"
:"rbac.authorization.k8s.io","kind":"Role","name":"podviewerrole"},"subjects":[{"kind":"ServiceAccount","name":"podviewerserviceaccount"}]}
creationTimestamp: "2021-09-07T10:01:51Z"
name: podviewerrolebinding
namespace: default
resourceVersion: "402212"
uid: 2d32f045-b172-4fff-a6b0-1525b0b96e65
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: podviewerrole
subjects:
- kind: ServiceAccount
name: podviewerserviceaccount
</code></pre>
| Marcin | <p>I hit the same issue; it looks like Docker Desktop has elevated permissions (i.e. admin) by default, see the article <a href="https://github.com/docker/for-mac/issues/3694" rel="nofollow noreferrer">here</a>.</p>
<p>Removing the clusterrolebinding docker-for-desktop-binding via the following command fixes the issue.</p>
<pre><code>kubectl delete clusterrolebinding docker-for-desktop-binding
</code></pre>
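<p>To verify the effect (a quick check, not part of the original answer), you can ask the API server whether the default service account is still allowed to list pods:</p>
<pre><code>kubectl auth can-i list pods --as=system:serviceaccount:default:default
</code></pre>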
| Daryl Hurst |
<p>I successfully deployed my Kubernetes app using <code>kubectl apply -f deployment.yaml</code>.
When I try to hit the URL endpoint, I'm getting an <code>nginx 404 Not Found</code> error page.</p>
<p>My next step is to open a <code>bash</code> shell on the docker instance that is running my app. How do I do this in Kubernetes?</p>
<p><strong>How do I ssh into the docker container running my app, or docker exec bash to an app I've deployed to Kubernetes?</strong></p>
<p>If I were running in docker I would run <code>docker ps</code> to find the container ID and then run <code>docker exec -t ### bash</code> to open a shell on my container to look around to troubleshoot why something isn't working.</p>
<p><strong>What is the equivalent way to do this in Kubernetes?</strong></p>
<h2>Searching for a solution</h2>
<p>I searched and found <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">this URL</a>, which says how to get a shell on your app.
The summary of that URL is:</p>
<pre><code>kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
kubectl get pod shell-demo
kubectl exec --stdin --tty shell-demo -- /bin/bash
</code></pre>
<p>But when I tried the equivalent commands, I got an error see below:</p>
<pre><code>kubectl get pods --namespace my-app-namespace
NAME READY STATUS RESTARTS AGE
dpl-my-app-787bc5b7d-4ftkb 1/1 Running 0 2h
</code></pre>
<p>Then I tried:</p>
<pre><code>kubectl exec --stdin --tty my-app-namespace -- /bin/bash
Error from server (NotFound): pods "my-app-namespace" not found
exit status 1
</code></pre>
<p>I figured this happened because I was trying to exec into the namespace not the pod, so I also tried with the <code>dpl-my-app-...</code> (see below) but got the same error.</p>
<pre><code>kubectl exec --stdin --tty dpl-my-app-787bc5b7d-4ftkb -- /bin/bash
Error from server (NotFound): pods "dpl-my-app-787bc5b7d-4ftkb" not found
exit status 1
</code></pre>
<p>What is the command I need to get the pod instance so that <code>kubectl exec</code> will work?</p>
| PatS | <p>As correctly stated by @David Maze:</p>
<blockquote>
<p>Your <code>kubectl get pods</code> command has a <code>--namespace</code> option; you need to repeat this in the <code>kubectl exec</code> command. – David Maze 12 hours ago</p>
</blockquote>
<p>If you've created your <code>Deployment</code>: <code>dpl-my-app</code> in a namespace: <code>my-app-namespace</code> you should also specify the <code>--namespace</code>/<code>-n</code> parameter in all of your commands.</p>
<blockquote>
<p>A side note!</p>
<p>There is a tool to change namespaces, called: <a href="https://github.com/ahmetb/kubectx/blob/master/kubens" rel="nofollow noreferrer">kubens</a></p>
</blockquote>
<hr />
<p>With a following command:</p>
<ul>
<li><code>kubectl exec --stdin --tty my-app-namespace -- /bin/bash</code></li>
</ul>
<p>You've correctly identified the issue that you are trying to exec into a <code>namespace</code> but not into a <code>Pod</code></p>
<p>With a following command:</p>
<ul>
<li><code>kubectl exec --stdin --tty dpl-my-app-787bc5b7d-4ftkb -- /bin/bash</code></li>
</ul>
<p>You've tried to exec into a <strong><code>Pod</code></strong> named <code>dpl-my-app-787bc5b7d-4ftkb</code> but in a <code>default</code> namespace. Not in a <code>namespace</code> your <code>Pod</code> is residing.</p>
<p>To exec into your <code>Pod</code> in a specific <code>namespace</code> you should use following command:</p>
<ul>
<li><code>kubectl exec --stdin --tty --namespace my-app-namespace dpl-my-app-787bc5b7d-4ftkb -- /bin/bash</code></li>
</ul>
<p>Please notice the <code>--namespace</code> is before <code>--</code> where the commands to the <code>Pod</code> should be placed (like <code>-- /bin/bash</code>).</p>
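<p>If you prefer not to repeat <code>--namespace</code> on every command, you can also change the default namespace of your current context (an optional convenience, in addition to the <code>kubens</code> tool mentioned above):</p>
<pre><code>kubectl config set-context --current --namespace=my-app-namespace
</code></pre>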
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Overview: Working with objects: Namespaces</a></em></li>
<li><em><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Debug application cluster: Get shell running container</a></em></li>
</ul>
| Dawid Kruk |
<p>I am trying to use Ansible to put a pause in my playbook, since I am installing an operator from the Operator Hub and don't want to continue, until I know the CRDs I require in the following steps are installed. I have the following task but can't get it working yet.</p>
<pre><code>- name: Wait for CRDs to be available
command: kubectl get sub my-operator -n openshift-operators -o json
register: cmd
retries: 10
delay: 5
until: cmd.stdout | json_query('status.conditions[0].status') == true
</code></pre>
<p>Sample JSON</p>
<pre><code>{
"apiVersion": "operators.coreos.com/v1alpha1",
"kind": "Subscription",
"metadata": {
"creationTimestamp": "2021-12-13T04:23:58Z",
"generation": 1,
"labels": {
"operators.coreos.com/argocd-operator.openshift-operators": ""
},
"name": "argocd-operator",
"namespace": "openshift-operators",
"resourceVersion": "58122",
"uid": "6eaad3c1-8329-4d00-90b8-1ab635b3b370"
},
"spec": {
"channel": "alpha",
"config": {
"env": [
{
"name": "ARGOCD_CLUSTER_CONFIG_NAMESPACES",
"value": "argocd"
}
]
},
"installPlanApproval": "Automatic",
"name": "argocd-operator",
"source": "community-operators",
"sourceNamespace": "openshift-marketplace",
"startingCSV": "argocd-operator.v0.1.0"
},
"status": {
"catalogHealth": [
{
"catalogSourceRef": {
"apiVersion": "operators.coreos.com/v1alpha1",
"kind": "CatalogSource",
"name": "operatorhubio-catalog",
"namespace": "olm",
"resourceVersion": "57924",
"uid": "95871859-edbc-45ad-885c-3edaad2a1df6"
},
"healthy": true,
"lastUpdated": "2021-12-13T04:23:59Z"
}
],
"conditions": [
{
"lastTransitionTime": "2021-12-13T04:23:59Z",
"message": "targeted catalogsource openshift-marketplace/community-operators missing",
"reason": "UnhealthyCatalogSourceFound",
"status": "True",
"type": "CatalogSourcesUnhealthy"
}
],
"lastUpdated": "2021-12-13T04:23:59Z"
}
}
</code></pre>
| CodyK | <p>There is a small detail that is tripping up the condition. In the JSON output, the status is the string <code>"True"</code> and not the boolean that we are comparing against.</p>
<p>Note: <code>"status": "True"</code></p>
<p>Changing the condition to match the string <code>True</code>...</p>
<pre><code>until: cmd.stdout | json_query('status.conditions[0].status') == "True"
</code></pre>
<p>Or, applying the <code>| bool</code> filter...</p>
<pre><code>until: cmd.stdout | json_query('status.conditions[0].status') | bool
</code></pre>
<p>The complete task:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Wait for CRDs to be available
command: kubectl get sub my-operator -n openshift-operators -o json
register: cmd
retries: 10
delay: 5
until: cmd.stdout | json_query('status.conditions[0].status') | bool
</code></pre>
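<p>An alternative sketch (my suggestion, not required for the fix): let <code>kubectl</code> extract the field with JSONPath so that Ansible only has to compare a plain string:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Wait for the subscription's first condition to report True
  command: kubectl get sub my-operator -n openshift-operators -o jsonpath={.status.conditions[0].status}
  register: cmd
  retries: 10
  delay: 5
  until: cmd.stdout == "True"
</code></pre>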
| seshadri_c |
<p>I am using kubernetes 1.26 and I have a kubernetes cluster with a single node which is both master and worker.
I have built a docker image locally and I can list the image with docker images command:</p>
<pre><code>$ sudo docker images | grep nipun
nipun/a 1.0 ed1059539edf 52 minutes ago 250MB
</code></pre>
<p>But when I use this image "nipun/a:1.0" with imagePullPolicy "Never" or "IfNotPresent" in a deployment, then also Kubernetes tries to pull the image and fails. My deployment definition is below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nipun-deployment
labels:
app: nipun
spec:
replicas: 1
selector:
matchLabels:
app: nipun
template:
metadata:
labels:
app: nipun
spec:
containers:
- name: nipun
image: nipun/a:1.0
imagePullPolicy: Never
ports:
- containerPort: 8112
</code></pre>
<p>What might be the problem here? I was using the same approach with a lower version of k8s (probably 1.20) and it worked.</p>
| Nipun Talukdar | <p>You first have to run your own <a href="https://docs.docker.com/registry/deploying/" rel="nofollow noreferrer">local Docker registry</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2
</code></pre>
<p>Then, you can push your image to the registry as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>docker tag nipun/a:1.0 localhost:5000/nipun/a:1.0
docker push localhost:5000/nipun/a:1.0
</code></pre>
<p>Set the image name with the local registry in your deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>localhost:5000/nipun/a:1.0
</code></pre>
| Khaled |
<p>I have "YYYY-MM-DD HH:MM:SS.QQ ERROR" in my splunk logs.
Now I want to search for a similar date pattern along with a status like "2021-Apr-08 23:08:23.498 ERROR" in my Splunk logs and create an alert if the ERROR tag comes right after the date.
These dates are changeable and are generated at run time.</p>
<p>Can anyone suggest how to check for the date-time format along with the status in a Splunk query?</p>
| knowledge20 | <p>In the title you mentioned Amazon Web Services. If your events are actual AWS log data, you could install the <code>Splunk Add-on for Amazon Web Services</code>: <a href="https://splunkbase.splunk.com/app/1876/" rel="nofollow noreferrer">https://splunkbase.splunk.com/app/1876/</a></p>
<p>The add-on comes with a lot of field extractions. After installing the add-on, all you need to do is have a look at your events to find out the correct field name for the status text and then search for <code>status=ERROR</code>.</p>
<p>Alternatively, you can create the field extraction yourself. This regular expression should do:</p>
<pre><code>(?<date>\d\d\d\d-\w+-\d\d\s+\d\d:\d\d:\d\d\.\d\d\d)\s+(?<status>\w+)
</code></pre>
<p>You can test it here: <a href="https://regex101.com/r/pVg1Pm/1" rel="nofollow noreferrer">https://regex101.com/r/pVg1Pm/1</a></p>
<p>Now use Splunk's <code>rex</code> command to do the field extraction at search time:</p>
<p><a href="https://i.stack.imgur.com/Esuis.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Esuis.png" alt="Screenshot" /></a></p>
<p>To have the field extraction done automatically, you can add new field extractions via Settings / Fields / Field extractions.</p>
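<p>Putting it together as a complete alert search (the <code>your_index</code>/<code>your_sourcetype</code> values are placeholders of mine, replace them with your own), it could look roughly like:</p>
<pre><code>index=your_index sourcetype=your_sourcetype
| rex "(?&lt;date&gt;\d\d\d\d-\w+-\d\d\s+\d\d:\d\d:\d\d\.\d\d\d)\s+(?&lt;status&gt;\w+)"
| search status=ERROR
</code></pre>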
| whng |
<p>The service's external IP is stuck at <code>pending</code> and it is unable to bind to an external port. I have run the command to try to find the port.</p>
<blockquote>
<p>minikube service mongodb-express-service</p>
</blockquote>
<p>There is no error shown on the terminal. But the browser says that it is unable to connect to that url.</p>
<p><strong>Environment</strong></p>
<ul>
<li>Minikube</li>
<li>Oracle Virtual Box</li>
<li>Ubuntu 20.04</li>
</ul>
<p><strong>mongo-express.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-config-map
key: database_url
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-express-service
spec:
selector:
app: mongodb-express-service
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 30001
</code></pre>
<p><strong>Update</strong>
<strong>Kubectl get service output</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
mongodb-express-service LoadBalancer 10.106.190.5 <pending> 8081:30001/TCP 2d3h
mongodb-service ClusterIP 10.110.241.194 <none> 27017/TCP 3d23h
</code></pre>
<p><strong>Kubectl get pod output</strong></p>
<pre><code>mongo-express-85bdd6688f-mhv2x 1/1 Running 2 2d3h
mongodb-deployment-7f79c5f88f-h9dvf 1/1 Running 2 2d3h
</code></pre>
| tristan lim | <p>Posting this answer to give more reference on 2 topics:</p>
<ul>
<li>Kubernetes Service not able to get External IP</li>
<li>Issue that was present in this question</li>
</ul>
<hr />
<h3>Kubernetes Service not able to get External IP</h3>
<p><code>Service</code> of type <code>LoadBalancer</code>:</p>
<blockquote>
<p>Exposes the Service externally <strong>using a cloud provider's load balancer</strong>. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</p>
</blockquote>
<p>In a Kubernetes solution like <code>minikube</code> you won't get out of the box <code>External IP</code> that is associated with <code>Service</code> of type <code>LoadBalancer</code>. You don't need the <code>External IP</code> to connect to your <code>Services</code>. You can connect to your <code>Services</code> by <strong>either</strong>:</p>
<ul>
<li>Using a <code>NodePort</code> (<code>$ minikube service xyz</code> is using it)</li>
<li>Using a <code>$ minikube tunnel</code> (this would "assign" <code>External IP</code>'s)</li>
<li><code>Virtualbox</code> specific:
<ul>
<li>Use <code>metallb</code> (<a href="https://metallb.universe.tf/" rel="nofollow noreferrer">load-balancer implementation for bare metal Kubernetes clusters</a>) to allocate the range for your <code>LoadBalancers</code> from a subnet that Host-only networking adapter uses with <code>minikube</code>.</li>
</ul>
</li>
</ul>
<hr />
<h3>Issue that was present in this question</h3>
<p>The issue that was present in the question was connected with the mismatch between the <code>Service</code> <code>.spec.selector</code> and <code>Deployment</code> <code>.spec.selector.matchLabels</code>.</p>
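<p>In this particular case that means the <code>Service</code> selector has to match the <code>Pod</code> template's label <code>app: mongo-express</code> rather than the <code>Service</code>'s own name. A corrected sketch of the <code>Service</code> from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: mongodb-express-service
spec:
  selector:
    app: mongo-express        # must match .spec.template.metadata.labels of the Deployment
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30001
</code></pre>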
<p>The easiest way to check if your <code>selector</code> is configured correctly (apart from looking at the <code>YAML</code> manifest), is by:</p>
<ul>
<li><code>$ kubectl get endpoints</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME ENDPOINTS AGE
kubernetes A.B.C.D:443 28m
nginx --> 10.32.0.13:80 <-- 5s
</code></pre>
<blockquote>
<p>Disclaimer!</p>
<p>The <code>endpoint</code> will be present when <code>Pod</code> is in <code>Running</code> state and <code>Ready</code>.</p>
</blockquote>
<p>If the <code>nginx</code> (<code>Service</code> name) <code>endpoints</code> was empty it could mean a mismatched selector or issue with the <code>Pod</code> (<code>CrashLoopBackOff</code>, <code>Pending</code>, etc.).</p>
<p>You can also check this by:</p>
<ul>
<li><code>$ kubectl describe svc SERVICE_NAME</code> <-- <code>Endpoints</code> field</li>
<li><code>$ kubectl describe endpoints SERVICE_NAME</code> <-- <code>Subsets.addresses</code> field</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li>
<li><em><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Handbook: Accessing</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm trying to add Kubernetes/helm support to a dot net core 3.0 project but I only see Docker Compose in the dropdown. What am I missing? I can start a new project with Kubernetes support just not able to convert a project. </p>
<p><a href="https://github.com/MicrosoftDocs/visualstudio-docs/issues/4029" rel="noreferrer">https://github.com/MicrosoftDocs/visualstudio-docs/issues/4029</a></p>
| Chéyo | <p>I encountered the same issue, but it was solved after I installed Visual Studio Tools for Kubernetes via the Visual Studio Installer.</p>
| Mimi |
<h3>What I'm trying to achieve</h3>
<p>I'm setting up a microk8s cluster consisting of a postgres database and an elixir application that communicates with the database.</p>
<h3>The problem I'm encountering</h3>
<p>I always get an error from the database pod when attempting to connect:</p>
<pre><code>2022-01-05 18:54:05.179 UTC [216] DETAIL: Password does not match for user "phoenix_db_username
".
Connection matched pg_hba.conf line 99: "host all all all md5"
</code></pre>
<p>The connection between the database and app is clearly working since the database logs the error.</p>
<h3>What I've tried</h3>
<p>Others with the problem have suggested deleting the PV and PVC. See this github issue:
<a href="https://github.com/helm/charts/issues/16251#issuecomment-577560984" rel="nofollow noreferrer">https://github.com/helm/charts/issues/16251#issuecomment-577560984</a></p>
<ul>
<li>I've tried deleting the pvc and pv, I can confirm that the pv was removed as I checked <code>/var/snap/microk8s/common/default-storage</code> before and after removing it.</li>
<li>I've tried permanently deleting the storage by running <code>microk8s.disable storage</code> and then enabling storage again with <code>microk8s.enable storage</code>.</li>
</ul>
<p>Output from <code>microk8s.disable storage</code>:</p>
<pre><code>Disabling default storage
[...]
Storage removed
Remove PVC storage at /var/snap/microk8s/common/default-storage ? (Y/N): y
Storage space reclaimed
</code></pre>
<ul>
<li>I've checked the environment of the database pod with printenv and I see the correct values for POSTGRES_USER and POSTGRES_PASSWORD (phoenix_db_username, phoenix_db_password)</li>
<li>I've checked the environment of the application pod with printenv and I see the correct values for DB_USERNAME and DB_PASSWORD (phoenix_db_username, phoenix_db_password)</li>
<li><strong>I checked the pg_hba.conf file and it did not contain any user</strong></li>
</ul>
<p>According to postgres docker documentation this should create a user, however I don't think it is creating one. <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">https://hub.docker.com/_/postgres</a></p>
<h3>Elixir app yml resources</h3>
<p><strong>The configMap of the elixir app</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: phoenix-app-config
labels:
app: phoenix-app
data:
APP_NAME: "phoenix-app"
APP_HOST: "0.0.0.0"
APP_PORT: "4000"
DB_NAME: "prod_db"
DB_HOSTNAME: "phoenix-app-database"
NAMESPACE: "production"
</code></pre>
<p><strong>The secrets of the elixir app</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: phoenix-app-secrets
labels:
app: phoenix-app
data:
SECRET_KEY_BASE: KtpxCV3OY8KnRiC5yVo7Be+GRVeND+NxAsyk+FASDFasdfadffhNS804MLk
DB_PASSWORD: cGhvZW5peC1kYi1wYXNzd29yZAo= # phoenix_db_username
DB_USERNAME: cGhvZW5peC1kYi11c2VybmFtZQo= # phoenix_db_password
</code></pre>
<p><strong>The deployment of the elixir app</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: phoenix-app
labels:
app: phoenix-app
spec:
replicas: 2
selector:
matchLabels:
app: phoenix-app
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: phoenix-app
spec:
containers:
- name: phoenix-app
image: REDACTED
imagePullPolicy: Always
command: ["./bin/hello", "start"]
lifecycle:
preStop:
exec:
command: ["./bin/hello", "stop"]
ports:
- containerPort: 4000
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
envFrom:
- configMapRef:
name: phoenix-app-config
- secretRef:
name: phoenix-app-secrets
imagePullSecrets:
- name: gitlab-pull-secret
</code></pre>
<h3>Database yml resources</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: phoenix-app-database
labels:
app: phoenix-app-database
spec:
serviceName: phoenix-app-database
replicas: 1
selector:
matchLabels:
app: phoenix-app-database
template:
metadata:
labels:
app: phoenix-app-database
spec:
containers:
- name: phoenix-app-database
image: postgres:12-alpine
envFrom:
- configMapRef:
name: phoenix-app-database-config
- secretRef:
name: phoenix-app-database-secrets
ports:
- containerPort: 5432
name: postgresdb
volumeMounts:
- name: phoenix-app-database
mountPath: /var/lib/postgresql/data
volumes:
- name: phoenix-app-database
persistentVolumeClaim:
claimName: phoenix-app-db-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: phoenix-app-db-pvc
spec:
storageClassName: microk8s-hostpath
capacity:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Mi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: phoenix-app-database-config
labels:
app: phoenix-app-database
data:
POSTGRES_DB: "prod_db"
---
apiVersion: v1
kind: Secret
metadata:
name: phoenix-app-database-secrets
labels:
app: phoenix-app-database
data:
POSTGRES_USER: cGhvZW5peF9kYl91c2VybmFtZQo= # phoenix_db_username
POSTGRES_PASSWORD: cGhvZW5peF9kYl9wYXNzd29yZAo= # phoenix_db_password
---
apiVersion: v1
kind: Service
metadata:
name: phoenix-app-database
labels:
app: phoenix-app-database
spec:
ports:
- port: 5432
name: phoenix-app-database
type: NodePort
selector:
app: phoenix-app-database
---
</code></pre>
<h3>Logs from database pod creation</h3>
<pre class="lang-sh prettyprint-override"><code>me@me:~/Documents/kubernetes-test$ kubectl logs phoenix-app-database-0 -n production
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
2022-01-05 20:47:02.013 UTC [30] WARNING: no usable system locales were found
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-01-05 20:47:02.621 UTC [36] LOG: starting PostgreSQL 12.9 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
2022-01-05 20:47:02.623 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-05 20:47:02.641 UTC [37] LOG: database system was shut down at 2022-01-05 20:47:02 UTC
2022-01-05 20:47:02.645 UTC [36] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down....2022-01-05 20:47:02.794 UTC [36] LOG: received fast shutdown request
2022-01-05 20:47:02.795 UTC [36] LOG: aborting any active transactions
2022-01-05 20:47:02.797 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
2022-01-05 20:47:02.797 UTC [38] LOG: shutting down
2022-01-05 20:47:02.808 UTC [36] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-01-05 20:47:02.904 UTC [1] LOG: starting PostgreSQL 12.9 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
2022-01-05 20:47:02.904 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-01-05 20:47:02.905 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-01-05 20:47:02.909 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-05 20:47:02.925 UTC [50] LOG: database system was shut down at 2022-01-05 20:47:02 UTC
2022-01-05 20:47:02.929 UTC [1] LOG: database system is ready to accept connections
</code></pre>
| Beefcake | <p>Okay I have solved the issue and it was rather simple to fix, yet incredibly hard to notice.</p>
<p>I am creating the yml files via templates, such as this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: {{APP_NAME}}-database-secrets
labels:
app: {{APP_NAME}}-database
data:
POSTGRES_USER: {{DB_USERNAME_B64}}
POSTGRES_PASSWORD: {{DB_PASSWORD_B64}}
</code></pre>
<p>I then merge these templates together and replace all <code>{{ }}</code> enclosed declarations with a value from a specific environment. The ones that end with _B64 I encode into base64 format before inserting.</p>
<p>I was doing it like this which seemed to be working fine:</p>
<pre class="lang-bash prettyprint-override"><code>if [[ "${variable_key}" == *_B64 ]]; then
variable_value="$(echo "${variable_value}" | base64)"
fi
</code></pre>
<p>HOWEVER, this is NOT OK, because when I <code>echo</code> the variable here I am <strong>appending a newline</strong> to the variable <strong>which makes the database name and username illegal for postgres</strong>. I realized this when inspecting the decoded value on base64decode.org and saw that there was two lines.</p>
<p>I fixed it by changing the bash script to not print a newline (<code>-n</code>):</p>
<pre class="lang-bash prettyprint-override"><code>if [[ "${variable_key}" == *_B64 ]]; then
variable_value="$(echo -n "${variable_value}" | base64 -w 0 )"
fi
</code></pre>
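<p>A quick way to check for the stray newline locally is to compare the two encodings and to decode what is actually stored in the cluster (a sketch; the secret and key names are the ones used above):</p>
<pre class="lang-bash prettyprint-override"><code># encodes the trailing newline:
echo "phoenix_db_username" | base64      # cGhvZW5peF9kYl91c2VybmFtZQo=
# does not:
echo -n "phoenix_db_username" | base64   # cGhvZW5peF9kYl91c2VybmFtZQ==

# decode the value straight from the cluster and make any newline visible:
kubectl get secret phoenix-app-database-secrets -o jsonpath='{.data.POSTGRES_USER}' | base64 -d | od -c
</code></pre>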
<p>I hope this can help someone debug this problem in the future!</p>
| Beefcake |
<p>I need to run a python script from a KubernetesPodOperator, so I want to mount the python file into the Python docker Image. Reading some posts</p>
<ul>
<li><a href="https://stackoverflow.com/questions/57754521/how-to-mount-volume-of-airflow-worker-to-airflow-kubernetes-pod-operator">How to mount volume of airflow worker to airflow kubernetes pod operator?</a></li>
<li><a href="https://stackoverflow.com/questions/60950206/mounting-volume-issue-through-kubernetespodoperator-in-gke-airflow">Mounting volume issue through KubernetesPodOperator in GKE airflow</a></li>
<li><a href="https://stackoverflow.com/questions/69539197/mounting-folders-with-kubernetespodoperator-on-google-composer-airflow">Mounting folders with KubernetesPodOperator on Google Composer/Airflow</a></li>
<li><a href="https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/example_dags/example_kubernetes.py#L107" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/example_dags/example_kubernetes.py#L107</a></li>
<li><a href="https://www.aylakhan.tech/?p=655" rel="nofollow noreferrer">https://www.aylakhan.tech/?p=655</a></li>
</ul>
<p>it doesn't get clear at all for me.</p>
<p>The python file is located in the route <code>/opt/airflow/dags/test_dag</code>, so I would like to mount the entire folder and not only the script. I have tried with:</p>
<pre><code> vol1 = k8s.V1VolumeMount(
name='test_volume', mount_path='/opt/airflow/dags/test_dag'
)
volume = k8s.V1Volume(
name='test-volume',
persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'),
)
k = KubernetesPodOperator(
task_id="dry_run_demo",
cluster_name="eks",
namespace="data",
image="python:3.9-buster",
volumes=[volume],
volume_mounts=[vol1],
arguments=["echo", "10"],
)
</code></pre>
<p>But I am getting the error:</p>
<blockquote>
<p>Pod "pod.388baaaa7c27489c9dd5f7f37ee8ce5b" is invalid: spec.containers[0].volumeMounts[0].name: Not found: "test_volume\</p>
</blockquote>
<p>I am using Airflow 2.1.1 deployed in a EC2 with docker-compose and <code>apache-airflow-providers-cncf-kubernetes==3.0.1</code></p>
<p>EDIT: with Elad's suggestion the question was "solved". Then I got the error <code>Pod Event: FailedScheduling - persistentvolumeclaim "test-volume" not found</code>, so I just took out the <code>persistent_volume_claim</code> argument and I didn't get any error, but I am getting an empty directory in the POD, without any file. I have read something about creating the persistentvolumeclain in the namespace, but it would be very convenient to create it manually instead of dynamically with every operator</p>
| Javier Lopez Tomas | <p>The error means that the names don't match:
you defined <code>name='test_volume'</code> for <code>V1VolumeMount</code> and <code>name='test-volume'</code> for <code>V1Volume</code>.</p>
<p>To solve the issue, the names should be identical:</p>
<pre><code>vol1 = k8s.V1VolumeMount(
name='test-volume', mount_path='/opt/airflow/dags/test_dag'
)
volume = k8s.V1Volume(
name='test-volume',
persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'),
)
</code></pre>
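<p>Regarding the follow-up error about the missing claim: the <code>persistent_volume_claim</code> source does require an existing PVC in the task's namespace. If you prefer to create it once by hand rather than dynamically per run, a minimal claim could look like this (the storage class and size are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume
  namespace: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>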
| Elad Kalif |
<p>I successfully snapshotted my volume using the Python Kubernetes client.</p>
<p>However, I got the message below.</p>
<p>I didn't have the same VolumeSnapshot in the cluster.</p>
<p>Why does this happen?</p>
<p>Code:</p>
<p>def create_snapshot(namespace, pvc_name):</p>
<pre><code>snapshot_class = "snapshotclass"
snapshot_name = f"snapshot-{pvc_name}"
snapshot_resource = {
"apiVersion": "snapshot.storage.k8s.io/v1beta1",
"kind": "VolumeSnapshot",
"metadata": {"name": snapshot_name},
"spec": {
"volumeSnapshotClassName": snapshot_class,
"source": {"persistentVolumeClaimName": pvc_name}
}
}
res = custom_api.create_namespaced_custom_object(
group="snapshot.storage.k8s.io",
version="v1beta1",
namespace= namespace,
plural="volumesnapshots",
body=snapshot_resource,
)
print(res)
create_snapshot("test", "test-pvc")
</code></pre>
<p>The volumesnapshot is created successfully, but I got a message:</p>
<pre><code> File "/home/new/my/test/rescheduler/utils/k8s_controller.py", line 72, in create_snapshot
body=snapshot_resource,
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/api/custom_objects_api.py", line 225, in create_namespaced_custom_object
return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/api/custom_objects_api.py", line 358, in create_namespaced_custom_object_with_http_info
collection_formats=collection_formats)
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 353, in call_api
_preload_content, _request_timeout, _host)
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
_request_timeout=_request_timeout)
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 397, in request
body=body)
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/rest.py", line 280, in POST
body=body)
File "/home/new/my/test/venv/lib/python3.6/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (409)
Reason: Conflict
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'dec3c73a-e5fc-4c63-8d1a-6e2e6c6600e1', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'my, 25 Apr 2021 10:50:52 GMT', 'Content-Length': '346'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"volumesnapshots.snapshot.storage.k8s.io \"snapshot-test-pvc\" already exists","reason":"AlreadyExists","details":{"name":"snapshot-test-pvc,"group":"snapshot.storage.k8s.io","kind":"volumesnapshots"},"code":409}
</code></pre>
| sun | <p>Posting this answer as a community wiki to give one of the possible reasons why you can encounter error <code>409</code> when trying to create the resources with above snippet of code.</p>
<p>Feel free to expand it.</p>
<hr />
<p>The error encountered in the question:</p>
<pre><code>Reason: Conflict
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'dec3c73a-e5fc-4c63-8d1a-6e2e6c6600e1', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'my, 25 Apr 2021 10:50:52 GMT', 'Content-Length': '346'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"volumesnapshots.snapshot.storage.k8s.io \"snapshot-test-pvc\" already exists","reason":"AlreadyExists","details":{"name":"snapshot-test-pvc,"group":"snapshot.storage.k8s.io","kind":"volumesnapshots"},"code":409}
</code></pre>
<blockquote>
<p><code>"snapshot-test-pvc\" already exists","reason":"AlreadyExists"</code></p>
</blockquote>
<p>states that the resource already exists in the cluster. To check if the resource exists in the cluster you can run following commands:</p>
<ul>
<li><code>$ kubectl get volumesnapshots -A</code></li>
<li><code>$ kubectl describe volumesnapshots RESOURCE_NAME -A</code></li>
</ul>
<p>I've used the code that was in the question and had no issues with it. The course of actions was following:</p>
<ul>
<li>first run - <code>VolumeSnapshot</code> created successfully</li>
<li>second run - code returned the 409 error stating that resource already exists.</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>kubernetes.client.exceptions.ApiException: (409)
Reason: Conflict
<-- REDACTED -->
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"volumesnapshots.snapshot.storage.k8s.io \"snapshot-example-pvc\" already exists","reason":"AlreadyExists","details":{"name":"snapshot-example-pvc","group":"snapshot.storage.k8s.io","kind":"volumesnapshots"},"code":409}
</code></pre>
<blockquote>
<p>A side note!</p>
<p>Above error was returned with modified code from the question (mainly values).</p>
</blockquote>
<p>You can also see this error when trying to run <code>$ kubectl create -f resource.yaml -v=4</code> on already created resource.</p>
<hr />
<p>For anyone interested here is the <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal, reproducible example</a> of a code used in the question (it was missing <code>import</code> and the <code>def</code> was misplaced):</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
def create_snapshot(namespace, pvc_name):
config.load_kube_config()
custom_api = client.CustomObjectsApi()
snapshot_class = "snapshotclass"
snapshot_name = f"snapshot-{pvc_name}"
snapshot_resource = {
"apiVersion": "snapshot.storage.k8s.io/v1beta1",
"kind": "VolumeSnapshot",
"metadata": {"name": snapshot_name},
"spec": {
"volumeSnapshotClassName": snapshot_class,
"source": {"persistentVolumeClaimName": pvc_name}
}
}
res = custom_api.create_namespaced_custom_object(
group="snapshot.storage.k8s.io",
version="v1beta1",
namespace= namespace,
plural="volumesnapshots",
body=snapshot_resource,
)
print(res)
create_snapshot("default", "test-pvc")
</code></pre>
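<p>If the snapshot creation should be idempotent, one option (not part of the original code, just a sketch) is to treat the 409 as a no-op by catching the exception around the call:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes.client.exceptions import ApiException

try:
    create_snapshot("default", "test-pvc")
except ApiException as e:
    if e.status == 409:
        # the VolumeSnapshot already exists - nothing to do
        print("VolumeSnapshot already exists, skipping creation")
    else:
        raise
</code></pre>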
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://github.com/fabric8io/kubernetes-client/issues/544" rel="nofollow noreferrer">Github.com: Fabric8io: Kubernetes client: Issues: 409 conflict status code when create pod</a></em></li>
<li><em><a href="https://www.oreilly.com/library/view/managing-kubernetes/9781492033905/ch04.html" rel="nofollow noreferrer">Oreilly.com: Library: View: Managing Kubernetes: Chapter 04</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm installing Argo CD using its helm chart through Terraform definition like below. The setup is simple, I create a namespace separate so it can be deleted on <code>terraform destroy</code> then I install the chart.</p>
<pre><code>resource "kubernetes_namespace" "ns_argocd" {
metadata {
annotations = {
name = "argocd"
}
name = "argocd"
}
}
resource "helm_release" "argocd" {
name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
namespace = kubernetes_namespace.ns_argocd.metadata.0.name
create_namespace = false
version = "5.13.1"
depends_on = [kubernetes_namespace.ns_argocd]
}
</code></pre>
<p>Since I am still experimenting on the setup, I have to destroy and recreate a few times and every time that I hit <code>terraform destroy</code> it times out because the pods and the namespace are stuck on <code>terminating</code> status (see below). My current fix is to manually force delete every pod and namespace via <code>kubectl delete [pod] -n argocd --force</code>.</p>
<p>Is there anything that I'm missing to pass to the values file that will prevent this behavior?</p>
<pre class="lang-bash prettyprint-override"><code>NAME STATUS AGE
argocd Terminating 3h58m
default Active 4h4m
kube-node-lease Active 4h4m
kube-public Active 4h4m
kube-system Active 4h4m
</code></pre>
<pre class="lang-bash prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Terminating 0 3h55m
argocd-applicationset-controller-7bb6d6d68c-p52j7 1/1 Terminating 0 3h55m
argocd-dex-server-6bc5f5689c-ps6wk 1/1 Terminating 0 3h55m
argocd-notifications-controller-57bd8dbc87-msdnw 1/1 Terminating 0 3h55m
argocd-redis-77df65946d-fhb46 1/1 Terminating 0 3h55m
argocd-repo-server-6984b7475d-b2625 1/1 Terminating 0 3h55m
argocd-server-548dffcdcc-nbrhc 1/1 Terminating 0 3h55m
</code></pre>
| everspader | <p>Check out this link. It highlights the issue with orphaned CRDs left by Terraform. We also need to set the finalizers to [].</p>
<p><a href="https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/865" rel="nofollow noreferrer">https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/865</a></p>
| Sabdeth |
<p>We are trying to create a MySQL pod with databases ready by cloning the PVC of an already running MySQL pod. <br/></p>
<p>Use case: we have a staging environment with the database imported and want to create dynamic environments based off that database structure and data. This approach should save us significant bootstrapping time (download and import of a dump vs. a clone of the PV). However, once we have the target MySQL pod running with the cloned PVC attached, we can't see any databases available in it. MySQL starts normally, recognises the <code>/var/lib/mysql/mysql</code> directory and skips the new DB setup; however, the databases are not there.
Details:</p>
<ul>
<li>MySQL image: mysql:5.7</li>
<li>we use InnoDb</li>
<li>we scale-in source StatefulSet before taking clone (expecting source MySQL saves everything to disk)</li>
<li>PVC is mounted as:
<pre><code>volumeMounts:
- name: mysql-data
mountPath: /var/lib/mysql
</code></pre>
</li>
</ul>
<p>What are we missing?</p>
| Anton Andrushchenko | <p>Apparently the issue was related to the AWS EBS CSI driver. Volume cloning does not seem to work there; however, the VolumeSnapshot feature might solve the issue.</p>
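<p>For reference, a snapshot-based copy with the EBS CSI driver could look roughly like the following (class names, claim names and sizes are placeholders, and the external-snapshotter CRDs/controller must already be installed):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapclass
driver: ebs.csi.aws.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-data-snapshot
spec:
  volumeSnapshotClassName: ebs-snapclass
  source:
    persistentVolumeClaimName: mysql-data-source
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-restored
spec:
  storageClassName: ebs-sc
  dataSource:
    name: mysql-data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
</code></pre>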
| Anton Andrushchenko |
<p>I have a k8s deployment which consists of a cron job (runs hourly), service (runs the http service) and a storage class (pvc to store data, using gp2).</p>
<p>The issue I am seeing is that gp2 is only readwriteonce.</p>
<p>I notice when the cron job creates a job and it lands on the same node as the service it can mount it fine.</p>
<p>Is there something I can do in the service, deployment or cron job YAML to ensure the cron job and service always land on the same node? It can be any node, as long as the cron job goes to the same node as the service.</p>
<p>This isn't an issue in my lower environment, as we have very few nodes, but in our production environments, where we have more nodes, it is an issue.</p>
<p>In short, I want the cron job (which creates a job and then a pod) to run its pod on the same node that my service's pod is on.</p>
<p>I know this isn't best practice, but our web service reads data from the PVC and serves it. The cron job pulls new data in from other sources and leaves it for the web server.</p>
<p>Happy for other ideas / ways.</p>
<p>Thanks</p>
| Lemex | <p>Focusing only on the part:</p>
<blockquote>
<p>How can I schedule a workload (<code>Pod</code>, <code>Job</code>, <code>Cronjob</code>) on a specific set of <code>Nodes</code></p>
</blockquote>
<p>You can spawn your <code>Cronjob</code>/<code>Job</code> either with:</p>
<ul>
<li><code>nodeSelector</code></li>
<li><code>nodeAffinity</code></li>
</ul>
<hr />
<h3><code>nodeSelector</code></h3>
<blockquote>
<p><code>nodeSelector</code> is the simplest recommended form of node selection constraint. <code>nodeSelector</code> is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node selector</a></em></p>
</blockquote>
<p>An example could be the following (assuming that your node has a specific label that is referenced in <code>.spec.jobTemplate.spec.template.spec.nodeSelector</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
nodeSelector: # <-- IMPORTANT
schedule: "here" # <-- IMPORTANT
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>Running above manifest will schedule your <code>Pod</code> (<code>Cronjob</code>) on a node that has a <code>schedule=here</code> label:</p>
<ul>
<li><code>$ kubectl get pods -o wide </code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-1616323740-mqdmq 0/1 Completed 0 2m33s 10.4.2.67 node-ffb5 <none> <none>
hello-1616323800-wv98r 0/1 Completed 0 93s 10.4.2.68 node-ffb5 <none> <none>
hello-1616323860-66vfj 0/1 Completed 0 32s 10.4.2.69 node-ffb5 <none> <none>
</code></pre>
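<p>The <code>schedule=here</code> label used above has to be present on the target node beforehand; it can be added (and removed) with <code>kubectl</code>, where the node name is just the one from this example:</p>
<pre class="lang-sh prettyprint-override"><code># add the label used in the example
kubectl label nodes node-ffb5 schedule=here
# remove it again if needed
kubectl label nodes node-ffb5 schedule-
</code></pre>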
<hr />
<h3><code>nodeAffinity</code></h3>
<blockquote>
<p>Node affinity is conceptually similar to <code>nodeSelector</code> -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.</p>
<p>There are currently two types of node affinity, called <code>requiredDuringSchedulingIgnoredDuringExecution</code> and <code>preferredDuringSchedulingIgnoredDuringExecution</code>. You can think of them as "hard" and "soft" respectively, in the sense that the former specifies rules that must be met for a pod to be scheduled onto a node (just like nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node affinity</a></em></p>
</blockquote>
<p>An example could be the following (assuming that your node has a specific label that is referenced in the <code>.spec.jobTemplate.spec.template.spec.affinity</code> section):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
# --- nodeAffinity part
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: schedule
operator: In
values:
- here
# --- nodeAffinity part
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<ul>
<li><code>$ kubectl get pods</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-1616325840-5zkbk 0/1 Completed 0 2m14s 10.4.2.102 node-ffb5 <none> <none>
hello-1616325900-lwndf 0/1 Completed 0 74s 10.4.2.103 node-ffb5 <none> <none>
hello-1616325960-j9kz9 0/1 Completed 0 14s 10.4.2.104 node-ffb5 <none> <none>
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Overview: Working with objects: Labels</a></em></li>
</ul>
<p>I'd reckon you could also take a look on this StackOverflow answer:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/51212904/kubernetes-pvc-with-readwritemany-on-aws">Stackoverflow.com: Questions: Kubernetes PVC with readwritemany on AWS</a></em></li>
</ul>
| Dawid Kruk |
<p>I am facing a minor issue with getting secrets from an external Vault into an AWS EKS container.</p>
<p>I am using a sidecar container to inject secrets into pods.</p>
<p>I have created the secrets at the path below:</p>
<pre><code>vault kv put secrets/mydemo-eks/config username='admin' password='secret'
</code></pre>
<p>My pod YAML is as below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mydemo
labels:
app: mydemo
annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/agent-inject-status: 'update'
vault.hashicorp.com/auth-path: 'auth/mydemo-eks'
vault.hashicorp.com/namespace: 'default'
vault.hashicorp.com/role: 'mydemo-eks-role'
vault.hashicorp.com/agent-inject-secret-credentials.txt: 'secrets/data/mydemo-eks/config'
spec:
serviceAccountName: mydemo-sa
containers:
- name: myapp
image: nginx:latest
ports:
- containerPort: 80
</code></pre>
<p>When I'm checking the real-time logs,</p>
<p>I'm getting the output below:</p>
<p><a href="https://i.stack.imgur.com/Ywi7Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ywi7Z.png" alt="enter image description here" /></a></p>
<p>My Hashicorp Vault policy is as below,</p>
<pre><code>vault policy write mydemo-eks-policy - <<EOF
path "secrets/data/mydemo-eks/config" {
capabilities = ["read"]
}
EOF
</code></pre>
<p>Actually, the secrets are already there at the mentioned path:</p>
<p><a href="https://i.stack.imgur.com/yBi94.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yBi94.png" alt="enter image description here" /></a></p>
<p>Any idea what I might have done wrong?</p>
<p>Has anyone worked on this scenario before?</p>
<p>Thanks</p>
| Hardik Patel | <p>I have modified the policy as below,</p>
<pre><code>vault policy write mydemo-eks-policy - <<EOF
path "secrets/mydemo-eks/config" {
capabilities = ["read"]
}
EOF
</code></pre>
<p>Earlier I had used:</p>
<pre><code>vault policy write mydemo-eks-policy - <<EOF
path "secrets/data/mydemo-eks/config" {
capabilities = ["read"]
}
EOF
</code></pre>
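<p>To double-check which path style your setup actually needs, it can help to read the secret and the policy back with the Vault CLI (assuming the CLI is authenticated against the same Vault):</p>
<pre><code># show the secret and how the mount/path is laid out
vault kv get secrets/mydemo-eks/config

# review the policy that the Kubernetes role maps to
vault policy read mydemo-eks-policy

# check whether the "secrets/" mount is kv version 1 or 2
# (kv v2 needs "data/" in policy paths, kv v1 does not)
vault secrets list -detailed
</code></pre>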
| Hardik Patel |
<p>I'm new to monitoring tools like Prometheus and Grafana, and I would like to create a dashboard which represents the current <code>requests and limits</code> resources and the actual usage for a pod. In addition, this pod has 2 containers inside.</p>
<p>My resources for first container looks like:</p>
<pre><code> resources:
requests:
cpu: "3800m"
memory: "9500Mi"
limits:
cpu: "6500m"
memory: "9500Mi"
</code></pre>
<p>and for second container:</p>
<pre><code>resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 50m
memory: 50Mi
</code></pre>
<p>When executing this query in Prometheus:</p>
<p><code>rate(container_cpu_usage_seconds_total{pod=~"MY_POD"}[5m])</code>
I get: <a href="https://i.stack.imgur.com/eBLDV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eBLDV.png" alt="enter image description here" /></a>
And to be honest, I don't know how this returned data relates to the configured resources. On Grafana it looks like this:
<a href="https://i.stack.imgur.com/DGltA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGltA.png" alt="enter image description here" /></a></p>
<p>In addition, I would like to add information about <code>requests and limits</code> to the dashboard, but I don't know how to scale the dashboard to show all the data.</p>
<p>When I execute this query: <code>kube_pod_container_resource_requests{pod=~"MY_POD"}</code> I get:</p>
<p><a href="https://i.stack.imgur.com/HPGOJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HPGOJ.png" alt="enter image description here" /></a></p>
<p>And this looks valid compared to my resources. I have a proper value for limits too, but I would like to represent all this data (usage, requests, limits) on one dashboard. Could somebody give me any tips on how to achieve this?</p>
| Frendom | <p>Simple, just add 2 more queries.</p>
<blockquote>
<p>If you don't know where to add them: below the Metrics Browser and Options tab you
can find a + symbol to add more queries.</p>
<p>Also, <code>container_cpu_usage_seconds_total</code> is a counter metric, and
its rate gives you the CPU usage in cores.</p>
</blockquote>
<p>Use <code>kube_pod_container_resource_requests{pod=~"MY_POD"}</code> and <code>kube_pod_container_resource_limits{pod=~"MY_POD"}</code>. If it's only one Pod, there are no issues; but if you have multiple Pods, use <code>sum</code>:</p>
<ul>
<li>Query A: <code>sum(rate(container_cpu_usage_seconds_total{pod=~"MY_POD"}[5m]))</code></li>
<li>Query B: <code>sum(kube_pod_container_resource_requests{pod=~"MY_POD"})</code></li>
<li>Query C: <code>sum(kube_pod_container_resource_limits{pod=~"MY_POD"})</code></li>
</ul>
<p>This will look good without too much detail. For more detail, such as container-wise data, just create three more panels for requests, limits and usage by container, and add <code>by(container)</code> after every query.</p>
<p><strong>Another Approach:</strong></p>
<p>Create variables for the Pod and Container so that you can select the container you want to see, and add the 3 queries in a single panel so that the panel is more dynamic and less noisy.</p>
| Muthuraj R |
<p>I have a requirement in which I need to create a cronjob in Kubernetes, but the pod has multiple containers (with a single container it's working fine).</p>
<p>Is it possible? </p>
<p>The requirement is something like this:</p>
<ol>
<li>First container: run the shell script to do a job.</li>
<li>Second container: run the fluentbit conf to parse the log and send it.</li>
</ol>
<p>Previously I thought to have a deployment in place, and that is working fine, but since that deployment was used just for 10-minute jobs I thought to make it a cron job.</p>
<p>Any help is really appreciated.</p>
<p>Also, about the cronjob, I am not sure if a pod can support multiple containers to do the same.</p>
<p>Thank you,
Sunny</p>
| sunnybhambhani | <p>I need to agree with the answer provided by @Arghya Sadhu. It shows how you can run multi container <code>Pod</code> with a <code>CronJob</code>. Before the answer I would like to give more attention to the comment provided by @Chris Stryczynski:</p>
<blockquote>
<p>It's not clear whether the containers are run in parallel or sequentially</p>
</blockquote>
<hr />
<p>It is not entirely clear if the workload that you are trying to run:</p>
<blockquote>
<p>The requirement is something like this:</p>
<ol>
<li>First container: Run the shell script to do a job.</li>
<li>Second container: run fluentbit conf to parse the log and send it.</li>
</ol>
</blockquote>
<p>could be run in <code>parallel</code> (both running at the same time) or requires a <code>sequential</code> approach (after X completes successfully, run Y).</p>
<p>If the workload can be run in parallel, the answer provided by @Arghya Sadhu is correct; however, if one workload depends on another, I'd reckon you should be using <code>initContainers</code> instead of multi-container <code>Pods</code>.</p>
<p>The example of a <code>CronJob</code> that implements the <code>initContainer</code> could be following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: ubuntu
image: ubuntu
command: [/bin/bash]
args: ["-c","cat /data/hello_there.txt"]
volumeMounts:
- name: data-dir
mountPath: /data
initContainers:
- name: echo
image: busybox
command: ["bin/sh"]
args: ["-c", "echo 'General Kenobi!' > /data/hello_there.txt"]
volumeMounts:
- name: data-dir
mountPath: "/data"
volumes:
- name: data-dir
emptyDir: {}
</code></pre>
<p>This <code>CronJob</code> will write specific text to a file in an <code>initContainer</code>, and then the "main" container will display its contents. It's worth mentioning that the main container will not start if the <code>initContainer</code> does not succeed.</p>
<ul>
<li><code>$ kubectl logs hello-1234567890-abcde</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>General Kenobi!
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://linchpiner.github.io/k8s-multi-container-pods.html" rel="noreferrer">Linchpiner.github.io: K8S multi container pods</a></em></li>
</ul>
| Dawid Kruk |
<p>How do I set up EFK logging on AKS cluster nodes?</p>
<p>Below are my spec files for EFK logging in AKS clusters.</p>
<pre><code># Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>The setup is working fine, except that no logs are coming into the Elasticsearch cluster from Fluentd, whereas the same spec files work fine inside a minikube cluster.</p>
<p>As for this setup, Kibana is up and able to connect with Elasticsearch, and the same is the case with Fluentd; it's just that the logs are not arriving in Elasticsearch.</p>
<p>What extra configuration is needed to make these config files work with Azure Kubernetes Service (AKS) cluster nodes?</p>
| devops-admin | <p>I had to add the environment variables below for Fluentd.</p>
<p>Reference Link: <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434</a></p>
<pre><code> - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
</code></pre>
<p>Here's the complete spec.</p>
<pre><code># Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods
</code></pre>
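<p>To confirm that logs actually reach Elasticsearch after this change, one option is to port-forward to the service and list the indices (the <code>logstash-*</code> index names assume the Fluentd image defaults):</p>
<pre><code>kubectl port-forward svc/logs-elasticsearch 9200:9200 -n logging &
curl -s 'http://localhost:9200/_cat/indices?v' | grep logstash
</code></pre>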
| devops-admin |
<p>I have kubernetes clusters with prometheus and grafana for monitoring and I am trying to build a dashboard panel that would display the number of pods that have been restarted in the period I am looking at.</p>
<p>At the moment I have this query that fills a vector with 1 if the pod's creation time is in the range (meaning it has been restarted during this period) and -1 otherwise.</p>
<p><code>-sgn((time() - kube_pod_created{cluster="$cluster"}) - $__range_s)</code></p>
<p><a href="https://i.stack.imgur.com/j8BjL.png" rel="nofollow noreferrer">what this looks like</a></p>
<p>Is there a way to count the number of positive values in this vector and display it? Like in this example, just have a box with a red 1 inside.
Or maybe there is a better way to accomplish what I am trying to do.</p>
| psilog | <p>To display the pod restarts, Prometheus provides the metric</p>
<p><code>kube_pod_container_status_restarts_total</code>. This is a counter metric and it records the container restarts.</p>
<p>To calculate the restarts:</p>
<ul>
<li>If you want to see all pods, then:
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace"}[5m])) by(pod)</code></li>
<li>or, if you want a particular pod, use:
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace", pod="My-Pod"}[5m]))</code></li>
<li>or, to show it container-wise, use:
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace", pod="My-Pod"}[5m])) by(container)</code></li>
</ul>
| Muthuraj R |
<p>I'm using the following Airflow version inside my Docker container and I am currently having some issues related to a broken DAG</p>
<pre><code>FROM apache/airflow:2.3.4-python3.9
</code></pre>
<p>I have other DAGs running with the same argument 'request_cpu' that are perfectly functional, so I'm not sure what the issue could be.</p>
<pre><code>Broken DAG: [/home/airflow/airflow/dags/my_project.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 858, in __init__
self.resources = coerce_resources(resources)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: Resources.__init__() got an unexpected keyword argument 'request_cpu'
</code></pre>
<p>This is my current DAG configuration</p>
<pre><code># DAG configuration
DAG_ID = "my_project_id"
DAG_DESCRIPTION = "description"
DAG_IMAGE = image
default_args = {
"owner": "airflow",
"depends_on_past": False,
"max_active_tasks": 1,
"max_active_runs": 1,
"email_on_failure": True,
"email": ["[email protected]"],
"retries": 0,
"email_on_retry": False,
"image_pull_policy": "Always",
}
# Define desired resources.
compute_resources = {
# Cpu: 500m milliCPU is about half cpu, other values, 1, 2, 4... for full cpu allocation
"request_cpu": "500m",
# Memory: Mi for Megabytes or Gi for Gigabytes
"request_memory": "512Mi",
"limit_cpu": "500m",
"limit_memory": "1Gi",
}
with DAG(
DAG_ID,
default_args=default_args,
start_date=datetime(2022, 5, 9),
schedule_interval="0 21 */16 * *", # Every 16 days or twice per month
max_active_runs=1,
max_active_tasks=1,
catchup=False,
description=DAG_DESCRIPTION,
tags=["my tags"],
) as dag:
# AWS credentials
creds = tools.get_config_params(key="AWS-keys")
my_task = KubernetesPodOperator(
namespace="airflow",
image=DAG_IMAGE,
image_pull_secrets=[k8s.V1LocalObjectReference("docker-registry")],
container_resources=compute_resources,
env_vars={
"AWS_ACCESS_KEY_ID": creds["access_key"],
"AWS_SECRET_ACCESS_KEY": creds["secret_access_key"],
"EXECUTION_DATE": "{{ execution_date }}",
},
cmds=["python3", "my_project.py"],
is_delete_operator_pod=True,
in_cluster=False,
name="my-project-name",
task_id="my-task",
config_file=os.path.expanduser("~") + "/.kube/config",
get_logs=True,
resources=compute_resources,
)
</code></pre>
| The Dan | <p>First <code>resources</code> is <a href="https://github.com/apache/airflow/blob/6b75be43171eafc45825d043ef051638aa103ccd/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L222-L223" rel="noreferrer">deprecated</a> so you should use only <code>container_resources</code>.</p>
<p><code>container_resources</code> expects a <code>V1ResourceRequirements</code> object, not a <code>dict</code>. You should do:</p>
<pre><code>from kubernetes.client import models as k8s
compute_resources=k8s.V1ResourceRequirements(
requests={
'memory': '512Mi',
'cpu': '500m'
},
limits={
'memory': '1Gi',
        'cpu': '500m'
}
)
</code></pre>
<p>Then</p>
<pre><code> my_task = KubernetesPodOperator(..., container_resources=compute_resources)
</code></pre>
| Elad Kalif |
<p>When you create a kubernetes service named <code>myservice</code>, you can access that service in your pods with urls like this: <code>http://myservice/media/images/...</code>.</p>
<p>Is it possible to make urls like <code>/media/images/...</code> get resolved to urls like <code>http://myservice/media/images/...</code> in a specific kubernetes pod?</p>
<p>For example suppose we have a pod named "podA". Is it possible for containers in "podA" to send GET requests with urls like <code>/media/images/...</code> instead of urls like <code>http://myservice/media/images/...</code> ?</p>
| HsnVahedi | <p>Answering the following question:</p>
<blockquote>
<p>Is it possible to make urls like /media/images/... get resolved to urls like http://myservice/media/images/... in a specific kubernetes pod?</p>
</blockquote>
<p><strong>In short, no</strong>.</p>
<p>Take a look on following example:</p>
<pre><code>http://awesomeservice/folder/sample.txt
\__/ \_______________/\_______/
| | |
protocol service name path to
resource
</code></pre>
<p>When you try to send a web request for a specific resource (from your Kubernetes <code>Pod</code>) you <strong>need</strong> to have a host part. If you tried like: <code>/media/images/...</code> you would be telling your OS/software/application that you want a resource from a local filesystem.</p>
<p>You can read more on the topic on how the <code>Pods</code> can communicate with each other by following this documentation:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li>
</ul>
<hr />
<p>As a <strong>workaround</strong> to have access to your resources on a path like <code>/media/images/...</code>, you could provision a storage solution that is mounted at your <code>/media</code> directory. There are multiple ways to achieve that, and you would need to refer to the specific solution's documentation (it will be different on-premises and in a cloud-managed Kubernetes cluster). As a starting point/baseline you can look at:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes</a></em></li>
</ul>
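<p>For illustration, mounting such a claim at <code>/media</code> in the consuming <code>Pod</code> could look like this (the image and claim name are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: media-consumer
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: media
          mountPath: /media
  volumes:
    - name: media
      persistentVolumeClaim:
        claimName: media-pvc
</code></pre>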
<hr />
<p>Giving more visibility to the useful comments made under the question:</p>
<blockquote>
<p>Path-only URLs are usually resolved with respect to some base URL and inherit the hostname from that base; how would your "podA" code know what that base URL is? Can you provide more specific code that demonstrates what you need and why you want to omit the hostname part of the URL? – David Maze</p>
</blockquote>
<blockquote>
<p>AFAIK from pure Kubernetes perspective this won't be possible. You would need to have some type of <code>base_url</code> provided to your <code>Pod</code> which is requesting the content. – Dawid Kruk</p>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://en.wikipedia.org/wiki/Uniform_Resource_Identifier" rel="nofollow noreferrer">Wikipedia.org: Uniform resource identifier</a></em></li>
<li><em><a href="https://en.wikipedia.org/wiki/Path_(computing)" rel="nofollow noreferrer">Wikipedia.org. Wiki: Path (computing)</a></em></li>
</ul>
<blockquote>
<p><strong>Side note!</strong></p>
<p>As for:</p>
<blockquote>
<p>Is it possible to make urls like /media/images/... get resolved to urls like http://myservice/media/images/... in a specific kubernetes pod?</p>
</blockquote>
<p>If you were thinking about sending the request to a <strong>specific</strong> <code>Pod</code> you could look on:</p>
<ul>
<li><em><a href="https://istio.io/latest/docs/concepts/traffic-management/" rel="nofollow noreferrer">Istio.io: Latest: Docs: Concepts: Traffic management</a></em></li>
</ul>
</blockquote>
| Dawid Kruk |
<p>I am able to create an EKS cluster but when I try to add nodegroups, I receive a "Create failed" error with details:
"NodeCreationFailure": Instances failed to join the kubernetes cluster</p>
<p>I tried a variety of instance types and larger volume sizes (60 GB) without luck.
Looking at the EC2 instances, I only see the problem below. However, it is difficult to do anything since I'm not directly launching the EC2 instances (the EKS NodeGroup UI wizard is doing that).</p>
<p>How would one move forward given the failure happens even before I can jump into the ec2 machines and "fix" them?</p>
<blockquote>
<p>Amazon Linux 2</p>
<blockquote>
<p>Kernel 4.14.198-152.320.amzn2.x86_64 on an x86_64</p>
<p>ip-187-187-187-175 login: [ 54.474668] cloud-init[3182]: One of the
configured repositories failed (Unknown),
[ 54.475887] cloud-init[3182]: and yum doesn't have enough cached
data to continue. At this point the only
[ 54.478096] cloud-init[3182]: safe thing yum can do is fail. There
are a few ways to work "fix" this:
[ 54.480183] cloud-init[3182]: 1. Contact the upstream for the
repository and get them to fix the problem.
[ 54.483514] cloud-init[3182]: 2. Reconfigure the baseurl/etc. for
the repository, to point to a working
[ 54.485198] cloud-init[3182]: upstream. This is most often useful
if you are using a newer
[ 54.486906] cloud-init[3182]: distribution release than is
supported by the repository (and the
[ 54.488316] cloud-init[3182]: packages for the previous
distribution release still work).
[ 54.489660] cloud-init[3182]: 3. Run the command with the
repository temporarily disabled
[ 54.491045] cloud-init[3182]: yum --disablerepo= ...
[ 54.491285] cloud-init[3182]: 4. Disable the repository
permanently, so yum won't use it by default. Yum
[ 54.493407] cloud-init[3182]: will then just ignore the repository
until you permanently enable it
[ 54.495740] cloud-init[3182]: again or use --enablerepo for
temporary usage:
[ 54.495996] cloud-init[3182]: yum-config-manager --disable </p>
</blockquote>
</blockquote>
| CoderOfTheNight | <p>In my case, the problem was that I was deploying my node group in a private subnet, but this private subnet had no NAT gateway associated, hence no internet access. What I did was:</p>
<ol>
<li><p>Create a NAT gateway</p>
</li>
<li><p>Create a new route table with the following routes (the second one is the internet access route, through the NAT gateway):</p>
</li>
</ol>
<ul>
<li>Destination: VPC-CIDR-block Target: local</li>
<li><strong>Destination: 0.0.0.0/0 Target: NAT-gateway-id</strong></li>
</ul>
<ol start="3">
<li>Associate the private subnet with the route table created in the second step.</li>
</ol>
<p>After that, nodegroups joined the clusters without problem.</p>
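<p>For reference, the same steps via the AWS CLI might look roughly like this (all IDs are placeholders):</p>
<pre><code># allocate an Elastic IP and create the NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-XXXX

# route table for the private subnet with a default route through the NAT gateway
aws ec2 create-route-table --vpc-id vpc-XXXX
aws ec2 create-route --route-table-id rtb-XXXX --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-XXXX
aws ec2 associate-route-table --route-table-id rtb-XXXX --subnet-id subnet-PRIVATE
</code></pre>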
| manavellam |
<p>I am writing a script that runs each minute and looks for newly created pods. The script executes commands within these pods.</p>
<p>I need to add the missing part to my current solution which looks like this:</p>
<pre><code>while true; do
  pods=$(kubectl get pods --field-selector status.phase=='Running' ---------- something should be here ----------)
for pod in ${pods[@]} ; do
actions
done
sleep 60;
done
</code></pre>
| Abdelwahhab | <p>My current solution looks like this. Please feel free to review and enhance.</p>
<pre><code>current_pods=$(kubectl get pods | grep "Running" | awk '{print $1}')
# attach existing pods
for pod in $current_pods; do
do-something
done
# attach newly created pods each 60s
while true; do
new_pods=$(kubectl get pods | grep "Running" | awk '{print $1}')
for pod in $new_pods; do
if [[ ! " ${current_pods[@]} " =~ $pod ]]; then
do-something
fi
done
current_pods=$new_pods
sleep 60;
done
</code></pre>
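<p>A slightly more robust sketch keeps the handled pods in an associative array keyed by the full pod name, which avoids the substring false positives that the regex match above can produce (<code>do-something</code> is still a placeholder):</p>
<pre><code>declare -A seen
while true; do
  for pod in $(kubectl get pods --field-selector=status.phase=Running -o jsonpath='{.items[*].metadata.name}'); do
    if [[ -z "${seen[$pod]}" ]]; then
      seen[$pod]=1
      do-something "$pod"
    fi
  done
  sleep 60
done
</code></pre>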
| Abdelwahhab |
<p>I set up a series of VMs <code>192.168.2.(100,105,101,104)</code> where the Kubernetes master is on <code>100</code> and two workers are on <code>101,104</code>. I also set up Postgres on <code>192.168.2.105</code> and followed <a href="https://stackoverflow.com/questions/43354167/minikube-expose-mysql-running-on-localhost-as-service/43477742#43477742">this tutorial</a>, but it is still unreachable from within the cluster. I tried it in minikube inside a test VM where minikube and Postgres were installed in the same VM, and it worked just fine.</p>
<p>I changed the Postgres config file's listen address from <code>localhost</code> to <code>*</code>, and changed the allowed hosts in pg_hba.conf to <code>0.0.0.0/0</code>.</p>
<p>I installed postgresql-12 and postgresql-client-12 in the VM <code>192.168.2.105:5432</code>; now I have added a headless service to Kubernetes, which is as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 5432
targetPort: 5432
------
apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: 192.168.2.105
ports:
- port: 5432
</code></pre>
<p>In my deployment I am defining this to access the database:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:11.0.0
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "admin"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: 'my-service:5432'
- name: DB_DATABASE
value: postgres
- name: DB_PASSWORD
value: admin
- name: DB_SCHEMA
value: public
- name: DB_USER
value: postgres
- name: DB_VENDOR
value: POSTGRES
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
</code></pre>
<p>Also the VMs are bridged, not on NAT.</p>
<p>What am I doing wrong here?</p>
| Mainak Das | <p>The first thing we have to do is create the headless service with a custom endpoint. The <strong>IP</strong> in my solution is specific to my machine.</p>
<p>Endpoint with service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
ports:
- protocol: TCP
port: 5432
targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
name: postgres-service
subsets:
- addresses:
- ip: 192.168.2.105
ports:
- port: 5432
</code></pre>
<p>As for my particular specs, I haven't defined any ingress or load balancer, so I'll change the service type from <strong>LoadBalancer</strong> to <strong>NodePort</strong> after it's deployed.</p>
<p>Now I deployed Keycloak with the mentioned .yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
- name: https
port: 8443
targetPort: 8443
selector:
app: keycloak
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:11.0.0
env:
- name: KEYCLOAK_USER
value: "admin" # TODO give username for master realm
- name: KEYCLOAK_PASSWORD
value: "admin" # TODO give password for master realm
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: # <Node-IP>:<LoadBalancer-Port/ NodePort>
- name: DB_DATABASE
value: "keycloak" # Database to use
- name: DB_PASSWORD
value: "admin" # Database password
- name: DB_SCHEMA
value: public
- name: DB_USER
value: "postgres" # Database user
- name: DB_VENDOR
value: POSTGRES
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
</code></pre>
<p>After filling in all the required values, Keycloak connects successfully to the Postgres server hosted on a separate machine, away from the Kubernetes master and worker nodes.</p>
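<p>For reference, a quick way to confirm that the Service/Endpoints pair actually reaches the external Postgres VM is a throwaway client pod. This is only a sketch: the service name matches the manifest above, while the image tag, user, and database name are assumptions (psql will prompt for the password):</p>
<pre><code># Verify the Endpoints object points at the external VM
kubectl get endpoints postgres-service -n default

# Try to connect through the service from inside the cluster
kubectl run pg-check --rm -it --image=postgres:12 --restart=Never -- \
  psql -h postgres-service -p 5432 -U postgres -d postgres -c 'SELECT 1;'
</code></pre>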
| Mainak Das |
<p>I'm using this tutorial:
<a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a><br />
If I run it as is, it creates the cluster in the default namespace.<br />
I want to create it in a custom namespace, for example "my-cluster".<br />
When running <code>kubectl create -f elasticsearch.yml</code> with this manifest</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: my-cluster
namespace: my-cluster
spec:
version: 7.10.1
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>I'm getting this error:</p>
<pre><code>Error from server (NotFound): error when creating "elasticsearch.yaml": namespaces "my-cluster" not found
</code></pre>
<p>Can I even use a namespace here?</p>
| user63898 | <p>The namespace needs to exist before you start deploying Elasticsearch. The error is complaining about the absence of the <code>my-cluster</code> namespace.</p>
<p>You can create the namespace with either of the approaches below:</p>
<p>1 - <code>kubectl create namespace my-cluster</code></p>
<p>or</p>
<p>2 - <code>kubectl create -f ns.yaml</code> with the content of <code>ns.yaml</code> as below:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: my-cluster
</code></pre>
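<p>Once the namespace exists, the manifest from the question should apply cleanly. A minimal sequence (assuming the ECK operator from the linked tutorial is already installed, so the <code>Elasticsearch</code> CRD exists) would be:</p>
<pre><code>kubectl create namespace my-cluster
kubectl create -f elasticsearch.yaml
kubectl get elasticsearch -n my-cluster
</code></pre>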
| Syam Sankar |
<p>I'm trying to delete a Kubernetes pod via the <code>go-client</code> library using the following code:</p>
<pre><code>err := ks.clientset.CoreV1().Pods(kubeData.PodNamespace).Delete(context.Background(), kubeData.PodName, metav1.DeleteOptions{})
if err != nil {
log.Fatal(err)
}
</code></pre>
<p>However, I'm receiving an error:</p>
<blockquote>
<p>pods "app-name" is forbidden: User "system:serviceaccount:default:app-name" cannot delete resource "pods" in API group "" in the namespace "default""</p>
</blockquote>
<p>Here is the serviceaccount.yaml:</p>
<pre><code>{{- $sa := print .Release.Name "-" .Values.serviceAccount -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ $sa }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $sa }}
rules:
- apiGroups: ["apps"]
verbs: ["patch", "get", "list"]
resources:
- deployments
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $sa }}
rules:
- apiGroups: ["apps"]
verbs: ["delete", "get", "list"]
resources:
- pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $sa }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ $sa }}
subjects:
- kind: ServiceAccount
name: {{ $sa }}
</code></pre>
<p>It looks like something related to user permissions; however, I'm not sure how to configure it properly.
Thanks.</p>
| Avag Sargsyan | <p>As you can see from the error:</p>
<p><code>pods "app-name" is forbidden: User "system:serviceaccount:default:app-name" cannot delete resource "pods" in API group "" in the namespace "default""</code></p>
<p>The important part is: <strong><code>in API group ""</code></strong></p>
<p>Take a look on your manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $sa }}
rules:
# - apiGroups: ["apps"] # <-- BAD!
- apiGroups: [""] # <-- GOOD!
verbs: ["delete", "get", "list"]
resources:
- pods
</code></pre>
<p>This definition specifies that the <code>Pod</code> resource belongs to the <code>apps</code> API group, which is not correct; the core API group <code>""</code> should be used instead.</p>
<p>You can read more about this here:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Using API: API Groups</a></em></li>
</ul>
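<p>As a quick sanity check after updating the <code>Role</code>, you can ask the API server whether the permission is now granted; the service account name below is taken from the error message in the question:</p>
<pre><code>kubectl auth can-i delete pods \
  --as=system:serviceaccount:default:app-name \
  -n default
</code></pre>
<p>This should print <code>yes</code> once the corrected Role and RoleBinding are in place.</p>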
| Dawid Kruk |
<p>I am new to Kubernetes Helm charts. There is a YAML file named configmap which contains all the configuration related to the application. Since this file contains a lot of data, I was trying to move some of it into new files and access them using the Files object. So I created two different files named
<strong>data1.yaml</strong> and <strong>data2.yaml</strong>. <br>
The <strong>data1.yaml</strong> file has only static data. The <strong>data2.yaml</strong> file, on the other hand, contains dynamic data (it also has variables like <code>$.Values.appUrl</code>).
I am able to read the static file (data1.yaml) into the configmap.yaml file using the Files object. I can also read the data2.yaml file, but since it contains variables, the variable values are not replaced by their actual values; the same variable text is printed into the ConfigMap. So my question is:
is there any way to access a dynamic file (one that contains variables)? <br>
Below is the example data.<br><br>
configmap.yaml file is-></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap
namespace: default
data1: {{ .Files.Get "data1.yaml" | indent 2 }}
data2: {{ .Files.Get "data2.yaml" | indent 2 }}
</code></pre>
<p>data1.yaml file is -></p>
<pre><code>data1:
ui.type:2
ui.color1:red
ui.color2:green
</code></pre>
<p>data2.yaml file is -></p>
<pre><code>system.data.name: "app-name"
system.data.url: {{ $.Values.appUrl }} # variable
system.data.type_one: "app-type-xxx"
system.data.value: "3"
system.interface.properties: |
</code></pre>
<p>Values.yaml file is -></p>
<pre><code>appUrl: "https://app-name.com"
</code></pre>
<p>Output:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap
namespace: default
data1:
ui.type:2
ui.color1:red
ui.color2:green
data2:
system.data.name: "app-name"
system.data.url: {{ $.Values.appUrl }} # here should be "https://app-name.com"
system.data.type_one: "app-type-xxx"
system.data.value: "3"
system.interface.properties: |
</code></pre>
| md samual | <pre><code>{{ (tpl (.Files.Glob "data2.yaml").AsConfig . ) | indent 2 }}
</code></pre>
<p>Using the above syntax, it picks up the actual values of the variables, but it also prints the file name as a key, like below:</p>
<pre><code>data2.yaml: |-
</code></pre>
<p>So I resolved the issue by using the syntax below:</p>
<pre><code>{{ (tpl (.Files.Get "data2.yaml") . ) | indent 2 }}
</code></pre>
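<p>To double-check the rendered output locally, <code>helm template</code> can be used; this is just a sketch and assumes the chart is in the current directory and the template lives at <code>templates/configmap.yaml</code>:</p>
<pre><code>helm template my-release . --show-only templates/configmap.yaml
</code></pre>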
| md samual |