Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>Based on <a href="https://stackoverflow.com/questions/59920413/podman-add-ports-to-expose-to-running-pod">podman-add-ports-to-expose-to-running-pod</a>,
quoting <a href="https://stackoverflow.com/a/60143446/4870357">Dominic P</a>:</p>
<blockquote>
<p>once the pod is created these attributes are assigned to the "infra"
container and cannot be changed. For example, if you create a pod and
then later decide you want to add a container that binds new ports,
Podman will not be able to do this. You would need to recreate the pod
with the additional port bindings before adding the new container.</p>
</blockquote>
<p>I know it's not supported to add a new port to a running pod.</p>
<p>So, what do you suggest for recreating the pod: back up the existing containers in the pod, create a new pod with the additional port, and then restore the containers?</p>
| Zaman | <p>You could generate a Kubernetes pod manifest from your running pod using <code>podman generate kube <podname></code>, edit the resulting file, and then re-create the pod with <code>podman kube play <manifest>.yaml</code>.</p>
<p>For example:</p>
<p>I create a pod and spin up a webserver...</p>
<pre><code>podman pod create example-pod
podman run -d --name web --pod example-pod alpinelinux/darkhttpd
</code></pre>
<p>...only to realize that I forgot to publish port 8080 to the host. So I save the configuration and delete the pod:</p>
<pre><code>podman generate kube example-pod > example-pod.yaml
podman pod rm -f example-pod
</code></pre>
<p>Edit the manifest to add the port configuration:</p>
<pre><code>...
spec:
containers:
- image: docker.io/alpinelinux/darkhttpd:latest
name: web
ports:
- containerPort: 8080
hostPort: 8080
...
</code></pre>
<p>And then re-create the pod:</p>
<pre><code>podman kube play example-pod.yaml
</code></pre>
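<p>Afterwards you can sanity-check that the new binding is in place. A quick check, assuming the port bindings show up in the infra container configuration reported by <code>podman pod inspect</code>:</p>
<pre><code>podman pod inspect example-pod | grep -i hostport
</code></pre>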
| larsks |
<p>We have a fairly large kubernetes deployment on GKE, and we wanted to make our life a little easier by enabling auto-upgrades. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">documentation on the topic</a> tells you how to enable it, but not how it actually <strong>works</strong>.</p>
<p>We enabled the feature on a test cluster, but no nodes were ever upgraded (although the UI kept nagging us that "upgrades are available").</p>
<p>The docs say it would be updated to the "latest stable" version and that it occurs "at regular intervals at the discretion of the GKE team" - neither of which is terribly helpful.</p>
<p>The UI always says: "Next auto-upgrade: Not scheduled"</p>
<p>Has someone used this feature in production and can shed some light on what it'll actually do?</p>
<h1>What I did:</h1>
<ul>
<li>I enabled the feature on the <em>nodepools</em> (<strong>not</strong> the cluster itself)</li>
<li>I set up a maintenance window</li>
<li>Cluster version was <code>1.11.7-gke.3</code></li>
<li>Nodepools had version <code>1.11.5-gke.X</code></li>
<li>The newest available version was <code>1.11.7-gke.6</code></li>
</ul>
<h1>What I expected:</h1>
<ul>
<li>The nodepool would be updated to either <code>1.11.7-gke.3</code> (the default cluster version) or <code>1.11.7-gke.6</code> (the most recent version)</li>
<li>The update would happen in the next maintenance window</li>
<li>The update would otherwise work like a "manual" update</li>
</ul>
<h1>What actually happened:</h1>
<ul>
<li>Nothing</li>
<li>The nodepools remained on <code>1.11.5-gke.X</code> for more than a week</li>
</ul>
<h1>My question</h1>
<ul>
<li>Is the nodepool version supposed to update?</li>
<li>If so, at what time?</li>
<li>If so, to what version?</li>
</ul>
| averell | <p>I'll finally answer this myself. The auto-upgrade <em>does</em> work, though it took several days to a week until the version was upgraded.</p>
<p>There is no indication of the planned upgrade date, or any feedback other than the version updating.</p>
<p>It will upgrade to the current master version of the cluster.</p>
<p><strong>Addition:</strong> It still doesn't work reliably, and there is still no way to debug it when it doesn't. One piece of information I got was that the mechanism does not work if you initially provided a specific version for the node pool. As it is not possible to deduce the inner workings of the auto-upgrades, we had to resort to manually checking the status again.</p>
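<p>For that manual check, something along these lines should work (a sketch with placeholder cluster/pool/zone names; it reads back the master version, the node pool version and the auto-upgrade flag):</p>
<pre><code>gcloud container clusters describe my-cluster --zone us-central1-a --format="value(currentMasterVersion)"
gcloud container node-pools describe my-pool --cluster my-cluster --zone us-central1-a --format="value(version,management.autoUpgrade)"
</code></pre>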
| averell |
<p>I want to enable the security features of ElasticSearch and according to <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-minimal-setup.html" rel="nofollow noreferrer">this</a> tutorial I need to add <code>xpack.security.enabled: true</code> to elasticsearch.yml to do so.</p>
<p>I tried doing this by adding the following command:</p>
<pre><code> command:
- "sh"
- "-c"
- "echo 'xpack.security.enabled: true >> /usr/share/elasticsearch/config/elasticsearch.yml"
</code></pre>
<p>But this put the pod into a CrashLoopBackOff. At first I thought this was because the elasticsearch.yml file did not exist at this point, but when I changed the command to:</p>
<pre><code> command:
- "sh"
- "-c"
- "cat /usr/share/elasticsearch/config/elasticsearch.yml"
</code></pre>
<p>I could see with <code>kubectl logs <pod-name></code> that it does exist and contains the following lines:</p>
<pre><code>cluster.name: "docker-cluster"
network.host: 0.0.0.0
</code></pre>
<p>Strangely, even if I use a very simple command like <code>ls</code> I always get the CrashLoopBackOff.</p>
<p>This is the complete manifest file of the ElasticSearch StatefulSet:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: efk-stack
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
command:
- "sh"
- "-c"
- "cat /usr/share/elasticsearch/config/elasticsearch.yml"
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
</code></pre>
| Jens Voorpyl | <p>If I understand you correctly, your main goal is simply to edit the <code>/usr/share/elasticsearch/config/elasticsearch.yml</code> file and then have Elasticsearch start up as normal?</p>
<p>In that case a ConfigMap and a VolumeMount are your friends.</p>
<p>TL;DR: Create a ConfigMap with the <em>entire</em> contents that you want in <code>elasticsearch.yml</code> (i.e. not just the part you want to add) and mount that as a volume at <code>/usr/share/elasticsearch/config/elasticsearch.yml</code>. This will overwrite the file at startup.</p>
<p>As follows:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
namespace: efk-stack
data:
elasticsearch.yml: |-
foo: bar
baz: foo
xpack.security.enabled: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: efk-stack
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
### added:
volumes:
- name: my-configmap
configMap:
name: my-configmap
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
## removed so that default startup command is used
# command:
# - "sh"
# - "-c"
# - "cat /usr/share/elasticsearch/config/elasticsearch.yml"
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
## added
- name: my-configmap
subPath: elasticsearch.yml
readOnly: true
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
</code></pre>
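<p>Once the pods are up, a quick sanity check (assuming the StatefulSet above) confirms the file was replaced:</p>
<pre><code>kubectl -n efk-stack exec es-cluster-0 -c elasticsearch -- cat /usr/share/elasticsearch/config/elasticsearch.yml
</code></pre>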
| Karl |
<p>Trying to export vault secrets as an environment variable to k8s pod using vault injector. Following vault documentation <a href="https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example" rel="nofollow noreferrer">https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example</a></p>
<p>As mentioned in the example, you need to source the config file inside the container, and this overrides the ENTRYPOINT script/command that you pass in the Dockerfile.</p>
<pre><code> containers:
- name: web
image: alpine:latest
command:
['sh', '-c']
args:
['source /vault/secrets/config && <entrypoint script>']
</code></pre>
<p>In my setup, I don't have a static entrypoint script that I can put in <code>args</code> here; the Dockerfile has its own command/script running as the entrypoint.</p>
<p>I'm trying to find an alternative way to source this Vault config inside the container that doesn't require changing the entrypoint script in the Dockerfile. I'm not sure if Kubernetes provides any way to do this with a post-hook or something similar that runs the entrypoint from the Dockerfile first and then executes other scripts/commands passed in the post-hook.</p>
| Meet101 | <p>You can use the <a href="https://github.com/hashicorp/vault-secrets-operator/" rel="nofollow noreferrer">Vault Secrets Operator</a> to synchronize secrets from Vault to Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a> resources.</p>
<p>Once you've done that, you can then expose those secrets as environment variables using <code>envFrom</code> or <code>valueFrom</code> directives in your deployment manifests, as described <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="nofollow noreferrer">in the documentation</a>.</p>
<p>This method does not require overriding the entrypoint or arguments of your containers.</p>
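<p>As a rough sketch (the Secret name <code>vault-synced-secret</code> is hypothetical; use whatever name the operator writes the synced secret to), the container would reference the synced Secret like this:</p>
<pre><code>containers:
  - name: web
    image: alpine:latest
    envFrom:
      - secretRef:
          name: vault-synced-secret   # Secret kept in sync by the operator
</code></pre>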
<hr />
<p>It looks like Vault Secrets Operator is relatively new and the documentation seems a bit slim. You can achieve similar functionality using the <a href="https://external-secrets.io/" rel="nofollow noreferrer">External Secrets Operator</a>, which has the added advantage that it supports a variety of secret store backends.</p>
| larsks |
<p>I am trying to create a service which follows vertical replication-</p>
<p><a href="https://i.stack.imgur.com/olfNe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/olfNe.png" alt="enter image description here" /></a></p>
<p>In this architecture, requests go to the master node. For that I can use a Kubernetes Ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: master-node
servicePort: http
</code></pre>
<p>Now my requirement is that if the master is down, requests should go to a slave node.
I could achieve that by creating three paths <code>/master</code>, <code>/slave-1</code>, <code>/slave-2</code>. But the constraint is that <strong>the path of the request must remain the same</strong>, so the path must always be <code>/</code>.</p>
<p>How can I create a ingress in such a way that if <code>master-node</code> is down, then all requests should forward to the <code>slave-1-node</code>?</p>
<p>I want to achieve something like below-</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: example.com
http:
paths:
- path: /
priority: 1
backend:
serviceName: master-node
servicePort: http
- host: example.com
http:
paths:
- path: /
priority: 2
backend:
serviceName: slave-1-node
servicePort: http
- host: example.com
http:
paths:
- path: /
priority: 3
backend:
serviceName: slave-2-node
servicePort: http
</code></pre>
| Prateek Agrawal | <p>I'm not sure how to do this using <em>just</em> an ingress resource, but it would be very easy if you were to deploy an haproxy pod in front of your services, so that your architecture looks like this:</p>
<p><a href="https://i.stack.imgur.com/H7HKx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H7HKx.png" alt="enter image description here" /></a></p>
<p>Using an haproxy configuration like this, you would get the behavior you want:</p>
<pre><code>global
log stdout format raw local0
maxconn 4000
user haproxy
group haproxy
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout connect 10s
timeout client 1m
timeout server 1m
frontend example_fe
bind 0.0.0.0:8080
default_backend example_be
backend example_be
option httpchk GET /healthz
server alpha example-alpha:80 check
server beta example-beta:80 check backup
server gamma example-gamma:80 check backup
</code></pre>
<p>This will send all requests to <code>alpha</code> as long as it is running. If <code>alpha</code> is offline, requests will go to <code>beta</code>, and if <code>beta</code> is not running, requests will go to <code>gamma</code>. I found <a href="https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/" rel="nofollow noreferrer">this article</a> useful when looking for information about how to set this up.</p>
<p>You create a Deployment that runs haproxy:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy
name: haproxy
spec:
replicas: 1
selector:
matchLabels:
app: haproxy
template:
metadata:
labels:
app: haproxy
spec:
containers:
- image: docker.io/haproxy:latest
name: haproxy
ports:
- containerPort: 8080
name: http
volumeMounts:
- mountPath: /usr/local/etc/haproxy
name: haproxy-config
volumes:
- configMap:
name: haproxy-config-ddc898c5f5
name: haproxy-config
</code></pre>
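<p>(The hash-suffixed ConfigMap name <code>haproxy-config-ddc898c5f5</code> suggests it was generated with kustomize's <code>configMapGenerator</code>. If you're not using kustomize, a plain ConfigMap should work too - for example, assuming the configuration above is saved locally as <code>haproxy.cfg</code>, something like the command below, with the Deployment's volume then referencing <code>haproxy-config</code>.)</p>
<pre><code>kubectl create configmap haproxy-config --from-file=haproxy.cfg
</code></pre>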
<p>A Service pointing at that Deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: haproxy
name: haproxy
spec:
ports:
- name: http
port: 80
targetPort: http
selector:
app: haproxy
</code></pre>
<p>And then point the Ingress at that Service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
spec:
rules:
- host: example.com
http:
paths:
- backend:
service:
name: haproxy
port:
name: http
path: /
pathType: Prefix
</code></pre>
<p>I've put together a complete configuration <a href="https://github.com/larsks/so-example-73996233" rel="nofollow noreferrer">here</a> if you want to try this out.</p>
| larsks |
<p>Can I list all the namespaces using a Role and not a ClusterRole resource? If yes, can anyone help me with an example?</p>
| Mahesh | <p>Namespaces are cluster-wide objects.</p>
<p>The only way to access them is if there exists a ClusterRoleBinding and a corresponding ClusterRole that gives your principal (user, service account, etc.) permission to list namespaces.</p>
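<p>A minimal sketch of what that looks like (the role name <code>namespace-reader</code> and the <code>default</code> service account in <code>my-namespace</code> are just example names):</p>
<pre><code>kubectl create clusterrole namespace-reader --verb=get,list,watch --resource=namespaces
kubectl create clusterrolebinding namespace-reader --clusterrole=namespace-reader --serviceaccount=my-namespace:default
</code></pre>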
| larsks |
<p>I have this kubeconfig file</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ***
server: ***
name: ***
contexts:
- context:
cluster: ***
user: webhook
name: *****
current-context: *****
kind: Config
preferences: {}
users:
- name: webhook
user:
token: ${MY_APIKEY}
</code></pre>
<p>I am storing it in secret in github that is named <code>DEV_CLUSTER_KUBECONF</code></p>
<p>I am also storing MY_APIKEY in GitHub secrets</p>
<p>Then I have this Actions workflow file whose purpose is to write that content into a file and then populate it with the token coming from GitHub secrets.</p>
<pre><code>name: wfl
on:
push:
env:
DEV_CLUSTER_KUBECONF: ${{ secrets.DEV_CLUSTER_KUBECONF }}
jobs:
j1:
needs: build
runs-on: ....
container:
image: .....
steps:
- name: pull kubeconfig
run: |
mkdir kubeconf
touch kubeconf.conf
echo $DEV_CLUSTER_KUBECONF >> kubeconf/kubeconf.conf
- name: envsub kube.conf
run: |
cat kubeconf/kubeconf.conf | envsubst > populated_kube.conf
env:
MY_APIKEY: ${{ secrets.MY_APIKEY }}
- name: export KUBECONFIG path
run: echo "KUBECONFIG=populated_kube.conf" >> $GITHUB_ENV
- name: kubectl
run: kubectl get po
</code></pre>
<p>This is a simplified version of my workflow. I prefer to keep it in three steps; I want to store the kubeconfig with the API key placeholder separately from the API key in the GitHub secrets.</p>
<p>However, right now it is not connecting to k8s.</p>
<p>I used to keep kubeconf/kubeconf.conf in the repo and, after checkout, run envsubst on that file, and it was working. I am not sure why it is not working now. It seems like <code>kubeconf/kubeconf.conf</code> is not correct, but when I try to print it to debug, it only shows *******.</p>
<p>Any idea how to fix this?</p>
| Amin Ba | <p>You need to properly quote your variables. Consider what happens if I create a multiline environment variable:</p>
<pre><code>MYVAR='this
is
a
test'
</code></pre>
<p>And then try to echo it without using quotes:</p>
<pre><code>$ echo $MYVAR
this is a test
</code></pre>
<p>Compare that to the result when using quotes:</p>
<pre><code>$ echo "$MYVAR"
this
is
a
test
</code></pre>
<hr />
<p>So you need to edit your action to read:</p>
<pre><code> - name: pull kubeconfig
run: |
mkdir kubeconf
echo "$DEV_CLUSTER_KUBECONF" > kubeconf/kubeconf.conf
</code></pre>
<p>(I've removed an unnecessary <code>touch kubeconf.conf</code> here, because that wasn't creating the file in the correct path and wasn't necessary in any case.)</p>
<p>If that doesn't work, the first thing you should do is add a <code>cat kubeconf/kubeconf.conf</code> to your workflow so you can inspect the generated file.</p>
| larsks |
<p>If I have a ConfigMap like so:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: id-config
data:
uuid: "{{ randAlphaNum 32 }}"
</code></pre>
<p>and a StatefulSet specification like so (taken and slightly modified from kubernetes' <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">StatefulSet Basics</a> page):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.k8s.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
env:
- name: UUID
valueFrom:
configMapKeyRef:
name: id-config
key: uuid
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
</code></pre>
<p>Will <code>randAlphaNum</code> only be called once, when the initial deployment occurs, and then the value returned by <code>randAlphaNum</code> is stored in the <code>uuid</code> key and used if a pod restarts? Or will <code>randAlphaNum</code> be called every time a pod is created or restarted, with a different uuid being returned every time? Thanks in advance.</p>
| quantumferret | <p>When you deploy something with Helm you need to differentiate between two distinct things that are happening:</p>
<ol>
<li>Rendering the manifests, which happens locally on the machine where you execute the <code>helm</code> CLI.</li>
<li>Deploying the rendered manifests. I.e. sending the instruction to the Kubernetes API to deploy certain resources.</li>
</ol>
<p>Entries in the manifest files such as <code>{{ randAlphaNum 32 }}</code> are rendered in step 1, and the pre-rendered results are sent to the Kubernetes API. The rendered value will only change if you ask Helm to render and deploy again.</p>
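<p>You can see this for yourself by rendering the chart locally; every render produces a fresh random value (a quick check, assuming your chart lives in <code>./mychart</code>):</p>
<pre><code>helm template ./mychart | grep uuid
helm template ./mychart | grep uuid   # different value on each render
</code></pre>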
| Karl |
<p>I would like permit a Kubernetes pod in namespace <code>my-namespace</code> to access <code>configmap/config</code> in the same namespace. For this purpose I have defined the following <code>role</code> and <code>rolebinding</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: List
items:
- kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: config
namespace: my-namespace
rules:
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["config"]
verbs: ["get"]
- kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: config
namespace: my-namespace
subjects:
- kind: ServiceAccount
name: default
namespace: my-namespace
roleRef:
kind: Role
name: config
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Yet still, the pod runs into the following error:</p>
<pre class="lang-none prettyprint-override"><code>configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\"
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"
</code></pre>
<p>What am I missing? I guess it must be a simple thing, which a second pair of eyes may spot immediately.</p>
<p><strong>UPDATE</strong> Here is a relevant fragment of my client code, which uses <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">go-client</a>:</p>
<pre class="lang-go prettyprint-override"><code>cfg, err := rest.InClusterConfig()
if err != nil {
logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
logger.Fatalf("cannot create Clientset")
}
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)
configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}
</code></pre>
| rookie099 | <p>I don't see anything in particular wrong with your Role or
Rolebinding, and in fact when I deploy them into my environment they
seem to work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:</p>
<ul>
<li><p>I started by creating a namespace <code>my-namespace</code></p>
</li>
<li><p>I have the following in <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
commonLabels:
app: rbactest
resources:
- rbac.yaml
- deployment.yaml
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: config
literals:
- foo=bar
- this=that
</code></pre>
</li>
<li><p>In <code>rbac.yaml</code> I have the Role and RoleBinding from your question (without modification).</p>
</li>
<li><p>In <code>deployment.yaml</code> I have:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: cli
spec:
replicas: 1
template:
spec:
containers:
- name: cli
image: quay.io/openshift/origin-cli
command:
- sleep
- inf
</code></pre>
</li>
</ul>
<p>With this in place, I deploy everything by running:</p>
<pre><code>kubectl apply -k .
</code></pre>
<p>And then once the Pod is up and running, this works:</p>
<pre><code>$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME DATA AGE
config 2 3m50s
</code></pre>
<p>Attempts to access other ConfigMaps will not work, as expected:</p>
<pre><code>$ kubectl exec deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1
</code></pre>
<p>If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.</p>
<hr />
<p>Your Go code looks fine also; I'm able to run this in the "cli" container:</p>
<pre><code>package main
import (
"context"
"fmt"
"log"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
func main() {
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
namespace := "my-namespace"
configMapClient := clientset.CoreV1().ConfigMaps(namespace)
configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
if err != nil {
log.Fatalf("cannot obtain configmap: %v", err)
}
fmt.Printf("%+v\n", configMap)
}
</code></pre>
<p>If I compile the above, <code>kubectl cp</code> it into the container and run it, I get as output:</p>
<pre><code>&ConfigMap{ObjectMeta:{config my-namespace 2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34 +0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
</code></pre>
| larsks |
<p>Let's say I create a PersistentVolume of type hostPath:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: mypv
spec:
storageClassName: normal
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
hostPath:
path: /etc/foo # Path on the host machine
</code></pre>
<p>and I create the corresponding PersistentVolumeClaim, and use it in a Pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: busybox
name: busybox
spec:
containers:
- image: busybox
name: busybox
volumeMounts:
- name: myvolume
mountPath: /var/log # is this relative to the hostPath of the PV?
restartPolicy: Never
volumes:
- name: myvolume
persistentVolumeClaim
claimName: mypvc
status: {}
</code></pre>
<p>Is it expected that the result would be <code>{hostPath}/{mountPath}</code> (e.g. <code>/etc/foo/var/log</code>), or would I have to specifically define <code>mountPath: /etc/foo/var/log</code> to get it there?</p>
| KJ0797 | <p>The <code>mountPath</code> is the mountpoint inside the container. It is an absolute path starting at the <em>container's</em> filesystem root. If <code>mountPath</code> is <code>/var/log</code>, then the volume will be mounted on <code>/var/log</code> inside the container.</p>
<p>If you have a <code>hostPath</code> volume pointing at <code>/etc/foo</code> on the host and <code>mountPath</code> pointing at <code>/var/log</code> in the container, then you will find the contents of <code>/etc/foo</code> available inside <code>/var/log</code> inside the container.</p>
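<p>A quick way to convince yourself (assuming the busybox Pod is given a long-running command such as <code>sleep</code> so it stays up):</p>
<pre><code>kubectl exec busybox -- ls /var/log   # lists the contents of /etc/foo on the host
</code></pre>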
<p>For more details, see <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">the documentation</a>.</p>
| larsks |
<p>Docker provides the following functions to stop and start the same container.</p>
<pre><code>OP46B1:/ # docker stop 18788407a60c
OP46B1:/ # docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
18788407a60c ubuntu:test "/bin/bash" 34 minutes ago Exited (0) 7 seconds ago charming_gagarin
OP46B1:/ # docker start 18788407a60c
</code></pre>
<p>But k3s agent does not provide this function. A container stopped by "k3s crictl stop" cannot be restarted by "k3s crictl start". The following error will appear. How to stop and start the same container at k3s agent?</p>
<pre><code>OP46B1:/data # ./k3s-arm64 crictl stop 5485f899c7bb6
5485f899c7bb6
OP46B1:/data # ./k3s-arm64 crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
5485f899c7bb6 b58be220837f0 3 days ago Exited pod-webapp86 0 92a94e8eec410
OP46B1:/data# ./k3s-arm64 crictl start 5485f899c7bb6
FATA[2020-10-20T00:54:04.520056930Z] Starting the container "5485f899c7bb6" failed: rpc error: code = Unknown desc = failed to set starting state for container "5485f899c7bb6f2d294a3a131b33d8f35c9cf84df73cacb7b8af1ee48a591dcf": container is in CONTAINER_EXITED state
</code></pre>
| Wei Yang | <p>k3s is a distribution of kubernetes. Kubernetes is an abstraction over the container framework (containerd/docker/etc.). As such, you shouldn't try to control the containers directly using <code>k3s crictl</code>, but instead use the pod abstraction provided by kubernetes.</p>
<p><code>k3s kubectl get pods -A</code> will list all the pods that are currently running in the k3s instance.<a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_get/" rel="nofollow noreferrer">1</a><br />
<code>k3s kubectl delete pod -n <namespace> <pod-selector></code> will delete the pod(s) specified, which will stop (and delete) their containers.<a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_delete/" rel="nofollow noreferrer">2</a></p>
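<p>If the pod is managed by a Deployment (or another controller), you can also get a fresh copy without deleting it by hand; a rollout restart recreates its pods. For example, assuming a hypothetical Deployment named <code>webapp86</code>:</p>
<pre><code>k3s kubectl rollout restart deployment/webapp86 -n <namespace>
</code></pre>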
| T0xicCode |
<p>I'm new to k8s and trying to get a cluster on GKE set up. I had it close to working with just services and nginx built into the frontend image, however the routing was not working correctly, and looking online it's clear I should use an ingress.</p>
<p>I'm pulling the ingress from here:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>and have the Ingress resource set up like so:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
nginx.ingress.kubernetes.io/user-regex: "true"
nginx.ingress.kuberenetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
spec:
rules:
- http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-service
port:
number: 3000
- path: /api/?(.*)
pathType: Prefix
backend:
service:
name: server-service
port:
number: 5000
ingressClassName: nginx
</code></pre>
<p>client service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: client-service
spec:
ports:
- port: 3000
protocol: TCP
targetPort: http-port
selector:
app: client
type: ClusterIP
</code></pre>
<p>client deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: jowz/client:v4.2
ports:
- containerPort: 3000
name: http-port
selector:
matchLabels:
app: client
</code></pre>
<p>I have no errors, image pull issues, or anything showing either in the terminal or in the GKE console.</p>
<p>pods:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default client-deployment-9ccf8cf87-27pvv 1/1 Running 0 87m
default mongodb 1/1 Running 0 21h
default server-deployment-df679664f-q9fjw 1/1 Running 0 87m
ingress-nginx ingress-nginx-admission-create-tsp4n 0/1 Completed 0 9m23s
ingress-nginx ingress-nginx-admission-patch-q7spm 0/1 Completed 1 9m23s
ingress-nginx ingress-nginx-controller-86b55bb769-2bqs5 1/1 Running 0 9m24s
kube-system event-exporter-gke-d4b7ff94-4st8l 2/2 Running 0 21h
kube-system fluentbit-gke-xlv7v 2/2 Running 0 21h
kube-system gke-metrics-agent-fhnmq 2/2 Running 0 21h
kube-system konnectivity-agent-697c66b96-bvfdw 1/1 Running 0 21h
kube-system konnectivity-agent-autoscaler-864fff96c4-n9tlp 1/1 Running 0 21h
kube-system kube-dns-autoscaler-758c4689b9-7gzx8 1/1 Running 0 21h
kube-system kube-dns-fc686db9b-hjl4d 4/4 Running 0 21h
kube-system kube-proxy-gke-octodemo-default-pool-20afc590-94qj 1/1 Running 0 21h
kube-system l7-default-backend-9db4bd868-zgx8s 1/1 Running 0 21h
kube-system metrics-server-v0.5.2-66bbcdbffc-bmzp6 2/2 Running 0 21h
kube-system pdcsi-node-fqzzl 2/2 Running 0 21h
</code></pre>
<p>services:</p>
<pre><code>default client-service ClusterIP 10.92.6.30 <none> 3000/TCP 88m
default kubernetes ClusterIP 10.92.0.1 <none> 443/TCP 21h
default mongodb-service ClusterIP 10.92.4.232 <none> 27017/TCP 21h
default server-service ClusterIP 10.92.15.190 <none> 5000/TCP 88m
ingress-nginx ingress-nginx-controller LoadBalancer 10.92.2.242 35.235.98.121 80:30275/TCP,443:30487/TCP 10m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.92.13.144 <none> 443/TCP 10m
kube-system default-http-backend NodePort 10.92.14.152 <none> 80:30200/TCP 21h
kube-system kube-dns ClusterIP 10.92.0.10 <none> 53/UDP,53/TCP 21h
kube-system metrics-server ClusterIP 10.92.11.249 <none> 443/TCP 21h
</code></pre>
<p>Curling gets the same response as the browser:</p>
<pre><code>* Trying 35.235.98.121:80...
* Connected to 35.235.98.121 (35.235.98.121) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.235.98.121
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Tue, 18 Jul 2023 21:05:23 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host 35.235.98.121 left intact
</code></pre>
<p>but ping works fine:</p>
<pre><code>--- 35.235.98.121 ping statistics ---
43 packets transmitted, 43 received, 0% packet loss, time 42063ms
rtt min/avg/max/mdev = 21.119/36.551/206.584/33.645 ms
</code></pre>
<p>If anyone can see where I'm going wrong, please let me know. Been running in circles trying to wrap my head around this</p>
| Jowz | <p>I think the primary problem here is that you've misspelled some of the annotations on your Ingress resource.</p>
<ol>
<li><p>You wrote <code>user</code> where you meant <code>use</code>. Instead of:</p>
<pre><code>nginx.ingress.kubernetes.io/user-regex: "true"
</code></pre>
<p>You need:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
</li>
<li><p>You spelled <code>kubernetes</code> as <code>kuberenetes</code>. Instead of:</p>
<pre><code>nginx.ingress.kuberenetes.io/rewrite-target: /$1
</code></pre>
<p>You need:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1
</code></pre>
</li>
</ol>
<p>With these two changes, things seem to work as expected. You can see my complete test <a href="https://github.com/larsks/so-example-76716570-nginx-ingress" rel="nofollow noreferrer">here</a>.</p>
| larsks |
<p>In my project, we let developers update a repo containing all of the kubernetes manifests. The repo uses kustomize. I've decided to add a validation / lint step to our CI to catch mistakes early.</p>
<p>To do so, I'm trying to run <code>kustomize build</code> on everything in the repo. Where I'm running into trouble is our use of ksops. In this scenario, it's not important to actually decode the secrets. I don't want to install the appropriate key on the CI server or allow it to be pulled. What I'd really like to do is skip all the ksops stuff. I'm looking for something like this (doesn't seems to exist)</p>
<pre><code>kustomize build --ignore-kind=ksops ./apps/myapp/production
</code></pre>
<p>If I don't skip the ksops stuff, I get this:</p>
<blockquote>
<p>trouble decrypting file Error getting data key: 0 successful groups required, got 0Error: failure in plugin configured via /tmp/kust-plugin-config-24824323; exit status 1: exit status 1</p>
</blockquote>
<p>I noticed that someone else thought this was important too. <a href="https://github.com/argyle-engineering/ksops" rel="nofollow noreferrer">They made a patched version of ksops that can handle my scenario.</a> I'm hoping to do this with the unpatched stuff. Reason: because the folks that come after me will wonder what this is all about.</p>
<hr />
<p>Update: For reference, I'm doing this in Docker.</p>
<p>Trying out larsks' solution, here's the code I tried:</p>
<p>Dockerfile</p>
<pre><code>FROM alpine
RUN apk add bash curl git
RUN curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash \
&& mv kustomize /usr/bin/kustomize \
&& kustomize version
ENV XDG_CONFIG_HOME=/root/.config
RUN mkdir -p /root/.config/kustomize/plugin
RUN mkdir -p /root/.config/kustomize/plugin/viaduct.ai/v1/ksops \
&& ln -s /bin/true /root/.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops
ENV KUSTOMIZE_PLUGIN_HOME=/root/.config/kustomize/plugin
WORKDIR /code
COPY . /code
RUN ./validate.sh
</code></pre>
<p>validate.sh</p>
<pre><code>#! /bin/bash
set -e
for i in `find . -name kustomization* -type f | grep -v \/base`; do
d=`dirname $i`
echo "$d"
kustomize build --enable-alpha-plugins "$d"
done
</code></pre>
| 101010 | <p>The solution is to create a dummy filter for processing ksops resources. For example, something like this:</p>
<pre><code>mkdir -p fakeplugins/viaduct.ai/v1/ksops
ln -s /bin/true fakeplugins/viaduct.ai/v1/ksops/ksops
export KUSTOMIZE_PLUGIN_HOME=$PWD/fakeplugins
kustomize build --enable-alpha-plugins
</code></pre>
<p>This will cause <code>kustomize</code> to call <code>/bin/true</code> when it encounters ksops-encrypted resources. You won't have secrets in your output, but it will generate all other resources.</p>
<p>(The above has been tested with kustomize 4.5.5)</p>
<hr />
<p>The reason your code is failing is because you're using a Busybox-based Docker image. Busybox is a multi-call binary; it figures out what applet to run based on the name with which it was called. So while on a normal system, we can run <code>ln -s /bin/true /path/to/ksops</code> and then run <code>/path/to/ksops</code>, this won't work in a Busybox environment: it sees that it's being called as <code>ksops</code> and doesn't know what to do.</p>
<p>Fortunately, that's an easy problem to solve:</p>
<pre><code>FROM alpine
RUN apk add bash curl git
RUN curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash \
&& mv kustomize /usr/bin/kustomize \
&& kustomize version
RUN mkdir -p /root/fakeplugins/viaduct.ai/v1/ksops \
&& printf "#!/bin/sh\nexit 0\n" > /root/fakeplugins/viaduct.ai/v1/ksops/ksops \
&& chmod 755 /root/fakeplugins/viaduct.ai/v1/ksops/ksops
ENV KUSTOMIZE_PLUGIN_HOME=/root/fakeplugins
COPY validate.sh /bin/validate-overlays
WORKDIR /code
</code></pre>
<p>And now, given a layout like this:</p>
<pre><code>.
├── Dockerfile
├── example
│   ├── base
│   │   ├── deployment.yaml
│   │   ├── kustomization.yaml
│   │   └── pvc.yaml
│   └── overlay
│       ├── deployment_patch.yaml
│       ├── kustomization.yaml
│       ├── pg-password.enc.yaml
│       └── secret-generator.yaml
└── validate.sh
</code></pre>
<p>I can run from the top directory:</p>
<pre><code>docker run --rm -v $PWD:/code my-kustomize-image validate-overlays
</code></pre>
<hr />
<p>NB: I've slightly modified <code>validate.sh</code> to do the filtering in <code>find</code> rather than piping the output to <code>grep -v</code> :</p>
<pre><code>#!/bin/bash
set -e
find . -name base -prune -o -name kustomization.yaml -print |
while read -r overlay; do
overlay="${overlay%/*}"
echo "$overlay"
kustomize build --enable-alpha-plugins "$overlay"
done
</code></pre>
| larsks |
<p>I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I <em>do not want</em> it to restart, but it still does.</p>
<p>I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then kill all the pods it creates manually. </p>
<p>This is bad, for obvious reasons. </p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
creationTimestamp: 2018-06-21T14:48:46Z
name: dailytasks
namespace: default
resourceVersion: "20390223"
selfLink: [redacted]
uid: [redacted]
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- kubernetes/daily_tasks.sh
env:
- name: DB_HOST
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
envFrom:
- secretRef:
name: my-secrets
image: [redacted]
imagePullPolicy: IfNotPresent
name: dailytasks
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 14 * * *
successfulJobsHistoryLimit: 3
suspend: true
status:
active:
- apiVersion: batch
kind: Job
name: dailytasks-1533218400
namespace: default
resourceVersion: "20383182"
uid: [redacted]
lastScheduleTime: 2018-08-02T14:00:00Z
</code></pre>
| Doug | <p>It turns out that you have to set a <code>backoffLimit: 0</code> in combination with <code>restartPolicy: Never</code> in combination with <code>concurrencyPolicy: Forbid</code>. </p>
<p><strong>backoffLimit</strong> means the number of times it will retry before it is considered failed. The default is 6.</p>
<p><strong>concurrencyPolicy</strong> set to <code>Forbid</code> means it will run 0 or 1 times, but not more.</p>
<p><strong>restartPolicy</strong> set to <code>Never</code> means it won't restart on failure.</p>
<p>You need to do all 3 of these things, or your cronjob may run more than once.</p>
<pre><code>spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
[ADD THIS -->]backoffLimit: 0
template:
... MORE STUFF ...
</code></pre>
| Doug |
<p>As an IT contractor, I was tasked with fixing an issue in a client's software which made simple use of a third-party library to encrypt or decrypt byte strings. For reasons relating to AWS temporary access tokens, the library required occasional reinitialisation for a fresh access token to be acquired (via AWS EKS) and used.</p>
<p>I came up with a simple solution in which initialization and use of this library was relegated to a child process forked for the purpose, with input and output strings passed each way in hex via a couple of pipes. Then to reinitialize the library the parent could simply kill the child process and fork a new one.</p>
<p>Seems pretty standard stuff, used everywhere in Unix. But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. Is he correct, or is he (as I suspect) being an over-cautious pearl clutcher?</p>
<p>If he is right then what kind of problems can arise, and how can these be avoided?</p>
| John R Ramsden | <blockquote>
<p>But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things.</p>
</blockquote>
<p>There is nothing special about kubernetes with respect to child processes and access rights. It's just Unix processes: a child runs with the same credentials as the parent and can access the same files and other resources as the parent.</p>
<p>The process monitoring question is worth exploring in a little more detail. Typically, we say that containers in general -- not just in Kubernetes, but in docker/podman/etc as well -- should have a single entrypoint. In other words, you don't want to create a single container running multiple services, like a webserver and a database. This is because in a multi-entrypoint container, the failure of a service is hidden from the container management tools, so the container runtime can't destroy and re-create the container in response to the service failure.</p>
<p>As long as your application is able to respond properly to the child process dying unexpectedly -- both by calling <code>wait()</code> on it to clean up the process entry and properly respawning it when necessary -- you're in good shape.</p>
| larsks |
<p>In K8s I'm practising example <em>6.1. A pod with two containers sharing the same volume: fortune-pod.yaml</em> from the book <strong>Kubernetes in Action</strong>. In the volumes concept, my pod contains 2 containers and one of the containers is not running. Please guide me on where I'm going wrong, so I can run the pod successfully.
On checking the logs of the container I'm getting the below error:</p>
<pre><code>Defaulted container "fortune-cont" out of: fortune-cont, web-server
</code></pre>
<p>Whereas in the pod description, the events look like this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned book/vol-1-fd556f5dc-8ggj6 to minikube
Normal Pulled 40m kubelet Container image "nginx:alpine" already present on machine
Normal Created 40m kubelet Created container web-server
Normal Started 40m kubelet Started container web-server
Normal Created 39m (x4 over 40m) kubelet Created container fortune-cont
Normal Started 39m (x4 over 40m) kubelet Started container fortune-cont
Normal Pulled 38m (x5 over 40m) kubelet Container image "xxxx/fortune:v1" already present on machine
Warning BackOff 25s (x188 over 40m) kubelet Back-off restarting failed container
</code></pre>
<p>here is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: vol-1
namespace: book
spec:
replicas: 1
selector:
matchLabels:
name: fortune-vol-1
type: volume
template:
metadata:
labels:
name: fortune-vol-1
type: volume
spec:
containers:
- image: ****/fortune:v1
name: fortune-cont
volumeMounts:
- name: html
mountPath: /var/htdocs
- image: nginx:alpine
name: web-server
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
readOnly: true
ports:
- containerPort: 80
protocol: TCP
volumes:
- name: html
emptyDir: {}
</code></pre>
<p>Here is my pod description for containers.</p>
<pre><code>Containers:
fortune-cont:
Container ID: docker://3959e47a761b670ee826b2824efed09d8f5d6dfd6451c4c9840eebff018a3586
Image: prav33n/fortune:v1
Image ID: docker-pullable://prav33n/fortune@sha256:671257f6387a1ef81a293f8aef27ad7217e4281e30b777a7124b1f6017a330f8
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 24 Nov 2022 02:05:26 +0530
Finished: Thu, 24 Nov 2022 02:05:26 +0530
Ready: False
Restart Count: 17
Environment: <none>
Mounts:
/var/htdocs from html (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
web-server:
Container ID: docker://37d831a2f7e97abadb548a21ecb20b5c784b5b3d6102cf8f939f2c13cdfd08c0
Image: nginx:alpine
Image ID: docker-pullable://nginx@sha256:455c39afebd4d98ef26dd70284aa86e6810b0485af5f4f222b19b89758cabf1e
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 24 Nov 2022 01:02:55 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from html (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
html:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-spdq4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m20s (x281 over 64m) kubelet Back-off restarting failed container
</code></pre>
| Praveen | <p>Your <em>Pod</em> named <code>vol-1</code> has two <em>containers</em>:</p>
<ul>
<li>One named <code>fortune-cont</code></li>
<li>One named <code>web-server</code></li>
</ul>
<p>If you run <code>kubectl logs vol-1</code>, Kubernetes doesn't know which container you're asking about, so it has to pick one, and tells you:</p>
<blockquote>
<p>Defaulted container "fortune-cont" out of: fortune-cont, web-server</p>
</blockquote>
<p>You can select a container explicitly with the <code>-c</code> option:</p>
<pre><code>kubectl logs vol-1 -c web-server
</code></pre>
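<p>The same <code>-c</code> option is also the quickest way to see why the failing container keeps restarting; check its logs (and those of the previously crashed instance) explicitly:</p>
<pre><code>kubectl logs vol-1 -c fortune-cont --previous
</code></pre>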
| larsks |
<p>We want to use <a href="https://hub.tekton.dev/tekton/task/buildpacks" rel="nofollow noreferrer">the official Tekton buildpacks task</a> from Tekton Hub to run our builds using Cloud Native Buildpacks. The <a href="https://buildpacks.io/docs/tools/tekton/" rel="nofollow noreferrer">buildpacks documentation for Tekton</a> tells us to install the <code>buildpacks</code> & <code>git-clone</code> Task from Tekton Hub, create <code>Secret</code>, <code>ServiceAccount</code>, <code>PersistentVolumeClaim</code> and <a href="https://buildpacks.io/docs/tools/tekton/#43-pipeline" rel="nofollow noreferrer">a Tekton <code>Pipeline</code></a>.</p>
<p>As the configuration is parameterized, we don't want to start our Tekton pipelines using a huge kubectl command but instead configure the <code>PipelineRun</code> using a separate <code>pipeline-run.yml</code> YAML file (<a href="https://buildpacks.io/docs/tools/tekton/#5-create--apply-pipelinerun" rel="nofollow noreferrer">as also stated in the docs</a>) containing the references to the <code>ServiceAccount</code>, workspaces, image name and so on:</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: buildpacks-test-pipeline-run
spec:
serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
pipelineRef:
name: buildpacks-test-pipeline
workspaces:
- name: source-workspace
subPath: source
persistentVolumeClaim:
claimName: buildpacks-source-pvc
- name: cache-workspace
subPath: cache
persistentVolumeClaim:
claimName: buildpacks-source-pvc
params:
- name: image
value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image > # This defines the name of output image
</code></pre>
<p>Now running the Tekton pipeline once is no problem using <code>kubectl apply -f pipeline-run.yml</code>. But how can we restart or reuse this YAML-based configuration for all the other pipelines runs?</p>
| jonashackt | <p><a href="https://github.com/tektoncd/cli/" rel="nofollow noreferrer">tkn cli</a> has the switch --use-pipelinerun to the command <code>tkn pipeline start</code>, what this command does is to reuse the params/workspaces from that pipelinerun and create a new one, so effectively "restarting" it.</p>
<p>so to 'restart' the pipelinerun pr1 which belong to the pipeline p1 you would do:</p>
<p><code>tkn pipeline start p1 --use-pipelinerun pr1</code></p>
<p>Maybe we should have a command with an easier name; I kicked off the discussion some time ago, feel free to contribute feedback:</p>
<p><a href="https://github.com/tektoncd/cli/issues/1091" rel="nofollow noreferrer">https://github.com/tektoncd/cli/issues/1091</a></p>
| Chmouel Boudjnah |
<p>I am new to the Kubernetes world. I was going through an interesting concept called a headless service.</p>
<p>I have read about it, I understand it, and I can create a headless service. But I am still not convinced about the use cases - why do we need it? There are already three types of service (ClusterIP, NodePort and LoadBalancer) with their separate use cases.</p>
<p>Could you please tell me what exactly a headless service solves that the other three service types could not?</p>
<p>I have read that headless services are mainly used with stateful applications, like database pods, for example Cassandra, MongoDB etc. But my question is why?</p>
| Rohit | <p>A headless service doesn't provide any sort of proxy or load balancing -- it simply provides a mechanism by which clients can look up the ip address of pods. This means that when they connect to your service, they're connecting <em>directly</em> to the pods; there's no intervening proxy.</p>
<p>Consider a situation in which you have a service that matches three pods; e.g., I have this Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
name: example
spec:
replicas: 3
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- image: docker.io/traefik/whoami:latest
name: whoami
ports:
- containerPort: 80
name: http
</code></pre>
<p>If I'm using a typical <code>ClusterIP</code> type service, like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: example
name: example
spec:
ports:
- name: http
port: 80
targetPort: http
selector:
app: example
</code></pre>
<p>Then when I look up the service in a client pod, I will see the ip address of the service proxy:</p>
<pre><code>/ # host example
example.default.svc.cluster.local has address 10.96.114.63
</code></pre>
<p>However, when using a headless service, like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: example
name: example-headless
spec:
clusterIP: None
ports:
- name: http
port: 80
targetPort: http
selector:
app: example
</code></pre>
<p>I will instead see the addresses of the pods:</p>
<pre><code>/ # host example-headless
example-headless.default.svc.cluster.local has address 10.244.0.25
example-headless.default.svc.cluster.local has address 10.244.0.24
example-headless.default.svc.cluster.local has address 10.244.0.23
</code></pre>
<p>By removing the proxy from the equation, clients are aware of the actual pod ips, which may be important for some applications. This also simplifies the path between clients and the service, which may have performance benefits.</p>
| larsks |
<p>I'm trying to apply the same job history limits to a number of CronJobs using a <a href="https://github.com/kubernetes-sigs/kustomize/blob/572d5841c60b9a4db1a75443b8badb7e8334f727/examples/patchMultipleObjects.md" rel="nofollow noreferrer">patch</a> like the following, named <code>kubeJobHistoryLimit.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
</code></pre>
<p>My <code>kustomization.yml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../base
configMapGenerator:
- name: inductions-config
env: config.properties
patches:
- path: kubeJobHistoryLimit.yml
target:
kind: CronJob
patchesStrategicMerge:
- job_specific_patch_1.yml
- job_specific_patch_2.yml
...
resources:
- secrets-uat.yml
</code></pre>
<p>And at some point in my CI pipeline I have:</p>
<pre><code>kubectl --kubeconfig $kubeconfig apply --force -k ./
</code></pre>
<p>The <code>kubectl</code> version is <code>1.21.9</code>.</p>
<p>The issue is that the job history limit values don't seem to be getting picked up. Is there something wrong w/ the configuration or the version of K8s I'm using?</p>
| Tianxiang Xiong | <p>With kustomize 4.5.2, your patch as written doesn't apply; it fails with:</p>
<pre><code>Error: trouble configuring builtin PatchTransformer with config: `
path: kubeJobHistoryLimit.yml
target:
kind: CronJob
`: unable to parse SM or JSON patch from [apiVersion: batch/v1
kind: CronJob
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
]
</code></pre>
<p>This is because it's missing <code>metadata.name</code>, which is required, even if it's ignored when patching multiple objects. If I modify the patch to look like this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: ignored
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
</code></pre>
<p>It seems to work.</p>
<p>If I have <code>base/cronjob1.yaml</code> that looks like:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob1
spec:
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 5
jobTemplate:
spec:
template:
spec:
containers:
- command:
- sleep
        - "60"
image: docker.io/alpine:latest
name: example
schedule: 30 3 * * *
</code></pre>
<p>Then using the above patch and a <code>overlay/kustomization.yaml</code> like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- path: kubeJobHistoryLimit.yml
target:
kind: CronJob
</code></pre>
<p>I see the following output from <code>kustomize build overlay</code>:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob1
spec:
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- command:
- sleep
        - "60"
image: docker.io/alpine:latest
name: example
schedule: 30 3 * * *
successfulJobsHistoryLimit: 1
</code></pre>
<p>You can see the two attributes have been updated correctly.</p>
| larsks |
<p>I am running <code>kustomize build --enable-helm .</code> and I have the following project structure:</p>
<pre><code>project
- helm-k8s
- values.yml
- Chart.yml
- templates
- base
- project-namespace.yml
- grafana
- grafana-service.yml
- grafana-deployment.yml
- grafana-datasource-config.yml
- prometheus
- prometheus-service.yml
- prometheus-deployment.yml
- prometheus-config.yml
- prometheus-roles.yml
- kustomization.yml
- prod
- kustomization.yml
- test
- kustomization.yml
</code></pre>
<p>I'm trying to build my kustomization file using helm like below:</p>
<pre><code>project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml
</code></pre>
<p>I get an error message like this:</p>
<pre><code>project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm
error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory
</code></pre>
<p>Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here?</p>
<p>EDIT: Here is how my kustomization.yml looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
name: open-electrons-monitoring-kustomization
resources:
# 0. Get the namespaces first
- project-namespace.yml
# 1. Set up monitoring services (prometheus)
#- monitoring/prometheus/prometheus-roles.yml
- prometheus/prometheus-config.yml
- prometheus/prometheus-roles.yml
- prometheus/prometheus-deployment.yml
- prometheus/prometheus-service.yml
# 2. Set up monitoring services (grafana)
- grafana/grafana-datasource-config.yml
- grafana/grafana-deployment.yml
- grafana/grafana-service.yml
</code></pre>
| joesan | <p>I think you may have misunderstood the use of the <code>--enable-helm</code> parameter. It does not allow kustomize to perform helm-style templating on files, so when you write:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.app.namespace }}
labels:
name: {{ .Values.app.namespace }}
</code></pre>
<p>That doesn't do anything useful. It just generates invalid YAML output.</p>
<hr />
<p>The <code>--enable-helm</code> option allows you to explode Helm charts using Kustomize; see <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">here</a> for the documentation, but for example it allows you to process a <code>kustomization.yaml</code> file like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: traefik
repo: https://helm.traefik.io/traefik
includeCRDs: true
releaseName: example
version: 20.8.0
valuesInline:
deployment:
replicas: 3
logs:
access:
enabled: true
</code></pre>
<p>Running <code>kubectl kustomize --enable-helm</code> will cause kustomize to fetch the helm chart and run <code>helm template</code> on it, producing YAML manifests on stdout.</p>
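<p>For example, one workflow is to render everything to a file and then apply it (the output file name is arbitrary):</p>
<pre><code># render the chart(s) plus any other kustomize resources/patches to plain YAML
kubectl kustomize --enable-helm . > rendered.yaml

# review and apply the rendered manifests
kubectl apply -f rendered.yaml
</code></pre>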
| larsks |
<p>How can I parse the JSON to retrieve a field from the output of</p>
<pre><code>kubectl get pods -o json
</code></pre>
<p>From the command line I need to obtain the system generated container name from a google cloud cluster ... Here are the salient bits of json output from above command :
<a href="https://i.stack.imgur.com/ysqWI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ysqWI.png" alt="enter image description here"></a></p>
<p><a href="https://gist.github.com/scottstensland/278ce94dc6873aa54e44" rel="noreferrer">click here to see entire json output</a></p>
<p>So the topmost JSON key is an array, items[], followed by metadata.labels.name, where the search criteria value of that compound key is "web" (see the green marks in the image above). On a match, I then need to retrieve the field</p>
<pre><code>.items[].metadata.name
</code></pre>
<p>which so happens to have value :</p>
<pre><code>web-controller-5e6ij // I need to retrieve this value
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">Here are docs on jsonpath</a></p>
<p>I want to avoid text parsing output of</p>
<pre><code>kubectl get pods
</code></pre>
<p>which is</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mongo-controller-h714w 1/1 Running 0 12m
web-controller-5e6ij 1/1 Running 0 9m
</code></pre>
<p>The following will correctly parse this <code>get pods</code> command, yet I feel it's too fragile:</p>
<pre><code>kubectl get pods | tail -1 | cut -d' ' -f1
</code></pre>
| Scott Stensland | <p>After much battling, this one-liner retrieves the container name:</p>
<pre><code>kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.name=="web")].metadata.name}'
</code></pre>
<p>when this is the known search criteria :</p>
<pre><code>items[].metadata.labels.name == "web"
</code></pre>
<p>and this is the desired field to retrieve </p>
<pre><code>items[].metadata.name : "web-controller-5e6ij"
</code></pre>
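<p>If <code>jq</code> is available, an equivalent pipeline that is arguably easier to read and debug is:</p>
<pre><code>kubectl get pods -o json |
  jq -r '.items[] | select(.metadata.labels.name == "web") | .metadata.name'
</code></pre>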
| Scott Stensland |
<p>I'm configuring Traefik Proxy to run on a GKE cluster to handle proxying to various microservices. I'm doing everything through their CRDs and deployed Traefik to the cluster using a custom deployment. The Traefik dashboard is accessible and working fine, however when I try to setup an IngressRoute for the service itself, it is not accessible and it does not appear in the dashboard. I've tried setting it up with a regular k8s Ingress object and when doing that, it did appear in the dashboard, however I ran into some issues with middleware, and for ease-of-use I'd prefer to go the CRD route. Also, the deployment and service for the microservice seem to be deploying fine, they both appear in the GKE dashboard and are running normally. No ingress is created, however I'm unsure of if a custom CRD IngressRoute is supposed to create one or not.</p>
<p>Some information about the configuration:<br />
I'm using Kustomize to handle overlays and general data<br />
I have a setting through kustomize to apply the namespace <code>users</code> to everything</p>
<p>Below are the config files I'm using, and the CRDs and RBAC are defined by calling</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
</code></pre>
<p><strong>deployment.yml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: users-service
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: users-service
spec:
containers:
- name: users-service
image: ${IMAGE}
imagePullPolicy: IfNotPresent
ports:
- name: web
containerPort: ${HTTP_PORT}
readinessProbe:
httpGet:
path: /ready
port: web
initialDelaySeconds: 10
periodSeconds: 2
envFrom:
- secretRef:
name: users-service-env-secrets
</code></pre>
<p><strong>service.yml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: users-service
spec:
ports:
- name: web
protocol: TCP
port: 80
targetPort: web
selector:
app: users-service
</code></pre>
<p><strong>ingress.yml</strong></p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: users-stripprefix
spec:
stripPrefix:
prefixes:
- /userssrv
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: users-service-ingress
spec:
entryPoints:
- service-port
routes:
- kind: Rule
match: PathPrefix(`/userssrv`)
services:
- name: users-service
namespace: users
port: service-port
middlewares:
- name: users-stripprefix
</code></pre>
<p>If any more information is needed, just lmk. Thanks!</p>
| Zach W | <p>A default Traefik installation on Kubernetes creates two entrypoints:</p>
<ul>
<li><code>web</code> for http access, and</li>
<li><code>websecure</code> for https access</li>
</ul>
<p>But you have in your <code>IngressRoute</code> configuration:</p>
<pre><code>entryPoints:
- service-port
</code></pre>
<p>Unless you have explicitly configured Traefik with an entrypoint named "service-port", this is probably your problem. You want to remove the <code>entryPoints</code> section, or specify something like:</p>
<pre><code>entryPoints:
- web
</code></pre>
<p>If you omit the <code>entryPoints</code> configuration, the service will be available on all entrypoints. If you include explicit entrypoints, then the service will only be available on those specific entrypoints (e.g. with the above configuration, the service would be available via <code>http://</code> and not via <code>https://</code>).</p>
<hr />
<p>Not directly related to your problem, but if you're using Kustomize, consider:</p>
<ul>
<li><p>Drop the <code>app: users-service</code> label from the deployment, the service selector, etc, and instead set that in your <code>kustomization.yaml</code> using the <code>commonLabels</code> directive.</p>
</li>
<li><p>Drop the explicit namespace from the service specification in your IngressRoute and instead use kustomize's namespace transformer to set it (this lets you control the namespace exclusively from your <code>kustomization.yaml</code>).</p>
</li>
</ul>
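<p>For example, a minimal <code>kustomization.yaml</code> applying both of those suggestions might look like this (file names are assumed to match the manifests above):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: users
commonLabels:
  app: users-service
resources:
  - deployment.yml
  - service.yml
  - ingress.yml
</code></pre>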
<p>I've put together a deployable example with all the changes mentioned in this answer <a href="https://github.com/larsks/so-example-74672718/tree/main" rel="nofollow noreferrer">here</a>.</p>
| larsks |
<p>I have a job that runs on deployment of our app. The job runs fine 99.9% of the time but every so often something goes wrong (in the application config) and we need to run commands by hand. Because the job has several initContainers it's not as simple as just running an instance of an application pod and execing into it.</p>
<p>We've considered creating a utility pod as part of the application (and this may be the way to go) but I was wondering if there was a <em>good</em> way to convert a completed job into a pod? I've experimented with getting the pod definition, editing by hand, and then applying; but since it's often urgent when we need to do this and it's quite possible to introduce errors when hand editing, this feels wrong.</p>
<p>I'm sure this can't be an unusual requirement, are there tools, commands, or approaches to this problem that I'm simply ignorant of?</p>
| Mr Morphe | <h2>Option 1: Just re-submit the job</h2>
<p>"Converting a job into a pod" is basically what happens when you submit a Job resource to Kubernetes...so one option is just to delete and re-create the job:</p>
<pre><code>kubectl get job myjob -o json | kubectl replace --force -f-
</code></pre>
<p>Poof, you have a new running pod!</p>
<h2>Option 2: Extract the pod template</h2>
<p>You can use <code>jq</code> to extract <code>.spec.template</code> from the Job and attach the necessary bits to turn it into a Pod manifest:</p>
<pre><code>kubectl get job myjob -o json |
jq '
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name": "example"}
} * .spec.template
' |
kubectl apply -f-
</code></pre>
<p>The above will create a pod named <code>example</code>; change the <code>name</code> attribute if you want to name it something else.</p>
| larsks |
<p>I am trying to use the module <a href="https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html" rel="nofollow noreferrer">community.kubernetes.k8s β Manage Kubernetes (K8s) objects</a> with variables from the role (e.g. role/sampleRole/vars file).</p>
<p>I am failing when it comes to the integer point e.g.:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: sample
community.kubernetes.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ name }}"
namespace: "{{ namespace }}"
labels:
app: "{{ app }}"
spec:
replicas: 2
selector:
matchLabels:
app: "{{ app }}"
template:
metadata:
labels:
app: "{{ app }}"
spec:
containers:
- name: "{{ name }}"
image: "{{ image }}"
ports:
- containerPort: {{ containerPort }}
</code></pre>
<p>When I deploy with this format it obviously fails, as it cannot parse the &quot;reference&quot; to the var.</p>
<p>Sample of error:</p>
<pre class="lang-sh prettyprint-override"><code>ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
found unacceptable key (unhashable type: 'AnsibleMapping')
The error appears to be in 'deploy.yml': line <some line>, column <some column>, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
ports:
- containerPort: {{ containerPort }}
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
</code></pre>
<p>When I use quotes on the variable e.g. <code>- containerPort: "{{ containerPort }}"</code> then I get the following error (part of it):</p>
<pre class="lang-sh prettyprint-override"><code>v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Ports: []v1.ContainerPort: v1.ContainerPort.ContainerPort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|nerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"d|..., bigger context ...|\\\\\",\\\\\"name\\\\\":\\\\\"samplegreen\\\\\",\\\\\"ports\\\\\":[{\\\\\"containerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"dnsPolicy\\\\\":\\\\\"ClusterFirst\\\\\",\\\\\"restartPolicy\\\\\"|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
</code></pre>
<p>I tried to cast the string to int by using <code>- containerPort: &quot;{{ containerPort | int }}&quot;</code> but it did not work. The problem seems to come from the quotes, regardless of how I define the var in my vars file, e.g. <code>containerPort: 80</code> or <code>containerPort: &quot;80&quot;</code>.</p>
<p>I found a similar question on the forum <a href="https://stackoverflow.com/questions/55821144/ansible-k8s-and-variables">Ansible, k8s and variables</a> but the user seems not to have the same problems that I am having.</p>
<p>I am running with the latest version of the module:</p>
<pre class="lang-sh prettyprint-override"><code>$ python3 -m pip show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: ruamel.yaml, python-string-utils, jinja2, six, kubernetes
</code></pre>
<p>Is there any workaround this problem or is it a bug?</p>
<p><strong>Update (08-01-2020):</strong> The problem is fixed on version 0.17.0.</p>
<pre class="lang-sh prettyprint-override"><code>$ python3 -m pip show k8s
Name: k8s
Version: 0.17.0
Summary: Python client library for the Kubernetes API
Home-page: https://github.com/fiaas/k8s
Author: FiaaS developers
Author-email: [email protected]
License: Apache License
Location: /usr/local/lib/python3.8/dist-packages
Requires: requests, pyrfc3339, six, cachetools
</code></pre>
| Thanos | <p>You could try the following as a workaround; in this example, we're creating a text template, and then using the <code>from_yaml</code> filter to transform this into our desired data structure:</p>
<pre><code>- name: sample
community.kubernetes.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ name }}"
namespace: "{{ namespace }}"
labels:
app: "{{ app }}"
spec: "{{ spec|from_yaml }}"
vars:
spec: |
replicas: 2
selector:
matchLabels:
app: "{{ app }}"
template:
metadata:
labels:
app: "{{ app }}"
spec:
containers:
- name: "{{ name }}"
image: "{{ image }}"
ports:
- containerPort: {{ containerPort }}
</code></pre>
| larsks |
<p>Basically, my kubeconfig file has:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: <OAM ip address> this is what I want
(...)
</code></pre>
<p>I want to get the server address.
While searching around previously, I found this solution:</p>
<pre><code>config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
if err != nil {
panic(err)
}
nodeip := []corev1.NodeAddress{}
for i := 0; i < len(nodes.Items); i++ {
nodeip = nodes.Items[i].Status.Addresses
fmt.Println(nodeip[0].Address)
}
fmt.Println(nodes.Items[0].Status.Addresses)
</code></pre>
<p>But it gives me the Internal IP, not the OAM server IP (which is inside the Kubernetes config file)</p>
| digolira2 | <p>If you want the server address from the <code>kubeconfig</code> file, just read it from your <code>config</code> variable:</p>
<pre><code>package main
import (
"flag"
"fmt"
"path/filepath"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
fmt.Printf("server: %s\n", config.Host)
}
</code></pre>
<hr />
<p>If you're curious what other fields are available on the <code>rest.Config</code> object, a quick solution is to print out the <code>config</code> variable using the <code>%+v</code> format specifier:</p>
<pre><code>fmt.Printf("%+v\n", config)
</code></pre>
<p>For more details, look at the <a href="https://pkg.go.dev/k8s.io/client-go/rest#Config" rel="nofollow noreferrer">reference documentation</a>.</p>
| larsks |
<p>Hi, I am trying to add the built-in OpenShift (v4.8) Prometheus data source to a local Grafana server. I have configured basic auth with a username and password, and for now I have also enabled skip TLS verify. Still, I'm getting this error:</p>
<p><a href="https://i.stack.imgur.com/xvKu8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xvKu8.png" alt="error" /></a></p>
<p>Prometheus URL = <code>https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com</code></p>
<p>this is the grafana log</p>
<pre><code>
logger=tsdb.prometheus t=2022-04-12T17:35:23.47+0530 lvl=eror msg="Instant query failed" query=1+1 err="client_error: client error: 403"
logger=context t=2022-04-12T17:35:23.47+0530 lvl=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr=10.100.95.27 time_ms=36 size=65 referer=https://grafana.xxxx.xxxx.com/datasources/edit/6TjZwT87k
</code></pre>
| Ashutosh Patole | <p>You cannot authenticate to the OpenShift prometheus instance using basic authentication. You need to authenticate using a bearer token, e.g. one obtained from <code>oc whoami -t</code>:</p>
<pre><code>curl -H "Authorization: Bearer $(oc whoami -t)" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
</code></pre>
<p>Or from a <code>ServiceAccount</code> with appropriate privileges:</p>
<pre><code>secret=$(oc -n openshift-monitoring get sa prometheus-k8s -o jsonpath='{.secrets[1].name}')
token=$(oc -n openshift-monitoring get secret $secret -o jsonpath='{.data.token}' | base64 -d)
curl -H "Authorization: Bearer $token" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
</code></pre>
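<p>On the Grafana side, instead of basic auth you can send that token as a custom HTTP header. For a local Grafana server, a datasource provisioning file along these lines should work (values are placeholders; <code>httpHeaderName1</code>/<code>httpHeaderValue1</code> is Grafana's convention for custom headers):</p>
<pre><code>apiVersion: 1
datasources:
  - name: OpenShift Prometheus
    type: prometheus
    access: proxy
    url: https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
    jsonData:
      httpHeaderName1: Authorization
      tlsSkipVerify: true
    secureJsonData:
      httpHeaderValue1: "Bearer <token>"
</code></pre>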
| larsks |
<p>I'm having a hard time looping over array values in values.yaml. The error seems to be some kind of syntax/YAML structure issue, but the message is so cryptic I have no idea what the problem is.
My values.yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>agent:
namespace: "" <values are subject to change>
enabled: true
admin_group:
- "TEST_GROUP_1"
- "TEST_GROUP_2"
</code></pre>
<p>my template are:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.agent.enabled }}
kind: Rolebinding
apiVersion: rbac.authorization.k8s.io/v1
subjects:
{{- range .Values.agent.admin_group }}
- kind: group
apiGroup: rbac. authorization.k8s.io
name: {{ .Values.agent.admin_group }}
{{- end}}
roleRef:
apigroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
{{- end -}}
</code></pre>
<p>I would also like to create multiple groups based on the array values of admin_group:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.agent.enabled }}
{{- range .Values.agent.admin_group }}
kind: Sync
apiVersion: group.xxx.com/v1alpha1
metadata:
name: group-sync-{{ .Values.agent.namespace }}
namespace: {{ .Values.agent.namespace }}
spec:
domain: external
group: {{ .Values.agent.admin_group }}
{{- end }}
{{- end -}}
</code></pre>
<p>Really appreciate the help. Happy Holidays!!!</p>
| Ricky18 | <p>Your problem is with the <em>body</em> of your <em>range</em> loops. When you write this:</p>
<pre><code>{{- range .Values.agent.admin_group }}
...
{{- end}}
</code></pre>
<p>Then you're redefining the meaning of <code>.</code>. Within the loop, <code>.</code> refers to each successive value produced by the <code>range</code> operator. When you write <code>.Values.agent.admin_group</code> <em>inside</em> the loop, you're getting an error because the list items don't have a <code>.Values</code> field (because they're strings). You want to write:</p>
<pre><code>{{- range .Values.agent.admin_group }}
- kind: group
    apiGroup: rbac.authorization.k8s.io
name: {{ . | quote }}
{{- end}}
</code></pre>
<p>And similarly in your <code>Sync</code> template.</p>
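<p>As a sketch, the <code>Sync</code> loop could look like this; note the <code>---</code> separator so each iteration emits a separate document, <code>$</code> to reach the root context inside the loop, and some name mangling (my assumption) so that <code>metadata.name</code> stays unique and valid per group:</p>
<pre><code>{{- if .Values.agent.enabled }}
{{- range .Values.agent.admin_group }}
---
kind: Sync
apiVersion: group.xxx.com/v1alpha1
metadata:
  name: group-sync-{{ $.Values.agent.namespace }}-{{ . | lower | replace "_" "-" }}
  namespace: {{ $.Values.agent.namespace }}
spec:
  domain: external
  group: {{ . | quote }}
{{- end }}
{{- end -}}
</code></pre>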
<p>You can find the relevant documentation <a href="https://helm.sh/docs/chart_template_guide/control_structures/#looping-with-the-range-action" rel="nofollow noreferrer">here</a> which includes a number of examples (both on that page and on the subsequence page about variables).</p>
| larsks |
<p>I want to add a resource limit and request using Kustomize if and only if it's not already configured. Problem is that the deployment is in fact a list of deployments, so I cannot use default values:</p>
<p>values.yaml</p>
<pre><code>myDeployments:
- name: deployment1
- name: deployment2
resources:
limits:
cpu: 150
memory: 200
</code></pre>
<p>kustomize.yaml</p>
<pre><code>- target:
kind: Deployment
patch: |-
- op: add
path: "/spec/template/spec/containers/0/resources"
value:
limits:
cpu: 300
memory: 400
</code></pre>
<p>The problem here is that it replaces both deployments' resources, ignoring the resources defined in values.yaml.</p>
| theplayer777 | <p>You can't make Kustomize conditionally apply a patch based on whether or not the resource limits already exist. You <em>could</em> use labels to identify deployments that should receive the default resource limits. Note that the patch target's <code>labelSelector</code> matches against the resource's own <code>metadata.labels</code>, so the label belongs on the Deployment itself, e.g. given something like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    example.com/default_limits: "true"
spec:
  replicas: 1
  template:
    [...]
</code></pre>
<p>You could do something like this in your <code>kustomization.yaml</code>:</p>
<pre><code>- target:
kind: Deployment
labelSelector: example.com/default_limits=true
patch: |-
- op: add
path: "/spec/template/spec/containers/0/resources"
value:
limits:
cpu: 300
memory: 400
</code></pre>
<hr />
<p>However, you could also simply set a default resource limits in your target namespace. See "<a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">Configure Default CPU Requests and Limits for a Namespace</a>" for details. You would create a LimitRange resource in your namespace:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
spec:
limits:
  - type: Container
default:
cpu: 150
memory: 200
</code></pre>
<p>This would be applied to any containers that don't declare their own resource limits, which I think is the behavior you're looking for.</p>
| larsks |
<p>I am attempting to install <code>istio</code> in my DO K8s cluster.</p>
<p>I have created a null resource to download <code>istio</code> and run the helm charts using this example - <a href="https://mohsensy.github.io/sysadmin/2021/04/09/install-istio-with-terraform.html" rel="nofollow noreferrer">https://mohsensy.github.io/sysadmin/2021/04/09/install-istio-with-terraform.html</a></p>
<p>TF looks like -</p>
<pre><code>resource "kubernetes_namespace" "istio_system" {
metadata {
name = "istio-system"
}
}
resource "null_resource" "istio" {
provisioner "local-exec" {
command = <<EOF
set -xe
cd ${path.root}
rm -rf ./istio-1.9.2 || true
curl -sL https://istio.io/downloadIstio | ISTIO_VERSION=1.9.2 sh -
rm -rf ./istio || true
mv ./istio-1.9.2 istio
EOF
}
triggers = {
build_number = timestamp()
}
}
resource "helm_release" "istio_base" {
name = "istio-base"
chart = "istio/manifests/charts/base"
timeout = 120
cleanup_on_fail = true
force_update = true
namespace = "istio-system"
depends_on = [
digitalocean_kubernetes_cluster.k8s_cluster,
kubernetes_namespace.istio_system,
null_resource.istio
]
}
</code></pre>
<p>I can see the <code>istio</code> charts are downloaded with the CRDs.</p>
<pre><code>β Error: failed to install CRD crds/crd-all.gen.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
β
β with helm_release.istio_base,
β on istio.tf line 32, in resource "helm_release" "istio_base":
β 32: resource "helm_release" "istio_base" {
</code></pre>
<p>I need help understanding what <code>unable to recognize ""</code> means here!</p>
<p>I am looking for a resolution with some explanation.</p>
| cs1193 | <p>The error is trying to help you out:</p>
<pre><code>unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
</code></pre>
<p>Take a look at the API resources available in your Kubernetes enviornment:</p>
<pre><code>$ kubectl api-resources | grep CustomResourceDefinition
</code></pre>
<p>You will probably see something like:</p>
<pre><code>customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
</code></pre>
<p>Note the API version there: it's <code>apiextensions.k8s.io/v1</code>, not <code>/v1beta1</code>. Your manifest was built for an older version of Kubernetes. Chances are you can just change the <code>apiVersion</code> in the manifest to the correct value and it will work.</p>
| larsks |
<p>I used the following YAML to create a Postgres deployment in my Kubernetes cluster.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: database-secret
namespace: todo-app
data:
# todoappdb
db_name: dG9kb2FwcGRiCg==
# todo_db_user
username: dG9kb19kYl91c2VyCg==
# password
password: cGFzc3dvcmQK
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: database
namespace: todo-app
labels:
app: database
spec:
replicas: 1
selector:
matchLabels:
app: database
template:
metadata:
labels:
app: database
spec:
containers:
- name: database
image: postgres:11
ports:
- containerPort: 5432
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: database-secret
key: password
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: database-secret
key: username
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: database-secret
key: db_name
---
apiVersion: v1
kind: Service
metadata:
name: database
namespace: todo-app
labels:
app: database
spec:
type: NodePort
selector:
app: database
ports:
- port: 5432
</code></pre>
<p>When I try to run psql in the pod itself using the following command.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec -it database-5764d75d58-msf7h -n todo-app -- psql -U todo_db_user -d todoappdb
</code></pre>
<p>I get the following error.</p>
<pre class="lang-sh prettyprint-override"><code>psql: FATAL: role "todo_db_user" does not exist
</code></pre>
<p>Here are the logs of the pod.</p>
<pre><code>The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data/pgdata -l logfile start
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-01-15 12:46:26.009 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-15 12:46:26.015 UTC [50] LOG: database system was shut down at 2022-01-15 12:46:25 UTC
2022-01-15 12:46:26.017 UTC [49] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-01-15 12:46:26.369 UTC [49] LOG: received fast shutdown request
.2022-01-15 12:46:26.369 UTC [49] LOG: aborting any active transactions
2022-01-15 12:46:26.370 UTC [49] LOG: background worker "logical replication launcher" (PID 56) exited with exit code 1
2022-01-15 12:46:26.371 UTC [51] LOG: shutting down
2022-01-15 12:46:26.376 UTC [49] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-01-15 12:46:26.482 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-01-15 12:46:26.482 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-01-15 12:46:26.483 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-15 12:46:26.489 UTC [77] LOG: database system was shut down at 2022-01-15 12:46:26 UTC
2022-01-15 12:46:26.492 UTC [1] LOG: database system is ready to accept connections
</code></pre>
<p>Is there something wrong with the config?</p>
<p>When I don't use POSTGRES_USER env var, it works using role <code>postgres</code>. Also, with the current config I tried to use psql with the <code>postgres</code> role but that doesn't work either.</p>
| Nishant Mittal | <p>You have an error in your <code>Secret</code>. If you base64-decode these values:</p>
<pre><code>data:
# todoappdb
db_name: dG9kb2FwcGRiCg==
# todo_db_user
username: dG9kb19kYl91c2VyCg==
# password
password: cGFzc3dvcmQK
</code></pre>
<p>You will find that they all include a terminal <code>\n</code> character:</p>
<pre><code>$ kubectl get secret database-secret -o json > secret.json
$ jq '.data.username|@base64d' secret.json
"todo_db_user\n"
$ jq '.data.password|@base64d' secret.json
"password\n"
$ jq '.data.db_name|@base64d' secret.json
"todoappdb\n"
</code></pre>
<p>I suspect this is because you generate the values by running something
like:</p>
<pre><code>$ echo password | base64
</code></pre>
<p>But of course, the <code>echo</code> command emits a trailing newline (<code>\n</code>).</p>
<p>There are two ways of solving this:</p>
<ol>
<li><p>Use <code>stringData</code> instead of <code>data</code> in your <code>Secret</code> so you can just
write the unencoded values:</p>
<pre><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: database-secret
stringData:
db_name: todoappdb
username: todo_db_user
password: password
</code></pre>
</li>
<li><p>Instruct <code>echo</code> to not emit a trailing newline:</p>
<pre><code>$ echo -n todo_db_user | base64
</code></pre>
<p>(Or use something like <code>printf</code> which doesn't emit a newline by
default).</p>
</li>
</ol>
<p>I would opt for the first option (using <code>stringData</code>) because it's much simpler.</p>
| larsks |
<p>I am containerizing Spring Boot applications on Kubernetes and I want to have a different application property file for each replica of a pod, since each replica needs a different config.</p>
<p>Any help on above would be appreciated.</p>
| user3132096 | <p>They're not really replicas if you want a unique configuration for each pod. I think you may be looking for a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer"><code>StatefulSet</code></a>. Quoting from the docs:</p>
<blockquote>
<p>Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.</p>
</blockquote>
<p>For example, given a StatefulSet like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: example
spec:
selector:
matchLabels:
app: example
serviceName: "example"
replicas: 3
template:
metadata:
labels:
app: example
spec:
containers:
- name: nginx
image: docker.io/nginxinc/nginx-unprivileged:mainline
ports:
- containerPort: 80
name: http
</code></pre>
<p>I end up with:</p>
<pre><code>$ kubectl get pod
NAME READY STATUS RESTARTS AGE
example-0 1/1 Running 0 34s
example-1 1/1 Running 0 31s
example-2 1/1 Running 0 28s
</code></pre>
<p>In each pod, I can look at the value of <code>$HOSTNAME</code> to find my unique name, and I could use that to extract appropriate configuration from a directory path/structured file/etc.</p>
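<p>Since the question mentions Spring Boot, a rough sketch of an entrypoint script that derives the replica ordinal from <code>$HOSTNAME</code> and points Spring at a matching properties file might look like this (the <code>/config</code> layout and file naming are assumptions):</p>
<pre><code>#!/bin/sh
# $HOSTNAME is e.g. "example-0", "example-1", ...
ORDINAL="${HOSTNAME##*-}"

# pick the per-replica properties file (layout assumed)
CONFIG="/config/application-${ORDINAL}.properties"

exec java -jar /app/app.jar --spring.config.additional-location="file:${CONFIG}"
</code></pre>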
| larsks |
<p>I'm using Traefik 2.7.0 on an AKS Kubernetes Cluster 1.22.6.
Currently, everything routes to the same service:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`)
kind: Rule
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-my-domain-com-cert
</code></pre>
<p>I'm currently in the process of externalizing an API resource from this service to a dedicated new service ("/users") because there will be other services in the future that will need the same functionality.</p>
<p>What I'm trying (and failing) to do, is to route calls to "/users" to the new service:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`) && Path(`/users`)
kind: Rule
services:
- name: users-api
namespace: namespace2
port: 80
- match: Host(`api.my-domain.com`)
kind: Rule
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-baywa-lusy-com-cert
</code></pre>
<p>I tried <code>Path(..)</code> and <code>PathPrefix(..)</code>. No success. Everything is still routed to the old service. The new service has slightly different output. So I can tell with certainty that it's still routed to the old service.</p>
<p>Adding the priority manually didn't help either:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`) && Path(`/users`)
kind: Rule
priority: 2000
services:
- name: users-api
namespace: namespace2
port: 80
- match: Host(`api.my-domain.com`)
kind: Rule
priority: 1000
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-baywa-lusy-com-cert
</code></pre>
<p>Am I Missing something here? Any help is appreciated!</p>
<p>Thanks,</p>
<p>best regards,</p>
<p>Pascal</p>
| Pascal Paulis | <p>You can only expose services in the same namespace as your IngressRoute resource. If you watch the logs of your Traefik pod when you deploy your IngressRoute, you should see something like:</p>
<blockquote>
<p>time="2023-01-26T13:57:17Z" level=error msg="service namespace2/users-api not in the parent resource namespace namespace1" providerName=kubernetescrd ingress=namespace1 namespace=namespace1</p>
</blockquote>
<p>To do what you want, you need to create two separate IngressRoute resources, one in <code>namespace1</code> and one in <code>namespace2</code>.</p>
<p>In <code>namespace1</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
labels:
app: old-api
name: old-api
namespace: namespace1
spec:
entryPoints:
- web
routes:
- kind: Rule
priority: 1000
match: Host(`api.my-domain.com`)
services:
- name: old-api
port: 80
</code></pre>
<p>In <code>namespace2</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
labels:
app: new-api
name: new-api
namespace: namespace2
spec:
entryPoints:
- web
routes:
- kind: Rule
priority: 2000
match: Host(`api.my-domain.com`) && PathPrefix(`/users`)
services:
- name: new-api
port: 80
</code></pre>
<p>You can find all the files I used to test this configuration <a href="https://github.com/larsks/so-example-75218872-traefik-ingress" rel="nofollow noreferrer">here</a>.</p>
<p>I don't know if the explicit priorities are necessary or not; it worked for me without them but maybe I was just lucky so I left them there. I would generally assume that a "more specific route" takes precedence over a "less specific route", but I don't know if that's actually true.</p>
| larsks |
<p>I'm trying to use Kustomize, but the rendered result contains unwanted placeholder lines.</p>
<p>I have <code>base/deployment.yaml</code> like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: IMAGE_PLACEHOLDER
name: CONTAINER_NAME_PLACEHOLDER
</code></pre>
<p>And also <code>base/kustomization.yaml</code>:</p>
<pre><code>resources:
- deployment.yaml
</code></pre>
<p>And also overlays/dev/kustomization.yaml:</p>
<pre><code>resources:
- ../../base
commonLabels:
app: myname
patchesStrategicMerge:
- change_container_name.yaml
</code></pre>
<p>And <code>overlays/dev/change_container_name.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
template:
spec:
containers:
- name: myname
image: registry.mycompany.ru/project/app:VERSION
</code></pre>
<p>And as a result of running <code>kubectl kustomize ./overlays/dev</code> I see in my console something like this:</p>
<pre><code>...
containers:
- image: registry.mycompany.ru/project/app:VERSION
name: myname
image: IMAGE_PLACEHOLDER #--- unwanted line
name: CONTAINER_NAME_PLACEHOLDER #--- unwanted line
ports:
- containerPort: 80
</code></pre>
<p>How to fix this problem?</p>
| Victoria Agafonova | <p>This isn't going to work the way you want: because <code>containers:</code> is a list, Kustomize uses the container name to identify which element you want to modify. Because the container name in your patch doesn't match an existing container name, you are <em>adding a new container</em> to the deployment. That's why you end up with:</p>
<pre><code> containers:
- image: registry.mycompany.ru/project/app:VERSION
name: myname
- image: IMAGE_PLACEHOLDER
name: CONTAINER_NAME_PLACEHOLDER
</code></pre>
<p>I'm not sure what your motivation is here, but if you really want to modify the name of an existing container you can do that using a jsonpatch patch instead of a strategic merge patch. The patch might look like this:</p>
<pre><code>- op: replace
path: /spec/template/spec/containers/0/name
value: myname
</code></pre>
<p>And I would use it in a kustomization file like this:</p>
<pre><code>resources:
- ../../base
commonLabels:
app: myname
patches:
- path: change_container_name.yaml
target:
kind: Deployment
name: app
</code></pre>
<p>Given your examples, this would produce the following output:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myname
name: app
spec:
replicas: 1
selector:
matchLabels:
app: myname
template:
metadata:
labels:
app: myname
spec:
containers:
- image: IMAGE_PLACEHOLDER
name: myname
</code></pre>
<p>You could do the same thing with the image name, but you could <em>also</em> use an <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#images-transformer" rel="nofollow noreferrer">image transformer</a>:</p>
<pre><code>resources:
- ../../base
commonLabels:
app: myname
patches:
- path: change_container_name.yaml
target:
kind: Deployment
name: app
images:
- name: IMAGE_PLACEHOLDER
newName: registry.mycompany.ru/project/app
newTag: VERSION
</code></pre>
<p>Which gets us:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myname
name: app
spec:
replicas: 1
selector:
matchLabels:
app: myname
template:
metadata:
labels:
app: myname
spec:
containers:
- image: registry.mycompany.ru/project/app:VERSION
name: myname
</code></pre>
| larsks |
<p>I'm trying to add environment variable injection based on <code>envFrom</code>.</p>
<p>A simplified structure looks something like this:</p>
<pre><code>βββ base
β   βββ backend
β   β   βββ backend.properties
β   β   βββ app1
β   β       βββ app1_backend.properties
β   β       βββ deployment.yaml
β   β       βββ ingress.yaml
β   β       βββ kustomization.yaml
β   βββ common.properties
β   βββ frontend
β   β   βββ app1
β   β   β   βββ app1_frontend.properties
β   β   β   βββ deployment.yaml
β   β   β   βββ ingress.yaml
β   β   β   βββ kustomization.yaml
β   β   β   βββ service.yaml
β   β   βββ frontend.properties
β   β   βββ kustomization.yaml
β   βββ kustomization.yaml
</code></pre>
<p>I would like to generate properties at the main level (common), at the backend/frontend level, and at the particular app level.
So I tried adding the following patch at the main level, and it works:</p>
<pre><code> - op: add
path: /spec/template/spec/containers/0/envFrom
value:
- configMapRef:
name: common-properties
</code></pre>
<p>and the following code to the nested directories (backend/frontend/particular app):</p>
<pre><code>- op: add
path: "/spec/template/spec/containers/0/envFrom/-"
value:
configMapRef:
name: backend-properties
</code></pre>
<p>But it doesn't work with the following error:</p>
<pre><code>add operation does not apply: doc is missing path: "/spec/template/spec/containers/0/envFrom/-": missing value
</code></pre>
<p>I have seen some examples on GitHub where that syntax was used: <a href="https://github.com/search?l=YAML&amp;p=1&amp;q=%2Fspec%2Ftemplate%2Fspec%2Fcontainers%2F0%2FenvFrom%2F-&amp;type=Code" rel="nofollow noreferrer">https://github.com/search?l=YAML&amp;p=1&amp;q=%2Fspec%2Ftemplate%2Fspec%2Fcontainers%2F0%2FenvFrom%2F-&amp;type=Code</a> (you have to be logged in to see results), and I'm not sure whether this stopped working in a specific Kustomize version (I'm using the newest version, 4.5.3) or never worked at all.</p>
<p>I have already written some Kustomize patches, and the <code>/-</code> syntax usually worked fine for fields that already exist in the manifest.
Is it possible to inject that <code>envFrom</code> at different levels?</p>
| rafal1337 | <p>It's hard to diagnose your problem without a reproducible example, but if I start with this <code>Deployment</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
spec:
containers:
- name: example
image: docker.io/alpine:latest
envFrom:
- configMapRef:
name: example-config
</code></pre>
<p>And use this <code>kustomization.yaml</code>, which includes your patch without
changes:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- target:
kind: Deployment
name: example
patch: |-
- op: add
path: "/spec/template/spec/containers/0/envFrom/-"
value:
configMapRef:
name: backend-properties
</code></pre>
<p>Then everything seems to work and I get the resulting output from
<code>kustomize build</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
spec:
containers:
- envFrom:
- configMapRef:
name: example-config
- configMapRef:
name: backend-properties
image: docker.io/alpine:latest
name: example
</code></pre>
| larsks |
<p>My Kubernetes deployment has an <code>initContainer</code> which fetches a token from a URL. My app container (3rd party) then needs that token as an environment variable.</p>
<p>A possible approach would be: the <code>initContainer</code> creates a Kubernetes Secret with the token value; the app container <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">uses the secret as an environment variable</a> via <code>env[].valueFrom.secretKeyRef</code>.</p>
<p>Creating the Secret from the <code>initContainer</code> requires <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/#directly-accessing-the-rest-api" rel="nofollow noreferrer">accessing the Kubernetes API from a Pod</a> though, which tends to be a tad cumbersome. For example, <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/#directly-accessing-the-rest-api" rel="nofollow noreferrer">directly accessing the REST API</a> requires granting proper permissions to the pod's service account; otherwise, creating the secret will fail with</p>
<pre><code>secrets is forbidden: User \"system:serviceaccount:default:default\"
cannot create resource \"secrets\" in API group \"\" in the namespace \"default\"
</code></pre>
<p>So I was wondering, isn't there any way to just write the token to a file on an <code>emptyDir</code> volume...something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
app: my-app
spec:
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
initContainers:
- name: fetch-auth-token
image: curlimages/curl
command:
- /bin/sh
args:
- -c
- |
echo "Fetching token..."
url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
curl $url > /auth-token/token
volumeMounts:
- mountPath: /auth-token
name: auth-token
...
volumes:
- name: auth-token
emptyDir: {}
</code></pre>
<p>... and then somehow use that file to populate an environment variable in the app container, similar to <code>env[].valueFrom.secretKeyRef</code>, along the lines of:</p>
<pre><code> containers:
- name: my-actual-app
image: thirdpartyappimage
env:
- name: token
valueFrom:
fileRef:
path: /auth-token/token
# ^^^^ this does not exist
volumeMounts:
- mountPath: /auth-token
name: auth-token
</code></pre>
<p>Unfortunately, there's no <code>env[].valueFrom.fileRef</code>.</p>
<p>I considered overwriting the app container's <code>command</code> with a shell script which loads the environment variable from the file before launching the main command; however, the container image doesn't even contain a shell.</p>
<p>Is there any way to set the environment variable in the app container from a file?</p>
| Max | <blockquote>
<p>Creating the Secret from the initContainer requires accessing the Kubernetes API from a Pod though, which tends to be a tad cumbersome...</p>
</blockquote>
<p>It's not actually all that bad; you only need to add a ServiceAccount, Role, and RoleBinding to your deployment manifests.</p>
<p>The ServiceAccount manifest is minimal, and you only need it if you don't want to grant permissions to the <code>default</code> service account in your namespace:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: secretmaker
</code></pre>
<p>Then your Role grants access to secrets (we need <code>create</code> and <code>delete</code> permissions, and having <code>get</code> and <code>list</code> is handy for debugging):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: env-example
name: secretmaker
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- get
- delete
- list
</code></pre>
<p>A RoleBinding connects the ServiceAccount to the Role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: env-example
name: secretmaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secretmaker
subjects:
- kind: ServiceAccount
name: secretmaker
namespace: default
</code></pre>
<p>And with those permissions in place, the Deployment is relatively simple:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: env-example
name: env-example
namespace: env-example
spec:
selector:
matchLabels:
app: env-example
template:
metadata:
labels:
app: env-example
spec:
serviceAccountName: secretmaker
initContainers:
- command:
- /bin/sh
- -c
- |
echo "Fetching token..."
url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
curl $url -o /tmp/authtoken
kubectl delete secret authtoken > /dev/null 2>&1
kubectl create secret generic authtoken --from-file=AUTH_TOKEN=/tmp/authtoken
image: docker.io/alpine/k8s:1.25.6
name: create-auth-token
containers:
- name: my-actual-app
image: docker.io/alpine/k8s:1.25.6
command:
- sleep
- inf
envFrom:
- secretRef:
name: authtoken
</code></pre>
<p>The application container here is a no-op that runs <code>sleep inf</code>; that gives you the opportunity to inspect the environment by running:</p>
<pre><code>kubectl exec -it deployment/env-example -- env
</code></pre>
<p>Look for the <code>AUTH_TOKEN</code> variable created by our <code>initContainer</code>.</p>
<hr />
<p>All the manifests mentioned here can be found in <a href="https://github.com/larsks/so-example-75265056-createsecret" rel="nofollow noreferrer">this repository</a>.</p>
| larsks |
<p>I have a kubernetes pod configuration with a named volume and want to run it via <code>podman play kube</code> which fails for an unknown reason:</p>
<p><code>podman play kube kubernetes.yml</code>:</p>
<pre><code>Error: kubernetes.yml: Volume mount database-data-volume specified for container but not configured in volumes
</code></pre>
<hr />
<p>The error indicates that the volume does not exist, but it's there:</p>
<pre><code>> podman volume list
DRIVER VOLUME NAME
local database-data-volume
</code></pre>
<p><code>kubernetes.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
...
spec:
containers:
...
- image: it.impl/h2-database:2.1.214-0
name: database
ports:
- containerPort: 8082
hostPort: 8082
- containerPort: 9092
hostPort: 9092
volumeMounts:
- mountPath: /opt/h2-database/data
name: database-data-volume
volumes:
- persistentVolumeClaim:
claimName: database-data-volume
restartPolicy: Never
</code></pre>
| asbachb | <p>Your <code>volume</code> is missing a name (this would fail on Kubernetes as well). The <code>volumes</code> section maps a volume name to some sort of volume definition; when you write:</p>
<pre><code> volumes:
- persistentVolumeClaim:
claimName: database-data-volume
</code></pre>
<p>You have a volume definition but no volume name. You need:</p>
<pre><code> volumes:
- name: database-data-volume
persistentVolumeClaim:
claimName: database-data-volume
</code></pre>
| larsks |
<p>I can't seem to find any clear information on this anywhere, but is it possible in a Helm chart to require a third party, such as <code>stable/jenkins</code>, and specify configuration values? </p>
<p>All the examples I see are for running the <code>helm install</code> command directly but I would like to be able to configure it as part of my application.</p>
| Ryall | <p>In answer, @desaintmartin referred me to these documents in Slack:</p>
<ul>
<li><a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer" title="Subcharts and Globals">Subcharts and Globals</a></li>
<li><a href="https://helm.sh/docs/topics/chart_best_practices/dependencies/" rel="nofollow noreferrer">Requirements</a></li>
<li><a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">Helm Dependencies</a></li>
</ul>
<p>This led me to find the <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">specific part I was looking for</a>, where the parent chart can override sub-charts by specifying the chart name as a key in the parent <code>values.yaml</code>.</p>
<p>In the application chart's <code>requirements.yaml</code>:</p>
<pre><code>dependencies:
- name: jenkins
# Can be found with "helm search jenkins"
version: '0.18.0'
# This is the binaries repository, as documented in the GitHub repo
repository: 'https://kubernetes-charts.storage.googleapis.com/'
</code></pre>
<p>Run:</p>
<pre><code>helm dependency update
</code></pre>
<p>In the application chart's <code>values.yaml</code>:</p>
<pre><code># ...other normal config values
# Name matches the sub-chart
jenkins:
# This will be override "someJenkinsConfig" in the "jenkins" sub-chart
someJenkinsConfig: value
</code></pre>
| Ryall |
<p>I am mounting a ConfigMap volume to a pod using the Go k8s client, where <code>template-conf</code> is a file (deployed via Helm YAML in the namespace):</p>
<pre class="lang-golang prettyprint-override"><code> pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
Name: "conf",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: "template-conf",
},
},
},
})
</code></pre>
<p>I was wondering if the following are possible to do in Go or if there's a workaround:</p>
<ol>
<li>Replacing the entire contents of the configMap file with another file</li>
<li>Appending more lines to the configMap file. So something like this:</li>
</ol>
<pre class="lang-golang prettyprint-override"><code>if (...) {
...append(configMap, {
"hello-world\nhello-world\nhello-world"
}),
}
</code></pre>
| koonig | <p>You can't edit the mounted file in place from the pod spec; instead, modify the <code>ConfigMap</code> object itself through the API: retrieve it, change its <code>Data</code> map, and update it.</p>
<p>Try:</p>
<pre><code>// Retrieve the existing ConfigMap
configMap, err := clientset.CoreV1().ConfigMaps("namespace").Get(context.TODO(), "template-conf", metav1.GetOptions{})
if err != nil {
    // handle error
}

// Modify the data in the ConfigMap; assigning to a key replaces its entire value
configMap.Data["key1"] = "value1"
configMap.Data["key2"] = "value2"

// Write the modified ConfigMap back to the cluster
configMap, err = clientset.CoreV1().ConfigMaps("namespace").Update(context.TODO(), configMap, metav1.UpdateOptions{})
if err != nil {
    // handle error
}
</code></pre>
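<p>If the goal is to append lines to the file-style value held in the ConfigMap (point 2 in the question), that's just string concatenation on the <code>Data</code> map before calling <code>Update</code>; the key name below is an assumption:</p>
<pre><code>// Key name "template.conf" is hypothetical; use whatever key your ConfigMap stores the file under.
existing := configMap.Data["template.conf"]
configMap.Data["template.conf"] = existing + "\nhello-world\nhello-world\nhello-world"
</code></pre>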
| Andrew Arrow |
<p>I was wondering if it is possible to get <a href="https://kustomize.io/" rel="noreferrer">kustomize</a> resources from a private GitHub repository. I already tried something like this without success:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [email protected]:gituser/kustomize.git/kustomize/main/nginx.yaml
- ssh://github.com/gituser/kustomize.git/kustomize/main/nginx.yaml
</code></pre>
<p>error</p>
<pre><code>Error: accumulating resources: accumulation err='accumulating resources from 'ssh://github.com/diego1277/kustomize.git//kustomize/main/nginx.yaml': evalsymlink failure on '/Users/diego/Desktop/estudo/kustomize/see/base/ssh:/github.com/diego1277/kustomize.git/kustomize/main/nginx.yaml' : lstat /Users/diego/Desktop/estudo/kustomize/see/base/ssh:: no such file or directory': evalsymlink failure on '/private/var/folders/qq/mk6t7dpd5435qm78_zsfdjvm0000gp/T/kustomize-056937086/kustomize/main/nginx.yaml' : lstat /private/var/folders/qq/mk6t7dpd5435qm78_zsfdjvm0000gp/T/kustomize-056937086/kustomize: no such file or directory
</code></pre>
| user3573246 | <p>Your remote resource needs to resolve to a <em>directory</em> that contains a
<code>kustomization.yaml</code> file. That is, instead of:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [email protected]:gituser/kustomize.git/kustomize/main/nginx.yaml
</code></pre>
<p>You need:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [email protected]:gituser/kustomize.git/kustomize/main/
</code></pre>
<p>And your <code>kustomize/main</code> directory should contain
<code>kustomization.yaml</code>. You can try this out using a public repository,
for example:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "[email protected]:kubernetes-sigs/kustomize/examples/helloWorld"
</code></pre>
| larsks |
<p>I am searching for a solution that enables me to set up a single node K8s cluster and, if needed, add nodes to it later.</p>
<p>I am aware of solutions such as minikube and microk8s but they are not expandable. I am trying k3s at the moment exactly because it is offering this feature but I have some problems with storage and other stuff that I am working on them. </p>
<p>Now my questions:</p>
<ul>
<li>What other solution for this exists?</li>
<li>What are the disadvantages if I untaint the master node and run everything there (for a long period and not just for test)?</li>
</ul>
| AVarf | <p>You can use <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a> to set up a single-node "cluster". Then you can use the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/" rel="nofollow noreferrer">join command</a> to add more nodes later; a rough sketch is below.</p>
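<p>A minimal sketch of that flow (the pod-network choice, versions, and taint key are assumptions you would adapt to your cluster):</p>
<pre><code># on the first (and initially only) node
kubeadm init

# allow normal workloads to run on the control-plane node
# (on older releases the taint key is node-role.kubernetes.io/master)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

# later, on each additional node you want to add
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>
<p>As for running everything on an untainted control-plane node: it works, but your workloads compete with the control-plane components for resources and you keep a single point of failure, which is usually fine for a lab and less so for production.</p>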
| fat |
<p>I used below file to create service account</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-reader
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: reader-cr
rules:
- apiGroups:
- ""
resources:
- '*'
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- '*'
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- '*'
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-only-rb
subjects:
- kind: ServiceAccount
name: sa-reader
roleRef:
kind: ClusterRole
name: reader-cr
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>The kubeconfig I created is something similar</p>
<pre><code>apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: qa
cluster:
certificate-authority-data: ca
server: https:/<server>:443
users:
- name: sa-reader
user:
as-user-extra: {}
token: <token>
contexts:
- name: qa
context:
cluster: qa
user: sa-reader
namespace: default
current-context: qa
</code></pre>
<p>With this kubeconfig file, I am able to access resources in the default namespace but not any other namespace. How can I access resources in other namespaces as well?</p>
| Vikas Rathore | <p>You can operate on a namespace explicitly by using the <code>-n</code> (<code>--namespace</code>) option to <code>kubectl</code>:</p>
<pre><code>$ kubectl -n my-other-namespace get pod
</code></pre>
<p>Or by changing your default namespace with the <code>kubectl config</code> command:</p>
<pre><code>$ kubectl config set-context --current --namespace my-other-namespace
</code></pre>
<p>With the above command, all future invocations of <code>kubectl</code> will assume the <code>my-other-namespace</code> namespace.</p>
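<p>As an aside: the <code>RoleBinding</code> in your manifest only grants <code>reader-cr</code> inside the <code>default</code> namespace, so if you see <code>Forbidden</code> errors (rather than just empty results) in other namespaces, you would need a <code>ClusterRoleBinding</code> instead; a minimal sketch (names are illustrative):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-crb
subjects:
- kind: ServiceAccount
  name: sa-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
</code></pre>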
| larsks |
<p>Using <code>crictl</code> and <code>containerd</code>, is there an easy way to find out to which pod/container a given process belongs, using its <code>PID</code> on the host machine?</p>
<p>For example, how can I retrieve the name of the pod which runs the process below (<code>1747</code>):</p>
<pre><code>root@k8s-worker-node:/# ps -ef | grep mysql
1000 1747 1723 0 08:58 ? 00:00:01 mysqld
</code></pre>
| Fabrice Jammes | <p>Assuming that you're looking at the primary process in a pod, you could do something like this:</p>
<pre><code>crictl ps -q | while read cid; do
if crictl inspect -o go-template --template '{{ .info.pid }}' $cid | grep -q $target_pid; then
echo $cid
fi
done
</code></pre>
<p>This walks through all the crictl-managed containers and compares each container's primary pid against <code>$target_pid</code> (which you have set beforehand to the host pid you are interested in).</p>
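<p>Once you have the container id, you can usually map it back to its pod via the labels the CRI attaches to the container; with containerd these are typically <code>io.kubernetes.pod.name</code> and <code>io.kubernetes.pod.namespace</code> (treat the exact label names as an assumption for your runtime):</p>
<pre><code># $cid is the container id found by the loop above
crictl inspect $cid | grep -E '"io.kubernetes.pod.(name|namespace)"'
</code></pre>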
| larsks |
<p>I hope all is well!</p>
<p>I'm trying to wrap my head around Ingresses and Services. I'm trying to reach my pod thru my ingress hostname:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: oc-backend
app.kubernetes.io/version: 1.16.0
helm.sh/chart: oc-backend-0.1.0
name: oc-backend
namespace: oc
spec:
rules:
- host: oc-backend.com
http:
paths:
- backend:
service:
name: oc-backend
port:
number: 80
path: /
pathType: ImplementationSpecific
</code></pre>
<p>And am exposing a service that would be reached by the ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: oc-backend
app.kubernetes.io/version: 1.16.0
helm.sh/chart: oc-backend-0.1.0
name: oc-backend
namespace: oc
spec:
ports:
- port: 80
targetPort: 3000
selector:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
type: LoadBalancer
</code></pre>
<p>The deployment the pods are running in run on port 3000:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: oc-backend
app.kubernetes.io/version: 1.16.0
helm.sh/chart: oc-backend-0.1.0
name: oc-backend
namespace: oc
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
template:
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
spec:
containers:
- envFrom:
- secretRef:
name: secrets
image: 'registry.gitlab.com/open-concepts/open-concepts-backend:master'
imagePullPolicy: Always
name: oc-backend
ports:
- containerPort: 3000
resources: {}
securityContext: {}
imagePullSecrets:
- name: credentials
securityContext: {}
serviceAccountName: oc-backend-sa
</code></pre>
<p>Yet every time I try to ping a route in <code>oc-backend.local</code>, I get a <code>getaddrinfo ENOTFOUND oc-backend.local</code> error. Am I missing something in the flow?</p>
<p>TIA!</p>
<p><strong>EDIT</strong></p>
<p>I'm adding some info about the Minikube tunnelling and ingress addons. I've confirmed that the add-on was enabled prior to this post, and for the sake of troubleshooting I've explicitly re-run the commands:</p>
<pre class="lang-bash prettyprint-override"><code>β minikube addons enable ingress
π‘ ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
π‘ After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
βͺ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
βͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
βͺ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
π Verifying ingress addon...
π The 'ingress' addon is enabled
</code></pre>
<p>And made sure to keep the Minikube tunnel open:</p>
<pre class="lang-bash prettyprint-override"><code>β sudo minikube tunnel
β
Tunnel successfully started
π NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
β The service/ingress argocd-server requires privileged ports to be exposed: [80 443]
π sudo permission will be asked for it.
π Starting tunnel for service argocd-server.
β The service/ingress oc-backend requires privileged ports to be exposed: [80]
π sudo permission will be asked for it.
π Starting tunnel for service oc-backend.
β The service/ingress oc-backend requires privileged ports to be exposed: [80 443]
π sudo permission will be asked for it.
π Starting tunnel for service oc-backend.
</code></pre>
<p>Thanks to the tunnelling, I get an external ip and I try to curl my endpoint, with no success:</p>
<pre class="lang-bash prettyprint-override"><code>β kubectl get svc -n oc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oc-backend LoadBalancer 10.109.58.97 127.0.0.1 80:32026/TCP 26h
β curl 10.109.58.97
curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p><strong>EDIT 2</strong>:</p>
<p>Minikube tunnelling takes me to 127.0.0.1, and <a href="https://github.com/kubernetes/minikube/issues/7344#issuecomment-607318525" rel="nofollow noreferrer">apparently it's a "feature"</a>.</p>
<pre class="lang-bash prettyprint-override"><code>β curl -H 'Host: oc-backend.com' 192.168.49.2
curl: (56) Recv failure: Connection reset by peer
</code></pre>
| Fares | <p><strong>Update 1</strong></p>
<p>Given this output:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oc-backend LoadBalancer 10.109.58.97 127.0.0.1 80:32026/TCP 26h
</code></pre>
<p>You're trying to reach the service using the cluster ip. That's not going to work; the cluster ip is internal to Kubernetes and isn't something that you interact with.</p>
<p>You would normally expect to use either (a) the external ip for the service or (b) the address on which your ingress service is listening.</p>
<p>There's something awry with the external ip assigned to your service; it shouldn't be <code>127.0.0.1</code>. With <code>minikube tunnel</code> running, we expect to see something like this:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oc-backend LoadBalancer 10.99.21.223 10.99.21.223 80:31301/TCP 111s
</code></pre>
<p>Where that 10.99.21.223 address is on a private network created by minikube for this cluster to which we have access because of routes set up by the <code>minikube tunnel</code> command:</p>
<pre><code>$ ip route
[...]
10.96.0.0/12 via 192.168.49.2 dev br-577fbef29a21
192.168.49.0/24 dev br-577fbef29a21 proto kernel scope link src 192.168.49.1
</code></pre>
<p>However, you should also be able to access your service via your Ingress, since you've enabled the ingress addon. Running <code>kubectl get ingress</code> should yield something like:</p>
<pre><code>$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
oc-backend nginx oc-backend.com 192.168.49.2 80 6m2s
</code></pre>
<p>That <code>192.168.49.2</code> is the address of our ingress service. Using the same deployment manifests as in the original answer (below), we can see that works as expected:</p>
<pre><code>$ curl -H 'Host: oc-backend.com' 192.168.49.2
Hostname: oc-backend-988c667f5-5v4pv
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 172.17.0.5:49692
GET / HTTP/1.1
Host: oc-backend.com
User-Agent: curl/7.82.0
Accept: */*
X-Forwarded-For: 192.168.49.1
X-Forwarded-Host: oc-backend.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.49.1
X-Request-Id: 1315d986890020f4c8cfbba70a2a2047
X-Scheme: http
</code></pre>
<p>Here I'm providing an explicit <code>Host:</code> header, but we could achieve the same goal by adding an entry to <code>/etc/hosts</code>:</p>
<pre><code>192.168.49.2 oc-backend.com
</code></pre>
<p>With this in place, we can use the hostname in <code>curl</code> (or in our browser) to access the service:</p>
<pre><code>$ curl oc-backend.com
Hostname: oc-backend-988c667f5-5v4pv
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 172.17.0.5:49414
GET / HTTP/1.1
Host: oc-backend.com
User-Agent: curl/7.82.0
Accept: */*
X-Forwarded-For: 192.168.49.1
X-Forwarded-Host: oc-backend.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.49.1
X-Request-Id: 2c5b2aac46e9641690485c93ec02dadf
X-Scheme: http
</code></pre>
<hr />
<p>I think there are a couple of issues here.</p>
<p>First, minikube, out of the box, doesn't even <em>have</em> an Ingress provider, so unless you're leaving out some details from your question, your Ingress resource isn't doing anything. You would need to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">install an ingress provider</a> after setting up minikube.</p>
<p>If you create a <code>LoadBalancer</code> service, you can access that without an Ingress, but according to <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">the minikube documentation</a> you need to run <code>minikube tunnel</code> first. When you first create a <code>LoadBalancer</code> service, it doesn't get an external ip address:</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oc-backend LoadBalancer 10.100.71.243 <pending> 80:32539/TCP 2m25s
</code></pre>
<p>After you run <code>minikube tunnel</code>, your service will have an external ip:</p>
<pre><code>$ k get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oc-backend LoadBalancer 10.100.71.243 10.100.71.243 80:32539/TCP 4m23s
</code></pre>
<p>And your system will have the necessary routes to reach the service at that address. So with the configuration you show in your question, and the above output, we would expect your service to be available at <code>http://10.100.71.243</code>.</p>
<hr />
<p>Let's try it out. First, I've made some minor changes to the manifests in your question: I'm running a dummy image instead of the open-concepts-backend image, because I didn't want to muck around with any application configuration issues. I'm using the <code>containous/whoami</code> image, which just displays some metadata about the environment (it listens on port 80 instead of port 3000). My manifests look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
name: oc-backend
namespace: oc
spec:
ports:
- name: http
port: 80
targetPort: http
selector:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
name: oc-backend
namespace: oc
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
template:
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
spec:
containers:
- image: docker.io/containous/whoami:latest
imagePullPolicy: Always
name: oc-backend
ports:
- containerPort: 80
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
labels:
app.kubernetes.io/instance: oc-backend
app.kubernetes.io/name: oc-backend
name: oc-backend
namespace: oc
spec:
rules:
- host: oc-backend.com
http:
paths:
- backend:
service:
name: oc-backend
port:
name: http
path: /
pathType: Prefix
</code></pre>
<p>With all of these manifests deployed, I see the output from <code>kubectl get service</code> that I show above, and I can access the service as expected from my host (after running the <code>minikube tunnel</code> command):</p>
<pre><code>$ curl 10.100.71.243
Hostname: oc-backend-988c667f5-6shxv
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 172.17.0.1:25631
GET / HTTP/1.1
Host: 10.100.71.243
User-Agent: curl/7.82.0
Accept: */*
</code></pre>
| larsks |
<p>I'm trying to incorporate a helm chart into my kustomize setup. So I have added it to my <code>kustomization.yaml</code>:</p>
<pre><code>...
helmGlobals:
chartHome: ../test/vault
helmCharts:
- name: helm-vault
valuesFile: ../test/vault-values.yaml
...
</code></pre>
<p>When I try to build this on command line I get this error:</p>
<pre><code>$ kustomize build --enable-helm infrastructure/vault/dev
Error: accumulating resources: accumulation err='accumulating resources from '../_base': '/home/tel/Private/Sources/[email protected]/k8s-applications/infrastructure/vault/_base' must resolve to a file': recursed accumulation of path '/home/tel/Private/Sources/[email protected]/k8s-applications/infrastructure/vault/_base': unable to run: 'helm version -c --short' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-821391692/helm HELM_CACHE_HOME=/tmp/kustomize-helm-821391692/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-821391692/helm/.data] (is 'helm' installed?)
</code></pre>
<p>The message suggests that <code>helm</code> might not be installed; however, when I run it manually it is readily found:</p>
<pre><code>$ helm version -c --short
v3.10.1+g9f88ccb
</code></pre>
<p>Also; my kustomize version:</p>
<pre><code>$ kustomize version
v5.0.1
</code></pre>
<p>What might be the reason?</p>
<p><strong>Update</strong></p>
<p>I tried to follow this very nice guide: <a href="https://github.com/kubernetes-sigs/kustomize/blob/kustomize/v4.1.0/examples/chart.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/kustomize/v4.1.0/examples/chart.md</a></p>
<p>I still get same error message even though I used the <code>--helm-command</code> argument to give kustomize the absolute path to the helm binary <code>/snap/bin/helm</code>.</p>
| thoredge | <p>It may seem like <code>kustomize</code> has a problem with <code>snap</code> in some way. I downloaded a binary version from the <a href="https://github.com/helm/helm/releases" rel="nofollow noreferrer">helm release page</a>, and specifying that binary with <code>--helm-command</code> seems to remove the problem entirely.</p>
<p><strong>Update</strong></p>
<p>It wasn't this entirely neither. So far it seems like the helm-binary have to reside under my home-directory. See <a href="https://github.com/kubernetes-sigs/kustomize/issues/5094#issuecomment-1473459361" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/5094#issuecomment-1473459361</a></p>
<p><strong>Updatier</strong></p>
<p>It seems to be snap after all, although not the helm installation. The kustomize binary was also installed by snap. I removed it, installed a binary version instead, and then everything worked like a charm.</p>
<p>It could be related to kustomize being installed as a regular snap application while helm is installed as a "classic" snap application.</p>
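<p>For reference, the working combination looked roughly like this (paths and versions are illustrative):</p>
<pre><code># plain binaries instead of the snap packages
curl -LO https://get.helm.sh/helm-v3.10.1-linux-amd64.tar.gz
tar -xzf helm-v3.10.1-linux-amd64.tar.gz linux-amd64/helm

# point the (non-snap) kustomize at the extracted helm binary
./kustomize build --enable-helm --helm-command "$PWD/linux-amd64/helm" infrastructure/vault/dev
</code></pre>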
| thoredge |
<p>I have a huge patch file that I want to apply to specific overlays. I usually patch files under overlays as it is supposed to be. But the file is same and I do not want to copy it to each overlay. If I could keep my patch file <code>app-new-manifest.yaml</code> under base and patch it under overlay with a single line in <code>kustomization.yaml</code>, it would be awesome.</p>
<pre><code>├── base
│   ├── app-new-manifest.yaml # I am trying to patch this
│   ├── kustomization.yaml
│   └── app
│       ├── app.yaml
│       └── kustomization.yaml
└── overlay
    ├── environment1
    │   └── kustomization.yaml # I want to patch app-new-manifest.yaml in base
    │
    ├── environment2
    │   └── kustomization.yaml # No patch. app.yaml will be as is
    │
    └── environment3
        └── kustomization.yaml # I want to patch app-new-manifest.yaml in base
</code></pre>
<p>When I'm trying to do so, I get this error:</p>
<pre><code>'/base/app/app-new-manifest.yaml' is not in or below '/overlays/environment1'
</code></pre>
<p>Which means, when you patch, the patch file has to be located under overlay not base. Is there any workaround to do this? Because copying the same file to each environment does not make sense to me.</p>
<p>Any ideas around this will highly be appreciated, thanks!</p>
<p>Edit:</p>
<p>Add /base/app/kustomization.yaml</p>
<pre><code>resources:
- app.yaml
</code></pre>
<p>Add /overlays/environment1/kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/app
patchesStrategicMerge:
- ../../base/app/app-new-manifest.yaml # Patch new manifest
</code></pre>
<p>kustomize version:</p>
<pre><code>{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-07-01T00:44:28+01:00 GoOs:darwin GoArch:amd64}
</code></pre>
| titanic | <p>You can't include a file that is outside of you current directory, but you <em>can</em> include another directory that has a <code>kustomize.yaml</code> file. So organize your layout like this:</p>
<pre><code>.
├── base
└── overlay
    ├── patched_based
    ├── environment1
    ├── environment2
    └── environment3
</code></pre>
<p>In <code>overlay/patched_base</code>, place your patch file and a kustomization file like:</p>
<pre><code>resources:
- ../base
patchesStrategicMerge:
- app-new-manifest.yaml
</code></pre>
<p>In <code>overlay/environment1</code> and <code>overlay/environment3</code>, you have:</p>
<pre><code>resources:
- ../patched_base
</code></pre>
<p>Whereas in <code>overlay/environment2</code>, you have:</p>
<pre><code>resources:
- ../../base
</code></pre>
<p>I think this solves all your requirements:</p>
<ul>
<li>You only need a single instance of the patch</li>
<li>You can choose to use the patch or not from each individual overlay</li>
</ul>
| larsks |
<p>If I do</p>
<pre><code>POD=$($KUBECTL get pod -lsvc=app,env=production -o jsonpath="{.items[0].metadata.name}")
kubectl debug -it --image=mpen/tinker "$POD" -- zsh -i
</code></pre>
<p>I can get into a shell running inside my pod, but I want access to the filesystem for a container I've called "php". I think this should be at <code>/proc/1/root/app</code> but that directory doesn't exist. For reference, my Dockerfile has:</p>
<pre><code>WORKDIR /app
COPY . .
</code></pre>
<p>So all the files should be in the root <code>/app</code> directory.</p>
<p>If I add <code>--target=php</code> then I get permission denied:</p>
<pre><code>β― cd /proc/1/root
cd: permission denied: /proc/1/root
</code></pre>
<p>How do I get access to the files?</p>
| mpen | <p>Reading through <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/" rel="nofollow noreferrer">the documentation</a>, using <code>kubectl debug</code> won't give you access to the filesystem in another container.</p>
<p>The simplest option may be to use <code>kubectl exec</code> to start a shell inside an existing container. There are some cases in which this isn't an option (for example, some containers contain only a single binary, and won't have a shell or other common utilities available), but a php container will typically have a complete filesystem.</p>
<p>In this case, you can simply:</p>
<pre><code>kubectl exec -it $POD -- sh
</code></pre>
<p>You can replace <code>sh</code> by <code>bash</code> or <code>zsh</code> depending on what shells are available in the existing image.</p>
<hr />
<p>The linked documentation provides several other debugging options, but all involve working on <em>copies of</em> the pod.</p>
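<p>For completeness, a hedged sketch of that copy-based approach (names are illustrative): clone the pod with process-namespace sharing enabled, after which a root shell in the debug container can reach the php container's filesystem via <code>/proc/<pid>/root</code>:</p>
<pre><code>kubectl debug "$POD" -it \
  --image=mpen/tinker \
  --copy-to=debug-copy \
  --share-processes \
  -- zsh -i
</code></pre>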
| larsks |
<p>I have an overlay <code>kustomization.yaml</code> as following:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base/
patches:
- patch.yaml
secretGenerator:
- name: my-secrets
env: password.env
</code></pre>
<p>When applying it with the embedded <code>kustomize</code> in <code>kubectl</code>, i.e. <code>kubectl -k</code>, it works fine, but now I need to generate the final yaml before applying it, so when I attempt to use kustomize itself through <code>kustomize build devops/kustomize/my-cluster/overlay/local > local.yaml</code> I'm getting this error:</p>
<pre><code>Error: json: unknown field "env"
</code></pre>
<p>The <code>secretGenerator</code> spec has an <code>env</code> parameter, so I'm not sure what I am doing wrong.</p>
| Andrey | <p>Turns out that newer versions of kustomize use the <code>envs</code> parameter instead of <code>env</code>; see the sketch below.</p>
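<p>So the generator stanza from the question becomes (note that <code>envs</code> takes a list of files):</p>
<pre><code>secretGenerator:
  - name: my-secrets
    envs:
      - password.env
</code></pre>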
| Andrey |
<p>I'm just adding the containers part of the spec. Everything is otherwise set up and working fine and values are hardcoded here. This is a simple Postgres pod that is part of a single replica deployment with its own PVC to persist state. But the problem has nothing to do with my pod/deployment setup.</p>
<pre><code>containers:
- name: postgres-container
image: postgres
imagePullPolicy: Always
volumeMounts:
- name: postgres-internal-volume
mountPath: /var/lib/postgresql/data
subPath: postgres
envFrom:
- configMapRef:
name: postgres-internal-cnf
ports:
- containerPort: 5432
command: ['psql']
args: [-U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'dominion'" | grep -q 1 || psql -h localhost -p 5432 -U postgres -c "CREATE DATABASE dominion"]
</code></pre>
<p>This command will create a database if it does not already exist. If I create the deployment and exec into the pod and run this command everything works fine. If I however run it here the pod fails to spin up and I get this error:</p>
<pre><code>psql: error: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
</code></pre>
<p>I was under the impression that this error comes from the default connection values being incorrect, but here I am hardcoding the localhost and the port number.</p>
| Happy Machine | <p>With your pod spec, you've replaced the default command -- which starts up the postgres server -- with your own command, so the server never starts. The proper way to perform initialization tasks with the official Postgres image is <a href="https://github.com/docker-library/docs/blob/master/postgres/README.md#initialization-scripts" rel="nofollow noreferrer">in the documentation</a>.</p>
<p>You want to move your initialization commands into a ConfigMap, and then mount the scripts into <code>/docker-entrypoint-initdb.d</code> as described in those docs.</p>
<p>The docs have more details, but here's a short example. We want to run
<code>CREATE DATABASE dominion</code> when the postgres server starts (and only
if it is starting with an empty data directory). We can define a
simple SQL script in a <code>ConfigMap</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-init-scripts
data:
create-dominion-db.sql: |
CREATE DATABASE dominion
</code></pre>
<p>And then mount that script into the appropriate location in the pod
spec:</p>
<pre><code>volumes:
- name: postgres-init-scripts
configMap:
name: postgres-init-scripts
containers:
- name: postgres-container
image: postgres
imagePullPolicy: Always
volumeMounts:
- name: postgres-internal-volume
mountPath: /var/lib/postgresql/data
subPath: postgres
- name: postgres-init-scripts
mountPath:
/docker-entrypoint-initdb.d/create-dominion-db.sql
subPath: create-dominion-db.sql
envFrom:
- configMapRef:
name: postgres-internal-cnf
ports:
- containerPort: 5432
</code></pre>
| larsks |
<p>how can I add object to array via Kustomize? As a result I would like to have two <code>ServiceAccount</code>s added to <code>subjects</code>, like so:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: name
namespace: test1
- kind: ServiceAccount
name: name
namespace: test2
</code></pre>
<p>I'm trying with that patch:</p>
<pre><code>- op: add
path: "/subjects/0"
value:
kind: ServiceAccount
name: name
namespace: test1
</code></pre>
<p>And another patch for second environment:</p>
<pre><code>- op: add
path: "/subjects/1"
value:
kind: ServiceAccount
name: name
namespace: test2
</code></pre>
<p>But as a result I'm getting duplicated <code>subjects</code>, which is of course wrong:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: name
  namespace: test1 # the same...
- kind: ServiceAccount
name: name
  namespace: test1 # ...as here
</code></pre>
<p>What would be a proper way to add it?</p>
| Murakami | <p>If I start with a ClusterRoleBinding that looks like this in <code>crb.yaml</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects: []
</code></pre>
<p>And I create a <code>kustomization.yaml</code> file like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crb.yaml
patches:
- target:
kind: ClusterRoleBinding
name: binding
patch: |
- op: add
path: /subjects/0
value:
kind: ServiceAccount
name: name
namespace: test1
- target:
kind: ClusterRoleBinding
name: binding
patch: |
- op: add
path: /subjects/1
value:
kind: ServiceAccount
name: name
namespace: test2
</code></pre>
<p>Then I get as output:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: name
namespace: test1
- kind: ServiceAccount
name: name
namespace: test2
</code></pre>
<p>Which is I think what you're looking for. Does this help? Note that instead of explicitly setting an index in the <code>path</code>, like:</p>
<pre><code>path: /subjects/0
</code></pre>
<p>We can instead specify:</p>
<pre><code>path: /subjects/-
</code></pre>
<p>Which means "append to the list", and in this case will generate the same output.</p>
| larsks |
<p>Any ideas how can I replace variables via Kustomize? I simply want to use a different ACCOUNT_ID and IAM_ROLE_NAME for each overlay.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/${IAM_ROLE_NAME}
</code></pre>
<p>Thanks in advance!</p>
| cosmos-1905-14 | <p>Kustomize doesn't use "variables". The way you would typically handle this is by patching the annotation in an overlay. That is, you might start with a base directory that looks like:</p>
<pre><code>base
├── kustomization.yaml
└── serviceaccount.yaml
</code></pre>
<p>Where <code>serviceaccount.yaml</code> contains your <code>ServiceAccount</code> manifest:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-service-account
  annotations:
eks.amazonaws.com/role-arn: "THIS VALUE DOESN'T MATTER"
</code></pre>
<p>And <code>kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- serviceaccount.yaml
</code></pre>
<p>Then in your overlays, you would replace the <code>eks.amazonaws.com/role-arn</code> annotation by using a patch. For example, if you had an overlay called <code>production</code>, you might end up with this layout:</p>
<pre><code>.
├── base
│   ├── kustomization.yaml
│   └── serviceaccount.yaml
└── overlay
    └── production
        ├── kustomization.yaml
        └── patch_aws_creds.yaml
</code></pre>
<p>Where <code>overlay/production/patch_aws_creds.yaml</code> looks like:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-service-account
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::1234:role/production-role
</code></pre>
<p>And <code>overlay/production/kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- patch_aws_creds.yaml
</code></pre>
<p>With this in place, running...</p>
<pre><code>kustomize build overlay/production
</code></pre>
<p>...would generate output using your production role information, and so forth for any other overlays you choose to create.</p>
<hr />
<p>If you don't like the format of the strategic merge patch, you can use a json patch document instead. Here's what it would look like inline in your <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
version: v1
kind: ServiceAccount
name: my-service-account
patch: |-
- op: replace
path: /metadata/annotations/eks.amazonaws.com~1role-arn
value: arn:aws:iam::1234:role/production-role
</code></pre>
<p>I don't think this really gets you anything, though.</p>
| larsks |
<p>The field <code>employeeData</code> is of type <code>json</code>.</p>
<p>Consider the query:</p>
<pre><code>select * FROM "employees" where employeeData::text like '%someSampleHere%'
</code></pre>
<p>When I'm running the query inside Postgres it works perfect and I get the rows that I'm asking for.</p>
<p>However when I use it with KubeCTL and running this query outside of the PG App</p>
<pre><code>psql -d employess_db -U postgres -c 'select * FROM "employees" where employeeData::text like 'someSampleHere';'
</code></pre>
<p>PG throws</p>
<pre><code>ERROR: syntax error at or near "%"
LINE 1: select * FROM "employees" where employeeData::text like %someSampleHere%;
</code></pre>
<p>How can we fix it ?</p>
| JAN | <p>Sounds like a quoting problem to me. You neglected to show your actual <code>kubectl</code> command line in your question, but this works for me without any errors:</p>
<pre><code>kubectl exec postgres-pod -- psql -U postgres -d employees_db \
-c "select * from \"employees\" where employeeData::text like '%someSampleHere%'"
</code></pre>
| larsks |
<p>We have several services running in OpenShift Cluster. Each service is exposed with a route.</p>
<p>For each route we are creating an SSL certificate. The problem is managing certificate expiry for this many routes.</p>
<p>One way to solve this problem is to update the cluster-level cert with wildcard domain entries. We do not want to do this as it could be a security risk, but we will end up using this approach if no other solution is found.</p>
<p>What are other ways to manage this many certificates for all the services?</p>
<p>How can we auto-renew several certificates? Any best practice around this would be helpful.</p>
| user804401 | <p>We use <a href="https://cert-manager.io" rel="nofollow noreferrer">cert-manager</a> to manage certificate generation and renewal. To get cert-manager to inter-operate with OpenShift routes, you need to create <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resources that reference the generated certificates; OpenShift will transform these into appropriate Route resources.</p>
<p>I recently put together a demonstration of this that you can find <a href="https://github.com/larsks/cert-manager-routes-demo/blob/main/README.md" rel="nofollow noreferrer">here</a>. See the link for complete details, but essentially we create an Ingress resource like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-server
annotations:
cert-manager.io/issuer: demo-issuer
spec:
tls:
- hosts:
- demo-server.example.com
secretName: demo-server-certificate
rules:
- host: demo-server.example.com
http:
paths:
- backend:
service:
name: demo-server
port:
name: http
path: /
pathType: Prefix
</code></pre>
<p>Cert-manager sees the <code>cert-manager.io/issuer</code> annotation and generates a certificate in the Secret named by <code>spec.tls.0.secretName</code>, and then OpenShift generates a Route in which the certificate is embedded.</p>
<p>Note that while we're using cert-manager with LetsEncrypt, that's not a requirement; cert-manager supports a number of issuers.</p>
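<p>For completeness, the <code>demo-issuer</code> referenced by the annotation above is just an ordinary cert-manager issuer; a minimal ACME sketch (email and solver details are placeholders you would adapt) looks something like:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: demo-issuer
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: demo-issuer-account-key
    solvers:
    - http01:
        ingress: {}
</code></pre>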
<p>Cert-manager will take care of renewing these certificates automatically.</p>
| larsks |
<p>I am trying to get yaml for the particular storage class using Kubernetes python client, following code does the job:</p>
<pre><code>list_storage_class = v1.list_storage_class(_preload_content=False)
storageclass = list_storage_class.read().decode('utf-8') # <class 'urllib3.response.HTTPResponse'>
print(json.dumps(json.loads(storageclass),indent=2))
</code></pre>
<p>Is there a way to specify the storage class with the call? Likewise, is it possible to get the response directly as yaml, conforming to <code>k get sc NAME -o yaml</code>?</p>
| l00p | <p>Take a look at the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/StorageV1Api.md" rel="nofollow noreferrer"><code>StorageV1API</code></a> documentation. To get a single <code>StorageClass</code>, you want the <code>read_storage_class</code> method:</p>
<pre><code>>>> from kubernetes import client, config
>>> config.load_kube_config()
>>> v1 = client.StorageV1Api()
>>> sc = v1.read_storage_class('ocs-external-storagecluster-ceph-rbd')
>>> sc.metadata.name
'ocs-external-storagecluster-ceph-rbd'
</code></pre>
<p>If you want to dump the result to a YAML document:</p>
<pre><code>>>> import yaml
>>> print(yaml.safe_dump(sc.to_dict()))
allow_volume_expansion: true
allowed_topologies: null
api_version: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
description: Provides RWO Filesystem volumes, and RWO and RWX Block volumes
storageclass.kubernetes.io/is-default-class: 'true'
labels:
app.kubernetes.io/instance: cluster-resources-ocp-staging
...
</code></pre>
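<p>If you want output whose field names match <code>kubectl get sc NAME -o yaml</code> (camelCase rather than the client's snake_case shown above), you can ask for the raw API response instead, the same way you already did with <code>_preload_content=False</code>; a rough sketch:</p>
<pre><code>>>> import json, yaml
>>> raw = v1.read_storage_class('ocs-external-storagecluster-ceph-rbd', _preload_content=False)
>>> print(yaml.safe_dump(json.loads(raw.data)))
</code></pre>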
| larsks |
<p>What are the ports opened by kube-proxy for? Why does it listen on so many ports?
From my node, I can see that kube-proxy is listening on a lot of ports. Can someone explain to me why it is listening on so many ports and what they are for?
The output is below:</p>
<pre><code>[root@runsdata-test-0001 ~]# netstat -antup|grep kube-proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 14370/kube-proxy
tcp 0 0 10.0.0.154:59638 10.0.0.154:6443 ESTABLISHED 14370/kube-proxy
tcp6 0 0 :::31860 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::11989 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::26879 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::8100 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::10055 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::27688 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::29932 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::4303 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::31504 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::10256 :::* LISTEN 14370/kube-proxy
tcp6 0 0 :::21201 :::* LISTEN 14370/kube-proxy
[root@runsdata-test-0001 ~]# ss -antup|grep kube-proxy
tcp LISTEN 0 128 127.0.0.1:10249 *:* users:(("kube-proxy",pid=14370,fd=9))
tcp ESTAB 0 0 10.0.0.154:59638 10.0.0.154:6443 users:(("kube-proxy",pid=14370,fd=6))
tcp LISTEN 0 128 [::]:31860 [::]:* users:(("kube-proxy",pid=14370,fd=16))
tcp LISTEN 0 128 [::]:11989 [::]:* users:(("kube-proxy",pid=14370,fd=18))
tcp LISTEN 0 128 [::]:26879 [::]:* users:(("kube-proxy",pid=14370,fd=11))
tcp LISTEN 0 128 [::]:8100 [::]:* users:(("kube-proxy",pid=14370,fd=17))
tcp LISTEN 0 128 [::]:10055 [::]:* users:(("kube-proxy",pid=14370,fd=14))
tcp LISTEN 0 128 [::]:27688 [::]:* users:(("kube-proxy",pid=14370,fd=13))
tcp LISTEN 0 128 [::]:29932 [::]:* users:(("kube-proxy",pid=14370,fd=12))
tcp LISTEN 0 128 [::]:4303 [::]:* users:(("kube-proxy",pid=14370,fd=10))
tcp LISTEN 0 128 [::]:31504 [::]:* users:(("kube-proxy",pid=14370,fd=3))
tcp LISTEN 0 128 [::]:10256 [::]:* users:(("kube-proxy",pid=14370,fd=8))
tcp LISTEN 0 128 [::]:21201 [::]:* users:(("kube-proxy",pid=14370,fd=15))
</code></pre>
<p>As can be seen from the following results, the ports that kube-proxy listens on do not correspond to the port of every service of type ClusterIP or NodePort; most service ports are not being listened on.</p>
<pre><code>[root@runsdata-test-0001 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
admin-dashboard ClusterIP 10.0.6.133 <none> 8652/TCP 76d app=admin-dashboard
basic-customer-service-web ClusterIP 10.0.6.70 <none> 80/TCP 88d app=basic-customer-service-web
cloud-agent-dashboard-web ClusterIP 10.0.6.82 <none> 80/TCP 88d app=cloud-agent-dashboard-web
config-server ClusterIP 10.0.6.199 <none> 8100/TCP 17d app=config-server
content-management-service-v2-0 ClusterIP 10.0.6.149 <none> 8511/TCP 88d app=content-management-service-v2-0
customer-service-web-v1 ClusterIP 10.0.6.64 <none> 80/TCP 88d app=customer-service-web-v1
customer-service-web-v2 ClusterIP 10.0.6.12 <none> 80/TCP 88d app=customer-service-web-v2
default-http-backend ClusterIP 10.0.6.102 <none> 80/TCP 62d k8s-app=default-http-backend
file-server ClusterIP 10.0.6.221 <none> 80/TCP 88d app=file-server
glusterfs-cluster ClusterIP 10.0.6.197 <none> 1990/TCP 88d <none>
glusterfs-dynamic-2364ef3c-21d9-4b57-8416-3bec33191c63 ClusterIP 10.0.6.145 <none> 1/TCP 76d <none>
glusterfs-dynamic-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b ClusterIP 10.0.6.139 <none> 1/TCP 76d <none>
glusterfs-dynamic-65ab49bf-ea94-471a-be8a-ba9a32eca3f2 ClusterIP 10.0.6.72 <none> 1/TCP 76d <none>
glusterfs-dynamic-86817d19-5173-4dfb-a09f-b27785d62619 ClusterIP 10.0.6.42 <none> 1/TCP 76d <none>
glusterfs-dynamic-8b31e26e-b33d-4ddf-8604-287b015f4463 ClusterIP 10.0.6.40 <none> 1/TCP 76d <none>
glusterfs-dynamic-8ede2720-863d-4329-8c7a-7bc2a7f540e4 ClusterIP 10.0.6.148 <none> 1/TCP 76d <none>
glusterfs-dynamic-b0d2f15d-847c-44e6-8272-0390d42806d1 ClusterIP 10.0.6.185 <none> 1/TCP 76d <none>
glusterfs-dynamic-b16b2a65-d21d-412e-88b5-ca5fb5ce8626 ClusterIP 10.0.6.29 <none> 1/TCP 76d <none>
glusterfs-dynamic-ee1be4cc-d90f-4ac4-a662-6a6fdc25e628 ClusterIP 10.0.6.251 <none> 1/TCP 76d <none>
hr-dashboard-web-global ClusterIP 10.0.6.66 <none> 80/TCP 88d app=hr-dashboard-web-global
hystrix-dashboard ClusterIP 10.0.6.87 <none> 8650/TCP 48d app=hystrix-dashboard
kafka-hs ClusterIP None <none> 9092/TCP 76d app=kafka
kafka-server ClusterIP 10.0.6.209 <none> 9092/TCP 76d app=kafka
mongo-master ClusterIP 10.0.6.39 <none> 27017/TCP 88d name=mongo
mongodb-1 ClusterIP 10.0.6.11 <none> 27017/TCP 17d <none>
mongodb-2 ClusterIP 10.0.6.55 <none> 27017/TCP 17d <none>
mongodb-3 ClusterIP 10.0.6.114 <none> 27017/TCP 17d <none>
mysql-master ClusterIP 10.0.6.201 <none> 3306/TCP 88d <none>
news-content-management-web ClusterIP 10.0.6.93 <none> 80/TCP 61d app=news-content-management-web
peony-ali-api ClusterIP 10.0.6.151 <none> 9220/TCP 62d app=peony-ali-api
peony-app-update ClusterIP 10.0.6.138 <none> 9410/TCP 87d app=peony-app-update
peony-authenticate-storage-service-v3-0 ClusterIP 10.0.6.37 <none> 8241/TCP 88d app=peony-authenticate-storage-service-v3-0
peony-hr-file-server ClusterIP 10.0.6.53 <none> 80/TCP 87d app=peony-hr-file-server
peony-infrastructure-gateway ClusterIP 10.0.6.132 <none> 8020/TCP 60d app=peony-infrastructure-gateway
peony-log-file-server ClusterIP 10.0.6.54 <none> 80/TCP 14d app=peony-log-file-server
peony-media-hr-file-server ClusterIP 10.0.6.129 <none> 80/TCP 87d app=peony-media-hr-file-server
peony-medical-file-server ClusterIP 10.0.6.31 <none> 80/TCP 87d app=peony-medical-file-server
peony-online-file-server ClusterIP 10.0.6.217 <none> 80/TCP 87d app=peony-online-file-server
peony-payment-service ClusterIP 10.0.6.38 <none> 9400/TCP 87d app=peony-payment-service
peony-sms-api ClusterIP 10.0.6.204 <none> 9200/TCP 87d app=peony-sms-api
peony-sms-gateway ClusterIP 10.0.6.7 <none> 80/TCP 87d app=peony-sms-gateway
peony-sms-sender ClusterIP 10.0.6.135 <none> 9211/TCP 87d app=peony-sms-sender
peony-sms-web ClusterIP 10.0.6.74 <none> 80/TCP 61d app=peony-sms-web
plum-gatherer-api ClusterIP 10.0.6.239 <none> 80/TCP 87d app=plum-gatherer-api
plum-gatherer-gateway ClusterIP 10.0.6.67 <none> 7010/TCP 87d app=plum-gatherer-gateway
plum-live-gatherer ClusterIP 10.0.6.187 <none> 7011/TCP 87d app=plum-live-gatherer
rabbit-server ClusterIP 10.0.6.125 <none> 5672/TCP,15672/TCP 68d app=rabbit-server
redis-foundation-master ClusterIP 10.0.6.127 <none> 6379/TCP 17d name=redis-foundation
redis-sentinel-0 ClusterIP 10.0.6.203 <none> 36379/TCP 20d <none>
redis-sentinel-1 ClusterIP 10.0.6.10 <none> 36379/TCP 20d <none>
redis-sentinel-2 ClusterIP 10.0.6.222 <none> 36379/TCP 20d <none>
redis-sms-master ClusterIP 10.0.6.50 <none> 6379/TCP 87d name=redis-sms
redis-user-master ClusterIP 10.0.6.71 <none> 6379/TCP 87d name=redis-user
si-console-web ClusterIP 10.0.6.88 <none> 80/TCP 87d app=si-console-web
si-gov-admin-web ClusterIP 10.0.6.152 <none> 80/TCP 87d app=si-gov-admin-web
society-admin-web ClusterIP 10.0.6.105 <none> 80/TCP 86d app=society-admin-web
society-admin-web-v2 ClusterIP 10.0.6.119 <none> 80/TCP 49d app=society-admin-web-v2
society-app-config-service-v2-0 ClusterIP 10.0.6.112 <none> 8013/TCP 88d app=society-app-config-service-v2-0
society-assistance-service-v1-0 ClusterIP 10.0.6.238 <none> 8531/TCP 88d app=society-assistance-service-v1-0
society-authenticate-storage-service-v3-0 ClusterIP 10.0.6.177 <none> 8241/TCP 35d app=society-authenticate-storage-service-v3-0
society-authorization-server ClusterIP 10.0.6.183 <none> 10681/TCP,9010/TCP 88d app=society-authorization-server
society-certification-service-v2-0 ClusterIP 10.0.6.198 <none> 8215/TCP 88d app=society-certification-service-v2-0
society-config-app-api ClusterIP 10.0.6.9 <none> 80/TCP 80d app=society-config-app-api
society-employment-mobile-universal-web ClusterIP 10.0.6.247 <none> 80/TCP 88d app=society-employment-mobile-universal-web
society-employment-service-v1-0 ClusterIP 10.0.6.211 <none> 8541/TCP 87d app=society-employment-service-v1-0
society-im-service-v1-0 ClusterIP 10.0.6.235 <none> 8551/TCP 87d app=society-im-service-v1-0
society-insurance-app-api ClusterIP 10.0.6.6 <none> 80/TCP 88d app=society-insurance-app-api
society-insurance-foundation-service-v2-0 ClusterIP 10.0.6.49 <none> 8223/TCP 88d app=society-insurance-foundation-service-v2-0
society-insurance-gateway ClusterIP 10.0.6.202 <none> 8020/TCP 88d app=society-insurance-gateway
society-insurance-management-service-v2-0 NodePort 10.0.6.140 <none> 8235:31860/TCP 63d app=society-insurance-management-service-v2-0
society-insurance-resident-service-v2-0 ClusterIP 10.0.6.5 <none> 8311/TCP 88d app=society-insurance-resident-service-v2-0
society-insurance-storage-service-v2-0 ClusterIP 10.0.6.2 <none> 8228/TCP 88d app=society-insurance-storage-service-v2-0
society-insurance-user-service-v2-0 ClusterIP 10.0.6.23 <none> 8221/TCP 88d app=society-insurance-user-service-v2-0
society-insurance-web-api ClusterIP 10.0.6.236 <none> 80/TCP 88d app=society-insurance-web-api
society-material-h5-web ClusterIP 10.0.6.43 <none> 80/TCP 73d app=society-material-h5-web
society-material-service-v1-0 ClusterIP 10.0.6.241 <none> 8261/TCP 67d app=society-material-service-v1-0
society-material-web ClusterIP 10.0.6.65 <none> 80/TCP 83d app=society-material-web
society-notice-service-v1-0 ClusterIP 10.0.6.16 <none> 8561/TCP 14d app=society-notice-service-v1-0
society-online-business-admin-web ClusterIP 10.0.6.230 <none> 80/TCP 88d app=society-online-business-admin-web
society-online-business-configure-h5-web ClusterIP 10.0.6.8 <none> 80/TCP 88d app=society-online-business-configure-h5-web
society-online-business-mobile-web ClusterIP 10.0.6.137 <none> 80/TCP 88d app=society-online-business-mobile-web
society-online-business-mobile-web-v2-0 ClusterIP 10.0.6.108 <none> 80/TCP 87d app=society-online-business-mobile-web-v2-0
society-online-business-mobile-web-v2-1 ClusterIP 10.0.6.128 <none> 80/TCP 87d app=society-online-business-mobile-web-v2-1
society-online-business-processor-service-v1-0 ClusterIP 10.0.6.99 <none> 10042/TCP 88d app=global-online-business-processor-service-v1-0
society-online-business-service-v2-0 ClusterIP 10.0.6.186 <none> 8216/TCP 88d app=society-online-business-service-v2-0
society-online-business-service-v2-1 ClusterIP 10.0.6.162 <none> 8216/TCP 88d app=society-online-business-service-v2-1
society-operation-gateway ClusterIP 10.0.6.4 <none> 8010/TCP 88d app=society-operation-gateway
society-operation-user-service-v1-1 ClusterIP 10.0.6.35 <none> 8012/TCP 88d app=society-operation-user-service-v1-1
society-operator-management-service-v1-0 ClusterIP 10.0.6.234 <none> 8271/TCP 83d app=society-operator-management-service-v1-0
society-operator-management-web ClusterIP 10.0.6.150 <none> 80/TCP 77d app=society-operator-management-web
society-portal-mobile-universal-web ClusterIP 10.0.6.244 <none> 80/TCP 88d app=society-portal-mobile-universal-web
society-portal-nationwide-web ClusterIP 10.0.6.237 <none> 80/TCP 88d app=society-portal-nationwide-web
society-proxy-access-service-v2-0 ClusterIP 10.0.6.243 <none> 8411/TCP 58d app=society-proxy-access-service-v2-0
society-resident-service-v3-0 ClusterIP 10.0.6.63 <none> 8231/TCP 88d app=society-resident-service-v3-0
society-training-exam-web ClusterIP 10.0.6.83 <none> 80/TCP 37d app=society-training-exam-web
society-training-mobile-universal-web ClusterIP 10.0.6.210 <none> 80/TCP 88d app=society-training-mobile-universal-web
society-training-service-v1-0 ClusterIP 10.0.6.36 <none> 8521/TCP 88d app=society-training-service-v1-0
society-user-service-v2-0 ClusterIP 10.0.6.216 <none> 8211/TCP 87d app=society-user-service-v2-0
society-user-service-v3-0 ClusterIP 10.0.6.227 <none> 8211/TCP 88d app=society-user-service-v3-0
sports-training-web ClusterIP 10.0.6.123 <none> 80/TCP 87d app=sports-training-web
static-file-server ClusterIP 10.0.6.73 <none> 80/TCP 88d app=static-file-server
traefik-ingress-controller ClusterIP 10.0.6.225 <none> 80/TCP,6080/TCP,443/TCP 17d app=traefik-ingress-controller
turbine-server ClusterIP 10.0.6.160 <none> 8989/TCP 76d app=turbine-server
weedfs-filer ClusterIP 10.0.6.32 <none> 8080/TCP 19d app=weedfs-filer
weedfs-master ClusterIP 10.0.6.91 <none> 9333/TCP 87d app=weedfs-master
weedfs-volume-1 ClusterIP 10.0.6.79 <none> 8080/TCP 87d app=weedfs-volume-1
zipkin-server ClusterIP 10.0.6.184 <none> 9411/TCP 48d app=zipkin-server
zk-cs ClusterIP 10.0.6.194 <none> 2181/TCP 76d app=zk
zk-hs ClusterIP None <none> 2888/TCP,3888/TCP 76d app=zk
[root@runsdata-test-0001 ~]# ss -antup|grep kube-proxy
tcp LISTEN 0 128 127.0.0.1:10249 *:* users:(("kube-proxy",pid=14370,fd=9))
tcp ESTAB 0 0 10.0.0.154:59638 10.0.0.154:6443 users:(("kube-proxy",pid=14370,fd=6))
tcp LISTEN 0 128 [::]:31860 [::]:* users:(("kube-proxy",pid=14370,fd=16))
tcp LISTEN 0 128 [::]:11989 [::]:* users:(("kube-proxy",pid=14370,fd=18))
tcp LISTEN 0 128 [::]:26879 [::]:* users:(("kube-proxy",pid=14370,fd=11))
tcp LISTEN 0 128 [::]:8100 [::]:* users:(("kube-proxy",pid=14370,fd=17))
tcp LISTEN 0 128 [::]:10055 [::]:* users:(("kube-proxy",pid=14370,fd=14))
tcp LISTEN 0 128 [::]:27688 [::]:* users:(("kube-proxy",pid=14370,fd=13))
tcp LISTEN 0 128 [::]:29932 [::]:* users:(("kube-proxy",pid=14370,fd=12))
tcp LISTEN 0 128 [::]:4303 [::]:* users:(("kube-proxy",pid=14370,fd=10))
tcp LISTEN 0 128 [::]:31504 [::]:* users:(("kube-proxy",pid=14370,fd=3))
tcp LISTEN 0 128 [::]:10256 [::]:* users:(("kube-proxy",pid=14370,fd=8))
tcp LISTEN 0 128 [::]:21201 [::]:* users:(("kube-proxy",pid=14370,fd=15))
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 31860
society-insurance-management-service-v2-0 NodePort 10.0.6.140 <none> 8235:31860/TCP 63d app=society-insurance-management-service-v2-0
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 11989
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 26879
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 8100
config-server ClusterIP 10.0.6.199 <none> 8100/TCP 17d app=config-server
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 10055
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 27688
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 29932
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 4303
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 31504
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 10256
[root@runsdata-test-0001 ~]# kubectl get svc -o wide |grep 21201
[root@runsdata-test-0001 ~]#
</code></pre>
| Esc | <p>Based on the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>kube-proxy reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends</p>
</blockquote>
<p>Basically, it listens for the active <code>Service</code>s and forwards them across your cluster. In your output, 10249 and 10256 are kube-proxy's own metrics and health-check endpoints, and 31860 is the NodePort of <code>society-insurance-management-service-v2-0</code>; the other listeners are ports kube-proxy holds open on behalf of Services, and since you only listed the <code>default</code> namespace, the remaining ones most likely belong to Services in other namespaces.</p>
<p>You can get the list of registered services across all namespaces with:</p>
<pre><code>kubectl get svc --all-namespaces
</code></pre>
| Farcaller |
<p>I wish to use <code>promtool</code> to run unit tests against alerts that I have setup e.g.</p>
<pre><code>promtool test rules alert-test.yaml
</code></pre>
<p>Here's an example test file:</p>
<pre><code># alert-test.yaml
rule_files:
- 'my-alert.yaml'
tests:
- name: 'Fire ManyRequests Alert'
interval: 1s # every second
input_series:
- series: rate(http_requests_total[5m])
values: '0+1x30' # starting at 0 requests per second increase to 30 requests per second
alert_rule_test:
- alertname: ManyRequests
eval_time: 10s
exp_alerts: # no alert
- alertname: ManyRequests
eval_time: 25s
exp_alerts:
- exp_labels:
severity: p2
exp_annotations:
description: Request increase too high
</code></pre>
<p>Nothing new here. And this would be no problem if <code>my-alert.yaml</code> looked like this:</p>
<pre><code>groups:
- name: alert_group_name
rules:
- alert: ManyRequests
expr: rate(http_requests_total[5m]) > 20
for: 5s
annotations:
description: Request increase too high
labels:
severity: P2
</code></pre>
<p>but instead I have a Kubernetes manifest that is being deployed, i.e. <code>my-alert.yaml</code>:</p>
<pre><code># my-alert.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
labels:
prometheus: prometheus-core-rules
role: alert-rules
name: tesla-maps-cert-alert-rules
spec:
groups:
- name: alert_group_name
rules:
- alert: ManyRequests
expr: rate(http_requests_total[5m]) > 20
for: 5s
annotations:
description: Request increase too high
labels:
severity: P2
</code></pre>
<p>so running <code>promtool test rules alert-test.yaml</code> throws this error:</p>
<pre><code>Unit Testing: alerts-test.yaml
FAILED:
my-alert.yaml: yaml: unmarshal errors:
line 1: field apiVersion not found in type rulefmt.RuleGroups
line 2: field kind not found in type rulefmt.RuleGroups
line 3: field metadata not found in type rulefmt.RuleGroups
line 8: field spec not found in type rulefmt.RuleGroups
</code></pre>
<p>What can I do to get this working?</p>
| Erich Shan | <blockquote>
<p>What can I do to get this working?</p>
</blockquote>
<p>Just extract the relevant portion of the YAML manifest:</p>
<pre><code>yq .spec my-alert.yaml > my-alert-for-testing.yaml
</code></pre>
<p>And modify your test file to reference the extracted file:</p>
<pre><code>rule_files:
- my-alert-for-testing.yaml
</code></pre>
<p>I'm using <a href="https://kislyuk.github.io/yq/" rel="nofollow noreferrer">this yq</a>, but <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer">the other one</a> should work as well.</p>
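<p>If you have mikefarah's yq (v4) instead, the equivalent extraction should be roughly:</p>
<pre><code>yq '.spec' my-alert.yaml > my-alert-for-testing.yaml
</code></pre>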
| larsks |
<p>I am trying to print out the kubernetes version and client version with Ansible however the output comes with slashes and how can I remove the brackets for a more cleaner output?</p>
<pre><code>- name: Kubernetes version
run_once: true
changed_when: False
shell: |
kubectl version
delegate_to: localhost
register: kubernetes_version
</code></pre>
<p>Output:</p>
<pre><code> name: Output
run_once: true
delegate_to: localhost
debug:
msg: "{{ kubernetes_version.stdout_lines }}"
</code></pre>
<p>output:</p>
<pre><code>ok: [localhost] => {
"msg": [
"Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"e0fccafd69541e3750d460ba0f9743\", GitTreeState:\"clean\", BuildDate:\"2020-04-16T11:44:03Z\", GoVersion:\"
go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}",
"Server Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"e0fccafd69541e3750d460ba0f9743\", GitTreeState:\"clean\", BuildDate:\"2020-04-16T11:35:47Z\", GoVersion:\"
go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}"
]
}
</code></pre>
| user3270211 | <p>I'm replacing my original answer, because I was forgetting that
<code>kubectl version</code> can produce JSON output for us, which makes this
much easier.</p>
<p>By taking the output of <code>kubectl version -o json</code> and passing it
through the <code>from_json</code> filter, we can create an Ansible dictionary
variable from the result.</p>
<p>Then we can use a <code>debug</code> task to print out keys from this variable,
and I think you'll get something closer to what you want.</p>
<p>This playbook:</p>
<pre><code>- hosts: localhost
gather_facts: false
tasks:
- name: run kubectl version
command: kubectl version -o json
register: kv_raw
- set_fact:
kv: "{{ kv_raw.stdout | from_json }}"
- debug:
msg:
- "{{ kv.clientVersion }}"
- "{{ kv.serverVersion }}"
</code></pre>
<p>Will produce output like this:</p>
<pre><code>PLAY [localhost] ********************************************************************************************
TASK [run kubectl version] **********************************************************************************
changed: [localhost]
TASK [set_fact] *********************************************************************************************
ok: [localhost]
TASK [debug] ************************************************************************************************
ok: [localhost] => {
"msg": [
{
"buildDate": "2020-11-14T01:08:04Z",
"compiler": "gc",
"gitCommit": "6082e941e6d62f3a0c6ca8ba52927100948b1d0d",
"gitTreeState": "clean",
"gitVersion": "v1.18.2-0-g52c56ce",
"goVersion": "go1.13.15",
"major": "1",
"minor": "18",
"platform": "linux/amd64"
},
{
"buildDate": "2020-10-25T05:12:54Z",
"compiler": "gc",
"gitCommit": "45b9524",
"gitTreeState": "clean",
"gitVersion": "v1.18.3+45b9524",
"goVersion": "go1.13.4",
"major": "1",
"minor": "18+",
"platform": "linux/amd64"
}
]
}
PLAY RECAP **************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
| larsks |
<p>I have worked with Jenkins X which is Jenkins running in Kubernetes cluster and I am seeing a new feature in the <a href="https://marketplace.gcr.io/google/jenkins" rel="nofollow noreferrer">Google Cloud marketplace here</a>, which is offering Jenkins, are these same? </p>
| Rajib Mitra | <p>The Jenkins currently available in the Google Cloud Marketplace is the Jenkins server.
Jenkins X is a new project that uses Kubernetes and its ecosystem to provide a Kubernetes-native equivalent of the Jenkins server, with horizontal scaling, no single point of failure, and a small footprint.</p>
| Rob Davies |
<p>I am trying to kill a container using client-go and the <a href="https://github.com/kubernetes-sigs/e2e-framework/blob/1af0fd64ebd2474f40cbb1a29c8997ed56aba89d/klient/k8s/resources/resources.go#L293" rel="nofollow noreferrer">e2e</a> framework in Go, but I am not able to do it successfully.
An example of the full implementation can be accessed <a href="https://github.com/kubernetes-sigs/e2e-framework/tree/1af0fd64ebd2474f40cbb1a29c8997ed56aba89d/examples/pod_exec" rel="nofollow noreferrer">here</a>; apart from this, I am using the kind image "kindest/node:v1.26.6".</p>
<p>I have tried running the command with the following pieces of code, but none of them work.</p>
<pre><code>args := []string{"kill", "1"}
var stdout, stderr bytes.Buffer
err := cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<pre><code>args = []string{"/bin/sh", "-c", "'kill", "1'"}
err = cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<pre><code>args = []string{"/bin/sh", "-c", "\"kill 1\""}
err = cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<p>But all of them give errors. Some give</p>
<p>exec failed: unable to start container process: exec: "kill": executable file not found in $PATH: unknown"</p>
<p>while others give</p>
<p>"command terminated with exit code 127" or
"command terminated with exit code 2"</p>
<p>I have also tried the following, and it works, but in this case I have a dependency on kubectl, which I want to avoid.</p>
<pre><code>cmdString := fmt.Sprintf("/c kubectl exec -it %s -n %s -c %s -- bash -c 'kill 1'", podName, namespace, containerName)
args := strings.Split(cmdString, " ")
cmd := exec.Command("powershell", args...)
err := cmd.Run()
</code></pre>
| Aryaman | <p>Your first attempt looks okay at first glance, although I don't see an <code>ExecInPod</code> method in the <code>client-go</code> package. Because you haven't provided a reproducible example I haven't tried building and running your code.</p>
<p>There's no guarantee that the <code>kill</code> command is available inside a particular container. If you update your question to include details about your deployment manifests -- so we can see what image you're using -- we can provide you with a better answer.</p>
<hr />
<p>Your second two examples are simply invalid:</p>
<h2>1</h2>
<pre><code>args = []string{"/bin/sh", "-c", "'kill", "1'"}
</code></pre>
<p>Here you're trying to run the command <code>'kill</code>, which doesn't exist. You need to pass a single string to <code>sh -c</code>, so this would need to look like:</p>
<pre><code>args = []string{"/bin/sh", "-c", "kill 1"}
</code></pre>
<p>The argument to <code>-c</code> is the script to run; any additional parameters are provided as arguments to that script (so in this example, the shell would see <code>1'</code> in <code>$0</code>).</p>
<p>You could avoid the shell altogether and just run:</p>
<pre><code>args = []string{"kill", "1"}
</code></pre>
<p>But again, both of these solutions require that the target container have the <code>kill</code> command available.</p>
<h2>2</h2>
<pre><code>args = []string{"/bin/sh", "-c", "\"kill 1\""}
</code></pre>
<p>Here you're trying to run the command <code>"kill 1"</code>, which against doesn't exist; if you were to run this from the command line you would see:</p>
<pre><code>$ "kill 1"
bash: kill 1: command not found...
</code></pre>
<p>The correct syntax would be as shown in the previous section.</p>
| larsks |
<p>I have a executed <code>command1 | command2</code> which runs from inside a container.</p>
<p>I am trying to run the same command by passing it to the running container, but it doesn't work.
I tried <code>kubectl -n namespace exec -it pod -- 'command1 | command2'</code></p>
<p>Any ideas? If pipes are not supported, any alternatives to run these 2 commands in sequence ?</p>
| USR | <p>The arguments to the <code>kubectl exec</code> command are executed directly, without a shell. Because you're passing not a single command but a shell expression, this isn't going to work.</p>
<p>The solution is to explicitly invoke the shell:</p>
<pre><code>kubectl -n namespace exec -it pod -- sh -c 'command1 | command2'
</code></pre>
<p>For example:</p>
<pre><code>$ kubectl exec -it fedora0 -- sh -c 'echo hello world | sed s/hello/goodbye/'
goodbye world
</code></pre>
| larsks |
<p>I have created a K8s service account token using the following command:</p>
<pre><code>kubectl create serviceaccount test-sat-account
</code></pre>
<p>I have deployment yaml for a dotnet service and I am importing the above token in a volume as below;</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
serviceAccountName: test-sat-account
containers:
- name: my-container
image: ""
imagePullPolicy: Always
volumeMounts:
- name: my-token
mountPath: /var/run/secrets/tokens
env:
- name: SATToken
value: ****<Can we Pass the SAT token here?>****
ports:
- name: http
containerPort: 80
protocol: TCP
volumes:
- name: my-token
projected:
sources:
- serviceAccountToken:
path: my-token
audience: test-audience
</code></pre>
<p>Now, instead of reading the token from the mountpath in the code, I want to pass the value of the token to an environment variable in the above yaml.
Is it possible to do that?
If yes, how?</p>
| Abhijit | <p>Arrange for the token to be stored in a Secret resource:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: test-sat-account-token
annotations:
kubernetes.io/service-account.name: test-sat-account
type: kubernetes.io/service-account-token
</code></pre>
<p>Now, use that Secret as the source for an environment value:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
serviceAccountName: test-sat-account
containers:
- name: my-container
image: ""
imagePullPolicy: Always
env:
- name: SATToken
valueFrom:
secretKeyRef:
name: test-sat-account-token
key: token
ports:
- name: http
containerPort: 80
protocol: TCP
</code></pre>
| larsks |
<p>I have a configmap.yaml file as below :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: abc
namespace: monitoring
labels:
app: abc
version: 0.17.0
data:
application.yml: |-
myjava:
security:
enabled: true
abc:
server:
access-log:
enabled: ${myvar}. ## this is not working
</code></pre>
<p>"myvar" value is available in pod as shell environment variable from secretkeyref field in deployment file.</p>
<p>Now I want to replace myvar shell environment variable in configmap above i.e before application.yml file is available in pod it should have replaced myvar value. which is not working i tried ${myvar} and $(myvar) and "#{ENV['myvar']}"</p>
<p>Is that possible in kubernetes configmap to reference with in data section pod's environment variable if yes how or should i need to write a script to replace with sed -i application.yml etc.</p>
| ankur-AJ | <blockquote>
<p>Is it possible in a Kubernetes ConfigMap to reference a pod's environment variable within the data section?</p>
</blockquote>
<p>That's not possible. A <code>ConfigMap</code> is not associated with a particular pod, so there's no way to perform the sort of variable substitution you're asking about. You would need to implement this logic inside your containers (fetch the <code>ConfigMap</code>, perform variable substitution yourself, then consume the data).</p>
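<p>For illustration, one common workaround is to mount the ConfigMap as a template and render it with <code>envsubst</code> when the container starts. This is only a sketch under assumptions not in the question: the image ships <code>envsubst</code> (from the gettext package), <code>my-image</code>/<code>my-secret</code> are placeholders, and <code>my-app</code> stands in for your real startup command.</p>
<pre><code>containers:
- name: app
  image: my-image:latest           # assumption: image contains envsubst
  command: ["/bin/sh", "-c"]
  args:
    # render ${myvar} in the mounted file, then start the application
    - envsubst < /config-template/application.yml > /config/application.yml && exec my-app
  env:
    - name: myvar
      valueFrom:
        secretKeyRef:
          name: my-secret          # hypothetical secret holding myvar
          key: myvar
  volumeMounts:
    - name: config-template
      mountPath: /config-template
    - name: rendered-config
      mountPath: /config
volumes:
  - name: config-template
    configMap:
      name: abc                    # the ConfigMap from the question
  - name: rendered-config
    emptyDir: {}
</code></pre>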
| larsks |
<p>I have a pod which I want to insert the following annotation for:</p>
<pre><code>annotationexample: |
[
{
"a": "b",
"c": "d",
"e": [
"f"
]
}
]
</code></pre>
<p>When I try to apply this using <code>kubectl annotate --overwrite</code> I get an error saying <code>name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character</code>. The <code>[</code>, <code>]</code>, <code>{</code> and <code>}</code> characters are causing issues, but I have previously been able to apply the annotation through a helm chart. But I am seeking a solution through just using kubectl to only edit annotations. I don't want to create a new pod from a yaml file (<code>kubectl apply -f</code>) as then extra fields are added.</p>
| Sabo Boz | <p>It sounds like you have an error in your command line. If you want to set that annotation on a resource, you would run a command like this:</p>
<pre><code>kubectl annotate pod/example annotationexample='[
{
"a": "b",
"c": "d",
"e": [
"f"
]
}
]
'
</code></pre>
<p>That results in:</p>
<pre><code>$ kubectl get pod example -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
annotationexample: |
[
{
"a": "b",
"c": "d",
"e": [
"f"
]
}
]
.
.
.
</code></pre>
| larsks |
<p>I have been using the following snippet to manage Kubernetes autoscaling with Terraform:</p>
<pre><code>resource "helm_release" "cluster-autoscaler" {
depends_on = [
module.eks
]
name = "cluster-autoscaler"
namespace = local.k8s_service_account_namespace
repository = "https://kubernetes.github.io/autoscaler"
chart = "cluster-autoscaler"
version = "9.10.7"
create_namespace = false
</code></pre>
<p>While all of this had been working for months (GitLab CI/CD), it has suddenly stopped working and throws the following error.</p>
<pre><code>module.review_vpc.helm_release.cluster-autoscaler: Refreshing state... [id=cluster-autoscaler]
β·
β Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
β
β with module.review_vpc.helm_release.cluster-autoscaler,
β on ..\..\modules\aws\eks.tf line 319, in resource "helm_release" "cluster-autoscaler":
β 319: resource "helm_release" "cluster-autoscaler" {
</code></pre>
<p>I am using AWS EKS with kubernetes version 1.21.</p>
<p>The terraform providers are as follows</p>
<pre><code>terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "1.14.0"
}
}
</code></pre>
<p><strong>UPDATE 1</strong></p>
<p>Here is the module for eks</p>
<pre><code>module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.24.0"
</code></pre>
| CuriousMind | <p>I had to make a couple of changes to the Terraform scripts (not sure why they were not required earlier).</p>
<ul>
<li><p>Added helm to the required_providers section:</p>
<pre><code>helm = {
  source  = "hashicorp/helm"
  version = "2.3.0"
}
</code></pre>
</li>
<li><p>Replaced the token generation from</p>
<pre><code>exec {
  api_version = "client.authentication.k8s.io/v1alpha1"
  args        = ["eks", "get-token", "--cluster-name", var.eks_cluster_name]
  command     = "aws"
}
</code></pre>
</li>
</ul>
<p>to</p>
<pre><code>token = data.aws_eks_cluster_auth.cluster.token
</code></pre>
<p>Note that I am using <code>hashicorp/terraform:1.0.11</code> image on Gitlab runner to execute Terraform Code. Hence manually installing kubectl or aws CLI is not applicable in my case.</p>
| CuriousMind |
<p>I have a Docker-container-based ReactJS app; a shell script is defined in the Docker image as the ENTRYPOINT, and I'm able to run <code>docker run image-name</code> successfully.</p>
<p>Now the task is to use this Docker image for a Kubernetes deployment using standard deployment.yaml file templates, something like the following:</p>
<pre><code># Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
terminationGracePeriodSeconds: 120
containers:
- name: my-app
imagePullPolicy: Always
image: my-docker-image
command: ["/bin/bash"]
args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
</code></pre>
<hr />
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
type: NodePort
selector:
app: my-app
ports:
- port: 3000
targetPort: 3000
protocol: TCP
nodePort: 31110
</code></pre>
<hr />
<pre><code>spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 3000
</code></pre>
<p>When I do <code>kubectl apply -f mydeployment.yaml</code>, it creates the required pod, but the entrypoint.sh script is not executed when the pod starts, unlike when running the Docker image directly. Can someone please point out what is wrong with the above yaml file? Am I missing or doing something incorrectly?</p>
<p>I also tried directly calling <code>npm run start</code> in <code>command []</code> within the yaml, but no luck. I can enter the pod container using <code>kubectl exec</code>, but I don't see the React app running; I can manually execute entrypoint.sh and see the required output in the browser.</p>
<p>Edit: Adding kubectl logs and describe output</p>
<p>logs: when I removed command/args from yaml and applied deploy.yaml, I get following logs as is, until starting the dev server line, there's nothing beyond that.</p>
<pre><code>> myapp start /app
> react-scripts start
βΉ ο½’wdsο½£: Project is running at http://x.x.x.x/
βΉ ο½’wdsο½£: webpack output is served from
βΉ ο½’wdsο½£: Content not from webpack is served from /app/public
βΉ ο½’wdsο½£: 404s will fallback to /
Starting the development server...
</code></pre>
<p>Describe output</p>
<pre><code>Name: my-view-85b597db55-72jr8
Namespace: default
Priority: 0
Node: my-node/x.x.x.x
Start Time: Fri, 16 Apr 2021 11:13:20 +0800
Labels: app=my-app
pod-template-hash=85b597db55
Annotations: cni.projectcalico.org/podIP: x.x.x.x/xx
cni.projectcalico.org/podIPs: x.x.x.x/xx
Status: Running
IP: x.x.x.x
IPs:
IP: x.x.x.x
Controlled By: ReplicaSet/my-view-container-85b597db55
Containers:
my-ui-container:
Container ID: containerd://671a1db809b7f583b2f3702e06cee3477ab1412d1e4aa8ac93106d8583f2c5b6
Image: my-docker-image
Image ID: my-docker-image@sha256:29f5fc74aa0302039c37d14201f5c85bc8278fbeb7d70daa2d867b7faa6d6770
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:41 +0800
Finished: Fri, 16 Apr 2021 11:13:43 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:24 +0800
Finished: Fri, 16 Apr 2021 11:13:26 +0800
Ready: False
Restart Count: 2
Environment:
MY_ENVIRONMENT_NAME: TEST_ENV
MY_SERVICE_NAME: my-view-service
MY_SERVICE_MAIN_PORT: 3000
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9z8bw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-9z8bw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9z8bw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/my-view-container-85b597db55-72jr8 to my-host
Normal Pulled 31s kubelet Successfully pulled image "my-docker-image" in 184.743641ms
Normal Pulled 28s kubelet Successfully pulled image "my-docker-image" in 252.382942ms
Normal Pulling 11s (x3 over 31s) kubelet Pulling image "my-docker-image"
Normal Pulled 11s kubelet Successfully pulled image "my-docker-image" in 211.2478ms
Normal Created 11s (x3 over 31s) kubelet Created container my-view-container
Normal Started 11s (x3 over 31s) kubelet Started container my-view-container
Warning BackOff 8s (x2 over 26s) kubelet Back-off restarting failed container
</code></pre>
<p>and my entrypoint.sh is</p>
<pre><code>#!/bin/bash
( export REACT_APP_ENV_VAR=env_var_value;npm run start )
exec "$@"
</code></pre>
| S N | <p>When you write this in a pod description:</p>
<pre><code> containers:
- name: my-app
imagePullPolicy: Always
image: my-docker-image
command: ["/bin/bash"]
args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
</code></pre>
<p>The <code>command</code> argument overrides the container <code>ENTRYPOINT</code>. The above
is roughly equivalent to:</p>
<pre><code>docker run --entrypoint /bin/bash my-docker-image ...args here...
</code></pre>
<p>If you want to use the <code>ENTRYPOINT</code> from the image, then just set <code>args</code>.</p>
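<p>For example, a minimal sketch that keeps the image's own <code>ENTRYPOINT</code> (so entrypoint.sh runs) would look like this:</p>
<pre><code> containers:
 - name: my-app
   imagePullPolicy: Always
   image: my-docker-image
   # no `command:` here, so the image's ENTRYPOINT is used;
   # omit `args:` entirely to use the image's default CMD as well
</code></pre>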
| larsks |
<p>I have setup a Postgres pod on my Kubernetes cluster, and I am trying to troubleshoot it a bit.</p>
<p>I would like to use the <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">official Postgres image</a> and deploy it to my Kubernetes cluster using <code>kubectl</code>. Given that my Postgres server connection details are:</p>
<pre><code>host: mypostgres
port: 5432
username: postgres
password: 12345
</code></pre>
<p>And given that I <em>think</em> the command will be <em>something</em> like:</p>
<pre><code>kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh
</code></pre>
<p>What do I need to do so that I can deploy this image to my cluster, connect to my Postgres server and start running SQL command against it (for troubleshooting purposes)?</p>
| hotmeatballsoup | <p>If you're primarily interested in troubleshooting, then you're probably looking for the <code>kubectl port-forward</code> command, which will expose a container port on your local host. First, you'll need to deploy the Postgres pod; you haven't shown what your pod manifest looks like, so I'm going to assume a <code>Deployment</code> like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgres
namespace: sandbox
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- env:
- name: POSTGRES_PASSWORD
value: secret
- name: POSTGRES_USER
value: example
- name: POSTGRES_DB
value: example
image: docker.io/postgres:13
name: postgres
ports:
- containerPort: 5432
name: postgres
protocol: TCP
volumeMounts:
- mountPath: /var/lib/postgresql
name: postgres-data
strategy: Recreate
volumes:
- emptyDir: {}
name: postgres-data
</code></pre>
<p>Once this is running, you can access postgres with the <code>port-forward</code>
command like this:</p>
<pre><code>kubectl -n sandbox port-forward deploy/postgres 5432:5432
</code></pre>
<p>This should result in:</p>
<pre><code>Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
</code></pre>
<p>And now we can connect to Postgres using <code>psql</code> and run queries
against it:</p>
<pre><code>$ psql -h localhost -U example example
psql (13.4)
Type "help" for help.
example=#
</code></pre>
<hr />
<p><code>kubectl port-forward</code> is only useful as a troubleshooting mechanism. If
you were trying to access your <code>postgres</code> pod from another pod, you
would create a <code>Service</code> and then use the service name as the hostname
for your client connections.</p>
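<p>For example, a minimal <code>Service</code> for the <code>Deployment</code> above might look like this (a sketch; the selector has to match the pod labels):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sandbox
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
</code></pre>
<p>Clients in other pods could then connect to <code>postgres.sandbox.svc.cluster.local</code> (or simply <code>postgres</code> from within the same namespace).</p>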
<hr />
<p><strong>Update</strong></p>
<p>If your goal is to deploy a <em>client</em> container so that you can log
into it and run <code>psql</code>, the easiest solution is just to <code>kubectl exec</code>
into the postgres container itself. Assuming you were using the
<code>Deployment</code> shown earlier in this question, you could run:</p>
<pre><code>kubectl exec -it deploy/postgres -- bash
</code></pre>
<p>This would get you a shell prompt <em>inside</em> the postgres container. You
can run <code>psql</code> and not have to worry about authentication:</p>
<pre><code>$ kubectl exec -it deploy/postgres -- bash
$ psql -U example example
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
example=#
</code></pre>
<p>If you want to start up a separate container, you can use the <code>kubectl debug</code> command:</p>
<pre><code>kubectl debug deploy/postgres
</code></pre>
<p>This gets you a root prompt in a debug pod. If you know the ip address
of the postgres pod, you can connect to it using <code>psql</code>. To get
the address of the pod, run this on your local host:</p>
<pre><code>$ kubectl get pod/postgres-6df4c549f-p2892 -o jsonpath='{.status.podIP}'
10.130.0.11
</code></pre>
<p>And then inside the debug container:</p>
<pre><code>root@postgres-debug:/# psql -h 10.130.0.11 -U example example
</code></pre>
<p>In this case you would have to provide an appropriate password,
because you are accessing postgres from "another machine", rather than
running directly inside the postgres pod.</p>
<p>Note that in the above answer I've used the shortcut
<code>deploy/<deployment_name></code>, which avoids having to know the name of the
pod created by the <code>Deployment</code>. You can replace that with
<code>pod/<podname></code> in all cases.</p>
| larsks |
<p>Now I have Pods as Kubernetes structs with the help of the command</p>
<pre><code>pods , err := clientset.CoreV1().Pods("namespace_String").List(context.TODO(), metav1.ListOptions{})
</code></pre>
<p>Now how do I get each of them as an individual yaml file?
Which command should I use?</p>
<pre><code>for i, pod := range pods.Items {
    if i == 0 {
        t := reflect.TypeOf(&pod)
        for j := 0; j < t.NumMethod(); j++ {
            m := t.Method(j)
            fmt.Println(m.Name)
        }
    }
}
</code></pre>
<p>This loop will print the list of methods on the pod item; which one should I use?</p>
<p>Thanks for the answer</p>
| Pradeep Padmanaban C | <p>The <code>yaml</code> is just a representation of the Pod object in the kubernetes internal storage in etcd. With your <code>client-go</code> what you have got is the <code>Pod</code> instance, of the type <code>v1.Pod</code>. So you should be able to work with this object itself and get whatever you want, for example <code>p.Labels()</code> etc. But if for some reason, you are insisting on getting a yaml, you can do that via:</p>
<pre><code>import (
"sigs.k8s.io/yaml"
)
b, err := yaml.Marshal(pod)
if err != nil {
// handle err
}
log.Printf("Yaml of the pod is: %q", string(b))
</code></pre>
<p>Note that <code>yaml</code> library coming here is not coming from <code>client-go</code> library. The documentation for the <code>yaml</code> library can be found in: <a href="https://pkg.go.dev/sigs.k8s.io/yaml#Marshal" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/yaml#Marshal</a></p>
<p>Instead of <code>yaml</code> if you want to use <code>json</code>, you can simply use the <code>Marshal</code> function <a href="https://pkg.go.dev/k8s.io/apiserver/pkg/apis/example/v1#Pod.Marshal" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/apiserver/pkg/apis/example/v1#Pod.Marshal</a> provided by the <code>v1.Pod</code> struct itself, like any other Go object.</p>
| Sankar |
<p>Cron template</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: some-example
namespace: some-example
spec:
schedule: "* 12 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: some-example
image: gcr.io/some-example/some-example
imagePullPolicy: Always
env:
- name: REPO_URL
value: https://example.com/12/some-example
</code></pre>
<p>I need to create multiple Job files with different <code>REPO_URL</code> values; there are over 100 of them, saved in a file. I am looking for a solution where I can define a Job template and pull the required key:value pairs from another file.</p>
<p>So far I've tried <a href="https://kustomize.io/" rel="nofollow noreferrer">https://kustomize.io/</a>, <a href="https://ballerina.io/" rel="nofollow noreferrer">https://ballerina.io/</a>, and <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer">https://github.com/mikefarah/yq</a>, but I am not able to find a good example that fits the scenario.</p>
| Rahul Sharma | <p>That would be pretty trivial with <code>yq</code> and a shell script. Assuming
your template is in <code>cronjob.yml</code>, we can write something like this:</p>
<pre><code>let count=0
while read url; do
yq -y '
.metadata.name = "some-example-'"$count"'"|
.spec.jobTemplate.spec.template.spec.containers[0].env[0].value = "'"$url"'"
' cronjob.yml
echo '---'
let count++
done < list_of_urls.txt | kubectl apply -f-
</code></pre>
<p>E.g., if my <code>list_of_urls.txt</code> contains:</p>
<pre><code>https://google.com
https://stackoverflow.com
</code></pre>
<p>The above script will produce:</p>
<pre><code>[...]
metadata:
name: some-example-0
namespace: some-example
spec:
[...]
env:
- name: REPO_URL
value: https://google.com
---
[...]
metadata:
name: some-example-1
namespace: some-example
spec:
[...]
env:
- name: REPO_URL
value: https://stackoverflow.com
</code></pre>
<p>You can drop the <code>| kubectl apply -f-</code> if you just want to see the
output instead of actually creating resources.</p>
<hr />
<p>Or for more structured approach, we could use Ansible's <a href="https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html" rel="nofollow noreferrer">k8s</a>
module:</p>
<pre><code>- hosts: localhost
gather_facts: false
tasks:
- k8s:
state: present
definition:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: "some-example-{{ count }}"
namespace: some-example
spec:
schedule: "* 12 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: some-example
image: gcr.io/some-example/some-example
imagePullPolicy: Always
env:
- name: REPO_URL
value: "{{ item }}"
loop:
- https://google.com
- https://stackoverflow.com
loop_control:
index_var: count
</code></pre>
<p>Assuming that the above is stored in <code>playbook.yml</code>, running this with
<code>ansible-playbook playbook.yml</code> would create the same resources as the
earlier shell script.</p>
| larsks |
<p>I need to view the logs of a pod with a specific name.
Basically, I want to see logs for any pod having <code>infra</code> in its name.</p>
<p>I'm using below command:</p>
<pre><code>kubectl logs $(kubectl get pods | awk '{print $1}' | grep -e "infra")
</code></pre>
<p>But, it's not working.</p>
| Temp Expt | <p>The <a href="https://github.com/stern/stern" rel="nofollow noreferrer"><code>stern</code></a> command make this very simple:</p>
<pre><code>stern infra
</code></pre>
<p>That will stream the logs from any pods that have <code>infra</code> in the name.</p>
<hr />
<p>But even without <code>stern</code>, you can do something like:</p>
<pre><code>kubectl logs $(kubectl get pods -o name | grep infra)
</code></pre>
<p>That will work as long as your <code>grep</code> command returns a single line. If the <code>grep</code> command results in multiple matches, you'll need to use a more specific pattern.</p>
<p>If you want to see logs from multiple pods with a single command, you can request logs by label using <code>kubectl logs -l <label></code>. E.g., if I have several pods with the label <code>app=my-app</code>, I can run:</p>
<pre><code>kubectl logs -l app=my-app
</code></pre>
| larsks |
<p>I'm trying to expose a SignalR hub hosted in a Kubernetes (Azure) pod. Basically, the authentication and the handshake steps work fine, but when I trigger some action, all clients connected via the k8s Ingress doesn't receive the message. Has anybody experienced this issue or just have shared SignalR hubs through Kubernetes - Ingress? </p>
<p><strong>ingress.yml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: endpoints
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.org/websocket-services: "myservice"
spec:
rules:
- host: api.[MY-DOMAIN].com
http:
paths:
- backend:
serviceName: myservice
servicePort: 80
path: /myservice
</code></pre>
| Alvaro Inckot | <p>Try: </p>
<pre><code>annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: REALTIMESERVERID
</code></pre>
<p>I wrote a sample project a while back, if you want a working example: <a href="https://github.com/DenisBiondic/RealTimeMicroservices" rel="noreferrer">DenisBiondic/RealTimeMicroservices</a></p>
<p>As a side note, consider using Azure SignalR Service, it should remove many headaches (also in the example above).</p>
| Denis Biondic |
<p>I'm looking for a way to restart all the pods of my service. They should restart one by one so the service is always available. The restart should happen when a Python script from a different service is done. </p>
<p>I'm doing this because on the pods I want to restart there is a Gunicorn-server running which needs to reload some data. That only works when the server gets restarted.</p>
<p>The gunicorn service gets started in a Dockerfile:</p>
<pre><code>CMD gunicorn -c gunicorn.conf.py -b :$PORT --preload app:app
</code></pre>
<p>But I'm guessing this is not too relevant.</p>
<p>I imagine the solution to be some kind of kubectl command that I can run in the Python script or a hint for a kubectl endpoint, that I couldn't find.</p>
| Florian | <p><code>kubectl rollout restart</code> has landed in Kubernetes v1.15 [1]. This feature is designed for exactly what you are looking to do - a rolling restart of pods.</p>
<p>[1] <a href="https://github.com/kubernetes/kubernetes/issues/13488" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/13488</a></p>
| vishal |
<p>I'm running prometheus and grafana under k3s, accessible (respectively) at <a href="http://monitoring.internal/prometheus" rel="nofollow noreferrer">http://monitoring.internal/prometheus</a> and <a href="http://monitoring.internal/grafana" rel="nofollow noreferrer">http://monitoring.internal/grafana</a>. The grafana Ingress object, for example, looks like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: monitoring.internal
http:
paths:
- path: /grafana
pathType: Prefix
backend:
service:
name: grafana
port:
number: 3000
</code></pre>
<p>This works fine, except that if you land at
<a href="http://monitoring.internal/" rel="nofollow noreferrer">http://monitoring.internal/</a>, you get a 404 error. I would like
requests for <a href="http://monitoring.internal/" rel="nofollow noreferrer">http://monitoring.internal/</a> to redirect to
<a href="http://monitoring.internal/grafana" rel="nofollow noreferrer">http://monitoring.internal/grafana</a>. I could perhaps create another
service that runs something like <code>darkhttpd ... --forward-all http://monitoring.internal/grafana</code>, and create an Ingress object
that would map <code>/</code> to that service, but it seems like there ought to
be a way to do this with Traefik itself.</p>
<p>It looks like I'm running Traefik 2.4.8 locally:</p>
<pre><code>$ kubectl -n kube-system exec -it deployment/traefik -- traefik version
Version: 2.4.8
Codename: livarot
Go version: go1.16.2
Built: 2021-03-23T15:48:39Z
OS/Arch: linux/amd64
</code></pre>
<p>I've found <a href="https://doc.traefik.io/traefik/v1.7/configuration/backends/kubernetes/" rel="nofollow noreferrer">this documentation for 1.7</a> that suggests there is an annotation for exactly this purpose:</p>
<ul>
<li><code>traefik.ingress.kubernetes.io/app-root: "/index.html"</code>: Redirects
all requests for / to the defined path.</li>
</ul>
<p>But setting that on the grafana ingress object doesn't appear to have
any impact, and I haven't been able to find similar docs for 2.x
(I've looked around
<a href="https://github.com/traefik/traefik/tree/master/docs" rel="nofollow noreferrer">here</a>, for
example).</p>
<p>What's the right way to set up this sort of redirect?</p>
| larsks | <p>Since I haven't been able to figure out traefik yet, I thought I'd post my solution here in case anyone else runs into the same situation. I am hoping someone comes along who knows The Right Way to to do this, and if I figure out I'll update this answer.</p>
<p>I added a new deployment that runs <a href="https://github.com/emikulic/darkhttpd" rel="nofollow noreferrer">darkhttpd</a> as a simple director:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redirector
spec:
replicas: 1
template:
spec:
containers:
- name: redirector
image: docker.io/alpinelinux/darkhttpd
ports:
- containerPort: 8080
args:
- --forward-all
- http://monitoring.internal/grafana
</code></pre>
<p>A corresponding Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redirector
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
</code></pre>
<p>And the following Ingress object:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: redirector
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: monitoring.internal
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: redirector
port:
number: 8080
</code></pre>
<p>These are all deployed with <a href="https://kustomize.io/" rel="nofollow noreferrer">kustomize</a>, which takes care of
adding labels and selectors in the appropriate places. The
<code>kustomization.yaml</code> look like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- service.yaml
commonLabels:
component: redirector
</code></pre>
<p>With all this in place, requests to <code>http://monitoring.internal/</code> hit the redirector pod.</p>
| larsks |
<p>The HPA docs at <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a> provide two examples of the <code>selector</code> property, one for the pod and the other for the object metric types. One of these examples is a nested object, the other is a string. For example:</p>
<pre><code>- type: External
external:
metric:
name: queue_messages_ready
selector: "queue=worker_tasks"
target:
type: AverageValue
averageValue: 30
</code></pre>
<p>and</p>
<pre><code>type: Object
object:
metric:
name: http_requests
selector: {matchLabels: {verb: GET}}
</code></pre>
<p>The API docs at <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#metricidentifier-v2beta2-autoscaling" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#metricidentifier-v2beta2-autoscaling</a> provide this description for the <code>selector</code> field:</p>
<blockquote>
<p>selector is the string-encoded form of a standard kubernetes label selector for the given metric</p>
</blockquote>
<p>So does this mean you can define the <code>selector</code> property as a string (for example <code>selector: "matchLabels: {verb: GET}"</code>) as well as a nested object?</p>
| Phyxx | <p>It turns out the selector has to be a map, or you get the following error:</p>
<pre><code>error: error validating "customresource.yml": error validating data: ValidationError(HorizontalPodAutoscaler.spec.metrics[0].object.metric.selector): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>This applies to the external metric type too, so the K8s docs appear to be incorrect when they provide an example passing a string.</p>
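<p>So the external-metric example from the question would need its selector written as a map as well, e.g.:</p>
<pre><code>- type: External
  external:
    metric:
      name: queue_messages_ready
      selector: {matchLabels: {queue: worker_tasks}}
    target:
      type: AverageValue
      averageValue: 30
</code></pre>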
| Phyxx |
<p>We have a Kubernetes deployment controller built using kubebuilder.
In the controller we update a ConfigMap and then want all pods backed by a deployment to restart so the latest ConfigMap will be reloaded.</p>
<p>How can I trigger this deployment/pod restart from the controller?</p>
<p>Note: the deployment/pod I want to restart is different from the one for which the controller would have received the event.</p>
<p>We are exploring ways to do this.</p>
| Vikas Budhwat | <p>You can't "restart" a Pod, but if the Pod is managed by a Deployment you can remove the Pod and the Deployment will spawn a new one. If you update the Deployment (e.g., by changing the name of a ConfigMap referenced in the manifest), the Deployment will automatically respawn all associated pods.</p>
<p>Your controller just needs to make the same API call you would normally use to delete a Pod or update a Deployment. This may require some RBAC configuration so that the ServiceAccount under which your controller is running has the necessary permissions.</p>
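<p>For illustration, the change that <code>kubectl rollout restart</code> makes is just a patch that bumps an annotation on the pod template; a controller can issue the equivalent Deployment patch through the API. Shown here in its kubectl form as a sketch (the deployment name and timestamp are placeholders):</p>
<pre><code>kubectl patch deployment my-deployment \
  -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2024-01-01T00:00:00Z"}}}}}'
</code></pre>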
| larsks |
<p>If a Helm deployment's status is <code>failed</code>, what can I do to determine what made it fail?</p>
| Nomnom | <p><code>helm history <release_name></code></p>
<p>Shows the kubernetes errors for the attempted deployment of that release.</p>
| 70Mike |
<p>I have added a map in my values.yaml file like below</p>
<pre><code>defaultview:
journey:
ce5f164c-ae0f-11e7-9220-c7137b83fb6a: 45
abc: -1
pqr: 30
</code></pre>
<p>I read this property in configmap like below</p>
<pre><code>defaultview.journey: {{ .Values.defaultview.journey }}
</code></pre>
<p>When I check configmap I see entry like below in cm</p>
<pre><code>defaultview.journey:
----
map[abc:-1 ce5f164c-ae0f-11e7-9220-c7137b83fb6a:45 pqr:30]
</code></pre>
<p>Java class is trying to bind this property into a Map<String,Integer> like below</p>
<pre><code>import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
import java.util.HashMap;
import java.util.Map;
@Data
@Component
@ConfigurationProperties(prefix = "defaultview")
public class JourneyViewConfig {
private Map<String, Integer> Journey = new HashMap<>();
}
</code></pre>
<p>I see this error thrown during Spring Boot startup:</p>
<pre><code>Failed to bind properties under 'defaultview.journey' to java.util.Map<java.lang.String, java.lang.Integer>:\\n\\n Property: defaultview.journey\\n Value: \\\"map[abc:-1 ce5f164c-ae0f-11e7-9220-c7137b83fb6a:45 pqr:30]\\\"\\n Origin: \\\"defaultview.journey\\\" from property source \\\"<cm name>\\\"\\n Reason: org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [java.util.Map<java.lang.String, java.lang.Integer>
</code></pre>
<p>Is there a way to inject Map directly from configmap to spring boot?</p>
<p>Edit 1:
I am able to read the values in map by adding this <strong>dirty code</strong></p>
<pre><code>private Map<String, Integer> parseConfigValue(String configValue) {
Map<String, Integer> resultMap = new HashMap<>();
String trimmedInput = configValue.trim();
if (trimmedInput.startsWith("map[")) {
trimmedInput = trimmedInput.substring(4, trimmedInput.length() - 1);
String[] pairs = trimmedInput.split(" ");
for (String pair : pairs) {
String[] keyValue = pair.split(":");
if (keyValue.length == 2) {
String key = keyValue[0].trim();
Integer value = Integer.parseInt(keyValue[1].trim());
resultMap.put(key, value);
}
}
        }
        return resultMap;
    }
</code></pre>
<p>Is there a more subtle way of doing this?</p>
| Prashant Raghav | <p>This line is problematic:</p>
<pre><code>defaultview.journey: {{ .Values.defaultview.journey }}
</code></pre>
<p>You're trying to include a structured variable as a simple string, so this is resulting in a ConfigMap that looks like:</p>
<pre><code>data:
defaultview.journey_broken: map[abc:-1 ce5f164c-ae0f-11e7-9220-c7137b83fb6a:45 pqr:30]
</code></pre>
<p>That's not useful, as it's neither valid YAML nor JSON and probably isn't what your application is expecting.</p>
<p>You're going to need to use the <code>toYaml</code> function to render your structured variable as a block of YAML text. For example:</p>
<pre><code>data:
defaultview.journey: |
{{ indent 4 (.Values.defaultview.journey | toYaml) }}
</code></pre>
<p>Which results in:</p>
<pre><code>data:
defaultview.journey: |
abc: -1
ce5f164c-ae0f-11e7-9220-c7137b83fb6a: 45
pqr: 30
</code></pre>
| larsks |
<p>On my machines, 4 PVCs are created. Now I need to get all the volume names associated with the PVCs in a list. That list will then be passed to the storage array, and I will ensure that the volumes are created on the storage server.</p>
command: "kubectl get pvc pvc{{local_uuid}}-{{item}} -o json"
with_sequence: start=1 end=4
register: result
- set_fact:
pvcstatus: "{{ (item.stdout |from_json).status.phase }}"
volume_name: "{{(item.stdout |from_json).spec.volumeName}}"
with_items: "{{ result.results}}"
- debug: var=volume_name
</code></pre>
<p>But when I run the above tasks, volume_name contains only the last volume name instead of all the volumes as a list. How do I get all the volume names in a list?</p>
| Samselvaprabu | <p>Your <code>set_fact</code> task is setting <code>volume_name</code> to a single value in each iteration...so of course, when the loop completes, the variable has the value from the final iteration. That's the expected behavior. If you want a list, you need to create a list. You can do this by <em>appending</em> to a list in your <code>set_fact</code> loop:</p>
<pre><code>- set_fact:
volume_name: "{{ volume_name|default([]) + [(item.stdout |from_json).spec.volumeName] }}"
with_items: "{{ result.results}}"
</code></pre>
<p>The expression <code>volume_name|default([])</code> will evaluate to an empty list when <code>volume_name</code> is undefined (which is the case on the first iteration of the loop).</p>
<p>I tested this out using the following playbook:</p>
<pre><code>---
- hosts: localhost
gather_facts: false
vars:
result:
results:
- stdout: '{"spec": {"volumeName": "volume1"}}'
- stdout: '{"spec": {"volumeName": "volume2"}}'
- stdout: '{"spec": {"volumeName": "volume3"}}'
tasks:
- debug:
var: result
- set_fact:
volume_name: "{{ volume_name|default([]) + [(item.stdout |from_json).spec.volumeName] }}"
with_items: "{{ result.results}}"
- debug:
var: volume_name
</code></pre>
<p>Which results in:</p>
<pre><code>TASK [debug] *****************************************************************************************************************************************************************
ok: [localhost] => {
"volume_name": [
"volume1",
"volume2",
"volume3"
]
}
</code></pre>
| larsks |
<p>I've decided to run a podTemplate with one container of main.</p>
<ol>
<li>Why does my pod template configuration include JNLP? What is needed for? can I have only my pod with my container with my image?</li>
<li>How do I overwrite the JNLP image with my image instead of inbound image?</li>
<li>How do I run my job on my pod/container of 'main' and not JNLP?</li>
</ol>
<p><a href="https://i.stack.imgur.com/6UvEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6UvEm.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/aRYV2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aRYV2.png" alt="enter image description here" /></a></p>
<p>My Jenkins configuration as code -</p>
<pre><code> Jenkins:cluster: non-prod
Jenkins:secrets:
create: true
secretsList:
- name: jenkins-github-token-non-prod
value: /us-west-2-non-prod/jenkins/secrets/github-token
- name: jenkins-slack-token-non-prod
value: /us-west-2-non-prod/jenkins/secrets/slack-token
Jenkins:config:
chart: jenkins
namespace: default
repo: https://charts.jenkins.io
values:
agent:
enabled: true
podTemplates:
jenkins-slave-pod: |
- name: jenkins-slave-pod
label: jenkins-slave-pod
containers:
- name: main
image: '805787217936.dkr.ecr.us-west-2.amazonaws.com/aba-jenkins-slave:ecs-global-node_master_57'
command: "sleep"
args: "30d"
privileged: true
master.JCasC.enabled: true
master.JCasC.defaultConfig: true
kubernetesConnectTimeout: 5
kubernetesReadTimeout: 15
maxRequestsPerHostStr: "32"
namespace: default
image: "805787217936.dkr.ecr.us-west-2.amazonaws.com/aba-jenkins-slave"
tag: "ecs-global-node_master_57"
workingDir: "/home/jenkins/agent"
nodeUsageMode: "NORMAL"
# name of the secret to be used for image pulling
imagePullSecretName:
componentName: "eks-global-slave"
websocket: false
privileged: false
runAsUser:
runAsGroup:
resources:
requests:
cpu: "512m"
memory: "512Mi"
limits:
cpu: "512m"
memory: "512Mi"
podRetention: "Never"
volumes: [ ]
workspaceVolume: { }
envVars: [ ]
# - name: PATH
# value: /usr/local/bin
command:
args: "${computer.jnlpmac} ${computer.name}"
# Side container name
sideContainerName: "jnlp"
# Doesn't allocate pseudo TTY by default
TTYEnabled: true
# Max number of spawned agent
containerCap: 10
# Pod name
podName: "jnlp"
# Allows the Pod to remain active for reuse until the configured number of
# minutes has passed since the last step was executed on it.
idleMinutes: 0
# Timeout in seconds for an agent to be online
connectTimeout: 100
serviceAccount:
annotations: {}
controller:
numExecutors: 1
additionalExistingSecrets: []
JCasC:
securityRealm: |
local:
allowsSignup: false
users:
- id: "aba"
password: "aba"
# securityRealm: |
# saml:
# binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
# displayNameAttributeName: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
# groupsAttributeName: "http://schemas.xmlsoap.org/claims/Group"
# idpMetadataConfiguration:
# period: 0
# url: "https://aba.onelogin.com/saml/metadata/34349e62-799f-4378-9d2a-03b870cbd965"
# maximumAuthenticationLifetime: 86400
# usernameCaseConversion: "none"
# authorizationStrategy: |-
# roleBased:
# forceExistingJobs: true
configScripts:
credentials: |
credentials:
system:
domainCredentials:
- credentials:
- string:
scope: GLOBAL
id: slack-token
description: "Slack access token"
secret: "${jenkins-slack-token-non-prod-value}"
- usernamePassword:
id: "github-credentials"
password: "aba"
scope: GLOBAL
username: "aba"
plugin-config: |
jenkins:
disabledAdministrativeMonitors:
- "hudson.model.UpdateCenter$CoreUpdateMonitor"
- "jenkins.diagnostics.ControllerExecutorsNoAgents"
security:
updateSiteWarningsConfiguration:
ignoredWarnings:
- "core-2_263"
- "SECURITY-2617-extended-choice-parameter"
- "SECURITY-2170"
- "SECURITY-2796"
- "SECURITY-2169"
- "SECURITY-2332"
- "SECURITY-2232"
- "SECURITY-1351"
- "SECURITY-1350"
- "SECURITY-2888"
unclassified:
slackNotifier:
teamDomain: "superops"
baseUrl: "https://superops.slack.com/services/hooks/jenkins-ci/"
tokenCredentialId: "slack-token"
globalLibraries:
libraries:
- defaultVersion: "master"
allowVersionOverride: true
name: "aba-jenkins-library"
implicit: true
retriever:
modernSCM:
scm:
git:
credentialsId: "github-credentials"
id: "shared-library-creds"
remote: "https://github.com/aba-aba/aba-jenkins-library.git"
traits:
- "gitBranchDiscovery"
- "cleanBeforeCheckoutTrait"
- "ignoreOnPushNotificationTrait"
additionalPlugins:
- junit:1119.1121.vc43d0fc45561
- prometheus:2.0.11
- saml:4.352.vb_722786ea_79d
- role-strategy:546.ve16648865996
- blueocean-web:1.25.5
- github-branch-source:1677.v731f745ea_0cf
- git-changelog:3.23
- scriptler:3.5
- sshd:3.249.v2dc2ea_416e33
- rich-text-publisher-plugin:1.4
- matrix-project:785.v06b_7f47b_c631
- build-failure-analyzer:2.3.0
- testng-plugin:555.va0d5f66521e3
- allure-jenkins-plugin:2.30.2
- timestamper:1.18
- ws-cleanup:0.42
- build-timeout:1.21
- slack:616.v03b_1e98d13dd
- email-ext:2.91
- docker-commons:1.19
- docker-workflow:521.v1a_a_dd2073b_2e
- rundeck:3.6.11
- parameter-separator:1.3
- extended-choice-parameter:346.vd87693c5a_86c
- uno-choice:2.6.3
adminPassword: ""
ingress:
enabled: true
hostName: jenkins.non-prod.us-west-2.int.isappcloud.com
ingressClassName: nginx-int
installPlugins:
- kubernetes:3883.v4d70a_a_a_df034
- workflow-aggregator:590.v6a_d052e5a_a_b_5
- git:5.0.0
- configuration-as-code:1569.vb_72405b_80249
jenkinsUrlProtocol: https
prometheus:
enabled: true
resources:
limits:
cpu: "4"
memory: 8Gi
requests:
cpu: "2"
memory: 4Gi
sidecars:
configAutoReload:
resources:
requests:
cpu: 128m
memory: 256Mi
statefulSetAnnotations:
pulumi.com/patchForce: "true"
Name: eks-non-prod-us-west-2-jenkins
department: aba
division: enterprise
environment: non-prod
owner: devops
project: eks-non-prod-us-west-2-jenkins
team: infra
tag: 2.362-jdk11
version: 4.1.13
Jenkins:stackTags:
Name: eks-non-prod-us-west-2-jenkins
department: aba
division: enterprise
environment: non-prod
owner: devops
project: eks-non-prod-us-west-2-jenkins
team: infra
aws:region: us-west-2
</code></pre>
| EilonA | <p>The Kubernetes plugin has a summary of what the JNLP container is used for. It's recommended to retain the JNLP container; the name is JNLP mostly for historical reasons, and it sounds like it no longer actually uses JWS (Java Web Start).</p>
<blockquote>
<p>The Kubernetes plugin allocates Jenkins agents in Kubernetes pods. Within these pods, there is always one special container jnlp that is running the Jenkins agent. Other containers can run arbitrary processes of your choosing, and it is possible to run commands dynamically in any container in the agent pod...
Commands will be executed by default in the jnlp container, where the Jenkins agent is running. (The jnlp name is historical and is retained for compatibility.)
...In addition to that, in the Kubernetes Pod Template section, we need to configure the image that will be used to spin up the agent pod. We do not recommend overriding the jnlp container except under unusual circumstances.</p>
</blockquote>
<p><a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">https://plugins.jenkins.io/kubernetes/</a></p>
<p>To customize the jnlp image you specify that in the agent block then using the container label in the container block to run on that container:</p>
<pre><code>pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: jnlp
image: 'jenkins/inbound-agent' // your image you want to override
args: ['\$(JENKINS_SECRET)', '\$(JENKINS_NAME)']
- name: maven
image: maven:alpine
command:
- cat
tty: true
- name: busybox
image: busybox
command:
- cat
tty: true
'''
retries 2
}
}
stages {
stage('Run maven') {
steps {
container('maven') { // specify which container to run this on
sh 'mvn -version'
}
container('busybox') {
sh '/bin/busybox'
}
}
}
}
}
</code></pre>
| chubbsondubs |
<p>Kubernetes kind <code>Deployment</code> doesn't allow patch changes in <code>spec.selector.matchLabels</code>, so any new deployments (managed by Helm or otherwise) that want to change the labels can't use the RollingUpdate feature within a Deployment. What's the best way to achieve a rollout of a new deployment without causing downtime?</p>
<p>Minimum example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: foo
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: foo
template:
metadata:
labels:
app: foo
spec:
containers:
- name: foo
image: ubuntu:latest
command: ["/bin/bash", "-ec", "sleep infinity"]
</code></pre>
<p>Apply this, then edit the labels (both matchLabels and metadata.labels) to <code>foo2</code>. If you try to apply this new deployment, k8s will complain (by design) The <code>Deployment "foo" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"foo2"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable</code>.</p>
<p>The only way I can think of right now is to use a new Deployment name so the new deployment does not try to patch the old one, and then delete the old one, with the ingress/load balancer resources handling the transition. Then we can redeploy with the old name, and delete the new name, completing the migration.</p>
<p>Is there a way to do it with fewer k8s CLI steps? Perhaps I can edit/delete something that keeps the old pods alive while the new pods roll out under the same name?</p>
| snugghash | <p>I just did this, and I followed the four-step process you describe. I think the answer is no, there is no better way.</p>
<p>My service was managed by Helm. For that I literally created four merge requests that needed to be rolled out sequentially:</p>
<ol>
<li>Add identical deployment "foo-temp", only name is different.</li>
<li>Delete deployment foo.</li>
<li>Recreate deployment foo with desired label selector.</li>
<li>Delete deployment foo-temp.</li>
</ol>
<p>I tested shortcutting the process (combining step 1 and 2), but it doesn't work - helm deletes one deployment before it creates the other, and then you have downtime.</p>
<p>The good news is: in my case i didn't need to change any other descriptors (charts), so it was not so bad. All the relationships (traffic routing, etc) were made via label matching. Since foo-temp had the same labels, the relationships worked automatically. The only issue was that my HPA referenced the name, not the labels. Instead of modifying it, I left foo-temp without an HPA and just specified a high amount of replicas for it. The HPA didn't complain when its target didn't exist between step 2 and 3.</p>
| Fletch |
<p>I've been using <code>kubectl -vvvv ...</code> a lot to learn about the different HTTP requests sent to the API server for different commands.</p>
<p>However, I cannot seem to find a way of achieving the same with <code>docker</code>.</p>
<p>In particular, I've considered <code>docker --debug ...</code>, but e.g. <code>docker --debug ps</code> just displays the normal output.</p>
<p>How can I make <code>docker</code> output the HTTP requests sent to the daemon?</p>
| Shuzheng | <blockquote>
<p>How can I make docker output the HTTP requests sent to the daemon?</p>
</blockquote>
<p>You can't, but you can set up a proxy server between the client and the docker daemon so that you can see the requests. The <code>socat</code> tool is useful for this. Set up a proxy by running:</p>
<pre><code>socat -v unix-listen:/tmp/docker.sock,fork unix-connect:/var/run/docker.sock
</code></pre>
<p>And then point <code>docker</code> at the proxy:</p>
<pre><code>docker -H unix:///tmp/docker.sock ps
</code></pre>
<p>As you make requests with <code>docker</code>, you'll see the requests and replies in the output from the <code>socat</code> command.</p>
<p>(You can set the <code>DOCKER_HOST</code> environment variable if you get tired of typing the <code>-H ...</code> command line option.)</p>
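<p>For example:</p>
<pre><code>export DOCKER_HOST=unix:///tmp/docker.sock
docker ps    # requests now flow through the socat proxy and show up in its output
</code></pre>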
| larsks |
<p>I'm updating some of my Kubernetes configurations to use <code>'replacements'</code> and <code>'resources'</code> in kustomize as <code>'vars'</code> and <code>'bases'</code> have been deprecated.</p>
<p>Previously, I used <code>'vars'</code> in a base (<code>/base/secrets/</code>) like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: test_secret
env: secret.env
vars:
- name : SECRET_VALUE
objref:
kind: Secret
name: test_secret
apiVersion: v1
fieldref:
fieldpath: metadata.name
</code></pre>
<p>This base was used in various overlays for different services:</p>
<pre><code>namespace: test-overlay
bases:
- ../../base/secrets/
- ../../base/service/
</code></pre>
<p>Now, with <code>'resources'</code> and <code>'replacements'</code>, my understanding is that it's not possible to replace values in <code>/base/service/</code> from <code>/base/secrets/</code> as before. I could apply the <code>'replacement'</code> in the overlay itself and target the base I want to modify, but I would prefer to perform the operation from a base for reusability and ease of use.</p>
<p>Here's what I'm trying to do:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: test_secret
env: secret.env
replacements:
- source:
name: test_secret
kind: Secret
targets:
- select:
kind: Deployment
name: service
fieldPaths:
- spec.template.spec.<field>
</code></pre>
<p>In the <code>'replacements'</code> directive,<code> spec.template.spec.<field></code> is the field in the Deployment resource that I'm trying to replace.</p>
<p>I'm using kustomize version <code>v5.1.0</code>.</p>
<p>How can I get <code>'replacements'</code> to target other bases so that they can be used from any overlay? What's the best practice for this scenario?</p>
<p>I've attempted to apply the 'replacements' in the overlay itself and target the base I want to modify like this:</p>
<pre><code>namespace: test-overlay
resources:
- ../../base/secrets/
- ../../base/service/
replacements:
- source:
kind: Secret
name: test_secret
targets:
- select:
kind: Deployment
name: service
fieldPaths:
- spec.template.spec.<field>
</code></pre>
<p>While this does apply the replacement to the service, it's not a satisfactory solution for me because I have multiple overlays that all need to use the same replacement for various deployments. I would prefer to define the replacement once in a base, rather than having to define it in each overlay.</p>
<p><strong>Edit: A more clear minimal reproducible example</strong></p>
<pre><code>/base
/secrets
kustomization.yaml
/service
deployment.yaml
kustomization.yaml
/overlays
/test-overlay
kustomization.yaml
</code></pre>
<p>With the <code>/secrets/</code> implemented as:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: test_secret
env: secret.env
replacements:
- source:
name: test_secret
kind: Secret
targets:
- select:
kind: Deployment
name: service
fieldPaths:
- spec.template.spec.volumes.name
</code></pre>
<p>This would be the <code>/service/</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service
spec:
template:
spec:
volumes:
- name: placeholder_value
emptyDir: {}
</code></pre>
<p>With <code>/test-overlay/</code></p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-overlay
resources:
- ../../base/secrets/
- ../../base/service/
</code></pre>
<p>I've tried including the <code>'replacements'</code> in the test-overlay's <code>kustomization</code> file, which did work, but it is less desirable than defining the replacement once in a base.</p>
| unitop | <p>Thanks for updating your question with an example.</p>
<p>The answer I've posted here is still the correct solution in terms of sharing a replacement configuration among multiple overlays, but there are some errors in your <code>replacement</code> syntax: you cannot target <code>spec.template.spec.volumes.name</code>, because <code>volumes</code> is a list and has no <code>name</code> attribute.</p>
<p>You can only target list elements with a <code>[name=value]</code> style selector, so:</p>
<pre><code>replacements:
- source:
name: test_secret
kind: Secret
targets:
- select:
kind: Deployment
name: service
fieldPaths:
- spec.template.spec.volumes.[name=placeholder_value].name
</code></pre>
<hr />
<p>A <code>kustomization.yaml</code> can only apply transformations (labels, patches, replacements, etc) to resources that are emitted by that <code>kustomization.yaml</code> -- which means that if you want a transformation to affect all resources, it needs to be applied in the "outermost" kustomization.</p>
<p>This means that you can't place something in a "base" that will modify resources generated in your overlays.</p>
<p>But don't worry, there is a solution! <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md" rel="nofollow noreferrer">Components</a> allow you to reuse kustomization fragments. If we move your replacement configuration into a component, we can get the behavior you want.</p>
<p>For example, here is a project with a base and two overlays:</p>
<pre><code>.
βββ base
βΒ Β βββ deployment.yaml
βΒ Β βββ kustomization.yaml
βββ components
βΒ Β βββ replace-username-password
βΒ Β βββ kustomization.yaml
βββ overlay
βββ env1
βΒ Β βββ kustomization.yaml
βββ env2
βββ kustomization.yaml
</code></pre>
<p><code>base/deployment.yaml</code> looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
replicas: 2
template:
spec:
containers:
- name: example
image: docker.io/alpine:latest
command:
- sleep
- inf
env:
- name: USER_NAME
value: update-via-replacement
- name: USER_PASSWORD
value: update-via-replacement
</code></pre>
<p>And <code>base/kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: replacement-example
resources:
- deployment.yaml
secretGenerator:
- name: example
literals:
- password=secret
configMapGenerator:
- name: example
literals:
- username=alice
</code></pre>
<p>So the <code>base</code> directory results in a Deployment, a Secret, and a ConfigMap. There are two overlays, <code>env1</code> and <code>env2</code>. In both overlays I want to apply the same replacement configuration, so I put that into <code>components/replace-username-password/kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
- source:
kind: ConfigMap
name: example
fieldPath: data.username
targets:
- select:
kind: Deployment
name: example
fieldPaths:
- spec.template.spec.containers.[name=example].env.[name=USER_NAME].value
- source:
kind: Secret
name: example
fieldPath: data.password
targets:
- select:
kind: Deployment
name: example
fieldPaths:
- spec.template.spec.containers.[name=example].env.[name=USER_PASSWORD].value
</code></pre>
<p>Now in <code>overlays/env1/kustomization.yaml</code> I can make use of this component:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
envName: env1
resources:
- ../../base
components:
- ../../components/replace-username-password
</code></pre>
<p>And the same in <code>overlays/env2</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
envName: env2
resources:
- ../../base
components:
- ../../components/replace-username-password
</code></pre>
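<p>With that layout in place, you can check that both overlays pick up the replaced values by building them locally, e.g.:</p>
<pre><code># each build should show the Deployment env values filled in from the ConfigMap/Secret
kustomize build overlay/env1
kustomize build overlay/env2
</code></pre>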
| larsks |
<p>I'm currently using the Kubernetes Plugin for Jenkins to on-demand provision Jenkins workers on my kubernetes cluster. </p>
<p>A base image for the worker node is stored in my (artifactory) docker registry, and the Kubernetes plugin is configured to pull this image to spawn workers.</p>
<p>My artifactory docker repo was not using authentication but I've now moved it to authenticating image pulls. However there is no apparent way to provide the registry credentials via the UI. </p>
<p>The Jenkins K8s plugin documentation doesn't appear to mention a way to do this via the UI either. There is minimal documentation on the "imagePullSecrets" parameter, but the scope of this seems to apply to pipeline definition or kubernetes template definitions, which seems like overkill.</p>
<p>Is there something I'm missing? I'd be thankful if someone could point out the steps to configure this without having to create a kubernetes template configuration from scratch again.</p>
<p>Thanks in advance!</p>
| Traiano Welcome | <p>The <code>imagePullSecret</code> refers to a Kubernetes secret in which your registry credentials are stored.</p>
<p>Details of how to create the Kubernetes secret can be found here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">pull image from private registry</a></p>
<h1>Create a Secret by providing credentials on the command line</h1>
<p>Create this Secret, naming it regcred:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
</code></pre>
<p>where:</p>
<pre><code><your-registry-server> is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub)
<your-name> is your Docker username.
<your-pword> is your Docker password.
<your-email> is your Docker email.
</code></pre>
<p>Then you should be able to set your <code>imagePullSecret</code> to <code>regcred</code>.</p>
| Graeme |
<p>For the default service account I have created a clusterrolebinding for the cluster role cluster-admin <br>using the kubectl command below:</p>
<pre><code>kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=rbac-test:default
</code></pre>
<p>The cluster-admin role is bound to the default service account. <br>
How do I unbind it from the service account again?</p>
| Anurag_BEHS | <p>When you run your <code>kubectl</code> command it creates the following object:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: add-on-cluster-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: rbac-test
</code></pre>
<p>You should be able to just delete that object:</p>
<pre><code>kubectl delete clusterrolebinding add-on-cluster-admin
</code></pre>
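<p>To confirm it is gone, query for it again; after deletion this should return a <code>NotFound</code> error:</p>
<pre><code>kubectl get clusterrolebinding add-on-cluster-admin
</code></pre>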
| larsks |
<p>I am new to kustomize and trying to figure out how to patch my ingress that is running via OpenShift Container Platform.</p>
<p>The base config works fine and the overlay was working until I introduced my ingress overlay patch. For reference, the patch-service overlay does work, so I am pretty sure my structure and links are all good. However, the error I get on my project when synchronising to OpenShift (via ArgoCD) is:</p>
<p><code>one or more objects failed to apply, reason: Ingress in version "v1" cannot be handled as a Ingress: json: cannot unmarshal number into Go struct field IngressServiceBackend.spec.defaultBackend.service.port of type v1.ServiceBackendPort (retried 5 times).</code></p>
<p><strong>Application Repo Structure</strong></p>
<pre><code>|mssql-example
βββ base
β βββ deployment.yaml
β βββ ingress.yaml
β βββ kustomization.yaml
β βββ storage.yaml
β βββ service.yaml
βββ overlays
βββ nprd-dev
β βββ **patch-ingress.yaml**
β βββ patch-service.yaml
β βββ kustomization.yaml
βββ nprd-sit
β βββ patch-ingress.yaml
β βββ patch-service.yaml
β βββ kustomization.yaml
βββ nprd-uat
β βββ patch-ingress.yaml
β βββ patch-service.yaml
β βββ kustomization.yaml
</code></pre>
<p><strong>mssql-example\base\ingress.yaml</strong></p>
<pre><code>---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: mssql-example-adc
annotations:
ingress.citrix.com/frontend-ip: 192.168.1.10
ingress.citrix.com/insecure-port: '1433'
ingress.citrix.com/insecure-service-type: tcp
kubernetes.io/ingress.class: citrix-vpx
spec:
defaultBackend:
service:
name: mssql-service
port:
number: 31433
</code></pre>
<p><strong>mssql-example\overlays\nprd-dev\kustomization.yaml</strong></p>
<pre><code>---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ops-example-dev
resources:
- ../../base
patches:
- path: patch-ingress.yaml
target:
kind: Ingress
version: v1
name: mssql-example-adc
- path: patch-service.yaml
target:
kind: Service
version: v1
name: mssql-example-tcp
</code></pre>
<p><strong>mssql-example\overlays\nprd-dev\patch-ingress.yaml</strong></p>
<pre><code>---
- op: replace
path: /metadata/name
value: mssql-example-ingress-dev
- op: replace
path: /spec/defaultBackend/service/port
value: 31434
</code></pre>
<p>I think my path may be wrong, but I can't seem to work out how to correctly identify the replacement path for the spec when it is defaultBackend. I tried the path as /spec/defaultBackend/0/service/port and /spec/0/defaultBackend/service/port in case it was an array.</p>
| Daryl | <p>The error is telling you everything you need to know:</p>
<pre><code>cannot unmarshal number into Go struct field
</code></pre>
<p>First, look at the format of your initial Ingress manifest:</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: mssql-example-adc
annotations:
ingress.citrix.com/frontend-ip: 192.168.1.10
ingress.citrix.com/insecure-port: '1433'
ingress.citrix.com/insecure-service-type: tcp
kubernetes.io/ingress.class: citrix-vpx
spec:
defaultBackend:
service:
name: mssql-service
port:
number: 31433
</code></pre>
<p>Pay particular attention to the structure of <code>spec.defaultBackend.service.port</code>.</p>
<p>Now, look at the output generated by your patch:</p>
<pre><code>$ kustomize build
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.citrix.com/frontend-ip: 192.168.1.10
ingress.citrix.com/insecure-port: "1433"
ingress.citrix.com/insecure-service-type: tcp
kubernetes.io/ingress.class: citrix-vpx
name: mssql-example-ingress-dev
spec:
defaultBackend:
service:
name: mssql-service
port: 31434
</code></pre>
<p>Do you see the difference? You've replaced a structured value:</p>
<pre><code>port:
number: 31433
</code></pre>
<p>With an integer:</p>
<pre><code>port: 31434
</code></pre>
<p>Just update your patch to target <code>...service/port/number</code> instead:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
patches:
- target:
name: mssql-example-adc
patch: |
- op: replace
path: /metadata/name
value: mssql-example-ingress-dev
- op: replace
path: /spec/defaultBackend/service/port/number
value: 31434
</code></pre>
<p>Which results in:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.citrix.com/frontend-ip: 192.168.1.10
ingress.citrix.com/insecure-port: "1433"
ingress.citrix.com/insecure-service-type: tcp
kubernetes.io/ingress.class: citrix-vpx
name: mssql-example-ingress-dev
spec:
defaultBackend:
service:
name: mssql-service
port:
number: 31434
</code></pre>
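<p>You can verify the rendered output locally before letting ArgoCD sync it, e.g. from the <code>mssql-example</code> directory:</p>
<pre><code>kustomize build overlays/nprd-dev
</code></pre>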
| larsks |
<p>I have 2 EKS clusters, in 2 different AWS accounts and with, I might assume, different firewalls (which I don't have access to). The first one (Dev) is all right; however, with the same configuration, the UAT cluster pods are struggling to resolve DNS. The nodes can resolve DNS and seem to be all right.</p>
<p>1) ping 8.8.8.8 works</p>
<pre><code>--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
</code></pre>
<p>2) I can ping the IP of google (and others), but not the actual dns names.</p>
<p>Our configuration:</p>
<ol>
<li>configured with Terraform.</li>
<li>The worker nodes and control plane SG are the same than the dev ones. I believe those are fine.</li>
<li>Added 53 TCP and 53 UDP on inbound + outbound NACl (just to be sure 53 was really open...). Added 53 TCP and 53 UDP outbound from Worker Nodes.</li>
<li>We are using <code>ami-059c6874350e63ca9</code> with 1.14 kubernetes version.</li>
</ol>
<p>I am unsure if the problem is a firewall somewhere, coredns, my configuration that needs to be updated, or a "stupid mistake". Any help would be appreciated.</p>
| shrimpy | <p>Note that this issue may present itself in many forms (e.g. DNS not resolving is just one possible case). The <code>terraform-aws-eks</code> module exposes a terraform input to create the necessary security group rules that allow these inter worker-group/node-group communications: <code>worker_create_cluster_primary_security_group_rules</code>. More information is in this <code>terraform-aws-eks</code> issue: <a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1089" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1089</a></p>
<p>When the input is enabled, terraform creates the following security group rules:</p>
<pre><code> # module.eks.module.eks.aws_security_group_rule.cluster_primary_ingress_workers[0] will be created
+ resource "aws_security_group_rule" "cluster_primary_ingress_workers" {
+ description = "Allow pods running on workers to send communication to cluster primary security group (e.g. Fargate pods)."
+ from_port = 0
+ id = (known after apply)
+ protocol = "-1"
+ security_group_id = "sg-03bb33d3318e4aa03"
+ self = false
+ source_security_group_id = "sg-0fffc4d49a499a1d8"
+ to_port = 65535
+ type = "ingress"
}
# module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster_primary[0] will be created
+ resource "aws_security_group_rule" "workers_ingress_cluster_primary" {
+ description = "Allow pods running on workers to receive communication from cluster primary security group (e.g. Fargate pods)."
+ from_port = 0
+ id = (known after apply)
+ protocol = "-1"
+ security_group_id = "sg-0fffc4d49a499a1d8"
+ self = false
+ source_security_group_id = "sg-03bb33d3318e4aa03"
+ to_port = 65535
+ type = "ingress"
}
</code></pre>
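<p>If you would rather not re-apply the module straight away, an equivalent rule can be added by hand with the AWS CLI as a stop-gap. This is only a sketch; the group IDs below are the placeholders from the plan output above, not your real ones:</p>
<pre><code># allow all traffic from the cluster primary security group into the worker security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0fffc4d49a499a1d8 \
  --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=sg-03bb33d3318e4aa03}]'

# repeat with the two group IDs swapped for the reverse direction
</code></pre>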
| fvdnabee |
<p>I am running a load test over a kubernetes pod and I want to sample its CPU and memory usage every 5 minutes.
Currently I am manually running the linux <code>top</code> command against the kubernetes pod.</p>
<p>Is there any way given a <code>kubernetes pod</code> to fetch the CPU/Memory usage every X minutes and append it to a file ?</p>
| Bercovici Adrian | <p>Try this one-liner:</p>
<pre class="lang-bash prettyprint-override"><code>while [ true ]; do echo $(date) $(date +%s) $(kubectl top -n your-namespace pod $(kubectl get pods -n your-namespace -l your-label-name=your-label-value -o jsonpath='{..metadata.name}') | tail -n 1) | tee -a /path/to/save/your/logs.txt; done
</code></pre>
<p>Add <code>sleep 300</code> to sample it every 5 minutes instead of continuously.</p>
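<p>For example, the same loop with the 5-minute sleep folded in (keeping the placeholder namespace, label, and file path from above):</p>
<pre class="lang-bash prettyprint-override"><code>while true; do
  echo $(date) $(date +%s) $(kubectl top -n your-namespace pod $(kubectl get pods -n your-namespace -l your-label-name=your-label-value -o jsonpath='{..metadata.name}') | tail -n 1) | tee -a /path/to/save/your/logs.txt
  sleep 300
done
</code></pre>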
<p>It will find a pod in namespace <code>your-namespace</code> with label <code>your-label-name</code> set to <code>your-label-value</code>, take its name, and keep only the last such pod if you have multiple pods with the same label (that's what <code>| tail -n 1</code> is for). This way you won't have to determine the name of a pod manually. Then it'll print something like this:</p>
<pre><code>Sun, Mar 12, 2023 4:59:05 PM 1678640345 your-pod-name-5c64678fc6-rsldm 47m 657Mi
</code></pre>
<p>Where <code>1678640345</code> is the Unix timestamp in seconds written by <code>$(date +%s)</code>. The output will be printed to the console (stdout) and mirrored to the <code>/path/to/save/your/logs.txt</code> file.</p>
| izogfif |
<p>I have been working with one <a href="https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_patchesjson6902_" rel="nofollow noreferrer"><code>patchesJson6902</code></a> clause in my Kubernetes kustomize configuration; now I want to use overlays (to support different instances) and I want to split the patches (some patches are generic, i.e. for enabling let's-encrypt instead of self-signed for cert-manager for some instances; other patches are specific, i.e. for distinct certificate hostname per instance).</p>
<p>I have a <code>base/04-public-letsencrypt-ingress/kustomization.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>β¦
patchesJson6902:
- target: { group: networking.k8s.io, version: v1, kind: Ingress, name: ingress-frontend }
path: ingress-letsencrypt.patch.yaml
</code></pre>
<p>and a <code>overlay/04-public-letsencrypt-ingress/kustomization.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../../base/04-public-letsencrypt-ingress/
patchesJson6902:
- target: { group: networking.k8s.io, version: v1, kind: Ingress, name: ingress-frontend }
path: ingress-tls-hostname.patch.yaml
</code></pre>
<p>This does not work as I expected; it seems only one of the <code>patchesJson6902</code> definitions takes effect.</p>
<p>Is this the expected behavior? Is there a good way to split patches; or maybe is there an alternative to <code>patchesJson6902</code> that also works when defined in multiple places?</p>
| Christian Fuchs | <p>First: the <code>patchesJson6902</code> directive has been deprecated; you should simply be using <code>patches</code>.</p>
<p>With respect to your question, when you include a base kustomization in the <code>resources</code> section of your <code>kustomization.yaml</code>, kustomize neither knows nor cares that the base used patches: the patches in your local <code>kustomization.yaml</code> are simply applied to whatever manifests are generated by your <code>resources</code> section (and config/secret generators, etc).</p>
<p>So for example if we have this layout:</p>
<pre><code>.
βββ base
βΒ Β βββ ingress.yaml
βΒ Β βββ kustomization.yaml
βΒ Β βββ set-ingressclass-patch.yaml
βββ overlay
βββ kustomization.yaml
βββ set-ingress-tls-patch.yaml
</code></pre>
<p>And the following files:</p>
<ul>
<li><p><code>base/ingress.yaml</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
spec:
rules:
- host: www.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: example
port:
name: http
</code></pre>
</li>
<li><p><code>base/set-ingressclass-patch.yaml</code>:</p>
<pre><code>- path: /metadata/annotations
op: add
value:
kubernetes.io/ingress.class: nginx
</code></pre>
</li>
<li><p><code>base/kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: example
resources:
- ingress.yaml
patches:
- target:
kind: Ingress
name: example
path: set-ingressclass-patch.yaml
</code></pre>
</li>
<li><p><code>overlay/set-ingress-tls-patch.yaml</code>:</p>
<pre><code>- path: /metadata/annotations/cert-manager.io~1cluster-issuer
op: add
value: my_issuer
- path: /spec/tls
op: add
value:
- hosts:
- www.example.com
secretName: example-cert
</code></pre>
</li>
<li><p><code>overlay/kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: example
path: set-ingress-tls-patch.yaml
</code></pre>
</li>
</ul>
<p>Then running <code>kustomize build base</code> produces:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: example
name: example
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
service:
name: example
port:
name: http
path: /
pathType: Prefix
</code></pre>
<p>And running <code>kustomize build overlay</code> produces:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: my_issuer
kubernetes.io/ingress.class: nginx
labels:
app: example
name: example
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
service:
name: example
port:
name: http
path: /
pathType: Prefix
tls:
- hosts:
- www.example.com
secretName: example-cert
</code></pre>
<p>Here we can see that the output of the second <code>kustomize build</code> command takes the manifests produced by <code>base</code> -- which include the patch that sets the ingress class -- and applies the patches in <code>overlay/set-ingress-tls-patch.yaml</code>.</p>
<hr />
<p>You didn't show examples of your patches, so it's hard to tell what's actually going on, but I have a theory. Let's take a closer look at the patches in this example.</p>
<p>The manifest <code>base/ingress.yaml</code> doesn't have an <code>metadata.annotations</code> section. Our patch...</p>
<pre><code>- path: /metadata/annotations
op: add
value:
kubernetes.io/ingress.class: nginx
</code></pre>
<p>...creates the <code>annotations</code> key and content. But when we apply patches in the overlay, the Ingress resource already has an <code>annotations</code> section. If we had written <code>overlay/set-ingress-tls-patch.yaml</code> like this:</p>
<pre><code>- path: /metadata/annotations
op: add
value:
cert-manager.io/cluster-issuer: my_issuer
</code></pre>
<p>That would have <em>replaced</em> the <code>annotations</code> key with a new value! The resulting manifest would look like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: my_issuer
labels:
app: example
name: example
spec:
...
</code></pre>
<p>But we don't want to <em>replace</em> the <code>annotations</code> section; we want to add a new key. So instead our patch looks like:</p>
<pre><code>- path: /metadata/annotations/cert-manager.io~1cluster-issuer
op: add
value: my_issuer
</code></pre>
<p>This is adding a new key under <code>annotations</code>, rather than replacing the entire <code>annotations</code> key. The pattern <code>~1</code> is how you <a href="https://github.com/json-patch/json-patch-tests/issues/42" rel="nofollow noreferrer">escape a forward slash in a JSON pointer expression</a>.</p>
<hr />
<p>The above all works, but I think using strategic merge patches would be simpler and more obvious. In that case, <code>base/set-ingressclass-patch.yaml</code> would look like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
</code></pre>
<p>And <code>base/set-ingress-tls-patch.yaml</code> would look like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: my_issuer
name: example
spec:
tls:
- hosts:
- www.example.com
secretName: example-cert
</code></pre>
<p>Using this style of patch, you don't need to set a <code>target</code> in your <code>kustomization.yaml</code> because the target is implied by the <code>kind</code> and <code>metadata.name</code> in the patch. E.g., <code>base/kustomization.yaml</code> would look like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: example
resources:
- ingress.yaml
patches:
- path: set-ingressclass-patch.yaml
</code></pre>
<p>I think that's easier to understand than the JSONPatch style patches, but that's a matter of opinion and as we've demonstrated here either will work.</p>
| larsks |
<p>When creating an ingress resource in GCE using the Nginx ingress controller, the ingress resource is stuck on "Creating ingress". Any custom annotations appear to be lost, but I can access the URL defined by the ingress.</p>
<p>What could be causing this?</p>
<p><a href="https://i.stack.imgur.com/u3uLe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u3uLe.png" alt="enter image description here"></a></p>
| Phyxx | <p>This turned out to be because I was sending the annotation</p>
<pre><code>nginx.ingress.kubernetes.io/ssl-redirect: false
</code></pre>
<p>instead of </p>
<pre><code>nginx.ingress.kubernetes.io/ssl-redirect: "false"
</code></pre>
<p>According to <a href="https://github.com/kubernetes/ingress-nginx/issues/1990" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1990</a>, the Nginx controller only accepts strings containing "true" or "false". By sending boolean values, GCE was hanging.</p>
<p>Interestingly, there were no errors indicating a problem, and I could access the ingress URL, which made debugging the problem quite painful.</p>
| Phyxx |
<p>I am trying to access the Kubernetes dashboard (Azure AKS) using the command below but am getting the error attached.</p>
<pre><code>az aks browse --resource-group rg-name --name aks-cluster-name --listen-port 8851
</code></pre>
<p><a href="https://i.stack.imgur.com/IRLRB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IRLRB.jpg" alt="Kubernetes dashboard browing error" /></a></p>
| KRM | <p>Please read the AKS documentation on how to authenticate to the dashboard at this <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard" rel="nofollow noreferrer">link</a>. It also explains how to enable the add-on for newer Kubernetes versions.</p>
<p>Pasting it here for reference:</p>
<p><strong>Use a kubeconfig</strong></p>
<p>For both Azure AD enabled and non-Azure AD enabled clusters, a kubeconfig can be passed in. Ensure access tokens are valid; if your tokens have expired, you can refresh them via kubectl.</p>
<ol>
<li>Set the admin kubeconfig with <code>az aks get-credentials -a --resource-group <RG_NAME> --name <CLUSTER_NAME></code></li>
<li>Select <code>Kubeconfig</code> and click <code>Choose kubeconfig file</code> to open file selector</li>
<li>Select your kubeconfig file (defaults to $HOME/.kube/config)</li>
<li>Click Sign In</li>
</ol>
<p><strong>Use a token</strong></p>
<ol>
<li>For non-Azure AD enabled cluster, run <code>kubectl config view</code> and copy the token associated with the user account of your cluster.</li>
<li>Paste into the token option at sign in.</li>
<li>Click Sign In</li>
</ol>
<p>For Azure AD enabled clusters, retrieve your AAD token with the following command. Validate you've replaced the resource group and cluster name in the command.</p>
<pre><code>kubectl config view -o jsonpath='{.users[?(@.name == "clusterUser_<RESOURCE GROUP>_<AKS_NAME>")].user.auth-provider.config.access-token}'
</code></pre>
| Atul |
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: echo-job
spec:
template:
metadata:
name: echo
spec:
containers:
- name: echo
image: busybox
command: ['echo', 'Hello Kubernetes Jobs!']
restartPolicy: Never
backoffLimit: 4
</code></pre>
<p>Can someone please explain to me what is the difference between the following two fields:
<code>metadata: name: echo-job</code>
and
<code>spec: template: metadata: name: echo</code></p>
<p>What do these two different values represent?</p>
| james gem | <p><code>metadata.name</code> is the name <strong>of the job</strong>.</p>
<p><code>spec.template.metadata.name</code> is the name <strong>of the pod created by the job</strong> (which will also have a random suffix appended).</p>
<p>If I submit your manifest, we see the Job:</p>
<pre><code>$ kubectl get job
NAME COMPLETIONS DURATION AGE
echo-job 1/1 4s 14s
</code></pre>
<p>And we see the Pod created to run the Job:</p>
<pre><code>$ kubectl get pod
NAME READY STATUS RESTARTS AGE
echo-job-q9jxq 0/1 Completed 0 51s
</code></pre>
<hr />
<p>A single Job may end up creating multiple Pods if the Job fails. For example, if we replace your <code>echo</code> command with <code>["sh", "-c", "exit 1"]</code>, we might see:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
echo-job-2n9qt 0/1 Error 0 2s
echo-job-hw5wj 0/1 Error 0 22s
echo-job-n44ql 0/1 Error 0 33s
</code></pre>
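<p>If you ever want to see which Job a given Pod belongs to, you can check the Pod's <code>ownerReferences</code> (the pod name below is just the example from above):</p>
<pre><code>$ kubectl get pod echo-job-q9jxq -o jsonpath='{.metadata.ownerReferences[0].name}'
echo-job
</code></pre>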
| larsks |
<p>I need to get the node name and IP address of each node and then create a dictionary object. I am able to get the Kubernetes node list using the command below:</p>
<pre><code> - hosts: k8s
tasks:
- name: get cluster nodes
shell: "kubectl get nodes -o wide --no-headers | awk '{ print $1 ,$7}'"
register: nodes
- debug: var=nodes
- set_fact:
node_data: {}
- name: display node name
debug:
msg: "name is {{item.split(' ').0}}"
with_items: "{{nodes.stdout_lines}}"
- set_fact:
node_data: "{{ node_data | combine ( item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }, recursive=True) }}"
with_items: "{{ nodes.stdout_lines }}"
- debug: var=node_data
</code></pre>
<p>I got below error:</p>
<blockquote>
<p>FAILED! => {"msg": "template error while templating string: expected
token ',', got ':'. String: {{ node_data | combine ( item.split(' ').0
: { 'name':item.split(' ').0 , 'ip': item.split(' ').1 },
recursive=True) }}"}</p>
</blockquote>
<p>Output of kubectl command given below</p>
<pre><code>kubectl get nodes -o wide --no-headers | awk '{ print $1 ,$7}'
</code></pre>
<p>is as follows</p>
<pre><code>> ip-192-168-17-93.ec2.internal 55.175.171.80
> ip-192-168-29-91.ec2.internal 3.23.224.95
> ip-192-168-83-37.ec2.internal 54.196.19.195
> ip-192-168-62-241.ec2.internal 107.23.129.142
</code></pre>
<p>How to get the nodename and ip address into dictionary object in ansible?</p>
| Samselvaprabu | <p>The first argument to the <code>combine</code> filter must be a dictionary. You're calling:</p>
<pre><code> - set_fact:
node_data: "{{ node_data | combine ( item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }, recursive=True) }}"
with_items: "{{ nodes.stdout_lines }}"
</code></pre>
<p>You need to make that:</p>
<pre><code> - set_fact:
node_data: "{{ node_data | combine ({item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }}, recursive=True) }}"
with_items: "{{ nodes.stdout_lines }}"
</code></pre>
<p>Note the new <code>{...}</code> around your first argument to <code>combine</code>. You might want to consider reformatting this task for clarity, which might make this sort of issue more obvious:</p>
<pre><code> - set_fact:
node_data: >-
{{ node_data | combine ({
item.split(' ').0: {
'name': item.split(' ').0,
'ip': item.split(' ').1
},
}, recursive=True) }}
with_items: "{{ nodes.stdout_lines }}"
</code></pre>
<hr>
<p>You could even make it a little more clear by moving the calls to <code>item.split</code> into a <code>vars</code> section, like this:</p>
<pre><code> - set_fact:
node_data: >-
{{ node_data | combine ({
name: {
'name': name,
'ip': ip
},
}, recursive=True) }}
vars:
name: "{{ item.split(' ').0 }}"
ip: "{{ item.split(' ').1 }}"
with_items: "{{ nodes.stdout_lines }}"
</code></pre>
| larsks |
<p>I tried to configure envoy in my kubernetes cluster by following this example: <a href="https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem</a></p>
<p>My static envoy config:</p>
<pre><code> node:
cluster: test-cluster
id: test-id
dynamic_resources:
cds_config:
path: /var/lib/envoy/cds.yaml
lds_config:
path: /var/lib/envoy/lds.yaml
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 19000
</code></pre>
<p>The dynamic config from the configmap is mounted to <code>/var/lib/envoy/</code> and contains the files <code>cds.yaml</code> and <code>lds.yaml</code>.</p>
<p>I used a configmap to mount the config files (<code>cds.yaml</code> and <code>lds.yaml</code>) into the envoy pod (to <code>/var/lib/envoy/</code>), but unfortunately the envoy configuration doesn't change when I change the config in the configmap. The mounted config files themselves are updated as expected.</p>
<p>I can see from the logs, that envoy watches the config files:</p>
<pre><code>[2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'cds.yaml' fd: 1
[2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:140] maybe finish initialize state: 1
[2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:149] maybe finish initialize primary init clusters empty: true
[2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:95] loading 0 listener(s)
[2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:107] loading stats configuration
[2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'lds.yaml' fd: 1
</code></pre>
<p>and once I update the configmap I also get the logs that something changed:</p>
<pre><code>[2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data
[2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data
</code></pre>
<p>but envoy doesn't reload the config.</p>
<p>It seems that kubernetes updates the config files by atomically swapping a symlinked directory (the <code>..data</code> entry seen in the log), and envoy doesn't recognise that the config files have changed.</p>
<p>Is there an easy way to fix that? I don't want to run and xDS server for my tests but hot config reload would be great for my testing π</p>
<p>Thanks!</p>
| sschoebinger | <p>I think the answer to your issue is that the filesystem events that Envoy uses to reload its xDS config are not triggered by configmap volumes. <a href="https://github.com/mumoshu/crossover#why-not-use-configmap-volumes" rel="nofollow noreferrer">See more explanation in the README for the crossover utility.</a></p>
| Nick Sieger |
<p>I am using fluentbit as a pod deployment where I am creating many fluentbit pods which are attached to azure blob containers. Since multiple pods exist, I tried adding tolerations as I did on the daemonset deployment, but it failed. Also, every time I delete and restart the pods, fluentbit re-ingests everything again. Please advise on fixing these issues.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: deployment
spec:
volumes:
- name: config_map_name
configMap:
name: config_map_name
- name: pvc_name
persistentVolumeClaim:
claimName: pvc_name
containers:
- name: fluentbit-logger
image: fluent/fluent-bit:2.1.3
env:
- name: FLUENTBIT_USER
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: user
- name: FLUENTBIT_PWD
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: pwd
resources:
requests:
memory: "32Mi"
cpu: "50m"
limits:
memory: "64Mi"
cpu: "100m"
securityContext:
runAsUser: 0
privileged: true
volumeMounts:
- name: config_map_name
mountPath: "/fluent-bit/etc"
- name: pvc_name
mountPath: mount_path
tolerations:
- key: "dedicated"
operator: "Equal"
value: "sgelk"
effect: "NoSchedule"
- key: "dedicated"
operator: "Equal"
value: "kafka"
effect: "NoSchedule"
</code></pre>
<p>Getting the error as below</p>
<pre><code>error: error validating "/tmp/fluentbit-deploy.yaml": error validating data: ValidationError(Pod.spec.containers[0]): unknown field "tolerations" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
| yasin mohammed | <p>The <code>tolerations</code> attribute needs to be set on the pod, but you are attempting to set it on a container (that's why you see the error "unknown field "tolerations" in io.k8s.api.core.v1.Container"). You would need to write:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: deployment
spec:
volumes:
- name: config_map_name
configMap:
name: config_map_name
- name: pvc_name
persistentVolumeClaim:
claimName: pvc_name
containers:
- name: fluentbit-logger
image: fluent/fluent-bit:2.1.3
env:
- name: FLUENTBIT_USER
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: user
- name: FLUENTBIT_PWD
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: pwd
resources:
requests:
memory: "32Mi"
cpu: "50m"
limits:
memory: "64Mi"
cpu: "100m"
securityContext:
runAsUser: 0
privileged: true
volumeMounts:
- name: config_map_name
mountPath: "/fluent-bit/etc"
- name: pvc_name
mountPath: mount_path
tolerations:
- key: "dedicated"
operator: "Equal"
value: "sgelk"
effect: "NoSchedule"
- key: "dedicated"
operator: "Equal"
value: "kafka"
effect: "NoSchedule"
</code></pre>
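<p>A quick client-side dry run catches this kind of schema error before anything is sent to the cluster (the file name here is just a placeholder):</p>
<pre><code>kubectl apply --dry-run=client -f fluentbit-pod.yaml
</code></pre>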
| larsks |
<p>I'm trying to mount an <code>azureFile</code> volume on a Windows K8S pod, but I get the error </p>
<blockquote>
<p>MountVolume.SetUp failed for volume "azure-file-share" : azureMount:
SmbGlobalMapping failed: fork/exec
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe: The
parameter is incorrect., only SMB mount is supported now, output: ""</p>
</blockquote>
<p>What is wrong?</p>
| Phyxx | <p>The issue here was a bad <code>azurestorageaccountkey</code> value in the secret. You can have a secret like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: volume-azurefile-storage-secret
type: Opaque
data:
azurestorageaccountname: <base 64 encoded account name>
azurestorageaccountkey: <base 64 encoded account key>
</code></pre>
<p>What was throwing me was that Azure already base 64 encodes the account key, and it was not clear if you need to double encode it for this secret file. </p>
<p>The answer is yes, you do double encode it. If you do not, you get the error from the question.</p>
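<p>One way to avoid hand-encoding mistakes is to let <code>kubectl</code> do the base 64 encoding of the data fields for you; pass the account key exactly as Azure shows it (the values below are placeholders):</p>
<pre><code>kubectl create secret generic volume-azurefile-storage-secret \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=<account-key-as-shown-in-azure>
</code></pre>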
| Phyxx |
<p>I have a local minikube cluster on hyper-v, when i try to pull images from my private repository i get this error :</p>
<p><em>Failed to pull image "my-repolink/image": rpc error: code = Unknown desc = Error response from daemon: Get my-repolink/v2/: x509: certificate signed by unknown authority</em></p>
<p>When running <code>minikube docker-env</code> I get:</p>
<pre><code>$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://IP:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\myuser\.minikube\certs"
$Env:MINIKUBE_ACTIVE_DOCKERD = "minikube"
</code></pre>
<p>I was wondering if I can change <code>DOCKER_TLS_VERIFY</code> to "0" (and if so, how?) and whether it would have any effect on this error.</p>
| M__ | <p>You need to tell minikube which certificates to trust. The <a href="https://minikube.sigs.k8s.io/docs/reference/networking/proxy/#x509-certificate-signed-by-unknown-authority" rel="nofollow noreferrer">official doc</a> mentions this specific issue.</p>
<p>The suggestion is to put the appropriate self-signed certificate of your private registry into <code>~/.minikube/files/etc/ssl/certs</code>; then run <code>minikube delete</code> followed by <code>minikube start</code>.</p>
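<p>Roughly, that looks like this (the certificate file name is a placeholder for your registry's CA certificate):</p>
<pre><code>mkdir -p ~/.minikube/files/etc/ssl/certs
cp my-registry-ca.crt ~/.minikube/files/etc/ssl/certs/
minikube delete
minikube start
</code></pre>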
| Farcaller |
<p>I want all members of security group <code>sg-a</code> to be able to access several ports, e.g. 6443 (kubernetes api server), on all instances in <code>sg-a</code>: including themselves. </p>
<p>I create a rule in <code>sg-a</code> that says</p>
<ul>
<li>Type: Custom TCP</li>
<li>Protocol: TCP</li>
<li>Port Range: 6443</li>
<li>Source: <code>sg-a</code> </li>
</ul>
<p>However, <code>instanceA</code> cannot access port 6443 on itself. </p>
<p>When I update "Source" to Source: <code>instanceA.public.ip.address</code> , then <code>instanceA</code> can access port 6443 on itself.</p>
<p>However, I now have instance-specific rules in my security group. If possible, I would like to find a solution where I do not have to add new rules when I add a new instance to my security group.</p>
| MCI | <p>For the security group to operate as you describe, the instances will need to connect to each other via a <strong>Private IP address</strong>.</p>
<p>The fact that it works if you allow the Public IP address indicates that the connection is being made via the public IP address.</p>
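<p>As a quick check from <code>instanceA</code> itself, you can look up its private address from the instance metadata service and hit the port through that instead of the public IP (a sketch assuming IMDSv1 is available; <code>-k</code> just skips certificate validation, and any HTTP response -- even a 403 -- proves the security-group rule is working):</p>
<pre><code>PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
curl -k https://$PRIVATE_IP:6443
</code></pre>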
| John Rotenstein |
<p>I have KeyCloak Gateway running successfully locally providing Google OIDC authentication for the Kubernetes dashboard. However using the same settings results in an error when the app is deployed as a pod in the cluster itself.</p>
<p>The error I see when the Gateway is running in a K8S pod is:</p>
<pre><code>unable to exchange code for access token {"error": "invalid_request: Credentials in post body and basic Authorization header do not match"}
</code></pre>
<p>I'm calling the gateway with the following options:</p>
<pre><code>--enable-logging=true
--enable-self-signed-tls=true
--listen=:443
--upstream-url=https://mydashboard
--discovery-url=https://accounts.google.com
--client-id=<client id goes here>
--client-secret=<secret goes here>
--resources=uri=/*
</code></pre>
<p>With these settings applied to a container in a pod I can browse to the Gateway, am redirected to Google to log in, and then am redirected back to the Gateway where the error above is generated.</p>
<p>What could account for the difference between running the application locally and running it in a pod that would generate the above error?</p>
| Phyxx | <p>This turned out to be a copy/paste fail in the end, with the client secret being incorrect. The error message wasn't much help here, but at least it was a simple fix.</p>
| Phyxx |
<p>I'm deploying a pod written in quarkus in kubernetes and the startup seems to go fine. But there's a problem with the readiness and liveness probes, which report unhealthy.
For metrics I'm using smallrye metrics configured on port 8080 and on the path:</p>
<pre><code>quarkus.smallrye-metrics.path=/metrics
</code></pre>
<p>If i enter in the pod and i execute</p>
<pre><code>curl localhost:8080/metrics
</code></pre>
<p>the response is</p>
<pre><code># HELP base_classloader_loadedClasses_count Displays the number of classes that are currently loaded in the Java virtual machine.
# TYPE base_classloader_loadedClasses_count gauge
base_classloader_loadedClasses_count 7399.0
# HELP base_classloader_loadedClasses_total Displays the total number of classes that have been loaded since the Java virtual machine has started execution.
# TYPE base_classloader_loadedClasses_total counter
base_classloader_loadedClasses_total 7403.0
# HELP base_classloader_unloadedClasses_total Displays the total number of classes unloaded since the Java virtual machine has started execution.
# TYPE base_classloader_unloadedClasses_total counter
base_classloader_unloadedClasses_total 4.0
# HELP base_cpu_availableProcessors Displays the number of processors available to the Java virtual machine. This value may change during a particular invocation of the virtual machine.
# TYPE base_cpu_availableProcessors gauge
base_cpu_availableProcessors 1.0
# HELP base_cpu_processCpuLoad_percent Displays the "recent cpu usage" for the Java Virtual Machine process. This value is a double in the [0.0,1.0] interval. A value of 0.0 means that none of the CPUs were running threads from the JVM process during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running threads from the JVM 100% of the time during the recent period being observed. Threads from the JVM include the application threads as well as the JVM internal threads. All values between 0.0 and 1.0 are possible depending of the activities going on in the JVM process and the whole system. If the Java Virtual Machine recent CPU usage is not available, the method returns a negative value.
# TYPE base_cpu_processCpuLoad_percent gauge
base_cpu_processCpuLoad_percent 2.3218608761411404E-7
# HELP base_cpu_systemLoadAverage Displays the system load average for the last minute. The system load average is the sum of the number of runnable entities queued to the available processors and the number of runnable entities running on the available processors averaged over a period of time. The way in which the load average is calculated is operating system specific but is typically a damped time-dependent average. If the load average is not available, a negative value is displayed. This attribute is designed to provide a hint about the system load and may be queried frequently. The load average may be unavailable on some platforms where it is expensive to implement this method.
# TYPE base_cpu_systemLoadAverage gauge
base_cpu_systemLoadAverage 0.15
# HELP base_gc_time_total Displays the approximate accumulated collection elapsed time in milliseconds. This attribute displays -1 if the collection elapsed time is undefined for this collector. The Java virtual machine implementation may use a high resolution timer to measure the elapsed time. This attribute may display the same value even if the collection count has been incremented if the collection elapsed time is very short.
# TYPE base_gc_time_total counter
base_gc_time_total_seconds{name="Copy"} 0.032
base_gc_time_total_seconds{name="MarkSweepCompact"} 0.071
# HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector.
# TYPE base_gc_total counter
base_gc_total{name="Copy"} 4.0
base_gc_total{name="MarkSweepCompact"} 2.0
# HELP base_jvm_uptime_seconds Displays the time from the start of the Java virtual machine in milliseconds.
# TYPE base_jvm_uptime_seconds gauge
base_jvm_uptime_seconds 624.763
# HELP base_memory_committedHeap_bytes Displays the amount of memory in bytes that is committed for the Java virtual machine to use. This amount of memory is guaranteed for the Java virtual machine to use.
# TYPE base_memory_committedHeap_bytes gauge
base_memory_committedHeap_bytes 8.5262336E7
# HELP base_memory_maxHeap_bytes Displays the maximum amount of heap memory in bytes that can be used for memory management. This attribute displays -1 if the maximum heap memory size is undefined. This amount of memory is not guaranteed to be available for memory management if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.
# TYPE base_memory_maxHeap_bytes gauge
base_memory_maxHeap_bytes 1.348141056E9
# HELP base_memory_usedHeap_bytes Displays the amount of used heap memory in bytes.
# TYPE base_memory_usedHeap_bytes gauge
base_memory_usedHeap_bytes 1.2666888E7
# HELP base_thread_count Displays the current number of live threads including both daemon and non-daemon threads
# TYPE base_thread_count gauge
base_thread_count 11.0
# HELP base_thread_daemon_count Displays the current number of live daemon threads.
# TYPE base_thread_daemon_count gauge
base_thread_daemon_count 7.0
# HELP base_thread_max_count Displays the peak live thread count since the Java virtual machine started or peak was reset. This includes daemon and non-daemon threads.
# TYPE base_thread_max_count gauge
base_thread_max_count 11.0
# HELP vendor_cpu_processCpuTime_seconds Displays the CPU time used by the process on which the Java virtual machine is running in nanoseconds. The returned value is of nanoseconds precision but not necessarily nanoseconds accuracy. This method returns -1 if the the platform does not support this operation.
# TYPE vendor_cpu_processCpuTime_seconds gauge
vendor_cpu_processCpuTime_seconds 4.36
# HELP vendor_cpu_systemCpuLoad_percent Displays the "recent cpu usage" for the whole system. This value is a double in the [0.0,1.0] interval. A value of 0.0 means that all CPUs were idle during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running 100% of the time during the recent period being observed. All values betweens 0.0 and 1.0 are possible depending of the activities going on in the system. If the system recent cpu usage is not available, the method returns a negative value.
# TYPE vendor_cpu_systemCpuLoad_percent gauge
vendor_cpu_systemCpuLoad_percent 2.3565253563367224E-7
# HELP vendor_memory_committedNonHeap_bytes Displays the amount of non heap memory in bytes that is committed for the Java virtual machine to use.
# TYPE vendor_memory_committedNonHeap_bytes gauge
vendor_memory_committedNonHeap_bytes 5.1757056E7
# HELP vendor_memory_freePhysicalSize_bytes Displays the amount of free physical memory in bytes.
# TYPE vendor_memory_freePhysicalSize_bytes gauge
vendor_memory_freePhysicalSize_bytes 5.44448512E9
# HELP vendor_memory_freeSwapSize_bytes Displays the amount of free swap space in bytes.
# TYPE vendor_memory_freeSwapSize_bytes gauge
vendor_memory_freeSwapSize_bytes 0.0
# HELP vendor_memory_maxNonHeap_bytes Displays the maximum amount of used non-heap memory in bytes.
# TYPE vendor_memory_maxNonHeap_bytes gauge
vendor_memory_maxNonHeap_bytes -1.0
# HELP vendor_memory_usedNonHeap_bytes Displays the amount of used non-heap memory in bytes.
# TYPE vendor_memory_usedNonHeap_bytes gauge
vendor_memory_usedNonHeap_bytes 4.7445384E7
# HELP vendor_memoryPool_usage_bytes Current usage of the memory pool denoted by the 'name' tag
# TYPE vendor_memoryPool_usage_bytes gauge
vendor_memoryPool_usage_bytes{name="CodeHeap 'non-nmethods'"} 1357184.0
vendor_memoryPool_usage_bytes{name="CodeHeap 'non-profiled nmethods'"} 976128.0
vendor_memoryPool_usage_bytes{name="CodeHeap 'profiled nmethods'"} 4787200.0
vendor_memoryPool_usage_bytes{name="Compressed Class Space"} 4562592.0
vendor_memoryPool_usage_bytes{name="Eden Space"} 0.0
vendor_memoryPool_usage_bytes{name="Metaspace"} 3.5767632E7
vendor_memoryPool_usage_bytes{name="Survivor Space"} 0.0
vendor_memoryPool_usage_bytes{name="Tenured Gen"} 9872160.0
# HELP vendor_memoryPool_usage_max_bytes Peak usage of the memory pool denoted by the 'name' tag
# TYPE vendor_memoryPool_usage_max_bytes gauge
vendor_memoryPool_usage_max_bytes{name="CodeHeap 'non-nmethods'"} 1369600.0
vendor_memoryPool_usage_max_bytes{name="CodeHeap 'non-profiled nmethods'"} 976128.0
vendor_memoryPool_usage_max_bytes{name="CodeHeap 'profiled nmethods'"} 4793088.0
vendor_memoryPool_usage_max_bytes{name="Compressed Class Space"} 4562592.0
vendor_memoryPool_usage_max_bytes{name="Eden Space"} 2.3658496E7
vendor_memoryPool_usage_max_bytes{name="Metaspace"} 3.5769312E7
vendor_memoryPool_usage_max_bytes{name="Survivor Space"} 2883584.0
vendor_memoryPool_usage_max_bytes{name="Tenured Gen"} 9872160.0
</code></pre>
<p>So it seems metrics are working fine, but kubernetes returns this error:</p>
<pre><code>Warning Unhealthy 24m (x9 over 28m) kubelet Liveness probe errored: strconv.Atoi: parsing "metrics": invalid syntax
Warning Unhealthy 4m2s (x70 over 28m) kubelet Readiness probe errored: strconv.Atoi: parsing "metrics": invalid syntax
</code></pre>
<p>Any help?</p>
<p>Thanks</p>
| Giamma | <p>First I needed to fix the <code>dockerfile.jvm</code>:</p>
<pre><code>FROM openjdk:11
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# We make four distinct layers so if there are application changes the library layers can be re-used
# RUN ls -la target
COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=185 target/quarkus-app/*.jar /deployments/
COPY --chown=185 target/quarkus-app/app/ /deployments/app/
COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/
RUN java -version
EXPOSE 8080
USER root
ENV AB_JOLOKIA_OFF=""
ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV JAVA_DEBUG="true"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
CMD java ${JAVA_OPTS} -jar ${JAVA_APP_JAR}
</code></pre>
<p>This way the jar started working; without that CMD the openjdk image just starts jshell. After that I saw the log below:</p>
<pre><code>The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
2022-09-21 19:56:00,450 INFO [io.sma.health] (executor-thread-1) SRHCK01001: Reporting health down status: {"status":"DOWN","checks":[{"name":"Database connections health check","status":"DOWN","data":{"<default>":"Unable to execute the validation check for the default DataSource: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}}]}
</code></pre>
<p>DB connection in kubernetes is not working.</p>
<p>Deploy command: <code>mvn clean package -DskipTests -Dquarkus.kubernetes.deploy=true</code></p>
<p>"minikube dashboard" looks like below
<a href="https://i.stack.imgur.com/qsT4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qsT4l.png" alt="minikube dashboard" /></a></p>
<p>used the endpoints below</p>
<pre><code>quarkus.smallrye-health.root-path=/health
quarkus.smallrye-health.liveness-path=/health/live
quarkus.smallrye-metrics.path=/metrics
</code></pre>
<p>and the liveness URL looks like below in Firefox:
<a href="https://i.stack.imgur.com/T8nOt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T8nOt.png" alt="quarkus liveness" /></a></p>
<p>I needed to change some dependencies in the pom because I use minikube locally, and I needed to delete some java code because of db connection problems. You can find a working example at <a href="https://github.com/ozkanpakdil/quarkus-examples/tree/master/liveness-readiness-kubernetes" rel="nofollow noreferrer">https://github.com/ozkanpakdil/quarkus-examples/tree/master/liveness-readiness-kubernetes</a></p>
<p>You can see the definition yaml of the deployment below.</p>
<pre><code>mintozzy@mintozzy-MACH-WX9:~$ kubectl get deployments.apps app-version-checker -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
app.quarkus.io/build-timestamp: 2022-09-21 - 20:29:23 +0000
app.quarkus.io/commit-id: 7d709651868d810cd9a906609c8edad3f9d796c0
deployment.kubernetes.io/revision: "3"
prometheus.io/path: /metrics
prometheus.io/port: "8080"
prometheus.io/scheme: http
prometheus.io/scrape: "true"
creationTimestamp: "2022-09-21T20:13:21Z"
generation: 3
labels:
app.kubernetes.io/name: app-version-checker
app.kubernetes.io/version: 1.0.0-SNAPSHOT
name: app-version-checker
namespace: default
resourceVersion: "117584"
uid: 758d420b-ed22-48f8-9d6f-150422a6b38e
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: app-version-checker
app.kubernetes.io/version: 1.0.0-SNAPSHOT
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
app.quarkus.io/build-timestamp: 2022-09-21 - 20:29:23 +0000
app.quarkus.io/commit-id: 7d709651868d810cd9a906609c8edad3f9d796c0
prometheus.io/path: /metrics
prometheus.io/port: "8080"
prometheus.io/scheme: http
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app.kubernetes.io/name: app-version-checker
app.kubernetes.io/version: 1.0.0-SNAPSHOT
spec:
containers:
- env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: mintozzy/app-version-checker:1.0.0-SNAPSHOT
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /health/live
port: 8080
scheme: HTTP
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 10
name: app-version-checker
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /health/ready
port: 8080
scheme: HTTP
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 10
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-09-21T20:13:21Z"
lastUpdateTime: "2022-09-21T20:30:03Z"
message: ReplicaSet "app-version-checker-5cb974f465" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-09-22T16:09:48Z"
lastUpdateTime: "2022-09-22T16:09:48Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 3
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
| ozkanpakdil |
<p>I have deployed Kibana in a Kubernetes environment. If I give it a LoadBalancer-type Service, I can access it fine. However, when I try to access it via an nginx ingress, it fails. The configuration that I use in my nginx ingress is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: my-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: kibana
servicePort: {{ .Values.kibanaPort }}
path: /kibana
</code></pre>
<p>I have launched my kibana with the following setting:</p>
<pre><code> - name: SERVER_BASEPATH
value: /kibana
</code></pre>
<p>With this, I am able to access Kibana fine via the <code>LoadBalancer</code> IP. However, when I try to access it via the Ingress, most of the calls go through fine, except for a GET call to <code>vendors.bundle.js</code>, which fails almost consistently.</p>
<p>The log messages in the ingress during this call are as follows:</p>
<pre><code>2019/10/25 07:31:48 [error] 430#430: *21284 upstream prematurely closed connection while sending to client, client: 10.142.0.84, server: _, request: "GET /kibana/bundles/vendors.bundle.js HTTP/2.0", upstream: "http://10.20.3.5:3000/kibana/bundles/vendors.bundle.js", host: "1.2.3.4", referrer: "https://1.2.3.4/kibana/app/kibana"
10.142.0.84 - [10.142.0.84] - - [25/Oct/2019:07:31:48 +0000] "GET /kibana/bundles/vendors.bundle.js HTTP/2.0" 200 1854133 "https://1.2.3.4/kibana/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" 47 13.512 [some-service] 10.20.3.5:3000 7607326 13.513 200 506f778b25471822e62fbda2e57ccd6b
</code></pre>
<p>I am not sure why I get the <code>upstream prematurely closed connection while sending to client</code> error across different browsers. I have tried setting <code>proxy-connect-timeout</code> and <code>proxy-read-timeout</code> to 100 seconds, and it still fails. I am not sure if this is due to some kind of default size or chunking limit.</p>
<p>Interestingly, only some Kibana calls fail, not all of them.</p>
<p>In the browser, I see the error message:</p>
<pre><code>GET https://<ip>/kibana/bundles/vendors.bundle.js net::ERR_SPDY_PROTOCOL_ERROR 200
</code></pre>
<p>in the developer console.</p>
<p>Does anyone have an idea which config options I need to pass to my nginx-ingress to make the Kibana proxy_pass work?</p>
| Sankar | <p>I found the cause of the error. The <code>vendors.bundle.js</code> file is relatively large, and since I was accessing it over a relatively slow network, the requests were terminated. I fixed this by adding the following annotations to the nginx-ingress configuration:</p>
<pre><code>nginx.ingress.kubernetes.io/proxy-body-size: 10m   # change this as you need
nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
nginx.ingress.kubernetes.io/proxy-buffering: "on"
</code></pre>
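<p>For reference, a minimal sketch of where these annotations go, reusing the Ingress from the question (the <code>servicePort</code> value is a placeholder — keep whatever your chart already uses):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
spec:
  rules:
    - http:
        paths:
          - path: /kibana
            backend:
              serviceName: kibana
              servicePort: 5601   # placeholder; the question uses {{ .Values.kibanaPort }}
</code></pre>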
| Sankar |
<p>I want to add an initContainer to all of my pods (with a specific annotation) in my kustomize base. The newly added init container should be the first init container. My patch looks like this.</p>
<pre><code>patches:
- target:
kind: Pod
annotationSelector: "database_init=True"
patch: |-
- op: add
path: /spec/initContainers/0
value:
name: database-init
...
</code></pre>
<p>This works fine for all pods that already have an init container. Unfortunately, I also have pods without init containers, and for those the patch fails with the error <code>add operation does not apply: doc is missing path: \"/spec/initContainers/0\"</code>.
How do I write a patch that works for all my pods?</p>
<p>Complete example:</p>
<pre><code>.
βββ base
β βββ kustomization.yaml
β βββ pod-1.yaml
β βββ pod-2.yaml
βββ overlay
βββ kustomization.yaml
</code></pre>
<pre><code># cat base/pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod-1
annotations:
database_init: True
spec:
initContainers:
- name: something
image: alpine:latest
command: ["sleep", "10" ]
containers:
- name: main
image: alpine:latest
command: [ "sleep", "60" ]
</code></pre>
<pre><code># cat base/pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod-2
annotations:
database_init: True
spec:
# initContainers:
# - name: something
# image: alpine:latest
# command: ["sleep", "10" ]
containers:
- name: main
image: alpine:latest
command: [ "sleep", "60" ]
</code></pre>
<pre><code># cat base/kustomization.yaml
resources:
- pod-1.yaml
- pod-2.yaml
</code></pre>
<pre><code># cat overlay/kustomization.yaml
resources:
- ../base
patches:
- target:
kind: Pod
annotationSelector: "database_init=True"
patch: |-
- op: add
path: /spec/initContainers/0
value:
name: database-init
image: alpine:latest
command: [ "sleep", 10 ]
</code></pre>
<p>Result:</p>
<pre><code>$ kubectl kustomize overlay
error: add operation does not apply: doc is missing path: "/spec/initContainers/0": missing value
</code></pre>
<p>After uncommenting the init container of pod-2, everything works.</p>
<p>Edit: Added the annotationSelector.</p>
<p>Edit: Added the complete example code.</p>
| briconaut | <p>JSONPatch patches in general are fundamentally unable to handle the "append to existing list or create new list" situation. See e.g. <a href="https://stackoverflow.com/q/47670366/147356">this question</a>.</p>
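<p>To illustrate the limitation (this is a sketch of why neither plain JSONPatch variant helps, not a fix): targeting the whole list with <code>op: add</code> creates the list when it is missing, but per RFC 6902 it <em>replaces</em> the member when it already exists, so any pre-existing init containers would be silently dropped; targeting an index, as in your patch, fails when the list is absent.</p>
<pre><code>patches:
  - target:
      kind: Pod
      annotationSelector: "database_init=True"
    patch: |-
      # works for pods WITHOUT initContainers, but on pods that already
      # have some (like mypod-1) it replaces the whole list, losing "something"
      - op: add
        path: /spec/initContainers
        value:
          - name: database-init
            image: alpine:latest
            command: ["sleep", "10"]
</code></pre>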
<p>Fortunately, with Kustomize 5.0.1 it appears to be possible to achieve your goal using a strategic merge patch. Given your example configuration, if I place the following in <code>overlay/kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/
patches:
- target:
kind: Pod
annotationSelector: database_init=True
patch: |-
apiVersion: v1
kind: Pod
metadata:
name: __ignored__
spec:
initContainers:
- name: database-init
image: alpine:latest
command:
- init
- the
- database
</code></pre>
<p>Then running <code>kustomize build overlay</code> produces the following output:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
database_init: "True"
name: mypod-1
spec:
containers:
- command:
- sleep
- "60"
image: alpine:latest
name: main
initContainers:
- command:
- init
- the
- database
image: alpine:latest
name: database-init
- command:
- sleep
- "10"
image: alpine:latest
name: something
---
apiVersion: v1
kind: Pod
metadata:
annotations:
database_init: "True"
name: mypod-2
spec:
containers:
- command:
- sleep
- "60"
image: alpine:latest
name: main
initContainers:
- command:
- init
- the
- database
image: alpine:latest
name: database-init
</code></pre>
| larsks |
<p>I have created a Pub/Sub topic to which I publish a message every time a new object is uploaded to the bucket. Now I want to create a subscription that pushes a notification to an endpoint every time a new object is uploaded to that bucket. Following the documentation, I wanted something like this:</p>
<pre><code>gcloud alpha pubsub subscriptions create orderComplete \
    --topic projects/PROJECT-ID/topics/TOPIC \
    --push-endpoint http://localhost:5000/ENDPOINT/ \
    --ack-deadline=60
</code></pre>
<p>However, my app is running on Kubernetes and it seems that Pub/Sub cannot reach my endpoint. Any suggestions?</p>
| gr7 | <p>Yeah, as @jakub-bujny points out, you need an SSL endpoint. One solution on GKE is to use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Google's managed certificates</a> with an Ingress resource (the link shows you how).</p>
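<p>A rough sketch of what that can look like — the domain, resource names and backend Service below are placeholders, and the <code>ManagedCertificate</code> apiVersion may differ depending on your GKE version (see the linked docs):</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: push-endpoint-cert            # hypothetical name
spec:
  domains:
    - push.example.com                # must be a domain you own, pointed at the Ingress IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: push-endpoint
  annotations:
    networking.gke.io/managed-certificates: push-endpoint-cert
spec:
  rules:
    - host: push.example.com
      http:
        paths:
          - path: /ENDPOINT
            pathType: Prefix
            backend:
              service:
                name: my-app          # hypothetical Service in front of your pods
                port:
                  number: 5000
</code></pre>
<p>The subscription's <code>--push-endpoint</code> would then point at <code>https://push.example.com/ENDPOINT/</code> instead of localhost.</p>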
| CpILL |