Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---
<p>I have a monorepo Node.js/React app that I want to deploy to GKE using Helm charts. I added two Dockerfiles, one for the frontend and one for the backend.</p>
<p>I'm using Helm Charts to deploy my microservices to the Kubernetes cluster but this time I don't know how to configure it so that I can deploy both back and front simultaneously to GKE.</p>
<p>Should I configure a values.yaml file for each service and keep the other templates as they are (ingress, service, deployment, hpa) or should I work on each service independently?</p>
| Dawn tns | <p>Posting this as an answer for better visibility since it's a good solution:</p>
<p><a href="https://stackoverflow.com/questions/70436517/deploy-both-front-and-backend-using-helm-charts#comment124510627_70436517">David suggested</a> that you can</p>
<blockquote>
<p>probably put both parts into the same Helm chart, probably with different <code>templates/*.yaml</code> files for the front- and back-end parts.</p>
<p>If you had a good argument that the two parts are separate (maybe different development teams work on them and you have a good public API contract) it's fine to deploy them separately.</p>
</blockquote>
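<p>For illustration, a rough sketch of what that single-chart layout could look like (the file names and values keys below are assumptions, not part of the original question):</p>
<pre><code>mychart/
  Chart.yaml
  values.yaml                    # one values file with a section per component
  templates/
    frontend-deployment.yaml
    frontend-service.yaml
    backend-deployment.yaml
    backend-service.yaml
    ingress.yaml                 # e.g. routes / to the frontend and /api to the backend
</code></pre>
<p>with a values.yaml along the lines of:</p>
<pre><code>frontend:
  image: my-registry/frontend:1.0.0   # assumption: replace with your images
  replicas: 2
backend:
  image: my-registry/backend:1.0.0
  replicas: 2
</code></pre>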
| Wojtek_B |
<p>I am running an Elasticsearch stack with Kibana and Fluent Bit on Kubernetes. To set up security I set passwords using this command:</p>
<pre><code>kubectl exec -it $(kubectl get pods -n logging | grep elasticsearch-client | sed -n 1p | awk '{print $1}') -n logging -- bin/elasticsearch-setup-passwords auto -b
</code></pre>
<p>Following <a href="https://www.studytonight.com/post/setup-elasticsearch-with-authentication-xpack-security-enabled-on-kubernetes" rel="nofollow noreferrer">this tutorial</a></p>
<p>As I have to do this setup quite often during development, I want to automate this step, but I cannot quite get it working. Since I am using <a href="https://docs.tilt.dev/tiltfile_concepts.html" rel="nofollow noreferrer">Tiltfiles</a>, my first instinct was to call a bash script from there, but as <code>Starlark</code> does not support this, it is not an option. My next attempt was to try it with <code>Init Containers</code>, but as they cannot access the other containers this also did not work.
Does anyone know what the correct way to do this is?</p>
| Manuel | <p>As @David Maze pointed out I should put the (bootstrap) password into a <code>secret</code>.</p>
<p>My initial problem was that I thought that the <code>elasticsearch-setup-passwords</code> tool was the only way to set up the (user) password.</p>
<p>In other words, I did not know the password before the program was started.
But I just found a way to change the bootstrap password beforehand <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html" rel="nofollow noreferrer">here</a>, so the problem is solved.</p>
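<p>For anyone looking for a concrete example: if you are using the official Elasticsearch image, one way to apply this is to store the chosen bootstrap password in a Kubernetes <code>Secret</code> and pass it via the <code>ELASTIC_PASSWORD</code> environment variable. This is only a sketch; the secret name and password below are placeholders, not values from the original setup.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: elastic-credentials      # placeholder name
  namespace: logging
type: Opaque
stringData:
  password: change-me            # placeholder bootstrap password
---
# in the elasticsearch container spec:
#   env:
#   - name: ELASTIC_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: elastic-credentials
#         key: password
</code></pre>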
| Manuel |
<p>I'm running a mongoDB (5.0.12) instance as a kubernetes pod. Suddenly the pod is failing and I need some help to understand the logs:</p>
<pre><code>{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"AuthorizationManager-1","msg":"WiredTiger error","attr":{"error":1,"message":"[1663094391:104664][1:0x7fc5224cc700], file:index-9--3195476868760592993.wt, WT_SESSION.open_cursor: __posix_open_file, 808: /data/db/index-9--3195476868760592993.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"STORAGE", "id":50882, "ctx":"AuthorizationManager-1","msg":"Failed to open WiredTiger cursor. This may be due to data corruption","attr":{"uri":"table:index-9--3195476868760592993","config":"overwrite=false","error":{"code":8,"codeName":"UnknownError","errmsg":"1: Operation not permitted"},"message":"Please read the documentation for starting MongoDB with --repair here: http://dochub.mongodb.org/core/repair"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"AuthorizationManager-1","msg":"Fatal assertion","attr":{"msgid":50882,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp","line":109}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"AuthorizationManager-1","msg":"\n\n***aborting after fassert() failure\n\n"}
</code></pre>
<p>So why do I get <code>Operation not permitted</code>? I already ran <code>mongod --repair</code>, but the error still occurs.</p>
<p>This is how the pod is deployed:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
strategy:
type: Recreate
template:
metadata:
labels:
app: mongodb
spec:
hostname: mongodb
# securityContext:
# runAsUser: 999
# runAsGroup: 3000
# fsGroup: 2000
volumes:
- name: data
persistentVolumeClaim:
claimName: data
containers:
- name: mongodb
image: mongo:5.0.12
args: ["--auth", "--dbpath", "/data/db"]
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
volumeMounts:
- mountPath: /data/db
name: data
# securityContext:
# allowPrivilegeEscalation: false
</code></pre>
<p><em>Update</em></p>
<p>The PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
| user3142695 | <p>You can try checking the permissions on that file before starting MongoDB:</p>
<pre><code>ls -l
</code></pre>
<p>Then, using <strong>chmod</strong>, you can try changing the permissions and running it again.</p>
<p>OR</p>
<p>You can refer here, this might help you:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</blockquote>
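<p>As a concrete but hedged sketch of that security-context approach for the Deployment in the question: the official <code>mongo</code> image normally runs as UID/GID 999, so giving the mounted volume that group via <code>fsGroup</code> is usually what is needed. Adjust the IDs to whatever your image actually uses.</p>
<pre><code>spec:
  securityContext:
    fsGroup: 999            # assumption: the mongo image runs as UID/GID 999
  containers:
  - name: mongodb
    securityContext:
      runAsUser: 999
      allowPrivilegeEscalation: false
</code></pre>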
| Ryuzaki Lawliet |
<p>I am having an issue on GKE where this error is being spewed from all namespaces. I am not sure what the issue might be or how to troubleshoot it.</p>
<p>message: "MountVolume.SetUp failed for "<strong>volume-name</strong>-token-m4rtn" : failed to sync secret cache: timed out waiting for the condition"</p>
<p>It occurs for almost all pods in all namespaces. Has anyone come across this or have any ideas on how I can troubleshoot?</p>
| David Essien | <p>The error you are receiving points to a problem with RBAC (role-based access control) permissions; it looks like the service account used by the pod does not have enough permissions.</p>
<p>Hence, the default service account within the namespace you are deploying to is not authorized to mount the secret that you are trying to mount into your Pod.</p>
<p>You can find further information at <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a>,
and you can also take a look at the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#define-and-assign" rel="nofollow noreferrer">Google documentation</a>.</p>
<p>For example, the following Role grants read access (get, watch, and list) to all pods in the accounting Namespace:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: accounting
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
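<p>Note that a Role only takes effect once it is bound to a subject. A minimal RoleBinding for the example above might look like this (the service account name is an assumption; use the one your pod actually runs as):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: accounting
subjects:
- kind: ServiceAccount
  name: default              # assumption: the pod uses the default service account
  namespace: accounting
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>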
<p>You can also take a look at the following similar cases: <a href="https://www.reddit.com/r/kubernetes/comments/noyrvi/failed_to_sync_configmap_cache_timed_out/" rel="nofollow noreferrer">a case on Reddit</a> and <a href="https://stackoverflow.com/questions/64908290/pods-starting-but-not-working-in-kubernetes">a Stack Overflow case</a>.</p>
| Jose Luis Delgadillo |
<p>I need to test a scenario to see how my app deals with latency. My application is in K8S on Azure (AKS) and its connecting to a Postgres DB in Azure. Anyone know of any good tools that aren't too tricky to implement?</p>
| DeirdreRodgers | <p>I ended up creating an HAProxy VM and directing my DB traffic through it. Then, on the HAProxy host, I used the Linux network emulator (netem) tool to delay the traffic. It works really well.</p>
<p><a href="https://wiki.linuxfoundation.org/networking/netem" rel="nofollow noreferrer">https://wiki.linuxfoundation.org/networking/netem</a></p>
| DeirdreRodgers |
<p>I am a beginner in Helm chart templating and I'm looking for best practices.
I would like to create a Helm chart template generic enough to use the same chart for every team project (backend, frontend, ...).
To make it generic, I could let developers specify the many possible cases in values.yaml (volumes for the deployment, network policy ingress/egress, etc.),
and keep the Kubernetes templates (deployment, service, ...) generic enough to never mention any application-specific keys.
That way, developers would only modify values.yaml to add values for their application's behaviour.</p>
<p>The disadvantage is that the generic Kubernetes templates will not contain any logic about the application, and they will be hard to maintain (because they have to handle every possible case).
The advantage is that developers don't need to understand Helm, because they never modify the Kubernetes templates.</p>
<p>Do you have any experience with this?</p>
| julus | <p>You can use <code>_*.tpl</code> files to define generic templates. They are located in <code>./templates/_*.tpl</code> (<code>.</code> being the directory with the global Chart.yaml and values.yaml).
Also, by default in Helm, global values override local values; a solution to this can be found here: <a href="https://github.com/helm/helm/issues/5676" rel="nofollow noreferrer">https://github.com/helm/helm/issues/5676</a></p>
<p>By using these 2 techniques in conjunction you can make generic templates and only use values.yaml to render what you want to render.</p>
<p>For example:</p>
<p>values.yaml:</p>
<pre><code>global:
defaults:
switches:
volumesEnabled: false
ingressEnabled: false
ingress:
host: "generic-host.com"
volumes:
volumeName: "generic-volume-name"
subchart1:
defaultOverrides:
switches:
volumesEnabled: true
volumes:
volumeName: "not-so-generic-name"
subchart2:
defaultOverrides:
switches:
volumesEnabled: true
ingressEnabled: true
</code></pre>
<p>Then the templates (<code>java</code> is just for grouping templates into one category; you can try to guess which language my backend microservices are written in :) )</p>
<p>./templates/java/_deployment.tpl:</p>
<pre><code>{{- define "templates.java.deployment" }}
{{- $properties := merge .Values.defaultOverrides $.Values.global.defaults -}}
{{/* generic deployment structure */}}
{{- if $properties.switches.volumesEnabled -}}
volume: {{ $properties.volumes.volumeName }}
{{- end }}
{{/* generic deployment structure */}}
{{- end }}
</code></pre>
<p>./templates/java/_ingress.tpl:</p>
<pre><code>{{- define "templates.java.ingress" }}
{{- $properties := merge .Values.defaultOverrides $.Values.global.defaults -}}
{{- if $properties.switches.ingressEnabled -}}
host: {{ $properties.ingress.host }}
{{/* generic ingress structure */}}
{{- end }}
{{- end }}
</code></pre>
<p>And then subchart templates
./charts/subchart1/templates/deployment.yaml:</p>
<pre><code>{{ include "templates.java.deployment" . }}
</code></pre>
<p>./charts/subchart1/templates/ingress.yaml:</p>
<pre><code>{{ include "templates.java.ingress" . }}
</code></pre>
<p>subchart2 has exactly the same includes.</p>
<p>In the end we will have:</p>
<p>subchart1:</p>
<ul>
<li>has deployment</li>
<li>volumeName is overridden from local values with "not-so-generic-name"</li>
<li>ingress is not rendered at all</li>
</ul>
<p>subchart2:</p>
<ul>
<li>has deployment</li>
<li>volumeName is default from global values</li>
<li>ingress host is default from global values</li>
</ul>
<p>But I would say that it's bad practice to generalize too much, because it will make your templates overly complex. In my case I found 2 distinct groups with nearly identical manifests within them (basically frontend and backend), made a set of _*.tpl files for each of them, and set default values for each group respectively in the global values.yaml.</p>
| flutt13 |
<p>I was doing a project for my class whose goal was to implement a simple backend application with a MariaDB database. I chose Java Spring Boot for my backend.</p>
<p>The backend works when I test it with the database in a Docker container.</p>
<p>But when I moved to Kubernetes, my backend won't connect to the database.</p>
<p>After I run <code>kubectl apply -f .</code>, the backend pods keep spawning and dying because they cannot connect to the database.</p>
<p>Here are all my Kubernetes deployment files.</p>
<p>mariadb-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mariadb
labels:
app: mariadb
type: database
spec:
replicas: 1
selector:
matchLabels:
app: mariadb
template:
metadata:
labels:
app: mariadb
type: database
spec:
containers:
- name: mariadb
image: mariadb
ports:
- containerPort: 3306
env:
- name: MARIADB_ROOT_PASSWORD
value: rootpass
- name: MARIADB_DATABASE
value: pastebin
- name: MARIADB_USER
value: dev
- name: MARIADB_PASSWORD
value: devpass
</code></pre>
<p>mariadb-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mariadb
labels:
name: mariadb
spec:
ports:
- protocol: TCP
port: 3306
targetPort: 3306
type: ClusterIP
selector:
app: mariadb
name: mariadb
</code></pre>
<p>backend-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: scalable-p1-backend
labels:
app: scalable-p1-backend
spec:
replicas: 1
selector:
matchLabels:
app: scalable-p1-backend
template:
metadata:
labels:
app: scalable-p1-backend
spec:
containers:
- name: scalable-p1-backend
image: ghcr.io/cs-muic/scalable-p1-taextreme/scalable-p1-backend:latest
env:
- name: SPRING_DATASOURCE_URL
value: jdbc:mariadb://mariadb:3306/pastebin
- name: SPRING_DATASOURCE_USERNAME
value: dev
- name: SPRING_DATASOURCE_PASSWORD
value: rootpass
imagePullSecrets:
- name: scalable-p1-taextreme-secret
</code></pre>
<p>backend-svc.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: scalable-p1-backend
spec:
selector:
app: scalable-p1-backend
type: ClusterIP
ports:
- protocol: TCP
port: 80
targetPort: 5000
</code></pre>
<p>I have edited those files many times already and tried following every solution that I found, but nothing seems to work.</p>
<p>Error message from the backend logs:</p>
<pre><code>java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=mariadb-svc)(port=3306)(type=master) : Socket fail to connect to host:mariadb-svc, port:3306. Connection refused (Connection refused)
</code></pre>
| TaeXtreme | <p>Have you tried</p>
<pre><code> - name: SPRING_DATASOURCE_URL
value: jdbc:mariadb://mariadb.default.svc:3306/pastebin
</code></pre>
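<p>A quick way to confirm which DNS name the Service actually answers to inside the cluster (assuming it lives in the <code>default</code> namespace) is a throwaway debug pod:</p>
<pre><code>kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup mariadb.default.svc.cluster.local
</code></pre>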
| Lazy0147 |
<p>I am new to docker/k8s. I was running a simple Nodejs app on my local machine. I am using skaffold.</p>
<p>Now I am trying to run the same thing on a Kubernetes cluster on DigitalOcean. I am getting the following error:</p>
<pre><code>Error: container auth is waiting to start: rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c can't be pulled.
</code></pre>
<p>My pod status was <code>ErrImagePull</code>.
When I looked at the pod's events, it showed the following failures:</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m10s default-scheduler Successfully assigned default/auth-699894675-66pwq to xc-k8s-dev-30pnw
Normal Pulling 4m54s (x2 over 5m7s) kubelet Pulling image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c"
Warning Failed 4m53s (x2 over 5m5s) kubelet Failed to pull image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c": rpc error: code = Unknown desc = Error response from daemon: manifest for rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c not found: manifest unknown: manifest unknown
Warning Failed 4m53s (x2 over 5m5s) kubelet Error: ErrImagePull
Normal SandboxChanged 4m47s (x7 over 5m5s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 4m46s (x6 over 5m2s) kubelet Back-off pulling image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c"
Warning Failed 4m46s (x6 over 5m2s) kubelet Error: ImagePullBackOff
</code></pre>
<p>The error appears only on DigitalOcean.
I tried to search for the issue but was unable to resolve the problem. The error is related to pulling the image from Docker; my repo is public, but I am still unable to pull it.</p>
<p>Can anyone help me to solve this problem?</p>
<p><strong>Edit 1:</strong>
My auth-depl.yaml is like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: rehanpunjwani/auth:latest
env:
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: JWT_KEY
---
</code></pre>
<p><strong>Edit 2</strong>:
output of the <code>kubectl get pod -o yaml -l app=auth</code></p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-01-20T11:22:48Z"
generateName: auth-6c596959dc-
labels:
app: auth
app.kubernetes.io/managed-by: skaffold
pod-template-hash: 6c596959dc
skaffold.dev/run-id: d99c01da-cb0b-49e8-bcb8-98ecd6d1c9f9
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:generateName: {}
f:labels:
.: {}
f:app: {}
f:app.kubernetes.io/managed-by: {}
f:pod-template-hash: {}
f:skaffold.dev/run-id: {}
f:ownerReferences:
.: {}
k:{"uid":"a0c69a6b-fe95-4bed-8630-6abbae1d97f9"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
f:containers:
k:{"name":"auth"}:
.: {}
f:env:
.: {}
k:{"name":"JWT_KEY"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kube-controller-manager
operation: Update
time: "2021-01-20T11:22:48Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ContainersReady"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Initialized"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Ready"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:containerStatuses: {}
f:hostIP: {}
f:podIP: {}
f:podIPs:
.: {}
k:{"ip":"10.244.0.22"}:
.: {}
f:ip: {}
f:startTime: {}
manager: kubelet
operation: Update
time: "2021-01-20T11:26:07Z"
name: auth-6c596959dc-9ghtg
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: auth-6c596959dc
uid: a0c69a6b-fe95-4bed-8630-6abbae1d97f9
resourceVersion: "1994444"
selfLink: /api/v1/namespaces/default/pods/auth-6c596959dc-9ghtg
uid: c64653af-d17c-4c96-bea1-338b50b04567
spec:
containers:
- env:
- name: JWT_KEY
valueFrom:
secretKeyRef:
key: JWT_KEY
name: jwt-secret
image: rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526
imagePullPolicy: IfNotPresent
name: auth
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-drzwc
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
imagePullSecrets:
- name: regcred
nodeName: xc-k8s-dev-30png
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-drzwc
secret:
defaultMode: 420
secretName: default-token-drzwc
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-01-20T11:22:48Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-01-20T11:22:48Z"
message: 'containers with unready status: [auth]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-01-20T11:22:48Z"
message: 'containers with unready status: [auth]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-01-20T11:22:48Z"
status: "True"
type: PodScheduled
containerStatuses:
- image: rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526
imageID: ""
lastState: {}
name: auth
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526"
reason: ImagePullBackOff
hostIP: 10.110.0.3
phase: Pending
podIP: 10.244.0.22
podIPs:
- ip: 10.244.0.22
qosClass: BestEffort
startTime: "2021-01-20T11:22:48Z"
kind: List
metadata:
resourceVersion: ""
selfLink: ""```
</code></pre>
| dev_rrp | <p>The issue was with the Skaffold configuration. Setting <code>build.local.push</code> to <code>true</code> solved the issue for me.</p>
<pre><code>deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: true #Here it was false
</code></pre>
| dev_rrp |
<p>So I have set up an AWS EKS cluster (using Fargate) and intend to use it for GitLab CI/CD integration.</p>
<p>Then I tried to set up the base domain based on <a href="https://docs.gitlab.com/ee/topics/autodevops/quick_start_guide.html#install-ingress" rel="nofollow noreferrer">this doc</a>:</p>
<pre><code>helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install nginx-ingress nginx-stable/nginx-ingress
</code></pre>
<p><code>kubectl get service nginx-ingress-nginx-ingress -ojson | jq -r '.status.loadBalancer.ingress[].ip'</code> always returns <code>null</code>, even after waiting for a long time.</p>
<p>Basically I need a loadBalancer IP for setting up base domain in gitlab.</p>
<p>I read that I need to set the service type to LoadBalancer, so I retried with
<code> helm install nginx-ingress nginx-stable/nginx-ingress --set controller.service.type=LoadBalancer</code></p>
<p>Same result</p>
<p>result of <code>kubectl get service nginx-ingress-nginx-ingress -ojson</code> includes</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"meta.helm.sh/release-name": "nginx-ingress",
"meta.helm.sh/release-namespace": "default"
},
"creationTimestamp": "2021-03-26T19:49:02Z",
"finalizers": [
"service.kubernetes.io/load-balancer-cleanup"
],
"labels": {
"app.kubernetes.io/instance": "nginx-ingress",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "nginx-ingress-nginx-ingress",
"helm.sh/chart": "nginx-ingress-0.8.1"
},
.
.
.
.
"status": {
"loadBalancer": {
"ingress": [
{
"hostname": "<xyz-key>.ap-southeast-1.elb.amazonaws.com"
}
]
}
}
}
</code></pre>
<p>Any help on <strong>how to obtain a load balancer IP address</strong> would be appreciated</p>
| Josh Kurien | <p>As @rockn-rolla mentioned in the comments, an Elastic Load Balancer provisioned through nginx-ingress on EKS is exposed by a hostname rather than an IP address. The load balancer does have IP addresses tied to the subnets it's deployed into (Elastic Network Interfaces), but it's possible that they change. If you have a domain name, you can point a custom subdomain to the load balancer and provide that to GitLab. This GitLab blog has additional details: <a href="https://about.gitlab.com/blog/2020/05/05/deploying-application-eks/" rel="nofollow noreferrer">https://about.gitlab.com/blog/2020/05/05/deploying-application-eks/</a></p>
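<p>Since the load balancer status carries a <code>hostname</code> rather than an <code>ip</code> field, the jq query from the question can be adjusted to print it:</p>
<pre><code>kubectl get service nginx-ingress-nginx-ingress -ojson | jq -r '.status.loadBalancer.ingress[].hostname'
</code></pre>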
| Simon |
<p>I have created a Docker image (a Java web application), created a Kubernetes cluster with 1 master and 1 worker, and created a deployment and a service. All the resources seem to run fine, as I have checked with 'kubectl describe resource resourcename'. Finally, I used an Ingress in order to expose the services outside the cluster. The Ingress resource seems to work fine, as there are no errors when describing the Ingress object. However, on accessing the host in a browser from another machine, I get a "Your connection is not private" error. I am pretty new to Kubernetes and I am unable to debug the cause of this.</p>
<p>Below are service/deployment yaml files, ingress file contents and the status of resources.</p>
<p>Service and Deployment YAML:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: hotelapplication
labels:
name: hotelapplication
spec:
ports:
- name: appport
port: 8080
targetPort: 8080
selector:
app: hotelapplication
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hotelapplication
spec:
selector:
matchLabels:
app: hotelapplication
replicas: 1
template:
metadata:
labels:
app: hotelapplication
spec:
containers:
- name: hotelapplication
image: myname/hotelapplication:2.0
imagePullPolicy: Always
ports:
- containerPort: 8080
env: # Setting Enviornmental Variables
- name: DB_HOST # Setting Database host address from configMap
valueFrom:
configMapKeyRef:
name: db-config # name of configMap
key: host
- name: DB_NAME # Setting Database name from configMap
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME # Setting Database username from Secret
valueFrom:
secretKeyRef:
name: db-user # Secret Name
key: username
- name: DB_PASSWORD # Setting Database password from Secret
valueFrom:
secretKeyRef:
name: db-user
key: password
</code></pre>
<p>Below is the ingress yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: springboot-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: testing.mydomain.dev
http:
paths:
- backend:
serviceName: hotelapplication
servicePort: 8080
</code></pre>
<p>All the resources - pods, deployments, services, endpoints seem to work fine.</p>
<p>Ingress:</p>
<pre><code>Name: springboot-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
testing.mydomain.dev
hotelapplication:8080 (192.168.254.51:8080)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events: <none>
</code></pre>
<p>Services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hotelapplication ClusterIP 10.109.220.90 <none> 8080/TCP 37m
</code></pre>
<p>Deployments:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
hotelapplication 1/1 1 1 5h55m
mysql-hotelapplication 1/1 1 1 22h
nfs-client-provisioner 1/1 1 1 23h
</code></pre>
<p>Pods object:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hotelapplication-596f65488f-cnhlc 1/1 Running 0 149m
mysql-hotelapplication-65587cb8c8-crx4v 1/1 Running 0 22h
nfs-client-provisioner-64f4fb59d8-cb6hd 1/1 Running 0 23h
</code></pre>
<p>I have deleted services/deployments/pods and retried, all in vain. Please help me to fix this.</p>
<p>Edit 1:</p>
<p>I have added <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> to the ingress definition, but I am facing the same issue. On accessing the public IP of the host, I get a 502 Bad Gateway error.</p>
<p>In the ingress controller logs, I found the errors below:</p>
<pre><code>P/1.1", upstream: "http://192.168.254.56:8081/", host: "myip"
2021/05/06 06:01:33 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:33 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
W0506 06:06:46.328727 6 controller.go:391] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W0506 06:09:06.921564 6 controller.go:391] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
</code></pre>
| CuteBoy | <p>Apparently, I had configured an incorrect containerPort in the deployment. There is nothing wrong with the ingress configuration, but Kubernetes did not actually show any errors in the logs, which made debugging pretty difficult.</p>
<p>Just a tip for beginners: before trying to expose your services through an Ingress, test each service by setting its 'type' to 'NodePort'. This way you can make sure the service is configured correctly, simply by accessing it from outside the cluster.</p>
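<p>A minimal sketch of that NodePort test, based on the Service from the question (the <code>nodePort</code> value is an assumption; any free port in the 30000-32767 range works):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: hotelapplication
spec:
  type: NodePort
  selector:
    app: hotelapplication
  ports:
  - name: appport
    port: 8080
    targetPort: 8080
    nodePort: 30080      # then test with http://<node-ip>:30080
</code></pre>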
| CuteBoy |
<p>I am trying to run an exposed service on my MacBook (M1), but I am failing to browse the service locally. What could be the root cause?
Please see the attached images.</p>
<p><a href="https://i.stack.imgur.com/Fd068.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fd068.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/XzvUR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzvUR.jpg" alt="enter image description here" /></a></p>
| David | <p>Is the shortcut command for fetching the minikube IP and a service’s NodePort not working?</p>
<pre class="lang-sh prettyprint-override"><code>minikube service --url <service-name>
</code></pre>
| Alan |
<p>I have a list of customer IDs that I want to pass to the values.yml in the Helm chart, and then create a deployment for each customer. Is that possible? This is what I want to pass in values.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>customer:
- 62
- 63
</code></pre>
<p>and this is my deployment template
<a href="https://gist.github.com/JacobAmar/8c45e98f9c34bfd662b9fd11a534b9d5" rel="nofollow noreferrer">https://gist.github.com/JacobAmar/8c45e98f9c34bfd662b9fd11a534b9d5</a></p>
<p>I'm getting this error when I install the chart:
"parse error at (clientmodule/templates/deployment.yaml:51): unexpected EOF"</p>
<p>I also want to pass the customer ID to the default command in the container. Thanks for the help :)</p>
| Jacob Amar | <p>OK, so I found a solution to why Helm was only creating a deployment for the last item in the list: Helm uses "---" as a separator between YAML specifications. So now my template looks like this and it works :)</p>
<pre><code> {{ range .Values.customer.id }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: "clientmodule-customer-{{ . }}"
labels:
{{- include "clientmodule.labels" $ | nindent 4 }}
customer: "{{ . }}"
spec:
{{- if not $.Values.autoscaling.enabled }}
replicas: {{ $.Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "clientmodule.selectorLabels" $ | nindent 6 }}
template:
metadata:
{{- with $.Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "clientmodule.selectorLabels" $ | nindent 8 }}
spec:
{{- with $.Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "clientmodule.serviceAccountName" $ }}
securityContext:
{{- toYaml $.Values.podSecurityContext | nindent 8 }}
containers:
- name: clientmodule-customer-{{ . }}
securityContext:
{{- toYaml $.Values.securityContext | nindent 12 }}
image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
imagePullPolicy: {{ $.Values.image.pullPolicy }}
command: ["sh","-c",{{$.Values.command}}]
resources:
{{- toYaml $.Values.resources | nindent 12 }}
{{- with $.Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with $.Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with $.Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
---
{{- end }}
</code></pre>
<p>you can refer to this answer too : <a href="https://stackoverflow.com/questions/51024074/how-can-i-iteratively-create-pods-from-list-using-helm/51025501">looping over helm</a></p>
| Jacob Amar |
<p>I am running kubectl port-forward and sending a request to localhost:8081. I get the following error output:</p>
<pre><code>$ kubectl port-forward svc/pong-service 8081:8090 -n my-namespace
Forwarding from 127.0.0.1:8081 -> 443
Forwarding from [::1]:8081 -> 443
Handling connection for 8081
Handling connection for 8081
E0826 11:31:54.679791 412617 portforward.go:400] an error occurred forwarding 8081 -> 443: error forwarding port 443 to pod 80485aa877fd1279190c5b4fbcb1efab1ccf4c7feb865c7ad3a289aeb8890d0f, uid : exit status 1: 2021/08/26 18:31:54 socat[368856] E connect(5, AF=2 127.0.0.1:443, 16): Connection refused
</code></pre>
<p><a href="https://stackoverflow.com/a/59636660/990376">This answer</a> leads me to believe my request is being forwarded to <some pod>:443 and <some pod> is not listening on port 443.</p>
<p>What is this string in the kubectl error output?
<code>80485aa877fd1279190c5b4fbcb1efab1ccf4c7feb865c7ad3a289aeb8890d0f</code></p>
<p>Can I use this string to find the name or uid of <some pod>?</p>
| Hamomelette | <p>Your port-forward command is incorrect; change it to:</p>
<p>kubectl port-forward svc/pong-service 8081:<em><strong>443</strong></em> -n my-namespace</p>
<p>If that was just a typo, check with <code>kubectl get ep pong-service -n my-namespace</code> to ensure port 443 is indeed open.</p>
| gohm'c |
<p>Authentication using Keycloak isn't working as expected; the setup uses Istio with Keycloak.
Istio components configured: Gateway, VirtualService, AuthorizationPolicy, RequestAuthentication.</p>
<p>Using a valid token I get: <strong>401 Jwt issuer is not configured</strong></p>
<p><a href="https://i.stack.imgur.com/zYMaM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zYMaM.png" alt="enter image description here" /></a></p>
<p>ISTIO CONFIGURATION FOR SECURITY:</p>
<pre><code>---
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: "jwt-example"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
jwtRules:
- issuer: "http://localhost:30080/auth/realms/master"
jwksUri: "http://localhost:30080/auth/realms/master/protocol/openid-connect/certs"
forwardOriginalToken: true
outputPayloadToHeader: x-jwt-payload
EOF
---
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "frontend-ingress"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: DENY
rules:
- from:
- source:
notRequestPrincipals: ["*"]
principalBinding: USE_ORIGIN
EOF
---
</code></pre>
<p>When there is no Authorization Bearer header:</p>
<p><a href="https://i.stack.imgur.com/sWgyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sWgyM.png" alt="enter image description here" /></a></p>
<p>To double-check, I used Istio's example and it worked:</p>
<pre><code> kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "jwt-example"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
jwtRules:
- issuer: "[email protected]"
jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json"
EOF
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "frontend-ingress"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: DENY
rules:
- from:
- source:
notRequestPrincipals: ["*"]
EOF
</code></pre>
<p>ISTIO GTW and VS :</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: keycloak-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: enterprise-vs
spec:
hosts:
- '*'
gateways:
- default/keycloak-gateway
http:
- match:
- uri:
prefix: '/enterprise/'
rewrite:
uri: /
fault:
delay:
fixedDelay: 1s
route:
- destination:
host: enterprise
port:
number: 8080
subset: enterprise-s1
weight: 90
- destination:
host: enterprise
port:
number: 8080
subset: enterprise-s2
weight: 10
</code></pre>
| Tiago Medici | <p>I encountered a similar issue.</p>
<p>The JWT token had the following value for the issuer:
"iss": "http://localhost:8080/auth/realms/dev"</p>
<p>I matched the same value in my jwtRules, i.e. localhost.
However, I changed jwksUri to the cluster IP address of Keycloak.
This seems to have worked.</p>
<p>jwtRules:</p>
<pre><code>- issuer: 'http://localhost:8080/auth/realms/dev'
jwksUri: 'http://10.105.250.41:8080/auth/realms/dev/protocol/openid-connect/certs'
</code></pre>
| Rafi Assadi H M |
<p>As I am new to Kubernetes, I am struggling to get the list of deployments and other details using the Kubernetes C# client.</p>
<p>For example, the equivalents of:</p>
<pre><code> $kubectl get services
$kubectl get nodes
</code></pre>
<p>Any help would be appreciated...</p>
| dinesh | <p>To do that, you first need to be authenticated and authorized to your Kubernetes namespace/cluster.</p>
<pre class="lang-cs prettyprint-override"><code>var config = await KubernetesClientConfiguration.BuildConfigFromConfigFileAsync(new FileInfo("C:\\Path\\To\\Your\\Kubeconfig\\file"));
var k8sClient = new Kubernetes(config);
</code></pre>
<p>Below is how you can get deployment/service</p>
<pre class="lang-cs prettyprint-override"><code>var deployments = await k8sClient.ListNamespacedDeploymentAsync("insert-your-namespace-here");
var services = await k8sClient.ListNamespacedServiceAsync("insert-your-namespace-here");
</code></pre>
<p>Example for listing out your deployment/service</p>
<pre class="lang-cs prettyprint-override"><code>foreach (var service in services.Items)
Console.WriteLine(service.Metadata.Name);
foreach (var item in deployments.Items)
Console.WriteLine(item.Metadata.Name);
</code></pre>
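<p>Since the question also mentions <code>kubectl get nodes</code>, listing nodes is analogous (note: the exact method name can vary between versions of the client library, so treat this as a sketch rather than a guaranteed signature):</p>
<pre class="lang-cs prettyprint-override"><code>// assumption: this client version exposes ListNodeAsync directly on the Kubernetes object
var nodes = await k8sClient.ListNodeAsync();
foreach (var node in nodes.Items)
    Console.WriteLine(node.Metadata.Name);
</code></pre>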
<p>For more details and examples, check out this repo: <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp</a></p>
| gadnandev |
<p>I have VPC A with CIDR <code>10.A.0.0/16</code> and VPC B with CIDR <code>10.B.0.0/16</code>. I have VPC A and B peered and updated the route tables and from a server in <code>10.B.0.0/16</code> can ping a server in <code>10.A.0.0/16</code> and vice versa.</p>
<p>The applications on VPC A also use some IPs in the <code>192.168.0.0/16</code> range. Not something I can easily change, but I need to be able to reach <code>192.168.0.0/16</code> on VPC A from VPC B.</p>
<p>I've tried adding <code>192.168.0.0/16</code> to the route table used for VPC B and setting the target of the peered connection. That does not work, I believe because <code>192.168.0.0/16</code> is not in the CIDR block for VPC A.</p>
<p>I'm unable to add <code>192.168.0.0/16</code> as a secondary CIDR in VPC A because it is restricted. See <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#add-cidr-block-restrictions" rel="nofollow noreferrer">CIDR block association restrictions</a> and <a href="https://stackoverflow.com/questions/66381126/how-do-we-allot-the-secondary-cidr-to-vpc-in-aws">related question</a>. I understand it is restricted, but why is it restricted? RFC1918 doesn't seem to say anything against using more than one of the private address spaces.</p>
<p>I've also tried making a Transit Gateway, attaching both VPCs, and adding a static route to the Transit Gateway Route Table for <code>192.168.0.0/16</code> that targets the VPC A attachment. But still cannot reach that range from within VPC B.</p>
<p><strong>Is there another way to peer to both <code>10.0.0.0/8</code> and <code>192.168.0.0/16</code> CIDR blocks on the same VPC?</strong></p>
<h4>Updated, background info</h4>
<p>The VPCs are used by two different kubernetes clusters. The older one uses project-calico that uses the default cluster CIDR <code>192.168.0.0/16</code> and pod IPs get assigned in that range. The newer one is an EKS cluster and pod IPs are assigned from the VPC's CIDR range. During the transition period I've got the two clusters' VPCs peered together.</p>
<h4>Route Table</h4>
<p>The route table for the private subnet for VPC A</p>
<pre><code>10.A.0.0/16 local
10.B.0.0/16 pcx-[VPC A - VPC B peering connection]
0.0.0.0/0 nat-[gateway for cluster A]
</code></pre>
<p>Route table for the private subnet for VPC B</p>
<pre><code>10.B.0.0/16 local
10.A.0.0/16 pcx-[VPC A - VPC B peering connection]
192.168.0.0/16 pcx-[VPC A - VPC B peering connection]
0.0.0.0/0 nat-[gateway for cluster B]
</code></pre>
<p>This does not work, of course, because <code>192.168.0.0/16</code> is not in VPC A's CIDR block, nor can it be added.</p>
| Gangstead | <p>Calico creates an overlay network using the specified cluster CIDR (192.168.x.x) on top of the VPC (A) CIDR, so pods/services in this k8s cluster can communicate. The overlay network's routing information is neither exposed to nor usable by the AWS route table. This is different from the k8s cluster running in VPC (B), which uses the VPC CNI and therefore leverages the VPC CIDR as the cluster CIDR.</p>
<p>Calico <a href="https://docs.projectcalico.org/networking/bgp" rel="nofollow noreferrer">BGP Peering</a> offers a way here but it is not going to be an easy route for this case.</p>
<blockquote>
<p>Calico nodes can exchange routing information over BGP to enable
reachability for Calico networked workloads (Kubernetes pods or
OpenStack VMs).</p>
</blockquote>
<p>If you must achieve pod-to-pod communication between the two k8s clusters and networks without going through an Ingress/LB, migrate one cluster's CNI to be the same as the other's so you can fully leverage its peering capabilities.</p>
| gohm'c |
<p>What could be the <code>kubectl</code> command to see <code>k8s</code> <code>secret</code> values</p>
<p>I tried</p>
<pre><code>kubectl get secrets/<secrets-name> -n <namespace>
</code></pre>
<p>It returns</p>
<pre>
NAME TYPE DATA AGE
secrets1 Opaque 1 18h
</pre>
<p>but I want to know what value is stored inside the secret.</p>
| Dupinder Singh | <p>Say you had a secret like the one below with a password key then something like this should work to obtain the value:</p>
<p><code>kubectl get secret/my-secret -n dev -o go-template='{{.data.password|base64decode}}'</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
namespace: dev
type: Opaque
data:
password: TXEyRCMoOGdmMDk=
username: cm9vdA==
</code></pre>
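<p>An equivalent without go-template, using jsonpath and piping through base64 (same hypothetical secret):</p>
<pre><code>kubectl get secret my-secret -n dev -o jsonpath='{.data.password}' | base64 --decode
</code></pre>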
| Alan |
<p>I want to disable a Kubernetes scheduler plugin. <a href="https://kubernetes.io/docs/reference/scheduling/config/" rel="nofollow noreferrer">The Kubernetes docs</a> say to use a configuration file like:</p>
<pre><code>apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- plugins:
score:
disabled:
- name: NodeResourcesLeastAllocated
enabled:
- name: MyCustomPluginA
weight: 2
- name: MyCustomPluginB
weight: 1
</code></pre>
<p>But it does not say how to activate this config file. I've tried calling <code>kube-scheduler --conf conf.yaml</code> locally on my computer, but I do not understand whether kube-scheduler should be run from inside the kube-scheduler pod. I'm using minikube with Kubernetes v1.20.2.</p>
| HissPirat | <p>I found a GitHub issue discussing the documentation; I posted how we managed to configure the plugins in that thread: <a href="https://github.com/kubernetes/website/issues/21128" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/21128</a>.</p>
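<p>For anyone landing here before reading that thread: the scheduler takes the file through its <code>--config</code> flag (there is no <code>--conf</code> flag). On a kubeadm-style control plane the usual approach is to mount the file into the scheduler's static pod; the paths below are an illustrative sketch, not taken from the question's setup.</p>
<pre><code># place the KubeSchedulerConfiguration file on the control-plane node, e.g.
#   /etc/kubernetes/scheduler-config.yaml
# then in /etc/kubernetes/manifests/kube-scheduler.yaml add the flag:
#   - --config=/etc/kubernetes/scheduler-config.yaml
# and mount the file into the container via a hostPath volume;
# the kubelet restarts the scheduler static pod automatically when the manifest changes
</code></pre>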
| HissPirat |
<p>I am receiving NoExecuteTaintManager events that are deleting my pod, but I can't figure out why. The node is healthy and the Pod has the appropriate tolerations.</p>
<p>This is actually causing infinite scale-up, because my Pod is set up so that it uses 3/4 of the Node's CPUs and has a Toleration Grace Period > 0. This forces a new node when a Pod terminates, and Cluster Autoscaler tries to keep replicas == 2.</p>
<p>How do I figure out which taint is causing it specifically? And why does it think that node had that taint? Currently the pods are being killed at exactly 600 seconds (which is what I changed <code>tolerationSeconds</code> to for <code>node.kubernetes.io/unreachable</code> and <code>node.kubernetes.io/not-ready</code>); however, the node does not appear to be in either of those situations.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-api-67df7bd54c-dthbn 1/1 Running 0 8d
my-api-67df7bd54c-mh564 1/1 Running 0 8d
my-pod-6d7b698b5f-28rgw 1/1 Terminating 0 15m
my-pod-6d7b698b5f-2wmmg 1/1 Terminating 0 13m
my-pod-6d7b698b5f-4lmmg 1/1 Running 0 4m32s
my-pod-6d7b698b5f-7m4gh 1/1 Terminating 0 71m
my-pod-6d7b698b5f-8b47r 1/1 Terminating 0 27m
my-pod-6d7b698b5f-bb58b 1/1 Running 0 2m29s
my-pod-6d7b698b5f-dn26n 1/1 Terminating 0 25m
my-pod-6d7b698b5f-jrnkg 1/1 Terminating 0 38m
my-pod-6d7b698b5f-sswps 1/1 Terminating 0 36m
my-pod-6d7b698b5f-vhqnf 1/1 Terminating 0 59m
my-pod-6d7b698b5f-wkrtg 1/1 Terminating 0 50m
my-pod-6d7b698b5f-z6p2c 1/1 Terminating 0 47m
my-pod-6d7b698b5f-zplp6 1/1 Terminating 0 62m
</code></pre>
<pre><code>14:22:43.678937 8 taint_manager.go:102] NoExecuteTaintManager is deleting Pod: my-pod-6d7b698b5f-dn26n
14:22:43.679073 8 event.go:221] Event(v1.ObjectReference{Kind:"Pod", Namespace:"prod", Name:"my-pod-6d7b698b5f-dn26n", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod prod/my-pod-6d7b698b5f-dn26n
</code></pre>
<pre><code># kubectl -n prod get pod my-pod-6d7b698b5f-8b47r -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: bcdc41c616f736849a6bef9c726eec9bf704ce7d2c61736005a6fedda0ee14d0
kubernetes.io/psp: eks.privileged
creationTimestamp: "2019-10-25T14:09:17Z"
deletionGracePeriodSeconds: 172800
deletionTimestamp: "2019-10-27T14:20:40Z"
generateName: my-pod-6d7b698b5f-
labels:
app.kubernetes.io/instance: my-pod
app.kubernetes.io/name: my-pod
pod-template-hash: 6d7b698b5f
name: my-pod-6d7b698b5f-8b47r
namespace: prod
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: my-pod-6d7b698b5f
uid: c6360643-f6a6-11e9-9459-12ff96456b32
resourceVersion: "2408256"
selfLink: /api/v1/namespaces/prod/pods/my-pod-6d7b698b5f-8b47r
uid: 08197175-f731-11e9-9459-12ff96456b32
spec:
containers:
- args:
- -c
- from time import sleep; sleep(10000)
command:
- python
envFrom:
- secretRef:
name: pix4d
- secretRef:
name: rabbitmq
image: python:3.7-buster
imagePullPolicy: Always
name: my-pod
ports:
- containerPort: 5000
name: http
protocol: TCP
resources:
requests:
cpu: "3"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-gv6q5
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-142-54-235.ec2.internal
nodeSelector:
nodepool: zeroscaling-gpu-accelerated-p2-xlarge
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 172800
tolerations:
- key: specialized
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 600
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 600
volumes:
- name: default-token-gv6q5
secret:
defaultMode: 420
secretName: default-token-gv6q5
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-10-25T14:10:40Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-10-25T14:11:09Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-10-25T14:11:09Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-10-25T14:10:40Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://15e2e658c459a91a86573c1096931fa4ac345e06f26652da2a58dc3e3b3d5aa2
image: python:3.7-buster
imageID: docker-pullable://python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977
lastState: {}
name: my-pod
ready: true
restartCount: 0
state:
running:
startedAt: "2019-10-25T14:11:09Z"
hostIP: 10.142.54.235
phase: Running
podIP: 10.142.63.233
qosClass: Burstable
startTime: "2019-10-25T14:10:40Z"
</code></pre>
<pre><code># kubectl -n prod describe pod my-pod-6d7b698b5f-8b47r
Name: my-pod-6d7b698b5f-8b47r
Namespace: prod
Priority: 0
PriorityClassName: <none>
Node: ip-10-142-54-235.ec2.internal/10.142.54.235
Start Time: Fri, 25 Oct 2019 10:10:40 -0400
Labels: app.kubernetes.io/instance=my-pod
app.kubernetes.io/name=my-pod
pod-template-hash=6d7b698b5f
Annotations: checksum/config: bcdc41c616f736849a6bef9c726eec9bf704ce7d2c61736005a6fedda0ee14d0
kubernetes.io/psp: eks.privileged
Status: Terminating (lasts 47h)
Termination Grace Period: 172800s
IP: 10.142.63.233
Controlled By: ReplicaSet/my-pod-6d7b698b5f
Containers:
my-pod:
Container ID: docker://15e2e658c459a91a86573c1096931fa4ac345e06f26652da2a58dc3e3b3d5aa2
Image: python:3.7-buster
Image ID: docker-pullable://python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977
Port: 5000/TCP
Host Port: 0/TCP
Command:
python
Args:
-c
from time import sleep; sleep(10000)
State: Running
Started: Fri, 25 Oct 2019 10:11:09 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 3
Environment Variables from:
pix4d Secret Optional: false
rabbitmq Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gv6q5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-gv6q5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gv6q5
Optional: false
QoS Class: Burstable
Node-Selectors: nodepool=zeroscaling-gpu-accelerated-p2-xlarge
Tolerations: node.kubernetes.io/not-ready:NoExecute for 600s
node.kubernetes.io/unreachable:NoExecute for 600s
specialized
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12m (x2 over 12m) default-scheduler 0/13 nodes are available: 1 Insufficient pods, 13 Insufficient cpu, 6 node(s) didn't match node selector.
Normal TriggeredScaleUp 12m cluster-autoscaler pod triggered scale-up: [{prod-worker-gpu-accelerated-p2-xlarge 7->8 (max: 13)}]
Warning FailedScheduling 11m (x5 over 11m) default-scheduler 0/14 nodes are available: 1 Insufficient pods, 1 node(s) had taints that the pod didn't tolerate, 13 Insufficient cpu, 6 node(s) didn't match node selector.
Normal Scheduled 11m default-scheduler Successfully assigned prod/my-pod-6d7b698b5f-8b47r to ip-10-142-54-235.ec2.internal
Normal Pulling 11m kubelet, ip-10-142-54-235.ec2.internal pulling image "python:3.7-buster"
Normal Pulled 10m kubelet, ip-10-142-54-235.ec2.internal Successfully pulled image "python:3.7-buster"
Normal Created 10m kubelet, ip-10-142-54-235.ec2.internal Created container
Normal Started 10m kubelet, ip-10-142-54-235.ec2.internal Started container
</code></pre>
<pre><code># kubectl -n prod describe node ip-10-142-54-235.ec2.internal
Name: ip-10-142-54-235.ec2.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=p2.xlarge
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1b
kubernetes.io/hostname=ip-10-142-54-235.ec2.internal
nodepool=zeroscaling-gpu-accelerated-p2-xlarge
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 25 Oct 2019 10:10:20 -0400
Taints: specialized=true:NoExecute
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:40 -0400 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.142.54.235
ExternalIP: 3.86.112.24
Hostname: ip-10-142-54-235.ec2.internal
InternalDNS: ip-10-142-54-235.ec2.internal
ExternalDNS: ec2-3-86-112-24.compute-1.amazonaws.com
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 4
ephemeral-storage: 209702892Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 62872868Ki
pods: 58
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 4
ephemeral-storage: 200777747706
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 61209892Ki
pods: 58
System Info:
Machine ID: 0e76fec3e06d41a6bf2c49a18fbe1795
System UUID: EC29973A-D616-F673-6899-A96C97D5AE2D
Boot ID: 4bc510b6-f615-48a7-9e1e-47261ddf26a4
Kernel Version: 4.14.146-119.123.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.13.11-eks-5876d6
Kube-Proxy Version: v1.13.11-eks-5876d6
ProviderID: aws:///us-east-1b/i-0f5b519aa6e38e04a
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
amazon-cloudwatch cloudwatch-agent-4d24j 50m (1%) 250m (6%) 50Mi (0%) 250Mi (0%) 12m
amazon-cloudwatch fluentd-cloudwatch-wkslq 50m (1%) 0 (0%) 150Mi (0%) 300Mi (0%) 12m
prod my-pod-6d7b698b5f-8b47r 3 (75%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system aws-node-6nr6g 10m (0%) 0 (0%) 0 (0%) 0 (0%) 13m
kube-system kube-proxy-wf8k4 100m (2%) 0 (0%) 0 (0%) 0 (0%) 13m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 3210m (80%) 250m (6%)
memory 200Mi (0%) 550Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 13m kubelet, ip-10-142-54-235.ec2.internal Starting kubelet.
Normal NodeHasSufficientMemory 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet, ip-10-142-54-235.ec2.internal Updated Node Allocatable limit across pods
Normal Starting 12m kube-proxy, ip-10-142-54-235.ec2.internal Starting kube-proxy.
Normal NodeReady 12m kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeReady
</code></pre>
<pre><code># kubectl get node ip-10-142-54-235.ec2.internal -o yaml
apiVersion: v1
kind: Node
metadata:
annotations:
node.alpha.kubernetes.io/ttl: "0"
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2019-10-25T14:10:20Z"
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/instance-type: p2.xlarge
beta.kubernetes.io/os: linux
failure-domain.beta.kubernetes.io/region: us-east-1
failure-domain.beta.kubernetes.io/zone: us-east-1b
kubernetes.io/hostname: ip-10-142-54-235.ec2.internal
nodepool: zeroscaling-gpu-accelerated-p2-xlarge
name: ip-10-142-54-235.ec2.internal
resourceVersion: "2409195"
selfLink: /api/v1/nodes/ip-10-142-54-235.ec2.internal
uid: 2d934979-f731-11e9-89b8-0234143df588
spec:
providerID: aws:///us-east-1b/i-0f5b519aa6e38e04a
taints:
- effect: NoExecute
key: specialized
value: "true"
status:
addresses:
- address: 10.142.54.235
type: InternalIP
- address: 3.86.112.24
type: ExternalIP
- address: ip-10-142-54-235.ec2.internal
type: Hostname
- address: ip-10-142-54-235.ec2.internal
type: InternalDNS
- address: ec2-3-86-112-24.compute-1.amazonaws.com
type: ExternalDNS
allocatable:
attachable-volumes-aws-ebs: "39"
cpu: "4"
ephemeral-storage: "200777747706"
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 61209892Ki
pods: "58"
capacity:
attachable-volumes-aws-ebs: "39"
cpu: "4"
ephemeral-storage: 209702892Ki
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 62872868Ki
pods: "58"
conditions:
- lastHeartbeatTime: "2019-10-25T14:23:51Z"
lastTransitionTime: "2019-10-25T14:10:19Z"
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: "2019-10-25T14:23:51Z"
lastTransitionTime: "2019-10-25T14:10:19Z"
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: "2019-10-25T14:23:51Z"
lastTransitionTime: "2019-10-25T14:10:19Z"
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: "2019-10-25T14:23:51Z"
lastTransitionTime: "2019-10-25T14:10:40Z"
message: kubelet is posting ready status
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
images:
- names:
- python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977
- python:3.7-buster
sizeBytes: 917672801
- names:
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni@sha256:5b7e7435f88a86bbbdb2a5ecd61e893dc14dd13c9511dc8ace362d299259700a
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.5.4
sizeBytes: 290739356
- names:
- fluent/fluentd-kubernetes-daemonset@sha256:582770d951f81e0971e852089239ced0186e0bdc3226daf16b99ca4cc22de4f7
- fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-cloudwatch-1.4
sizeBytes: 261867521
- names:
- amazon/cloudwatch-agent@sha256:877106acbc56e747ebe373548c88cd37274f666ca11b5c782211db4c5c7fb64b
- amazon/cloudwatch-agent:latest
sizeBytes: 131360039
- names:
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy@sha256:4767b441ddc424b0ea63c305b79be154f65fb15ebefe8a3b2832ce55aa6de2f0
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy:v1.13.8
sizeBytes: 80183964
- names:
- busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
- busybox:latest
sizeBytes: 1219782
- names:
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64@sha256:bea77c323c47f7b573355516acf927691182d1333333d1f41b7544012fab7adf
- 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1
sizeBytes: 742472
nodeInfo:
architecture: amd64
bootID: 4bc510b6-f615-48a7-9e1e-47261ddf26a4
containerRuntimeVersion: docker://18.6.1
kernelVersion: 4.14.146-119.123.amzn2.x86_64
kubeProxyVersion: v1.13.11-eks-5876d6
kubeletVersion: v1.13.11-eks-5876d6
machineID: 0e76fec3e06d41a6bf2c49a18fbe1795
operatingSystem: linux
osImage: Amazon Linux 2
systemUUID: EC29973A-D616-F673-6899-A96C97D5AE2D
</code></pre>
| Ryan | <p>Unfortunately, I don't have an exact answer to your issue, but I may have some workaround.</p>
<p>I think I had the same issue with an Amazon EKS cluster, version 1.13.11 - my pod triggered a node scale-up, the pod was scheduled, ran for 300s and was then evicted:</p>
<pre><code>74m Normal TaintManagerEviction pod/master-3bb760a7-b782-4138-b09f-0ca385db9ad7-workspace Marking for deletion Pod project-beta/master-3bb760a7-b782-4138-b09f-0ca385db9ad7-workspace
</code></pre>
<p>Interestingly, the same pod was able to run with no problems if it was scheduled on an existing node rather than a freshly created one.</p>
<p>From my investigation, it really looks like some issue with this specific Kubernetes version, maybe an edge case of the TaintBasedEvictions feature (I think it was enabled by default in Kubernetes 1.13).</p>
<p>To "fix" this issue I updated the cluster version to 1.14. After that, the mysterious pod evictions did not happen anymore.</p>
<p>So, if it's possible for you, I suggest updating your cluster to version 1.14 (together with the cluster-autoscaler).</p>
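<p>If upgrading isn't immediately possible, a rough workaround sketch (my assumption, based on the <code>specialized=true:NoExecute</code> taint in your node description and a deployment named <code>my-pod</code>) is to give the pod template an explicit toleration without a <code>tolerationSeconds</code>, so the taint manager has no eviction deadline for it:</p>
<pre><code># sketch only -- adjust the deployment name and the taint key/value to your cluster
kubectl -n prod patch deployment my-pod --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
    {"key": "specialized", "operator": "Equal", "value": "true", "effect": "NoExecute"}
  ]}
]'
</code></pre>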
| Rybue |
<p>I'm having trouble setting up my k8s pods exactly how I want. My trouble is that I have multiple containers which listen on the same ports (80, 443). On a remote machine, I normally use docker-compose with 'ports - 12345:80' to set this up. With K8s, it appears from all of the examples I have found that the only option for a container is to expose a port, not to proxy it. I know I can use reverse proxies to forward to multiple ports, but that would require the images to use different ports rather than using the same port and having the container forward the requests. Is there a way to do this in k8s?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: backend
spec:
loadBalancerIP: xxx.xxx.xxx.xxx
selector:
app: app
tier: backend
ports:
- protocol: "TCP"
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
spec:
selector:
matchLabels:
app: app
tier: backend
track: stable
replicas: 1
template:
metadata:
labels:
app: app
tier: backend
track: stable
spec:
containers:
- name: app
image: image:example
ports:
- containerPort: 80
imagePullSecrets:
- name: xxxxxxx
</code></pre>
<p>Ideally, I would be able to have the containers on a Node listening to different ports, which the applications running in those containers continue to listen to 80/443, and my services would route to the correct container as necessary.</p>
<p>My load balancer is working correctly, as is my first container. Adding a second container succeeds, but the second container can't be reached. The second container uses a similar script with different names and a different image for deployment.</p>
| Carson | <p>The answer here is adding a service for the pod where the ports are declared. Using Kompose to convert a docker-compose file, this is the result:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: pathToKompose.exe convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
ports:
- name: "5000"
port: 5000
targetPort: 80
selector:
io.kompose.service: app
status:
loadBalancer: {}
</code></pre>
<p>as well as</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: pathToKompose.exe convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: app
strategy: {}
template:
metadata:
annotations:
kompose.cmd: pathToKompose.exe convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: app
spec:
containers:
- image: image:example
imagePullPolicy: ""
name: app
ports:
- containerPort: 80
resources: {}
restartPolicy: Always
serviceAccountName: ""
volumes: null
status: {}
</code></pre>
<p>Some of the fluff from Kompose could be removed, but the relevant answer to this question is declaring the port and target port for the pod in a service, and exposing the targetPort as a containerPort in the deployment for the container.
Thanks to David Maze and GintsGints for the help!</p>
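<p>For reference, a trimmed-down sketch of the same Service (my own minimal version, not Kompose output) that keeps only the parts that matter here:</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    io.kompose.service: app
  ports:
  - port: 5000        # port clients connect to
    targetPort: 80    # containerPort the app listens on
EOF
</code></pre>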
| Carson |
<p>After upgrading from Flink 1.10 to Flink 1.11, the log4j configuration is no longer working.</p>
<p>My previous configuration used a library with an adapter that requires log4j 1.x and is no longer compatible with Flink 1.11.</p>
<p>According to the new configuration format, the flink-conf.yaml should look like this:</p>
<pre><code>log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender
rootLogger.appenderRef.rolling.ref = RollingFileAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
</code></pre>
<p>My current configuration using log4j1 looks something like this:</p>
<pre><code>log4j-console.properties: |+
log4j.rootLogger=INFO,myappender,console
log4j.appender.myappender=com.company.log4j.MyAppender
log4j.appender.myappender.endpoints=http://
</code></pre>
<p>Is there a way to tell Flink 1.11 to use log4j1 in the flink-conf.yaml file?</p>
| Edgar Ferney Ruiz Anzola | <p>As far as I know, <code>flink-conf.yaml</code> does not contain a <code>log4j-console.properties</code> section; that is a separate file. What you have specified is, I suppose, part of the <code>flink-configuration-configmap.yaml</code> cluster resource definition.</p>
<p>According to the flink <a href="https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html#configuring-log4j1" rel="nofollow noreferrer">Configuring Log4j1 Section</a>, in order to use log4j1 you need to (see the shell sketch after this list):</p>
<ul>
<li>remove the <code>log4j-core</code>, <code>log4j-slf4j-impl</code> and <code>log4j-1.2-api</code> jars from the lib directory,</li>
<li>add the <code>log4j</code>, <code>slf4j-log4j12</code> and <code>log4j-to-slf4j</code> jars to the lib directory,</li>
</ul>
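<p>A rough shell sketch of that jar swap (the exact jar file names/versions and the <code>FLINK_HOME</code> path are assumptions about your installation, so adjust them):</p>
<pre><code>cd "$FLINK_HOME/lib"
# remove the log4j2 jars
rm log4j-core-*.jar log4j-slf4j-impl-*.jar log4j-1.2-api-*.jar
# add the log4j1 jars (copy them from wherever you downloaded them)
cp /path/to/log4j-1.2.*.jar /path/to/slf4j-log4j12-*.jar /path/to/log4j-to-slf4j-*.jar .
</code></pre>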
| Mikalai Lushchytski |
<p>I am trying to use a StorageClass, PersistentVolumeClaim and PersistentVolume.
I can run this from the command prompt locally and it works fine,
but when deploying via an Azure pipeline I get the issue
"cannot get resource "storageclasses" in API group "storage.k8s.io" at the cluster scope"</p>
<p><a href="https://i.stack.imgur.com/3J9X8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3J9X8.png" alt="enter image description here" /></a></p>
<pre><code> kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
parameters:
storageAccount: xxxdevxxx
location: Souxxxxst xxxxx
---
# Create a Secret to hold the name and key of the Storage Account
# Remember: values are base64 encoded
apiVersion: v1
kind: Secret
metadata:
name: azurefile-secret
type: Opaque
data:
azurestorageaccountname: YWlhZ7xxxxxzdA==
azurestorageaccountkey: a2s4bURfghfhjxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxMUd6eE1UTEdxZHdRUzhvR09UZ0xBempPN3dXZEF0K1E9PQ==
---
# Create a persistent volume, with the corresponding StorageClass and the reference to the Azure File secret.
# Remember: Create the share in the storage account otherwise the pods will fail with a "No such file or directory"
apiVersion: v1
kind: PersistentVolume
metadata:
name: jee-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: azurefile
azureFile:
secretName: azurefile-secret
shareName: jee-log
readOnly: false
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
---
# Create a PersistentVolumeClaim referencing the StorageClass and the volume
# Remember: this is a static scenario. The volume was created in the previous step.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jee-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: azurefile-xxxx
volumeName: jee-pv
</code></pre>
| Jeebendu kumar Behera | <p>In an Azure pipeline, to create the persistent volume and storage class with Kubernetes,
use <strong>Service connection type</strong> = <strong>Azure Resource Manager</strong> instead of <strong>Azure service connection</strong>.</p>
| Jeebendu kumar Behera |
<p>Stack:
Azure Kubernetes Service<br />
NGINX Ingress Controller - <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a> <br />
AKS Loadbalancer<br />
Docker containers</p>
<p>My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the Ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.</p>
<p>Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml</code></p>
<p>LoadBalancer.yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: backend
spec:
loadBalancerIP: x.x.x.x
selector:
app: ingress-service
tier: backend
ports:
- name: "default"
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<p>IngressService.yml:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- http:
paths:
- path: /api
backend:
serviceName: api-service
servicePort: 80
</code></pre>
<p>api-deployment.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
selector:
app: api
ports:
- port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-deployment
spec:
selector:
matchLabels:
app: api
tier: backend
track: stable
replicas: 1
template:
metadata:
labels:
app: api
tier: backend
track: stable
spec:
containers:
- name: api
image: image:tag
ports:
- containerPort: 80
imagePullPolicy: Always
imagePullSecrets:
- name: SECRET
</code></pre>
<p>The API in the image is exposed on port 80 correctly.</p>
<p>After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.</p>
| Carson | <p>Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller has a Load Balancer built into the yaml, as mentioned in comments above. However, the selector for that LoadBalancer requires marking your Ingress service as part of the class. Then that Ingress service points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP in the provided load balancer.</p>
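<p>In concrete terms, a sketch of the two changes (the <code>ingress-nginx</code> namespace and <code>ingress-nginx-controller</code> service name below are assumptions based on the standard deploy manifest, so verify them in your cluster):</p>
<pre><code># 1. the Ingress resources must carry the class the controller watches
#    (the kubernetes.io/ingress.class: nginx annotation shown above)
kubectl describe ingress ingress-service | grep -i class

# 2. pin the controller's built-in LoadBalancer Service to the static IP
#    instead of creating a separate LoadBalancer Service of your own
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec": {"loadBalancerIP": "x.x.x.x"}}'
</code></pre>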
| Carson |
<p>I have set up traefik with the helm chart. I have an application that I want visible to the outside world. But I am getting this error below.</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
namespace: example
name: example-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
spec:
rules:
- host: mydomain.com
http:
paths:
- path: /
backend:
serviceName: example-app
servicePort: 80
</code></pre>
<p>I can then run:</p>
<pre><code>kubectl get ing -n example
</code></pre>
<p>which gives me:</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
example example-ingress <none> mydomain.com 80 75m
</code></pre>
<p>But when I check the logs of the traefik pod I get the following errors:</p>
<pre><code>level=error msg="Cannot create service: subset not found" namespace=example ingress=example-ingress serviceName=example-app providerName=kubernetes servicePort=80
</code></pre>
<p>Any ideas?</p>
| Jacob | <p>Please try the solution from the thread below; the answer says:</p>
<p>"I had a missing SecretName in my ingress definition, and I updated to -rc3 (and finally to v2.0); after the update the error is no longer there."</p>
<p><a href="https://community.containo.us/t/kubernetesingress-cannot-create-service-subset-not-found/1516" rel="nofollow noreferrer">https://community.containo.us/t/kubernetesingress-cannot-create-service-subset-not-found/1516</a></p>
| Roar S. |
<p>Can anyone explain why running my load test against one pod gives better TPS than when scaling to two pods?</p>
<p>I expected that running the same scenario with the same configuration on 2 pods would increase the TPS, but this is not what happened.</p>
<p>Is it normal behaviour that scaling horizontally does not improve the total number of requests?</p>
<p>Please note that I didn't get any failures on one pod; I just scaled to 2 for high availability.</p>
| Marwa Mohamed Mahmoud | <pre><code>...my load test on one pod it gives better TPS rather than when scaling to two pods.
</code></pre>
<p>This can happen when the 2 pods race for the same resource and create a bottleneck.</p>
<pre><code>Is this normal behaviour that scaling horizontal not improve the total number of requests?
</code></pre>
<p>Client (web) request throughput can improve, but the capacity of the backend (and sometimes the middleware, if any) needs to catch up.</p>
| gohm'c |
<p>According to <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="nofollow noreferrer">this</a> blog, the <code>kubens</code> command is used to show all the available namespaces in <strong>GCP's</strong> GKE, but when I tried using this command, and after connecting to the cluster it says: </p>
<blockquote>
<p>command not found</p>
</blockquote>
<p>Why am I getting this error?</p>
| Yash Saini | <p>First you have to install <code>kubens</code> on your machine. Go through the link below for installation instructions.</p>
<p><a href="https://github.com/ahmetb/kubectx" rel="nofollow noreferrer">https://github.com/ahmetb/kubectx</a></p>
| Afnan Ashraf |
<p>How can I use Mike Farah's YQ v4 to update a field that has special characters.</p>
<p>e.g. manipulating below:</p>
<pre><code> containers:
- name: flyway
image: xx.dkr.ecr.eu-west-1.amazonaws.com/testimage:60
</code></pre>
<p>Errors for attempts:</p>
<pre><code>$ yq e ".spec.template.spec.containers[0].image=xx.dkr.ecr.eu-west-1.amazonaws.com/testimage:61" flyway.yaml
Error: Bad expression, please check expression syntax
$ yq e ".spec.template.spec.containers[0].image=xx\.dkr\.ecr\.eu-west-1\.amazonaws\.com\/testimage:61" flyway.yaml
Error: Parsing expression: Lexer error: could not match text starting at 1:53 failing at 1:54.
unmatched text: "\\"
#########
$ echo $image
xx.dkr.ecr.eu-west-1.amazonaws.com/testimage:61
$ yq e ".spec.template.spec.containers[0].image=$image" flyway.yaml
Error: Bad expression, please check expression syntax
</code></pre>
<p>I didn't find any documentation explaining how to escape special characters.</p>
| Chinmaya Biswal | <p>Found this documentation:
<a href="https://mikefarah.gitbook.io/yq/operators/env-variable-operators" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq/operators/env-variable-operators</a></p>
<p>This worked for me:</p>
<pre><code>$ echo $image
xx.dkr.ecr.eu-west-1.amazonaws.com/testimage:61
$ myenv=$image yq e '.spec.template.spec.containers[0].image=env(myenv)' flyway.yaml
<<removed contents for brevity >>
spec:
containers:
- name: flyway
image: xx.dkr.ecr.eu-west-1.amazonaws.com/testimage:61
</code></pre>
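<p>An alternative that should also work is letting the shell expand the variable and quoting the value so yq treats it as a string literal:</p>
<pre><code>$ yq e ".spec.template.spec.containers[0].image = \"$image\"" flyway.yaml
</code></pre>
<p>yq v4 also provides <code>strenv()</code> (documented on the same page), which always treats the environment variable as a plain string.</p>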
| Chinmaya Biswal |
<p>I am facing some issues with what I believe to be my .yaml file.
Docker-compose works fine and the containers run as expected.
But after kompose convert, the file did not yield the desired result on k8s, and I am getting com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.</p>
<p>There are no pre-existing Docker containers, and docker-compose down was run prior to kompose convert.</p>
<p>The mysql pod works fine and I am able to access it.
Spring is, however, unable to connect to it.</p>
<p>in docker-compose.yaml</p>
<pre><code> version: '3'
services:
mysql-docker-container:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=1
- MYSQL_DATABASE=db_fromSpring
- MYSQL_USER=springuser
- MYSQL_PASSWORD=ThePassword
networks:
- backend
ports:
- 3307:3306
volumes:
- /data/mysql
spring-boot-jpa-app:
command: mvn clean install -DskipTests
image: bnsbns/spring-boot-jpa-image
depends_on:
- mysql-docker-container
environment:
- spring.datasource.url=jdbc:mysql://mysql-docker-container:3306/db_fromSpring
- spring.datasource.username=springuser
- spring.datasource.password=ThePassword
networks:
- backend
ports:
- "8087:8080"
volumes:
- /data/spring-boot-app
networks:
backend:
</code></pre>
<p>Error:</p>
<pre><code>2021-09-15 04:37:47.542 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
</code></pre>
<p>backend-network.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: backend
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/backend: "true"
podSelector:
matchLabels:
io.kompose.network/backend: "true"
</code></pre>
<p>mysql-docker-container-claim0-persistentvolumeclaim.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: mysql-docker-container-claim0
name: mysql-docker-container-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
</code></pre>
<p>mysql-docker-container-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: mysql-docker-container
name: mysql-docker-container
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: mysql-docker-container
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.network/backend: "true"
io.kompose.service: mysql-docker-container
spec:
containers:
- env:
- name: MYSQL_DATABASE
value: db_fromSpring
- name: MYSQL_PASSWORD
value: ThePassword
- name: MYSQL_ROOT_PASSWORD
value: "1"
- name: MYSQL_USER
value: springuser
image: mysql:latest
imagePullPolicy: ""
name: mysql-docker-container
ports:
- containerPort: 3306
resources: {}
volumeMounts:
- mountPath: /data/mysql
name: mysql-docker-container-claim0
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: mysql-docker-container-claim0
persistentVolumeClaim:
claimName: mysql-docker-container-claim0
status: {}
</code></pre>
<p>mysql-docker-container-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: mysql-docker-container
name: mysql-docker-container
spec:
ports:
- name: "3307"
port: 3307
targetPort: 3306
selector:
io.kompose.service: mysql-docker-container
status:
loadBalancer: {}
</code></pre>
<p>springboot-app-jpa-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: spring-boot-jpa-app
name: spring-boot-jpa-app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: spring-boot-jpa-app
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.network/backend: "true"
io.kompose.service: spring-boot-jpa-app
spec:
containers:
- args:
- mvn
- clean
- install
- -DskipTests
env:
- name: spring.datasource.password
value: ThePassword
- name: spring.datasource.url
value: jdbc:mysql://mysql-docker-container:3306/db_fromSpring
- name: spring.datasource.username
value: springuser
image: bnsbns/spring-boot-jpa-image
imagePullPolicy: ""
name: spring-boot-jpa-app
ports:
- containerPort: 8080
resources: {}
volumeMounts:
- mountPath: /data/spring-boot-app
name: spring-boot-jpa-app-claim0
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: spring-boot-jpa-app-claim0
persistentVolumeClaim:
claimName: spring-boot-jpa-app-claim0
status: {}
</code></pre>
<p>springboot-jpa-app-persistence-claim.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: spring-boot-jpa-app-claim0
name: spring-boot-jpa-app-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
</code></pre>
<p>springboot-app-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: spring-boot-jpa-app
name: spring-boot-jpa-app
spec:
ports:
- name: "8087"
port: 8087
targetPort: 8080
selector:
io.kompose.service: spring-boot-jpa-app
status:
loadBalancer: {}
</code></pre>
<hr />
<p>The solution, as posted by gohm'c, was that I had the incorrect port.</p>
<p>Facing this issue next: do I need to specify a cluster/load balancer?</p>
<p>$ kubectl expose deployment spring-boot-jpa-app --type=NodePort
Error from server (AlreadyExists): services "spring-boot-jpa-app" already exists</p>
<pre><code>minikube service spring-boot-jpa-app
|-----------|---------------------|-------------|--------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|--------------|
| default | spring-boot-jpa-app | | No node port |
|-----------|---------------------|-------------|--------------|
😿 service default/spring-boot-jpa-app has no node port
</code></pre>
| invertedOwlCoding | <p>The <code>mysql-docker-container</code> service port is 3307, can you try:</p>
<pre><code>env:
...
- name: spring.datasource.url
value: jdbc:mysql://mysql-docker-container:3307/db_fromSpring
</code></pre>
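<p>For the follow-up NodePort issue: <code>kubectl expose</code> fails because a Service with that name already exists. One possible fix (a sketch) is to change the existing Service's type instead of creating a new one:</p>
<pre><code>kubectl patch svc spring-boot-jpa-app -p '{"spec": {"type": "NodePort"}}'
minikube service spring-boot-jpa-app
</code></pre>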
| gohm'c |
<p>I am using <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster" rel="noreferrer">here</a> to create a new AKS cluster. This has worked fine, however, when I look at the cluster I have noticed there is no External-IP (it shows )</p>
<p>How do I add an external IP address so that I can access the cluster externally?</p>
<p>I am using AKS within Azure</p>
<p>Paul</p>
| Paul | <p>kubectl apply -f {name of this file}.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example
ports:
- port: 8765
targetPort: 9376
type: LoadBalancer
</code></pre>
<p>From <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/</a></p>
<p>This will create a load balancer that has an external ip address. You can specify one if you have a static IP as well.</p>
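<p>You can then watch for the external IP to be assigned, for example:</p>
<pre><code>kubectl get service example-service --watch   # EXTERNAL-IP changes from <pending> to a real address
</code></pre>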
| Carson |
<p>I am trying to set up an ALB load balancer instead of the default ELB load balancer in Kubernetes on AWS. The load balancer has to be connected to the Istio ingressgateway. I looked for solutions and only found <a href="https://medium.com/@cy.chiang/how-to-integrate-aws-alb-with-istio-v1-0-b17e07cae156" rel="noreferrer">one</a>.
But the Istio version mentioned there is v1 and there have been many changes in Istio since. I tried to change the service type to NodePort in the chart (according to the blog), but the service still comes up as a LoadBalancer.</p>
<p>Can someone mention steps how to configure ALB for istio ingressgateway?</p>
<p>Thanks for reading</p>
| sachin | <blockquote>
<p>Step 1 : Change the istio-ingressgateway service type to NodePort (see the sketch at the end of this answer)</p>
<p>Step 2 : Install the ALB ingress controller</p>
<p>Step 3 : Write an ingress.yaml for the istio-ingressgateway as follows:</p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: istio-system
name: ingress
labels:
app: ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/subnets: <subnet1>,<subnet2>
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: istio-ingressgateway
servicePort: 80
</code></pre>
<blockquote>
<p>The alb.ingress.kubernetes.io/subnets annotation can be avoided if you labelled the VPC subnets with:</p>
<p>kubernetes.io/cluster/: owned</p>
<p>kubernetes.io/role/internal-elb: 1 (for internal ELB)</p>
<p>kubernetes.io/role/elb: 1 (for external ELB)</p>
</blockquote>
<p>Otherwise, you can provide two subnet values in the above YAML; each subnet should be in a different availability zone.</p>
<blockquote>
<p>It worked in Istio 1.6</p>
</blockquote>
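<p>For step 1, one possible way to switch the service type (a sketch; double-check the service name in your <code>istio-system</code> namespace) is:</p>
<pre><code>kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec": {"type": "NodePort"}}'
</code></pre>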
| tibin tomy |
<p>Like <a href="https://github.com/bitnami/charts/tree/master/bitnami/nginx" rel="nofollow noreferrer">nginx chart</a>, is there a way to quickly generate a list of all parameters?</p>
| xianzhe | <p><code>helm show values bitnami/nginx</code></p>
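<p>To keep the full parameter list around for editing, a common pattern is to write it to a file and pass it back on install:</p>
<pre><code>helm show values bitnami/nginx > values.yaml
helm install my-nginx bitnami/nginx -f values.yaml
</code></pre>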
| gohm'c |
<p>Does anyone know what I am doing wrong with my Kubernetes secret yaml and why it's not able to successfully create one programmatically?</p>
<p>I am trying to programmatically create a secret in Kubernetes cluster with credentials to pull an image from a private registry but it is failing with the following:</p>
<pre><code>"Secret "secrettest" is invalid: data[.dockerconfigjson]: Invalid value: "<secret contents redacted>": invalid character 'e' looking for beginning of value"
</code></pre>
<p>This is the yaml I tried to use to create the secret with. It is yaml output from a secret previously created in my kubernetes cluster using the command line except without a few unnecessary properties. So I know this is valid yaml:</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJoZWxsb3dvcmxkLmF6dXJlY3IuaW8iOnsidXNlcm5hbWUiOiJoZWxsbyIsInBhc3N3b3JkIjoid29ybGQiLCJhdXRoIjoiYUdWc2JHODZkMjl5YkdRPSJ9fX0=
kind: Secret
metadata:
name: secrettest
namespace: default
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>This is the decoded value of the ".dockerconfigjson" property which seems to be throwing the error but not sure why if the value is supposed to be encoded per documentation:</p>
<pre><code>{"auths":{"helloworld.azurecr.io":{"username":"hello","password":"world","auth":"aGVsbG86d29ybGQ="}}}
</code></pre>
<p>According to the documentation, my yaml is valid, so I'm not sure what the issue is:
<a href="https://i.stack.imgur.com/Cdl2H.png" rel="nofollow noreferrer">Customize secret yaml</a></p>
<p><strong>Note: I tried creating the Secret using the Kubernetes client and "PatchNamespacedSecretWithHttpMessagesAsync" in C#</strong></p>
<p>Referenced documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
| jorgeavelar98 | <p>I was getting a similar error when I was trying to create a secret using client-go.
The error actually tells you that the encoded string has an invalid 'e' at the beginning of the value (so it is likely expecting a '{' at the beginning). To solve this, the value should not be base64-encoded by you. Just use it as it is and it will be encoded later.</p>
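<p>If it helps, you can also let <code>kubectl</code> build the <code>.dockerconfigjson</code> for you and inspect what the data should look like (using the sample credentials decoded in the question):</p>
<pre><code>kubectl create secret docker-registry secrettest \
  --docker-server=helloworld.azurecr.io \
  --docker-username=hello \
  --docker-password=world \
  --dry-run=client -o yaml   # drop the last two flags to actually create it
</code></pre>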
| Sagar Deshmukh |
<p>I want to create more than one cache using Helm; my yaml is the following:</p>
<pre class="lang-yaml prettyprint-override"><code>deploy:
infinispan:
cacheContainer:
distributedCache:
- name: "mycache"
mode: "SYNC"
owners: "2"
segments: "256"
capacityFactor: "1.0"
statistics: "false"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "3000"
maxIdle: "1001"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
partitionHandling:
whenSplit: "ALLOW_READ_WRITES"
mergePolicy: "PREFERRED_NON_NULL"
- name: "mycache1"
mode: "SYNC"
owners: "2"
segments: "256"
capacityFactor: "1.0"
statistics: "false"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "3000"
maxIdle: "1001"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
partitionHandling:
whenSplit: "ALLOW_READ_WRITES"
mergePolicy: "PREFERRED_NON_NULL"
</code></pre>
<p>When I install the Helm chart I get the following error:</p>
<pre class="lang-sh prettyprint-override"><code> Red Hat Data Grid Server failed to start org.infinispan.commons.configuration.io.ConfigurationReaderException: Missing required attribute(s): name[86,1]
</code></pre>
<p>I don't know if it is possible to create more than one cache. I have followed the following documentation: <a href="https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.3/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers" rel="nofollow noreferrer">https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.3/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers</a></p>
<p>Thanks for your help.</p>
<p>Alexis</p>
| Alexis C. C. | <p>Yes, it's possible to define multiple caches. You have to use the format:</p>
<pre class="lang-yaml prettyprint-override"><code>deploy:
infinispan:
cacheContainer:
<1st-cache-name>:
<cache-type>:
<cache-definition>:
...
<2nd-cache-name>:
<cache-type>:
<cache-definition>:
</code></pre>
<p>So in your case that will be:</p>
<pre class="lang-yaml prettyprint-override"><code>deploy:
infinispan:
cacheContainer:
mycache: # mycache definition follows
distributedCache:
mode: "SYNC"
owners: "2"
segments: "256"
capacityFactor: "1.0"
statistics: "false"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "3000"
maxIdle: "1001"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
partitionHandling:
whenSplit: "ALLOW_READ_WRITES"
mergePolicy: "PREFERRED_NON_NULL"
mycache1: # mycache1 definition follows
distributedCache:
mode: "SYNC"
owners: "2"
segments: "256"
capacityFactor: "1.0"
statistics: "false"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "3000"
maxIdle: "1001"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
partitionHandling:
whenSplit: "ALLOW_READ_WRITES"
mergePolicy: "PREFERRED_NON_NULL"
</code></pre>
<p>See <a href="https://infinispan.org/docs/infinispan-operator/2.2.x/operator.html#infinispan-configuration_configuring-clusters" rel="nofollow noreferrer">here</a> for an example of how to define multiple caches in json/xml/yaml formats.</p>
| Ryan Emerson |
<p>I am deploying my application on a Scaleway Kapsule Kubernetes cluster and I am trying to generate a TLS certificate from Let's Encrypt using cert-manager. Here are my resources:</p>
<hr />
<p>Secret:</p>
<pre><code>apiVersion: v1
stringData:
SCW_ACCESS_KEY: XXX
SCW_SECRET_KEY: XXX
kind: Secret
metadata:
name: scaleway-secret
type: Opaque
</code></pre>
<hr />
<p>Issuer:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: scaleway
spec:
acme:
email: xxx
server: https://acme-staging-v02.api.letsencrypt.org/directory
# for production use this URL instead
# server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: scaleway-acme-secret
solvers:
- dns01:
webhook:
groupName: acme.scaleway.com
solverName: scaleway
config:
accessKeySecretRef:
key: SCW_ACCESS_KEY
name: scaleway-secret
secretKeySecretRef:
key: SCW_SECRET_KEY
name: scaleway-secret
</code></pre>
<hr />
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-lb
annotations:
cert-manager.io/issuer: scaleway
kubernetes.io/tls-acme: "true"
spec:
rules:
- host: mydomain.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-svc
port:
number: 80
tls:
- hosts:
- mydomain.example.com
secretName: mydomain.example.com-cert
</code></pre>
<p>But I encounter a strange error that I did not find on the internet or in any of the forums:</p>
<pre><code>Error presenting challenge: failed to update DNS zone recrds: scaleway-sdk-go: http error 403 Forbidden: domain not found
</code></pre>
<p>My domain is pointing to the IP of the load balancer as it should, and it's working. What could it be?</p>
| joe1531 | <p><code>failed to update DNS zone recrds: scaleway-sdk-go: http error 403 Forbidden</code></p>
<p>Your role has no rights over the registered domain; see the documentation <a href="https://developers.scaleway.com/en/products/domain/dns/api/#permissions" rel="nofollow noreferrer">here</a>.</p>
| gohm'c |
<p>I am trying to create an HPA for a memory-based metric, so I provided the annotations in my service according to the docs.</p>
<pre><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: helloworld-go
namespace: default
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
autoscaling.knative.dev/metric: "memory"
</code></pre>
<p><a href="https://knative.dev/docs/serving/autoscaling/autoscaling-metrics/#setting-metrics-per-revision" rel="nofollow noreferrer">https://knative.dev/docs/serving/autoscaling/autoscaling-metrics/#setting-metrics-per-revision</a></p>
<p>HPA ->
<img src="https://i.stack.imgur.com/H0YVE.png" alt="HPA" />
As you can see, the metric type is only CPU. I have also checked the YAML output, but it always creates an <strong>hpa</strong> with <strong>cpu</strong> metrics only. I don't have much experience with this autoscaling.</p>
<p>And the annotations are added to my hpa from service.
<img src="https://i.stack.imgur.com/s3uhH.png" alt="Annotations in HPA" /></p>
<p>Any suggestions on this, or what I am doing wrong.</p>
| Parneet Raghuvanshi | <p>Finally, after some research and experiments:</p>
<h3>Explanation:</h3>
<p>To make the HPA work for a Knative service, we also need to specify the target as an annotation, as we can see here:</p>
<p><a href="https://i.stack.imgur.com/kUjPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kUjPW.png" alt="enter image description here" /></a></p>
<p>This <strong>if</strong> condition for the HPA will only work if we specify some target for the metric.
<a href="https://github.com/knative/serving/blob/91ac3b335131565cb9304ed9f6259c959f71b996/pkg/reconciler/autoscaling/hpa/resources/hpa.go#L62" rel="nofollow noreferrer">https://github.com/knative/serving/blob/91ac3b335131565cb9304ed9f6259c959f71b996/pkg/reconciler/autoscaling/hpa/resources/hpa.go#L62</a></p>
<h3>Solution:</h3>
<p>Now it works by adding one more annotation:</p>
<pre><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: helloworld-go
namespace: default
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
autoscaling.knative.dev/metric: "memory"
autoscaling.knative.dev/target: "75"
</code></pre>
<p>Now the <strong>hpa</strong> will have metric type = memory:
<a href="https://i.stack.imgur.com/WW1sb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WW1sb.png" alt="enter image description here" /></a></p>
| Parneet Raghuvanshi |
<p>The default Kubernetes (K3S) installation (rather rudely) occupies port 443 with the metrics-server. I am able to patch the service to change the port but then kubectl does not know how to query metrics. Where do I change the port for the kubectl client?
Port 443 is not in <code>~/.kube/config</code> (only 6443, the API port).</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1",...
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch service metrics-server -n kube-system --type='json' --patch='[{"op": "replace", "path": "/spec/ports/0/port", "value":7443}]'
service/metrics-server patched
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (ServiceUnavailable): the server is currently unable to handle the request
</code></pre>
| Marc | <p>metrics-server registered port 443 with the api-server during installation. The easiest way is to <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#kubernetes-components" rel="nofollow noreferrer">disable</a> the bundled metrics-server and re-install it with the service port set to 7443 (<a href="https://github.com/kubernetes-sigs/metrics-server/blob/4b20c2d43e338d5df7fb530dc960e5d0753f7ab1/charts/metrics-server/values.yaml#L104" rel="nofollow noreferrer">here</a>), so the call-out reaches the right port. If you are using the manifest, amend the port number <a href="https://github.com/kubernetes-sigs/metrics-server/blob/4b20c2d43e338d5df7fb530dc960e5d0753f7ab1/manifests/base/service.yaml#L10" rel="nofollow noreferrer">here</a>.</p>
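<p>A rough sketch of what that can look like (the K3s flag is documented; the Helm repo and chart value names are my assumptions, so verify them against the chart you use):</p>
<pre><code># disable the bundled metrics-server on the K3s server (flag, config.yaml or systemd unit)
k3s server --disable metrics-server

# re-install it with the service listening on 7443 instead of 443
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system --set service.port=7443
</code></pre>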
| gohm'c |
<p>I would like to add a fluent-bit agent as a sidecar container to an existing <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">Istio Ingress Gateway</a> <code>Deployment</code> that is generated via external tooling (<code>istioctl</code>). I figured using <a href="https://get-ytt.io/" rel="nofollow noreferrer">ytt</a> and its <a href="https://github.com/k14s/ytt/blob/develop/docs/lang-ref-ytt-overlay.md" rel="nofollow noreferrer">overlays</a> would be a good way to accomplish this since it should let me append an additional <code>container</code> to the <code>Deployment</code> and a few extra <code>volumes</code> while leaving the rest of the generated YAML intact.</p>
<p>Here's a placeholder <code>Deployment</code> that approximates an <code>istio-ingressgateay</code> to help visualize the structure:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-ingressgateway
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
template:
metadata:
labels:
app: istio-ingressgateway
spec:
containers:
- args:
- example-args
command: ["example-command"]
image: gcr.io/istio/proxyv2
imagePullPolicy: Always
name: istio-proxy
volumes:
- name: example-volume-secret
secret:
secretName: example-secret
- name: example-volume-configmap
configMap:
name: example-configmap
</code></pre>
<p>I want to add a container to this that looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
</code></pre>
<p>and <code>volumes</code> that look like:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
- name: varlog
hostPath:
path: /var/log
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>I managed to hack something together by modifying the <a href="https://get-ytt.io/#example:example-overlay-files" rel="nofollow noreferrer">overylay files example</a> in the ytt playground, this looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
containers:
#@overlay/append
- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
volumes:
#@overlay/append
- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
#@overlay/append
- name: varlog
hostPath:
path: /var/log
#@overlay/append
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>What I am wondering, though, is what is the best, most idiomatic way of using <code>ytt</code> to do this?</p>
<p>Thanks!</p>
| tcdowney | <p>What you have now is good! The one suggestion I would make is that, if the volumes and containers always <em>need</em> to be added together, they be combined into the same overlay, like so:</p>
<pre><code>#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
containers:
#@overlay/append
- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
#@overlay/append
- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
#@overlay/append
- name: varlog
hostPath:
path: /var/log
#@overlay/append
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>This will guarantee that any time the container is added, the appropriate volumes are included as well.</p>
| ewrenn |
<p>I am installing the below helm package on my K8s cluster</p>
<p><a href="https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-21.0.0" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-21.0.0</a></p>
<p>I've got it locally, and when I deploy it, it creates everything including a service called alertmanager-operated. It is listening on TCP port 9093 and I need to change this. I don't see where this can be configured in the values.yaml or anywhere else in the package.</p>
| DeirdreRodgers | <p>It's <a href="https://github.com/prometheus-community/helm-charts/blob/b25f5a532f26e28852d5e3b125c902428f521adf/charts/kube-prometheus-stack/values.yaml#L301" rel="nofollow noreferrer">here</a>. Your values.yaml can have:</p>
<pre><code>...
alertmanager:
service:
port: <your port #>
</code></pre>
<p>Follow-up on your comment <code>... cant tell how the alertmanager-operated service gets created and how to configure it</code></p>
<p><a href="https://www.vmware.com/topics/glossary/content/kubernetes-services" rel="nofollow noreferrer">Here's</a> a good source for quick understanding of various k8s services. For greater configure details checkout the official <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">documentation</a>. Set the values according to your need and k8s will create the service for you when you apply the chart.</p>
| gohm'c |
<p>When I went through some of the tutorials online, it showed that only the worker nodes have the container runtime.
But from my understanding, it seems master nodes also run some pods such as <a href="https://stackoverflow.com/questions/58481709/why-kubelet-is-running-on-kubernetes-master-node#:%7E:text=The%20reason%20is%20that%20kubeadm,to%20provide%20the%20infrastructure%20pods.">etcd and the api server components</a> to ensure the cluster functions properly and thus has kubelet.
Can anyone please correct me if I'm wrong and answer my question if possible?</p>
| Saif Mu | <p><code>Master</code> nodes have <code>CRI</code> too, verify it using: <code>kubectl get nodes -o wide</code>.</p>
<p>When a Kubernetes cluster is first set up, a <code>Taint</code> is set on the master node. This automatically prevents ordinary workload pods from being scheduled on this node. But it's definitely possible to run pods on the master node. However, <code>best practice</code> is not to deploy application workloads on a master server.</p>
<p>In terms of tutorials, I believe it's just to keep things simple.</p>
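<p>For example, you can inspect the master's taint and, if you really want to schedule workloads there, remove it (the taint key is an assumption: older clusters use <code>node-role.kubernetes.io/master</code>, newer ones <code>node-role.kubernetes.io/control-plane</code>):</p>
<pre><code>kubectl describe node <master-node> | grep -i taints
kubectl taint nodes <master-node> node-role.kubernetes.io/control-plane:NoSchedule-   # trailing "-" removes the taint
</code></pre>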
| Sakib Md Al Amin |
<p>I'm trying to build the app in Kubernetes following the walkthrough of
Docker and Kubernetes - Full Course for Beginners
<a href="https://www.youtube.com/watch?v=Wf2eSG3owoA&t=14992s&ab_channel=freeCodeCamp.org" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Wf2eSG3owoA&t=14992s&ab_channel=freeCodeCamp.org</a></p>
<p>After the comands:</p>
<pre><code>wendel@wendel-VirtualBox:~/Docker-Kub-MongoDB$ kubectl apply -f mongo-configmap.yaml
configmap/mongodb-configmap created
wendel@wendel-VirtualBox:~/Docker-Kub-MongoDB$ kubectl apply -f mongo-express.yaml
deployment.apps/mongo-express created
wendel@wendel-VirtualBox:~/Docker-Kub-MongoDB$ kubectl logs mongo-express-78fcf796b8-t9lqj
Welcome to mongo-express
------------------------
(...)
(node:7) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated,
and will be removed in a future version.
To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true }
to the MongoClient constructor.
Mongo Express server listening at http://0.0.0.0:8081
Server is open to allow connections from anyone (0.0.0.0)
basicAuth credentials are "admin:pass", it is recommended you change this in your config.js!
</code></pre>
<p><a href="https://i.stack.imgur.com/KhcMi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KhcMi.png" alt="enter image description here" /></a>
list of yamls:</p>
<p><a href="https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-configmap.yaml" rel="nofollow noreferrer">https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-configmap.yaml</a></p>
<p><a href="https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-express.yaml" rel="nofollow noreferrer">https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-express.yaml</a></p>
<p><a href="https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-secret.yaml" rel="nofollow noreferrer">https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo-secret.yaml</a></p>
<p><a href="https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo.yaml" rel="nofollow noreferrer">https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo.yaml</a></p>
<p>What did I miss that makes this warning message appear?</p>
| Wendel Fabiano R. da Silva | <p>This warning message is not related to k8s. It is related to a change in the JS driver's MongoDB server discovery. In your code, where you instantiate the MongoDB client, you should specify the flag suggested in the warning message:</p>
<p><code>const driver = MongoClient(<connection string>, { useUnifiedTopology: true })</code></p>
<p>The warning should go away then.</p>
| gohm'c |
<p>In Kubernetes docs <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details</a> it is said that:</p>
<p>For example, if the current metric value is 200m, and the desired value is 100m, the number of replicas will be doubled, since 200.0 / 100.0 == 2.0 If the current value is instead 50m, you'll halve the number of replicas, since 50.0 / 100.0 == 0.5. The control plane skips any scaling action if the ratio is sufficiently close to 1.0 <strong>(within a globally-configurable tolerance, 0.1 by default).</strong></p>
<p>But there is no information on how to change this tolerance in the HPA YAML config. Below is my HPA config:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-app-hpa
namespace: my-app
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
minReplicas: 1
maxReplicas: 6
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
</code></pre>
<p>How can I modify the value of the tolerance?</p>
| Nimander | <p>The tolerance value for the horizontal pod autoscaler (HPA) in Kubernetes is a global configuration setting and it's not set on the individual HPA object. It is set on the controller manager that runs on the Kubernetes control plane. You can change the tolerance value by modifying the configuration file of the controller manager and then restarting the controller manager.</p>
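<p>On a kubeadm-style control plane, that usually means adding the <code>--horizontal-pod-autoscaler-tolerance</code> flag to the kube-controller-manager static pod manifest (the file path below assumes kubeadm defaults; managed control planes such as GKE/EKS/AKS generally do not expose this setting):</p>
<pre><code># on the control-plane node; the kubelet restarts the controller manager automatically
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# add under spec.containers[0].command:
#   - --horizontal-pod-autoscaler-tolerance=0.2
</code></pre>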
| Razvan I. |
<p>I'm attempting to use the <a href="https://plugins.jenkins.io/statistics-gatherer/" rel="nofollow noreferrer">Statistics Gathering</a> Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: <code>http://logstash.monitoring-observability:9000</code>. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:</p>
<pre><code>2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
</code></pre>
<p>I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:</p>
<pre><code>jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
</code></pre>
<p>I also get the following error in the logstash logs:</p>
<pre><code>[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
</code></pre>
<p>Here is my jenkins-deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: devops-tools
labels:
app: jenkins-server
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
securityContext:
fsGroup: 1000
runAsUser: 1000
serviceAccountName: jenkins-admin
containers:
- name: jenkins
env:
- name: LOGSTASH_HOST
value: logstash
- name: LOGSTASH_PORT
value: "5044"
- name: ELASTICSEARCH_HOST
value: elasticsearch-logging
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
image: jenkins/jenkins:lts
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-data
mountPath: /var/jenkins_home
volumes:
- name: jenkins-data
persistentVolumeClaim:
claimName: jenkins-pv-claim
</code></pre>
<p>Here is my jenkins-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins-server
namespace: devops-tools
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8080'
spec:
selector:
app: jenkins-server
k8s-app: jenkins-server
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
</code></pre>
<p>Here is my logstash-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: monitoring-observability
labels:
app: logstash
spec:
selector:
matchLabels:
app: logstash
replicas: 1
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
env:
- name: JENKINS_HOST
value: jenkins-server
- name: JENKINS_PORT
value: "8080"
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 9000
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
</code></pre>
<p>Here is my logstash-service.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: logstash
namespace: monitoring-observability
labels:
app: logstash
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "logstash"
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
</code></pre>
<p>Here is my logstash configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
host => "jenkins-server.devops-tools"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
</code></pre>
<p>There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port <code>5044</code> and get the same results. It seems as though my logstash instance is not actually listening on the <code>containerPort</code>. Why might this be?</p>
| Kyle Green | <p>I resolved this error by updating the configmap to this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
</code></pre>
<p>Note that all references to the Jenkins host have been removed. The <code>host</code> option of the Logstash <code>tcp</code> input is the address the plugin tries to bind its listener to, so pointing it at <code>jenkins-server.devops-tools</code> made Logstash attempt to bind to an address that does not exist inside its own pod. That is what produced the <code>Cannot assign requested address</code> error and left nothing listening on port 9000.</p>
| Kyle Green |
<p>I'm looking to set up a consul service mesh in my Kubernetes Cluster and need to enable ingress-gateway. My plan is to run ingress-gateway as a ClusterIP Service and Kubernetes Ingress (Nginx ingress) to direct traffic to that ingress. I've been going through the tutorials on Ingress Gateway on Consul.io and am confused by something. The helm chart has a list of <code>gateways:</code> with a name.</p>
<ul>
<li><p>Does the name of the service built by the helm chart have to match the consul configuration for ingress (minus the prefix applied by helm)?</p>
</li>
<li><p>If it does not have to match can I set up multiple consul ingress gateways on the same port?</p>
</li>
</ul>
<p>Example:</p>
<pre><code>$ cat myingress.hcl
Kind = "ingress-gateway"
# does the following Name need to match kubernetes service
Name = "ingress-gateway"
Listeners = [
Port = 8080
......
]
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
consul-ingress-gateway ClusterIP <blah> <blah> 8080/TCP,8443/TCP
......
</code></pre>
| Wanderer | <p>The <a href="https://www.consul.io/docs/k8s/helm#v-ingressgateways-gateways" rel="nofollow noreferrer"><code>Name</code></a> field in the configuration entry has to match the name of the service as registered in Consul. By default the Helm chart uses the name "ingress-gateway" (<a href="https://www.consul.io/docs/k8s/helm#v-ingressgateways-gateways-name" rel="nofollow noreferrer">https://www.consul.io/docs/k8s/helm#v-ingressgateways-gateways-name</a>).</p>
<p>You can customize this with the <code>name</code> field which must be defined for each ingress gateway listed under the <a href="https://www.consul.io/docs/k8s/helm#v-ingressgateways-gateways" rel="nofollow noreferrer"><code>ingressGateways.gateways</code></a> array in your Helm chart's values file. For example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
ingressGateways:
gateways:
- name: ingress-gateway
service: LoadBalancer
ports:
- 8080
- name: nonprod-gateway
service: LoadBalancer
ports:
- 9000
</code></pre>
| Blake Covarrubias |
<p>I'm using fluent-bit to collect logs and pass it to fluentd for processing in a Kubernetes environment. Fluent-bit instances are controlled by DaemonSet and read logs from docker containers.</p>
<pre><code> [INPUT]
Name tail
Path /var/log/containers/*.log
Parser docker
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
</code></pre>
<p>There is a fluent-bit service also running</p>
<pre><code>Name: monitoring-fluent-bit-dips
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=fluent-bit-dips
app.kubernetes.io/version=1.8.10
helm.sh/chart=fluent-bit-0.19.6
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=fluent-bit-dips
Type: ClusterIP
IP Families: <none>
IP: 10.43.72.32
IPs: <none>
Port: http 2020/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.144:2020,10.42.1.155:2020,10.42.2.186:2020 + 1 more...
Session Affinity: None
Events: <none>
</code></pre>
<p>Fluentd service description is as below</p>
<pre><code>Name: monitoring-logservice
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=logservice
app.kubernetes.io/version=1.9
helm.sh/chart=logservice-0.1.2
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=logservice
Type: ClusterIP
IP Families: <none>
IP: 10.43.44.254
IPs: <none>
Port: http 24224/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.143:24224
Session Affinity: None
Events: <none>
</code></pre>
<p>But the fluent-bit logs don't reach fluentd, and I am getting the following error:</p>
<pre><code>[error] [upstream] connection #81 to monitoring-fluent-bit-dips:24224 timed out after 10 seconds
</code></pre>
<p>I tried several things like;</p>
<ul>
<li>re-deploying fluent-bit pods</li>
<li>re-deploy fluentd pod</li>
<li>Upgrade fluent-bit version from 1.7.3 to 1.8.10</li>
</ul>
<p>This is a Kubernetes environment where fluent-bit was able to communicate with fluentd at a very early stage of the deployment. Apart from that, the same fluent versions work when I deploy locally in a docker-desktop environment.</p>
<p>My guesses are</p>
<ul>
<li>fluent-bit cannot manage the amount of log process</li>
<li>fluent services are unable to communicate once the services are restarted</li>
</ul>
<p>Does anyone have experience with this, or any idea how to debug this issue more deeply?</p>
<hr />
<p>Updated following with fluentd running pod description</p>
<pre><code>Name: monitoring-logservice-5b8864ffd8-gfpzc
Namespace: dips
Priority: 0
Node: sl-sy-k3s-01/10.16.1.99
Start Time: Mon, 29 Nov 2021 13:09:13 +0530
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/name=logservice
pod-template-hash=5b8864ffd8
Annotations: kubectl.kubernetes.io/restartedAt: 2021-11-29T12:37:23+05:30
Status: Running
IP: 10.42.0.143
IPs:
IP: 10.42.0.143
Controlled By: ReplicaSet/monitoring-logservice-5b8864ffd8
Containers:
logservice:
Container ID: containerd://102483a7647fd2f10bead187eddf69aa4fad72051d6602dd171e1a373d4209d7
Image: our.private.repo/dips/logservice/splunk:1.9
Image ID: our.private.repo/dips/logservice/splunk@sha256:531f15f523a251b93dc8a25056f05c0c7bb428241531485a22b94896974e17e8
Ports: 24231/TCP, 24224/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 29 Nov 2021 13:09:14 +0530
Ready: True
Restart Count: 0
Liveness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SOME_ENV_VARS
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from monitoring-logservice-token-g9kwt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
monitoring-logservice-token-g9kwt:
Type: Secret (a volume populated by a Secret)
SecretName: monitoring-logservice-token-g9kwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
| AnujAroshA | <p>Try changing your Fluent Bit configuration so that its output points to the Fluentd service, <strong>monitoring-logservice.dips:24224</strong> (service name plus namespace), instead of the Fluent Bit service itself.</p>
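<p>A minimal sketch of the corresponding Fluent Bit output section (the <code>Match</code> pattern is an assumption based on the <code>kube.*</code> tag used by the tail input above):</p>
<pre><code>[OUTPUT]
    Name   forward
    Match  kube.*
    Host   monitoring-logservice.dips
    Port   24224
</code></pre>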
| gohm'c |
<p>I want to set up some stuff when starting Kubernetes node worker. Specifically, I change systemd service configuration and apply it (systemctl daemon-reload), but from inside the container, I don't know how to configure systemd of node worker</p>
| Nguyễn Văn Dưng | <p>Not sure what you actually want to do, but</p>
<ol>
<li>Usually systemd is not installed inside containers</li>
<li>I don't know what you want to implement, but I'm pretty sure that running the systemd daemon inside a container is a <a href="https://stackoverflow.com/questions/51979553/is-it-recommended-to-run-systemd-inside-docker-container">bad idea</a></li>
<li>In most cases, if you want to start a long-running background process, it is better to run it in a separate container and connect the two containers.</li>
<li>If you need to perform some action on container start before the main process runs, just override the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">entrypoint</a> and prepend your own command before the main one (a minimal sketch follows after this list). You could also append the command with the <a href="https://www.maketecheasier.com/run-bash-commands-background-linux/" rel="nofollow noreferrer">&</a> symbol to run it in the background, but that is not a smart solution.</li>
</ol>
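<p>For point 4, a minimal sketch of overriding the container command so a setup step runs before the main process (the image, script path and binary name are placeholders, not real values from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: setup-then-run
spec:
  containers:
  - name: app
    image: my-app:latest            # placeholder image
    command: ["/bin/sh", "-c"]
    # run a one-off setup step, then exec the image's normal process
    args: ["/opt/setup.sh && exec /usr/local/bin/app"]
</code></pre>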
| rzlvmp |
<p>I have installed and configured a Consul cluster on VM nodes. I would like to add further nodes to it in CLIENT mode, not in SERVER mode. These nodes should run on Kubernetes. I have used the Helm template, but I am only able to add these nodes in SERVER mode, not in CLIENT mode. HELM TEMPLATE: <a href="https://github.com/helm/charts/tree/master/stable/consul" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/consul</a></p>
<p>I want to use this for service registration to the Consul cluster. Does anyone have any idea or experience with this?</p>
| Jiří Šafář | <p>The official Consul helm chart (<a href="https://github.com/hashicorp/consul-helm/" rel="nofollow noreferrer">https://github.com/hashicorp/consul-helm/</a>) supports this. You'll want to deploy the Helm chart using a configuration similar to the following.</p>
<pre class="lang-yaml prettyprint-override"><code># By default disable all resources in the Helm chart
global:
enabled: false
# Enable Client nodes
client:
enabled: true
# Set this to true to expose the Consul clients using the Kubernetes node
# IPs. If false, the pod IPs must be routable from the external servers.
exposeGossipPorts: true
# IPs of your external Consul server(s)
join:
- 192.0.2.10
- 192.0.2.20
- 192.0.2.30
</code></pre>
<p>See the <a href="https://www.consul.io/docs/platform/k8s/servers-outside-kubernetes.html" rel="nofollow noreferrer">Consul Servers Outside of Kubernetes</a> docs for details.</p>
| Blake Covarrubias |
<p>I have a Kafka wrapper library that uses transactions on the produce side only. The library does not cover the consumer. The producer publishes to multiple topics. The goal is to achieve transactionality. So the produce should either succeed which means there should be exactly once copy of the message written in each topic, or fail which means message was not written to any topics. The users of the library are applications that run on Kubernetes pods. Hence, the pods could fail, or restart frequently. Also, the partition is not going to be explicitly set upon sending the message.</p>
<p>My question is, how should I choose the transactional.id for producers? My first idea is to simply choose UUID upon object initiation, as well as setting a transaction.timeout.ms to some reasonable time (a few seconds). That way, if a producer gets terminated due to pod restart, the consumers don't get locked on the transaction forever.</p>
<p>Are there any flaws with this strategy? Is there a smarter way to do this? Also, I cannot ask the library user for some kind of id.</p>
| user14205224 | <p>UUID can be used in your library to generate transaction id for your producers. I am not really sure what you mean by: <em>That way, if a producer gets terminated due to pod restart, the consumers don't get locked on the transaction forever</em>.</p>
<p>Consumer is never really "stuck". Say the producer goes down after writing message to one topic (and hence transaction is not yet committed), then consumer will behave in one of the following ways:</p>
<ul>
<li>If <code>isolation.level</code> is set to <code>read_committed</code>, consumer will never process the message (since the message is not committed). It will still read the next committed message that comes along.</li>
<li>If <code>isolation.level</code> is set to <code>read_uncommitted</code>, the message will be read and processed (defeating the purpose of transaction in the first place).</li>
</ul>
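<p>For reference, a sketch of the consumer property that selects the first behaviour (reading only committed, i.e. transactional, messages):</p>
<pre><code># consumer configuration (properties file sketch)
isolation.level=read_committed
</code></pre>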
| Rishabh Sharma |
<p>I'm trying to run Hazelcast on Kubernetes. I have 2 pods running the latest Hazelcast image (hazelcast/hazelcast); each pod detects the other, so all is okay.</p>
<p>When I run the pod with my Spring Boot application, which has:</p>
<pre><code><hazelcast.ver>3.9-EA</hazelcast.ver>
</code></pre>
<p>Also I use</p>
<pre><code> <groupId>com.hazelcast</groupId>
<artifactId>hazelcast-client</artifactId>
<version>3.9-EA</version>
</code></pre>
<p>I get on hazelcast pod this error:</p>
<pre><code>java.lang.IllegalStateException: Unknown protocol: CB2
at `com.hazelcast.internal.server.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:132) ~[hazelcast-5.2.1.jar:5.2.1]`
</code></pre>
<p>Any idea ?</p>
| flteam | <p>Hazelcast clients of version 3.x are not compatible with Hazelcast 4.x or 5.x servers. Since the Hazelcast version you use for your servers is 5.2.1 (latest, as of now) 3.9 clients cannot work with them. You need to use 4.x or 5.x clients in your Spring Boot application, preferably the same version with your server.</p>
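<p>For example, assuming Maven, the 3.9 <code>hazelcast-client</code> dependency would be replaced with the unified <code>hazelcast</code> artifact used by 4.x/5.x (which includes the Java client); match the version to your server:</p>
<pre><code><dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>5.2.1</version>
</dependency>
</code></pre>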
| mdumandag |
<p>I have a simple web server running in a single pod on GKE. I have also exposed it using a load balancer service. What is the easiest way to make this pod accessible over HTTPS?</p>
<pre><code>gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
personal..... us-central1-a 1.19.14-gke.1900 34.69..... e2-medium 1.19.14-gke.1900 1 RUNNING
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10..... <none> 443/TCP 437d
my-service LoadBalancer 10..... 34.71...... 80:30066/TCP 12d
kubectl get pods
NAME READY STATUS RESTARTS AGE
nodeweb-server-9pmxc 1/1 Running 0 2d15h
</code></pre>
<p>EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....</p>
| nickponline | <p>First, your cluster should have Config Connector <a href="https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall#addon-configuring" rel="nofollow noreferrer">installed</a> and function properly.</p>
<p>Start by delete your existing load balancer service <code>kubectl delete service my-service</code></p>
<p>Create a static IP.</p>
<pre><code>apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
name: <name your IP>
spec:
location: global
</code></pre>
<p>Retrieve the created IP <code>kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'</code></p>
<p>Create an DNS "A" record that map your registered domain with the created IP address. Check with <code>nslookup <your registered domain name></code> to ensure the correct IP is returned.</p>
<p>Update your load balancer service spec by insert the following line after <code>type: LoadBalancer</code>: <code>loadBalancerIP: "<the created IP address>"</code></p>
<p>Re-create the service and check <code>kubectl get service my-service</code> has the EXTERNAL-IP set correctly.</p>
<p>Create <code>ManagedCertificate</code>.</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: <name your cert>
spec:
domains:
- <your registered domain name>
</code></pre>
<p>Then create the Ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: <name your ingress>
annotations:
networking.gke.io/managed-certificates: <the named certificate>
spec:
rules:
- host: <your registered domain name>
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>Check with <code>kubectl describe ingress <named ingress></code>, see the rules and annotations section.</p>
<p><strong>NOTE:</strong> It can take up to 15mins for the load balancer to be fully ready. Test with <code>curl https://<your registered domain name></code>.</p>
| gohm'c |
<p>I have deployed MongoDB ReplicaSet on Kubernetes using Helm and the chart <code>stable/mongodb-replicaset</code></p>
<p>On Kubernetes, I can connect to MongoDB using the connection string which is something of the sort</p>
<pre><code>mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
</code></pre>
<p>In the event I change the number of replicas, the connection string would change as well, which also means that every application connecting to the database would need to be updated.</p>
<p>Is there a workaround to this?</p>
<p>I thought of creating a Service, so that only this would need to be changed, however the connection string does not pass regex validation.</p>
<p>Any help on this is appreciated.</p>
| GZZ | <p>The Helm chart <code>stable/mongodb-replicaset</code> deploys also 2 headless services:</p>
<ol>
<li><code><release name>-mongodb-replicaset</code></li>
<li><code><release name>-mongodb-replicaset-client</code></li>
</ol>
<p>The DNS record of <code><release name>-mongodb-replicaset</code> returns the address of all the replicas, so, in order to connect to the replicaset, the connection string is</p>
<p><code>"mongodb+srv://<release name>-mongodb-replicaset.namespace.svc.cluster.local/?tls=false&ssl=false"</code></p>
<p>Note that tls and ssl have been set to false for testing as they were enabled by default.</p>
| GZZ |
<p>I am trying to create an encrypted persistent volume claim with an EBS StorageClass with the below k8s yaml:</p>
<pre><code> ---
#########################################################
# Encrypted storage for Redis AWS EBS
#########################################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: encrypted-redis-data
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
encrypted: "true"
---
#########################################################
# Persistent volume for redis
#########################################################
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-encry
labels:
name: redis-data-encry
spec:
storageClassName: encrypted-redis-data
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
</code></pre>
<p>Upon doing so the persistent volume claim is stuck in "Pending" status with the following error:</p>
<blockquote>
<p>Failed to provision volume with StorageClass "encrypted-redis-data": failed to create encrypted volume: the volume disappeared after creation, most likely due to inaccessible KMS encryption key</p>
</blockquote>
<p>How can I fix this and create the EBS volume?</p>
| user14132079 | <p>I found out the answer thanks to IronMan. I added the proper KMS permissions to the EKS cluster and the volume was created. Answer found here:
<a href="https://github.com/kubernetes/kubernetes/issues/62171#issuecomment-380481349" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/62171#issuecomment-380481349</a></p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Minimal_EBS_KMS_Create_and_Attach",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:CreateGrant"
],
"Resource": "key arn"
}
]
}
</code></pre>
| user14132079 |
<p>I am trying to run a go-ethereum node on AWS EKS; for that I have used StatefulSets with the configuration below.
<a href="https://i.stack.imgur.com/RxyOt.png" rel="nofollow noreferrer">statefulset.yaml file</a></p>
<p>Running<code>kubectl apply -f statefulset.yaml</code> creates 2 pods out of which 1 is running and 1 is in CrashLoopBackOff state.
<a href="https://i.stack.imgur.com/PS6ez.png" rel="nofollow noreferrer">Pods status</a>
After checking the logs of the second pod, the error I am getting is <code>Fatal: Failed to create the protocol stack: datadir already used by another process</code>.
<a href="https://i.stack.imgur.com/J4rqa.png" rel="nofollow noreferrer">Error logs i am getting</a></p>
<p>The problem is mainly due to the pods writing (geth data) to the same directory on the persistent volume (i.e. all pods write to '/data'). If I use a subPath expression and mount each pod's directory to a sub-directory named after the pod (e.g. '/data/geth-0'), it works fine.
<a href="https://i.stack.imgur.com/AxXQA.png" rel="nofollow noreferrer">statefulset.yaml with volume mounting to a sub directory with podname
</a>
But my requirement is that all the three pod's data is written at '/data' directory.
Below is my volume config file.
<a href="https://i.stack.imgur.com/fGegz.png" rel="nofollow noreferrer">volume configuration</a></p>
| Sahil Singh | <p>You need to dynamically provision an EFS access point for each of your stateful pods. First create an EFS storage class that supports dynamic provisioning:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-dyn-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
parameters:
provisioningMode: efs-ap
directoryPerms: "700"
fileSystemId: <get the ID from the EFS console>
</code></pre>
<p>Update your spec to support claim template:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: geth
...
spec:
...
template:
...
spec:
containers:
- name: geth
...
volumeMounts:
- name: geth
mountPath: /data
...
volumeClaimTemplates:
- metadata:
name: geth
spec:
accessModes:
- ReadWriteOnce
storageClassName: efs-dyn-sc
resources:
requests:
storage: 5Gi
</code></pre>
<p>All pods now write to their own /data.</p>
| gohm'c |
<p>I am uploading and downloading files to Azure storage from a React spa using SAS tokens.</p>
<p>When running on localhost, everything works, however when deployed to Kubernetes on Azure, I receive the following authentication error.</p>
<pre><code>onError RestError: <?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:e6bfca97-c01e-0030-2e29-4e7d7c000000
Time:2020-06-29T15:26:39.7164613Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was w
2020-06-29T20:26:39Z
/blob/datalake/container/Natural_Language_Processing.pdf
</code></pre>
<p>The javascript code responsible for the upload is</p>
<pre class="lang-js prettyprint-override"><code>// upload to Azure
const blobName = file.name;
const accountSas = resp.data.SAS;
const account = resp.data.account;
const containerName = resp.data.container;
const anonymousCredential = new AnonymousCredential();
const blobServiceClient = new BlobServiceClient(
`https://${account}.blob.core.windows.net?${accountSas}`,
anonymousCredential
);
// Create a container
const containerClient = blobServiceClient.getContainerClient(
containerName
);
// Create a blob
const content = file;
const blockBlobClient = containerClient.getBlockBlobClient(blobName);
const uploadBlobResponse = await blockBlobClient.upload(
content,
Buffer.byteLength(content)
);
</code></pre>
<p>while the backend Python code for the SAS token generation is the following</p>
<pre class="lang-py prettyprint-override"><code>if content['up_down'] == 'download':
permission = BlobSasPermissions(read=True)
else:
permission = BlobSasPermissions(write=True)
account_name = os.getenv("STORAGE_ACCOUNT_NAME")
container_name = metadata.get_container_name()
blob_name = content['filePath']
expiry = datetime.utcnow() + timedelta(hours=5)
options = {
'account_name': account_name,
'container_name': container_name,
'blob_name': blob_name,
'account_key': os.getenv("STORAGE_ACCESS_KEY"),
'permission': permission,
'expiry': expiry
}
SAS = generate_blob_sas(**options)
</code></pre>
<p>Where <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob?view=azure-python#generate-blob-sas-account-name--container-name--blob-name--snapshot-none--account-key-none--user-delegation-key-none--permission-none--expiry-none--start-none--policy-id-none--ip-none----kwargs-" rel="nofollow noreferrer"><code>generate_blob_sas</code></a> is imported from azure-storage-blob (version 12.3.1).</p>
<p>Any idea on how to resolve this?</p>
| GZZ | <p>After a long time scratching my head to find a solution, I figured out where the problem was.</p>
<p>It had nothing to do with the Python library for accessing the blob, but rather with the environment variables in the Kubernetes pod.</p>
<p>The environment variables were passed to Kubernetes as secrets using a yaml file (as explained in this <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="nofollow noreferrer">link</a>).
Using this method, the secret needs to be base64 encoded. For this I was using the following</p>
<pre class="lang-sh prettyprint-override"><code>echo 'secret' | base64
>> c2VjcmV0Cg==
</code></pre>
<p>In this way however, the <code>echo</code> command appends by default a newline character to the output. What I should have used instead was</p>
<pre class="lang-sh prettyprint-override"><code>echo -n 'secret' | base64
>> c2VjcmV0
</code></pre>
<p>This bug was particularly difficult to find especially because when printed, the wrong solution would appear to lead to the correct result</p>
<pre class="lang-sh prettyprint-override"><code>echo 'secret' | base64 | base64 -d
>> secret
</code></pre>
<p>Anyway, I hope that my mistake will help someone in the future!</p>
| GZZ |
<p>I want to redeploy an application in k8s using GitOps(ArgoCD) in case of an only config Map change, how ArgoCD will understand to restart the container as we all know without restarting the container new config map is not going to take effect.</p>
<p>Scenario - If one container is running from ArgoCD and I have to modify configmap yaml file in GitHub and ArgoCD will automatically understand and sync the updated values but container will not restart as we are not modifying in Deployment Yaml Files, so how config map will take effect in the container</p>
| Rahul Raj | <p>Found a workaround for the above question. We can include a parameter (the Jenkins build number) as an environment variable in the Deployment config; the CI pipeline updates it on every build. So even when only a ConfigMap changes in the Git repo, the Deployment is still rolled out, because the build-number parameter changes after the pipeline runs, and ArgoCD is automatically triggered once any change lands in the Git repo it is connected to.</p>
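<p>A minimal sketch of the idea (the variable name and value are placeholders injected by the pipeline); because the pod template changes whenever the value changes, Kubernetes rolls the Deployment:</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: app
        image: my-app:latest
        env:
        - name: BUILD_NUMBER
          value: "42"   # replaced by the CI pipeline on every build
</code></pre>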
| Rahul Raj |
<p>To get one of our issues resolved, we need the property "CATALINA_OPTS=-Djava.awt.headless=true" in our Kubernetes configuration. I guess this should be added in a .yml file.</p>
<p>Please help us understand under which section this has to be added. Thanks in advance.</p>
<p>Any sample YAML file, or a link to one, would be great.</p>
| saranya | <p><code>CATALINA_OPTS</code> is an environment variable. Here is how to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">define it</a> and set it to a single value in Deployment configuration yaml:</p>
<pre><code>...
spec:
...
template:
metadata:
labels:
...
spec:
containers:
- image: ...
...
env:
- name: CATALINA_OPTS
value: -Djava.awt.headless=true
...
</code></pre>
| gears |
<p>I've created a cluster using terraform with:</p>
<pre><code>provider "google" {
credentials = "${file("gcp.json")}"
project = "${var.gcp_project}"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_container_cluster" "primary" {
name = "${var.k8s_cluster_name}"
location = "us-central1-a"
project = "${var.gcp_project}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
project = "${var.gcp_project}"
name = "my-node-pool"
location = "us-central1-a"
cluster = "${google_container_cluster.primary.name}"
# node_count = 3
autoscaling {
min_node_count = 3
max_node_count = 5
}
node_config {
# preemptible = true
machine_type = "g1-small"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only"
]
}
}
</code></pre>
<p>Surprisingly this node pool seems to be 'stuck' at 0 instances? Why? How can I diagnose this?</p>
<p><a href="https://i.stack.imgur.com/YZa7D.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YZa7D.jpg" alt="enter image description here"></a></p>
| Chris Stryczynski | <p>You should add "initial_node_count" (for example <code>initial_node_count = 3</code>) to the "google_container_node_pool" resource.
The <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_node_pool#node_count" rel="nofollow noreferrer">official documentation</a> says you should not use "node_count" together with "autoscaling".</p>
| D3pRe5s |
<p>I have 2 applications running on my cluster : G and C. G is only one pod and C is on 2 pods.</p>
<p>G is exposed to external connection and C is not. G first receive requests that he then process and sends to C.</p>
<p>So I was wondering how can I load balance the requests that G sends to C between the 2 pods of C.</p>
<p>I am currently using Kubernetes native service for C but I'm not sure if it is load balancing between my 2 pods. Everything I'm reading seems to expose the service externally and I don't want that</p>
<p>Thank you</p>
| Marc-Antoine Caron | <p>Create a Kubernetes Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP</a> for application C's Deployment. Such Service gets an internal IP which isn't exposed outside of the cluster. The Service does a simple round-robin routing of the traffic among the pods it targets (from the Deployment). </p>
<p>Use this to reference application C from G via the Service:</p>
<pre><code><k8s-service-name>.<namespace>.svc.cluster.local
</code></pre>
<p>The above assumes that there's <a href="https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a> running on the cluster (there usually is).</p>
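<p>For completeness, a minimal sketch of such a ClusterIP Service for C (the name, selector labels and ports are placeholders and must match C's Deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: c-service
spec:
  type: ClusterIP
  selector:
    app: c            # must match the labels on C's pods
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # port C's container listens on
</code></pre>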
| gears |
<p>I am unable to properly mount volumes using HostPath within Kubernetes running in Docker and WSL 2. This seems to be a WSL 2 issue when mounting volumes in Kubernetes running in Docker. Anyone know how to fix this?</p>
<p>Here are the steps:</p>
<ol>
<li>Deploy debug build to Kubernetes for my app.</li>
<li>Attach Visual Studio Code using the Kubernetes extension.</li>
<li>Navigate to the project folder for my application that was attached using the volume mount <= Problem Right Here</li>
</ol>
<p>When you go and look at the volume mount nothing is there.</p>
<pre><code>C:\Windows\System32>wsl -l -v
NAME STATE VERSION
Ubuntu Running 2
docker-desktop-data Running 2
docker-desktop Running 2
Docker Desktop v2.3.0.3
Kubernetes v1.16.5
Visual Studio Code v1.46.1
</code></pre>
<pre><code>====================================================================
Dockerfile
====================================================================
#
# Base image for deploying and running based on Ubuntu
#
# Support ASP.NET and does not include .NET SDK or NodeJs
#
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#
# Base image for building .NET based on Ubuntu
#
# 1. Uses .NET SDK image as the starting point
# 2. Restore NuGet packages
# 3. Build the ASP.NET Core application
#
# Destination is /app/build which is copied to /app later on
#
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS build
WORKDIR /src
COPY ["myapp.csproj", "./"]
RUN dotnet restore "./myapp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "myapp.csproj" -c Release -o /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS debug
RUN curl --silent --location https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install --yes nodejs
ENTRYPOINT [ "sleep", "infinity" ]
#
# Base image for building React based on Node/Ubuntu
#
# Destination is /app/ClientApp/build which is copied to /clientapp later
#
# NOTE: npm run build puts the output in the build directory
#
FROM node:12.18-buster-slim AS clientbuild
WORKDIR /src
COPY ./ClientApp /app/ClientApp
WORKDIR "/app/ClientApp"
RUN npm install
RUN npm run build
#
# Copy clientbuild:/app/ClientApp to /app/ClientApp
#
# Copy build:/app to /app
#
FROM base as final
WORKDIR /app/ClientApp
COPY --from=clientbuild /app/ClientApp .
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
====================================================================
Kubernetes Manifest
====================================================================
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 1
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: localhost:6000/myapp
ports:
- containerPort: 5001
securityContext:
privileged: true
volumeMounts:
- mountPath: /local
name: local
resources: {}
volumes:
- name: local
hostPath:
path: /C/dev/myapp
type: DirectoryOrCreate
hostname: myapp
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 5001
targetPort: 5001
selector:
app: myapp
</code></pre>
| Richard Crane | <p>According to the following thread, hostPath volumes are not officially supported for wsl2, yet. They do suggest a workaround, though I had trouble getting it to work. I have found that prepending <code>/run/desktop/mnt/host/c</code> seems to work for me.</p>
<pre><code># C:\someDir\volumeDir
hostPath:
path: /run/desktop/mnt/host/c/someDir/volumeDir
type: DirectoryOrCreate
</code></pre>
<p>Thread Source: <a href="https://github.com/docker/for-win/issues/5325" rel="noreferrer">https://github.com/docker/for-win/issues/5325</a><br />
Suggested workaround from thread: <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567594291" rel="noreferrer">https://github.com/docker/for-win/issues/5325#issuecomment-567594291</a></p>
| Ryan Darnell |
<p>I am deploying php and redis to a local minikube cluster but getting below error related to name resolution.</p>
<pre class="lang-sh prettyprint-override"><code>Warning: Redis::connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Warning: Redis::connect(): connect() failed: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Fatal error: Uncaught RedisException: Redis server went away in /app/redis.php:5 Stack trace: #0 /app/redis.php(5): Redis->ping() #1 {main} thrown in /app/redis.php on line 5
</code></pre>
<p>I am using below configurations files:</p>
<p>apache-php.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webserver
labels:
app: apache
spec:
replicas: 1
selector:
matchLabels:
app: apache
template:
metadata:
labels:
app: apache
spec:
containers:
- name: php-apache
image: webdevops/php-apache
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
volumeMounts:
- name: app-code
mountPath: /app
volumes:
- name: app-code
hostPath:
path: /minikubeMnt/src
---
apiVersion: v1
kind: Service
metadata:
name: web-service
labels:
app: apache
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
selector:
app: apache
</code></pre>
<p>redis.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:5.0.4
imagePullPolicy: IfNotPresent
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
type: NodePort
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
</code></pre>
<p>And I am using the below PHP code to access Redis, I have mounted below code into the apache-php deployment.</p>
<pre class="lang-php prettyprint-override"><code><?php
ini_set('display_errors', 1);
$redis = new Redis();
$redis->connect("redis-service", 6379);
echo "Server is running: ".$redis->ping();
</code></pre>
<p>Cluster dashboard view for the services is given below:</p>
<p><a href="https://i.stack.imgur.com/nBT5a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nBT5a.png" alt="img1" /></a>
<a href="https://i.stack.imgur.com/dGdbT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dGdbT.png" alt="img2" /></a></p>
<p>Thanks in advance.</p>
<p>When I run env command getting below values related to redis and when I use the IP:10.104.115.148 to access redis then it is working fine.</p>
<pre><code>REDIS_SERVICE_PORT=tcp://10.104.115.148:6379
REDIS_SERVICE_PORT_6379_TCP=tcp://10.104.115.148:6379
REDIS_SERVICE_SERVICE_PORT=6379
REDIS_SERVICE_PORT_6379_TCP_ADDR=10.104.115.148
REDIS_SERVICE_PORT_6379_TCP_PROTO=tcp
</code></pre>
| Saifullah khan | <p>Consider using K8S <code>liveness</code> and <code>readiness</code> probes here, to automatically recover from errors. You can find more related information <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">here</a>.</p>
<p>You can also use an <code>initContainer</code> that checks for the availability of the redis-server, for example with a shell <code>while</code> loop and <code>break</code>, and only then lets php-apache start. For more information, check <strong>Scenario 2</strong> <a href="https://www.magalix.com/blog/kubernetes-patterns-the-init-container-pattern" rel="nofollow noreferrer">here</a>.</p>
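<p>A sketch of that init container, reusing the redis image from the question so <code>redis-cli</code> is available (the service name matches the <code>redis-service</code> defined above; treat the loop itself as an illustration, not the only way to do this):</p>
<pre><code>spec:
  initContainers:
  - name: wait-for-redis
    image: redis:5.0.4
    # keep retrying until redis-service answers PING, then let php-apache start
    command: ['sh', '-c', 'until redis-cli -h redis-service -p 6379 ping; do echo "waiting for redis"; sleep 2; done']
</code></pre>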
<hr />
<h3>Redis Service as Cluster IP</h3>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
type: clusterIP
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
</code></pre>
| initanmol |
<p>I was trying to get into kubernetes-dashboard Pod, but I keep getting this error:</p>
<pre><code>C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
</code></pre>
<p>The Pod is running normally and I can access the Kubernetes UI via the browser. But I was getting some issues getting it running before, and I wanted to get inside the pod to run some commands, but I always get the same error mentioned above.</p>
<p>When I try the same command with a pod running nginx for example, it works:</p>
<pre><code>C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
</code></pre>
<p>Any explanation, please?</p>
| Aicha AIT OUMGHAR | <p>Prefix the command to run with <code>/bin</code> so your updated command will look like:</p>
<pre><code>kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
</code></pre>
<p>The reason you're getting that error is that Git for Windows is built on MSYS, which slightly rewrites command arguments. Generally, spelling the command out as <code>/bin/sh</code> or <code>/bin/bash</code> works universally.</p>
| JasonY |
<p>I am trying to join an Ubuntu host to a RHEL Kubernetes master.
I installed Kubernetes version 1.24.3 and I am using the CRI-O runtime.
If I join a RHEL VM to the RHEL Kubernetes master, there is no issue.
But when I join the Ubuntu host to the RHEL Kubernetes master, the kube-proxy pod in kube-system is stuck in the ContainerCreating state.</p>
<p>Describe the node and getting the following error:</p>
<blockquote>
<p>Failed to create pod sandbox: rpc error: code = Unknown desc = error
creating pod sandbox with name
"k8s_kube-proxy-s56kp_kube-system_(ID)": initializing source
docker://registry.k8s.io/pause:3.6: pinging container registry
registry.k8s.io: Get "https://registry.k8s.io/v2/": dial tcp (ip):443:
i/o timeout</p>
</blockquote>
<p>How to resolve this issue?</p>
| Prems | <p>Found the solution :</p>
<p>Proxy has to be set in <code>/etc/sysconfig/crio</code>:</p>
<pre><code>HTTP_PROXY=<proxy>
HTTPS_PROXY=<proxy>
NO_PROXY=<IPs>
systemctl daemon-reload
systemctl restart crio
</code></pre>
| Prems |
<p>We are encountering some Internal Server errors (500) on our web services API, seemingly when new pods are introduced or deleted as part of our autoscaler configuration.</p>
<p>Naturally, this sort of thing is very hard to troubleshoot as you are not always able to control the moments pods start and stop.</p>
<p>Is there a mechanism one could use to instruct the cluster to scale up and down upon?</p>
<p>Our <code>hpa.yaml</code> for interest.</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
name: xyz
spec:
minReplicas: 1
maxReplicas: 12
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: xyz
behavior:
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 50
periodSeconds: 60
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 85
</code></pre>
| André Haupt | <p>If you're interested in causing your HPA to forcibly create or destroy Pods for debugging purposes, you can use custom metrics in your Horizontal Pod Autoscaler .yaml definition.</p>
<p>If the problem is that newly created Pods cause errors, you can implement readiness probes in the Pod definition that perform an httpGet check. That way, you can avoid redirecting traffic on faulty Pods until the probe check returns a status = 200.</p>
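<p>A minimal readiness probe sketch (the path and port are placeholders for your application's health endpoint):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz     # assumption: your app's health-check endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
</code></pre>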
| shaki |
<p>I am using <code>Elastic Search(v7.6.1)</code> on a <code>Kubernetes(v1.19)</code> cluster.</p>
<p>The docs suggests to disable swapping:</p>
<p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html</a></p>
<p>My yaml:</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-cluster-1
spec:
version: 7.6.1
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
nodeSets:
- name: default
count: 3
config:
node.master: true
node.data: true
node.ingest: true
podTemplate:
metadata:
labels:
# additional labels for pods
type: elastic-master-node
spec:
nodeSelector:
node-pool: <NODE_POOL>
initContainers:
# Increase linux map count to allow elastic to store large memory maps
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
# specify resource limits and requests
resources:
limits:
memory: 11.2Gi
requests:
cpu: 3200m
env:
- name: ES_JAVA_OPTS
value: "-Xms6g -Xmx6g"
# Request persistent data storage for pods
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: ssd
- name: data
count: 2
config:
node.master: false
node.data: true
node.ingest: true
podTemplate:
metadata:
labels:
# additional labels for pods
type: elastic-data-node
spec:
nodeSelector:
node-pool: <NODE_POOL>
initContainers:
# Increase linux map count to allow elastic to store large memory maps
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
# specify resource limits and requests
resources:
limits:
memory: 11.2Gi
requests:
cpu: 3200m
env:
- name: ES_JAVA_OPTS
value: "-Xms6g -Xmx6g"
# Request persistent data storage for pods
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: ssd
# Google cloud storage credentials
secureSettings:
- secretName: "gcs-credentials"
http:
service:
spec:
# expose this cluster Service with a LoadBalancer
type: LoadBalancer
tls:
certificate:
secretName: elasticsearch-certificate
</code></pre>
<p>It's not clear to me how to change this yaml in order to disable swapping correctly. Changing each manually is not an option because in every restart the configuration will be lost.</p>
<p>How can I do this?</p>
| Montoya | <p>First of all, a k8s cluster will have swap disabled by default; this is actually a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin" rel="nofollow noreferrer">mandatory requirement</a>. In most cases, especially on a cloud-managed cluster that follows this requirement, you do not need to worry about swapping at all. Even in 1.22, enabling swap is only an alpha feature.</p>
<p>If for whatever reason you need to deal with this, you can consider setting <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#bootstrap-memory_lock" rel="nofollow noreferrer">bootstrap.memory_lock</a> to true.</p>
<pre><code>...
containers:
- name: elasticsearch
env:
- name: bootstrap.memory_lock
value: "true"
...
</code></pre>
| gohm'c |
<p>I run a local kubernetes cluster (Minikube) and I try to connect pgAdmin to postgresql, bot run in Kubernetes.
What would be the connection string? Shall I access by service ip address or by service name?</p>
<pre><code>kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbpostgresql NodePort 10.103.252.31 <none> 5432:30201/TCP 19m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d21h
pgadmin-service NodePort 10.109.58.168 <none> 80:30200/TCP 40h
kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
pgadmin-ingress <none> * 192.168.49.2 80 40h
kubectl get pod:
NAME READY STATUS RESTARTS AGE
pgadmin-5569ddf4dd-49r8f 1/1 Running 1 40h
postgres-78f4b5db97-2ngck 1/1 Running 0 23m
</code></pre>
<p>I have tried with 10.103.252.31:30201 but without success.</p>
| Cosmin D | <p>Remember that minikube is running inside its own container; the NodePort ClusterIPs you're getting back are only reachable inside of minikube. So to get minikube's resolution of the port and IP, run: <code>minikube service <your-service-name> --url</code>
This will return something like <a href="http://127.0.0.1:50946" rel="nofollow noreferrer">http://127.0.0.1:50946</a> which you can use to create an external DB connection. <a href="https://i.stack.imgur.com/wo44r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wo44r.png" alt="enter image description here" /></a></p>
<p>Another option would be to use kubectl to forward a local port to the service running on localhost ex. <code>kubectl port-forward service/django-service 8080:80</code></p>
| matt_linker |
<p>I am following the <a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors" rel="nofollow noreferrer">documentation</a> to create services operator. I am not sure why I cannot access the Prometheus services.</p>
<p>My apps.yml:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: sms-config-service
labels:
app: sms-config-service
spec:
type: NodePort
selector:
app: sms-config-service
ports:
- port: 8080
targetPort: 8080
name: http
</code></pre>
<p>My ServiceMonitor yml:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: servicemonitor-sms-services
name: servicemonitor-sms-config-services
namespace: metrics
spec:
selector:
matchLabels:
app: sms-config-service
endpoints:
- port: http
</code></pre>
<p>Prometheus yml:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
spec:
serviceAccountName: prometheus
serviceMonitorSelector:
matchLabels:
app: servicemonitor-sms-services
resources:
requests:
memory: 800Mi
enableAdminAPI: true
</code></pre>
<p>Prometheus config yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: prometheus
spec:
type: NodePort
ports:
- name: web
nodePort: 30900
port: 9090
protocol: TCP
targetPort: web
selector:
prometheus: prometheus
</code></pre>
<p>When I access the url below, the browser shows "unable to connect". I am not sure what I did wrong. Should I set up a Deployment for Prometheus?</p>
<hr />
<pre><code>$ minikube service prometheus --url
http://192.168.64.3:30900
</code></pre>
<p>Update:
I have the prometheus pod running on NodePort 32676.
<a href="https://i.stack.imgur.com/pHGlx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pHGlx.png" alt="enter image description here" /></a>
Should I change the Prometheus config yml to fix the issue?</p>
| kevin | <p>I found that the issue was that I didn't have the <code>serviceAccountName</code> referenced in the Prometheus spec (the <code>prometheus</code> ServiceAccount) created.</p>
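<p>For reference, a minimal sketch of that ServiceAccount (the Prometheus Operator getting-started guide additionally binds it to a ClusterRole/ClusterRoleBinding so Prometheus can discover targets):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
</code></pre>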
| kevin |
<p>First As describe here <a href="https://stackoverflow.com/questions/60079822/kubernetes-pods-and-cpu-limits">Kubernetes: Pods and cpu limits</a> when we do not specify the limit of CPU a pod can use all the CPUs of the node. Is this also apply for memory? Means does pod starts using the all memory when it required when we do not specify the limit?</p>
<p>Second Let's consider we have a single worker node with 2GB Memory, we have 8 pods deployed on this, is it correct to assign 256Mi request and limit to each of these 8 pods? Is it recommended to apply same limit on all pods?</p>
<p>Third what would happen if I want to create a new pod with same request and limit 256Mi? Will it always stay in pending state?</p>
| Vinay Sorout | <p><code>Means does pod starts using the all memory when it required when we do not specify the limit</code></p>
<p>The short answer is yes. You probably want to look into QoS classes as well: pods without requests/limits are assigned the BestEffort class, which is the first to be evicted when the system is under stress.</p>
<p><code>is it correct to assign 256Mi request and limit to each of these 8 pods? Is it recommended to apply same limit on all pods?</code></p>
<p>If your pod needs that amount of memory to function then it is not a question of correct/incorrect. You should plan your cluster capacity instead.</p>
<p><code>Will it always stay in pending state?</code></p>
<p>In reality you won't be able to use ALL of the memory, because many other programs (e.g. the kubelet) run on the same node. When you specify resources and no node can meet the requirement, your pod will stay in the Pending state. If you don't specify resources, the scheduler will place your pod on one of the available nodes, but the pod may get evicted when a higher-priority or better-QoS request comes in, or simply get killed because the node cannot satisfy its resource needs.</p>
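<p>For reference, a sketch of how requests/limits are declared on a container; setting the requests equal to the limits for both CPU and memory gives the pod the Guaranteed QoS class:</p>
<pre><code>resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "250m"
    memory: "256Mi"
</code></pre>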
| gohm'c |
<p>I tried to deploy the Kafka-UI in my local Kubernetes cluster, but ingress-nginx gives 502 error (Bad Gateway). I used the following configurations:</p>
<p><strong>Deployment:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-ui-deployment
labels:
app: kafka-ui
spec:
replicas: 1
selector:
matchLabels:
app: kafka-ui
template:
metadata:
labels:
app: kafka-ui
spec:
containers:
- name: kafka-ui
image: provectuslabs/kafka-ui:latest
env:
- name: KAFKA_CLUSTERS_0_NAME
value: "K8 Kafka Cluster"
- name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
value: kafka-svc:9093
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "1000m"
ports:
- containerPort: 8088
protocol: TCP</code></pre>
</div>
</div>
</p>
<p><strong>Service:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: kafka-ui-service
spec:
selector:
app: kafka-ui
ports:
- protocol: TCP
port: 80
targetPort: 8088</code></pre>
</div>
</div>
</p>
<p><strong>Ingress:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> ingressClassName: public
rules:
- host: "localhost"
http:
paths:
- path: /kafka-ui
pathType: Prefix
backend:
service:
name: kafka-ui-service
port:
number: 80</code></pre>
</div>
</div>
</p>
<p>Port-forward the targetport got following error:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>Forwarding from 127.0.0.1:8088 -> 8088
Forwarding from [::1]:8088 -> 8088
channel 9: open failed: connect failed: Connection refused
Handling connection for 8088
Handling connection for 8088
E0623 09:18:20.768161 33100 portforward.go:406] an error occurred forwarding 8088 -> 8088: error forwarding port 8088 to pod 75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0, uid : failed to execute portforward in network namespace "/var/run/netns/cni-a5ed0994-0456-6b6c-5a79-90e582ef09b3": failed to connect to localhost:8088 inside namespace "75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0", IPv4: dial tcp4 127.0.0.1:8088: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found
E0623 09:18:20.768994 33100 portforward.go:234] lost connection to pod</code></pre>
</div>
</div>
</p>
<p>Any suggestions will be appreciated.
Thanks for your help!</p>
| Haoyuan | <p>The main error was the port. The right port is 8080. This yaml works fine for me.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-ui-deployment
labels:
app: kafka-ui
spec:
replicas: 1
selector:
matchLabels:
app: kafka-ui
template:
metadata:
labels:
app: kafka-ui
spec:
containers:
- name: kafka-ui
image: provectuslabs/kafka-ui:latest
env:
- name: KAFKA_CLUSTERS_0_NAME
value: "K8 Kafka Cluster"
- name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
value: kafka-kafka-bootstrap.kafka:9092 # <- service-name.namespace:9092
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "1000m"
ports:
- containerPort: 8080 # <- Rectify the port
protocol: TCP</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: kafka-ui-service
namespace: kafka
spec:
selector:
app: kafka-ui
ports:
- protocol: TCP
port: 8080
targetPort: 8080 # <- Rectify the port.</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> ingressClassName: public
rules:
- host: "localhost"
http:
paths:
- path: /kafka-ui
pathType: Prefix
backend:
service:
name: kafka-ui-service
port:
number: 8080</code></pre>
</div>
</div>
</p>
| Rahul Sahoo |
<p>I have deployed airflow in kubernetes as is descrived in this link: <a href="https://github.com/apache/airflow/tree/master/chart" rel="nofollow noreferrer">https://github.com/apache/airflow/tree/master/chart</a></p>
<p>To access the airflow UI I can do:</p>
<pre><code> kubectl port-forward svc/airflow2-webserver 8080:8080 --namespace default
</code></pre>
<p>But I would want to expose it in a url. I found this guide: <a href="https://godatadriven.com/blog/deploying-apache-airflow-on-azure-kubernetes-service/" rel="nofollow noreferrer">https://godatadriven.com/blog/deploying-apache-airflow-on-azure-kubernetes-service/</a></p>
<p>In the bottom part: FQDN with Ingress controller, he installs a nginx-ingress-controller.</p>
<p>I am pretty new to everything related to this matter, so if I understand correctly: I have to take port 8080, where airflow exposes its UI, and link it in some way with the nginx-ingress-controller, which has an external IP, so that localhost:8080 is exposed on that external IP and can then be accessed from outside of kubernetes.</p>
<p>Is that correct?</p>
| J.C Guzman | <p>Basically you create a Service object of type LoadBalancer, which gets assigned a public IP on the load balancer. That Service then forwards requests to the set of pods matching its label selectors. You can run an nginx ingress controller as those pods, and it will proxy all requests inside your cluster according to your ingress rules. The point of using an nginx ingress controller is that you only need one load balancer for many applications, instead of exposing each service publicly on its own. I hope this clarifies things.</p>
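<p>For illustration, a minimal sketch of the Ingress piece, assuming the nginx ingress controller is already installed and using the <code>airflow2-webserver</code> service on port 8080 from your question (the host name is just a placeholder you would point at the controller's external IP):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airflow-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: airflow.example.com   # placeholder DNS name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: airflow2-webserver
            port:
              number: 8080
</code></pre>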
| Saurabh Nigam |
<p>It might take a while to explain what I'm trying to do but bear with me please.</p>
<p>I have the following infrastructure specified:
<a href="https://i.stack.imgur.com/mHJcE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mHJcE.png" alt="enter image description here" /></a></p>
<p>I have a job called <code>questo-server-deployment</code> (I know, confusing but this was the only way to access the deployment without using ingress on minikube)</p>
<p>This is how the parts should talk to one another:
<a href="https://i.stack.imgur.com/pRZA8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pRZA8.png" alt="enter image description here" /></a></p>
<p>And <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54" rel="nofollow noreferrer">here</a> you can find the entire Kubernetes/Terraform config file for the above setup</p>
<p>I have 2 endpoints exposed from the <code>node.js</code> app (<code>questo-server-deployment</code>)
I'm making the requests using <code>10.97.189.215</code> which is the <code>questo-server-service</code> external IP address (as you can see in the first picture)</p>
<p>So I have 2 endpoints:</p>
<ul>
<li>health - which simply returns <code>200 OK</code> from the <code>node.js</code> app - and this part is fine confirming the node app is working as expected.</li>
<li>dynamodb - which should be able to send a request to the <code>questo-dynamodb-deployment</code> (pod) and get a response back, but it can't.</li>
</ul>
<p>When I print env vars I'm getting the following:</p>
<pre><code>➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
</code></pre>
<p>so it looks like the configuration is aware of the dynamodb address and port:</p>
<pre><code>QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
</code></pre>
<p>You'll also notice in the above env variables that I specified:</p>
<pre><code>DB_DOCKER_URL=questo-dynamodb-service
</code></pre>
<p>Which is supposed to be the <code>questo-dynamodb-service</code> url:port which I'm assigning to the config <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54R163" rel="nofollow noreferrer">here</a> (in the configmap) which is then used <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54R67" rel="nofollow noreferrer">here</a> in the <code>questo-server-deployment</code> (job)</p>
<p>Also, when I log:</p>
<pre><code>kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
</code></pre>
<p>I'm getting the following results:</p>
<p><a href="https://i.stack.imgur.com/4Oh6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Oh6s.png" alt="enter image description here" /></a></p>
<p>Which indicates that the app (node.js) tried to connect to the db (dynamodb) but on the wrong port <code>443</code> instead of <code>8000</code>?</p>
<p>The <code>DB_DOCKER_URL</code> should contain the full address (with port) to the <code>questo-dynamodb-service</code></p>
<p>What am I doing wrong here?</p>
<p>Edit ----</p>
<p>I've explicitly assigned the port <code>8000</code> to the <code>DB_DOCKER_URL</code> as suggested in the answer but now I'm getting the following error:
<a href="https://i.stack.imgur.com/3t7tD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3t7tD.png" alt="enter image description here" /></a></p>
<p>Seems to me there is some kind of default behaviour in Kubernetes and it tries to communicate between pods using <code>https</code> ?</p>
<p>Any ideas what needs to be done here?</p>
| matewilk | <p>How about specifying the port in the ConfigMap:</p>
<pre><code>...
data = {
    DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
...
</code></pre>
<p>Otherwise it may default to 443.</p>
| gohm'c |
<p>My goal is to setup an <strong>ingress nginx</strong> within my kubernetes cluster. The deployment seems to work as I guess, the logs are looking good.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create--1-n5h28 0/1 Completed 0 4d8h
pod/ingress-nginx-admission-patch--1-czsfn 0/1 Completed 0 4d8h
pod/ingress-nginx-controller-7f7f8685b8-xvldg 1/1 Running 0 10m
pod/web-app-59555dbf95-slqc4 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.97.224.250 <none> 80:32666/TCP,443:31657/TCP 4d8h
service/ingress-nginx-controller-admission ClusterIP 10.100.7.97 <none> 443/TCP 4d8h
service/web-app-internal ClusterIP 10.103.22.145 <none> 80/TCP 20m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4d8h
deployment.apps/web-app 1/1 1 1 20m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-55b65fcbff 0 0 0 22h
replicaset.apps/ingress-nginx-controller-5f7d486f4d 0 0 0 43m
replicaset.apps/ingress-nginx-controller-76bdf9b5f6 0 0 0 3h47m
replicaset.apps/ingress-nginx-controller-7d7489d947 0 0 0 44m
replicaset.apps/ingress-nginx-controller-7f7f8685b8 1 1 1 10m
replicaset.apps/ingress-nginx-controller-7fdc4896dd 0 0 0 22h
replicaset.apps/ingress-nginx-controller-86668dc4fc 0 0 0 22h
replicaset.apps/ingress-nginx-controller-8cf5559f8 0 0 0 4d8h
replicaset.apps/ingress-nginx-controller-f58499759 0 0 0 62m
replicaset.apps/web-app-59555dbf95 1 1 1 20m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 2s 4d8h
job.batch/ingress-nginx-admission-patch 1/1 7s 4d8h
</code></pre>
<p>I've already experienced some issues, stated in <a href="https://stackoverflow.com/questions/69167854/kubernetes-ingress-nginx-controller-internal-error-occurred-failed-calling-web">this question</a>.
The deployment I use is the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
namespace: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- name: web-app
image: registry/web-app:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: web-app-internal
namespace: ingress-nginx
spec:
selector:
app: web-app
ports:
- port: 80
targetPort: 80
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/enable-access-log: "true"
name: web-app-ingress
namespace: ingress-nginx
labels:
name: web-app-ingress
spec:
rules:
- host: web.app
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: web-app-internal
port:
number: 80
</code></pre>
<p>First of all, let me explain, that I try to make the ingress accessible via <code>http</code> internally for the moment. When this is working, the next step will be to establish a <code>ssl</code> certified connection.</p>
<p>Last but not least, a few more relevant data:</p>
<ul>
<li>the host defined within the ingress rule resolves to the ip of the externally hosted load-balancer outside the cluster within my own network</li>
<li><code>curl -v http://web.app</code> returns the following output:</li>
</ul>
<pre><code>* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to web.app (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: web.app
> User-Agent: curl/7.64.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
<
Client sent an HTTP request to an HTTPS server.
* Closing connection 0
</code></pre>
<p>I'm a newbie to all things k8s related, any guess what I'm missing?</p>
<p>Many thanks in advance!</p>
| andreas.teich | <p>No, I solved the problem myself. It was an incorrect nginx load-balancer setup: it did pass the <strong>443</strong> and <strong>80</strong> traffic, but not to the exposed port for <strong>http</strong> that the <strong>ingress-nginx-controller</strong> service allocated on my worker nodes. After fixing this, everything works fine.</p>
| andreas.teich |
<p>The docker-splunk image has an added layer of complexity because it has the ansible configurator doing the initial setup. Ansible even restarts the splunk program as part of the setup.</p>
<p>I'm having trouble thinking of an appropriate kubernetes readiness probe. TCP passes as soon as it gets a valid return. But the ansible playbooks need at least another 10 minutes before they're finished.</p>
<p>I currently use an initial delay, but I want something smarter. I'm thinking a command type probe that will look for when the ansible playbooks are complete. But I don't know where to look.</p>
<p>I guess this means I have to learn ansible now.</p>
| qwerty10110 | <p>You can use a startup probe for this; it can be given a long wait time and is made specifically for slow-starting containers. The startup probe can check the status of the ansible setup to tell whether startup has completed or not.</p>
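<p>A minimal sketch of what that could look like on the splunk container; the marker file path is only an assumption, point the probe at whatever the ansible playbooks leave behind (a file, or an HTTP/CLI status check) once they finish:</p>
<pre><code>    startupProbe:
      exec:
        command: ["/bin/sh", "-c", "test -f /opt/ansible/.setup_complete"]  # hypothetical completion marker
      periodSeconds: 30
      failureThreshold: 40   # 40 x 30s = up to 20 minutes allowed for startup
</code></pre>
<p>Readiness and liveness probes are only evaluated after the startup probe has succeeded, so you can drop the long initial delay.</p>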
| Saurabh Nigam |
<p>I am trying to mount my Pod logs directory from <code>/var/log/pods</code> to a local node volume <code>/var/data10</code>.</p>
<p>Deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-counter
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: my-nginx
template:
metadata:
labels:
app: my-nginx
spec:
nodeSelector:
kubernetes.io/hostname: kworker3
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
ports:
- containerPort: 80
volumeMounts:
- name: dirvol
mountPath: "/var/log/containers"
readOnly: true
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_ID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
volumes:
- name: dirvol
persistentVolumeClaim:
claimName: nginx-pvc
</code></pre>
<p>PV+PVC file:</p>
<pre><code>---
kind: PersistentVolume
apiVersion: v1
metadata:
name: nginx-pv
namespace: default
spec:
storageClassName: nginx-sc
capacity:
storage: 50Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/var/data10"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-pvc
namespace: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nginx-sc
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nginx-sc
namespace: default
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
</code></pre>
<p>Terminal Window:</p>
<pre><code>us@kworker3:~$ cd /var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ cd count/
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/count$ ls
0.log
us@kworker3:/var/log/pods/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/count$ cd
us@kworker3:~$
us@kworker3:~$
us@kworker3:~$
us@kworker3:~$ cd /var/data10
us@kworker3:/var/data10$ cd default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b/
us@kworker3:/var/data10/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ ls
us@kworker3:/var/data10/default_nginx-counter-6bdd59f45f-psd9x_f1999c22-6702-41b6-afb7-26db4239977b$ ls
</code></pre>
<p>I am trying to take the log file <code>0.log</code> and place it in the persistent volume <code>/var/data10</code> But as you can see it is empty.</p>
<p>I know I could have used a logging agent like fluentd to grab my container logs but I am trying to use this way to get my logs.</p>
<p><strong>Please note</strong> that I am trying to apply this scenario to a real web application. Kubernetes pods normally write their logs to the /var/log/containers directory on the node, and my goal is to mount the container's log file to the host disk (/var/data10) so that when the pod is deleted I still have the logs inside my volume.</p>
| amin | <p>Symbolic links do not work with hostPath. Use <code>tee</code> to write a copy inside the pod, e.g. <code>echo ... | tee /pathInContainer/app.log</code>, where that path is in turn mounted on the <code>/var/data10</code> hostPath volume. If <code>tee</code> is not ideal, your best bet is running a log agent as a sidecar.</p>
<p>Note that your PV <code>hostPath.path: "/var/data10"</code> will not contain any data, because your stdout is not saved there. Mounting this hostPath in the container as "/var/log/containers" therefore serves no purpose.</p>
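<p>A minimal sketch of the idea applied to your deployment, with the container writing to stdout and to a file on the volume (the mount path inside the container is arbitrary; the hostPath is used directly here only to keep the sketch short, your PV/PVC pointing at /var/data10 works the same way):</p>
<pre><code>      containers:
      - name: count
        image: busybox
        args: [/bin/sh, -c,
          'i=0; while true; do echo "$i: $(date)" | tee -a /var/applog/app.log; i=$((i+1)); sleep 1; done']
        volumeMounts:
        - name: dirvol
          mountPath: /var/applog
      volumes:
      - name: dirvol
        hostPath:
          path: /var/data10
</code></pre>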
| gohm'c |
<p>I have the following architecture for the PostgreSQL cluster:</p>
<p><a href="https://i.stack.imgur.com/TlP1l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TlP1l.png" alt="enter image description here" /></a></p>
<p>Here, there are multiple clients that interact with the PostgreSQL pods via pgpool. The issue is that when a pod (either a <code>pgpool</code> or a <code>PostgreSQL</code> pod) terminates (for various reasons), the client is impacted and has to recreate the connection. For example, in this diagram, if the <code>postgresql-1</code> pod terminates then <code>client-0</code> will have to recreate the connection with the cluster.</p>
<p>Is there a way in kubernetes to handle it so that connections to <code>pgpool k8s service</code> are load balanced/ recreated to other pods so that the clients do not see the switch over <strong>and are not impacted</strong>?</p>
<p>Please note these are TCP connections and not HTTP connections (which are stateless). Also, all the PostgreSQL pods are <a href="https://www.postgresql.org/docs/10/runtime-config-wal.html#SYNCHRONOUS-COMMIT-MATRIX" rel="nofollow noreferrer">always in sync with remote_apply</a>.</p>
| Vishrant | <blockquote>
<p>Is there a way in kubernetes to handle it so that connections to pgpool k8s service are load balanced/ recreated to other pods...</p>
</blockquote>
<p>Connections to the pgpool k8s service are load balanced by kube-proxy. The endpoints (pgpool pods) that back the service are updated automatically whenever there's a change (e.g. scaling) in the pod population.</p>
<blockquote>
<p>...so that the clients do not see the switch over and are not impacted?</p>
</blockquote>
<p>Should the pgpool pod that the client connected to get terminated, the client's TCP state becomes invalid (e.g. a now-void remote IP). There's no need to keep such a connection alive; just re-connect to the pgpool <strong>service</strong>, where kube-proxy will route you to the next available pgpool pod. The actual connection to the backend database is managed by pgpool, including database failover. With pgpool as the proxy, you do not need to worry about database switching.</p>
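<p>For reference, a minimal sketch of what the pgpool Service could look like (assuming the pgpool pods carry an <code>app: pgpool</code> label and listen on 5432); clients only ever use the service name, and kube-proxy picks a healthy endpoint behind it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  selector:
    app: pgpool
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
</code></pre>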
| gohm'c |
<p>Per <a href="https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html</a>, I ran the command, eksctl utils install-vpc-controllers --cluster <cluster_name> --approve</p>
<p>My EKS version is v1.16.3. I tries to deploy Windows docker images to a windows node. I got error below.</p>
<p>Warning FailedCreatePodSandBox 31s kubelet, ip-west-2.compute.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab8001f7b01f5c154867b7e" network for pod "mrestapi-67fb477548-v4njs": networkPlugin cni failed to set up pod "mrestapi-67fb477548-v4njs_ui" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address</p>
<pre><code>$ kubectl logs vpc-resource-controller-645d6696bc-s5rhk -n kube-system
I1010 03:40:29.041761 1 leaderelection.go:185] attempting to acquire leader lease kube-system/vpc-resource-controller...
I1010 03:40:46.453557 1 leaderelection.go:194] successfully acquired lease kube-system/vpc-resource-controller
W1010 23:57:53.972158 1 reflector.go:341] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 1480444 (1515040)
</code></pre>
<p>It complains too old resource version. How do I upgrade the version?</p>
| Melissa Jenner | <ol>
<li>I removed the windows nodes and re-created them with a different instance type. It did not work.</li>
<li>I removed the windows node group and re-created it. It did not work.</li>
<li>Finally, I removed the entire EKS cluster and re-created it. The command kubectl describe node <windows_node> now gives me the output below.</li>
</ol>
<pre><code> vpc.amazonaws.com/CIDRBlock 0 0
vpc.amazonaws.com/ENI 0 0
vpc.amazonaws.com/PrivateIPv4Address 1 1
</code></pre>
<p>Deployed windows-server-iis.yaml and it works as expected. The root cause of the problem remains a mystery.</p>
| Melissa Jenner |
<p>I am working on an STS based application and I want to temporarily disable HPA without deleting it. How can I achieve it?</p>
<p>I can't delete the HPA because if I delete it and then re-deploy the service, the HPA is created from the STS, and the names in the STS and in <code>hpa.yaml</code> are different. So I don't want to delete it, and as per the requirement I can't create it from <code>hpa.yaml</code> using kubectl.</p>
<p>So, is there any way I can disable and again enable it either using kubectl or make any changes to <code>hpa.yaml</code> file?</p>
| beingumang | <p>I deleted the HPA and created it again using the same name:</p>
<pre><code>kubectl autoscale statefulset <sts_name> --name=<hpa_name_deleted> --min=<min_replica> --max=<max_replica> --cpu-percent=<cpu_percent> --memory=<target_memory> -n <namespace>
</code></pre>
<p>I did not find any way to disable it temporarily.</p>
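<p>For reference, a sketch of roughly the same HPA in declarative form (<code>autoscaling/v2</code>) with placeholder names and values, which can be applied and deleted with kubectl as needed:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <hpa_name>
  namespace: <namespace>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: <sts_name>
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70
</code></pre>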
| beingumang |
<p>We have 3 namespaces on a kubernetes cluster</p>
<pre><code>dev-test / build / prod
</code></pre>
<p>I want to limit the resource usage for <code>dev-test</code> & <code>build</code> only.
Can I set the resource quotas only for these namespaces without specifying (default-) resource requests & limits on the pod/container level?</p>
<p>If the resource usage on the limited namespaces is low, prod can use the rest completely, and it can grow only to a limited value, so prod resource usage is protected. </p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: dev-test
spec:
hard:
cpu: "2"
memory: 8Gi
</code></pre>
<p>Is this enough?</p>
| Alexej Medvedev | <p>Yes, you can set resource limits per namespace using a ResourceQuota object:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: mem-cpu-demo
spec:
hard:
requests.cpu: "1"
requests.memory: 1Gi
limits.cpu: "2"
limits.memory: 2Gi
</code></pre>
<p>From <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/" rel="nofollow noreferrer">kubernetes documentation</a>.</p>
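<p>To scope the quota to one of your namespaces you can either apply it with <code>-n dev-test</code> / <code>-n build</code> or put the namespace into the manifest itself, for example:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-test-quota
  namespace: dev-test
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
</code></pre>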
| hilsenrat |
<p>I want to see all commands issued to our Kubernetes cluster via kubectl. Is there any native way of doing it?</p>
| humble_wolf | <p>Enable the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">k8s audit log</a> and use an audit <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-policy" rel="nofollow noreferrer">policy</a> for fine-grained logging.</p>
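<p>For illustration, a minimal sketch of an audit policy that records request and response bodies for all write operations (which covers everything issued via kubectl) and everything else at metadata level; it is wired into the kube-apiserver with the <code>--audit-policy-file</code> and <code>--audit-log-path</code> flags:</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
- level: Metadata
</code></pre>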
| gohm'c |
<p>Hi, I'm trying to get the client's real IP to restrict some access at the pod level, but unfortunately I'm always getting 10.244.1.1 at every pod. I have tried <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/</a> but unfortunately no luck. Please help.</p>
<p>I'm using kubernetes 1.8.2 version on cent os 7 bare metal servers for kubernetes cluster. I do not have any choice bu to use bare metal . As an ingress controller I'm using kong. My kong ingress controller is always getting 10.244.1.1. In kong there is a feature called IP restriction. I'm trying to use it.</p>
<p>So others suggested using another kong hop as a load balancer, which is not a good solution for my situation.</p>
| Assaduzzaman Assad | <p>You need to specify the traffic policy on the kong-proxy service</p>
<pre><code>spec:
...
selector:
app: ingress-kong
type: LoadBalancer
externalTrafficPolicy: Local
</code></pre>
<p>And you may need to add one or both of the following environment variables to the kong container</p>
<pre><code>- name: KONG_TRUSTED_IPS
value: 0.0.0.0/0,::/0
- name: KONG_REAL_IP_RECURSIVE
value: "on"
</code></pre>
<p>I got this working with a k3s instance.</p>
<p>There is detailed information about the issues with the source IP in "bare metal considerations for k8s" in the k8s documentation and in "preserving client IP addresses" in the Kong docs. They contain too many details to summarize briefly here.</p>
| d.sndrs |
<p>I am new to k8s. I would like to test the deletionGracePeriodSeconds feature.
Documentation says :</p>
<blockquote>
<p>deletionGracePeriodSeconds (integer) : Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set.</p>
</blockquote>
<p>I guess that if the pod terminates nicely, the feature does not apply.</p>
<p>So, How can I make the pod "reject" deletion in order to see how this feature works? Does it work for deletion by command :</p>
<pre><code>kubectl delete pod mypod
</code></pre>
<p>or only with scheduled deletion with "deletionTimestamp"</p>
<p>Here is how I tried to do it (via the trap command) but it does not seem to work :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
deletionGracePeriodSeconds: 10
deletionTimestamp: 2020-08-05T14:40:00Z
name: mypod
spec:
containers:
- name: mycontainer
image: nginx
command: ["/bin/sh", "-c"]
args:
- trap "" 2 3 15 20 | sleep 600
</code></pre>
<p>Thanks in advance
Abdelghani</p>
| Abdelghani | <p>I assume you are asking about <code>terminationGracePeriodSeconds</code>, please correct me if I'm mistaken and I'll edit accrodingly.</p>
<p>The <code>terminationGracePeriodSeconds</code> is the time between Kubernetes sends the <code>SIGTERM</code> signal to the pod main process (<code>PID 1</code>) until it sends the <code>SIGKILL</code> signal, which abruptly kills the process (and subsequently, the pod itself).</p>
<p><code>SIGTERM</code> signal is meant to be interpreted by the process, and it should start a "Graceful Shutdown" - stop receiving new tasks and finish the work on existing ones. If the process in your pod needs more than 30 seconds for this procedure (let's say you're running a worker which process each task in 2 minutes), you'd want to extend the <code>terminationGracePeriodSeconds</code> accordingly.</p>
<p>So you can't make the pod "reject" the deletion, but your process can either ignore the <code>SIGTERM</code> signal, and then after the period configured in <code>terminationGracePeriodSeconds</code> it'll be killed abruptly, or it may be that your process needs more time to gracefully shut down (and in that case, you'd want to increase <code>terminationGracePeriodSeconds</code> accordingly).</p>
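<p>For completeness, a small sketch of where the setting lives (the value is just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  terminationGracePeriodSeconds: 120   # time between SIGTERM and SIGKILL, default is 30
  containers:
  - name: mycontainer
    image: nginx
</code></pre>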
| hilsenrat |
<p>I'm trying to create a pod with a 10GB persistent disk volume, but it seems I cannot create a disk under 200GB.</p>
<p>I can see the PV listed, but the PVC is Pending. I can also see that the PV is Available, so I don't understand what's happening.</p>
<p><strong>Please find info below:</strong></p>
<pre><code>Invalid value for field 'resource.sizeGb': '10'. Disk size cannot be smaller than 200 GB., invalid
kubectl get pvc -n vault-ppd
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-vault-ppd-claim Pending balanced-persistent-disk 2m45s
kubectl get pv -n vault-ppd
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-vault-ppd 10Gi RWO Retain Available vault/pv-vault-ppd-claim
</code></pre>
<p>My manifest <strong>vault-ppd.yaml</strong></p>
<pre><code> kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
type: pd-standard
replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: topology.gke.io/zone
values:
- europe-west1-b
- europe-west1-c
- europe-west1-d
---
apiVersion: v1
kind: Namespace
metadata:
name: vault-ppd
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vault-ppd
namespace: vault-ppd
labels:
app.kubernetes.io/name: vault-ppd
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-vault-ppd
spec:
storageClassName: "balanced-persistent-disk"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: vault
name: pv-vault-ppd-claim
gcePersistentDisk:
pdName: gke-vault-volume
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-vault-ppd-claim
namespace: vault-ppd
spec:
storageClassName: "balanced-persistent-disk"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>Thanks for the help, guys!</p>
| Rabah DevOps | <p><code>pdName: gke-vault-volume</code> should be a regional replicated disk with a size of at least 200GB, so you can just update your PVC/PV with the correct size. If it is not a regional disk, you can set <code>storageClassName: ""</code> in both the PVC and PV to use the standard default StorageClass, which provides a standard disk.</p>
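<p>Concretely, if you keep the regional setup, that just means bumping the size in both objects, e.g.:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  capacity:
    storage: 200Gi
  ...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-vault-ppd-claim
  namespace: vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  resources:
    requests:
      storage: 200Gi
  ...
</code></pre>
<p>Unrelated to the size, note that your PV's <code>claimRef</code> points at namespace <code>vault</code> while the PVC lives in <code>vault-ppd</code>; that mismatch alone will keep the claim Pending.</p>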
| gohm'c |
<p>I would appreciate it if someone could help me figure out why my release pipeline to the AKS cluster is failing. I'm using an Azure DevOps release pipeline.</p>
<pre><code>2020-09-02T10:56:33.1944594Z ##[section]Starting: kubectl create app and service or apply
2020-09-02T10:56:33.2380678Z ==============================================================================
2020-09-02T10:56:33.2381778Z Task : Kubectl
2020-09-02T10:56:33.2384698Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running kubectl commands
2020-09-02T10:56:33.2385174Z Version : 1.173.0
2020-09-02T10:56:33.2386143Z Author : Microsoft Corporation
2020-09-02T10:56:33.2387216Z Help : https://aka.ms/azpipes-kubectl-tsg
2020-09-02T10:56:33.2387656Z ==============================================================================
2020-09-02T10:56:34.9075560Z Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe
2020-09-02T10:56:37.8112374Z Caching tool: kubectl 1.7.0 x64
2020-09-02T10:56:37.9135761Z Prepending PATH environment variable with directory: C:\hostedtoolcache\windows\kubectl\1.7.0\x64
2020-09-02T10:56:40.4896282Z Could not fetch Kubectl version. Please make sure that the Kubernetes server is up and running.
2020-09-02T10:56:40.5241929Z [command]C:\hostedtoolcache\windows\kubectl\1.7.0\x64\kubectl.exe apply -f D:\a\r1\a\_praneshshzl_AKSCICDDEMO\aksdeploy.yml -o json
2020-09-02T10:56:49.8395443Z {
2020-09-02T10:56:49.8397440Z "apiVersion": "apps/v1",
2020-09-02T10:56:49.8398489Z "kind": "Deployment",
2020-09-02T10:56:49.8399348Z "metadata": {
2020-09-02T10:56:49.8400030Z "annotations": {
2020-09-02T10:56:49.8401188Z "deployment.kubernetes.io/revision": "2",
2020-09-02T10:56:49.8404455Z "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"aspx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":2,\"selector\":{\"matchLabels\":{\"app\":\"asp-net\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"asp-net\"}},\"spec\":{\"containers\":[{\"image\":\"***/drop:40\",\"name\":\"asp\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"
2020-09-02T10:56:49.8406762Z },
2020-09-02T10:56:49.8407640Z "creationTimestamp": "2020-09-02T07:19:54Z",
2020-09-02T10:56:49.8408392Z "generation": 2,
2020-09-02T10:56:49.8409051Z "name": "aspx-deployment",
2020-09-02T10:56:49.8409752Z "namespace": "default",
2020-09-02T10:56:49.8410403Z "resourceVersion": "19157",
2020-09-02T10:56:49.8411229Z "selfLink": "/apis/apps/v1/namespaces/default/deployments/aspx-deployment",
2020-09-02T10:56:49.8412093Z "uid": "57c18e4d-0583-43bc-b0c4-58d6bb2c9069"
2020-09-02T10:56:49.8413323Z },
2020-09-02T10:56:49.8413859Z "spec": {
2020-09-02T10:56:49.8414348Z "progressDeadlineSeconds": 600,
2020-09-02T10:56:49.8414799Z "replicas": 2,
2020-09-02T10:56:49.8415097Z "revisionHistoryLimit": 10,
2020-09-02T10:56:49.8415368Z "selector": {
2020-09-02T10:56:49.8415640Z "matchLabels": {
2020-09-02T10:56:49.8416109Z "app": "asp-net"
2020-09-02T10:56:49.8416360Z }
2020-09-02T10:56:49.8416549Z },
2020-09-02T10:56:49.8416778Z "strategy": {
2020-09-02T10:56:49.8417054Z "rollingUpdate": {
2020-09-02T10:56:49.8417341Z "maxSurge": "25%",
2020-09-02T10:56:49.8417658Z "maxUnavailable": "25%"
2020-09-02T10:56:49.8417927Z },
2020-09-02T10:56:49.8418189Z "type": "RollingUpdate"
2020-09-02T10:56:49.8418422Z },
2020-09-02T10:56:49.8418649Z "template": {
2020-09-02T10:56:49.8419141Z "metadata": {
2020-09-02T10:56:49.8419452Z "creationTimestamp": null,
2020-09-02T10:56:49.8419748Z "labels": {
2020-09-02T10:56:49.8420044Z "app": "asp-net"
2020-09-02T10:56:49.8420308Z }
2020-09-02T10:56:49.8420525Z },
2020-09-02T10:56:49.8420745Z "spec": {
2020-09-02T10:56:49.8422686Z "containers": [
2020-09-02T10:56:49.8422953Z {
2020-09-02T10:56:49.8423661Z "image": "***/drop:40",
2020-09-02T10:56:49.8424053Z "imagePullPolicy": "IfNotPresent",
2020-09-02T10:56:49.8424416Z "name": "asp",
2020-09-02T10:56:49.8424731Z "ports": [
2020-09-02T10:56:49.8425016Z {
2020-09-02T10:56:49.8425324Z "containerPort": 80,
2020-09-02T10:56:49.8425698Z "protocol": "TCP"
2020-09-02T10:56:49.8426026Z }
2020-09-02T10:56:49.8426285Z ],
2020-09-02T10:56:49.8426562Z "resources": {},
2020-09-02T10:56:49.8426957Z "terminationMessagePath": "/dev/termination-log",
2020-09-02T10:56:49.8427380Z "terminationMessagePolicy": "File"
2020-09-02T10:56:49.8427696Z }
2020-09-02T10:56:49.8427918Z ],
2020-09-02T10:56:49.8428205Z "dnsPolicy": "ClusterFirst",
2020-09-02T10:56:49.8428542Z "restartPolicy": "Always",
2020-09-02T10:56:49.8428896Z "schedulerName": "default-scheduler",
2020-09-02T10:56:49.8429231Z "securityContext": {},
2020-09-02T10:56:49.8429585Z "terminationGracePeriodSeconds": 30
2020-09-02T10:56:49.8429873Z }
2020-09-02T10:56:49.8430792Z }
2020-09-02T10:56:49.8431012Z },
2020-09-02T10:56:49.8431285Z "status": {
2020-09-02T10:56:49.8431559Z "availableReplicas": 2,
2020-09-02T10:56:49.8431825Z "conditions": [
2020-09-02T10:56:49.8432059Z {
2020-09-02T10:56:49.8432383Z "lastTransitionTime": "2020-09-02T07:19:54Z",
2020-09-02T10:56:49.8432798Z "lastUpdateTime": "2020-09-02T07:37:15Z",
2020-09-02T10:56:49.8433255Z "message": "ReplicaSet \"aspx-deployment-84597d88f5\" has successfully progressed.",
2020-09-02T10:56:49.8434044Z "reason": "NewReplicaSetAvailable",
2020-09-02T10:56:49.8434679Z "status": "True",
2020-09-02T10:56:49.8435244Z "type": "Progressing"
2020-09-02T10:56:49.8435605Z },
2020-09-02T10:56:49.8435825Z {
2020-09-02T10:56:49.8436146Z "lastTransitionTime": "2020-09-02T10:53:01Z",
2020-09-02T10:56:49.8436562Z "lastUpdateTime": "2020-09-02T10:53:01Z",
2020-09-02T10:56:49.8436957Z "message": "Deployment has minimum availability.",
2020-09-02T10:56:49.8437364Z "reason": "MinimumReplicasAvailable",
2020-09-02T10:56:49.8437756Z "status": "True",
2020-09-02T10:56:49.8438064Z "type": "Available"
2020-09-02T10:56:49.8438304Z }
2020-09-02T10:56:49.8438510Z ],
2020-09-02T10:56:49.8438765Z "observedGeneration": 2,
2020-09-02T10:56:49.8439067Z "readyReplicas": 2,
2020-09-02T10:56:49.8439325Z "replicas": 2,
2020-09-02T10:56:49.8439604Z "updatedReplicas": 2
2020-09-02T10:56:49.8439834Z }
2020-09-02T10:56:49.8440256Z }
2020-09-02T10:56:50.0434980Z error: error validating "D:\\a\\r1\\a\\_praneshshzl_AKSCICDDEMO\\aksdeploy.yml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
2020-09-02T10:56:50.0592077Z commandOutput{
2020-09-02T10:56:50.0592764Z "apiVersion": "apps/v1",
2020-09-02T10:56:50.0593476Z "kind": "Deployment",
2020-09-02T10:56:50.0593853Z "metadata": {
2020-09-02T10:56:50.0594453Z "annotations": {
2020-09-02T10:56:50.0617018Z "deployment.kubernetes.io/revision": "2",
2020-09-02T10:56:50.0619826Z "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"aspx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":2,\"selector\":{\"matchLabels\":{\"app\":\"asp-net\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"asp-net\"}},\"spec\":{\"containers\":[{\"image\":\"***/drop:40\",\"name\":\"asp\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"
2020-09-02T10:56:50.0622683Z },
2020-09-02T10:56:50.0623075Z "creationTimestamp": "2020-09-02T07:19:54Z",
2020-09-02T10:56:50.0623735Z "generation": 2,
2020-09-02T10:56:50.0624085Z "name": "aspx-deployment",
2020-09-02T10:56:50.0624371Z "namespace": "default",
2020-09-02T10:56:50.0624697Z "resourceVersion": "19157",
2020-09-02T10:56:50.0628209Z "selfLink": "/apis/apps/v1/namespaces/default/deployments/aspx-deployment",
2020-09-02T10:56:50.0628683Z "uid": "57c18e4d-0583-43bc-b0c4-58d6bb2c9069"
2020-09-02T10:56:50.0628948Z },
2020-09-02T10:56:50.0629156Z "spec": {
2020-09-02T10:56:50.0629632Z "progressDeadlineSeconds": 600,
2020-09-02T10:56:50.0629958Z "replicas": 2,
2020-09-02T10:56:50.0630236Z "revisionHistoryLimit": 10,
2020-09-02T10:56:50.0633694Z "selector": {
2020-09-02T10:56:50.0634104Z "matchLabels": {
2020-09-02T10:56:50.0634389Z "app": "asp-net"
2020-09-02T10:56:50.0634654Z }
2020-09-02T10:56:50.0634863Z },
2020-09-02T10:56:50.0635116Z "strategy": {
2020-09-02T10:56:50.0635383Z "rollingUpdate": {
2020-09-02T10:56:50.0635685Z "maxSurge": "25%",
2020-09-02T10:56:50.0636003Z "maxUnavailable": "25%"
2020-09-02T10:56:50.0636322Z },
2020-09-02T10:56:50.0636572Z "type": "RollingUpdate"
2020-09-02T10:56:50.0636821Z },
2020-09-02T10:56:50.0637050Z "template": {
2020-09-02T10:56:50.0637316Z "metadata": {
2020-09-02T10:56:50.0637611Z "creationTimestamp": null,
2020-09-02T10:56:50.0637922Z "labels": {
2020-09-02T10:56:50.0638230Z "app": "asp-net"
2020-09-02T10:56:50.0638483Z }
2020-09-02T10:56:50.0638819Z },
2020-09-02T10:56:50.0639105Z "spec": {
2020-09-02T10:56:50.0639388Z "containers": [
2020-09-02T10:56:50.0639651Z {
2020-09-02T10:56:50.0640045Z "image": "***/drop:40",
2020-09-02T10:56:50.0640435Z "imagePullPolicy": "IfNotPresent",
2020-09-02T10:56:50.0640782Z "name": "asp",
2020-09-02T10:56:50.0641098Z "ports": [
2020-09-02T10:56:50.0641395Z {
2020-09-02T10:56:50.0642030Z "containerPort": 80,
2020-09-02T10:56:50.0642527Z "protocol": "TCP"
2020-09-02T10:56:50.0642853Z }
2020-09-02T10:56:50.0643111Z ],
2020-09-02T10:56:50.0643405Z "resources": {},
2020-09-02T10:56:50.0643802Z "terminationMessagePath": "/dev/termination-log",
2020-09-02T10:56:50.0644232Z "terminationMessagePolicy": "File"
2020-09-02T10:56:50.0644550Z }
2020-09-02T10:56:50.0644768Z ],
2020-09-02T10:56:50.0645058Z "dnsPolicy": "ClusterFirst",
2020-09-02T10:56:50.0645398Z "restartPolicy": "Always",
2020-09-02T10:56:50.0645752Z "schedulerName": "default-scheduler",
2020-09-02T10:56:50.0646087Z "securityContext": {},
2020-09-02T10:56:50.0646605Z "terminationGracePeriodSeconds": 30
2020-09-02T10:56:50.0646902Z }
2020-09-02T10:56:50.0647131Z }
2020-09-02T10:56:50.0647308Z },
2020-09-02T10:56:50.0647518Z "status": {
2020-09-02T10:56:50.0647785Z "availableReplicas": 2,
2020-09-02T10:56:50.0648069Z "conditions": [
2020-09-02T10:56:50.0648292Z {
2020-09-02T10:56:50.0648700Z "lastTransitionTime": "2020-09-02T07:19:54Z",
2020-09-02T10:56:50.0649116Z "lastUpdateTime": "2020-09-02T07:37:15Z",
2020-09-02T10:56:50.0649585Z "message": "ReplicaSet \"aspx-deployment-84597d88f5\" has successfully progressed.",
2020-09-02T10:56:50.0650020Z "reason": "NewReplicaSetAvailable",
2020-09-02T10:56:50.0650354Z "status": "True",
2020-09-02T10:56:50.0650662Z "type": "Progressing"
2020-09-02T10:56:50.0650920Z },
2020-09-02T10:56:50.0651117Z {
2020-09-02T10:56:50.0651508Z "lastTransitionTime": "2020-09-02T10:53:01Z",
2020-09-02T10:56:50.0652363Z "lastUpdateTime": "2020-09-02T10:53:01Z",
2020-09-02T10:56:50.0652807Z "message": "Deployment has minimum availability.",
2020-09-02T10:56:50.0653184Z "reason": "MinimumReplicasAvailable",
2020-09-02T10:56:50.0653520Z "status": "True",
2020-09-02T10:56:50.0653840Z "type": "Available"
2020-09-02T10:56:50.0654095Z }
2020-09-02T10:56:50.0654285Z ],
2020-09-02T10:56:50.0654538Z "observedGeneration": 2,
2020-09-02T10:56:50.0654831Z "readyReplicas": 2,
2020-09-02T10:56:50.0655099Z "replicas": 2,
2020-09-02T10:56:50.0655360Z "updatedReplicas": 2
2020-09-02T10:56:50.0655592Z }
2020-09-02T10:56:50.0655767Z }
2020-09-02T10:56:50.0655889Z
2020-09-02T10:56:52.6885299Z ##[error]The process 'C:\hostedtoolcache\windows\kubectl\1.7.0\x64\kubectl.exe' failed with exit code 1
2020-09-02T10:56:52.7480636Z ##[section]Finishing: kubectl create app and service or apply
</code></pre>
<p>My aksdeploy.yml file is at <a href="https://github.com/praneshshzl/AKSCICDDEMO/blob/master/aksdeploy.yml" rel="nofollow noreferrer">https://github.com/praneshshzl/AKSCICDDEMO/blob/master/aksdeploy.yml</a></p>
<p>My image to deploy is a docker hub image based on windows at <a href="https://hub.docker.com/repository/docker/praneshhzl/drop" rel="nofollow noreferrer">https://hub.docker.com/repository/docker/praneshhzl/drop</a></p>
| Pranesh Sathyanarayan | <p>Looks like this was something to do with the kubectl task in the release pipeline. You have to go to the right pane, scroll down, expand "Advanced" and, in the version spec, check the box <strong>"Check for latest version"</strong>. My pipeline build then completed successfully. There were no issues in my deployment yaml.</p>
<p>Enabling this option ensures the proper latest version of kubectl.exe is downloaded onto the VSTS agent:</p>
<pre><code>Downloading: https://storage.googleapis.com/kubernetes-release/release/stable.txt
2020-09-02T12:18:39.3539227Z Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/windows/amd64/kubectl.exe
2020-09-02T12:18:40.7266540Z Caching tool: kubectl 1.19.0 x64
2020-09-02T12:18:40.8028720Z Prepending PATH environment variable with directory: C:\hostedtoolcache\windows\kubectl\1.19.0\x64
2020-09-02T12:18:43.3630075Z ==============================================================================
2020-09-02T12:18:43.3632157Z Kubectl Client Version: v1.19.0
2020-09-02T12:18:43.3633193Z Kubectl Server Version: v1.16.13
2020-09-02T12:18:43.3633757Z ==============================================================================
</code></pre>
| Pranesh Sathyanarayan |
<p>I am learning kubernetes on minikube. I studied the kubernetes official documentation and followed their <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-interactive/" rel="noreferrer">interactive tutorial</a> in a sandboxed environment. Everything worked fine in the sandbox but I tried the same thing on my system it failed.</p>
<h3>My Setup :</h3>
<ul>
<li>I am using macOS Big Sur version 11.6.2(20G314) on Apple M1.</li>
<li>I have used docker instead of virtual machine environment for minikube.</li>
</ul>
<h3>Steps to reproduce :</h3>
<p>First I created a deployment, then I created a <code>NodePort</code> type service to expose it to external traffic.</p>
<p>The pod is running fine and no issues are seen in the service description.</p>
<p>To test if the app is exposed outside of the cluster I used <code>curl</code> to send a request to the node :</p>
<pre class="lang-sh prettyprint-override"><code>curl $(minikube ip):$NODE_PORT
</code></pre>
<p>But I get no response from the server :</p>
<blockquote>
<p>curl: (7) Failed to connect to 192.168.XX.X port 32048: Operation timed out.</p>
</blockquote>
<p>I have copied everything that was done in the tutorial. Same deployment name, same image, same service-name, literally EVERYTHING.</p>
<p>I tried <code>LoadBalancer</code> type, but found out that minikube doesn't support it. To access the <code>LoadBalancer</code> deployment, I used the command <code>minikube tunnel</code> but this did not help.</p>
<p>What could be the possible reasons? Is it my system?</p>
| Prateik Pokharel | <p>I also had this problem on my m1 mac. I was able to access the service by using this command :</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward svc/kubernetes-bootcamp 8080:8080
</code></pre>
<p>You can see <a href="https://levelup.gitconnected.com/minikube-tips-tricks-739f4b00ac17" rel="noreferrer">this article</a> and <a href="https://stackoverflow.com/questions/71667587/apple-m1-minikube-no-service-url">this answer</a> for more info and ways to go about it.</p>
| heisguyy |
<p>Preview:
We started working on helm 3 to deploy our applications on k8s and we have reached a good stage, deploying the charts successfully. However, we are very new to implementing tests under helm charts.
For example, I am deploying the official pdfreactor image and I can check the web application version details either by using a browser "http://172.27.1.119:31423/service/" or by "curl <a href="http://172.27.1.119:31423/service/%22" rel="nofollow noreferrer">http://172.27.1.119:31423/service/"</a>. Now I want to write a helm test to check the same. Below is pdfreactor-test.yaml (reference link: <a href="https://helm.sh/docs/topics/chart_tests/" rel="nofollow noreferrer">https://helm.sh/docs/topics/chart_tests/</a>)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-credentials-test"
annotations:
"helm.sh/hook": test
spec:
containers:
- name: {{ .Release.Name }}-credentials-test
image: {{ .Values.image.imageName }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command:
- /bin/bash
- curl http://172.27.1.119:31423/service/
</code></pre>
<p>When i ran</p>
<pre><code> helm install pdfreactor <chart name>
helm test pdfreactor
</code></pre>
<p>I got below response</p>
<pre><code>NAME: pdfreactor
LAST DEPLOYED: Thu Aug 13 09:02:55 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing pdfreactor.
</code></pre>
<p>Below are my challenges.</p>
<ol>
<li>What am I doing wrong?</li>
<li>How exactly does the helm test work? Does it create a new pod for the test, or does it test on an existing pod?</li>
<li>What is the purpose of giving image details in test.yaml?</li>
</ol>
<p>Note: I have even used the default template generated with helm create.</p>
| SHC | <p>Make sure your test configuration files reside under <code><chart-name>/templates/tests/</code> folder.</p>
<p>Regarding 2 and 3 - Yes, it creates a new pod, using the template you provided. The pod will run to completion, and if the exit code is 0, the test is considered successful.</p>
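<p>For illustration, a minimal sketch of such a test pod under <code>templates/tests/</code>; note the shell command needs to be passed via <code>-c</code>, and the service name/port here are placeholders for however your chart exposes pdfreactor:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-credentials-test"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: curlimages/curl
    command: ["/bin/sh", "-c", "curl -f http://{{ .Release.Name }}-service:8233/service/"]
</code></pre>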
| hilsenrat |
<p>I've recently enabled logging for my GKE instances on GCP. Now the following error occurs three times a second, and therefore a massive number of errors is generated. Unfortunately, all important errors get lost because of this massive number of errors in the logs. The following JSON is one of these errors:</p>
<pre><code>{
"insertId": "42",
"jsonPayload": {
"pid": "1",
"source": "stackdriver.go:60",
"message": "Error while sending request to Stackdriver googleapi: Error 400: One or more TimeSeries could not be written: Unknown metric:
kubernetes.io/internal/addons/workload_identity/go_gc_duration_seconds_count: timeSeries[31]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_gc_duration_seconds_sum: timeSeries[4]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_goroutines: timeSeries[0]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_info: timeSeries[47]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_alloc_bytes: timeSeries[55]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_alloc_bytes_total: timeSeries[40]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_buck_hash_sys_bytes: timeSeries[13]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_frees_total: timeSeries[2]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_gc_cpu_fraction: timeSeries[56]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_gc_sys_bytes: timeSeries[19]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_alloc_bytes: timeSeries[46]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_idle_bytes: timeSeries[32]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_inuse_bytes: timeSeries[42]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_objects: timeSeries[1]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_released_bytes: timeSeries[8]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_heap_sys_bytes: timeSeries[43]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_last_gc_time_seconds: timeSeries[33]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_lookups_total: timeSeries[34]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_mallocs_total: timeSeries[3]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_mcache_inuse_bytes: timeSeries[18]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_mcache_sys_bytes: timeSeries[11]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_mspan_inuse_bytes: timeSeries[38]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_mspan_sys_bytes: timeSeries[23]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_next_gc_bytes: timeSeries[10]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_other_sys_bytes: timeSeries[16]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_stack_inuse_bytes: timeSeries[17]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_stack_sys_bytes: timeSeries[12]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_memstats_sys_bytes: timeSeries[21]; Unknown metric: kubernetes.io/internal/addons/workload_identity/go_threads: timeSeries[41]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_cpu_seconds_total: timeSeries[20]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_max_fds: timeSeries[22]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_open_fds: timeSeries[9]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_resident_memory_bytes: timeSeries[39]; Unknown metric: 
kubernetes.io/internal/addons/workload_identity/process_start_time_seconds: timeSeries[45]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_virtual_memory_bytes: timeSeries[30]; Unknown metric: kubernetes.io/internal/addons/workload_identity/process_virtual_memory_max_bytes: timeSeries[44]; Unknown metric: kubernetes.io/internal/addons/workload_identity/promhttp_metric_handler_requests_in_flight: timeSeries[7]; Unknown metric: kubernetes.io/internal/addons/workload_identity/promhttp_metric_handler_requests_total: timeSeries[35-37]; Value type for metric kubernetes.io/internal/addons/workload_identity/metadata_server_build_info must be DOUBLE, but is INT64.: timeSeries[48], badRequest"
},
"resource": {
"type": "k8s_container",
"labels": {
"cluster_name": "cluster-a",
"location": "europe-west3",
"pod_name": "prometheus-to-sd-jcmwn",
"project_id": "my-nice-project-id",
"container_name": "prometheus-to-sd-new-model",
"namespace_name": "kube-system"
}
},
"timestamp": "2020-07-30T06:26:01.784963Z",
"severity": "ERROR",
"labels": {
"k8s-pod/pod-template-generation": "1",
"k8s-pod/controller-revision-hash": "7984bf4f95",
"k8s-pod/k8s-app": "prometheus-to-sd"
},
"logName": "projects/my-nice-project-id/logs/stderr",
"sourceLocation": {
"file": "stackdriver.go",
"line": "60"
},
"receiveTimestamp": "2020-07-30T06:26:03.411798926Z"
}
</code></pre>
<p>What is the cause of this behaviour and how can I fix it?</p>
| theexiile1305 | <p>It looks like a bug in GKE clusters with the <code>Workload Identity</code> feature enabled.<br />
The bug reproduced for me in <code>1.14.10-gke.42</code> with Workload Identity, but everything works as expected with a GKE cluster deployed with version <code>1.15.12-gke.2</code>.</p>
<p>There is an <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/308" rel="nofollow noreferrer">open issue</a> in GitHub. If you can't upgrade your cluster version, I suggest you to contact Google Cloud support and ask them for their recommended mitigation (Although they probably will instruct you to upgrade your cluster version as well).</p>
| hilsenrat |
<p>I'm using Google Cloud Build to CI/CD my application, which rely on multiple cronjobs. The first step of my build is like:</p>
<pre class="lang-yaml prettyprint-override"><code> # validate k8s manifests
- id: validate-k8s
name: quay.io/fairwinds/polaris:1.2.1
entrypoint: polaris
args:
- audit
- --audit-path
- ./devops/k8s/cronjobs/worker-foo.yaml
- --set-exit-code-on-danger
- --set-exit-code-below-score
- "87"
</code></pre>
<p>I'm using <a href="https://polaris.docs.fairwinds.com/checks/security/" rel="nofollow noreferrer">Polaris</a> to enforce best security practices. For each cronjob, I have a deployment manifest that is like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: worker-foo
namespace: foo
spec:
schedule: "30 1-5,20-23 * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
backoffLimit: 3
template:
spec:
hostIPC: false
hostPID: false
hostNetwork: false
volumes:
- name: foo-sa
secret:
secretName: foo-sa
- name: foo-secrets
secret:
secretName: foo-secrets
- name: tmp-pod
emptyDir: {}
restartPolicy: OnFailure
containers:
- name: worker-foo
image: gcr.io/bar/foo:latest
imagePullPolicy: "Always"
resources:
requests:
memory: "512M"
cpu: "50m"
limits:
memory: "6000M"
cpu: "500m"
volumeMounts:
- name: foo-sa
mountPath: /var/secrets/foo-sa
- mountPath: /tmp/pod
name: tmp-pod
command: ["/bin/bash", "-c"]
args:
- |
timeout --kill-after=10500 10500 python foo/foo/foo.py --prod;
</code></pre>
<p>I found <a href="https://blog.aquasec.com/kubernetess-policy" rel="nofollow noreferrer">here</a> that the hierarchy of HostIPC parameter in manifest file is “spec.jobTemplate.spec.template.spec.HostIPC”, but it does not seem to conform Polaris validation:</p>
<pre><code>Step #0 - "validate-k8s": "Results": [
Step #0 - "validate-k8s": {
Step #0 - "validate-k8s": "Name": "worker-foo",
Step #0 - "validate-k8s": "Namespace": "foo",
Step #0 - "validate-k8s": "Kind": "CronJob",
Step #0 - "validate-k8s": "Results": {},
Step #0 - "validate-k8s": "PodResult": {
Step #0 - "validate-k8s": "Name": "",
Step #0 - "validate-k8s": "Results": {
Step #0 - "validate-k8s": "hostIPCSet": {
Step #0 - "validate-k8s": "ID": "hostIPCSet",
Step #0 - "validate-k8s": "Message": "Host IPC is not configured",
Step #0 - "validate-k8s": "Success": true,
Step #0 - "validate-k8s": "Severity": "danger",
Step #0 - "validate-k8s": "Category": "Security"
Step #0 - "validate-k8s": },
Step #0 - "validate-k8s": "hostNetworkSet": {
Step #0 - "validate-k8s": "ID": "hostNetworkSet",
Step #0 - "validate-k8s": "Message": "Host network is not configured",
Step #0 - "validate-k8s": "Success": true,
Step #0 - "validate-k8s": "Severity": "warning",
Step #0 - "validate-k8s": "Category": "Networking"
Step #0 - "validate-k8s": },
Step #0 - "validate-k8s": "hostPIDSet": {
Step #0 - "validate-k8s": "ID": "hostPIDSet",
Step #0 - "validate-k8s": "Message": "Host PID is not configured",
Step #0 - "validate-k8s": "Success": true,
Step #0 - "validate-k8s": "Severity": "danger",
Step #0 - "validate-k8s": "Category": "Security"
Step #0 - "validate-k8s": }
Step #0 - "validate-k8s": },
</code></pre>
<p>What I'm missing here? How should I declare HostIPC and HostPID params in order to satisfy Polaris validation?</p>
<p>Possibly related issue: <a href="https://github.com/FairwindsOps/polaris/issues/328" rel="nofollow noreferrer">https://github.com/FairwindsOps/polaris/issues/328</a></p>
| Kfcaio | <p>Polaris may be asking you to explicitly set those attributes to false. Try this:</p>
<pre><code>...
jobTemplate:
spec:
backoffLimit: 3
template:
spec:
hostIPC: false
hostNetwork: false
hostPID: false
...
containers:
- worker-foo
...
...
</code></pre>
| gohm'c |
<p>We have 2 namespaces, say namespace1 and namespace2.</p>
<p>The following are the services in namespace1 and the services exposed.</p>
<pre><code>[root@console ~]# oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
config-server ClusterIP 172.30.8.152 <none> 8888/TCP 3h
eureka-server ClusterIP 172.30.120.74 <none> 8761/TCP 3h
expedia-rapidapi-service ClusterIP 172.30.236.3 <none> 8233/TCP 3h
travelcodes-service ClusterIP 172.30.14.36 <none> 8084/TCP 3h
tti-service ClusterIP 172.30.46.212 <none> 8245/TCP 2h
</code></pre>
<p>I can use nslookup in any pod to look up the cluster IP of the service "travelcodes-service":</p>
<pre><code>/ $ nslookup travelcodes-service.contents.svc.cluster.local
Name: travelcodes-service.contents.svc.cluster.local
Address 1: 172.30.14.36 travelcodes-service.contents.svc.cluster.local
</code></pre>
<p>However, I can only use curl to access travelcodes-service if the pod is in namespace1, not in namespace2.</p>
<pre><code>curl 172.30.14.36:8084/ping
</code></pre>
<p>Is there anything I need to expose in order to let a pod in namespace2 access "travelcodes-service" in namespace1?</p>
| Christopher Cheng | <p>You can access the service with</p>
<pre><code><service1>.<namespace1>
</code></pre>
<p>For example you can use this url:</p>
<pre><code>http://<service1>.<namespace1>.svc.cluster.local
</code></pre>
<p>More on that: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">DNS for Services and Pods</a></p>
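<p>For instance, going by the nslookup output in the question (where the namespace appears to be <code>contents</code>), a pod in namespace2 could reach the service like this:</p>
<pre><code>curl http://travelcodes-service.contents.svc.cluster.local:8084/ping
</code></pre>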
<p>To get a list of all your namespaces:</p>
<pre><code>oc get ns
</code></pre>
<p>And for a list of services in one namespace:</p>
<pre><code>oc get services -n <namespace-name>
</code></pre>
| jsanchez |
<p>I am running a GPU instance on GKE. When everything is deployed and I make a request to the service, the above-mentioned error occurs.
I followed all the steps mentioned in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#ubuntu" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#ubuntu</a>.
This is my Dockerfile:</p>
<pre><code>FROM nvidia/cuda:10.2-cudnn7-devel
# install nginx
# RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
# RUN ln -sf /dev/stdout /var/log/nginx/access.log \
# && ln -sf /dev/stderr /var/log/nginx/error.log
## Setup
RUN mkdir -p /opt/app
RUN apt-get update -y && \
apt-get install -y --no-install-recommends \
python3-dev \
python3-pip \
python3-wheel \
python3-setuptools && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
RUN pip3 install --no-cache-dir -U install setuptools pip
RUN pip3 install --no-cache-dir cupy_cuda102==8.0.0rc1 scipy optuna
COPY requirements.txt start.sh run.py uwsgi.ini utils.py /opt/app/
COPY shading_characteristics /opt/app/shading_characteristics
WORKDIR /opt/app
RUN pip install -r requirements.txt
RUN pip install --upgrade 'sentry-sdk[flask]'
RUN pip install uwsgi -I --no-cache-dir
EXPOSE 5000
## Start the server, giving permissions for script
# COPY nginx.conf /etc/nginx
RUN chmod +x ./start.sh
RUN chmod -R 777 /root
CMD ["./start.sh"]
</code></pre>
| shaharyar | <p><strong>Edit (May 2021)</strong></p>
<p>GKE now officially supports NVIDIA driver version <code>450.102.04</code>, which support <code>CUDA 10.2</code>.<br />
Please note that GKE 1.19.8-gke.1200 and higher is required.</p>
<hr />
<p>As you can see in Nvidia's <a href="https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver" rel="nofollow noreferrer">website</a>, <code>CUDA 10.2</code> requires Nvidia driver version >= 440.33.</p>
<p>Since the latest Nvidia driver available officially <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#ubuntu" rel="nofollow noreferrer">in GKE</a> is <code>418.74</code>, the newest <code>CUDA</code> version you can use is <code>10.1</code> at the moment.</p>
<p>If your application, or other dependencies such as PyTorch, can function properly with <code>CUDA 10.1</code>, the fastest solution will be to downgrade your base Docker image with <code>CUDA 10.1</code>.</p>
<p>There are <a href="https://github.com/GoogleCloudPlatform/container-engine-accelerators/issues/119" rel="nofollow noreferrer">unofficial</a> ways to install newer Nvidia Driver versions on GKE nodes running COS, but if it's not a must for you - I'd stick to the official and supported GKE method and use 10.1.</p>
| hilsenrat |
<p>I'm trying to use the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Kubernetes client-go</a> to access pod details in a cluster.</p>
<p>I want to use it to get the details of pods running in one particular namespace, similar to <code>kubectl get pods -n <my namespace></code>.</p>
<p>The details I want are the <code>name</code>, <code>status</code>, <code>ready</code>, <code>restarts</code> and <code>age</code> of the pod.</p>
<p>How can I get those data?</p>
| Navendu Pottekkat | <p>So, I wrote a function that takes in a Kubernetes client (refer the client-go for details on making one) and a namespace and returns all the pods available-</p>
<pre class="lang-golang prettyprint-override"><code>func GetPods(client *meshkitkube.Client, namespace string) (*v1core.PodList, error) {
// Create a pod interface for the given namespace
podInterface := client.KubeClient.CoreV1().Pods(namespace)
// List the pods in the given namespace
podList, err := podInterface.List(context.TODO(), v1.ListOptions{})
if err != nil {
return nil, err
}
return podList, nil
}
</code></pre>
<p>After getting all the pods, I used a loop to run through all the pods and containers within each pod and manually got all the data I required-</p>
<pre class="lang-golang prettyprint-override"><code>// List all the pods similar to kubectl get pods -n <my namespace>
for _, pod := range podList.Items {
// Calculate the age of the pod
podCreationTime := pod.GetCreationTimestamp()
age := time.Since(podCreationTime.Time).Round(time.Second)
// Get the status of each of the pods
podStatus := pod.Status
var containerRestarts int32
var containerReady int
var totalContainers int
// If a pod has multiple containers, get the status from all
for container := range pod.Spec.Containers {
containerRestarts += podStatus.ContainerStatuses[container].RestartCount
if podStatus.ContainerStatuses[container].Ready {
containerReady++
}
totalContainers++
}
// Get the values from the pod status
name := pod.GetName()
ready := fmt.Sprintf("%v/%v", containerReady, totalContainers)
status := fmt.Sprintf("%v", podStatus.Phase)
restarts := fmt.Sprintf("%v", containerRestarts)
ageS := age.String()
// Append this to data to be printed in a table
data = append(data, []string{name, ready, status, restarts, ageS})
}
</code></pre>
<p>This will result in the exact same data as you would get when running <code>kubectl get pods -n <my namespace></code>.</p>
| Navendu Pottekkat |
<p>I have tried the answers in <a href="https://stackoverflow.com/questions/56489147/how-to-restore-original-client-ip-from-cloudflare-with-nginx-ingress-controller">this question</a>. This is my current configuration:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-2.13.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.35.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
use-proxy-protocol: 'true'
enable-real-ip: "true"
proxy-real-ip-cidr: "173.245.48.0/20,173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"
# use-forwarded-headers: "true"
# compute-full-forwarded-for: "true"
# forwarded-for-header: "Cf-Connecting-Ip"
# forwarded-for-header: "X-Original-Forwarded-For"
server-snippet: |
real_ip_header CF-Connecting-IP;
</code></pre>
<p>And none of the configuration I have tried is actually giving the originating ip as the real ip.</p>
<p>Before I applied the configuration, I was getting:</p>
<pre><code>Host: example.com
X-Request-ID: deadcafe
X-Real-IP: 162.158.X.X (A Cloudflare IP)
X-Forwarded-For: 162.158.X.X (Same as above)
X-Forwarded-Proto: https
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Scheme: https
X-Original-Forwarded-For: <The Originating IP that I want>
Accept-Encoding: gzip
CF-IPCountry: IN
CF-RAY: cafedeed
CF-Visitor: {"scheme":"https"}
user-agent: Mozilla/5.0
accept-language: en-US,en;q=0.5
referer: https://pv-hr.jptec.in/
upgrade-insecure-requests: 1
cookie: __cfduid=012dadfad
CF-Request-ID: 01234faddad
CF-Connecting-IP: <The Originating IP that I want>
CDN-Loop: cloudflare
</code></pre>
<p>After applying the config map, the headers are:</p>
<pre><code>Host: example.com
X-Request-ID: 0123fda
X-Real-IP: 10.X.X.X (An IP that matches the private ip of the Digital Ocean droplets in the vpc, so guessing its the load balancer)
X-Forwarded-For: 10.X.X.X (Same as above)
X-Forwarded-Proto: http
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Scheme: http
X-Original-Forwarded-For: <Originating IP>
Accept-Encoding: gzip
CF-IPCountry: US
CF-RAY: 5005deeb
CF-Visitor: {"scheme":"https"}
accept: /
user-agent: Mozilla/5.0
CF-Request-ID: 1EE7af
CF-Connecting-IP: <Originating IP>
CDN-Loop: cloudflare
</code></pre>
<p>So the only change after the configuration is that the real IP now points to some internal resource in the DigitalOcean VPC. I haven't been able to track that down, but I am guessing it's the load balancer. I am confident that it is a DO resource because it matches the IP of the Kubernetes nodes. So I am not really sure why this is happening and what I should be doing to get the originating IP as the real IP.</p>
| Akritrime | <p>The problem you are facing is here:</p>
<p><code>proxy-real-ip-cidr: "173.245.48.0/20,173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"</code></p>
<p>However, the traffic being seen is coming from your DO LB instead <code>10.x.x.x</code>. This is causing it to be ignored for this rule.</p>
<p>I did the following to get it functional:</p>
<pre><code>apiVersion: v1
data:
enable-real-ip: "true"
server-snippet: |
real_ip_header CF-Connecting-IP;
kind: ConfigMap
metadata:
[...]
</code></pre>
<p>Security Notice: This will apply to all traffic even if it didn't originate from Cloudflare itself. As such, someone could spoof the headers on the request to impersonate another IP address.</p>
| Cameron Munroe |
<p>Let me put you in context. I have a pod with a configuration that looks close to this:</p>
<pre><code>spec:
nodeSets:
- name: default
count: 3
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: repd-ssd-xfs
</code></pre>
<p>I also have my <code>StorageClass</code></p>
<pre><code>apiVersion: ...
kind: StorageClass
metadata:
name: repd-ssd-xfs
parameters:
type: pd-ssd
fsType: xfs
replication-type: regional-pd
zones: us-central1-a, us-central1-b, us-central1-f
reclaimPolicy: Retain
volumeBindingMode: Immediate
</code></pre>
<p>I delete the namespace of the pod and then apply it again, and I notice that the pod binds to a new PVC, while the last PVC the pod was using is left in the <code>Released</code> state. My question is: is there any way to tell the pod to use my old PVC? The <code>StorageClass</code> policy is <code>Retain</code>, but does that mean I can still use a PVC with status <code>Released</code>?</p>
| YosefMac | <p>In addition to the answer provided by <em>@shashank tyagi</em>:</p>
<p>Have a look at the documentation <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes" rel="nofollow noreferrer">Persistent Volumes</a>; in the section <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">Retain</a> you can find:</p>
<blockquote>
<p><strong>When the PersistentVolumeClaim is deleted, the PersistentVolume still
exists and the volume is considered “released”. But it is not yet
available for another claim because the previous claimant’s data
remains on the volume.</strong> An administrator can manually reclaim the
volume with the following steps.</p>
<ul>
<li>Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or
Cinder volume) still exists after the PV is deleted.</li>
<li>Manually clean up the data on the associated storage asset accordingly.</li>
<li>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the<br>
storage asset definition.</li>
</ul>
</blockquote>
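<p>As a rough sketch of the "reuse the same storage asset" step above (the PV name is a placeholder), you can clear the <code>claimRef</code> of the released PersistentVolume so it becomes <code>Available</code> again and can be bound by a new claim:</p>
<pre><code># find the released PV that still holds your data
kubectl get pv

# drop the reference to the old (deleted) claim
kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'
</code></pre>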
<p>It could be helpful to check the documentation <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">Persistent volumes with Persistent Disks</a> and this example <a href="https://medium.com/faun/kubernetes-how-to-set-reclaimpolicy-for-persistentvolumeclaim-7eb7d002bb2e" rel="nofollow noreferrer">How to set ReclaimPolicy for PersistentVolumeClaim</a>.</p>
<p><strong>UPDATE</strong> Have a look at the article <a href="https://medium.com/@zhimin.wen/persistent-volume-claim-for-statefulset-8050e396cc51" rel="nofollow noreferrer">Persistent Volume Claim for StatefulSet</a>.</p>
| Serhii Rohoza |
<p>I am trying to deploy my application on GCP. I have a frontend in Vue.js and an API with Flask and a PostgreSQL database.</p>
<p>Everything is deployed in a Kubernetes cluster on GCP.
I can access my frontend without any problems, but I cannot access the API; I get a 502 Bad Gateway.
I think I made a mistake in my configuration.
Here is my configuration:</p>
<pre><code>**flask-service.yml**
apiVersion: v1
kind: Service
metadata:
name: flask
labels:
service: flask
spec:
type: NodePort
selector:
app: flask
ports:
- protocol: TCP
port: 5000
targetPort: 5000
</code></pre>
<p>vue-service.yml file</p>
<pre><code>**vue-service.yml**
apiVersion: v1
kind: Service
metadata:
name: vue
labels:
service: vue
spec:
type: NodePort
selector:
app: vue
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>ingress.yml file</p>
<pre><code>ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: vue
servicePort: 8080
- path: /api/*
backend:
serviceName: flask
servicePort: 5000
</code></pre>
<p>My Flask app is deployed with gunicorn:</p>
<pre><code>gunicorn -b 0.0.0.0:5000 manage:app
</code></pre>
<p><a href="https://i.stack.imgur.com/noGDG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/noGDG.png" alt="screeshot of my GCP cluster services" /></a></p>
<p>Do you know where I've made a mistake? I'm a beginner in Kubernetes.</p>
<p>Here is my Dockerfile</p>
<pre><code>FROM python:3.8.1-slim
# install netcat
RUN apt-get update && \
apt-get -y install netcat && \
apt-get clean
# set working directory
WORKDIR /usr/src/app
# add and install requirements
RUN pip install --upgrade pip
RUN pip install -U gunicorn
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# run server
CMD ["/usr/src/app/entrypoint.sh"]
</code></pre>
<p>And my entrypoint.sh</p>
<pre><code>echo "Waiting for postgres..."
while ! nc -z postgres 5432; do
sleep 0.1
done
echo "PostgreSQL started"
gunicorn -b 0.0.0.0:5000 manage:app
</code></pre>
<p>One more edit. In GCP, when I check the backend services, there are 3 backends, and one of them doesn't work.
But why do I have 3 backends? Shouldn't I have just two (flask and vue)?</p>
<p>When I check, I have 2 backend services with Flask, and one of them doesn't work.</p>
<p><a href="https://i.stack.imgur.com/CK81L.png" rel="nofollow noreferrer">The backend services (flask) with problems</a></p>
<p><a href="https://i.stack.imgur.com/hceMM.png" rel="nofollow noreferrer">the other backend services (flask)</a></p>
<p>My Flask image logs in GCP show an error. Do you know why?</p>
<p><a href="https://i.stack.imgur.com/65p0W.png" rel="nofollow noreferrer">GCP log of my flask image</a></p>
| Eric | <p>I found the solution. It was a problem with my Ingress: I had forgotten to add an Ingress Controller. (I didn't know anything about Ingress Controllers...)</p>
<p>Now I've added an NGINX Ingress Controller and everything works fine!</p>
| Eric |
<p>We are running multiple Kubernetes pods concurrently on a cluster and we have been facing all sorts of connectivity issues with Oracle, Informatica and other services.</p>
<p>Out of the multiple pods we ran, a few just sit on the cluster after completing their task without writing logs to the DB. When we went through the Splunk logs of the hanging pods (not really hanging, because we were able to exec into them and run other things) or of the pods having connectivity issues, we consistently saw this error followed by ORA-03113/03114 errors.</p>
<p>Can anyone help me understand this error?</p>
<pre><code>INFO process_step:>1<; message:>Execute: Run successful<
2021-09-30 18:50:37.898 [ERROR][23511] customresource.go 136: Error updating resource Key=IPAMBlock(10-0-5-64-26) Name="10-0-5-64-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"10-0-5-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"425239779", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"10.0.5.64/26", Affinity:(*string)(0xc000476000), StrictAffinity:false, Allocations:[]*int{(*int)(0xc00068d388), (*int)(0xc00068d450), (*int)(nil), (*int)(0xc00068d490), (*int)(0xc00068d398), (*int)(0xc00068d3f0), (*int)(0xc00068d498), (*int)(0xc00068d390), (*int)(0xc00068d420), (*int)(0xc00068d4a0), (*int)(0xc00068d3d0), (*int)(nil), (*int)(0xc00068d308), (*int)(0xc00068d3b0), (*int)(0xc00068d310), (*int)(0xc00068d320), (*int)(nil), (*int)(0xc00068d4b8), (*int)(nil), (*int)(nil), (*int)(0xc00068d460), (*int)(0xc00068d4a8), (*int)(0xc00068d458), (*int)(0xc00068d3c8), (*int)(0xc00068d440), (*int)(nil), (*int)(0xc00068d428), (*int)(0xc00068d3b8), (*int)(0xc00068d470), (*int)(0xc00068d408), (*int)(0xc00068d418), (*int)(0xc00068d448), (*int)(0xc00068d438), (*int)(0xc00068d4b0), (*int)(0xc00068d3a8), (*int)(0xc00068d318), (*int)(0xc00068d430), (*int)(0xc00068d3d8), (*int)(0xc00068d410), (*int)(0xc00068d478), (*int)(0xc00068d3e0), (*int)(0xc00068d3c0), (*int)(0xc00068d358), (*int)(0xc00068d330), (*int)(0xc00068d340), (*int)(0xc00068d3f8), (*int)(0xc00068d328), (*int)(0xc00068d400), (*int)(0xc00068d338), (*int)(0xc00068d480), (*int)(0xc00068d350), (*int)(0xc00068d488), (*int)(0xc00068d468), (*int)(0xc00068d348), (*int)(0xc00068d360), (*int)(nil), (*int)(0xc00068d368), (*int)(0xc00068d3e8), (*int)(0xc00068d370), (*int)(0xc00068d378), (*int)(0xc00068d380), (*int)(0xc00068d3a0), (*int)
(nil), (*int)(nil)}, Unallocated:[]int{2, 25, 11, 18, 19, 55, 62, 16, 63
****Bunch of node IP's**************** then
error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "10-0-5-64-26": the object has been modified; please apply your changes to the latest version and try again
DBD::Oracle::db do failed: ORA-03113: end-of-file on communication channel
</code></pre>
| ROXOR7 | <p>In short, the request to Calico to change the allocation for pod IP addresses has failed. You can learn more about <a href="https://docs.projectcalico.org/networking/ipam" rel="nofollow noreferrer">Calico IPAM</a> here.</p>
<p>The last line is synonym to connection timeout.</p>
| gohm'c |
<p>I'm planning to use the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">Jenkins Kubernetes plugin</a> in a setup where an existing Jenkins setup with master and slaves on VMs is split into the master remaining in the current setup and slaves being provisioned dynamically by Google Kubernetes Engine (GKE).</p>
<p>My aim is to reduce costs for the time where slaves can be auto-scaled down to a minimum and provide constant build speed by provisioning a large number of agents at the same time. I'm choosing this approach because it requires a minimum effort.</p>
<p>Afaik I need to forward ports 8080 and 50000 for the JNLP. This is a potential security risk since JNLP isn't protected by any form of encryption and credentials for Jenkins and third party system could be intercepted as well as arbitrary commands being run on the master.</p>
<p>There's the option to enable <a href="https://wiki.jenkins.io/display/JENKINS/Slave+To+Master+Access+Control" rel="nofollow noreferrer">Slave to master access control</a>, but as far as I understand it's not a protection against interception of credentials.</p>
<p>Is it possible to create an IP or other tunnel inside GKE? The IPs of the master nodes are not predictable and it seems like a lot of overhead to maintain the correct tunnel destination to potentially terminated and recreated node pools.</p>
<p>I'm aware that it's probably not rocket science to move the Jenkins master to Kubernetes as well and let it do it's magic with dynamic provisioning of agents in the contained and wonderful world of k8s. However I need to move it there and I don't want to invest time just to have a nice to look at solution if an easier approach does the job as well.</p>
| Kalle Richter | <p>You can use the Google Cloud VPN service to establish a secure connection from your VMs to the resources in GKE. <a href="https://cloud.google.com/vpn/docs/concepts/overview" rel="nofollow noreferrer">Here</a> you can find the official documentation and <a href="https://sreeninet.wordpress.com/2019/08/11/gke-with-vpn-networking-options/" rel="nofollow noreferrer">here</a> an example of practical use provided by a third party.</p>
| Serhii Rohoza |
<p>I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.</p>
<p>All the examples I've seen for kubernetes controllers use either client-go or just call the api server directly over http. However, kubebuilder has also given me this client.Client object to get and list resources. So I'm trying to use that.</p>
<p>After creating a client instance by using the passed in Manager instance (i.e. do <code>mgr.GetClient()</code>), I then tried to write some code to get the list of all the Environment resources I created.</p>
<pre><code>func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
// Sync environments
// Step 1 - read all the environments the cluster knows about
clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
</code></pre>
<p>The example in the documentation for the List method shows:</p>
<pre><code>c.List(context.Background, &result);
</code></pre>
<p>which doesn't even compile.</p>
<p>I saw a few methods in the client package to limit the search to particular labels, or to a specific field with a specific value, but nothing to limit the result to a specific resource kind.</p>
<p>Is there a way to do this via the <code>Client</code> object? Should I do something else entirely?</p>
| Chris Tavares | <p>According to the latest documentation, the List method is defined as follows,</p>
<pre><code>List(ctx context.Context, list ObjectList, opts ...ListOption) error
</code></pre>
<p>If the <code>List</code> method you are calling has the same definition as above, your code should compile. As it has variadic options to set the namespace and field match, the mandatory arguments are <code>Context</code> and <code>objectList</code>.</p>
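<p>As a minimal sketch (assuming a recent controller-runtime version; the import path for the CRD types is a placeholder), listing all <code>Environment</code> resources could look like this. <code>client.InNamespace</code> is optional and can be dropped to list across all namespaces:</p>
<pre class="lang-golang prettyprint-override"><code>import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/client"

    cdsv1alpha1 "example.com/myproject/api/v1alpha1" // placeholder module path for the CRD types
)

// listEnvironments returns every Environment resource in the given namespace.
func listEnvironments(c client.Client) (*cdsv1alpha1.EnvironmentList, error) {
    // The typed list object tells the client which resource kind to fetch.
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    if err := c.List(context.Background(), clusterEnvironments, client.InNamespace("default")); err != nil {
        return nil, err
    }
    return clusterEnvironments, nil
}
</code></pre>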
<p>Ref: <a href="https://book.kubebuilder.io/cronjob-tutorial/controller-implementation.html#2-list-all-active-jobs-and-update-the-status" rel="nofollow noreferrer">KubeBuilder Book</a></p>
| Hossain Mahmud |
<p>So I have namespaces</p>
<p>ns1, ns2, ns3, and ns4.</p>
<p>I have a service account sa1 in ns1. I am deploying pods to ns2 and ns4 that use sa1. When I look at the logs, it tells me that sa1 in ns2 can't be found.</p>
<p>error:</p>
<p>Error creating: pods "web-test-2-795f5fd489-" is forbidden: error looking up service account ns2/sa: serviceaccount "sa" not found</p>
<p>Is there a way to make service accounts cluster-wide? Or can I create multiple service accounts with the same secret in different namespaces?</p>
| Mr. E | <p>You can use something like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubernetes-enforce
rules:
- apiGroups: ["apps"]
resources: ["deployments","pods","daemonsets"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: ["*"]
resources: ["namespaces"]
verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-enforce
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-enforce-logging
namespace: cattle-logging
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-enforce
subjects:
- kind: ServiceAccount
name: kubernetes-enforce
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-enforce-prome
namespace: cattle-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-enforce
subjects:
- kind: ServiceAccount
name: kubernetes-enforce
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-enforce-system
namespace: cattle-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-enforce
subjects:
- kind: ServiceAccount
name: kubernetes-enforce
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-enforce-default
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-enforce
subjects:
- kind: ServiceAccount
name: kubernetes-enforce
namespace: kube-system
</code></pre>
| breizh5729 |
<p>I'm deploying this project (<a href="https://github.com/asynkron/protoactor-grains-tutorial" rel="nofollow noreferrer">GitHub</a>) locally on k3d Kubernetes Cluster. It includes a Helm chart. There is also a documentation for this example which can be found <a href="https://proto.actor/docs/cluster/getting-started-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p>What I have done so far is shown below, and it works just fine. The problem is that the ClusterIPs it gives are internal to Kubernetes and I can't access them from outside the cluster. What I want is to be able to reach them from my machine's browser. I was told that I need a NodePort or a LoadBalancer to do that. How can I do that?</p>
<pre><code>// Build Docker Images
// Navigate to root directory -> ./ProtoClusterTutorial
docker build . -t proto-cluster-tutorial:1.0.0
// Navigate to root directory
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t smart-bulb-simulator:1.0.0
// Push Docker Image to Docker Hub
docker tag proto-cluster-tutorial:1.0.0 hulkstance/proto-cluster-tutorial:1.0.0
docker push hulkstance/proto-cluster-tutorial:1.0.0
docker tag smart-bulb-simulator:1.0.0 hulkstance/smart-bulb-simulator:1.0.0
docker push hulkstance/smart-bulb-simulator:1.0.0
// List Docker Images
docker images
// Deployment to Kubernetes cluster
helm install proto-cluster-tutorial chart-tutorial
helm install simulator chart-tutorial --values .\simulator-values.yaml
// It might fail with the following message:
// Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://host.docker.internal:64285/version": dial tcp 172.16.1.131:64285: connectex: No connection could be made because the target machine actively refused it.
// which means we don't have a running Kubernetes cluster. We need to create one:
k3d cluster create two-node-cluster --agents 2
// If we want to switch between clusters:
kubectl config use-context k3d-two-node-cluster
// Confirm everything is okay
kubectl get pods
kubectl logs proto-cluster-tutorial-78b5db564c-jrz26
</code></pre>
| nop | <p>You can use the <code>kubectl port-forward</code> command.<br />
Syntax:</p>
<pre><code>kubectl port-forward TYPE/NAME [options] LOCAL_PORT:REMOTE_PORT
</code></pre>
<p>In your case:</p>
<pre><code>kubectl port-forward pod/proto-cluster-tutorial-78b5db564c-jrz26 8181:PORT_OF_POD
</code></pre>
<p>Now you can access <code>localhost:8181</code> from your machine's browser (replace <code>PORT_OF_POD</code> with the port your container listens on).</p>
| quoc9x |
<p>I'm trying to set up a simple k8s cluster on a bare-metal server.</p>
<p>I'm looking into ways to access the cluster.</p>
<p>I've been looking through the docs and read through the bare-metal considerations section.</p>
<p>So far I've found that setting external IPs and NodePorts isn't recommended.
I've heard MetalLB should be used in production, so I was about to go ahead with that.</p>
<p>Then I realised the ingress is already using a NodePort service and I can access that for development purposes.</p>
<p>Could I just use this in production too?</p>
| lloyd noone | <p>Of course you can. If you do not need <strong>routing rules</strong> or anything beyond what kube-proxy can offer, you don't need an extra component like MetalLB.</p>
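<p>For reference, a minimal sketch of exposing an ingress controller through a NodePort Service (names, labels and port numbers are placeholders to adapt to your controller):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
</code></pre>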
| gohm'c |
<p>Below is the current configuration for livenessProbe:</p>
<pre><code> livenessProbe:
httpGet:
path: /heartbeat
port: 8000
initialDelaySeconds: 2
timeoutSeconds: 2
periodSeconds: 8
failureThreshold: 2
</code></pre>
<hr />
<p>But the response body for the URL <code>.well-known/heartbeat</code> shows <code>status: "DOWN"</code> while the HTTP return status is 200.</p>
<p>So the kubelet does not restart the container, because of the HTTP return status 200.</p>
<hr />
<p>How can I ensure the kubelet reads the response body instead of the HTTP return status, using the <code>livenessProbe</code> configuration?</p>
| overexchange | <p>You can inspect the response body in your probe using a shell command, for example:</p>
<pre><code>livenessProbe:
exec:
command:
- sh
- -c
- curl -s localhost | grep 'status: "UP"'
</code></pre>
<p><code>grep</code> returns non-zero when the body contains <code>status: "DOWN"</code>, which causes the livenessProbe to fail. You can of course adjust the command (URL, port, and pattern) to match your actual endpoint and response body.</p>
| gohm'c |
<p>Can Traefik act as a reverse proxy for some external endpoint? Like nginx's <code>proxy_pass</code> for a specific location.
For example, I'd like to perform transparent reverse-proxying to <a href="https://app01.host.com" rel="nofollow noreferrer">https://app01.host.com</a>, which is in another datacenter.</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: backend01-ingressroute-app
spec:
entryPoints:
- websecure
routes:
- match: Host(`backend01.host.local`) && PathPrefix(`/app`)
kind: Rule
services:
....
</code></pre>
<p>backend01.host.local/app -> <a href="https://app01.host.com" rel="nofollow noreferrer">https://app01.host.com</a>?
But what do I need to specify as "services" here to achieve that?</p>
| Michael C | <p>I found that ExternalName services are disabled by default when using Traefik with Helm. Note that this has to be set for <code>kubernetesCRD</code> and for <code>kubernetesIngress</code> separately. This is not explained well in the documentation: <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroute" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroute</a></p>
<p>traefik helm values file:</p>
<pre><code>...
#
# Configure providers
#
providers:
kubernetesCRD:
enabled: true
allowCrossNamespace: false
allowExternalNameServices: false # <- This needs to be true
# ingressClass: traefik-internal
# labelSelector: environment=production,method=traefik
namespaces:
[]
# - "default"
kubernetesIngress:
enabled: true
allowExternalNameServices: false # <- This needs to be true
# labelSelector: environment=production,method=traefik
namespaces:
...
</code></pre>
| firstdorsal |
<p>How can I run kubectl apply commands from Go via client-go?
For example:
I have a file called crds.yaml and I want to apply it via client-go.</p>
<p>I can't find any examples of how to do this; can someone please help?</p>
| Astin Gengo | <p>You need to decode your .yaml file.</p>
<p><a href="https://github.com/kubernetes/client-go/issues/216" rel="nofollow noreferrer">Here</a> is the complete answer (with a code example).</p>
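<p>As a rough sketch of that approach (not the linked author's exact code): decode each YAML document into an <code>unstructured.Unstructured</code> object and create it with the dynamic client. The kubeconfig path and the GroupVersionResource are placeholders you have to adapt to your manifest's kind:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/util/yaml"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a rest.Config from a local kubeconfig (path is a placeholder).
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    dyn, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // A single YAML document; in practice you would read crds.yaml and split it per document.
    manifest := []byte(`
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
  namespace: default
data:
  key: value
`)

    // Decode the YAML into an unstructured object.
    obj := &unstructured.Unstructured{}
    if err := yaml.Unmarshal(manifest, obj); err != nil {
        panic(err)
    }

    // The GVR must match the manifest's kind (here: v1 ConfigMaps).
    gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
    if _, err := dyn.Resource(gvr).Namespace("default").Create(context.TODO(), obj, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
</code></pre>
<p>Note that <code>kubectl apply</code> is create-or-update; a plain <code>Create</code> as above fails if the object already exists, so for true apply semantics you would use server-side apply (<code>Patch</code> with <code>types.ApplyPatchType</code>) instead.</p>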
| Leandro Toloza |