Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>Hi, I used to generate Kubernetes .yaml files from deployed applications, but recently I have been getting a messy .yaml file. For instance:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-01-16T16:02:21Z"
generation: 1
labels:
name: webapp
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:name: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
</code></pre>
<p>Can you see all the <code>f:</code> entries? What do they mean?</p>
<p>Is there a way to generate a simplified version of the deployed applications, pods and so on, without all the fields I never specified?</p>
<p>The command I used was:</p>
<pre><code>kubectl get deployments.apps webapp -o yaml > webapp.yaml
</code></pre>
| pedro_bb7 | <p>Upgrade your kubectl to v1.21 or above, which does not include managed fields by default. To see managed fields again, append <code>--show-managed-fields</code> to your command.</p>
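<p>For example (assuming the deployment from the question and a v1.21+ client):</p>
<pre><code>kubectl version --client                      # confirm the client is v1.21 or newer
kubectl get deployments.apps webapp -o yaml   # managedFields omitted by default
kubectl get deployments.apps webapp -o yaml --show-managed-fields   # include them again
</code></pre>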
| gohm'c |
<p>I've installed the <code>kong-ingress-controller</code> using a YAML file on a 3-node k8s cluster,
but I'm getting this (the status of the pod is <code>CrashLoopBackOff</code>):</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
</code></pre>
<p>There are two container declarations in the Kong YAML file: <code>proxy</code> and <code>ingress-controller</code>.
The first one is up and running, but the <code>ingress-controller</code> container is not:</p>
<pre><code>$kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong |less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
</code></pre>
<p>And here is the log of <code>ingress-controller</code> container:</p>
<pre><code>-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
</code></pre>
<p>If someone could help me to get a solution, that would be awesome.</p>
<p>============================================================</p>
<p><strong>UPDATE</strong>:</p>
<p>The <code>kong-ingress-controller</code>'s yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
</code></pre>
| samm13 | <p>Having analysed the comments, it looks like changing <code>apiVersion</code> from <code>rbac.authorization.k8s.io/v1beta1</code> to <code>rbac.authorization.k8s.io/v1</code> has solved the problem temporarily; an alternative to this solution is to downgrade the cluster.</p>
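<p>Concretely, that means changing the <code>apiVersion</code> on each RBAC object in the manifest (a sketch; <code>rbac.authorization.k8s.io/v1beta1</code> is no longer served on Kubernetes v1.22, which the controller log shows the cluster is running):</p>
<pre><code># before
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
# after
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
</code></pre>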
| Jakub Siemaszko |
<p>How can I <strong>mount</strong> the service account token?
We are using a chart which doesn't support it, and after an hour the chart fails.</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume</a> ?</p>
<p>I understand that from 1.22.x this is the default behavior of k8s.</p>
<p>It's <code>BoundServiceAccountTokenVolume</code> in the following link:
<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/</a></p>
<p>I'm referring to <strong>manually mounting the service account token</strong>.</p>
<p>I'm talking about vector.dev, which doesn't support this:
<a href="https://vector.dev/docs/setup/installation/platforms/kubernetes/" rel="nofollow noreferrer">https://vector.dev/docs/setup/installation/platforms/kubernetes/</a></p>
<p><strong>Update</strong>:
According to this post, this is the way to do it on k8s 1.22.x.
Please provide an example, since I'm not sure how to make it work:
<a href="https://github.com/vectordotdev/vector/issues/8616#issuecomment-1010281331" rel="nofollow noreferrer">https://github.com/vectordotdev/vector/issues/8616#issuecomment-1010281331</a></p>
| PJEM | <p>There's no issue with the Vector agent accessing the token, but the token will now expire within an hour by default, compared to previously, when it had no expiry. When the token has passed its validity time, the agent application needs to reload the token from the mounted token volume (previously a secret volume). The change is needed in the agent application to support this paradigm, not in K8s.</p>
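<p>If you do need to mount a token manually (for example for a chart that does not request one itself), a projected service account token volume is the usual approach. A minimal sketch, assuming a container named <code>vector</code> (image tag is hypothetical) and the pod's default service account:</p>
<pre><code>spec:
  containers:
  - name: vector
    image: timberio/vector:latest-alpine   # hypothetical tag
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600   # token rotates; the app must re-read it
      - configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt
      - downwardAPI:
          items:
          - path: namespace
            fieldRef:
              fieldPath: metadata.namespace
</code></pre>
<p>The key point from the answer still applies: the application has to re-read the file when the token rotates.</p>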
| gohm'c |
<p>This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sails-svc
spec:
selector:
app: sails
ports:
- port: 1337
targetPort: 1337
type: LoadBalancer
</code></pre>
<p>And this is what k8s sets up for services:</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP <X.X.X.X> <none> 443/TCP 23h
sails LoadBalancer <X.X.X.X> <X.X.X.X> 1337:30203/TCP 3m6s
svc-postgres ClusterIP <X.X.X.X> <none> 5432/TCP 3m7s
</code></pre>
<p>Why is k8s setting the target port to <code>30203</code>, when I'm specifying <code>1337</code>? It does the same thing if I try other port numbers, <code>80</code> gets <code>31887</code>. I've read <a href="https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation" rel="nofollow noreferrer">the docs</a> but disabling those attributes did nothing in GCP. What am I not configuring correctly?</p>
| Colby Blair | <p>The <code>kubectl get services</code> output includes <strong>Port:NodePort:Protocol</strong> information. By default, and for convenience, the Kubernetes control plane will allocate a NodePort from a range (default: <strong>30000-32767</strong>); refer to the example in this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">documentation</a>.</p>
<p>To get the targetPort information, try using:</p>
<pre><code>kubectl get service <your service name> --output yaml
</code></pre>
<p>This command shows all port details and the stable external IP address under loadBalancer:ingress.</p>
<p>Refer to this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_loadbalancer" rel="nofollow noreferrer">documentation</a> for more details on creating a Service of type LoadBalancer.</p>
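<p>If you would rather pin the NodePort than let it be allocated randomly, you can set it yourself inside the allowed range (a hedged sketch of the Service from the question; the <code>nodePort</code> value is an arbitrary example):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  type: LoadBalancer
  selector:
    app: sails
  ports:
  - port: 1337        # port exposed by the Service / load balancer
    targetPort: 1337  # port the container listens on
    nodePort: 31337   # optional; must fall inside the NodePort range (default 30000-32767)
</code></pre>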
| Goli Nikitha |
<p>I am deploying a Kubernetes app via GitHub on GCP clusters. Everything worked fine until I came across the <code>cloud deploy delivery pipeline</code>... now I am stuck.</p>
<p>Following the <a href="https://cloud.google.com/deploy/docs/quickstart-basic?_ga=2.141149938.-1343950568.1631260475&_gac=1.47309141.1631868766.CjwKCAjw-ZCKBhBkEiwAM4qfF2mz0qQw_k68XtDo-SSlglr1_U2xTUO0C2ZF8zBOdMlnf_gQVwDi3xoCQ8IQAvD_BwE" rel="nofollow noreferrer">docs</a> here</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
build:
artifacts:
- image: skaffold-example
deploy:
kubectl:
manifests:
- k8s-*
</code></pre>
<p>In the <code>k8s</code> folder I have my deployment files like so</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ixh-auth-depl
labels:
app: ixh-auth
spec:
replicas: 1
selector:
matchLabels:
app: ixh-auth
template:
metadata:
labels:
app: ixh-auth
spec:
containers:
- name: ixh-auth
image: mb/ixh-auth:latest
ports:
- containerPort: 3000
resources:
requests:
cpu: 100m
memory: 500Mi
</code></pre>
<p>but it gives the error <code>invalid kubernetes manifest</code>. I cannot find anything to read on this and don't know how to proceed.</p>
| Abhishek Rai | <p>@Abhishek Rai, I agree with your answer. Google Cloud Deploy uses <strong>skaffold render</strong> to render your Kubernetes manifests, replacing untagged image names with the tagged image names of the container images you're deploying. Then, when you promote the release, Google Cloud Deploy uses skaffold apply to apply the manifests and deploy the images to your Google Kubernetes Engine cluster. The content of the <code>manifests</code> field should include the path to the YAML files, as in:</p>
<pre><code>deploy:
kubectl:
manifests:
- PATH_TO_MANIFEST
</code></pre>
<p>so that the error will not be encountered. Refer to the <a href="https://cloud.google.com/deploy/docs/skaffold" rel="nofollow noreferrer">document</a> for more details.</p>
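<p>For example, with the manifests in a <code>k8s</code> folder, a glob along these lines should match them (a sketch; adjust the pattern to your repository layout):</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
  - image: skaffold-example
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
</code></pre>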
| Srividya |
<p>I have three node pools in my cluster, each of them with autoscaling enabled to go from 1 to 100 nodes. The minimum is 1 node for all. Something weird is happening with autoscaling.</p>
<p>Scale down works fine for all pools.
Scale up seems to create a new node pool instead of scaling the corresponding node pools, but since that node pool is missing the labels we need, nothing gets scheduled and it eventually gets destroyed.</p>
<p>I swear I am missing some information to enable it to scale the right node pool. Any suggestions on what to look at and where to change? I do not use/have GCE autoscaling.</p>
| vpram86 | <p>GKE starts new nodes only from user-created node pools. With <strong>Node auto-provisioning</strong> enabled, the cluster autoscaler can extend node pools automatically; node auto-provisioning automatically manages a set of node pools on the user's behalf. Since the existing node pools here don't have the required labels, node auto-provisioning is creating new node pools with those labels.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#workload_separation" rel="nofollow noreferrer">Node auto-provisioning</a> might create node pools with labels and taints if all the following conditions are met:</p>
<ul>
<li>A pending Pod requires a node with a specific label key and value.</li>
<li>The Pod has a toleration for a taint with the same key.</li>
<li>The toleration is for the NoSchedule effect, NoExecute effect, or all effects.</li>
</ul>
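<p>For example, a pending Pod along these lines (the label key/value and taint are hypothetical) meets all of the above conditions, so node auto-provisioning could create a node pool carrying that label and taint:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: workload-needing-label
spec:
  nodeSelector:
    dedicated: ml-workloads        # condition 1: required node label
  tolerations:
  - key: dedicated                 # condition 2: toleration with the same key
    operator: Equal
    value: ml-workloads
    effect: NoSchedule             # condition 3: NoSchedule/NoExecute/all effects
  containers:
  - name: app
    image: nginx
</code></pre>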
<p>You can update node labels and node taints for the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/update-existing-nodepools#limitations" rel="nofollow noreferrer">existing nodepools</a> by disabling the autoscaling on the node pool. After the labels or taints are updated, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#enabling_autoscaling_for_an_existing_node_pool" rel="nofollow noreferrer">re-enable autoscaling</a>.</p>
<p>To update node labels for an existing node pool, use the following command:</p>
<pre><code>gcloud beta container node-pools update NODEPOOL_NAME \
--node-labels=[NODE_LABEL,...] \
[--cluster=CLUSTER_NAME] [--region=REGION | --zone=ZONE] \
[GCLOUD_WIDE_FLAG …]
</code></pre>
<p><strong>Note:</strong> The cluster autoscaler is automatically enabled when using node auto-provisioning.</p>
<p>Refer to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#overview" rel="nofollow noreferrer">Node auto-provisioning</a> for more information.</p>
| Srividya |
<p>I'm running Flink 1.11 on a k8s cluster and getting the following error when trying to update the log4j-console.properties file:</p>
<pre><code>Starting Task Manager
Enabling required built-in plugins
Linking flink-s3-fs-hadoop-1.11.1.jar to plugin directory
Successfully enabled flink-s3-fs-hadoop-1.11.1.jar
sed: couldn't open temporary file /opt/flink/conf/sedl2dH0X: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sedPLYAzY: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
sed: couldn't open temporary file /opt/flink/conf/sede0G5LW: Read-only file system
/docker-entrypoint.sh: 120: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting taskexecutor as a console application on host flink-taskmanager-c765c947c-qx68t.
Exception in thread "main" java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ser/FilterProvider
at org.apache.logging.log4j.core.layout.JsonLayout.<init>(JsonLayout.java:158)
at org.apache.logging.log4j.core.layout.JsonLayout.<init>(JsonLayout.java:69)
at org.apache.logging.log4j.core.layout.JsonLayout$Builder.build(JsonLayout.java:102)
at org.apache.logging.log4j.core.layout.JsonLayout$Builder.build(JsonLayout.java:77)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<clinit>(TaskManagerRunner.java:89)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ser.FilterProvider
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown Source)
at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
</code></pre>
<p>my log4j-console.properties:</p>
<pre><code>rootLogger.level = INFO
#rootLogger.appenderRef.console.ref = ConsoleAppender
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n
appender.kafka.type = Kafka
appender.kafka.name = Kafka
appender.kafka.topic = test
appender.kafka.layout.type = JsonLayout
appender.kafka.layout.complete = false
appender.kafka.additional1.type = KeyValuePair
appender.kafka.additional1.key=app
appender.kafka.additional1.value=TEST
appender.kafka.additional2.type = KeyValuePair
appender.kafka.additional2.key=subsystem
appender.kafka.additional2.value=TEST
appender.kafka.additional3.type = KeyValuePair
appender.kafka.additional3.key=deployment
appender.kafka.additional3.value=TEST
appender.kafka.property.bootstrap.servers=***
rootLogger.appenderRef.console.ref = STDOUT
rootLogger.appenderRef.kafka.ref = Kafka
</code></pre>
<p>Im using "flink:1.11.1-scala_2.11-java11" docker image and validated that all log4j2 dependencies are in the classpath.</p>
<p>I have also tried to create a new docker image from the above base image and add to it the missing dependency and yet nothing happened.</p>
| Noam Levy | <p>I too suffered from this bug. The issue here is that when the task manager and job managers start they are running with a modified classpath, not the JAR that you've built via your build system.</p>
<p>See <code>constructFlinkClassPath</code> in the <a href="https://github.com/apache/flink/blob/af681c63821c027bc7a233560f7765b686d0d244/flink-dist/src/main/flink-bin/bin/config.sh#L20" rel="nofollow noreferrer">flink source code</a>. To prove this out, revert the JSON logging pattern change and check the classpath in the tm/jm logs on startup. You'll notice that your JAR isn't on the classpath.</p>
<p>To fix this issue you need to provide the dependencies (in this case you'll need <code>jackson-core</code> <code>jackson-annotations</code> and <code>jackson-databind</code>) to the <code>lib</code> folder within the tm/jm nodes (the <code>lib</code> folder is included by default in the flink classpath).</p>
<p>If you are using docker, you can do this when you build the container (<code>RUN wget...</code>).</p>
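<p>A minimal sketch of that approach (the Jackson version 2.12.1 is an assumption; pick whatever version matches your log4j2 setup, and verify the jars land in Flink's <code>lib</code> folder, which is on the classpath by default):</p>
<pre><code>FROM flink:1.11.1-scala_2.11-java11

# Put the Jackson jars that JsonLayout needs onto the Flink classpath
RUN wget -P /opt/flink/lib \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.12.1/jackson-core-2.12.1.jar \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.12.1/jackson-databind-2.12.1.jar \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.12.1/jackson-annotations-2.12.1.jar
</code></pre>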
| fartknocker206 |
<p>I am trying to set up Kubeflow using <a href="https://charmed-kubeflow.io/" rel="nofollow noreferrer">charmed-kubeflow</a>. It says "super easy setup" and everything. But I am failing at step 2.</p>
<p>Setup: "Normal" remote Kubernetes cluster, set up with Kubespray. My idea was to:</p>
<pre><code>juju add-k8s mycluster
juju bootstrap mycluster mycluster
</code></pre>
<ul>
<li>Juju has Kubernetes access.</li>
<li>Juju creates the controller.</li>
<li>Juju tries to connect via 10.x.x.x IP.</li>
<li>Of course setup of the Controller does NOT work.</li>
</ul>
<p>How should this work? It is a remote cluster; private IPs cannot be accessed directly! And there is no option to configure a different Service type or anything similar! There seem to be zero tutorials on Juju and using it... is anyone using this stuff?</p>
| iptizer | <p>This issue with the bootstrap command has been acknowledged, and a fix is included in the Juju 2.9 release.</p>
<p>To get a release candidate as of now, use:
<code>snap install juju --classic --channel=2.9/candidate</code></p>
| Rui Vasconcelos |
<p>If I create a pod imperatively like this, what does the --port option do?</p>
<p><code>kubectl run mypod --image=nginx --port 9090</code></p>
<p>The nginx application by default is going to listen on port 80, so why do we need this option?
The documentation says:</p>
<blockquote>
<p>--port='': The port that this container exposes.</p>
</blockquote>
<p>If it is exposed using <code>kubectl expose pod mypod --port 9090</code>, it is going to create a service with port 9090 and target port 9090. But in the above case it doesn't even create a service.</p>
| Logu | <p><code>...nginx application by default is going to listen on port 80. Why do we need this option?</code></p>
<p>The use of <code>--port 80</code> means the same if you write in spec:</p>
<pre><code>...
containers:
- name: ...
image: nginx
ports:
- containerPort: 80
...
</code></pre>
<p>It doesn't do any port mapping, but informs that this container will expose port 80.</p>
<p><code>...in the above case it neither creates a service</code></p>
<p>You can add <code>--expose</code> to <code>kubectl run</code>, which will create a service; in this case it is the same as writing in the spec:</p>
<pre><code>kind: Service
...
spec:
ports:
- port: 80
targetPort: 80
...
</code></pre>
<p>Note you can <strong>only</strong> specify one port with <code>--port</code>; even if you pass multiple <code>--port</code> flags, only the last one will take effect.</p>
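<p>A quick way to see both effects together (the created service gets the same name as the pod):</p>
<pre><code>kubectl run mypod --image=nginx --port=80 --expose
kubectl get pod,service mypod
</code></pre>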
| gohm'c |
<p>I have run into an issue where <code>helm install</code>ing my charts will work fine, but when I go to restart the system, the nvidia gpu operator will fail to validate.</p>
<p>Bootstrapping is simple:</p>
<p><code>$ microk8s enable gpu</code></p>
<p>< watching dashboard for all the pods to turn green ></p>
<p><code>$ microk8s helm install -n morpheus morpheus-ai-engine morpheus-ai-engine</code></p>
<p>< watching for the morpheus pods to turn green ></p>
<p>Now I can check if the <code>ai-engine</code> pod has GPU access:</p>
<pre><code>$ kubectl exec ai-engine-897d65cff-b2trz -- nvidia-smi
Wed Feb 22 16:35:32 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P400 Off | 00000000:04:00.0 Off | N/A |
| 0% 38C P8 N/A / 30W | 98MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
</code></pre>
<p>Running the test vector-add pod returns a <code>Test PASSED</code>.</p>
<p>The trouble comes when I restart microk8s. The <code>nvidia-device-plugin-validator</code> pod fails to load with an <code>UnexpectedAdmissionError</code> claiming that no GPUs are available. And running <code>nvidia-smi</code> in the <code>ai-engine</code> pod returns a "command not found". The vector-add test pod won't start due to insufficient GPUs.</p>
<p>But if I uninstall the <code>ai-engine</code> chart and restart microk8s (waiting for the gpu operator pods to all turn green), I can then reinstall <code>ai-engine</code> and it works fine again, as does the vector-add test.</p>
| directedition | <p>This is an issue I am coming across too, which led me here. It looks like it was just recently fixed with this patch: <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2</a></p>
<p>The fix evicts pods requesting GPUs while the operator starts up again.
This should solve your issue as it did mine.</p>
| Shane Hughes |
<p>I'm trying to deploy a multi-container Docker app (<a href="https://github.com/shadowHawkeye/eramba" rel="nofollow noreferrer">https://github.com/shadowHawkeye/eramba</a>). This is the YAML file I'm using with <code>kubectl apply -f</code>:</p>
<p>The two images I have (one for the DB and one for the app) are built with <code>docker build -t <> .</code> from the GitHub repo.</p>
<p>For DB_ENV_MYSQL_HOST, I've tried both and <eramba-db.eramba-1>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: eramba
namespace: eramba-1
labels:
app: eramba
spec:
replicas: 1
selector:
matchLabels:
app: eramba
template:
metadata:
labels:
app: eramba
spec:
containers:
- name: eramba
image: docker.io/deveramba/eramba:latest
ports:
- containerPort: 80
env:
- name: DB_ENV_MYSQL_DATABASE
value: "eramba-db"
- name: DB_ENV_MYSQL_HOST
value: "eramba-host"
- name: DB_ENV_MYSQL_USER
value: "eramba"
- name: DB_ENV_MYSQL_PASSWORD
value: "password"
- name: DB_ENV_MYSQL_ROOT_PASSWORD
value: "password"
- name: ERAMBA_HOSTNAME
value: localhost
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: eramba-db
namespace: eramba-1
labels:
app: eramba-db
spec:
replicas: 1
selector:
matchLabels:
app: eramba-db
template:
metadata:
labels:
app: eramba-db
spec:
containers:
- name: eramba-db
image: docker.io/deveramba/eramba-db:latest
ports:
- containerPort: 3306
env:
- name: MYSQL_DATABASE
value: "eramba-db"
- name: MYSQL_USER
value: "eramba"
- name: MYSQL_PASSWORD
value: "password"
- name: MYSQL_ROOT_PASSWORD
value: "password"
---
apiVersion: v1
kind: Service
metadata:
name: db
namespace: eramba-1
spec:
selector:
app: eramba-db
ports:
- name: sql
port: 3306
targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
name: eramba-np
namespace: eramba-1
spec:
type: NodePort
selector:
app: eramba
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30045
</code></pre>
<p>The deployment looks like (pods and services output)</p>
<pre><code>root@osboxes:/home/osboxes/manifests# kubectl get pods -n eramba-1
NAME READY STATUS RESTARTS AGE
eramba-7f7c88c9d6-zqnzr 1/1 Running 2 (73s ago) 7m47s
eramba-db-6c5fdfb7b8-wtgqd 1/1 Running 0 7m47s
root@osboxes:/home/osboxes/manifests# kubectl get service -o wide -n eramba-1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
db ClusterIP 10.98.169.229 <none> 3306/TCP 3h31m app=eramba-db
eramba-np NodePort 10.97.149.116 <none> 80:30045/TCP 3h31m app=eramba
</code></pre>
<p>The problem is that <code>kubectl logs <></code> complains about the unknown host eramba-host. I've defined DB_ENV_MYSQL_HOST and MYSQL_HOST in the app and db deployments, respectively.</p>
<pre><code>root@osboxes:/home/osboxes/manifests# kubectl logs eramba-7f7c88c9d6-zqnzr -n eramba-1
[i] pre-exec.d - processing /scripts/pre-exec.d/010-apache.sh
tail: can't open '/var/log/apache2/*log': No such file or directory
[i] pre-exec.d - processing /scripts/pre-exec.d/020-eramba-initdb.sh
[i] Waiting for database to setup...
[i] Trying to connect to database: try 1...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 2...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 3...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 4...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 5...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 6...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
[i] Trying to connect to database: try 7...
ERROR 2005 (HY000): Unknown MySQL server host 'eramba-host' (-3)
</code></pre>
<p>Here's the <code>kubectl logs</code> output for the db</p>
<pre><code>root@osboxes:/home/osboxes/manifests# kubectl logs eramba-db-6c5fdfb7b8-wtgqd -n eramba-1
2022-01-07 19:17:00+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.5+maria~focal started.
2022-01-07 19:17:00+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-01-07 19:17:00+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.5+maria~focal started.
2022-01-07 19:17:00+00:00 [Note] [Entrypoint]: Initializing database files
2022-01-07 19:17:00 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following command:
'/usr/bin/mysql_secure_installation'
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the MariaDB Knowledgebase at https://mariadb.com/kb or the
MySQL manual for more instructions.
Please report any problems at https://mariadb.org/jira
The latest information about MariaDB is available at https://mariadb.org/.
You can find additional information about the MySQL part at:
https://dev.mysql.com
Consider joining MariaDB's strong and vibrant community:
https://mariadb.org/get-involved/
2022-01-07 19:17:01+00:00 [Note] [Entrypoint]: Database files initialized
2022-01-07 19:17:01+00:00 [Note] [Entrypoint]: Starting temporary server
2022-01-07 19:17:01+00:00 [Note] [Entrypoint]: Waiting for server startup
2022-01-07 19:17:01 0 [Note] mariadbd (server 10.6.5-MariaDB-1:10.6.5+maria~focal) starting as process 96 ...
2022-01-07 19:17:01 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2022-01-07 19:17:01 0 [Note] InnoDB: Number of pools: 1
2022-01-07 19:17:01 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2022-01-07 19:17:01 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2022-01-07 19:17:01 0 [Note] InnoDB: Using Linux native AIO
2022-01-07 19:17:01 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2022-01-07 19:17:01 0 [Note] InnoDB: Completed initialization of buffer pool
2022-01-07 19:17:01 0 [Note] InnoDB: 128 rollback segments are active.
2022-01-07 19:17:01 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2022-01-07 19:17:01 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2022-01-07 19:17:01 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2022-01-07 19:17:01 0 [Note] InnoDB: 10.6.5 started; log sequence number 41361; transaction id 14
2022-01-07 19:17:01 0 [Note] Plugin 'FEEDBACK' is disabled.
2022-01-07 19:17:01 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2022-01-07 19:17:01 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
2022-01-07 19:17:01 0 [Warning] 'user' entry 'root@eramba-db-6c5fdfb7b8-wtgqd' ignored in --skip-name-resolve mode.
2022-01-07 19:17:01 0 [Warning] 'proxies_priv' entry '@% root@eramba-db-6c5fdfb7b8-wtgqd' ignored in --skip-name-resolve mode.
2022-01-07 19:17:01 0 [Note] InnoDB: Buffer pool(s) load completed at 220107 19:17:01
2022-01-07 19:17:01 0 [Note] mariadbd: ready for connections.
Version: '10.6.5-MariaDB-1:10.6.5+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
2022-01-07 19:17:02+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
2022-01-07 19:17:03 5 [Warning] 'proxies_priv' entry '@% root@eramba-db-6c5fdfb7b8-wtgqd' ignored in --skip-name-resolve mode.
2022-01-07 19:17:03+00:00 [Note] [Entrypoint]: Creating database eramba-db
2022-01-07 19:17:03+00:00 [Note] [Entrypoint]: Creating user eramba
2022-01-07 19:17:03+00:00 [Note] [Entrypoint]: Giving user eramba access to schema eramba-db
2022-01-07 19:17:03+00:00 [Note] [Entrypoint]: Stopping temporary server
2022-01-07 19:17:03 0 [Note] mariadbd (initiated by: root[root] @ localhost []): Normal shutdown
2022-01-07 19:17:03 0 [Note] InnoDB: FTS optimize thread exiting.
2022-01-07 19:17:03 0 [Note] InnoDB: Starting shutdown...
2022-01-07 19:17:03 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2022-01-07 19:17:03 0 [Note] InnoDB: Buffer pool(s) dump completed at 220107 19:17:03
2022-01-07 19:17:04 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
2022-01-07 19:17:04 0 [Note] InnoDB: Shutdown completed; log sequence number 42335; transaction id 15
2022-01-07 19:17:04 0 [Note] mariadbd: Shutdown complete
2022-01-07 19:17:04+00:00 [Note] [Entrypoint]: Temporary server stopped
2022-01-07 19:17:04+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
2022-01-07 19:17:04 0 [Note] mariadbd (server 10.6.5-MariaDB-1:10.6.5+maria~focal) starting as process 1 ...
2022-01-07 19:17:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2022-01-07 19:17:04 0 [Note] InnoDB: Number of pools: 1
2022-01-07 19:17:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2022-01-07 19:17:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2022-01-07 19:17:05 0 [Note] InnoDB: Using Linux native AIO
2022-01-07 19:17:05 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2022-01-07 19:17:05 0 [Note] InnoDB: Completed initialization of buffer pool
2022-01-07 19:17:05 0 [Note] InnoDB: 128 rollback segments are active.
2022-01-07 19:17:05 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2022-01-07 19:17:05 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2022-01-07 19:17:05 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2022-01-07 19:17:05 0 [Note] InnoDB: 10.6.5 started; log sequence number 42335; transaction id 14
2022-01-07 19:17:05 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2022-01-07 19:17:05 0 [Note] Plugin 'FEEDBACK' is disabled.
2022-01-07 19:17:05 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
2022-01-07 19:17:05 0 [Note] InnoDB: Buffer pool(s) load completed at 220107 19:17:05
2022-01-07 19:17:05 0 [Note] Server socket created on IP: '0.0.0.0'.
2022-01-07 19:17:05 0 [Note] Server socket created on IP: '::'.
2022-01-07 19:17:05 0 [Warning] 'proxies_priv' entry '@% root@eramba-db-6c5fdfb7b8-wtgqd' ignored in --skip-name-resolve mode.
2022-01-07 19:17:05 0 [Note] mariadbd: ready for connections.
Version: '10.6.5-MariaDB-1:10.6.5+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
</code></pre>
| Bryan | <p>Here's how you can run Eramba community edition on K8s:</p>
<ul>
<li>Based on <a href="https://github.com/markz0r/eramba-community-docker" rel="nofollow noreferrer">eramba-community-docker</a>. Lots of hard work by this author; do give the repo a star.</li>
<li>Tested on Linux only.</li>
<li>MariaDB stores data at the host path /tmp/erambadb. You can upgrade it to other storage media as you like.</li>
<li>This addresses the K8s implementation only; it does not address any eramba-specific topic or workings.</li>
<li>Runs in the "default" namespace.</li>
<li>Runs the eramba web application as a Pod. You can upgrade it to a Deployment as you like.</li>
</ul>
<p>First, use your favorite editor to start an <code>eramba-cm.yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: eramba
data:
c2.8.1.sql: |
CREATE DATABASE IF NOT EXISTS erambadb;
USE erambadb;
## IMPORTANT: MUST BE INDENT 2 SPACES AFTER c2.8.1.sql ##
<copy & paste content from here: https://raw.githubusercontent.com/markz0r/eramba-community-docker/master/sql/c2.8.1.sql>
</code></pre>
<p><code>kubectl create -f eramba-cm.yaml</code></p>
<p>Create the storage for MariaDB:</p>
<pre><code>cat << EOF > eramba-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: eramba-storage
spec:
storageClassName: eramba-storage
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /tmp/erambadb
type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: eramba-storage
spec:
storageClassName: eramba-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
...
EOF
</code></pre>
<p><code>kubectl create -f eramba-storage.yaml</code></p>
<p>Install bitnami/mariadb using <a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">Helm</a></p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade -i eramba bitnami/mariadb --set auth.rootPassword=eramba,auth.database=erambadb,initdbScriptsConfigMap=eramba,volumePermissions.enabled=true,primary.persistence.existingClaim=eramba-storage
</code></pre>
<p>Run eramba web application:</p>
<pre><code>cat << EOF > eramba-web.yaml
apiVersion: v1
kind: Pod
metadata:
name: eramba-web
labels:
app.kubernetes.io/name: eramba-web
spec:
containers:
- name: eramba-web
image: markz0r/eramba-app:c281
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_HOSTNAME
value: eramba-mariadb
- name: MYSQL_DATABASE
value: erambadb
- name: MYSQL_USER
value: root
- name: MYSQL_PASSWORD
value: eramba
- name: DATABASE_PREFIX
value: ""
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: eramba-web
labels:
app.kubernetes.io/name: eramba-web
spec:
ports:
- name: http
nodePort: 30045
port: 8080
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: eramba-web
type: NodePort
...
EOF
</code></pre>
<p>Check all that required: <code>kubectl get cm,pvc,pv,svc,pods</code></p>
<p><a href="https://i.stack.imgur.com/wFXiu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wFXiu.png" alt="Up and running" /></a></p>
<p>You can now browse eramba-web via port-forward or http://<code><host ip></code>:30045.</p>
<p><code>kubectl port-forward service/eramba-web 8888:8080</code></p>
<p><a href="https://i.stack.imgur.com/flGx2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/flGx2.png" alt="Eramba login page" /></a></p>
| gohm'c |
<p>When I run the cronjob in Kubernetes, the cron completes successfully but I do not get the desired result.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $.Values.appName }}
namespace: {{ $.Values.appName }}
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: image
command: ["/bin/bash"]
args: [ "test.sh" ]
restartPolicy: OnFailure
</code></pre>
<p>Also, I am sharing test.sh:</p>
<pre><code>#!/bin/sh
rm -rf /tmp/*.*
echo "remove done"
</code></pre>
<p>The cronjob runs successfully, but
when I check the container, the files are not getting deleted from the <strong>/tmp</strong> directory.</p>
| Parth Shah | <p>You need to have a persistent volume attached to both your pod and the cronjob so that the script can remove the files when it is executed; you need to mount it and use the corresponding path in your script, as in the sketch below. For adding Kubernetes cronjobs, kindly go through this <a href="https://stackoverflow.com/questions/46578331/kubernetes-is-it-possible-to-mount-volumes-to-a-container-running-as-a-cronjob">link</a>.</p>
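<p>A minimal sketch of that idea, assuming a hypothetical PVC named <code>shared-tmp-pvc</code> that is also mounted by the application pod at the directory you want to clean:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: tmp-cleanup
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: busybox
            command: ["/bin/sh", "-c", "rm -rf /shared/*.* && echo 'remove done'"]
            volumeMounts:
            - name: shared
              mountPath: /shared
          volumes:
          - name: shared
            persistentVolumeClaim:
              claimName: shared-tmp-pvc   # hypothetical claim, shared with the app pod
          restartPolicy: OnFailure
</code></pre>
<p>Without a shared volume, the cron pod only cleans its own ephemeral <code>/tmp</code>, which is why the files in the application container never disappear.</p>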
| Hardik Panchal |
<p>I am playing with Kubernetes <em>really</em> for the first time, and have a brand new (empty) GKE cluster up on GCP.</p>
<p>I am going to play around with YAML Kustomize files and try to get a few services deployed there, but what I'm really looking for is a command (or set of <code>kubectl</code>/<code>gcloud</code> commands) to restore the GKE cluster to a totally "new" slate. This is because it's probably going to take several dozen (or more!) attempts at configuring and tweaking my YAML files until I get the configs and behavior down just right, and each time I mess up I want to start over with a completely "clean"/new GKE cluster. For reasons outside the scope of this question, deleting and recreating the GKE cluster really isn't a viable option.</p>
<p>My Kustomize files and deploment scripts will create Kubernetes operators, namespaces, persistent volumes (and claims), various services and all sorts of other resources. But I need to be able to drop/delete them all and bring the cluster back to the brand new state.</p>
<p>Is this possible, and if so, whats the process/command(s) involved? FWIW I have cluster admin permissions.</p>
| hotmeatballsoup | <p>As mentioned by @dany L, a Kubernetes namespace is the perfect option for deleting the resources. Create a custom namespace by using the command:</p>
<pre><code>kubectl create namespace custom-name
</code></pre>
<p>and deploy all the resources (Deployment, ReplicaSet, Services, etc.) into that namespace.</p>
<p>To work with the namespace, you need to add the <strong>--namespace</strong> flag to k8s commands.</p>
<p>For example:</p>
<pre><code>kubectl create -f deployment.yaml --namespace=custom-namespace
</code></pre>
<p>If you want to delete all of these resources, you just need to delete the custom namespace: by deleting it, all the resources in it are deleted as well. Without this, a ReplicaSet might create new pods when existing pods are deleted. Run the following command to delete the namespace:</p>
<pre><code>kubectl delete namespace custom-name
</code></pre>
<p>To list all the resources associated with a specific namespace, you can run the following command:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
</code></pre>
<p>The <code>kubectl api-resources</code> command enumerates the resource types available in your cluster, so we can combine it with <strong>kubectl get</strong> to list every instance of every resource type in a Kubernetes namespace.</p>
<p>Refer to this <a href="https://www.studytonight.com/post/how-to-list-all-resources-in-a-kubernetes-namespace" rel="nofollow noreferrer">link</a> to list all the resources in a namespace.</p>
| Srividya |
<p>Could anyone give me an example of how to use</p>
<pre><code>kubectl rollout pause xxx
kubectl rollout update xxx
</code></pre>
<p>in client-go? I can't find any example of it. Thank you~</p>
| edselwang | <p>Maybe something like this:</p>
<pre><code>data := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}}}`, time.Now().String())
resultDeployment, err = p.Client.AppsV1().Deployments(p.Namespace).Patch(context.Background(), deployment.Name, types.StrategicMergePatchType, []byte(data), metav1.PatchOptions{FieldManager: "kubectl-rollout"})
</code></pre>
<p>You can use <code>kubectl</code> with <code>--v=6</code> to see the underlying API requests, for example <code>kubectl get pods --v=6</code>, and then build the same request with client-go.</p>
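<p>For <code>rollout pause</code> specifically there is no dedicated helper in client-go; a sketch (reusing the client variables from the snippet above) is to patch <code>spec.paused</code>, which is what kubectl sets under the hood; patch it back to <code>false</code> to resume:</p>
<pre><code>// Equivalent of `kubectl rollout pause deployment/<name>`: set spec.paused via a patch.
// Needs "context", "k8s.io/apimachinery/pkg/types" and metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
pausePatch := []byte(`{"spec":{"paused":true}}`)
_, err = p.Client.AppsV1().Deployments(p.Namespace).Patch(context.Background(), deployment.Name, types.StrategicMergePatchType, pausePatch, metav1.PatchOptions{})
</code></pre>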
| Yoonga |
<p>I have already tried finding resources and articles online for how to create alerts using Grafana 8 UI about the CPU and/or memory usage of my kubernetes cluster pods, but I couldn't find anything, neither on youtube, google, discord, stackoverflow nor reddit.</p>
<p>Does anyone know any guide on how to do that?</p>
<p>The goal is to literally create an alert rule that will send a slack message when the CPU or Memory usage of my kubernetes cluster pods pass over X%. The slack app to receive the grafana message is working, but I have no idea how would be the grafana query.</p>
<p>PS.: I am using Prometheus and node-exporter.</p>
| João Casarin | <p>You can try this query for creating an alert if the CPU or memory usage is above a threshold (let's say 85%):</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{namespace="$namespace", pod="$pod", container!="POD", container!="", pod!=""}[1m])) by (pod) / sum(kube_pod_container_resource_limits{namespace="$namespace", pod="$pod", resource="cpu"}) by (pod) * 100</code></p>
<p>You can check CPU utilization of all pods in the cluster by running:</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m]))</code></p>
<p>If you want to check the CPU usage of each running pod, you can use:</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m])) by (pod_name).</code></p>
<p>To see actual CPU usage, look at metrics like <code>container_cpu_usage_seconds_total (per container CPU usage)</code> or maybe even <code>process_cpu_seconds_total (per process CPU usage).</code></p>
<p>You can create an alert rule in Grafana by following the steps provided in the <a href="https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-grafana-managed-rule/#add-grafana-managed-rule" rel="nofollow noreferrer">document</a>, and refer to this <a href="https://stackoverflow.com/questions/61361263/grafana-for-kubernettes-shows-cpu-usage-higher-than-100">link</a> for more information.</p>
| Srividya |
<p>I have a resource block for my pod like this:</p>
<blockquote>
<pre><code> resources:
limits:
cpu: 3000m
memory: 512Mi
requests:
memory: 512Mi
</code></pre>
</blockquote>
<p>does it by default take request allocation for CPU (i.e 3000m) which is mentioned in resource limits (3000m). Because in my case it taking 3000m as default cpu in request even though I have not mentioned it.</p>
| gkcld | <p>What you observed is correct, K8s will assign the requests.cpu that matches the limits.cpu when you only define the limits.cpu and not the requests.cpu. Official document <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-specify-a-cpu-limit-but-do-not-specify-a-cpu-request" rel="nofollow noreferrer">here</a>.</p>
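<p>You can confirm the defaulted request on the running pod with a quick check (pod name below is a placeholder):</p>
<pre><code>kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'
# requests.cpu should show the same value as limits.cpu (3 in your case)
</code></pre>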
| gohm'c |
<p>I am trying to get Kubernetes auto-completion going in nvim. I am using Neovim nightly (0.5) with 'neovim-lspconfig' and 'nvim-lua/completion-nvim'.
I installed the yaml-language-server and it is working fine (I know it is working because it shows errors inside YAML files in nvim).</p>
<p>I am fairly new to lua and nvim-lsp, it might just be a syntax error. I try to configure the server with this lua code:</p>
<pre><code>local lspconfig = require'lspconfig'
lspconfig.yamlls.setup{
on_attach = require'completion'.on_attach,
settings = {
yaml.schemas = { kubernetes = "globPattern" },
}
}
</code></pre>
<p>I tried thousand different ways to write it but i always get Errors like:</p>
<blockquote>
<p>Error loading lua [string ":lua"]:5: '}' expected (to close '{' at
line 4) near '='</p>
</blockquote>
<p>The <a href="https://github.com/neovim/nvim-lspconfig/blob/master/CONFIG.md" rel="nofollow noreferrer">documentation</a> just says to add server configs via the settings key. But i am not quite sure how.</p>
<p>Anybody got this going? Thanks a lot.</p>
| Y-Peter | <p>You should change it to</p>
<pre class="lang-lua prettyprint-override"><code>lspconfig.yamlls.setup{
settings = {
yaml = {
schemas = { kubernetes = "globPattern" },
}
}
</code></pre>
| sakis4440 |
<p>I am using google container registry (GCR) to push and pull docker images. I have created a deployment in kubernetes with 3 replicas. The deployment will use a docker image pulled from the GCR.</p>
<p>Out of 3 replicas, 2 are pulling the images and running fine. But the third replica is showing the below error and the pod's status remains "ImagePullBackOff" or "ErrImagePull":</p>
<blockquote>
<p>"Failed to pull image "gcr.io/xxx:yyy": rpc error: code = Unknown desc
= failed to pull and unpack image "gcr.io/xxx:yyy": failed to resolve reference "gcr.io/xxx:yyy": unexpected status code: 401 Unauthorized"</p>
</blockquote>
<p>I am confused like why only one of the replicas is showing the error and the other 2 are running without any issue. Can anyone please clarify this?</p>
<p>Thanks in Advance!</p>
| Soundarya | <p><strong>ImagePullBackOff</strong> and <strong>ErrImagePull</strong> indicate that the image used by a container cannot be loaded from the image registry.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#401_unauthorized_cannot_pull_images_from_private_container_registry_repository" rel="nofollow noreferrer">401 unauthorized error</a> might occur when you pull an image from a private Container Registry repository. For troubleshooting the error:</p>
<ol>
<li><p>Identify the node that runs the pod by <code>kubectl describe pod POD_NAME | grep "Node:"</code></p>
</li>
<li><p>Verify the node has the storage scope by running the command</p>
<pre><code>gcloud compute instances describe NODE_NAME --zone=COMPUTE_ZONE --format="flattened(serviceAccounts[].scopes)"
</code></pre>
</li>
<li><p>The node's access scope should contain at least one of the following:</p>
<p>serviceAccounts[0].scopes[0]: <a href="https://www.googleapis.com/auth/devstorage.read_only" rel="nofollow noreferrer">https://www.googleapis.com/auth/devstorage.read_only</a>
serviceAccounts[0].scopes[0]: <a href="https://www.googleapis.com/auth/cloud-platform" rel="nofollow noreferrer">https://www.googleapis.com/auth/cloud-platform</a></p>
</li>
<li><p>Recreate the node pool that the node belongs to with sufficient scope. You cannot modify the scopes of existing nodes, so you must recreate the node pool with the correct scope.</p>
<ul>
<li><p>Create a new node pool with the gke-default scope by the following command</p>
<pre><code>gcloud container node-pools create NODE_POOL_NAME --cluster=CLUSTER_NAME --zone=COMPUTE_ZONE --scopes="gke-default"
</code></pre>
</li>
<li><p>Create a new node pool with only storage scope</p>
<pre><code>gcloud container node-pools create NODE_POOL_NAME --cluster=CLUSTER_NAME --zone=COMPUTE_ZONE --scopes="https://www.googleapis.com/auth/devstorage.read_only"
</code></pre>
</li>
</ul>
</li>
</ol>
<p>Refer to the <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#ImagePullBackOff" rel="nofollow noreferrer">link</a> for more information on the troubleshooting process.</p>
| Srividya |
<p>I have several secrets that are mounted and need to be read as a properties file. It seems kubernetes can't mount them as a single file so I'm trying to concatenate the files after the pod starts. I tried running a cat command in a postStart handler but it seems execute before the secrets are mounted as I get this error:</p>
<pre><code>Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword": stat cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword: no such file or directory: unknown
</code></pre>
<p>Then here is the yaml.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: K8S_ID
spec:
selector:
matchLabels:
app: K8S_ID
replicas: 1
template:
metadata:
labels:
app: K8S_ID
spec:
containers:
- name: K8S_ID
image: IMAGE_NAME
ports:
- containerPort: 8080
env:
- name: PROPERTIES_FILE
value: "/properties/dbPassword"
volumeMounts:
- name: secret-properties
mountPath: "/properties"
lifecycle:
postStart:
exec:
command: ["cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
volumes:
- name: secret-properties
secret:
secretName: secret-properties
items:
- key: SECRET_ITEM
path: dbPassword
- key: S3Key
path: S3Key
- key: S3Secret
path: S3Secret
</code></pre>
| EricWoody | <p>You need a shell session for your command like this:</p>
<pre><code>...
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c","cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
...
</code></pre>
| gohm'c |
<p>I want to use golang Kubernetes client SDK to cordon specific nodes in my Kubernetes cluster.</p>
<p>According to other posts, I need to pass the following:</p>
<pre><code>PATCH /api/v1/nodes/node-name
Request Body: {"spec":{"unschedulable":true}} Content-Type: "application/strategic-merge-patch+json"
</code></pre>
<p>However, I am not familiar with how to pass that.</p>
<p>I have the following, but not sure if those values are correct</p>
<pre><code>type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func k8NodeCordon() {
clientSet := k8ClientInit()
payload := []patchStringValue{{
Op: "replace",
Path: "/spec/unschedulable",
Value: "true",
}}
payloadBytes, _ := json.Marshal(payload)
_, err := clientSet.
CoreV1().Nodes().Patch()
return err
}
</code></pre>
| popopanda | <pre><code>type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value bool `json:"value"`
}
func k8NodeCordon() {
clientSet := k8ClientInit()
payload := []patchStringValue{{
Op: "replace",
Path: "/spec/unschedulable",
Value: true,
}}
payloadBytes, _ := json.Marshal(payload)
_, err := clientSet.CoreV1().Nodes().Patch("<node_name>", types.JSONPatchType, payloadBytes)
return err
}
</code></pre>
| sully |
<p>I have an AKS cluster with a web application. I want to provision an nginx Ingress controller to expose the app to the internet and later enable TLS.</p>
<p>I have been following the official documentation</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic</a></p>
<p>and</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip</a></p>
<p>But I always end up with a pending nginx-ingress service with this error</p>
<pre><code>reason: SyncLoadBalancerFailed
message: >-
Error syncing load balancer: failed to ensure load balancer: instance not
found
</code></pre>
<p><a href="https://i.stack.imgur.com/hA27b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hA27b.png" alt="enter image description here" /></a></p>
<p>I have seen</p>
<p><a href="https://stackoverflow.com/questions/55625051/how-to-fix-failed-to-ensure-load-balancer-error-for-nginx-ingress">How to fix "failed to ensure load balancer" error for nginx ingress</a></p>
<p>and googled the error but so far no luck</p>
<p>Does anyone know what could it be?</p>
<p>Or, is there some working example I can start from?</p>
| Franco Tiveron | <p>I believe you are using a static IP address with the NGINX Ingress controller service. This issue pops up if the cloud controller manager cannot find the static Azure Public Ip Address resource in the containing resource group mentioned in the NGINX Ingress Controller's service annotation (if no resource group is explicitly specified with a service annotation, it will look for the Azure Public IP Address resource in the <a href="https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks" rel="nofollow noreferrer">AKS cluster's node resource group</a>)</p>
<p>If you have created the static Azure Public IP Address resource in the node resource group then please ensure that the Azure Public IP address resource exists.</p>
<p>If you have created the static Azure Public IP Address resource in a different resource group, then:</p>
<ul>
<li><p>Please ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as Network Contributor.</p>
<pre><code>az role assignment create \
--assignee <Client ID of cluster identity> \
--role "Network Contributor" \
--scope /subscriptions/<subscription id>/resourceGroups/<Public IP address resource group name>
</code></pre>
<p><strong>Note:</strong> Your cluster identity can be a <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal?tabs=azure-cli" rel="nofollow noreferrer">service principal</a> or a <a href="https://learn.microsoft.com/en-us/azure/aks/use-managed-identity" rel="nofollow noreferrer">managed identity</a>.</p>
</li>
<li><p>In the <code>helm install</code> command to deploy an NGINX Ingress Controller, please add the following argument:<br />
<code>--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=$PublicIpAddressResourceGroupName</code></p>
<p>Thus, if you are following <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip#ip-and-dns-label" rel="nofollow noreferrer">this document</a> the helm install command should look something like:</p>
<pre><code># Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \
--set controller.image.image=$CONTROLLER_IMAGE \
--set controller.image.tag=$CONTROLLER_TAG \
--set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
--set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \
--set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
--set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
--set controller.service.loadBalancerIP=$STATIC_IP \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=$PublicIpAddressResourceGroupName
</code></pre>
</li>
</ul>
<p>For more information please check <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip#create-a-service-using-the-static-ip-address" rel="nofollow noreferrer">here</a>.</p>
| Srijit_Bose-MSFT |
<p>I need to get the usage metrics (CPU and RAM) for my Kubernetes pods, but since other components of my app use this data, I need to query for it through Node.js rather than use the Metrics explorer dropdown on the GCP console to just see the data visualized in a chart. I have tried the API at <a href="https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query</a> which seems most like what I'm looking for. However, after testing on my project, I got an empty response, even though the same query in the Metrics Explorer displayed data on the charts. If anyone has tips on how to use this API properly, I'd appreciate it.</p>
| AC34 | <p>I have reproduced your use-case and successfully executed the query and got the Cumulative amount of consumed CPU time on all cores in nanoseconds with <strong>200 OK response</strong>.</p>
<ol>
<li><p>After creating the GKE Cluster, navigate to metrics explorer and select the metric, <code>metric.type="kubernetes.io/container/cpu/core_usage_time"</code> and you can see the data in the chart for the cluster. Now click the "MQL" button to get the same query in MQL syntax.</p>
</li>
<li><p>Now to get the same data by calling <code>projects.timeSeries.query</code>, try it in the <strong><a href="https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query?apix_params=%7B%22name%22%3A%22projects%2Fmy-project-979197%22%2C%22resource%22%3A%7B%22query%22%3A%22fetch%20k8s_container%3A%3Akubernetes.io%2Fcontainer%2Fcpu%2Fcore_usage_time%7C%20within%201m%22%7D%7D#path-parameters" rel="nofollow noreferrer">"Try this API"</a></strong> box in API Explorer by entering the project's ID using the format <code>projects/[PROJECT_ID]</code> in the name parameter. Make sure to replace <strong>[PROJECT_ID]</strong> with your project's ID.</p>
</li>
<li><p>In the request body add the query as<br />
<code>"query": "fetch k8s_container::kubernetes.io/container/cpu/core_usage_time| within 5m"</code></p>
</li>
</ol>
<p>Syntax for Request Body:</p>
<pre><code>{
  "query": "fetch k8s_container::kubernetes.io/container/cpu/core_usage_time | within 5m"
}
</code></pre>
<p>Now, click on the <strong>Execute</strong> button to get the Cumulative amount of consumed CPU time on all cores with 200 OK response.</p>
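<p>Since you need to call this from Node.js rather than the console, note that API Explorer is just issuing a plain REST call; any HTTP client can do the same. A hedged sketch with curl (PROJECT_ID is a placeholder, and authentication here simply reuses your gcloud credentials):</p>
<pre><code>curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"query": "fetch k8s_container::kubernetes.io/container/cpu/core_usage_time | within 5m"}' \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries:query"
</code></pre>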
<p>Refer to the <a href="https://cloud.google.com/monitoring/mql/qn-from-api?authuser=1#ql-timeseries-query" rel="nofollow noreferrer">link</a> for more information on retrieving data with timeseries.query.</p>
| Srividya |
<p>I have a recurring problem where containers in different pods can't communicate with each other.
To make things simple, I created a cluster with only 2 containers in different pods:</p>
<ol>
<li>app that does only one thing: connecting to redis server.</li>
<li>redis-server container</li>
</ol>
<p>To make long story short: I'm keep getting 'connection refused' when trying to connect from the app to redis:</p>
<pre><code>$ kubectl logs app-deployment-86f848b46f-n7672
> [email protected] start
> node ./app.js
LATEST
Error: connect ECONNREFUSED 10.104.95.63:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.104.95.63',
port: 6379
}
</code></pre>
<p>the app identidfy the redis-service successfully but fails to connect</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-service ClusterIP 10.107.18.112 <none> 4000/TCP 2m42s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29h
redis-service ClusterIP 10.104.95.63 <none> 6379/TCP 29h
</code></pre>
<p>the app code:</p>
<pre><code>const redis = require("redis");
const bluebird = require("bluebird");
bluebird.promisifyAll(redis);
console.log('LATEST');
const host = process.env.HOST;
const port = process.env.PORT;
const client = redis.createClient({ host, port });
client.on("error", function (error) {
console.error(error);
});
</code></pre>
<p>app's docker file:</p>
<pre><code>FROM node
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>For the redis server I tried the default redis image, and when it didn't work, I used a custom-made image without binding to a specific IP and with protected-mode disabled.</p>
<p>redis dockerfile:</p>
<pre><code>FROM redis:latest
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
</code></pre>
<p>Finally, I've created 2 deployments with respected ClusterIP services:</p>
<p>app deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
spec:
replicas: 1
selector:
matchLabels:
component: app
template:
metadata:
labels:
component: app
spec:
containers:
- name: app
image: user/redis-app:latest
ports:
- containerPort: 4000
env:
- name: HOST
valueFrom:
configMapKeyRef:
name: app-env
key: HOST
- name: PORT
valueFrom:
configMapKeyRef:
name: app-env
key: PORT
</code></pre>
<p>app service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-service
spec:
type: ClusterIP
selector:
component: app
ports:
- port: 4000
targetPort: 4000
</code></pre>
<p>env file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-env
data:
PORT: "6379"
HOST: "redis-service.default"
</code></pre>
<p>redis deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
spec:
replicas: 1
selector:
matchLabels:
db: redis
template:
metadata:
labels:
db: redis
spec:
containers:
- name: redis
image: user/custome-redis:latest
ports:
- containerPort: 6379
</code></pre>
<p>redis service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
type: ClusterIP
selector:
component: redis
ports:
- protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>Originally, I used a Windows environment with WSL2 and Kubernetes running over Docker with Docker Desktop installed. When it failed, I provisioned a CentOS 8 VM on VirtualBox and installed Kubernetes with minikube - got the same results.</p>
<p>any ideas?....</p>
| AbuJed | <p>Posting an answer out of comments since David Maze found the issue (added as a community wiki, feel free to edit)</p>
<p>It's very important to match labels between pods, deployments, services and other elements.</p>
<p>In the example above, there are different labels used for <code>redis</code> service:</p>
<p><code>component: redis</code> and <code>db: redis</code> which caused this issue.</p>
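<p>Concretely, in the manifests above the redis Deployment labels its pods <code>db: redis</code>, while <code>redis-service</code> selects <code>component: redis</code>, so the service has no endpoints to send traffic to. A minimal fix (assuming you keep the <code>db: redis</code> label on the pod template) is to make the service selector match:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  selector:
    db: redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
</code></pre>
<p>You can verify whether a selector matches anything with <code>kubectl get endpoints redis-service</code>; an empty ENDPOINTS column means no pods are selected.</p>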
| moonkotte |
<h2>Summary</h2>
<p>I currently am in the process of learning kubernetes, as such I have decided to start with an application that is simple (Mumble).</p>
<h2>Setup</h2>
<p>My setup is simple, I have one node (the master) where I have removed the taint so mumble can be deployed on it. This single node is running CentOS Stream but SELinux is disabled.</p>
<h2>The issue</h2>
<p>The <code>/srv/mumble</code> directory appears to be ReadOnly, and at this point I have tried creating an init container to chown the directory but that fails due to the issue above. This issue appears in both containers, and I am unsure at this point how to change this to allow the mumble application to create files in said directory. The mumble application user runs as user 1000. What am I missing here?</p>
<h2>Configs</h2>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Namespace
metadata:
name: mumble
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mumble-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
hostPath:
type: DirectoryOrCreate
path: "/var/lib/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mumble-pv-claim
namespace: mumble
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mumble-config
namespace: mumble
data:
murmur.ini: |
**cut for brevity**
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mumble-deployment
namespace: mumble
labels:
app: mumble
spec:
replicas: 1
selector:
matchLabels:
app: mumble
template:
metadata:
labels:
app: mumble
spec:
initContainers:
- name: storage-setup
image: busybox:latest
command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
securityContext:
privileged: true
runAsUser: 0
volumeMounts:
- mountPath: "/srv/mumble"
name: mumble-pv-storage
readOnly: false
- name: mumble-config
subPath: murmur.ini
mountPath: "/srv/mumble/config.ini"
readOnly: false
containers:
- name: mumble
image: phlak/mumble
ports:
- containerPort: 64738
env:
- name: TZ
value: "America/Denver"
volumeMounts:
- mountPath: "/srv/mumble"
name: mumble-pv-storage
readOnly: false
- name: mumble-config
subPath: murmur.ini
mountPath: "/srv/mumble/config.ini"
readOnly: false
volumes:
- name: mumble-pv-storage
persistentVolumeClaim:
claimName: mumble-pv-claim
- name: mumble-config
configMap:
name: mumble-config
items:
- key: murmur.ini
path: murmur.ini
---
apiVersion: v1
kind: Service
metadata:
name: mumble-service
spec:
selector:
app: mumble
ports:
- port: 64738
</code></pre>
| DaemonSlayer2048 | <p><code>command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]</code></p>
<p>It is not the persistent volume that is read-only: ConfigMap volumes are always mounted read-only, so the recursive chown fails when it reaches the ConfigMap-backed <code>config.ini</code>. Drop <code>-R</code> so only the directory itself is chowned:</p>
<p><code>command: ["sh", "-c", "chown 1000:1000 /srv/mumble"]</code> will work.</p>
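<p>You can verify the ownership afterwards (container and namespace names taken from your manifests):</p>
<pre><code>kubectl exec -n mumble deploy/mumble-deployment -c mumble -- ls -ld /srv/mumble
</code></pre>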
| gohm'c |
<p>I am using Kubernetes in Azure with Virtual Nodes, a plugin that creates virtual nodes using Azure Container Instances.</p>
<p>The instructions to set this up require creating a AKSNet/AKSSubnet which seems to automatically come along with A VMSS called something like. aks-control-xxx-vmss I followed the instruction on the link below.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli</a></p>
<p>This comes with a single instance VM that I am being charged full price for regardless of container instances I create and I am charged extra for every container instance I provision onto my virtual node pool even if they should fit on just 1 VM. These resources do not seem to be related.</p>
<p>I am currently having this out as unexpected billing with Microsoft but the process has been very slow so I am reverting to here to find out if anyone else has had this experience?</p>
<p>The main questions I have are:</p>
<ul>
<li>Can I use Azure Container Instances without the VMSS?</li>
<li>If not can I somehow make this VM visible to my cluster so I can at least use it to
provision containers onto and get some value out of it?</li>
<li>Have I just done something wrong?</li>
</ul>
<p>Update, NB: this is not my control node that is a B2s which I can see my system containers running on.</p>
<p>Any advice would be a great help.</p>
| Lenny D | <blockquote>
<p>Can I use Azure Container Instances without the VMSS?</p>
</blockquote>
<p>In an AKS cluster currently you <em><strong>cannot</strong></em> have virtual nodes without a <strong>node pool</strong> of type <code>VirtualMachineScaleSets</code> or <code>AvailabilitySet</code>. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime. [<a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#nodes-and-node-pools" rel="nofollow noreferrer">Reference</a>] Every AKS cluster must contain at least one system node pool with at least one node. System node pools serve the primary purpose of hosting critical system pods such as <code>CoreDNS</code>, <code>kube-proxy</code> and <code>metrics-server</code>. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster.</p>
<p>For more information on System Node Pools please check <a href="https://learn.microsoft.com/en-us/azure/aks/use-system-pools" rel="nofollow noreferrer">this document</a>.</p>
<p>In fact, if you run <code>kubectl get pods -n kube-system -o wide</code> you will see all the system pods running on the VMSS-backed node pool node including the aci-connector-linux-xxxxxxxx-xxxxx pod which connects the cluster to the virtual node, as shown below:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
aci-connector-linux-859d9ff5-24tgq 1/1 Running 0 49m 10.240.0.30 aks-nodepool1-29819654-vmss000000 <none> <none>
azure-cni-networkmonitor-7zcvf 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 <none> <none>
azure-ip-masq-agent-tdhnx 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-autoscaler-6699988865-k7cs5 1/1 Running 0 58m 10.240.0.31 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-d4866bcb7-4r9tj 1/1 Running 0 49m 10.240.0.12 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-d4866bcb7-5vkhc 1/1 Running 0 58m 10.240.0.28 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-d4866bcb7-b7bzg 1/1 Running 0 49m 10.240.0.11 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-d4866bcb7-fltbf 1/1 Running 0 49m 10.240.0.29 aks-nodepool1-29819654-vmss000000 <none> <none>
coredns-d4866bcb7-n94tg 1/1 Running 0 57m 10.240.0.34 aks-nodepool1-29819654-vmss000000 <none> <none>
konnectivity-agent-7564955db-f4fm6 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 <none> <none>
kube-proxy-lntqs 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 <none> <none>
metrics-server-97958786-bmmv9 1/1 Running 1 58m 10.240.0.24 aks-nodepool1-29819654-vmss000000 <none> <none>
</code></pre>
<p>However, you can deploy <a href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-overview" rel="nofollow noreferrer">Azure Container Instances</a> [<a href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart" rel="nofollow noreferrer">How-to</a>] without an AKS cluster altogether. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend <a href="https://learn.microsoft.com/en-us/azure/aks/" rel="nofollow noreferrer">Azure Kubernetes Service (AKS)</a>.</p>
<blockquote>
<p>If not can I somehow make this VM visible to my cluster so I can at least use it to provision containers onto and get some value out of it?</p>
</blockquote>
<p>Absolutely, you can. In fact if you do a <code>kubectl get nodes</code> and the node from the VMSS-backed node pool (in your case aks-control-xxx-vmss-x) shows <code>STATUS</code> as Ready, then it is available to the <code>kube-scheduler</code> for <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">scheduling</a> workloads. Please check <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">this document</a>.</p>
<p>If you do a <code>kubectl describe node virtual-node-aci-linux</code> you should find the following in the output:</p>
<pre><code>...
Labels: alpha.service-controller.kubernetes.io/exclude-balancer=true
beta.kubernetes.io/os=linux
kubernetes.azure.com/managed=false
kubernetes.azure.com/role=agent
kubernetes.io/hostname=virtual-node-aci-linux
kubernetes.io/role=agent
node-role.kubernetes.io/agent=
node.kubernetes.io/exclude-from-external-load-balancers=true
type=virtual-kubelet
...
Taints: virtual-kubelet.io/provider=azure:NoSchedule
...
</code></pre>
<p>In <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli" rel="nofollow noreferrer">the document</a> that you are following, in the <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli#deploy-a-sample-app" rel="nofollow noreferrer">Deploy a sample app section</a> to schedule the container on the virtual node, a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">nodeSelector</a> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">toleration</a> are defined in the <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli#deploy-a-sample-app" rel="nofollow noreferrer">Deploy a sample app section</a> as follows:</p>
<pre><code>...
nodeSelector:
kubernetes.io/role: agent
beta.kubernetes.io/os: linux
type: virtual-kubelet
tolerations:
- key: virtual-kubelet.io/provider
operator: Exists
- key: azure.com/aci
effect: NoSchedule
</code></pre>
<p>If you remove this part from the <em>Deployment</em> manifest, or do not specify this part in the manifest of a workload that you are deploying, then the corresponding resource(s) will be scheduled on a VMSS-backed node.</p>
<blockquote>
<p>Have I just done something wrong?</p>
</blockquote>
<p>Maybe you can evaluate the answer to this based on my responses to your earlier questions. However, here's a little more to help you understand:</p>
<p>If a node doesn't have sufficient compute resources to run a requested pod, that pod can't progress through the scheduling process. The pod can't start unless additional compute resources are available within the node pool.</p>
<p>When the cluster autoscaler notices pods that can't be scheduled because of node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.</p>
<p>If your application needs to scale rapidly, some pods may remain in a state waiting to be scheduled until the additional nodes deployed by the cluster autoscaler can accept the scheduled pods. For applications that have high burst demands, you can scale with virtual nodes and Azure Container Instances.</p>
<p>This however <strong>does not mean that we can dispense with the VMSS or Availability Set backed node pools</strong>.</p>
| Srijit_Bose-MSFT |
<p>I'm dealing with a certain service that causes (seemingly unresolvable) memory leaks from time to time. RAM memory inside of the container/pod grows high and stays that way. The pod soon becomes unusable and I would very much like to configure Kubernetes to mark those pods for termination and stop routing to them.</p>
<p>For example, if RAM reaches 80% inside the pod can I configure Kubernetes to shut down such pods?</p>
<p>Any links you can share about this would be great.</p>
<p>Thanks</p>
| Gervasius Twinklewinkleson | <p>There are two different types of <a href="https://learnk8s.io/setting-cpu-memory-limits-requests" rel="nofollow noreferrer">resource configurations</a> that can be set on each container of a pod.</p>
<p>They are <strong>requests</strong> and <strong>limits.</strong></p>
<p>Requests define the minimum amount of resources that containers need.</p>
<p>If you think that your app requires at least 256MB of memory to operate, this is the request value.</p>
<p>The application can use more than 256MB, but Kubernetes guarantees a minimum of 256MB to the container.</p>
<p>On the other hand, <strong>limits define the max amount of resources that the container can consume.</strong></p>
<p>Your application might require at least 256MB of memory, but you might want to be sure that it doesn't consume more than 1GB of memory. This is your <strong>limit</strong>.</p>
<p>You can raise the request up to the limit, i.e. 1GB of memory here. If a container's memory usage goes over the limit, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Kubernetes</a> OOM-kills and restarts that container (CPU above the limit is only throttled), which gives you the "shut down at a threshold" behaviour you are after.</p>
<p>For Example:</p>
<pre><code>resources:
  limits:
    memory: "1Gi"
  requests:
    memory: "256Mi"
</code></pre>
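<p>You can confirm that a container was killed for exceeding its memory limit with (pod name is a placeholder):</p>
<pre><code>kubectl describe pod <pod-name>
# look for: Last State: Terminated, Reason: OOMKilled
</code></pre>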
| Srividya |
<p>Below is my output of <code>kubectl get deploy --all-namespaces</code>:</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-23:59 Australia/Sydney",
"name": "actiontest-v2.0.9",
"namespace": "actiontest",
},
"spec": {
......
......
},
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-21:00 Australia/Sydney",
"name": "anotherapp-v0.1.10",
"namespace": "anotherapp",
},
"spec": {
......
......
}
}
</code></pre>
<p>I need to find the name of the deployment and its namespace if the annotation <code>"downscaler/uptime"</code> matches the value <code>"Mon-Fri 07:00-21:00 Australia/Sydney"</code>. I am expecting an output like below:</p>
<pre><code>deployment_name,namespace
</code></pre>
<p>If I am running below query against a single deployment, I get the required output.</p>
<pre><code>#kubectl get deploy -n anotherapp -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
anotherapp-v0.1.10,anotherapp
</code></pre>
<p>But when I run it against all namespaces, I am getting an output like below:</p>
<pre><code>#kubectl get deploy --all-namespaces -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
actiontest-v2.0.9 anotherapp-v0.1.10, actiontest anotherapp
</code></pre>
| Vineeth Elias | <p>This is quite a short answer; you can use this option:</p>
<pre><code>kubectl get deploy --all-namespaces -o jsonpath='{range .items[?(.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")]}{.metadata.name}{"\t"}{.metadata.namespace}{"\n"}'
</code></pre>
<p>What I changed is logic how to work with data:</p>
<p>First thing what happens is getting into <code>range</code> list of elements we need to work on, not everything. I used <a href="https://support.smartbear.com/alertsite/docs/monitors/api/endpoint/jsonpath.html" rel="nofollow noreferrer">filter expression - see Jsonpath notation - syntax elements</a>.</p>
<p>And once we have already filtered entities in the list, we can easily retrieve other fields we need.</p>
| moonkotte |
<p>I am trying to make use of <code>amazon/aws-cli</code> docker image for downloading all files from s3 bucket through initcontainer and mount the same volume to the main container.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-deployment
name: test-deployment
spec:
replicas: 1
selector:
matchLabels:
app: test-deployment
template:
metadata:
labels:
app: test-deployment
spec:
securityContext:
fsGroup: 2000
serviceAccountName: "s3-sa" #Name of the SA we βre using
automountServiceAccountToken: true
initContainers:
- name: data-extension
image: amazon/aws-cli
volumeMounts:
- name: data
mountPath: /data
command:
- aws s3 sync s3://some-bucket/ /data
containers:
- image: amazon/aws-cli
name: aws
command: ["sleep","10000"]
volumeMounts:
- name: data
mountPath: "/data"
volumes:
- name: data
emptyDir: {}
</code></pre>
<p>But it does not seem to work. It causes the init container to CrashLoopBackOff with this
error:</p>
<pre><code>Error: failed to start container "data-extension": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "aws s3 sync s3://some-bucket/ /data": stat aws s3 sync s3://some-bucket/ /data: no such file or directory: unknown
</code></pre>
| Naveen Kerati | <p>Your <code>command</code> needs an update: the exec (array) form has no shell to word-split the string, so each argument must be its own list item:</p>
<pre><code>...
command:
- "aws"
- "s3"
- "sync"
- "s3://some-bucket/"
- "/data"
...
</code></pre>
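<p>Alternatively, if you prefer to keep the command as a single string, run it through a shell (a sketch; this overrides the image's <code>aws</code> entrypoint and assumes the image ships <code>/bin/sh</code>, which the Amazon Linux based <code>amazon/aws-cli</code> image does):</p>
<pre><code>...
command: ["/bin/sh", "-c", "aws s3 sync s3://some-bucket/ /data"]
...
</code></pre>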
| gohm'c |
<p>I am testing Flink autoscaler with a Kubernetes setup using Flink Kubernetes Operator 1.5.0.</p>
<p>In the docs it says:</p>
<blockquote>
<p>In the current state the autoscaler works best with Kafka sources, as they expose all the standardized metrics. It also comes with some additional benefits when using Kafka such as automatically detecting and limiting source max parallelism to the number of Kafka partitions.</p>
</blockquote>
<p>How is that relevant? I mean, as I understand it scales based on % busy of each vertex.</p>
<p>Does that paragraph mean that sources other than Kafka may not report % busy, or that it can use Kafka lag metric to scale?</p>
<p>Ideally I'd like to scale based on Kafka lag, but I am not sure if this metric is available inside Flink</p>
| Raúl García | <p>I would recommend reading FLIP-271, which introduced autoscaling to the Kubernetes Operator. See <a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling" rel="nofollow noreferrer">https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling</a></p>
<p>The goal of the autoscaler algorithm is "to yield a resource efficient backpressure-free configuration in very few amount of scaling decisions."
In order to achieve that, it uses multiple metrics, all of which are available out-of-the-box for the Flink Kafka connector.</p>
<p>Kafka lag itself isn't relevant to Flink. Flink only commits its offsets during snapshotting, to help with monitoring results in Kafka, but it doesn't need that for its fault tolerance. It also means that Kafka lag will increase until the moment Flink snapshots, but Flink has actually continued reading messages from Kafka. It just hasn't committed the offsets yet.</p>
| Martijn Visser |
<p>I have the following pods in the <code>default</code> namespace:</p>
<pre><code>web-test-pod-01 1/1 Running 0 19m app=web-test-pod-01
web-test-pod-02 1/1 Running 0 18m app=web-test-pod-02
</code></pre>
<p>And in another namespace called <code>devwebapp</code> I have the following</p>
<pre><code>NAME READY STATUS RESTARTS AGE LABELS
pod/webapp-01 1/1 Running 0 47m run=webapp-01
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
service/svc-webapp-01 ClusterIP 10.109.4.169 <none> 80/TCP 46m run=webapp-01
</code></pre>
<p>I also have network policy called <code>np-webapp-01</code> and its yaml descriptor:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: np-webapp-01
namespace: devwebapp
spec:
podSelector:
matchLabels:
run: webapp-01
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: default
- podSelector:
matchLabels:
app: web-test-pod-01
ports:
- protocol: TCP
port: 80
</code></pre>
<p>I am trying to allow only the pod <code>web-test-pod-01</code> in <code>default</code> namespace to access the <code>svc-webapp-01</code> service but at the moment all pods in <code>default</code> namespace can access it.</p>
<pre><code>$ k exec web-test-pod-01 -- curl -I svc-webapp-01.devwebapp.svc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0HTTP/1.1 200 OK 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Server: nginx/1.23.4
Date: Thu, 18 May 2023 08:32:34 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Mar 2023 15:01:54 GMT
Connection: keep-alive
ETag: "64230162-267"
Accept-Ranges: bytes
</code></pre>
<p>The following pod should not be able to access the service but as of now it can reach it!</p>
<pre><code>$ k exec web-test-pod-02 -- curl -I svc-webapp-01.devwebapp.svc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0HTTP/1.1 200 OK
0 615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Server: nginx/1.23.4
Date: Thu, 18 May 2023 08:33:21 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Mar 2023 15:01:54 GMT
Connection: keep-alive
ETag: "64230162-267"
Accept-Ranges: bytes
</code></pre>
<p>I am not sure why <code>podSelector</code> in the network policy is not taking effect.</p>
| Khaled | <p>In network policy for selecting pod and namespace we have two conditions . You can find them in this <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/07-allow-traffic-from-some-pods-in-another-namespace.md#remarks" rel="nofollow noreferrer">git link</a>.</p>
<p>This example below is OR condition(policy is enforced based on namespaceSelector or podSelector)</p>
<pre><code>ingress:
- from:
- namespaceSelector:
matchLabels:
team: operations
- podSelector:
matchLabels:
type: monitoring
</code></pre>
<p>You have used the above condition.</p>
<p>while this example is AND condition</p>
<pre><code>ingress:
- from:
- namespaceSelector:
matchLabels:
team: operations
podSelector:
matchLabels:
type: monitoring
</code></pre>
<p>Can you try the "AND" condition and let me know if this works.</p>
<p>Attaching a <a href="https://loft.sh/blog/kubernetes-network-policies-for-isolating-namespaces/#:%7E:text=We%20can%20use%20the%20following%20network%20policy%20to%20allow%20traffic%20from%20another%20namespace%20matching%20label%20env%3Dstaging." rel="nofollow noreferrer">blog</a> written by Ashish Choudhary for reference.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: np-webapp-01
namespace: devwebapp
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: default
podSelector:
matchLabels:
app: web-test-pod-01
ports:
- port: 80
</code></pre>
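<p>After applying it, you can re-run the checks from the question to confirm enforcement (assuming your CNI plugin actually implements NetworkPolicy; <code>-m 5</code> just caps how long curl waits):</p>
<pre><code>kubectl exec web-test-pod-01 -- curl -m 5 -I svc-webapp-01.devwebapp.svc   # should still return 200
kubectl exec web-test-pod-02 -- curl -m 5 -I svc-webapp-01.devwebapp.svc   # should now time out
</code></pre>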
| Srividya |
<p>The size of the logs generated by the k8s scheduler is very big.</p>
<p>How can I change the log level (info, debug, warn) of the scheduler in an already established k8s cluster?</p>
| karlos | <p>Open <code>/etc/kubernetes/manifests/kube-scheduler.yaml</code> on your master node and modify the <code>--v=<0-9></code> flag (a smaller number reduces verbosity); a sketch follows below. If you are using a cloud-provided K8s, check the provider's documentation to see whether the verbosity level can be configured.</p>
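<p>For example, on a kubeadm-style control plane the relevant part of the static pod manifest looks roughly like this (a sketch with other flags omitted; the kubelet restarts the scheduler automatically when the file is saved):</p>
<pre><code>spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --v=2        # lower this value to reduce log verbosity
</code></pre>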
| gohm'c |
<p>We created the GKE cluster with the <strong>public</strong> endpoint. The <strong>service account</strong> of the GKE <strong>cluster</strong> and <strong>Node pool</strong> has the following roles.</p>
<pre><code>"roles/compute.admin",
"roles/compute.viewer",
"roles/compute.securityAdmin",
"roles/iam.serviceAccountUser",
"roles/iam.serviceAccountAdmin",
"roles/resourcemanager.projectIamAdmin",
"roles/container.admin",
"roles/artifactregistry.admin",
"roles/storage.admin"
</code></pre>
<p>The node pool of the GKE cluster has the following <strong>OAuth scopes</strong></p>
<pre><code>"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/devstorage.read_write",
</code></pre>
<p>The <strong>private</strong> GCS bucket has the same service account as the principal, with the <strong>storage admin</strong> role.</p>
<p>When we try to read/write in this bucket from a GKE POD, we get the below error.</p>
<pre><code># Read
AccessDeniedException: 403 Caller does not have storage.objects.list access to the Google Cloud Storage bucket
# Write
AccessDeniedException: 403 Caller does not have storage.objects.create access to the Google Cloud Storage object
</code></pre>
<p>We also checked this <a href="https://stackoverflow.com/questions/46497002/gcs-write-access-from-inside-a-gke-pod">thread</a> but the solution was credential oriented and couldn't help us. <strong>We would like to read/write without maintaining the SA auth key or any sort of credentials</strong>.</p>
<p>Please guide what is missing here.</p>
<hr />
<p><strong>UPDATE</strong>: as per the suggestion by @boredabdel we checked and found that <code>workload identity</code> was already enabled on the GKE cluster as well as NodePool. We are using this <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/beta-private-cluster" rel="nofollow noreferrer">module</a> to create our cluster where it is already enabled by default. Still, we are facing connectivity issues.</p>
<p><strong>Cluster</strong>
Security:</p>
<p><a href="https://i.stack.imgur.com/oEeIW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oEeIW.png" alt="enter image description here" /></a></p>
<p><strong>NodePool</strong>
Security:</p>
<p><a href="https://i.stack.imgur.com/MH0Fw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MH0Fw.png" alt="enter image description here" /></a></p>
| Nitin G | <p>Seems like you are trying to use the Node Service Account to authenticate to GCS. You would need to pass a Service Account key to the app that calls the API, as described in this <a href="https://cloud.google.com/docs/authentication/production#automatically" rel="nofollow noreferrer">doc</a>.</p>
<p>If you want Keyless authentication, my advice is to use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload identity</a></p>
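<p>Since Workload Identity is already enabled on your cluster and node pool, what is usually still missing is the binding between the Kubernetes ServiceAccount (KSA) your pods run as and the Google Service Account (GSA) that has storage access. A hedged sketch (GSA_NAME, PROJECT_ID, NAMESPACE and KSA_NAME are placeholders):</p>
<pre><code># allow the KSA to impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# tell GKE which GSA the KSA maps to
kubectl annotate serviceaccount KSA_NAME -n NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre>
<p>The pods then need <code>spec.serviceAccountName: KSA_NAME</code> so they actually run under that KSA.</p>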
| boredabdel |
<p>I have the following code in my values.yaml file:</p>
<pre><code>ingress:
kind: Ingress
hostname: auth.localhost
enabled: true
metadata: fusionauth-ingress
hosts:
- host: auth.local
spec:
rules:
http:
paths: "/"
path:
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
serviceName: fusionauth
servicePort: 9011
</code></pre>
<p>When I run: <code>helm upgrade --install fusionauth-init --values fusionauth/values.yaml fusionauth</code></p>
<p>I get the following error: <code>Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths" in io.k8s.api.networking.v1.HTTPIngressRuleValue</code></p>
<p>I am new to Helm but I cannot seem to find where the error in my logic is.</p>
<p>Thanks in advance for the help.</p>
| cp-stack | <p>Based on the <a href="https://github.com/FusionAuth/charts/blob/0f86a961777108139fcd600ead7c08a28a1ea058/chart/values.yaml#L172" rel="nofollow noreferrer">chart source</a>, your values are all invalid. You cannot copy a raw K8s Ingress spec into the FusionAuth chart; you need to follow the <a href="https://github.com/FusionAuth/charts/blob/master/chart/values.yaml" rel="nofollow noreferrer">structure of the chart's values.yaml</a> (the chart value meanings are <a href="https://github.com/FusionAuth/charts#chart-values" rel="nofollow noreferrer">here</a>). If you previously followed the instructions <a href="https://fusionauth.io/docs/v1/tech/installation-guide/kubernetes/fusionauth-deployment/#create-an-ingress" rel="nofollow noreferrer">here</a> to create a K8s Ingress resource, you <strong>do not</strong> need to deploy an Ingress again using helm.</p>
| gohm'c |
<p>I am kind of stuck with running a docker container as part of a kubernetes job and specifying runtime arguments in the job template.</p>
<p>My Dockerfile specifies an entrypoint and no CMD directive:</p>
<pre><code>ENTRYPOINT ["python", "script.py"]
</code></pre>
<p>From what I understand, this means that when running the docker image and specifying arguments, the container will run using the entrypoint specified in the Dockerfile and pass the arguments to it. I can confirm that this is actually working, because running the container using docker does the trick:</p>
<pre><code>docker run --rm image -e foo -b bar
</code></pre>
<p>In my case this will start script.py, which is using argument parser to parse named arguments, with the intended arguments.</p>
<p>The problem starts to arise when I am using a kubernetes job to do the same:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pipeline
spec:
template:
spec:
containers:
- name: pipeline
image: test
args: ["-e", "foo", "-b", "bar"]
</code></pre>
<p>In the pod that gets deployed the correct entrypoint will be run, but the specified arguments vanish. I also tried specifying the arguments like this:</p>
<pre><code>args: ["-e foo", "-b bar"]
</code></pre>
<p>But this didn't help either. I don't know why this is not working, because the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">documentation</a> clearly states: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied." The default entrypoint is running, that is correct, but the arguments are lost between kubernetes and docker.</p>
<p>Does somebody know what I am doing wrong?</p>
| mike | <p>I actually got it working using the following yaml syntax:</p>
<pre><code>args:
- "-e"
- "foo"
- "-b"
- "bar"
</code></pre>
<p>The array syntax that I used beforehand seems not to be working at all as everything was passed to the -e argument of my script like this:</p>
<pre><code>-e " foo -b bar"
</code></pre>
<p>That's why the <code>-b</code> argument was marked as missing even though the arguments were populated in the container.</p>
| mike |
<p>I have a setup of WSL2 and Kind.</p>
<p>Whenever I restart my computer and open a new WSL2 window, I can no longer use kubectl to access my kind cluster.</p>
<pre><code> kind get clusters
>>> kind
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kind-kind kind-kind kind-kind
kubectl get pods
>>> The connection to the server 127.0.0.1:40307 was refused - did you specify the right host or port?`
</code></pre>
<p>However, I can create a new kind cluster and access it.</p>
<pre><code> kind create cluster --name kind3
kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind3-control-plane Ready control-plane 8m21s v1.26.3
</code></pre>
<p>But if I restart my computer, once again I will not be able to access the cluster <code>kind3</code></p>
<p>What am I doing wrong? Or what do I need to change in my configuration files to be able to access any old clusters after computer restart?</p>
| Kspr | <p>This is a known issue, mentioned in this <a href="https://github.com/kubernetes-sigs/kind/issues/3165" rel="nofollow noreferrer">github link</a>. Check your WSL2 version in PowerShell or Windows Command Prompt by running <strong><code>wsl -l -v</code></strong>; the fix is included in WSL2 v1.2.3/1.2.4. If the issue still persists on 1.2.4, run <strong><code>wsl --update --pre-release</code></strong> from an admin shell to get a build that contains the fix.</p>
| Srividya |
<p>I have an application that ICMP Pings out to IP addresses from GCP K8s running on AutoPilot.</p>
<p>I tried setting the below first but it turns out GCP K8s on AutoPilot disable that permission if you add it.</p>
<pre><code> securityContext:
capabilities:
add: ["NET_RAW"]
</code></pre>
<p>Next I tried setting net.ipv4.ping_group_range like the below in my deployment file.</p>
<pre><code> securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
sysctls:
- name: kernel.shm_rmid_forced
value: "1"
- name: net.ipv4.ping_group_range
value: "0 200000000"
</code></pre>
<p>But the above gives me the error below when I try and deploy it.</p>
<pre><code>strict decoding error: unknown field "spec.template.spec.containers[0].securityContext.sysctls"
</code></pre>
<p>I've also tried</p>
<pre><code> command: ["/bin/sh"]
args:
- -c
- sysctl -w net.ipv4.ping_group_range=0 2000000
</code></pre>
<p>In my deployment file but that gives me an error of permission denied when the container tries to start.</p>
<p>Is it possible to ICMP Ping an IP address from a GCP K8 AutoPilot cluster or how do I apply sysctl -w net.ipv4.ping_group_range correctly?</p>
| Mr J | <p>GKE Autopilot clusters always use <a href="https://cloud.google.com/container-optimized-os/docs/concepts/features-and-benefits#limitations" rel="nofollow noreferrer">Container-Optimized OS</a> with containerd as their node image. In Autopilot clusters the Container-Optimized OS kernel is locked down, so write access to kernel settings is restricted to read-only.</p>
<p>That is why the cluster denies the permissions needed to set those kernel parameters.</p>
<p>If you need to modify these attributes, you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster" rel="nofollow noreferrer"><strong>GKE Standard cluster</strong></a>, which provides that flexibility.</p>
| Srividya |
<p>I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install Kubernetes Dashboard.</p>
<p>I followed this link:</p>
<p><a href="https://github.com/kubernetes/dashboard#getting-started" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard#getting-started</a></p>
<p>And I executed my first command in Powershell as an administrator:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>error: error validating
"https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data:
ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in
io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these
errors, turn validation off with --validate=false</p>
</blockquote>
<p>In which case I tried to use the same command with --validate=false.</p>
<p>Then it went and gave no errors and when I execute :</p>
<pre><code>kubectl proxy
</code></pre>
<p>I got an access token using:</p>
<pre><code>kubectl describe secret -n kube-system
</code></pre>
<p>and I try to access the link as provided in the guide :</p>
<p>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</p>
<p>I get the following swagger response:</p>
<p><a href="https://i.stack.imgur.com/Dw9DH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dw9DH.png" alt="enter image description here" /></a></p>
| grozdeto | <p>The error indicates that your cluster version is too old to understand <code>seccompProfile.type: RuntimeDefault</code> (the field was added in Kubernetes 1.19). In this case, don't apply the dashboard spec (<a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml</a>) right away; download it and comment out the following lines in the spec:</p>
<pre><code>...
spec:
# securityContext:
# seccompProfile:
# type: RuntimeDefault
...
</code></pre>
<p>Then you apply the updated spec <code>kubectl apply -f recommended.yaml</code>.</p>
| gohm'c |
<p>I recently upgraded <code>ingress-nginx</code> to version 1.0.3.</p>
<p>As a result, I removed the <code>kubernetes.io/ingress.class</code> annotation from my ingress, and put <code>.spec.ingressClassName</code> instead.</p>
<p>I am running <code>cert-manager-v1.4.0</code>.</p>
<p>This morning I had an email saying that my Let's Encrypt certificate will expire in 10 days. I tried to figure out what was wrong with it - not positive that it was entirely due to the ingress-nginx upgrade.</p>
<p>I deleted the <code>CertificateRequest</code> to see if it would fix itself. I got a new <code>Ingress</code> with the challenge, but:</p>
<ol>
<li><p>The challenge ingress had the <code>kubernetes.io/ingress.class</code> annotation set correctly, even though my ingress has <code>.spec.ingressClassName</code> instead - don't know how or why, but it seems like it should be OK.</p>
</li>
<li><p>However, the challenge ingress wasn't picked up by the ingress controller, it said:</p>
</li>
</ol>
<p><code>ingress class annotation is not equal to the expected by Ingress Controller</code></p>
<p>I guess it wants only the <code>.spec.ingressClassName</code> even though I thought the annotation was supposed to work as well.</p>
<p>So I manually set <code>.spec.ingressClassName</code> on the challenge ingress. It was immediately seen by the ingress controller, and the rest of the process ran smoothly, and I got a new cert - yay.</p>
<p>It seems to me like this will happen again, so I need to know how to either:</p>
<ol>
<li><p>Convince <code>cert-manager</code> to create the challenge ingress with <code>.spec.ingressClassName</code> instead of <code>kubernetes.io/ingress.class</code>. Maybe this is fixed in 1.5 or 1.6?</p>
</li>
<li><p>Convince <code>ingress-nginx</code> to respect the <code>kubernetes.io/ingress.class</code> annotation for the challenge ingress. I don't know why this doesn't work.</p>
</li>
</ol>
| e.dan | <h2>Issue</h2>
<p>The issue was fixed by certificate renewal; it works fine without manually setting <code>spec.ingressClassName</code> on the challenge ingress (I saw that behaviour with an older version), so the issue was somewhere else.</p>
<p>Also with last available (at the writing moment) <code>cert-manager v1.5.4</code> challenge ingress has the right setup "out of the box":</p>
<pre><code>spec:
ingressClassName: nginx
---
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-szxfg nginx dummy-host ip_address 80 11s
</code></pre>
<h2>How it works (concept)</h2>
<p>I'll describe the main steps of how this process works so troubleshooting will be straightforward in almost all cases. I'll take <code>letsencrypt staging</code> as the <code>issuer</code>.</p>
<p>There's a chain when <code>certificate</code> is requested to be created which <code>issuer</code> follows to complete (all resources have owners - previous resource in chain):</p>
<p><code>main ingress resource</code> -> <code>certificate</code> -> <code>certificaterequest</code> -> <code>order</code> -> <code>challenge</code> -> <code>challenge ingress</code>.</p>
<p>Knowing this, if something failed, you can go down by the chain and using <code>kubectl describe</code> command find where the issue appeared.</p>
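<p>For example, all links of that chain can be listed at once, and the failing one described, like this (a sketch; namespace and resource names are placeholders):</p>
<pre><code>kubectl get certificate,certificaterequest,order,challenge -n NAMESPACE
kubectl describe challenge CHALLENGE_NAME -n NAMESPACE
</code></pre>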
<h2>Troubleshooting example</h2>
<p>I intentionally added a wrong domain to the ingress <code>.spec.tls.hosts</code> and applied it. Below is how the chain will look (all names will be unique!):</p>
<p>See certificates:</p>
<pre><code>$ kubectl get cert
NAME READY SECRET AGE
lets-secret-test-2 False lets-secret-test-2 15m
</code></pre>
<p>Describe <code>certificate</code> we are interested in (you can notice I changed domain, there was already secret):</p>
<pre><code>$ kubectl describe cert lets-secret-test-2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 16m cert-manager Existing issued Secret is not up to date for spec: [spec.commonName spec.dnsNames]
Normal Reused 16m cert-manager Reusing private key stored in existing Secret resource "lets-secret-test-2"
Normal Requested 16m cert-manager Created new CertificateRequest resource "lets-secret-test-2-pvb25"
</code></pre>
<p>Nothing suspicious here, moving forward.</p>
<pre><code>$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
lets-secret-test-2-pvb25 True False letsencrypt-staging system:serviceaccount:cert-manager:cert-manager 19m
</code></pre>
<p>Describing <code>certificaterequest</code>:</p>
<pre><code>$ kubectl describe certificaterequest lets-secret-test-2-pvb25
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 19m cert-manager Certificate request has been approved by cert-manager.io
Normal OrderCreated 19m cert-manager Created Order resource default/lets-secret-test-2-pvb25-2336849393
</code></pre>
<p>Again, everything looks fine, no errors, moving forward to <code>order</code>:</p>
<pre><code>$ kubectl get order
NAME STATE AGE
lets-secret-test-2-pvb25-2336849393 pending 21m
</code></pre>
<p>It says <code>pending</code>, that's closer:</p>
<pre><code>$ kubectl describe order lets-secret-test-2-pvb25-2336849393
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 21m cert-manager Created Challenge resource "lets-secret-test-2-pvb25-2336849393-3788447910" for domain "dummy-domain"
</code></pre>
<p><code>Challenge</code> may shed some light, moving forward:</p>
<pre><code>$ kubectl get challenge
NAME STATE DOMAIN AGE
lets-secret-test-2-pvb25-2336849393-3788447910 pending dummy-domain 23m
</code></pre>
<p>Describing it:</p>
<pre><code>$ kubectl describe challenge lets-secret-test-2-pvb25-2336849393-3788447910
</code></pre>
<p>Checking <code>status</code>:</p>
<pre><code>Status:
Presented: true
Processing: true
Reason: Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://dummy-domain/.well-known/acme-challenge/xxxxyyyyzzzzz': Get "http://dummy-domain/.well-known/acme-challenge/xxxxyyyyzzzzz": dial tcp: lookup dummy-domain on xx.yy.zz.ww:53: no such host
State: pending
</code></pre>
<p>Now it's clear that something is wrong with the <code>domain</code>, so it's worth checking:</p>
<p>Found and fixed the "mistake":</p>
<pre><code>$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ingress configured
</code></pre>
<p>Certificate is <code>ready</code>!</p>
<pre><code>$ kubectl get cert
NAME READY SECRET AGE
lets-secret-test-2 True lets-secret-test-2 26m
</code></pre>
<h2>Correct way to renew a certificate using cert-manager</h2>
<p>It's possible to renew a certificate by deleting corresponding secret, however <a href="https://cert-manager.io/docs/usage/certificate/#actions-triggering-private-key-rotation" rel="nofollow noreferrer">documentation says it's not recommended</a>:</p>
<blockquote>
<p>Deleting the Secret resource associated with a Certificate resource is
<strong>not a recommended solution</strong> for manually rotating the private key. The
recommended way to manually rotate the private key is to trigger the
reissuance of the Certificate resource with the following command
(requires the kubectl cert-manager plugin):</p>
<p><code>kubectl cert-manager renew cert-1</code></p>
</blockquote>
<p><code>Kubectl cert-manager</code> command installation process is described <a href="https://cert-manager.io/docs/usage/kubectl-plugin/#installation" rel="nofollow noreferrer">here</a> as well as other commands and examples.</p>
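<p>For the certificate from the example above, triggering the renewal could look like this (a sketch; it assumes the plugin is already installed and that the namespace flag is accepted):</p>
<pre><code>kubectl cert-manager renew lets-secret-test-2 -n default
kubectl get certificate lets-secret-test-2 -n default
</code></pre>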
<h2>Useful links:</h2>
<ul>
<li><a href="https://cert-manager.io/docs/concepts/certificate/" rel="nofollow noreferrer">Certificate in cert-manager - concept</a></li>
<li><a href="https://cert-manager.io/docs/concepts/certificaterequest/" rel="nofollow noreferrer">Cert-manager - certificate request</a></li>
<li><a href="https://cert-manager.io/docs/concepts/acme-orders-challenges/" rel="nofollow noreferrer">ACME Orders and Challenges</a></li>
</ul>
| moonkotte |
<p>I am trying to connect my Kubernetes Cluster in Digital Ocean with a Managed Database.</p>
<p>I need to add the <code>CA CERTIFICATE</code> that is a file with extension <code>cer</code>. Is this the right way to add this file/certificate to a secret?</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: secret-db-ca
type: kubernetes.io/tls
data:
.tls.ca: |
"<base64 encoded ~/.digitalocean-db.cer>"
</code></pre>
| agusgambina | <p><strong>How to create a secret from certificate</strong></p>
<hr />
<p>The easiest and fastest way is to create a secret from command line:</p>
<pre><code>kubectl create secret generic secret-db-ca --from-file=.tls.ca=digitalocean-db.cer
</code></pre>
<p>Please note that type of this secret is <code>generic</code>, not <code>kubernetes.io/tls</code> because <code>tls</code> one requires both keys provided: <code>tls.key</code> and <code>tls.crt</code></p>
<p>It's also possible to create a secret from a manifest; however, you will need to provide the full <code>base64 encoded</code> string in the data field and again use the type <code>Opaque</code> in the manifest (this is the same as generic from the command line).</p>
<p>It will look like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: secret-db-ca
type: Opaque
data:
.tls.ca: |
LS0tLS1CRUdJTiBDRVJ..........
</code></pre>
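<p>To produce the base64 string for the <code>data</code> field, something like the following could work (a sketch; <code>-w0</code> is a GNU base64 flag, on macOS the flags differ):</p>
<pre><code>base64 -w0 digitalocean-db.cer
</code></pre>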
<p>Option you tried to use is used for <code>docker config</code> files. Please see <a href="https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets" rel="nofollow noreferrer">Docker config - secrets</a></p>
<hr />
<p><strong>Note!</strong> I tested the above with <code>cer</code> certificate.</p>
<p>DER (Distinguished Encoding Rules) is a binary encoding for X.509 certificates and private keys; such files do not contain plain text (extensions .cer and .der). The secret was saved in <code>etcd</code> (generally speaking, the database for the kubernetes cluster); however, there may be issues with the workability of secrets based on this type of certificate.</p>
<p>There is a chance that different type/extension of certificate should be used (Digital Ocean has a lot of useful and good documentation).</p>
<hr />
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secrets in kubernetes page</a>.</p>
| moonkotte |
<p>I have installed cert manager on a k8s cluster:</p>
<pre><code>helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.5.3 --set installCRDs=true
</code></pre>
<p>My objective is to do mtls communication between micro-services running in same name-space.</p>
<p>For this purpose I have created a ca issuer .i.e..</p>
<pre><code>kubectl get issuer -n sandbox -o yaml
apiVersion: v1
items:
- apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cert-manager.io/v1","kind":"Issuer","metadata":{"annotations":{},"name":"ca-issuer","namespace":"sandbox"},"spec":{"ca":{"secretName":"tls-internal-ca"}}}
creationTimestamp: "2021-09-16T17:24:58Z"
generation: 1
managedFields:
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:ca:
.: {}
f:secretName: {}
manager: HashiCorp
operation: Update
time: "2021-09-16T17:24:58Z"
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:conditions: {}
manager: controller
operation: Update
time: "2021-09-16T17:24:58Z"
name: ca-issuer
namespace: sandbox
resourceVersion: "3895820"
selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/issuers/ca-issuer
uid: 90f0c811-b78d-4346-bb57-68bf607ee468
spec:
ca:
secretName: tls-internal-ca
status:
conditions:
message: Signing CA verified
observedGeneration: 1
reason: KeyPairVerified
status: "True"
type: Ready
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>Using this ca issuer, I have created certificates for my two micro-service .i.e.</p>
<pre><code>kubectl get certificate -n sandbox
NAME READY SECRET Age
service1-certificate True service1-certificate 3d
service2-certificate True service2-certificate 2d23h
</code></pre>
<p>which is configured as</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
annotations:
meta.helm.sh/release-name: service1
meta.helm.sh/release-namespace: sandbox
creationTimestamp: "2021-09-17T10:20:21Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
managedFields:
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:meta.helm.sh/release-name: {}
f:meta.helm.sh/release-namespace: {}
f:labels:
.: {}
f:app.kubernetes.io/managed-by: {}
f:spec:
.: {}
f:commonName: {}
f:dnsNames: {}
f:duration: {}
f:issuerRef:
.: {}
f:kind: {}
f:name: {}
f:renewBefore: {}
f:secretName: {}
f:subject:
.: {}
f:organizations: {}
f:usages: {}
manager: Go-http-client
operation: Update
time: "2021-09-17T10:20:21Z"
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:privateKey: {}
f:status:
.: {}
f:conditions: {}
f:notAfter: {}
f:notBefore: {}
f:renewalTime: {}
f:revision: {}
manager: controller
operation: Update
time: "2021-09-20T05:14:12Z"
name: service1-certificate
namespace: sandbox
resourceVersion: "5177051"
selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/certificates/service1-certificate
uid: 0cf1ea65-92a1-4b03-944e-b847de2c80d9
spec:
commonName: example.com
dnsNames:
- service1
duration: 24h0m0s
issuerRef:
kind: Issuer
name: ca-issuer
renewBefore: 12h0m0s
secretName: service1-certificate
subject:
organizations:
- myorg
usages:
- client auth
- server auth
status:
conditions:
- lastTransitionTime: "2021-09-20T05:14:13Z"
message: Certificate is up to date and has not expired
observedGeneration: 1
reason: Ready
status: "True"
type: Ready
notAfter: "2021-09-21T05:14:13Z"
notBefore: "2021-09-20T05:14:13Z"
renewalTime: "2021-09-20T17:14:13Z"
revision: 5
</code></pre>
<p>Now, as you can see in the configuration, I have configured them to renew every 12 hours. However, the secrets created via this custom certificate resource still show an age of two days (when they were first created). I was thinking this tls secret would be renewed via cert-manager each day, i.e.:</p>
<pre><code>kubectl get secrets service1-certificate service2-certificate -n sandbox -o wide
NAME TYPE DATA AGE
service1-certificate kubernetes.io/tls 3 2d23h
service2-certificate kubernetes.io/tls 3 3d1h
</code></pre>
<p>Is there something wrong in my understanding? In the <code>cert-manager</code> pod logs I do see some messages around renewing, i.e.:</p>
<pre><code>I0920 05:14:04.649158 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service1-certificate" "message"="Renewing certificate as renewal was scheduled at 2021-09-19 08:24:13 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.649235 1 conditions.go:201] Setting lastTransitionTime for Certificate "service1-certificate" condition "Issuing" to 2021-09-20 05:14:04.649227766 +0000 UTC m=+87949.327215532
I0920 05:14:04.652174 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service2 "message"="Renewing certificate as renewal was scheduled at 2021-09-19 10:20:22 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.652231 1 conditions.go:201] Setting lastTransitionTime for Certificate "service2-certificate" condition "Issuing" to 2021-09-20 05:14:04.652224302 +0000 UTC m=+87949.330212052
I0920 05:14:04.671111 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671094596 +0000 UTC m=+87949.349082328
I0920 05:14:04.671344 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671332206 +0000 UTC m=+87949.349319948
I0920 05:14:12.703039 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.703896 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.7038803 +0000 UTC m=+87957.381868045
I0920 05:14:12.749502 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.750096 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.750082572 +0000 UTC m=+87957.428070303
I0920 05:14:13.009032 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.117843 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.119366 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:13.119351795 +0000 UTC m=+87957.797339520
I0920 05:14:13.122820 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.123907 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service2-certificate-t92qh" condition "Approved" to 2021-09-20 05:14:13.123896104 +0000 UTC m=+87957.801883833
I0920 05:14:13.248082 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Approved" to 2021-09-20 05:14:13.248071551 +0000 UTC m=+87957.926059296
I0920 05:14:13.253488 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "serivce2-certificate-t92qh" condition "Ready" to 2021-09-20 05:14:13.253474153 +0000 UTC m=+87957.931461871
I0920 05:14:13.388001 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Ready" to 2021-09-20 05:14:13.387983783 +0000 UTC m=+87958.065971525
</code></pre>
| Ruchir Bharadwaj | <h2>Short answer</h2>
<p>Based on logs and details from certificate you provided it's safe to say <strong>it's working as expected</strong>.</p>
<p>Pay attention to <code>revision: 5</code> in your certificate, which means that the certificate has already been renewed 4 times. If you look at it now, this will be 6 or 7, because the certificate is renewed every 12 hours.</p>
<h2>Logs</h2>
<p>The first thing which can be really confusing is the <code>error messages</code> in the <code>cert-manager</code> pod. These are mostly noisy messages which are not really helpful by themselves.</p>
<p>See about it here <a href="https://github.com/jetstack/cert-manager/issues/3501#issuecomment-884003519" rel="nofollow noreferrer">Github issue comment</a> and here <a href="https://github.com/jetstack/cert-manager/issues/3667" rel="nofollow noreferrer">github issue 3667</a>.</p>
<p>In case logs are really needed, <code>verbosity level</code> should be increased by setting <code>args</code> to <code>--v=5</code> in the <code>cert-manager</code> deployment. To edit a deployment run following command:</p>
<pre><code>kubectl edit deploy cert-manager -n cert-manager
</code></pre>
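<p>The relevant part of the container spec might then look like this (a sketch; the container name can differ between versions, and any existing args should be kept):</p>
<pre><code>spec:
  containers:
  - name: cert-manager
    args:
    - --v=5
    # ...keep the args that are already there
</code></pre>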
<h2>How to check certificate/secret</h2>
<p>When a certificate is renewed, the secret's and certificate's age are not changed, but their content is updated, for instance <code>resourceVersion</code> in the <code>secret</code> and <code>revision</code> in the certificate.</p>
<p>Below are options to check if certificate was renewed:</p>
<ol>
<li><p>Check this by getting secret in <code>yaml</code> before and after renew:</p>
<pre><code>kubectl get secret example-certificate -o yaml > secret-before
</code></pre>
</li>
</ol>
<p>Then run <code>diff</code> between them; you will see that <code>tls.crt</code> as well as <code>resourceVersion</code> has been updated.</p>
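<p>For instance (a sketch, following the command above):</p>
<pre><code>kubectl get secret example-certificate -o yaml > secret-after
diff secret-before secret-after
</code></pre>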
<ol start="2">
<li><p>Look into certificate <code>revision</code> and <code>dates</code> in status
(I set duration to minimum possible <code>1h</code> and renewBefore <code>55m</code>, so it's updated every 5 minutes):</p>
<pre><code> $ kubectl get cert example-cert -o yaml
notAfter: "2021-09-21T14:05:24Z"
notBefore: "2021-09-21T13:05:24Z"
renewalTime: "2021-09-21T13:10:24Z"
revision: 7
</code></pre>
</li>
<li><p>Check events in the namespace where certificate/secret are deployed:</p>
<pre><code>$ kubectl get events
117s Normal Issuing certificate/example-cert The certificate has been successfully issued
117s Normal Reused certificate/example-cert Reusing private key stored in existing Secret resource "example-staging-certificate"
6m57s Normal Issuing certificate/example-cert Renewing certificate as renewal was scheduled at 2021-09-21 13:00:24 +0000 UTC
6m57s Normal Requested certificate/example-cert Created new CertificateRequest resource "example-cert-bs8g6"
117s Normal Issuing certificate/example-cert Renewing certificate as renewal was scheduled at 2021-09-21 13:05:24 +0000 UTC
117s Normal Requested certificate/example-cert Created new CertificateRequest resource "example-cert-7x8cf" UTC
</code></pre>
</li>
<li><p>Look at <code>certificaterequests</code>:</p>
<pre><code>$ kubectl get certificaterequests
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
example-cert-2pxdd True True ca-issuer system:serviceaccount:cert-manager:cert-manager 14m
example-cert-54zzc True True ca-issuer system:serviceaccount:cert-manager:cert-manager 4m29s
example-cert-8vjcm True True ca-issuer system:serviceaccount:cert-manager:cert-manager 9m29s
</code></pre>
</li>
<li><p>Check logs in <code>cert-manager</code> pod to see four stages:</p>
<pre><code>I0921 12:45:24.000726 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="default/example-cert" "message"="Renewing certificate as renewal was scheduled at 2021-09-21 12:45:24 +0000 UTC" "reason"="Renewing"
I0921 12:45:24.000761 1 conditions.go:201] Setting lastTransitionTime for Certificate "example-cert" condition "Issuing" to 2021-09-21 12:45:24.000756621 +0000 UTC m=+72341.194879378
I0921 12:45:24.120503 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Approved" to 2021-09-21 12:45:24.12049391 +0000 UTC m=+72341.314616684
I0921 12:45:24.154092 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Ready" to 2021-09-21 12:45:24.154081971 +0000 UTC m=+72341.348204734
</code></pre>
</li>
</ol>
<h2>Note</h2>
<p>It's very important to note that not all <code>issuers</code> support the <code>duration</code> and <code>renewBefore</code> flags. E.g. <code>letsencrypt</code> still doesn't honour them and uses a default of 90 days.</p>
<p><a href="https://cert-manager.io/docs/release-notes/release-notes-1.1/#duration" rel="nofollow noreferrer">Refence</a>.</p>
| moonkotte |
<p>I'm trying to use helm from my github actions runner to deploy to my GKE cluster but I'm running into a permissions error.</p>
<p>Using a google cloud service account for authentication</p>
<p><strong>GitHub Actions CI step</strong></p>
<pre><code> - name: Install gcloud cli
uses: google-github-actions/setup-gcloud@master
with:
version: latest
project_id: ${{ secrets.GCLOUD_PROJECT_ID }}
service_account_email: ${{ secrets.GCLOUD_SA_EMAIL }}
service_account_key: ${{ secrets.GCLOUD_SA_KEY }}
export_default_credentials: true
- name: gcloud configure
run: |
gcloud config set project ${{secrets.GCLOUD_PROJECT_ID}};
gcloud config set compute/zone ${{secrets.GCLOUD_COMPUTE_ZONE}};
gcloud container clusters get-credentials ${{secrets.GCLOUD_CLUSTER_NAME}};
- name: Deploy
run: |
***
helm upgrade *** ./helm \
--install \
--debug \
--reuse-values \
--set-string "$overrides"
</code></pre>
<p><strong>The error</strong></p>
<pre><code>history.go:56: [debug] getting history for release blog
Error: query: failed to query with labels: secrets is forbidden: User "***" cannot list resource "secrets" in API group "" in the namespace "default": requires one of ["container.secrets.list"] permission(s).
helm.go:88: [debug] secrets is forbidden: User "***" cannot list resource "secrets" in API group "" in the namespace "default": requires one of ["container.secrets.list"] permission(s).
</code></pre>
| Casey Flynn | <p>It seems you're trying to deploy code by using the GKE <strong>viewer role</strong>, hence you're getting the permission issue. You can create the required <strong>IAM policies</strong> and <strong>role based access control (RBAC)</strong> as per your <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/access-control" rel="nofollow noreferrer">requirement</a>.</p>
<p>You can also check kubernetes engine roles and responsibilities by using this <a href="https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles" rel="nofollow noreferrer">reference</a>.</p>
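<p>As an illustration only (project and service account names are placeholders, and the exact role should follow least privilege for your setup), granting a broader GKE role to the service account could look like:</p>
<pre><code>gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/container.developer"
</code></pre>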
| Tatikonda vamsikrishna |
<p>I failed to login inside a container with the below commands. What did I miss?</p>
<p>Thank you in advance</p>
<pre><code>
(base) debian@appdev:~$ aws eks list-clusters
{
"clusters": [
"default"
]
}
(base) debian@appdev:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
apollo-admin 1/1 Running 0 36d
default-apollo-7bb67b66f9-hgbfg 1/1 Running 0 13d
default-apollo-getseq-568bdb78bd-h7l4v 1/1 Running 0 86d
default-blast-bb6469fb4-nz5bz 1/1 Running 0 58d
default-orthovenn2-api-7cb577f5df-7xr6p 1/1 Running 0 34d
default-orthovenn2-mongodb-5fdd4467bd-mwfml 1/1 Running 0 34d
default-orthovenn2-website-5f8954c4f6-skljz 1/1 Running 0 34d
efs-app 1/1 Running 0 91d
(base) debian@appdev:~$ kubectl attach default-blast-bb6469fb4-nz5bz -i -t
Defaulting container name to blast.
Use 'kubectl describe pod/default-blast-bb6469fb4-nz5bz -n default' to see all of the containers in this pod.
Unable to use a TTY - container blast did not allocate one
If you don't see a command prompt, try pressing enter.
Error from server (Forbidden): pods "default-blast-bb6469fb4-nz5bz" is forbidden: User "developer" cannot create resource "pods/attach" in API group "" in the namespace "default"
(base) debian@appdev:~$ kubectl describe pod/default-blast-bb6469fb4-nz5bz -n default
Name: default-blast-bb6469fb4-nz5bz
Namespace: default
Priority: 0
Node: ip-10-22-196-153.ap-southeast-2.compute.internal/10.22.196.153
Start Time: Sun, 05 Sep 2021 23:45:30 +0000
Labels: app.kubernetes.io/instance=default-blast
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=default-blast
app.kubernetes.io/version=1.0.14
checksum/config-map=c3d498139d04a99ffc05ac85539b2b3960db609ede5b509e9bee949941389e8
helm.sh/chart=blast-0.0.2
pod-template-hash=bb6469fb4
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 10.22.196.82
IPs:
IP: 10.22.196.82
Controlled By: ReplicaSet/default-blast-bb6469fb4
Containers:
blast:
Container ID: docker://abc6745591a4a2e9bf74e907773cc8bae25356b0daad3b009a81854329e9ada8
Image: wurmlab/sequenceserver:2.0.0.rc8
Image ID: docker-pullable://wurmlab/sequenceserver@sha256:d9b46a927e35f261b7813899202dd8798c033c0a096dfb10c29d8968fb9b1107
Port: 4567/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 05 Sep 2021 23:45:32 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 2Gi
Requests:
cpu: 500m
memory: 256Mi
Environment Variables from:
default-blast ConfigMap Optional: false
Environment:
HELM_RELEASE_NAME: default-blast
Mounts:
/db from persistent-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-59xt8 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
persistent-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: default-blast-data
ReadOnly: false
default-token-59xt8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-59xt8
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
| user977828 | <p>When you try to attach to a container with <code>kubectl attach -it POD -c CONTAINER</code>, the container must be configured with <code>tty: true</code> and <code>stdin: true</code>. By default both of those values are <code>false</code>.
Refer to the <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#debugging" rel="nofollow noreferrer">API Reference</a>.</p>
<p><strong>Example Pod:</strong></p>
<pre><code>spec:
containers:
- name: web
image: web:latest
tty: true
stdin: true
</code></pre>
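<p>After redeploying with those fields set (and assuming RBAC grants your user <code>pods/attach</code>), attaching should work; note the pod name will change after the redeploy (a sketch):</p>
<pre><code>kubectl attach -it POD_NAME -c blast -n default
</code></pre>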
| Tatikonda vamsikrishna |
<p>I am trying to install and configure Airflow on MAC via pip and venv. using this tutorial: <a href="https://my330space.wordpress.com/2019/12/20/how-to-install-apache-airflow-on-mac/" rel="nofollow noreferrer">https://my330space.wordpress.com/2019/12/20/how-to-install-apache-airflow-on-mac/</a>. I am at the point were I am initializing the DB with command <code>airflow initdb</code>. When I do so, I get this output and error:</p>
<pre><code>[2021-06-19 14:49:20,513] {db.py:695} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [airflow.models.crypto] empty cryptography key - values will not be stored encrypted.
WARNI [unusual_prefix_f7b038312823bb0adacb1517baf49503823c7a6f_example_kubernetes_executor_config] Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
WARNI [unusual_prefix_f7b038312823bb0adacb1517baf49503823c7a6f_example_kubernetes_executor_config] Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Initialization done
</code></pre>
<p>It states that I don't have kubernetes installed and it suggests that I run <code>pip install apache-airflow['cncf.kubernetes']</code>. When I do that, I get this error <code>zsh: no matches found: apache-airflow[cncf.kubernetes]</code>. I also tried these but none work:</p>
<pre><code>pip install kubernetes
pip install apache-airflow-providers-cncf-kubernetes
</code></pre>
<p>I hope someone can help as I am stuck for a while :(</p>
| titu84hh | <p>I found out that I had a permission error and then used <code>sudo python -m pip install apache-airflow-providers-cncf-kubernetes</code>, which solved this issue.</p>
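<p>As a side note, the zsh "no matches found" error from the question is typically caused by the unquoted square brackets, so quoting the extras should also avoid that particular error:</p>
<pre><code>pip install 'apache-airflow[cncf.kubernetes]'
</code></pre>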
| titu84hh |
<p>I have a Github repo with 2 branches on it, <code>develop</code> and <code>main</code>. The first is the "test" environment and the other is the "production" environment. I am working with Google Kubernetes Engine and I have automated deployment from the push on Github to the deploy on GKE. So our workflow is :</p>
<ol>
<li>Pull <code>develop</code></li>
<li>Write code and test locally</li>
<li>When everything is fine locally, push on <code>develop</code> (it will automatically deploy on GKE workload <code>app_name_develop</code>)</li>
<li>QA tests on <code>app_name_develop</code></li>
<li>If QA tests passed, we create a pull request to put <code>develop</code> into <code>main</code></li>
<li>Automatically deploy on GKE workload <code>app_name_production</code> (from the <code>main</code> branch)</li>
</ol>
<p>The deployment of the container is defined in <code>Dockerfile</code> and the Kubernetes deployment is defined in <code>kubernetes/app.yaml</code>. Those two files are tracked with Git inside the repo.</p>
<p>The problem here is that when we create a pull request to put <code>develop</code> into <code>main</code>, it also takes the two files <code>app.yaml</code> and <code>Dockerfile</code> from <code>develop</code> to <code>main</code>. We end up with the settings from <code>develop</code> in <code>main</code>, and it messes the whole thing up.</p>
<p>I can't define environment variables in those files because they could end up in the wrong branch.
My question is: how can I exclude those files from the pull request? Or is there any way to manage multiple environments without having to manually modify the files after each pull request?</p>
<p>I don't know if it can help, but here is my Dockerfile:</p>
<pre><code>FROM python:3.8
RUN apt-get update && apt-get install -y --no-install-recommends
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["gunicorn", "-b", ":8080", "main:app"]
</code></pre>
<p>And here is my YAML file to deploy on GKE (actually I took the one advised by GKE when creating automated deployment) :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app_name-api
name: app_name-api
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: app_name-api
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: app_name-api
spec:
containers:
- image: gcr.io/path_to_image/github.com/company_name/app_name
imagePullPolicy: IfNotPresent
name: app_name-1
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: app_name-api
name: app_name-api-pod-autoscaler
namespace: default
spec:
maxReplicas: 3
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: app_name-api
</code></pre>
<p>Thanks a lot for any help you could provide !</p>
| FairPluto | <p>You can't selectively ignore some files from a pull request, but there are 2 simple workarounds for this:</p>
<p><strong>First -</strong><br />
Create a new branch from βdevelopβ</p>
<p>Replace the non-required files from 'main'</p>
<p>Create pull request from this new branch</p>
<p><strong>Second -</strong><br />
Create a new branch from 'main'</p>
<p>Put changes of required files from 'develop'</p>
<p>Create pull request from this new branch</p>
<p>Any of these methods will work. Which will be easier depends on how many files are to be included / excluded.</p>
<p><strong>Example :</strong><br />
Considering main as target and dev as source</p>
<pre><code>root
|-- src
| -- app.py
|-- .gitignore
|-- settings.py
|-- requirements.txt
</code></pre>
<p>Let's say I want to exclude the settings.py file from being merged.
First, move to the target branch (the branch to which you want to merge the changes):</p>
<pre><code>git checkout main
</code></pre>
<p>Then you can use the git checkout command to selective pick the files you want to merge</p>
<pre><code>git checkout dev src/
</code></pre>
<p>This will only merge the files changed inside src/ folder</p>
<p><strong>NOTE:</strong> You can also do it selectively for each file.</p>
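<p>For instance, picking a single file could look like this (a sketch using the example tree above):</p>
<pre><code>git checkout dev src/app.py
</code></pre>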
<p>Then push to remote repository</p>
<pre><code>git push origin main
</code></pre>
<p>Bear in mind that this solution is useful only if the files to be excluded are small.</p>
<p><strong>Note:</strong> "There are tools that are built to solve this problem like skaffold and kustomize, but they might take a bit of time and restructuring of your repository before everything works. So, in the meantime, this is a simple solution which requires manual work but can do while you study and decide which of the more advanced instrumentation is suitable ."</p>
| Jyothi Kiranmayi |
<p>I am trying kubernetes and seem to have hit bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as <strong>wrkr1</strong> or <strong>wrkr2</strong> (machine hostnames on my network) but can successfully resolve hostnames such as google.com or stackoverflow.com.</p>
<p>My cluster is a basic setup with one master and 2 worker nodes.</p>
<p><strong>What works from within the pod:</strong></p>
<ul>
<li><p>curl to <strong>google.com</strong> from pod -- works</p>
</li>
<li><p>curl to another service(kubernetes) from pod -- works</p>
</li>
<li><p>curl to another machine on same LAN via its IP address such as 192.168.x.x -- works</p>
</li>
<li><p>curl to another machine on same LAN via its hostname such as wrkr1 -- does not work</p>
</li>
</ul>
<p><strong>What works from the node hosting pod:</strong></p>
<ul>
<li>curl to google.com --works</li>
<li>curl to another machine on same LAN via
its IP address such as 192.168.x.x -- works</li>
<li>curl to another machine
on same LAN via its hostname such as wrkr1 -- works.</li>
</ul>
<blockquote>
<p>Note: the pod cidr is completely different from the IP range used in
LAN</p>
</blockquote>
<p>The node contains a hosts file with an entry corresponding to wrkr1's IP address (I've checked that the node is able to resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).</p>
<p>Kubernetes Version: <strong>1.19.14</strong></p>
<p>Ubuntu Version: <strong>18.04 LTS</strong></p>
<p>Need help as to whether this is normal behavior and what can be done if I want pod to be able to resolve hostnames on local LAN as well?</p>
| JayD | <h2>What happens</h2>
<blockquote>
<p>Need help as to whether this is normal behavior</p>
</blockquote>
<p>This is normal behaviour: there's no DNS server in the network where your virtual machines are hosted, and kubernetes has its own DNS server inside the cluster, which simply doesn't know what happens on your host, especially in <code>/etc/hosts</code>, because pods don't have access to this file.</p>
<blockquote>
<p>I read somewhere that a pod inherits its nodes DNS resolution so I've
kept the entry</p>
</blockquote>
<p>This is a point where tricky thing happens. <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">There are four available</a> <code>DNS policies</code> which are applied per pod. We will take a look at two of them which are usually used:</p>
<ul>
<li>"<strong>Default</strong>": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.</li>
<li>"<strong>ClusterFirst</strong>": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured</li>
</ul>
<p>The trickiest ever part is this (from the same link above):</p>
<blockquote>
<p>Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.</p>
</blockquote>
<p>That means that all pods that do not have a <code>DNS policy</code> set will run with <code>ClusterFirst</code>, and they won't be able to see <code>/etc/resolv.conf</code> on the host. I tried changing this to <code>Default</code> and, indeed, the pod can then resolve everything the host can; however, internal resolving stops working, so it's not an option.</p>
<p>For example, the <code>coredns</code> deployment runs with the <code>Default</code> dnsPolicy, which allows <code>coredns</code> to resolve hosts.</p>
<h2>How this can be resolved</h2>
<p><strong>1. Add <code>local</code> domain to <code>coreDNS</code></strong></p>
<p>This will require adding <code>A</code> records per host. Here's a part of the edited coredns configmap:</p>
<p>This should be within <code>.:53 {</code> block</p>
<pre><code>file /etc/coredns/local.record local
</code></pre>
<p>This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):</p>
<pre><code>local.record: |
local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
wrkr1. IN A 172.10.10.10
wrkr2. IN A 172.11.11.11
</code></pre>
<p>Then the <code>coreDNS</code> deployment should be edited to include this file:</p>
<pre><code>$ kubectl edit deploy coredns -n kube-system
volumes:
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
- key: local.record # 1st line to add
path: local.record # 2nd line to add
name: coredns
</code></pre>
<p>And restart <code>coreDNS</code> deployment:</p>
<pre><code>$ kubectl rollout restart deploy coredns -n kube-system
</code></pre>
<p>Just in case check if <code>coredns</code> pods are <code>running and ready</code>:</p>
<pre><code>$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
</code></pre>
<p>If everything's done correctly, now newly created pods will be able to resolve hosts by their names. <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">Please find an example in coredns documentation</a></p>
<p><strong>2. Set up DNS server in the network</strong></p>
<p>While <code>avahi</code> looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from <code>coredns</code> to <code>avahi</code>, while it is possible to set up a proper DNS server in the network and this way have everything resolved.</p>
<p><strong>3. Deploy <code>avahi</code> to kubernetes cluster</strong></p>
<p>There's a ready image with <code>avahi</code> <a href="https://hub.docker.com/r/flungo/avahi" rel="nofollow noreferrer">here</a>. If it's deployed into the cluster with <code>dnsPolicy</code> set to <code>ClusterFirstWithHostNet</code> and, most importantly, <code>hostNetwork: true</code>, it will be able to use the host adapter to discover all available hosts within the network.</p>
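<p>The key fields of such a deployment might look like this (a minimal sketch; the image is the one linked above, everything else is illustrative):</p>
<pre><code>spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: avahi
        image: flungo/avahi
</code></pre>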
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">Pods DNS policy</a></li>
<li><a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">Custom DNS entries for kubernetes</a></li>
</ul>
| moonkotte |
<p>I have created subnets in GCP with allotted secondary IP ranges for pods and services. And have started a GKE cluster by providing the above secondary IP ranges for services and pods. Lets call this cluster-A.</p>
<p>Now I want to create another GKE cluster within same region, and want to use same subnets.
Can I use the same secondary IP ranges, which I provided for cluster-A, to create a new GKE-cluster?</p>
<p>My assumptions is, both the clusters will be provided IPs from the common subnet and secondary ranges, and there won't be any conflict. GCP would take care of it. But I am not sure of this, so can't move forward, fearing this might break my existing cluster.</p>
<p>The secondary IP ranges are big enough to accommodate services and pods of both the cluster.</p>
<p>Can anybody help me with this? Share some knowledge. Thanks.</p>
| kadamb | <p>The pod secondary CIDR ranges and sub-networks can be shared across multiple clusters. However, Services secondary CIDR ranges must be different across multiple clusters and cannot be shared because secondary service ranges are unique to a given cluster.</p>
<p>Sharing IP ranges is not recommended as :</p>
<p>1. It can add extra noise to the networks.</p>
<p>2. The IP range that the subnet uses to assign to Nodes/Pods is now effectively shared among clusters. This can lead to IP exhaustion, since one cluster may use more IPs than another; this may leave the second cluster unable to obtain more IPs and therefore unable to create more nodes.</p>
<p>For more information refer the link:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs</a></p>
| Jyothi Kiranmayi |
<p>I am trying to trigger a job creation from a sensor but I am getting the error below:</p>
<pre><code> Job.batch is forbidden: User \"system:serviceaccount:samplens:sample-sa\" cannot create resource \"Job\" in API group \"batch\" in the namespace \"samplens\"","errorVerbose":"timed out waiting for the condition: Job.batch is forbidden: User \"system:serviceaccount:samplens:sample-sa\" cannot create resource \"Job\" in API group \"batch\" in the namespace \"samplens\"\nfailed to execute trigger\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerOne\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:328\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:269\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","triggerName":"sample-job","triggeredBy":["payload"],"triggeredByEvents":["38333939613965312d376132372d343262302d393032662d663731393035613130303130"],"stacktrace":"github.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:271\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181"}
12
</code></pre>
<p>This happens even though I have created a <code>serviceaccount</code>, <code>role</code> and <code>rolebinding</code>.
Here is my <code>serviceaccount</code> creation file:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: sample-sa
namespace: samplens
</code></pre>
<p>Here is my <code>rbac.yaml</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: sample-role
namespace: samplens
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- watch
- patch
- apiGroups:
- "batch"
resources:
- jobs
verbs:
- create
- delete
- get
- watch
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: sample-role-binding
namespace: samplens
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: sample-role
subjects:
- kind: ServiceAccount
name: sample-sa
namespace: samplens
</code></pre>
<p>and here is my <code>sensor.yaml</code>:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: webhook
spec:
template:
serviceAccountName: sample-sa
dependencies:
- name: payload
eventSourceName: webhook
eventName: devops-toolkit
triggers:
- template:
name: sample-job
k8s:
group: batch
version: v1
resource: Job
operation: create
source:
resource:
apiVersion: batch/v1
kind: Job
metadata:
name: samplejob-crypto
annotations:
argocd.argoproj.io/hook: PreSync
argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
ttlSecondsAfterFinished: 100
serviceAccountName: sample-sa
template:
spec:
serviceAccountName: sample-sa
restartPolicy: OnFailure
containers:
- name: sample-crypto-job
image: docker.artifactory.xxx.com/abc/def/yyz:master-b1b347a
</code></pre>
<p>Sensor is getting triggered correctly but is failing to create the job.
Can someone please help, what am I missing?</p>
| TruckDriver | <p>Posting this as community wiki for better visibility, feel free to edit and expand it.</p>
<p>The original issue <strong>was resolved</strong> by adjusting the <code>role</code> and granting <code>*</code> verbs, which means the Argo sensor in fact requires more permissions.</p>
<p>This is a working solution for a testing environment; for production, RBAC should follow the <code>principle of least privilege</code>.</p>
<p><strong>How to test RBAC</strong></p>
<p>There's a <code>kubectl</code> syntax which allows to test if RBAC (service account + role + rolebinding) was set up as expected.</p>
<p>Below is example how to check if <code>SERVICE_ACCOUNT_NAME</code> in <code>NAMESPACE</code> can create jobs in namespace <code>NAMESPACE</code>:</p>
<p><code>kubectl auth can-i --as=system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT_NAME create jobs -n NAMESPACE</code></p>
<p>The answer will be simple: <code>yes</code> or <code>no</code>.</p>
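<p>With the names from the question this becomes (a sketch):</p>
<pre><code>kubectl auth can-i --as=system:serviceaccount:samplens:sample-sa create jobs -n samplens
</code></pre>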
<p><strong>Useful links:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC authorization</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access" rel="nofollow noreferrer">Checking API access</a></li>
</ul>
| moonkotte |
<p>I have setup a testing k3d cluster with 4 agents and a server.</p>
<p>I have a storage class defined thus:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
</code></pre>
<p>with a pv defined thus:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: basic-minio-storage
labels:
storage-type: object-store-path
spec:
capacity:
storage: 500Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/basic_minio
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3d-test-agent-0
</code></pre>
<p>the pvc that I have defined is like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. Will be used in deployment below.
name: minio-pv-claim
labels:
app: basic-minio
spec:
# Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 500Gi
selector:
matchLabels:
storage-type: object-store-path
</code></pre>
<p>my deployment is like:</p>
<pre><code>
# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: basic-minio
namespace: minio
spec:
selector:
matchLabels:
app: basic-minio
serviceName: "basic-minio"
template:
metadata:
labels:
app: basic-minio
spec:
containers:
- name: basic-minio
image: minio/minio:RELEASE.2021-10-10T16-53-30Z
imagePullPolicy: IfNotPresent
args:
- server
- /data
env:
- name: MINIO_ROOT_USER
valueFrom:
secretKeyRef:
name: minio-secret
key: minio-root-user
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: minio-secret
key: minio-root-password
ports:
- containerPort: 9000
volumeMounts:
- name: storage
mountPath: "/data"
volumes:
- name: storage
persistentVolumeClaim:
claimName: minio-pv-claim
</code></pre>
<p>In my Kubernetes dashboard, I can see that the PV is provisioned and ready.
The PVC has been set up and has bound to the PV.</p>
<p>But my pod shows the error: <code>0/5 nodes are available: 5 node(s) had volume node affinity conflict.</code></p>
<p>what is causing this issue and how can I debug it?</p>
| KillerSnail | <p>Your (local) volume is created on the worker node <code>k3d-test-agent-0</code>, but none of your pods is scheduled to run on this node. This is not a good approach, but if you must run it this way, you can direct all the pods to run on this host:</p>
<pre><code>...
spec:
nodeSelector:
kubernetes.io/hostname: k3d-test-agent-0
containers:
- name: basic-minio
...
</code></pre>
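<p>After redeploying, the placement can be verified like this (a sketch):</p>
<pre><code>kubectl get pods -n minio -o wide
</code></pre>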
| gohm'c |
<p>I have two <code>kind: Deployment</code> entries in my <code>yaml</code> file.
The main one:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service
labels:
app: accounts-management-service
spec:
replicas: $($env:WEB_REPLICAS)
selector:
matchLabels:
app: accounts-management-service
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 20%
maxUnavailable: 10%
progressDeadlineSeconds: 3600
template:
metadata:
labels:
app: accounts-management-service
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service
command: ["npm"]
args: ["run", "start:production:web"]
resources:
requests:
memory: "500Mi"
cpu: "1000m"
limits:
memory: "4096Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /v0.1/status
port: 1337
successThreshold: 2
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /v0.1/status_without_db
port: 1337
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 30
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>and the second one</p>
<pre><code># second
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service-second
labels:
app: accounts-management-service-second
spec:
replicas: $($env:second_REPLICAS)
selector:
matchLabels:
app: accounts-management-service-second
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 10%
maxUnavailable: 10%
template:
metadata:
labels:
app: accounts-management-service-second
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service-second
command: ["npm"]
args: ["run", "start:production:second"]
resources:
requests:
memory: "500Mi"
cpu: "250m"
limits:
memory: "8192Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>They both point to the same volume path. I am new to Kubernetes and I am trying to understand the relation between pod creation and the two <code>kind: Deployment</code>s. It would be nice if someone could explain this. I hope that this falls into the SO allowed questions category.</p>
| MikiBelavista | <p>If you want to figure out which pods were created by a specific deployment, you can use the <code>kubectl get pods</code> command with the <code>--selector</code> option to filter those pods.</p>
<p>The labels you defined in deployment templates were <code>app=accounts-management-service</code> and <code>app=accounts-management-service-second</code>, you could figure out these pods by:</p>
<pre><code>$ kubectl get pods --selector=app=accounts-management-service
$ kubectl get pods --selector=app=accounts-management-service-second
</code></pre>
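<p>Each Deployment also creates its own ReplicaSet, which in turn owns the pods; that chain can be inspected like this (a sketch; the pod name is a placeholder):</p>
<pre><code>$ kubectl get replicasets --selector=app=accounts-management-service
$ kubectl describe pod POD_NAME # the "Controlled By:" field names the owning ReplicaSet
</code></pre>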
| ζεΌη½η» |
<p>I am searching for a tutorial or a good reference to perform docker container live migration in Kubernetes between two hosts (embedded devices - arm64 architecture).</p>
<p>As far as I searched on the internet resources, I could not find a complete documentation about it. I am a newbe and it will be really helpful if someone could provide me any good reference materials so that I can improve myself.</p>
| k srinidhi | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>As @David Maze said, in terms of containers and pods it's not really a live migration. Usually pods are managed by <code>deployments</code>, which have <code>replicasets</code> that control the pods' state: they are created and kept at the requested amount. Any changes in the amount of pods (e.g. you delete one) or in the image used will trigger pod recreation.</p>
<p>This can also be used for scheduling pods on different nodes, for instance when you need to perform maintenance on a node or remove/add one.</p>
<hr />
<p>As for your question in the comments, it's not necessarily the same volume, as it can, I suppose, involve a short downtime.</p>
<p>Sharing volumes between on-premise kubernetes clusters (cloud may differ) is not a built-in feature. You may want to look at an <code>nfs</code> server deployed in your network:</p>
<p><a href="https://www.linuxtechi.com/configure-nfs-persistent-volume-kubernetes/" rel="nofollow noreferrer">Mounting external NFS share to pods</a></p>
| moonkotte |
<p>I am able to run successfully a Kubernetes Job with multiple parallel worker processes, by following the example provided in "Fine Parallel Processing Using a Work Queue" in the official Kubernetes documentation
(<a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/</a>)</p>
<p>For example, with <code>parallelism: 2</code> in the Job definition yaml file, I am able to complete the task on 2 worker pods in parallel.</p>
<p>Hence, the command:</p>
<pre><code>kubectl get jobs
</code></pre>
<p>returns:</p>
<pre><code>NAME COMPLETIONS DURATION AGE
worker 2/1 of 2 1h 6h
</code></pre>
<p>My question is: how to interpret precisely the notation <code>2/1 of 2</code> in the completions column?
(especially what is the meaning of the <code>/1</code> part?). I cannot find anything helpful in the official documention about this.</p>
<p>Thank you for your assistance.</p>
<p>[Update] The status of the pods, when the job is completed, is the following:</p>
<pre><code>kubectl get pods
</code></pre>
<p>returns:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
worker-dt2ss 0/1 Completed 0 6h
worker-qm56f 0/1 Completed 0 6h
</code></pre>
| nicolas.f.g | <p>A Job is completed when a certain number of Pods terminate successfully. The <strong>Completions</strong> field (<code>.spec.completions</code>) specifies how many Pods should terminate successfully before the Job is completed.</p>
<p><strong>COMPLETIONS</strong> shows the number of successfully completed Pods relative to the required completions. In the work-queue example <code>.spec.completions</code> is left unset, so the Job only needs <strong>one</strong> successful completion and kubectl prints the column as <code>&lt;succeeded&gt;/1 of &lt;parallelism&gt;</code>. <strong>From your use case</strong>, <strong>2/1 of 2</strong> therefore means <strong>two</strong> Pods succeeded for a Job that was run with <code>parallelism: 2</code>.</p>
<p>The <strong>DURATION</strong> indicates how long the business in the job has been running. This is useful for performance optimization.</p>
<p>And <strong>AGE</strong> is obtained by subtracting the creation time of a pod from the current time. This parameter specifies the time elapsed since the pod was created.</p>
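<p>For comparison, a rough sketch of a Job that sets <code>.spec.completions</code> explicitly (not part of the original work-queue example) would show a plain value such as <code>2/2</code> in that column:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: worker-fixed
spec:
  completions: 2   # the Job is done after 2 Pods succeed
  parallelism: 2   # run up to 2 Pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
</code></pre>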
| Jyothi Kiranmayi |
<p>API Usage in <code>getInitialsProps</code> but throwing error <code>error connect ECONNREFUSED 127.0.0.1:443</code>
full url: <code>http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser</code></p>
<p>But if I called <code>https://ticketing.dev/api/users/currentuser</code> directly from the browser then it returned the expected response.</p>
<pre><code>PS C:\Users\sajee> kubectl get namespace
NAME STATUS AGE
default Active 3d16h
ingress-nginx Active 3d15h
kube-node-lease Active 3d16h
kube-public Active 3d16h
kube-system Active 3d16h
PS C:\Users\sajee> kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.96.242.223 localhost 80:31220/TCP,443:30164/TCP 3d15h
ingress-nginx-controller-admission ClusterIP 10.109.183.213 <none> 443/TCP 3d15h
PS C:\Users\sajee>
</code></pre>
| Sajeeb M Ahamed | <pre><code> Home.getInitialProps = async ({ req }) => {
if (typeof window === "undefined") {
const { data } = await axios.get("http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser",
{
headers: req.headers,
}
);
return data;
} else {
// we are on the browser!
const { data } = await axios.get("/api/users/currentuser");
return data;
}
return {};
};
</code></pre>
<p>Try passing the incoming request through the props and sending its headers along with <code>req.headers</code>.
Right now you are working with an auth service and a client service: your ingress-nginx server initially tries to find the route in the client service, where that API endpoint does not exist, so it is not found. If you hard-refresh your browser, the request goes to your auth service and fetches the data.</p>
| Yashparmar_1112 |
<p>I was following the <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">Run a Single-Instance Stateful Application</a> tutorial of Kubernetes (I changed the MySQL docker image's tag to 8), and it seems the server is running correctly:
<a href="https://i.stack.imgur.com/NztJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NztJs.png" alt="enter image description here" /></a></p>
<p>But when I try to connect the server as the tutorial suggesting:</p>
<pre><code>kubectl run -it --rm --image=mysql:8 --restart=Never mysql-client -- mysql -h mysql -ppassword
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>ERROR 1045 (28000): Access denied for user 'root'@'10.1.0.99' (using password: YES)
pod "mysql-client" deleted</p>
</blockquote>
<hr />
<p>I already looked at those questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/65460113/cant-access-mysql-root-or-user-after-kubernetes-deployment">Can't access mysql root or user after kubernetes deployment</a></li>
<li><a href="https://stackoverflow.com/questions/64205150/access-mysql-kubernetes-deployment-in-mysql-workbench">Access MySQL Kubernetes Deployment in MySQL Workbench</a></li>
</ul>
<p>But changing the <code>mountPath</code> or <code>port</code> didn't work.</p>
| Roy Yosef | <p>By default, the <code>root</code> account can only be connected to from inside the container. Here's an updated version of the example that allows you to connect remotely:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:8.0.26
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_ROOT_HOST
value: "%"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
emptyDir: {}
# Following the original example, comment the emptyDir and uncomment the following if you have StorageClass installed.
# persistentVolumeClaim:
# claimName: mysql-pv-claim
</code></pre>
<p>No change to the client connect except for the image tag:</p>
<p><code>kubectl run -it --rm --image=mysql:8.0.26 --restart=Never mysql-client -- mysql -h mysql -ppassword</code></p>
<p>Test with <code>show databases;</code>:</p>
<pre><code>mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
</code></pre>
| gohm'c |
<p>I'm running Kubernetes 1.21.0 on CentOS 7. I've set up a keycloak service to test my ingress controller and am able to access keycloak on the host URL with the keycloak port, like <code>myurl.com:30872</code>. These are my running services:</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default keycloak NodePort 10.96.11.164 <none> 8080:30872/TCP 21h
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
ingress-nginx ingress-nginx-controller NodePort 10.102.201.24 <none> 80:31110/TCP,443:30566/TCP 9m45s
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.107.90.207 <none> 80/TCP,443/TCP 9m45s
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11d
</code></pre>
<p>I've deployed the following <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">nginx ingress controller</a>.</p>
<p>And added an HTTP webhook to the service:</p>
<pre><code># Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.44.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: http-webhook
port: 80
targetPort: webhook
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>With this ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: keycloak
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /keycloak
pathType: Prefix
backend:
service:
name: keycloak
port:
number: 8080
</code></pre>
<p>Now when I attempt to connect to the keycloak service through the ingress I go to <code>myurl.com/keycloak</code> but it's unable to connect and trying to curl it from within my control node shows connection refused:</p>
<pre><code># curl -I http://127.0.0.1/keycloak
curl: (7) Failed connect to 127.0.0.1:80; Connection refused
</code></pre>
<p>Can someone see what I'm missing?</p>
<p><strong>Edit:</strong></p>
<p>I realized the ingress controller actually works, but I need to specify its port also to reach it like this:</p>
<pre><code>curl -I http://127.0.0.1:31110/keycloak
</code></pre>
<p>Which I'd like to avoid.</p>
| francois_halbach | <p>You have to specify port <code>31110</code> because your nginx ingress is set up with <code>NodePort</code>, which means Kubernetes listens on this port and all traffic that arrives there is redirected to the <code>nginx-ingress-controller</code> pod.</p>
<p>Depending on your setup and goals, this can be achieved differently.</p>
<p><strong>Option 1</strong> - for testing purposes only and without any changes in setup. Works only on a control plane where <code>nginx-ingress-controller</code> pod is running</p>
<p>it's possible to forward traffic from outside port 80 to <code>nginx-ingress-controller</code> pod directly port 80. You can run this command (in background):</p>
<pre><code>sudo kubectl port-forward ingress-nginx-controller-xxxxxxxx-yyyyy 80:80 -n ingress-nginx &
</code></pre>
<p>Curl test shows that it's working:</p>
<pre><code>curl -I localhost/keycloak
Handling connection for 80
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:19:23 GMT
</code></pre>
<p>Curl can be run on different instance, in this case command will look this way without specifying any ports:</p>
<pre><code>curl -I public_ip/keycloak
</code></pre>
<p><strong>Option 2</strong> - this one is a bit more difficult, however provides better results.</p>
<p>It's possible to expose pods outside of the cluster. The feature is called <code>hostPort</code> - it allows exposing a single container port on the host IP. To have this work on different worker nodes, <code>ingress-nginx-controller</code> should be deployed as a <code>DaemonSet</code>.</p>
<p>Below parts in <code>values.yaml</code> for ingress-nginx helm chart that I corrected:</p>
<p>hostPort -> enabled -> <strong>true</strong></p>
<pre><code> ## Use host ports 80 and 443
## Disabled by default
##
hostPort:
enabled: true
ports:
http: 80
https: 443
</code></pre>
<p>kind -> <strong>DaemonSet</strong></p>
<pre><code> ## DaemonSet or Deployment
##
kind: DaemonSet
</code></pre>
<p>Then install ingress-nginx-controller from this chart.
With this setup, the <code>ingress-nginx-controller</code> pods will by default listen for traffic on ports 80 and 443 on the nodes they run on.
A simple test confirms this:</p>
<pre><code>curl -I localhost/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:31:25 GMT
</code></pre>
<p><strong>Option 3</strong> - may be considered as well if ingress-nginx is installed with LoadBalancer type.</p>
<p>Use <code>metallb</code> - software loadbalancer specifically designed for bare metal clusters. <a href="https://metallb.universe.tf/installation/#installation-by-manifest" rel="nofollow noreferrer">How to install metallb</a> and <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">configure</a></p>
<p>Once it's done and ingress-nginx is deployed, ingress-nginx will get External-IP:</p>
<p>kubectl get svc --all-namespaces</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx ingress-nginx-controller LoadBalancer 10.102.135.146 192.168.1.240 80:32400/TCP,443:32206/TCP 43s
</code></pre>
<p>Testing this again with <code>curl</code>:</p>
<pre><code>curl -I 192.168.1.240/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:55:34 GMT
</code></pre>
<p>More information about topics above:</p>
<ul>
<li><a href="https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">hostPorts and hostNetwork</a></li>
<li><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">Metallb project</a></li>
</ul>
| moonkotte |
<p>I am trying to use the example cronjob that is explained through Kubernetes documentation <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">here</a>. However, when I check it on <a href="https://k8slens.dev/" rel="nofollow noreferrer">Lens</a> (a tool to display Kubernetes info), I receive an error upon creating a pod. The only difference between the Kubernetes example and my code is I added a namespace since I do not own the server I am working on. Any help is appreciated. Below is my error and yaml file.</p>
<pre><code>Error creating: pods "hello-27928364--1-ftzjb" is forbidden: exceeded quota: test-rq, requested: limits.cpu=16,limits.memory=64Gi,requests.cpu=16,requests.memory=64Gi, used: limits.cpu=1,limits.memory=2G,requests.cpu=1,requests.memory=2G, limited: limits.cpu=12,limits.memory=24Gi,requests.cpu=12,requests.memory=24Gi
</code></pre>
<p>This is my yaml file that I apply.</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
namespace: test
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
| TheStudentProgrammer | <p>Your namespace seems to have a quota configured. Try to configure the resources on your CronJob, for example:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
namespace: test
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- /bin/sh
            - -c
- date; echo Hello from the Kubernetes cluster
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
restartPolicy: OnFailure
</code></pre>
<p>Note the <strong>resources:</strong> block and its indentation.</p>
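<p>To see how much of the quota is currently used in the namespace (the quota name <code>test-rq</code> is taken from your error message), you can check it with:</p>
<pre><code>kubectl describe resourcequota test-rq -n test
</code></pre>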
| Marcos Rosse |
<p>I downloaded Kubernetes for integration with Jenkins and created a Serviceaccount, but the secret is not automatically created.</p>
<p>In the past, I remember that a Secret was automatically created and the Token was mounted when Serviceaccount was created.</p>
<p>How can I automatically create a Secret as before?</p>
| HHJ | <p>As mentioned by @P.... In kubernetes version 1.24 this behaviour has been changed, the <strong>LegacyServiceAccountTokenNoAutoGeneration</strong> feature gate is enabled by default in 1.24.</p>
<p>New secrets containing service account tokens are no longer auto-generated and are not automatically ambient in secrets in 1.24+. Existing secrets containing service account tokens are still usable.</p>
<p>API clients scraping token content from auto-generated Secret API objects must start using the <a href="https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/" rel="noreferrer">TokenRequest API</a> to obtain a token (preferred, available in all supported versions), or you can explicitly request a secret-based token if a secret-based token is desired/needed.</p>
<p>Refer <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token" rel="noreferrer">manually create a service account API token</a> to explicitly request a secret-based token.</p>
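<p>As a rough sketch (names are placeholders), a secret-based token can still be requested explicitly by creating a Secret of type <code>kubernetes.io/service-account-token</code> that references your ServiceAccount:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: jenkins-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: jenkins-sa
type: kubernetes.io/service-account-token
</code></pre>
<p>Alternatively, a short-lived token can be requested with <code>kubectl create token &lt;serviceaccount-name&gt;</code> (available from kubectl v1.24).</p>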
| Jyothi Kiranmayi |
<p>I'm currently using the jboss/keycloak:12.0.4 container within my Kubernetes cluster with a single realm configuration. The configuration file is mounted through a configmap.
The config file currently looks like {...realm1...}. Now I need three more realms and I have <a href="https://stackoverflow.com/questions/61184888/how-to-import-multiple-realm-in-keycloak">read</a>, that I can put multiple realm configs into an array.</p>
<pre><code>[
{...realm1...},
{...realm2...}
]
</code></pre>
<p>Unfortunately this is not working on my side. The containers are crashing and in the logs I get the error message: "Error during startup: java.lang.RuntimeException: Failed to parse json". I saw that people are adding the option <code>-Dkeycloak.migration.provider=singleFile</code> to their docker commands, but I don't have those for Kubernetes that way. How can i achieve to include multiple realms?</p>
| MOE | <p>Since you are using <code>jboss/keycloak:12.0.4</code> I am assuming that you set <code>KEYCLOAK_IMPORT</code> environment variable, right?</p>
<p>The docker container <a href="https://github.com/keycloak/keycloak-containers/blob/12.0.4/server/tools/docker-entrypoint.sh#L71" rel="nofollow noreferrer">maps this environment variable</a> to <code>-Dkeycloak.import=...</code>.</p>
<p>If you want to import multiple realms this way, you can simply put each realm into its own file and pass the files as a comma-separated list.</p>
<p>So <code>KEYCLOAK_IMPORT=/tmp/realm1.json,/tmp/realm2.json</code> becomes <code>-Dkeycloak.import=/tmp/realm1.json,/tmp/realm2.json</code></p>
<p>For details please see the <a href="https://www.keycloak.org/docs/latest/server_admin/index.html#_export_import" rel="nofollow noreferrer">server administration guide</a> (scroll down to end of chapter <code>Export and Import</code>).</p>
<p>You may also want to checkout <a href="https://github.com/keycloak/keycloak-operator" rel="nofollow noreferrer">Keycloak operator</a> which provides a <a href="https://github.com/keycloak/keycloak-operator/blob/master/deploy/crds/keycloak.org_keycloakrealms_crd.yaml" rel="nofollow noreferrer">CRD for KeycloakRealms</a>.</p>
| sventorben |
<p>I'm trying to understand how kubelet watches changes from api server. I found a note over <code>syncLoop</code> function</p>
<pre><code>// kubernetes/pkg/kubelet/kubelet.go
// syncLoop is the main loop for processing changes. It watches for changes from
// three channels (file, apiserver, and http) and creates a union of them. For
// any new change seen, will run a sync against desired state and running state. If
// no changes are seen to the configuration, will synchronize the last known desired
// state every sync-frequency seconds. Never returns.
</code></pre>
<p>Does kubelet pulls events or api server pushes events to kubelet?</p>
| zeromsi | <p>The kubelet is <strong>the primary "node agent" that runs on each node</strong>. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.</p>
<p>On the Kubelet polling: The API server pushes events to kubelet. Actually, the API server supports a "watch" mode, which uses the WebSocket protocol. In this way the Kubelet is notified of any change to Pods with the Hostname equal to the hostname of the Kubelet.</p>
<p>Kubelet can obtain Pod configurations required by the local node in multiple ways. The most important way is <strong>Apiserver</strong>. As indicated in the comments, the <strong>syncLoop</strong> function is the major cycle of Kubelet. This function listens on the updates, obtains the latest Pod configurations, and synchronizes the running state and desired state. In this way, all Pods on the local node can run in the expected states. Actually, <strong>syncLoop</strong> only encapsulates <strong>syncLoopIteration</strong>, while the synchronization operation is carried out by <strong>syncLoopIteration</strong>.</p>
<p>Refer <a href="https://dzone.com/articles/understanding-the-kubelet-core-execution-frame" rel="nofollow noreferrer">Understanding the Kubelet Core Execution Frame</a> for more information.</p>
| Jyothi Kiranmayi |
<p>I have configured Prometheus on one of the kubernetes cluster nodes using <a href="https://devopscube.com/install-configure-prometheus-linux/" rel="nofollow noreferrer">this guide</a>. After that I added the following <code>prometheus.yml</code> file. I can list nodes and apiservers, but all the pods show as down with an error:</p>
<pre><code>Get "https:// xx.xx.xx:443 /metrics": dial tcp xx.xx.xx:443: connect: connection refused and for some pods the status is unknown.
</code></pre>
<p>Can someone point me what am I doing wrong here?</p>
<pre><code>cat prometheus.yml
global:
scrape_interval: 1m
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
      - targets: ['localhost:9090']
# metrics for default/kubernetes api's from the kubernetes master
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
bearer_token_file: /dfgdjk/token
api_server: https://masterapi.com:3343
tls_config:
insecure_skip_verify: true
tls_config:
insecure_skip_verify: true
bearer_token_file: /dfgdjk/token
scheme: https
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# metrics for default/kubernetes api's from the kubernetes master
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
api_server: https://masterapi.com:3343
bearer_token_file: /dfgdjk/token
tls_config:
insecure_skip_verify: true
tls_config:
insecure_skip_verify: true
bearer_token_file: /dfgdjk/token
scheme: https
relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
      regex: default;kubernetes;https
</code></pre>
| Tommy b | <p>It's impossible to get metrics to an external prometheus server without having any prometheus components inside the kubernetes cluster. This happens because the cluster network is isolated from the host's network, so it's not possible to scrape metrics from pods directly from outside the cluster.</p>
<p>Please refer to <a href="https://github.com/prometheus/prometheus/issues/4633" rel="nofollow noreferrer">Monitoring kubernetes with prometheus from outside of k8s cluster GitHub issue</a></p>
<p>There are several options:</p>
<ul>
<li>install prometheus inside the cluster using <code>prometheus operator</code> or manually - <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/#install" rel="nofollow noreferrer">example</a></li>
<li>use proxy solutions, for example this one from the same thread on GitHub - <a href="https://github.com/americanexpress/k8s-prometheus-proxy" rel="nofollow noreferrer">k8s-prometheus-proxy</a></li>
<li>on top of the prometheus installed within the cluster, it's possible to have an external prometheus in <code>federation</code> so all metrics are saved outside of the cluster. Please refer to <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">prometheus federation</a>; see the sketch after this list.</li>
</ul>
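<p>A minimal federation scrape config on the external Prometheus could look roughly like this (the in-cluster Prometheus address and the <code>match[]</code> selector are placeholders):</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    scrape_interval: 30s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'
    static_configs:
      - targets:
        - 'prometheus-in-cluster.example.com:9090'
</code></pre>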
<p>Another important part is that <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> should be installed in the kubernetes cluster as well. <a href="https://devopscube.com/setup-kube-state-metrics/" rel="nofollow noreferrer">How to set it up</a>.</p>
<p><strong>Edit:</strong> also you can refer to <a href="https://stackoverflow.com/questions/41845307">another SO question/answer</a> which confirms that only with additional steps or OP resolved it by another proxy solution.</p>
| moonkotte |
<p>I want to check the election of basic components, but the information displayed is different in different versions of kubernetes binary installation methods.</p>
<p>Was the corresponding information removed in kubernetes v1.20+? Or is there any other way to view the election of the basic components?</p>
<p>The following kubernetes configuration parameters are consistent, except that the binary executable file is replaced</p>
<blockquote>
<p>Kubernetes v1.20.8 or Kubernetes v1.20.2</p>
</blockquote>
<pre><code>$ kubectl get endpoints -n kube-system
No resources found in kube-system namespace.
</code></pre>
<blockquote>
<p>Kubernetes v1.19.12</p>
</blockquote>
<pre><code>$ kubectl get endpoints -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 9m12s
kube-scheduler <none> 9m13s
</code></pre>
| Xsky | <p>I found the cause of the problem</p>
<p>The difference between the two versions is the default value of <code>--leader-elect-resource-lock</code></p>
<blockquote>
<p>Kubernetes v1.20.8 or Kubernetes v1.20.2</p>
</blockquote>
<pre><code>--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "leases")
</code></pre>
<blockquote>
<p>Kubernetes v1.19.12</p>
</blockquote>
<pre><code>--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "endpointsleases")
</code></pre>
<p>When I don't set the <code>--leader-elect-resource-lock</code> string in <code>controller-manager</code> or <code>scheduler</code> in v1.20.8, the default value is <code>leases</code>,</p>
<p>so I can use the following command to view the leader information of the components.</p>
<pre><code>$ kubectl get leases -n kube-system
NAME HOLDER AGE
kube-controller-manager master01_dec12376-f89e-4721-92c5-a20267a483b8 45h
kube-scheduler master02_c0c373aa-1642-474d-9dbd-ec41c4da089d 45h
</code></pre>
| Xsky |
<p>We have an K8S Cluster environment with <code>1 master node</code> and <code>2 worker nodes</code> and all are on Linux and we are using flannel</p>
<p><strong>Example is given below</strong></p>
<pre><code>Master (CentOS 7) - 192.168.10.1
Worker Node-1 (CentOS 7) - 192.168.10.2
Worker Node-2 (CentOS 7) - 192.168.10.3
Worker Node-3 (Windows ) - 192.168.10.4
</code></pre>
<p>Now, we have to add a <code>Windows node</code> (eg 192.168.10.4) to existing cluster <code>192.168.10.1</code></p>
<p>According to this <a href="https://v1-17.docs.kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-nodes/" rel="nofollow noreferrer">link</a> it appears that we have to update <code>cni-conf.json</code> section of <code>flannel</code> from <code>cbr0</code> to <code>vxlan0</code> and to my understanding this is done to communicate with Windows</p>
<p>My question will this change (<code>from cbr0 to vxlan0</code>) break the existing communication between Linux to Linux?</p>
| Sathish Kumar | <h2 id="lets-start-with-definitions">Let's start with definitions.</h2>
<p><code>cbr0</code> is Kubernetes' own bridge, created to differentiate it from the <code>docker0</code> bridge used by docker.</p>
<p><code>vxlan</code> stands for <code>Virtual Extensible LAN</code> and it's an overlay network, which means it encapsulates a packet inside another packet.</p>
<p>More precise definition:</p>
<blockquote>
<p>VXLAN is an encapsulation protocol that provides data center
connectivity using tunneling to stretch Layer 2 connections over an
underlying Layer 3 network.</p>
<p>The VXLAN tunneling protocol that encapsulates Layer 2 Ethernet frames
in Layer 3 UDP packets, enables you to create virtualized Layer 2
subnets, or segments, that span physical Layer 3 networks. Each Layer
2 subnet is uniquely identified by a VXLAN network identifier (VNI)
that segments traffic.</p>
</blockquote>
<h2 id="answer">Answer</h2>
<p>No, it won't break anything in communication between Linux nodes. This is just another option for how nodes can communicate with each other using the <code>flannel</code> CNI. I also tested this on my two-node <code>linux</code> cluster and everything worked fine.</p>
<p>The main difference is how <code>flannel</code> handles packets. It will be visible via <code>netstat</code> or <code>wireshark</code>, while for Pods nothing changes, because packets are decapsulated before they reach the Pods.</p>
<p><strong>Note!</strong> I recommend testing this change on a small dev/test cluster as there may be some additional setup for <code>firewalld</code> (usual rule before making any changes on production).</p>
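<p>For reference, the VXLAN backend itself lives in the <code>net-conf.json</code> section of the same flannel ConfigMap; after the change it would look roughly like this (the VNI 4096 / port 4789 values are the ones the Kubernetes Windows guide uses for Windows interoperability; your Network CIDR may differ):</p>
<pre><code>net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "VNI": 4096,
      "Port": 4789
    }
  }
</code></pre>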
<h2 id="useful-links">Useful links:</h2>
<ul>
<li><a href="https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan" rel="nofollow noreferrer">Flannel - recommended backends for VXLAN</a></li>
<li><a href="https://itnext.io/kubernetes-journey-up-and-running-out-of-the-cloud-flannel-c01283308f0e" rel="nofollow noreferrer">Kubernetes Journey β Up and running out of the cloud β flannel</a></li>
<li><a href="https://blog.neuvector.com/article/advanced-kubernetes-networking" rel="nofollow noreferrer">How Kubernetes Networking Works β Under the Hood</a></li>
</ul>
| moonkotte |
<p>I'm using the bitnami/etcd chart and it has the ability to create snapshots via an <a href="https://github.com/bitnami/charts/blob/7faf745d6c2c3d81e9ac52db3c8de5418e1634b7/bitnami/etcd/templates/cronjob.yaml#L106" rel="nofollow noreferrer">EFS-mounted PVC</a>.</p>
<p>However, I get a permission error after the aws-efs-csi-driver is provisioned and the PVC is mounted to any <strong>non-root pod (uid/gid is 1001)</strong>.</p>
<p>I'm using helm chart <a href="https://kubernetes-sigs.github.io/aws-efs-csi-driver/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-efs-csi-driver/</a> version 2.2.0</p>
<p>values of the chart:</p>
<pre><code># you can obtain the fileSystemId with
# aws efs describe-file-systems --query "FileSystems[*].FileSystemId"
storageClasses:
- name: efs
parameters:
fileSystemId: fs-exxxxxxx
directoryPerms: "777"
gidRangeStart: "1000"
gidRangeEnd: "2000"
basePath: "/snapshots"
# enable it after the following issue is resolved
# https://github.com/bitnami/charts/issues/7769
# node:
# nodeSelector:
# etcd: "true"
</code></pre>
<p>I then manually created the PV</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: etcd-snapshotter-pv
annotations:
argocd.argoproj.io/sync-wave: "60"
spec:
capacity:
storage: 32Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs
csi:
driver: efs.csi.aws.com
volumeHandle: fs-exxxxxxx
</code></pre>
<p>Then if I mount that EFS PVC in non-rood pod I get the following error</p>
<pre><code>β klo etcd-snapshotter-001-ph8w9
etcd 23:18:38.76 DEBUG ==> Using endpoint etcd-snapshotter-001-ph8w9:2379
{"level":"warn","ts":1633994320.7789018,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0005ea380/#initially=[etcd-snapshotter-001-ph8w9:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.120.2.206:2379: connect: connection refused\""}
etcd-snapshotter-001-ph8w9:2379 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
etcd 23:18:40.78 WARN ==> etcd endpoint etcd-snapshotter-001-ph8w9:2379 not healthy. Trying a different endpoint
etcd 23:18:40.78 DEBUG ==> Using endpoint etcd-2.etcd-headless.etcd.svc.cluster.local:2379
etcd-2.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 1.6312ms
etcd 23:18:40.87 INFO ==> Snapshotting the keyspace
Error: could not open /snapshots/db-2021-10-11_23-18.part (open /snapshots/db-2021-10-11_23-18.part: permission denied)
</code></pre>
<p>As a result I have to spawn a new "root" pod, get inside the pod and manually adjust the permissions</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: perm
spec:
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
containers:
- name: app1
image: busybox
command: ["/bin/sh"]
args: ["-c", "sleep 3000"]
volumeMounts:
- name: persistent-storage
mountPath: /snapshots
securityContext:
runAsUser: 0
runAsGroup: 0
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: etcd-snapshotter
nodeSelector:
etcd: "true"
</code></pre>
<pre><code>k apply -f setup.yaml
k exec -ti perm -- ash
cd /snapshots
/snapshots # chown -R 1001.1001 .
/snapshots # chmod -R 777 .
/snapshots # exit
β k create job --from=cronjob/etcd-snapshotter etcd-snapshotter-001
job.batch/etcd-snapshotter-001 created
β klo etcd-snapshotter-001-bmv79
etcd 23:31:10.22 DEBUG ==> Using endpoint etcd-1.etcd-headless.etcd.svc.cluster.local:2379
etcd-1.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 2.258532ms
etcd 23:31:10.32 INFO ==> Snapshotting the keyspace
{"level":"info","ts":1633995070.4244702,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/snapshots/db-2021-10-11_23-31.part"}
{"level":"info","ts":1633995070.4907935,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1633995070.4908395,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379"}
{"level":"info","ts":1633995070.4965465,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1633995070.544217,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379","size":"320 kB","took":"now"}
{"level":"info","ts":1633995070.5507936,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/snapshots/db-2021-10-11_23-31"}
Snapshot saved at /snapshots/db-2021-10-11_23-31
β k exec -ti perm -- ls -la /snapshots
total 924
drwxrwxrwx 2 1001 1001 6144 Oct 11 23:31 .
drwxr-xr-x 1 root root 46 Oct 11 23:25 ..
-rw------- 1 1001 root 319520 Oct 11 23:31 db-2021-10-11_23-31
</code></pre>
<h3>Is there a way to automate this?</h3>
<p>I have this setting in storage class</p>
<pre><code>gidRangeStart: "1000"
gidRangeEnd: "2000"
</code></pre>
<p>but it has no effect.</p>
<p>PVC is defined as:</p>
<pre><code>β kg pvc etcd-snapshotter -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
name: etcd-snapshotter
namespace: etcd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 32Gi
storageClassName: efs
volumeMode: Filesystem
volumeName: etcd-snapshotter-pv
</code></pre>
| DmitrySemenov | <p>By default the StorageClass field <code>provisioningMode</code> is unset; set it to <code>provisioningMode: "efs-ap"</code> to enable dynamic provisioning with EFS access points.</p>
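<p>For example, the storage class section of your aws-efs-csi-driver values could look roughly like this with <code>provisioningMode</code> added (the other values are kept from the question):</p>
<pre><code>storageClasses:
  - name: efs
    parameters:
      provisioningMode: "efs-ap"
      fileSystemId: fs-exxxxxxx
      directoryPerms: "777"
      gidRangeStart: "1000"
      gidRangeEnd: "2000"
      basePath: "/snapshots"
</code></pre>
<p>With dynamic provisioning via access points the PV is created by the driver for each PVC, so the manually created PV should no longer be needed.</p>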
| gohm'c |
<p>I'm developing a Kubernetes controller. The desired state for this controller is captured in CRD-A and then it creates a deployment and statefulset to achieve the actual state. Currently I'm using server side apply to create/update these deployment and statefulsets.</p>
<p>The controller establishes a watch on CRD-A as well as on the deployments and statefulsets. This is to ensure that if there is a change in the deployment/statefulset, reconcile() is notified and takes action to fix it. Currently reconcile() always calls server side apply to create/update, and this leads to another watch event (the resource version changes on every server side apply), resulting in repeated/infinite calls to reconcile().</p>
<p>One approach I've been thinking about is to leverage 'generation' on deployment/statefulset i.e. the controller will maintain a in-memory map of (k8s object -> generation) and on reconcile() compare the value in this map to what is present in the indexed informer cache; do you see any concerns with this approach? And are there better alternatives to prevent repeated/infinite reconcile() calls?</p>
| karunasagar | <p>Ideally, if the object you provide in the server side apply is not changed, the generation and the resourceVersion of the object should BOTH NOT be changed.</p>
<p>But sometimes that's not the case, see this github issue:<a href="https://github.com/kubernetes/kubernetes/issues/95460" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/95460</a></p>
<p>Fortunately, the generation always stays the same, so yes, you can leverage this field to avoid the reconcile dead loop, by adding a <code>GenerationChangedPredicate</code> filter to your controller, which will skip reconciling if the generation does not change, and it's often used in conjuction with the <code>LabelChangedPredicate</code>, which filters events when the object's labels does not change.</p>
<p>Here's how you will set up your controller with these two predicates:</p>
<pre><code>ctrl.NewControllerManagedBy(mgr).
For(&Object{}).
Owns(&appsv1.StatefulSet{}).
Owns(&appsv1.Deployment{}).
// together with Server Side Apply
// this predicates prevents meaningless reconcilations from being triggered
WithEventFilter(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.LabelChangedPredicate{})).
Complete(r)
</code></pre>
| Keven Deng |
<p>The goal is to support the following two use cases:</p>
<ol>
<li><p>Our organization has one shared domain, <code>shared.domain.com</code>, so each service needs to have a unique path. For example, for our service, <code>myservice</code>, we choose <code>/myservice</code>. So we want a request to <code>shared.domain.com/myservice/users</code> to be rewritten and routed to our service with the path <code>/user</code>.</p>
</li>
<li><p>We also have a service-specific domain, <code>myservice.domain.com</code>. In this case, we don't need a rewrite, per se: a request to <code>myservice.domain.com/users</code> should be passed through to our service with the path <code>/user</code>. However, since we need a rewrite to satisfy #1 above, we need to work within the "framework" of a rewrite for this use case as well.</p>
</li>
</ol>
<p>We're using a Kubernetes Ingress NGINX rewrite (<code>nginx.ingress.kubernetes.io/rewrite-target</code>). Use case #1 is working fine. However, we can't figure out how to get #2 working.</p>
<p>For now we had to use the same path for both domains which is not ideal because it's not backwards compatible for anyone who was calling <code>myservice.domain.com/users</code>. Now they have to call <code>myservice.domain.com/myservice/users</code>. We could make a code change to make this backwards compatible for our callers, but again that's not ideal.</p>
<p>Here's our configuration:</p>
<pre class="lang-yaml prettyprint-override"><code># ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: $APP_NAME
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 'https'
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/proxy-redirect-from: "http://"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://"
spec:
rules:
- host: shared.domain.com
http:
paths:
- path: /myservice(/|$)(.*)
backend:
serviceName: $APP_NAME
servicePort: http
- host: myservice.domain.com
http:
paths:
- path: "(/|$)(.*)" # Doesn't work
backend:
serviceName: $APP_NAME
servicePort: http
</code></pre>
<p>We used these docs as a reference: <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target</a>.</p>
| David Good | <p>You need to use this regex in the path for the <code>myservice.domain.com</code>:</p>
<p><code>/*(/|$)(.*)</code></p>
<p>Also, you're using the <code>v1beta1</code> API which is already deprecated and will be unavailable soon:</p>
<blockquote>
<p>Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+,
unavailable in v1.22+; use networking.k8s.io/v1 Ingress</p>
</blockquote>
<p>Below is <code>ingress.yaml</code> written using <code>v1</code> with the correct regex for <code>myservice.domain.com</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: $APP_NAME
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 'https'
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/proxy-redirect-from: "http://"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://"
spec:
rules:
- host: shared.domain.com
http:
paths:
- path: /myservice(/|$)(.*)
pathType: Prefix
backend:
service:
name: $APP_NAME
port:
number: 80
- host: myservice.domain.com
http:
paths:
- path: /*(/|$)(.*)
pathType: Prefix
backend:
service:
name: $APP_NAME
port:
number: 80
</code></pre>
| moonkotte |
<p>First I installed Lens on my Mac. When I tried to shell into one of the pods, there was a message saying that I don't have kubectl installed, so I installed kubectl and it worked properly.</p>
<p>Now I try to change configmaps but I get an error</p>
<blockquote>
<p>kubectl/1.18.20/kubectl not found</p>
</blockquote>
<p>When I check the kubectl folder there are 2 kubectl versions: 1.18.20 and 1.21.</p>
<p>1.21 is the one that I installed before.</p>
<p>How can I change the kubectl version defined in Lens (1.18.20) to 1.21?</p>
<p>Note:</p>
<ul>
<li>Lens: 5.2.0-latest.20210908.1</li>
<li>Electron: 12.0.17</li>
<li>Chrome: 89.0.4389.128</li>
<li>Node: 14.16.0</li>
<li>Β© 2021 Mirantis, Inc.</li>
</ul>
<p>Thanks in advance, sorry for bad English</p>
| lauwis premium | <p>You can set the kubectl path at File -> Preferences -> Kubernetes -> PATH TO KUBECTL BINARY. Alternatively, you can check "Download kubectl binaries matching the Kubernetes cluster version"; this way Lens will use the same version as your target cluster.</p>
<p>By the way, you should use the latest version, v5.2.5.</p>
| gohm'c |
<p>Situation :<br />
There are two containers in the prometheus pod ( config-reloader, prometheus )<br />
I set the resources.limits.memory as 50Mi, 32Gi respectively.</p>
<p>The metric container_memory_failcnt has been increased dramatically from 10 to 8000 within 5 minutes (precisely rate(container_memory_failcnt{}[5m]) )</p>
<p>The metric container_memory_failcnt tells how many times the container hits its memory limit.</p>
<p>But according to the metric container_memory_working_set_bytes, the prometheus container used 18Gi of memory.<br />
The pod is not killed by OOM either. But the metric container_memory_failcnt increased dramatically.</p>
<p>Is OOM different from hitting the memory limit?</p>
<p>And I want to know some possible reasons why the prometheus container used so much memory (18Gi) within 5 minutes. (It usually uses 10Gi or below.)</p>
| JAESANGPARK | <p>After searching on Google for 2 days, I got the answer.<br />
container_memory_failcnt really does count how many times the target container hits its memory limit.<br />
This metric goes along with container_memory_usage_bytes.</p>
<p>The metric container_memory_working_set_bytes is the actual amount of memory the container is currently using,<br />
and the OOM killer watches this metric.</p>
<p>So in my case, container_memory_failcnt had been increasing, but container_memory_working_set_bytes stayed lower than the container's limits.memory, so the pod was not OOM-killed.</p>
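<p>As a rough sketch, these are the queries I would compare (the namespace and container labels are placeholders):</p>
<pre><code># actual memory in use - what the OOM killer compares against limits.memory
container_memory_working_set_bytes{namespace="monitoring", container="prometheus"}

# how often the usage counter hit the limit
rate(container_memory_failcnt{namespace="monitoring", container="prometheus"}[5m])
</code></pre>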
<p>Special thanks to Bob Cotton.</p>
<p><a href="https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d" rel="nofollow noreferrer">https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d</a></p>
| JAESANGPARK |
<p>Is there a 'master switch' to enable/disable KEDA and the HPA?
I can enable/disable scaling rules by editing the replica count to 0, but is there a main enable/disable field?</p>
<pre><code> cooldownPeriod: 1800
maxReplicaCount: 8
minReplicaCount: 2
pollingInterval: 300
</code></pre>
| Ahmad Masalha | <p>You can enable/disable scaling rules either by editing the replica count to 0 or you can use a single field called <strong>Pause autoscaling</strong>.</p>
<p><strong>Pause autoscaling</strong> lets you enable/disable autoscaling by using <strong>autoscaling.keda.sh/paused-replicas</strong> annotation. It can be useful to instruct KEDA to pause autoscaling of objects, if you want to do cluster maintenance or you want to avoid resource starvation by removing non-mission-critical workloads.</p>
<p>You can enable this by adding the below annotation to your <code>ScaledObject</code> definition:</p>
<pre><code>metadata:
annotations:
autoscaling.keda.sh/paused-replicas: "0"
</code></pre>
<p>The presence of this annotation will pause autoscaling no matter what number of replicas is provided. The above annotation will scale your current workload to 0 replicas and pause autoscaling. You can set the value of replicas for an object to be paused at any arbitrary number. To enable autoscaling again, simply remove the annotation from the <code>ScaledObject</code> definition.</p>
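<p>For example, to toggle this on an existing ScaledObject without editing the manifest (the object name and namespace are placeholders):</p>
<pre><code># pause autoscaling and scale the workload down to 0 replicas
kubectl annotate scaledobject my-scaledobject -n my-namespace autoscaling.keda.sh/paused-replicas="0" --overwrite

# resume autoscaling by removing the annotation
kubectl annotate scaledobject my-scaledobject -n my-namespace autoscaling.keda.sh/paused-replicas-
</code></pre>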
<p>Refer to the <a href="https://keda.sh/docs/2.7/concepts/scaling-deployments/" rel="noreferrer">KEDA documentation</a> for more information.</p>
| Jyothi Kiranmayi |
<p>I'm trying to run open-source with minimal costs on the cloud and would love to run it on k8s without the hassle of managing it (managed k8s cluster). Is there a free tier option for a small-scale project in any cloud provider?</p>
<p>If there is one, which parameters should I choose to get the free tier?</p>
| Jean Carlo Machado | <p>You can use <a href="https://www.ibm.com/cloud/free" rel="nofollow noreferrer">IBM cloud</a> which provides a single worker node Kubernetes cluster along with container registry like other cloud providers. This is more than enough for a beginner to try the concepts of Kubernetes.</p>
<p>You can also use <a href="https://labs.play-with-k8s.com/" rel="nofollow noreferrer">Tryk8s</a> which provides a playground for trying Kubernetes for free. Play with Kubernetes is a labs site provided by <a href="https://docker.com/" rel="nofollow noreferrer">Docker</a> and created by Tutorius. Play with Kubernetes is a playground which allows users to run K8s clusters in a matter of seconds. It gives the experience of having a free Alpine Linux Virtual Machine in the browser. Under the hood Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.</p>
<p>If you want to use more services and resources, based on your use case you can try other cloud providers, they may not provide an indefinitely free trial but have no restriction on the resources.</p>
<p>For example, Google Kubernetes Engine (GKE) provides $300 of credit to fully explore and conduct an assessment of Google Cloud. You won't be charged until you upgrade, and the credit can be used for a 3-month period from account creation. There is no restriction on the resources or the number of nodes for creating a cluster. You can also add Istio and try Cloud Run (Knative).</p>
<p>Refer <a href="https://github.com/learnk8s/free-kubernetes" rel="nofollow noreferrer">Free Kubernetes</a> which Lists the free Trials/Credit for Managed Kubernetes Services.</p>
| Jyothi Kiranmayi |
<p>I have a pod that belongs to a daemonset and also contains several containers. I want to update the container images inside the pod. Therefore I am curious to know whether restarting the daemonset will do the job (because imagePullPolicy is currently set to Always), i.e. whether restarting the daemonset will pull the newly updated image. Is this the right way to do such things?
Thanks.</p>
| Azad Md Abul Kalam | <p>Using <code>kubectl set image -n &lt;namespace&gt; daemonset &lt;ds name&gt; &lt;container name&gt;=&lt;image&gt;:&lt;tag&gt;</code> will do the trick and does not require a restart command.</p>
<p>To see the update status, run <code>kubectl rollout status -n &lt;namespace&gt; daemonset &lt;ds name&gt;</code>.</p>
| gohm'c |
<p>In spite of 21Gi being set on the claimed volume, the pod shows 8E (the full possible size of EFS).</p>
<p>Is this OK and is the storage size still limited? Or did I make a mistake in the configuration that needs to be changed, or something else?</p>
<p>I would appreciate your help.</p>
<p><strong>Volume:</strong></p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
monitoring-eks-falcon-victoriametrics 21Gi RWX Retain Bound victoriametrics/victoriametrics-data
</code></pre>
<p><strong>Pod:</strong></p>
<pre><code>Filesystem Size Used Available Use% Mounted on
fs-efs.us-....s.com:/ 8.0E 0 8.0E 0% /data
</code></pre>
<p><strong>Persistent Volumes</strong></p>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: monitoring-eks-falcon-victoriametrics
uid: f43e12d0-77ab-4530-8c9e-cfbd3c641467
resourceVersion: '28847'
labels:
Name: victoriametrics
purpose: victoriametrics
annotations:
pv.kubernetes.io/bound-by-controller: 'yes'
finalizers:
- kubernetes.io/pv-protection
spec:
capacity:
storage: 21Gi
nfs:
server: fs-.efs.us-east-1.amazonaws.com
path: /
accessModes:
- ReadWriteMany
claimRef:
kind: PersistentVolumeClaim
namespace: victoriametrics
name: victoriametrics-data
uid: 8972e897-4e16-a64f-4afd8f90fa89
apiVersion: v1
resourceVersion: '28842'
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
volumeMode: Filesystem
</code></pre>
<p><strong>Persistent Volume Claims</strong></p>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: victoriametrics-data
namespace: victoriametrics
uid: 8972e897-4e16-a64f-4afd8f90fa89
resourceVersion: '28849'
labels:
Name: victoriametrics
purpose: victoriametrics
annotations:
Description: Volume for Victoriametrics DB
pv.kubernetes.io/bind-completed: 'yes'
finalizers:
- kubernetes.io/pvc-protection
spec:
accessModes:
- ReadWriteMany
selector:
matchLabels:
k8s-app: victoriametrics
purpose: victoriametrics
matchExpressions:
- key: k8s-app
operator: In
values:
- victoriametrics
resources:
limits:
storage: 21Gi
requests:
storage: 21Gi
volumeName: monitoring-eks-falcon-victoriametrics
storageClassName: efs-sc
volumeMode: Filesystem
status:
phase: Bound
accessModes:
- ReadWriteMany
capacity:
storage: 21Gi
</code></pre>
<p><strong>Pod deployment</strong></p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
...
spec:
...
spec:
volumes:
- name: victoriametrics-data
persistentVolumeClaim:
claimName: victoriametrics-data
containers:
- name: victoriametrics
...
volumeMounts:
- name: victoriametrics-data
mountPath: /data
mountPropagation: None
...
</code></pre>
| Rostyslav Malenko | <p>The number "8E" serves as an indicator, it is not a real quota. AWS EFS does not support quota (eg. <a href="https://docs.aws.amazon.com/efs/latest/ug/limits.html#nfs4-unsupported-features" rel="nofollow noreferrer">FATTR4_QUOTA_AVAIL_HARD</a>). It generally means you have "unlimited" space on this mount. There's nothing wrong with your spec; the number specified in the PVC's <code>resources.requests.storage</code> is used to match PV's <code>capacity.storage</code>. It doesn't mean you can only write 21GB on the EFS mount.</p>
| gohm'c |
<p>I have 2 namespaces in my kubernetes cluster, one called <code>first-nginx</code> and the other called <code>second-nginx</code>. I am using the chart <a href="https://ingress-nginx%20https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer"><strong>ingress-nginx</strong></a>.. <strong>NOT</strong> the <strong>stable/nginx-ingress</strong> as that is now deprecated.</p>
<p>I am attempting to install multiple nginx controllers, because I need them to be exposed by an already created static IP in GKE. I have successfully installed my first chart in the first-nginx namespace like this:</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx --namespace first-nginx --set ingress-class="nginx-devices --set controller.service.loadBalancerIP={first-IP-address}"
</code></pre>
<p>I am now attempting to do the same in the second namespace like this:</p>
<pre><code>helm install nginx-ingress-2 ingress-nginx/ingress-nginx --namespace second-nginx --set ingress-class="nginx-devices --set controller.service.loadBalancerIP={second-IP-address}"
</code></pre>
<p>However i get an error as shown below.</p>
<blockquote>
<p>Error: rendered manifests contain a resource that already exists.
Unable to continue with install: IngressClass "nginx" in namespace ""
exists and cannot be imported into the current release: invalid
ownership metadata; annotation validation error: key
"meta.helm.sh/release-name" must equal "nginx-ingress-2": current
value is "nginx-ingress"; annotation validation error: key
"meta.helm.sh/release-namespace" must equal "second-nginx": current
value is "first-nginx"</p>
</blockquote>
<p>How do I solve this? This seems to work when I use the stable/nginx-ingress chart, where I can do something like this: <code>helm install nginx-ingress-devices stable/nginx-ingress --namespace second-nginx --set controller.ingressClass="nginx-devices"</code></p>
<p>How do I achieve the same thing with <strong>ingress-nginx</strong>?</p>
| floormind | <p>You need to define an additional <code>controller.ingressClassResource.controllerValue</code> for the second ingress-nginx, so that when an ingress resource refers to this class, it knows which controller to engage.</p>
<pre><code>helm install nginx-ingress-devices ingress-nginx/ingress-nginx \
--namespace second-nginx \
--set controller.ingressClassResource.name=second-nginx \
--set controller.ingressClassResource.controllerValue="k8s.io/second-nginx" \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassByName=true
</code></pre>
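<p>An ingress resource that should be handled by this second controller then references the new class, roughly like this (the host and service names are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: second-nginx
spec:
  ingressClassName: second-nginx
  rules:
  - host: second.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
</code></pre>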
| gohm'c |
<p>I am new to Kubernetes, and I want to run a simple Flask program in Docker on Kubernetes. The image works successfully in Docker, but when I apply the K8s.yaml with <code>kubectl apply -f k8s.yaml</code> and execute <code>minikube service flask-app-service</code>, the web request fails with ERR_CONNECTION_REFUSED, and the pods' status is <code>Error: ErrImageNeverPull</code>.</p>
<p>app.py:</p>
<pre class="lang-python prettyprint-override"><code># flask_app/app/app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello, World!"
if __name__ == '__main__':
app.debug = True
app.run(debug=True, host='0.0.0.0')
</code></pre>
<p>Dockerfile:</p>
<pre class="lang-sh prettyprint-override"><code>FROM python:3.9
RUN mkdir /app
WORKDIR /app
ADD ./app /app/
RUN pip install -r requirement.txt
EXPOSE 5000
CMD ["python", "/app/app.py"]
</code></pre>
<p>K8s.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
name: flask-app-service
spec:
selector:
app: flask-app
ports:
- protocol: "TCP"
port: 5000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: flask-app
spec:
selector:
matchLabels:
app: flask-app
replicas: 3
template:
metadata:
labels:
app: flask-app
spec:
containers:
- name: flask-app
image: flask_app:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
</code></pre>
<p>After deploying I try to connect to <code>http://127.0.0.1:51145</code> from a browser, but it fails to connect with an <code>ERR_CONNECTION_REFUSED</code> message. I have a <a href="https://i.stack.imgur.com/zXtW3.png" rel="nofollow noreferrer">screenshot showing a more detailed Chinese-language error message</a> if that detail is helpful.</p>
<hr />
<p>update:</p>
<p>After switching <code>imagePullPolicy</code> from <code>Never</code> to <code>Always</code> or <code>IfNotPresent</code>, the pod still can't run:
<a href="https://i.stack.imgur.com/gYTz9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gYTz9.png" alt="enter image description here" /></a>
When I try the <code>docker images</code> command, it shows the image exists:
<a href="https://i.stack.imgur.com/F3xa3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F3xa3.png" alt="enter image description here" /></a>
But when I pull the image with docker pull, it shows me this error:
<a href="https://i.stack.imgur.com/nKRbc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nKRbc.png" alt="enter image description here" /></a>
It still doesn't work after docker login:
<a href="https://i.stack.imgur.com/89XG2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/89XG2.png" alt="enter image description here" /></a></p>
<p>P.S. I followed this website for practice: <a href="https://lukahuang.com/running-flask-on-minikube/" rel="nofollow noreferrer">https://lukahuang.com/running-flask-on-minikube/</a></p>
| Howard | <p>Based on the error in the question:</p>
<blockquote>
<p>pods status Error: ErrImageNeverPull.</p>
</blockquote>
<p>The pod doesn't start because you have <code>imagePullPolicy: Never</code> in your deployment manifest, which means that if the image is missing on the node, it won't be pulled at all.</p>
<p>This is from official documentation:</p>
<blockquote>
<p>The imagePullPolicy for a container and the tag of the image affect
when the kubelet attempts to pull (download) the specified image.</p>
</blockquote>
<p>You need to switch it to <code>IfNotPresent</code> or <code>Always</code>.</p>
<p>See more in <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">image pull policy</a>.</p>
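<p>For illustration, this is roughly how the relevant part of the deployment from the question would look after the change (only the <code>imagePullPolicy</code> line differs):</p>
<pre><code>      containers:
      - name: flask-app
        image: flask_app:latest
        imagePullPolicy: IfNotPresent   # was "Never"
        ports:
        - containerPort: 5000
</code></pre>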
<p>After everything is done correctly, pod status should be <code>running</code> and then you can connect to the pod and get the response back. See the example output:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ubuntu 1/1 Running 0 4d
</code></pre>
| moonkotte |
<p>Basically, I had installed Prometheues-Grafana from the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="noreferrer">kube-prometheus-stack</a> using the provided helm chart repo <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="noreferrer">prometheus-community</a></p>
<pre><code># helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack
</code></pre>
<p>They are working fine.</p>
<p>But the problem I am facing now is integrating <strong>Thanos</strong> with this existing <strong>kube-prometheus-stack</strong>.</p>
<p>I installed thanos from the <a href="https://artifacthub.io/packages/helm/bitnami/thanos" rel="noreferrer">bitnami helm chart repo</a></p>
<pre><code># helm repo add bitnami https://charts.bitnami.com/bitnami
# helm install thanos bitnami/thanos
</code></pre>
<p>I can load the Thanos Query Frontend GUI, but no metrics are showing there.</p>
<p><a href="https://i.stack.imgur.com/CDfVO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CDfVO.png" alt="thanos metrics" /></a>
<a href="https://i.stack.imgur.com/ZlttQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZlttQ.png" alt="thanos store" /></a></p>
<p>I am struggling now to get it working properly. Is it because Thanos comes from a completely different helm chart than the Prometheus-operator-Grafana stack?</p>
<p>My Kubernetes cluster on AWS has been created using Kops, and I use a GitLab pipeline and helm to deploy apps to the cluster.</p>
| vjwilson | <p>It's not enough to simply install them, you need to <strong>integrate</strong> <code>prometheus</code> with <code>thanos</code>.</p>
<p>Below I'll describe all steps you need to perform to get the result.</p>
<p>First, a bit of theory. The most common approach to integrate them is to use a <code>thanos sidecar</code> container in the <code>prometheus</code> pod. You can read more <a href="https://www.infracloud.io/blogs/prometheus-ha-thanos-sidecar-receiver/" rel="noreferrer">here</a>.</p>
<p><strong>How this is done:</strong></p>
<p>(considering that the installation is clean, it can be easily deleted and reinstalled from scratch).</p>
<ol>
<li>Get <code>thanos sidecar</code> added to the <code>prometheus</code> pod.</li>
</ol>
<p>Pull <code>kube-prometheus-stack</code> chart:</p>
<pre><code>$ helm pull prometheus-community/kube-prometheus-stack --untar
</code></pre>
<p>You will have a folder with a chart. You need to modify <code>values.yaml</code>, two parts to be precise:</p>
<pre><code># Enable thanosService
prometheus:
thanosService:
enabled: true # by default it's set to false
# Add spec for thanos sidecar
prometheus:
prometheusSpec:
thanos:
image: "quay.io/thanos/thanos:v0.24.0"
version: "v0.24.0"
</code></pre>
<p>Keep in mind, this feature is still experimental:</p>
<blockquote>
<pre><code>## This section is experimental, it may change significantly without deprecation notice in any release.
## This is experimental and may change significantly without backward compatibility in any release.
## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#thanosspec
</code></pre>
</blockquote>
<p>Once it's done, install the <code>prometheus</code> chart with edited <code>values.yaml</code>:</p>
<pre><code>$ helm install prometheus . -n prometheus --create-namespace # installed in prometheus namespace
</code></pre>
<p>And check that sidecar is deployed as well:</p>
<pre><code>$ kubectl get pods -n prometheus | grep prometheus-0
prometheus-prometheus-kube-prometheus-prometheus-0 3/3 Running 0 67s
</code></pre>
<p>There should be 3 containers running (by default it's 2). You can inspect it in more detail with the <code>kubectl describe</code> command.</p>
<ol start="2">
<li>Setup <code>thanos</code> chart and deploy it.</li>
</ol>
<p>Pull the <code>thanos</code> chart:</p>
<pre><code>$ helm pull bitnami/thanos --untar
</code></pre>
<p>Edit <code>values.yaml</code>:</p>
<pre><code>query:
dnsDiscovery:
enabled: true
sidecarsService: "prometheus-kube-prometheus-thanos-discovery" # service which was created before
sidecarsNamespace: "prometheus" # namespace where prometheus is deployed
</code></pre>
<p>Save and install this chart with edited <code>values.yaml</code>:</p>
<pre><code>$ helm install thanos . -n thanos --create-namespace
</code></pre>
<p>Check that it works:</p>
<pre><code>$ kubectl logs thanos-query-xxxxxxxxx-yyyyy -n thanos
</code></pre>
<p>We are interested in this line:</p>
<pre><code>level=info ts=2022-02-24T15:32:41.418475238Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.44.1.213:10901 extLset="{prometheus=\"prometheus/prometheus-kube-prometheus-prometheus\", prometheus_replica=\"prometheus-prometheus-kube-prometheus-prometheus-0\"}"
</code></pre>
<ol start="3">
<li>Now go to the UI and see that metrics are available:</li>
</ol>
<p><a href="https://i.stack.imgur.com/Sii8l.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Sii8l.png" alt="enter image description here" /></a></p>
<p><strong>Good article to read:</strong></p>
<ul>
<li><a href="https://medium.com/nerd-for-tech/deep-dive-into-thanos-part-ii-8f48b8bba132" rel="noreferrer">Deep Dive into Thanos-Part II</a></li>
</ul>
| moonkotte |
<p>I am migrating the Kubernetes deployments from API version <code>extensions/v1beta1</code> to <code>apps/v1</code>.</p>
<p>I've changed the API group in deployment to <code>apps/v1</code> and applied the deployment.</p>
<p>However when I check the deployment using <code>get deployment -o yaml</code> it's showing deployment in <code>extensions/v1beta1</code> API group and when I check using <code>get deployment.apps -o yaml</code>, it's showing in <code>app/v1</code> API group.</p>
<p>Can you please let us know a way to identify the API group of the created deployment YAML, other than displaying the YAMLs using the commands <code>get deployment -o yaml</code> or <code>get deployment.apps -o yaml</code>, since the output apiVersion is just based on the command we give, irrespective of the one with which it was created?</p>
<p>I just need to make sure that my deployment is migrated to <code>apps/v1</code>.</p>
| codewarrior | <p>As I understand, you want to view the last applied configuration for the deployments?</p>
<p>If yes, you should use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-view-last-applied-em-" rel="nofollow noreferrer"><code>kubectl apply view-last-applied</code> command</a>.</p>
<p>Example for the one specific deployment:</p>
<pre><code>kubectl apply view-last-applied deployment {your-deployment-name}
</code></pre>
<p>Example for all deployments:</p>
<pre><code>kubectl get deployments -o name | xargs kubectl apply view-last-applied
</code></pre>
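<p>For illustration (the deployment name is just an example), the output includes the <code>apiVersion</code> from the last applied manifest, so you can confirm the migration there:</p>
<pre><code>$ kubectl apply view-last-applied deployment webapp
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
</code></pre>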
| Mikolaj S. |
<p>I want to create a cluster which is very similar to this one where they are using azure.
<a href="https://medium.com/devops-dudes/how-to-setup-completely-private-azure-kubernetes-service-aks-clusters-with-azure-private-links-b800a5a6776f" rel="nofollow noreferrer">Link to the tutorial</a></p>
<p>Whatever tutorials I have gone through for AWS EKS block access bidirectionally. But I need a bastion host and don't want the application to be inaccessible via the public internet (www).</p>
<p>Is there a possible solution for this problem.</p>
| Snehlata Giri | <p>The AKS tutorial you posted aims to create a completely private Azure Kubernetes Service (AKS) cluster.</p>
<p>Anyway, in either case you can use <a href="https://eksctl.io/introduction/" rel="nofollow noreferrer">eksctl</a> to easily create one. Here's a quick example where public access to the control plane is disabled and the node group is allowed to use NAT for Internet access. You can replace the <> placeholders with your own preferences:</p>
<pre><code>cat << EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: <my cluster name>
region: <my region name>
vpc:
clusterEndpoints:
privateAccess: true
publicAccess: false
nodeGroups:
- name: <my self-managed node group name>
instanceType: <t3a.medium>
desiredCapacity: 1
EOF
</code></pre>
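<p>Note that a private control plane endpoint doesn't stop you from exposing the application publicly; as an illustration (names and ports are placeholders), a regular Service of type LoadBalancer in front of your workload would still get a public endpoint, while you'd use a bastion in the same VPC to reach the private API server:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder
spec:
  type: LoadBalancer
  selector:
    app: my-app         # placeholder: must match your pod labels
  ports:
  - port: 80
    targetPort: 8080
</code></pre>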
| gohm'c |
<p>I use a MySQL on Kubernetes with a <code>postStart</code> hook which should run a query after the start of the database.</p>
<p>This is the relevant part of my <code>template.yaml</code>:</p>
<pre><code> spec:
containers:
- name: ${{APP}}
image: ${REGISTRY}/${NAMESPACE}/${APP}:${VERSION}
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- hostname && sleep 12 && echo $QUERY | /opt/rh/rh-mysql80/root/usr/bin/mysql
-h localhost -u root -D grafana
-P 3306
ports:
- name: tcp3306
containerPort: 3306
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
timeoutSeconds: 1
livenessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 120
timeoutSeconds: 1
</code></pre>
<p>When the pod starts, the PVC for the database gets corrupted and the pod crashes. When I restart the pod, it works. I guess the query runs when the database is not up yet. I guess this might get fixed with the readiness probe, but I am not an expert on these topics.</p>
<p>Did anyone else run into a similar issue and knows how to fix it?</p>
| Data Mastery | <p>Note that <code>postStart</code> will be called at least once but may also be called more than once. This makes <code>postStart</code> a bad place to run a query.</p>
<p>You can set the pod's <code>restartPolicy: OnFailure</code> and run the query in a separate MySQL container. Start your second container with a <a href="https://github.com/groundnuty/k8s-wait-for" rel="nofollow noreferrer">wait</a> and then run your query. Note that your query should produce an idempotent result or your data integrity may break; consider what happens when the pod is re-created with the existing data volume.</p>
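<p>A rough sketch of that idea (the image, credentials and query are placeholders, and the linked k8s-wait-for image could replace the shell loop):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-query
spec:
  restartPolicy: OnFailure
  containers:
  - name: mysql
    image: mysql:8            # placeholder image
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example          # placeholder
  - name: run-query
    image: mysql:8            # reuse the client from the same image
    command:
    - /bin/sh
    - -c
    - |
      # wait until the server accepts connections, then run an idempotent query
      until mysql -h 127.0.0.1 -uroot -pexample -e 'SELECT 1'; do sleep 2; done
      mysql -h 127.0.0.1 -uroot -pexample -e "$QUERY"
    env:
    - name: QUERY
      value: "SELECT 1"       # placeholder query
</code></pre>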
| gohm'c |
<p>I have an ingress pod deployed with Scaleway on a Kubernetes cluster and it exists in the kube-system namespace. I accidentally created a load balancer service in the <code>default</code> namespace and I don't want to delete it and recreate a new one in the <code>kube-system</code> namespace, so I want my load balancer in the <code>default</code> namespace to have the ingress pods as endpoints:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ecom-loadbalancer
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- port: 443
targetPort: 443
type: LoadBalancer
</code></pre>
<p>Is that possible? Is there something I should add in the selector fields?</p>
<hr />
<p>I tried creating a <code>clusterIP</code> service in the <code>kube-system</code> namespace that communicates with the ingress pods, it worked.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ecom-loadbalancer
namespace: kube-system
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- port: 443
targetPort: 443
type: ClusterIP
</code></pre>
<p>Then, I tried referencing that service to my <code>loadbalancer</code> in the default namespace like that:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ecom-loadbalancer
spec:
type: ExternalName
externalName: ecom-loadbalancer.kube-system.svc.cluster.local
ports:
- port: 443
targetPort: 443
type: LoadBalancer
</code></pre>
<p>But no result. The <code>clusterIP</code> points to the Ingress pods, but the load balancer remains without endpoints.</p>
| joe1531 | <p>Here are at least three reasons why you need to re-create it properly (two technical and one piece of advice):</p>
<ol>
<li><p><code>ExternalName</code> is used for accessing external services or services in other namespaces. The way it works is that when the service's name is looked up, a CNAME record is returned. In other words, it works for egress connections when requests should be directed somewhere else.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">service - externalname type</a> and use cases <a href="https://akomljen.com/kubernetes-tips-part-1/" rel="nofollow noreferrer">Kubernetes Tips - Part 1 blog post from Alen Komljen</a>.</p>
<p>Your use case is different. You want to get requests from outside the kubernetes cluster to the exposed loadbalancer and then direct traffic from it to another service within the cluster. That's not possible with built-in kubernetes mechanisms, because a service can be either <code>LoadBalancer</code> or <code>ExternalName</code>. You can see in your last manifest there are <strong>two</strong> types, which will not work at all. See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">service types</a>.</p>
</li>
<li><p>Avoid unnecessary complexity. It will be hard to keep track of everything since there will be more and more services and other parts.</p>
</li>
<li><p>Based on documentation it's generally possible to have issues using <code>ExternalName</code> with some protocols:</p>
<blockquote>
<p>Warning: You may have trouble using ExternalName for some common
protocols, including HTTP and HTTPS. If you use ExternalName then the
hostname used by clients inside your cluster is different from the
name that the ExternalName references.</p>
<p>For protocols that use hostnames this difference may lead to errors or
unexpected responses. HTTP requests will have a Host: header that the
origin server does not recognize; TLS servers will not be able to
provide a certificate matching the hostname that the client connected to.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">Reference - Warning</a></p>
</li>
</ol>
| moonkotte |
<p>I have this manifest</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager
namespace: "cert-manager"
spec:
template:
spec:
containers:
nodeSelector:
app.myapp.com/environment: system
</code></pre>
<p>When I try to apply with <code>kubectl</code> I get this error:</p>
<blockquote>
<p>error: wrong Node Kind for expected: SequenceNode was MappingNode: value: {nodeSelector:
app.myapp.com/environment: system}</p>
</blockquote>
<p>What can be?</p>
| Rodrigo | <p>As mentioned by @mdaniel, <code>containers</code> is an array, so each container entry has to start with a leading <code>-</code>. Here in your use case, the <code>containers:</code> field has no list items at all; <code>nodeSelector</code> has been placed under it as a dict instead of an array member, which is why the API complains that it expected a SequenceNode but got a MappingNode. You need to add a container entry with a leading <code>-</code> and move <code>nodeSelector</code> up to the pod spec level.</p>
<p>Refer to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> for more information on how to define containers.</p>
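<p>For illustration, a minimal corrected version of the manifest from the question could look like this (the container name and image are placeholders, and selector/labels are omitted as in the question); note the leading <code>-</code> for the container entry and the placement of <code>nodeSelector</code> directly under the pod spec:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: "cert-manager"
spec:
  template:
    spec:
      nodeSelector:
        app.myapp.com/environment: system
      containers:
      - name: cert-manager      # each container is a list item, hence the "-"
        image: example-image    # placeholder
</code></pre>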
| Jyothi Kiranmayi |
<p>How can I find easily which CNI plugin is configured and where is the config file associated with it?</p>
| maiky | <p>You can look into the contents of <code>/etc/cni/net.d</code> and the binaries at <code>/opt/cni/bin</code>. If you don't find any of these, you can check the kubelet arguments <code>--cni-conf-dir</code> and <code>--cni-bin-dir</code>, which will point you to the custom location of your CNI plugin.</p>
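<p>For example (paths are the defaults, run on the node itself):</p>
<pre><code># list and inspect the CNI config files
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conf*
# list the CNI plugin binaries
ls /opt/cni/bin/
# check whether kubelet was started with custom CNI directories
ps aux | grep kubelet
</code></pre>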
| gohm'c |
<p>I'm trying to understand what is the correct usage of command in Pods. Taking below example of my yaml. This is a working YAML. My doubts are</p>
<p>1> The sleep command is issued for 3600 seconds, but my pod busybox2 is still running after a few hours when I list pods via 'k get pods'. My current understanding is that the sleep should execute for 3600 seconds and the pod is supposed to die out after that, as there is no process running in my Pod (like httpd, nginx etc). Not sure why this is.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox2
namespace: default
spec:
containers:
- name: busy
image: busybox
command:
- sleep
- "3600"
</code></pre>
<p>2> When checking the k8s docs, the usage shows a different way to write it. I understand that the command and the args are separate things, but can I not simply use both ways for all scenarios? Like writing command: ["sleep", "3600"] as in the first example, and <code>command: - printenv \ - HOSTNAME</code> as another way to write the second YAML's command section. Can someone elaborate a bit?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: command-demo
labels:
purpose: demonstrate-command
spec:
containers:
- name: command-demo-container
image: debian
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
restartPolicy: OnFailure
</code></pre>
| Abhishek Danej | <p><code>...but my pod busybox2 is still running after few hours...</code></p>
<p>This is because the default value for <code>restartPolicy</code> is <code>Always</code>. This means that after an hour, your pod was actually restarted.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox2
namespace: default
spec:
restartPolicy: OnFailure # <-- Add this line and it will enter "Completed" status.
containers:
- name: busy
image: busybox
command:
- sleep
- "10" # <-- 10 seconds will do to see the effect.
</code></pre>
<p>See <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">here</a> for how K8s treats entrypoint, command, args and CMD.</p>
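<p>After applying that, the pod should eventually show up as Completed, roughly like this:</p>
<pre><code>$ kubectl get pod busybox2
NAME       READY   STATUS      RESTARTS   AGE
busybox2   0/1     Completed   0          30s
</code></pre>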
| gohm'c |
<p><strong>TL;DR</strong><br />
How can I set up a lightweight web server to execute external programs to handle REST requests?</p>
<p><strong>The long version</strong>:<br />
We have a set of services and databases deployed in Kubernetes via Helm. There are some executables that perform maintenance, cleanup, backup, restore etc that I need to execute (some on-demand & some periodically).<br />
I want to park a small, light-weight web server somewhere mounted with access to the binaries and execute them when REST requests are handled.</p>
<ul>
<li>server needs to have a small memory footprint</li>
<li>traffic will be really light (like minutes between each request)</li>
<li>security is not super important (it will run inside our trusted zone)</li>
<li>server needs to handle GET and POST (i.e. passing binary content TO & FROM external program)</li>
</ul>
<p>I've glanced at lighttpd or nginx with CGI modules but I'm not experienced with those.<br />
What do you recommend? Do you have a small example to show how to do it?</p>
| akagixxer | <p>Here's a k8s-native approach:</p>
<p><code>... a set of services and databases deployed in Kubernetes... some executables that perform maintenance, cleanup, backup, restore etc...some on-demand & some periodically</code></p>
<p>If you can bake those "executables" into an image, you can run these programs on-demand as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s job</a>, and schedule repeating ones as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">k8s cronjob</a>. If this is possible in your context, then you can create a k8s role that has just enough rights to call the job/cronjob API, and bind this role to a dedicated k8s service account.</p>
<p>Then you build a mini web application using any language/framework of your choice, run this web application on k8s using the dedicated service account, and expose your pod as a service using NodePort/LoadBalancer to receive GET/POST requests. Finally you <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api" rel="nofollow noreferrer">make direct API calls to the k8s api-server</a> to run jobs according to your logic.</p>
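<p>A rough sketch of the RBAC part (names and namespace are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-runner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-runner
  namespace: default
subjects:
- kind: ServiceAccount
  name: job-runner
  namespace: default
roleRef:
  kind: Role
  name: job-runner
  apiGroup: rbac.authorization.k8s.io
</code></pre>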
| gohm'c |
<p>I've created a Kubernetes cluster on Google Cloud and even though the application is running properly (which I've checked running requests inside the cluster) it seems that the NEG health check is not working properly. Any ideas on the cause?</p>
<p>I've tried to change the service from NodePort to LoadBalancer, different ways of adding annotations to the service. I was thinking that perhaps it might be related to the https requirement in the django side.</p>
<p><a href="https://i.stack.imgur.com/G29KX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/G29KX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/wEbwG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wEbwG.png" alt="enter image description here" /></a></p>
<pre><code># [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
name: moner-app
labels:
app: moner-app
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: moner-app
template:
metadata:
labels:
app: moner-app
spec:
containers:
- name: moner-core-container
image: my-template
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
limits:
memory: "512Mi"
startupProbe:
httpGet:
path: /ht/
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
failureThreshold: 30
timeoutSeconds: 10
periodSeconds: 10
initialDelaySeconds: 90
readinessProbe:
initialDelaySeconds: 120
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
periodSeconds: 10
failureThreshold: 3
timeoutSeconds: 10
livenessProbe:
initialDelaySeconds: 30
failureThreshold: 3
periodSeconds: 30
timeoutSeconds: 10
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
volumeMounts:
- name: cloudstorage-credentials
mountPath: /secrets/cloudstorage
readOnly: true
env:
# [START_secrets]
- name: THIS_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GRACEFUL_TIMEOUT
value: '120'
- name: GUNICORN_HARD_TIMEOUT
value: '90'
- name: DJANGO_ALLOWED_HOSTS
value: '*,$(THIS_POD_IP),0.0.0.0'
ports:
- containerPort: 5000
args: ["/start"]
# [START proxy_container]
- image: gcr.io/cloudsql-docker/gce-proxy:1.16
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=moner-dev:us-east1:core-db=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
resources:
requests:
memory: "64Mi"
limits:
memory: "128Mi"
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
# [START volumes]
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir: {}
- name: cloudstorage-credentials
secret:
secretName: cloudstorage-credentials
# [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
name: moner-svc
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
labels:
app: moner-svc
spec:
type: NodePort
ports:
- name: moner-core-http
port: 5000
protocol: TCP
targetPort: 5000
selector:
app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
spec:
domains:
- domain.com
- app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: moner-backendconfig
spec:
customRequestHeaders:
headers:
- "X-Forwarded-Proto:https"
healthCheck:
checkIntervalSec: 15
port: 5000
type: HTTP
requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: managed-cert-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: moner-ssl
networking.gke.io/managed-certificates: managed-cert
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: moner-svc
port:
name: moner-core-http
</code></pre>
| gawry | <p>Apparently, you didn't have a GCP firewall rule to allow traffic on port 5000 to your GKE nodes. <a href="https://cloud.google.com/vpc/docs/using-firewalls#creating_firewall_rules" rel="nofollow noreferrer">Creating an ingress firewall rule</a> with IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even with port 5000.</p>
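<p>For illustration, a rule like this would open that port (the rule name and the node network tag are placeholders):</p>
<pre><code>gcloud compute firewall-rules create allow-tcp-5000 \
    --network=default \
    --allow=tcp:5000 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=<your-gke-node-tag>
</code></pre>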
| Anant Swaraj |
<p>Let's say you use <a href="https://argoproj.github.io/cd/" rel="nofollow noreferrer">Argo CD</a> to deploy helm charts to Kubernetes. Things work great, but you have a kubernetes resource <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="nofollow noreferrer">finalizer</a> on a resource. Somebody deletes the resource and now Argo just waits in the state of 'Progressing' or 'Deleting'. It can't actually do the delete due to the finalizer. This is a good protection mechanism for very important resources like AWS IAM objects.</p>
<p>But I am hopeful somebody can help me figure this out. Is there any way to stop the operation given to Argo and instead just let it sync again as normal? Maybe somebody made a mistake and the finalizer worked as intended; instead of clearing the finalizer and dealing with the consequences, can the consequences be prevented by undoing Argo CD's operation?</p>
<p>Thank you</p>
| Biff | <p>Either you need to delete the corresponding Argo CD application or you need to roll back the deployment. If you delete the application, it will remove all the resources created by the application and it will stop the operation. If you roll back to the previous version, it will undo the changes you have made in the current deployment and bring all your resources back to their previous versions.
You can use the Argo CD CLI command <code>argocd app rollback</code> with the application name and the revision you want to roll back to.</p>
<p>You can also roll back from the Argo CD UI. If your finalizer is still present, you need to manually remove the finalizer and then re-apply the resource definitions.</p>
<p>Please check this <a href="https://argoproj.github.io/argo-rollouts/generated/kubectl-argo-rollouts/kubectl-argo-rollouts_undo/" rel="nofollow noreferrer">document</a>.</p>
| Abhijith Chitrapu |
<p>I am trying to deploy a SparkJava REST app in a Kubernetes container on my Windows machine.</p>
<ul>
<li>Windows 10</li>
<li>Kubernetes v1.22.5</li>
<li>(edit) base image: openjdk:8-alpine</li>
</ul>
<p>I am trying to read in a properties file when the app starts. I have created a volume mount in my YAML that points to where the file is. However, the container always crashes when I start it. I have taken screenshots of the YAML and the logs from the container. I have tried logging some test results to make sure the system can find the mounted drive and the file, and I also logged the <code>canRead</code> property to debug whether it is a permissions problem. Tests seem to indicate the file is visible and readable; but the error that is thrown would indicate otherwise.</p>
<p>Some research I did points to a possible bug or hack required to get the volume mount working correctly, but I haven't read anything that seems to mirror my issue closely.</p>
<p>Does anybody see what I am doing wrong?</p>
<p><a href="https://i.stack.imgur.com/hAkm7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hAkm7.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/DToG5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DToG5.png" alt="enter image description here" /></a></p>
<p>Here is my java:</p>
<pre><code>import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import static spark.Spark.*;
public class RestClient {
private static final int sparkPort = 4567;
public static void main(String[] args) {
port(sparkPort);
String hostPath = "/root";
String propsFilePath = hostPath + "/resttest.properties";
File host = new File(hostPath);
if(!host.exists()){
System.out.println("Could not find host path");
return;
}
System.out.println("Found host path");
File propsFile = new File(propsFilePath);
if(!propsFile.exists()){
System.out.println("Could not find host path");
return;
}
System.out.println("Found propsFile path");
System.out.println(">> isDirectory: " + propsFile.isDirectory());
System.out.println(">> isFile: " + propsFile.isFile());
System.out.println(">> canRead: " + propsFile.canRead());
Properties properties = new Properties();
FileInputStream fileInputStream = null;
try {
fileInputStream = new FileInputStream(propsFile);
} catch (SecurityException fnf) {
System.out.println("Security issue");
fnf.printStackTrace();
return;
} catch (FileNotFoundException fnf) {
System.out.println("Could not open file");
fnf.printStackTrace();
return;
}
try {
properties.load(fileInputStream);
} catch (IOException fe1) {
fe1.printStackTrace();
}
get("/hello", (req,res) -> {
return "Hello World! My properties file is "+ propsFile +" and from it I learned I was "+ properties.getProperty("age") +" years old";
});
}
}
</code></pre>
| RobbieS | <p>Posted a community wiki answer based on a <a href="https://github.com/kubernetes/kubernetes/issues/59876" rel="nofollow noreferrer">topic with a similar issue on GitHub</a>. Feel free to expand it.</p>
<hr />
<p>The solution is to add <code>/run/desktop/mnt/host</code> before the <code>/c/users/<some-folder>/<some-folder>/gits/resttest</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: approot
hostPath:
path: /run/desktop/mnt/host/c/users/<some-folder>/<some-folder>/gits/resttest
type: Directory
</code></pre>
| Mikolaj S. |
<p>I am running this Cronjob at 2 AM in the morning:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup
spec:
# Backup the database every day at 2AM
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:10.4
command:
- "/bin/sh"
- -c
- |
pg_dump -Fc -d postgresql://$DBUSER:$DBPASS@$DBHOST:$DBPORT/$DBNAME > /var/backups/backup_$(date +"%d-%m-%Y_%H-%M").bak;
env:
- name: DBHOST
valueFrom:
configMapKeyRef:
name: dev-db-config
key: db_host
- name: DBPORT
valueFrom:
configMapKeyRef:
name: dev-db-config
key: db_port
- name: DBNAME
valueFrom:
configMapKeyRef:
name: dev-db-config
key: db_name
- name: DBUSER
valueFrom:
secretKeyRef:
name: dev-db-secret
key: db_username
- name: DBPASS
valueFrom:
secretKeyRef:
name: dev-db-secret
key: db_password
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
- name: postgres-restore
image: postgres:10.4
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: OnFailure
volumes:
- name: postgres-backup-storage
hostPath:
# Ensure the file directory is created.
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
</code></pre>
<p>The jobs are getting executed successfully, but what I don't like is that for every Job execution a new Pod is created:</p>
<p><a href="https://i.stack.imgur.com/42w3e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/42w3e.png" alt="Multiple Pods of Cornjob Job execution" /></a></p>
<p>Is there a way to clean up previously created (old) Pods?
Or maybe is there a way to rerun one and the same Pod/Job every time?</p>
| f.bele | <p>If only the last job and pod need to be preserved, you can set the <code>.spec.successfulJobsHistoryLimit</code> field to <code>1</code>.</p>
<p>This way only the last job and the corresponding pod will be preserved. By default it's set to <code>3</code>. It's also possible to set this value to <code>0</code>, in which case nothing will be saved after the cronjob's execution.</p>
<p>The same logic applies to the <code>.spec.failedJobsHistoryLimit</code> field; it is <code>1</code> by default.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">jobs history limits</a>.</p>
<hr />
<p>This is how it looks when I get events from cronjob:</p>
<pre><code>$ kubectl describe cronjob test-cronjob
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m31s cronjob-controller Created job test-cronjob-27304493
Normal SawCompletedJob 2m30s cronjob-controller Saw completed job: test-cronjob-27304493, status: Complete
Normal SuccessfulCreate 91s cronjob-controller Created job test-cronjob-27304494
Normal SawCompletedJob 90s cronjob-controller Saw completed job: test-cronjob-27304494, status: Complete
Normal SuccessfulDelete 90s cronjob-controller Deleted job test-cronjob-27304493
Normal SuccessfulCreate 31s cronjob-controller Created job test-cronjob-27304495
Normal SawCompletedJob 30s cronjob-controller Saw completed job: test-cronjob-27304495, status: Complete
Normal SuccessfulDelete 30s cronjob-controller Deleted job test-cronjob-27304494
</code></pre>
<p>Only one last job is presented:</p>
<pre><code>$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
test-cronjob-27304496 1/1 1s 3s
</code></pre>
<p>And one pod:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-cronjob-27304496-r4qd8 0/1 Completed 0 38s
</code></pre>
| moonkotte |
<p>Reading through "Kubernetes In Action" book, there is a kubectl command which creates a pod but does not deploy it:</p>
<pre><code>$ kubectl run kubia --image=dockeruser/kubia --port=8080 --generator=run/v1
replicationcontroller "kubia" created
</code></pre>
<p>The generator option is there to ensure that a replication controller is created and that there is no deployment. But in the version of kubectl that I am using, v1.22.3, the generator flag is deprecated. Leaving the generator option out will create the pod, but no replication controller.</p>
<p>Which command effectively creates the rc?</p>
| dirtyw0lf | <p>You don't need "generator" starting from 1.17; you can use "create" instead, like <code>kubectl create deployment kubia --image=dockeruser/kubia --port=8080</code>.</p>
| gohm'c |
<p>I am creating a fresh private node in GCloud where I have a deployment.yml with:</p>
<pre><code>...
containers:
- name: print-logs
image: busybox
command: "sleep infinity"
</code></pre>
<p>When I review the corresponding Pod, I always get this error: "failed to do request: Head <a href="https://registry-1.docker.io/.." rel="nofollow noreferrer">https://registry-1.docker.io/..</a>. timeout"</p>
<p>Full logs:</p>
<pre><code># kubectl describe pod <my_pod>
Warning Failed 9s kubelet Failed to pull image "docker.io/library/busybox:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:latest": failed to resolve reference "docker.io/library/busybox:latest": failed to do request: Head https://registry-1.docker.io/v2/library/busybox/manifests/latest: dial tcp 3.220.36.210:443: i/o timeout
Warning Failed 9s kubelet Error: ErrImagePull
</code></pre>
<p>Custer settings:</p>
<pre><code>gcloud container clusters create test-cluster \
--preemptible \
--enable-ip-alias \
--enable-private-nodes \
--machine-type n1-standard-2 \
--zone europe-west4-a \
--enable-cloud-logging \
--enable-cloud-monitoring \
--create-subnetwork name=main-subnet \
--master-ipv4-cidr 172.16.0.32/28 \
--no-enable-master-authorized-networks \
--image-type COS_CONTAINERD
</code></pre>
<p>Please help me.</p>
| Owen Peredo Diaz | <p>First, connect to the cluster using the following command:</p>
<pre><code>gcloud container clusters get-credentials NAME [--internal-ip] [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
</code></pre>
<p>And then try to pull the Docker image.</p>
<p>For more information you can refer to this <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer">link</a> and <a href="https://cloud.google.com/build/docs/building/build-containers" rel="nofollow noreferrer">this one</a> (which explains building container images).</p>
<p>For troubleshooting common Container Registry and Docker issues you can refer to <a href="https://cloud.google.com/container-registry/docs/troubleshooting" rel="nofollow noreferrer">this</a> doc.</p>
| Prajna Rai T |
<p>I am trying to do the following, where MYVALUE in host needs to change to include the release name. I can't figure out how to do this, as you can't use variables like <code>{{ .Release.Name }}</code> directly in a values.yaml file.</p>
<p>I did do a <code>fullnameOverride</code> and put <code>fullnameOverride: myrelease-mysql</code> for the mysql pod, and then jasper has <code>host: myrelease-mysql</code>. That works, but I wanted to know if there is a clever way to put the release name into a values.yaml file.</p>
<p>I assumed I would need to use a configMap, as I can use <code>.Release.Name</code> there, and then embed that config key into values.yaml.</p>
<p><strong>Values.yaml</strong></p>
<pre><code>jasperreports:
mariadb:
enabled: false
externalDatabase:
host: MYVALUE // Also tried $MVALUE
user: sqluser
database: jasper
jasperreportsUsername: jasper
env:
- name: MYVALUE
valueFrom:
configMapKeyRef:
name: mysql-jasper
key: mysql_releasename
</code></pre>
<p><strong>ConfigMap</strong></p>
<pre><code>kind: ConfigMap
metadata:
name: mysql-jasper
data:
mysql_releasename: {{ .Release.Name }}-"mysql"
</code></pre>
| sam | <p>It seems that helm does not support any template rendering capabilities in a <code>values.yaml</code> file - there are multiple topics on the helm GitHub:</p>
<ul>
<li><a href="https://github.com/helm/helm/issues/9754" rel="nofollow noreferrer">Canonical way of using dynamic object names within values.yaml</a></li>
<li><a href="https://github.com/helm/helm/pull/6876" rel="nofollow noreferrer">Adding values templates in order to customize values with go-template, for the chart and its dependencies</a></li>
<li><a href="https://github.com/helm/helm/issues/2492" rel="nofollow noreferrer">Proposal: Allow templating in values.yaml</a></li>
</ul>
<p>For now this feature is not implemented so you need to find a workaround - the suggestion from David Maze seems to be a good direction, but if you want to follow your approach you can use below workaround <a href="https://helm.sh/docs/helm/helm_install/#helm-install" rel="nofollow noreferrer">using <code>--set</code> flag in the <code>helm install</code> command</a> or use <code>sed</code> command and pipe to <code>helm install</code> command.</p>
<p>First solution with <code>--set</code> flag.</p>
<p>My <code>values.yaml</code> file is little bit different than yours:</p>
<pre class="lang-yaml prettyprint-override"><code>mariadb:
enabled: false
externalDatabase:
user: sqluser
database: jasper
jasperreportsUsername: jasper
</code></pre>
<p>That's because when I was using your <code>values.yaml</code> I couldn't manage to apply these values to <code>bitnami/jasperreports</code> chart, the <code>helm install</code> command was using default values <a href="https://github.com/bitnami/charts/blob/master/bitnami/jasperreports/values.yaml" rel="nofollow noreferrer">from here</a>.</p>
<p>I'm setting a shell variable <code>RELEASE_NAME</code> which I will use both for setting chart name and <code>externalDatabase.host</code> value.</p>
<pre><code>RELEASE_NAME=my-test-release
helm install $RELEASE_NAME bitnami/jasperreports -f values.yaml --set externalDatabase.host=$RELEASE_NAME-mysql
</code></pre>
<p>The above <code>helm install</code> command will override default values both by setting values from the <code>values.yaml</code> file + setting <code>externalDatabase.host</code> value.</p>
<p>Before applying you can check if this solution works as expected by using <code>helm template</code> command:</p>
<pre><code>RELEASE_NAME=my-test-release
helm template $RELEASE_NAME bitnami/jasperreports -f values.yaml --set externalDatabase.host=$RELEASE_NAME-mysql
...
- name: MARIADB_HOST
value: "my-test-release-mariadb"
...
</code></pre>
<p>Another approach is to set a bash variable <code>RELEASE_NAME</code> which will be used in the <code>sed</code> command to output modified <code>values.yaml</code> file (I'm not editing <code>values.yaml</code> file itself). This output will be pipe into a <code>helm install</code> command (where I also used the<code>RELEASE_NAME</code> variable).</p>
<p><code>values.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>mariadb:
enabled: false
externalDatabase:
host: MYHOST
user: sqluser
database: jasper
jasperreportsUsername: jasper
</code></pre>
<pre><code>RELEASE_NAME=my-test-release
sed "s/MYHOST/$RELEASE_NAME-mysql/g" values.yaml | helm install $RELEASE_NAME bitnami/jasperreports -f -
</code></pre>
<p>This approach will set chart configuration the same as in the first approach.</p>
| Mikolaj S. |
<p>I deployed a K8s <strong>StatefulSet</strong> with 30 replicas (or N replicas, where N is multiple of 3) in EKS Cluster.</p>
<p>The EKS cluster has 3 nodes, one node per AZ, and I want to <em>guarantee</em>, with Kubernetes affinity/anti-affinity, an <strong>equal</strong> distribution of pods across the different AZs.</p>
<pre><code>us-west-2a (n nodes) -> N/3 pods
us-west-2b (m nodes) -> N/3 pods
us-west-2c (o nodes) -> N/3 pods
</code></pre>
<p>Thanks</p>
| falberto | <p>While this is also possible with node affinity, a straightforward way is to use topologySpreadConstraints; here's the k8s <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#api" rel="nofollow noreferrer">documentation, with diagrams and examples</a> to see it in action.</p>
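<p>A minimal sketch of what that could look like in the StatefulSet pod template (the label value is a placeholder and must match your pod labels):</p>
<pre><code>spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app   # placeholder
</code></pre>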
| gohm'c |
<p>Back with another head scratcher. I've got my kube cluster up and running; I've attached a mysql pod, a postgres pod, and associated volume/service mappings. These configurations are nearly identical, with the exception of the app name, port, and the information in the containers element. I mention this because this is seemingly where my issue is.</p>
<p>I can connect to my mysql instance at 'mysql' with no issues from my local machine using forwarding, and from any pod. But my postgres pod is another story: none of my other pods can seemingly access it. I can connect to 'postgres' using my DB console locally using forwarding, but the pods get 'Connection Refused' whenever they try to connect.</p>
<p>Here's an example. I loaded up a simple alpine image, installed the postgres client, connected to mysql, and then attempted to connect to my postgres instance.</p>
<pre><code>$ cat utils.yaml
apiVersion: v1
kind: Pod
metadata:
name: utils
namespace: default
spec:
restartPolicy: Always
serviceAccountName: default
containers:
- name: utils
image: alpine:latest
command:
- sleep
- "14400"
imagePullPolicy: IfNotPresent
$ kubectl apply -f utils.yaml
kupod/utils created
$ kubectl exec -it utils -- /bin/ash
# apk add mysql-client
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/7) Installing mariadb-common (10.5.12-r0)
(2/7) Installing libgcc (10.3.1_git20210424-r2)
(3/7) Installing ncurses-terminfo-base (6.2_p20210612-r0)
(4/7) Installing ncurses-libs (6.2_p20210612-r0)
(5/7) Installing libstdc++ (10.3.1_git20210424-r2)
(6/7) Installing mariadb-client (10.5.12-r0)
(7/7) Installing mysql-client (10.5.12-r0)
Executing busybox-1.33.1-r3.trigger
OK: 39 MiB in 21 packages
# apk add postgresql-client
(1/6) Installing gdbm (1.19-r0)
(2/6) Installing libsasl (2.1.27-r12)
(3/6) Installing libldap (2.4.58-r0)
(4/6) Installing libpq (13.4-r0)
(5/6) Installing readline (8.1.0-r0)
(6/6) Installing postgresql-client (13.4-r0)
Executing busybox-1.33.1-r3.trigger
OK: 42 MiB in 27 packages
# mysql -h mysql -utestuser -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 155
Server version: 5.6.51 MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> Ctrl-C -- exit!
Aborted
# psql -h postgres -U testuser
psql: error: could not connect to server: Connection refused
Is the server running on host "postgres" (192.168.152.97) and accepting
TCP/IP connections on port 5432?
</code></pre>
<p>Here are my yamls for postgres:</p>
<pre><code>---
#create secrets and maps using:
#kubectl create configmap postgres --from-file=postgres-config/
#kubectl create secret generic postgres --from-file=postgres-ecrets/
---
apiVersion: v1
kind: Service
metadata:
name: postgres
namespace: default
labels:
app: postgres
spec:
ports:
- name: postgres
port: 5432
selector:
app: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: default
spec:
selector:
matchLabels:
app: postgres
strategy:
type: Recreate
template:
metadata:
labels:
app: postgres
spec:
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: postgres
image: postgres:alpine
imagePullPolicy: IfNotPresent
args:
- -c
- hba_file=/etc/postgres-config/pg_hba.conf
- -c
- config_file=/etc/postgres-config/postgresql.conf
env:
- name: PGDATA
value: /var/lib/postgres-data
- name: POSTGRES_PASSWORD_FILE
value: /etc/postgres-secrets/postgres-pwd.txt
ports:
- name: postgres
containerPort: 5432
hostPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-config
mountPath: /etc/postgres-config
- name: postgres-storage
mountPath: /var/lib/postgres-data
subPath: postgres
- name: postgres-secrets
mountPath: /etc/postgres-secrets
volumes:
- name: postgres-config
configMap:
name: postgres
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-claim
- name: postgres-secrets
secret:
secretName: postgres
defaultMode: 384
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-gluster-pv
namespace: default
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
glusterfs:
endpoints: gluster-cluster
path: /gv0
readOnly: false
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-claim
namespace: default
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
</code></pre>
<p>And finally, my postgress.conf and pg_hba.conf:</p>
<pre><code>$ cat postgres.conf
ssl = on
ssl_ca_file = '/etc/postgres-secrets/root.crt'
ssl_cert_file = '/etc/postgres-secrets/server.crt'
ssl_key_file = '/etc/postgres-secrets/server.key'
$ cat pg_hba.conf
# Trust local connection - no password required.
local all all trust
hostssl all all all md5
</code></pre>
| The Kaese | <p>Once again... figured it out. @Sami's comment led me down the right path. I had <code>listen_addresses</code> set to "localhost"; it needs to be set to "*".</p>
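<p>In terms of the postgres.conf from the question, that's one extra line:</p>
<pre><code>listen_addresses = '*'
</code></pre>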
| The Kaese |
<p>I have a PV/PVC in my kubernetes cluster.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
nfs:
path: /tmp
server: 172.17.0.2
</code></pre>
<p>I want to externally add <code>mountOptions</code> to all PVs like below.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
</code></pre>
<p>Is there any way I can achieve this using <code>kubectl</code> cli like we add annotations to ingress rules and pods?</p>
| Ujjawal Khare | <p>You can use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer"><code>kubectl patch</code> command</a> to add <code>mountOptions</code> to existing PV in the cluster:</p>
<pre><code>kubectl patch pv pv0003 --patch '{"spec": {"mountOptions": ["hard","nfsvers=4.1"]}}'
</code></pre>
<p>If you want to add <code>mountOptions</code> to every PV in the cluster you can use simple bash <code>for</code> loop and <code>kubectl patch</code> command:</p>
<pre><code>for pv in $(kubectl get pv --no-headers -o custom-columns=":metadata.name"); do kubectl patch pv $pv --patch '{"spec": {"mountOptions": ["hard","nfsvers=4.1"]}}'; done
</code></pre>
| Mikolaj S. |
<pre><code> kubectl cp namespace/podname:/path/target .
</code></pre>
<p>If I use the instructed command from kubernetes guide, it only copies the contents inside the <code>target</code> directory and omits <code>target</code> itself.<br />
I don't want to use <code>mkdir</code> every time I need to copy.<br />
What's the option?</p>
| Lunartist | <p>Try <code>kubectl cp namespace/podname:/path/target target</code>. Note that specifying "./target" will produce a warning: "tar: removing leading '/' from member names". Also, ensure your image has the <code>tar</code> command or <code>kubectl cp</code> can fail.</p>
| gohm'c |
<p>I am using Kustomize to manage multiple variations of the same cluster. I am using <code>nameSuffix</code> option to add a suffix to all my resources:</p>
<pre><code>nameSuffix: -mysfx
</code></pre>
<p>My problem is that everything works fine but adding this suffix only to one Service resource cause me an issue. My problem is that the application (Patroni) interact with a service that must be called:</p>
<pre><code>CLUSTER-NAME-config
</code></pre>
<p>so I want to exclude this single resource from the <code>nameSuffix</code>. I know this is not possible due to how this feature was designed. I read several articles here on StackOverflow and on the web. I know I can skip the use of <code>nameSuffix</code> for a category of resources. So I tried to put in my <code>kustomization.yaml</code> the rows:</p>
<pre><code>configurations:
- kustomize-config/kustomize-config.yaml
</code></pre>
<p>to skip all the Service resources. Then in the file <code>kustomize-config/kustomize-config.yaml</code></p>
<pre><code>nameSuffix:
- path: metadata/name
apiVersion: v1
kind: Service
skip: true
</code></pre>
<p>but this doesn't work.</p>
<p>Does anyone know what's wrong with this configuration?</p>
<p>Then, supposing I am now able to skip the use of <code>nameSuffix</code> for Service resources only, I have two other Services where I do want to add this suffix. What do I have to do to add <code>nameSuffix</code> to these two Services and not the one mentioned above?</p>
<p>If there is a better solution for this, please let me know.</p>
| Salvatore D'angelo | <p>Skipping selected <code>kind</code>s doesn't work because this feature wasn't implemented - from <a href="https://github.com/kubernetes-sigs/kustomize/issues/519#issuecomment-527734888" rel="nofollow noreferrer">this comment on GitHub issue 519</a>.</p>
<p>Also <a href="https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_0519#preparation-step-resource0" rel="nofollow noreferrer">this is an example</a> how it was supposed to be (what you tried)</p>
<hr />
<p>Based on <a href="https://github.com/kubernetes-sigs/kustomize/issues/519#issuecomment-557303870" rel="nofollow noreferrer">this comment</a>, it works on <code>kind</code>s that were explicitly mentioned:</p>
<blockquote>
<p>The plugin's config is currently oriented towards specifying which
kinds to modify, ignoring others.</p>
</blockquote>
<p>Also, based on some tests I performed, it looks at <code>kind</code>s only; it doesn't look at names or anything else, so only whole <code>kind</code>s can be included. Hence the second part of your question is, I'm afraid, not possible (well, you can use <code>sed</code> alongside kustomize, for instance, and modify everything you need on the go).</p>
<p>I created a simple structure and tested it:</p>
<pre><code>$ tree
.
βββ cm1.yaml
βββ cm2.yaml
βββ kustomization.yaml
βββ kustomizeconfig
βΒ Β βββ skip-prefix.yaml
βββ pod.yaml
βββ secret.yaml
βββ storageclass.yaml
1 directory, 7 files
</code></pre>
<p>There are two configmaps, pod, secret and storageclass, total 5 objects.</p>
<pre><code>$ cat kustomization.yaml
namePrefix: prefix-
nameSuffix: -suffix
resources:
- cm1.yaml
- cm2.yaml
- secret.yaml
- pod.yaml
- storageclass.yaml
configurations:
- ./kustomizeconfig/skip-prefix.yaml
</code></pre>
<p>And configuration (specified explicitly all kinds except for configmaps). Also it's called <code>namePrefix</code>, however it works for both: <code>prefix</code> and <code>suffix</code>:</p>
<pre><code>$ cat kustomizeconfig/skip-prefix.yaml
namePrefix:
- path: metadata/name
apiVersion: v1
kind: Secret
- path: metadata/name
apiVersion: v1
kind: Pod
- path: metadata/name
apiVersion: v1
kind: StorageClass
</code></pre>
<p>Eventually <code>kustomize build .</code> looks like:</p>
<pre><code>$ kustomize build .
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: prefix-local-storage-suffix # modified
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cm1 # skipped
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cm2 # skipped
---
apiVersion: v1
kind: Secret
metadata:
name: prefix-secret-suffix # modified
---
apiVersion: v1
kind: Pod
metadata:
name: prefix-pod-suffix # modified
spec:
containers:
- image: image
name: pod
</code></pre>
<hr />
<p>Another potential option is to use <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/builtins/#_prefixsuffixtransformer_" rel="nofollow noreferrer"><code>PrefixSuffixTransformer</code> plugin</a> - it works differently in terms of specifying what <code>prefix</code> and/or <code>suffix</code> should be added and <code>fieldSpec</code>s where.</p>
<p>Please find an <a href="https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_0519_b#preparation-step-resource10" rel="nofollow noreferrer">example</a> and final results below within this Feature Test for Issue 0519_b.</p>
<p>Also there's already a <a href="https://stackoverflow.com/a/66438033/15537201">good answer on StackOverflow</a> about using <code>PrefixSuffixTransformer</code>.</p>
| moonkotte |
<p>Is there any way to pass a JSON config from a manual DAG run (the one from <code>dag_run.conf['attribute']</code>) to KubernetesPodOperator?</p>
<p>I tried to use the Jinja template on the templated field in YAML, but got an error: <code>dag_run is not defined</code>.</p>
<pre class="lang-py prettyprint-override"><code>task_parse_raw_data = KubernetesPodOperator(
namespace=NAMESPACE,
image='artifactory/image:tag',
service_account_name='airflow',
cmds=["sh", "/current.sh"],
arguments=[ {{ dag_run.conf['date']}} ],
...)
</code></pre>
| Simon Osipov | <p>You need to wrap the Jinja expression in quotes like so:</p>
<pre class="lang-py prettyprint-override"><code>arguments=[ "{{ dag_run.conf['date'] }}" ]
</code></pre>
| Josh Fell |
<p>I have been learning Kubernetes for a few weeks and now I am trying to figure out the right way to connect a web server to a <code>statefulset</code> correctly.</p>
<p>Let's say I deployed a master-slave Postgres <code>statefulset</code> and now I will connect my web server to it. By using a cluster IP service, the requests will be load balanced across the master and the slaves for both reading (<code>SELECT</code>) and writing (<code>UPDATE</code>, <code>INSERT</code>, <code>DELETE</code>) records, right? But I can't do that because writing requests should be handled by the master. However, when I point my web server to the master using the headless service that will give us a DNS entry for each pod, I won't get any load balancing to the other slave replications and all of the requests will be handled by one instance and that is the master. So how am I supposed to connect them the right way? By obtaining both load balancing to all replications along with the slave in reading records and forwarding writing records requests to the master?</p>
<p>Should I use two endpoints in the web server and configure them in writing and reading records?</p>
<p>Or maybe I am using headless services and <code>statefulsets</code> the wrong way since I am new to Kubernetes?</p>
| joe1531 | <p>Well, your thinking is correct - the master should be read-write and replicas should be read only. How to configure it properly? There are different possible approaches.</p>
<hr />
<p>The first approach is what you are thinking about: set up two headless services - one for accessing the primary instances, the second one for accessing the replica instances - a good example is <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">Kubegres</a>:</p>
<blockquote>
<p>In this example, Kubegres created 2 Kubernetes Headless services (of default type ClusterIP) using the name defined in YAML (e.g. "mypostgres"):</p>
<ul>
<li>a Kubernetes service "mypostgres" allowing to access to the Primary PostgreSql instances</li>
<li>a Kubernetes service "mypostgres-replica" allowing to access to the Replica PostgreSql instances</li>
</ul>
</blockquote>
<p>Then you will have two endpoints:</p>
<blockquote>
<p>Consequently, a client app running inside a Kubernetes cluster, would use the hostname "mypostgres" to connect to the Primary PostgreSql for read and write requests, and optionally it can also use the hostname "mypostgres-replica" to connect to any of the available Replica PostgreSql for read requests.</p>
</blockquote>
<p>Check <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">this starting guide for more details</a>.</p>
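<p>To illustrate the first approach, a client Deployment could simply be wired to both hostnames via environment variables. This is only a minimal sketch - the image, variable names and port are assumptions, while the service names come from the Kubegres example above:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: my-webapp:latest   # assumed application image
          env:
            # read-write connections go to the primary instance
            - name: DB_WRITE_HOST
              value: mypostgres
            # read-only connections are spread across the replicas
            - name: DB_READ_HOST
              value: mypostgres-replica
            - name: DB_PORT
              value: "5432"
</code></pre>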
<p>It's worth noting that many database solutions use this approach - another example is MySQL. <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">Here is a good article in the Kubernetes documentation about setting up MySQL using a StatefulSet</a>.</p>
<p>Another approach is to use some <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">middleware component which will act as a gatekeeper to the cluster, for example Pg-Pool</a>:</p>
<blockquote>
<p>Pg pool is a middleware component that sits in front of the Postgres servers and acts as a gatekeeper to the cluster.<br />
It mainly serves two purposes: Load balancing & Limiting the requests.</p>
</blockquote>
<blockquote>
<ol>
<li><strong>Load Balancing:</strong> Pg pool takes connection requests and queries. It analyzes the query to decide where the query should be sent.</li>
<li>Read-only queries can be handled by read-replicas. Write operations can only be handled by the primary server. In this way, it loads balances the cluster.</li>
<li><strong>Limits the requests:</strong> Like any other system, Postgres has a limit on no. of concurrent connections it can handle gracefully.</li>
<li>Pg-pool limits the no. of connections it takes up and queues up the remaining. Thus, gracefully handling the overload.</li>
</ol>
</blockquote>
<p>Then you will have one endpoint for all operations - the Pg-Pool service. Check <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">this article for more details, including the whole setup process</a>.</p>
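<p>A minimal sketch of that single client-facing Service is shown below. The name, port and label are assumptions, and the Pg-Pool Deployment itself is omitted - see the linked article for the complete setup:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  ports:
    - port: 5432          # clients connect here for both reads and writes
      targetPort: 5432
      protocol: TCP
  selector:
    app: pgpool           # assumed label on the Pg-Pool pods
</code></pre>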
| Mikolaj S. |
<p>My application is running within a pod container in a Kubernetes cluster. Every time it starts in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a Service (NodePort or LoadBalancer) to map the application port to a specific port in order to access it.</p>
<p>What are the options to handle this case in a kubernetes cluster?</p>
| Jack Lehman | <p>Not supported - check out the issue <a href="https://github.com/kubernetes/enhancements/pull/2611" rel="nofollow noreferrer">here</a>. Even with Docker, if your port range is overly broad, you can hit an <a href="https://github.com/moby/moby/issues/11185" rel="nofollow noreferrer">issue</a> as well.</p>
| gohm'c |
<p>Let's consider a python web application deployed under uWSGI via Nginx.</p>
<blockquote>
<p>HTTP client → Nginx → Socket/HTTP → uWSGI (web server) → webapp</p>
</blockquote>
<p>Where nginx is used as reverse proxy / load balancer.</p>
<p><strong>How do you scale this kind of application in Kubernetes?</strong>
Several options come to mind:</p>
<ol>
<li>Deploy nginx and uWSGI in a single pod. Simple approach.</li>
<li>Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.</li>
<li>Deploy only uWSGI (serving HTTP directly), omitting nginx.</li>
</ol>
<p>Or is there another solution, involving nginx ingress/load balancer services?</p>
| guesswho | <p>It depends.</p>
<p>I see two scenarios:</p>
<ol>
<li><p><strong>Ingress is used</strong></p>
<p>In this case there's no need to have an nginx server within the pod; instead, <code>ingress-nginx</code> will balance traffic across the Kubernetes cluster. You can find a good example in <a href="https://github.com/nginxinc/kubernetes-ingress/issues/143#issuecomment-347814243" rel="nofollow noreferrer">this comment on a GitHub issue</a>.</p>
</li>
<li><p><strong>No ingress is used.</strong></p>
<p>In this case I'd go with <code>option 1</code> - <code>Deploy nginx and uWSGI in a single pod. Simple approach.</code> This way you can easily scale your application in/out and avoid any complicated/unnecessary dependencies (see the sketch at the end of this answer).</p>
</li>
</ol>
<p>In case you're not familiar with what an Ingress is, please see the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes documentation - Ingress</a>.</p>
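<p>For completeness, a minimal sketch of option 1 (a single pod running both containers) could look like the following. The image names, ports and nginx configuration are assumptions - in practice nginx would proxy to the uWSGI container over <code>localhost</code> or a shared unix socket, since containers in one pod share the network namespace:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        # reverse proxy container
        - name: nginx
          image: nginx:1.21            # assumed image/tag; needs a config that proxies to localhost:8000
          ports:
            - containerPort: 80
        # application container running uWSGI
        - name: uwsgi
          image: my-uwsgi-app:latest   # assumed image containing the Python app
          ports:
            - containerPort: 8000
</code></pre>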
| moonkotte |
<p>I have to get the existing microservices running. They are provided as Docker images.
They talk to each other via configured hostnames and ports.
I started using Istio to view and configure the outgoing calls of each microservice.
Now I am at the point where I need to rewrite / redirect the host and the port of a request that goes out of one container.
How can I do that with Istio?</p>
<p>I will try to give a minimum example.
There are two services, service-a and service-b.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: service-b
spec:
selector:
matchLabels:
run: service-b
replicas: 1
template:
metadata:
labels:
run: service-b
spec:
containers:
- name: service-b
image: nginx
ports:
- containerPort: 80
name: web
---
apiVersion: v1
kind: Service
metadata:
name: service-b
labels:
run: service-b
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 80
name: service-b
selector:
run: service-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-a
spec:
selector:
matchLabels:
run: service-a
replicas: 1
template:
metadata:
labels:
run: service-a
spec:
containers:
- name: service-a
image: nginx
ports:
- containerPort: 80
name: web
---
apiVersion: v1
kind: Service
metadata:
name: service-a
labels:
run: service-a
spec:
ports:
- port: 8081
protocol: TCP
targetPort: 80
name: service-a
selector:
run: service-a
</code></pre>
<p>I can docker exec into service-a and successfully execute:</p>
<pre><code>root@service-a-d44f55d8c-8cp8m:/# curl -v service-b:8080
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>Now, to simulate my problem, I want to reach service-b using another hostname and port. I want to configure Istio so that this call also works:</p>
<pre><code>root@service-a-d44f55d8c-8cp8m:/# curl -v service-x:7777
</code></pre>
<p>Best regards,
Christian</p>
| Chris D. | <p>There are two solutions which can be used, depending on whether you need <code>istio</code> features.</p>
<p>If no <code>istio</code> features are planned to be used, it can be solved using native kubernetes. If, in turn, some <code>istio</code> features are intended to be used, it can be solved using an <code>istio virtual service</code>. Below are the two options:</p>
<hr />
<p><strong>1. Native kubernetes</strong></p>
<p><code>Service-x</code> should be pointed to the backend pods of the <code>service-b</code> deployment. Below, the <code>selector</code> points to the pods of <code>deployment: service-b</code> via the <code>run: service-b</code> label:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-x
labels:
run: service-x
spec:
ports:
- port: 7777
protocol: TCP
targetPort: 80
name: service-x
selector:
run: service-b
</code></pre>
<p>This way the request will still go through <code>istio</code>, because sidecar containers are injected.</p>
<pre><code># curl -vI service-b:8080
* Trying xx.xx.xx.xx:8080...
* Connected to service-b (xx.xx.xx.xx) port 8080 (#0)
> Host: service-b:8080
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>and</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<hr />
<p><strong>2. Istio virtual service</strong></p>
<p>In this example a <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service" rel="nofollow noreferrer">virtual service</a> is used. The service <code>service-x</code> still needs to be created, but now we don't specify any selectors:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-x
labels:
run: service-x
spec:
ports:
- port: 7777
protocol: TCP
targetPort: 80
name: service-x
</code></pre>
<p>Test it from another pod:</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 503 Service Unavailable
< server: envoy
</code></pre>
<p>A <code>503</code> error, which is expected. Now create a <code>virtual service</code> which will route requests to <code>service-b</code> on <code>port: 8080</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: service-x-to-b
spec:
hosts:
- service-x
http:
- route:
- destination:
host: service-b
port:
number: 8080
</code></pre>
<p>Testing from the pod:</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>As you can see, it works as expected.</p>
<hr />
<p>Useful links:</p>
<ul>
<li><a href="https://istio.io/latest/docs/reference/config/networking/virtual-service" rel="nofollow noreferrer">Istio virtual service</a></li>
<li><a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination" rel="nofollow noreferrer">virtual service - destination</a></li>
</ul>
| moonkotte |
<p>I am looking for a way to get all object's metadata within a k8s cluster and send it out to an external server.</p>
<p>By metadata, I refer to the objects' <code>Name</code>, <code>Kind</code>, <code>Labels</code>, <code>Annotations</code>, etc.
The intention is to build an offline inventory of a cluster.</p>
<p>What would be the best approach to build it? Is there any tool that already does something similar?</p>
<p>Thanks</p>
| tomikos | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>There are different ways to achieve it.</p>
<ol>
<li><p>From this <a href="https://github.com/kubernetes/kubectl/issues/151#issuecomment-402003022" rel="nofollow noreferrer">GitHub issue comment</a> it's possible to iterate through all resources to get all available objects.</p>
<p>in <strong>yaml</strong>:</p>
<pre><code>kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o yaml
</code></pre>
<p>in <strong>json</strong>:</p>
<pre><code>kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o json
</code></pre>
<p>And then parse the output.</p>
</li>
<li><p>Use kubernetes clients.</p>
<p>There are already developed <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">kubernetes clients</a> (available for different languages) which can be used to get required information and work with it later.</p>
</li>
<li><p>Use kubectl plugin - <code>ketall</code> (didn't test it)</p>
<p>There's a kubectl plugin which returns <strong>all</strong> cluster resources. Please find the <a href="https://github.com/corneliusweig/ketall" rel="nofollow noreferrer">github repo - ketall</a>. Again, after the cluster objects are retrieved, you will need to parse/work with them.</p>
</li>
</ol>
| moonkotte |
<p>I'm new to K8s. I'm trying to self-deploy a k8s cluster on an internal company server. My question is how to set up my K8s cluster so that it can allocate an external IP for a Service of type LoadBalancer. Could you tell me how it works in GKE or EKS?</p>
| Duc Vo | <p>Updated based on your comment.</p>
<p><code>What I mean how to EKS or GKE behind the scenes allocation ip, what is a mechanism?</code></p>
<p>Here's the <a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html" rel="nofollow noreferrer">EKS</a> version and here's the <a href="https://cloud.google.com/architecture/gke-address-management-overview" rel="nofollow noreferrer">GKE</a> version. It's a complex thing, suggest you use these materials as the starting point before diving into technical details (which previous answer provided you the source). In case you thought of on-premises k8s cluster, it depends on the CNI that you will use, a well known CNI is <a href="https://docs.projectcalico.org/networking/get-started-ip-addresses" rel="nofollow noreferrer">Calico</a>.</p>
| gohm'c |