Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>I am new to the storage concepts in Kubernetes.
I need to have some common persistent storage in the Kubernetes cluster, but I also need to be able to write to it from an on-prem environment outside of the cluster.</p>
<p>So my question is: can we have some persistent storage (a file system) in the Kubernetes cluster that can be shared among different pods, while applications from outside the Kubernetes cluster are also able to write to it? If yes, what is the proper architecture for it, and how can I access that persistent storage from outside of the cluster?</p>
<p>If it's not possible, is there a better way to achieve my need for a common database file system for pods in the cluster and applications outside the cluster?</p>
| Ohad | <p>Having a filesystem shared inside the cluster between multiple pods is doable with any persistent volume marked as ReadWriteMany, like an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">NFS</a> volume. However, for the NFS you will need a Kubernetes "addon", specific to your infrastructure, that manages its creation and deletion.</p>
<p>I don't know how it will react if it is modified from outside the cluster, but if what you need is just a database shared between the cluster and an outside application, then it may be easier to run a regular database on a machine outside the cluster.</p>
<p>In fact, you mostly want a distributed database on a Kubernetes cluster for high availability, not performance, and most implementations seem to favor local storage with synchronization implemented inside the application (leader election and so on) over shared volumes.</p>
<p>If you want performance, you may take a look at <a href="https://en.wikipedia.org/wiki/Shard_(database_architecture)" rel="nofollow noreferrer">sharding</a> your database.</p>
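<p>For illustration, a minimal sketch of what an NFS-backed ReadWriteMany volume could look like (the server address, export path and sizes are placeholders, not values from your setup):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.5          # placeholder: NFS server reachable both from the cluster and from on-prem machines
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV above instead of dynamic provisioning
  resources:
    requests:
      storage: 10Gi
</code></pre>
<p>Applications outside the cluster would then mount the same NFS export directly, which is what makes the share visible on both sides.</p>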
| OreOP |
<p>I am using Calico as the CNI and CRI-O as the container runtime, with DNS settings properly configured. I installed the NGINX ingress controller with helm, following the official NGINX documentation page, and set the <code>replicaset</code> to 2 when installing.</p>
<p>After that I used this file to create 3 objects: a <code>Deployment</code>, a <code>Service</code> for exposing the web server, and an <code>Ingress</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: test-nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx-service-pod
template:
metadata:
labels:
app: nginx-service-pod
spec:
containers:
- image: nginx
name: test-nginx
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: ClusterIP
selector:
app: nginx-service-pod
ports:
- port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: k8s.example.com
http:
paths:
- path: /test
pathType: Prefix
backend:
service:
name: nginx-service
port:
number: 80
...
</code></pre>
<p>I tested the deployment's Service by curling it, and it works correctly:</p>
<pre><code># curl http://10.103.88.163
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
</code></pre>
<p>But when I try to curl the Ingress I get an error:</p>
<pre><code># curl http://k8s.example.com/test
curl: (7) Failed to connect to k8s.example.com port 80: Connection refused
</code></pre>
<p>Why is this happening? As far as I can see, there is no misconfiguration in the objects.</p>
| Just | <p>This problem should be resolved by adding</p>
<pre><code>spec:
template:
spec:
hostNetwork: true
</code></pre>
<p>to the ingress controller yaml manifest. For more please check <a href="https://github.com/kubernetes/ingress-nginx/issues/4799" rel="nofollow noreferrer">this github issue</a> and <a href="https://github.com/kubernetes/ingress-nginx/issues/4799#issuecomment-560406420" rel="nofollow noreferrer">this answer</a>.</p>
| Mikołaj Głodziak |
<p>Is there an easy way to query for all deployments that use given ConfigMap as valueFrom: configMapKeyRef.</p>
<p>I need to know which deployments use my config map in order to be able to restart them after the config map changes.</p>
<p>I'm looking for something like:</p>
<p><code>kubectl get deployments --with-config-map my-config-map</code></p>
| stasiekz | <p>There is no way to do it as easily as you want. However, you can still get the data you want in one command by using <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath</a> output with your kubectl command, for example as sketched below.</p>
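<p>A minimal sketch of that approach (assuming the ConfigMap is referenced through <code>env.valueFrom.configMapKeyRef</code>, as in the question; the ConfigMap name is a placeholder):</p>
<pre><code>kubectl get deployments -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].env[*].valueFrom.configMapKeyRef.name}{"\n"}{end}' \
  | grep my-config-map
</code></pre>
<p>This prints each deployment name together with the ConfigMaps it references via <code>configMapKeyRef</code>, and the final <code>grep</code> keeps only the rows mentioning your ConfigMap. References via <code>envFrom</code> or volumes would need extra jsonpath expressions.</p>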
| OreOP |
<p>I am connecting to pod via client-Go and I want to get the properties of the file directory</p>
<pre><code>func GetPodFiles(c *gin.Context) {
client, _ := Init.ClusterID(c)
path := c.DefaultQuery("path", "/")
cmd := []string{
"sh",
"-c",
fmt.Sprintf("ls -l %s", path),
}
config, _ := Init.ClusterCfg(c)
req := client.CoreV1().RESTClient().Post().
Resource("pods").
Name("nacos-0").
Namespace("default").SubResource("exec").Param("container", "nacos")
req.VersionedParams(
&v1.PodExecOptions{
Command: cmd,
Stdin: false,
Stdout: true,
Stderr: true,
TTY: false,
},
scheme.ParameterCodec,
)
var stdout, stderr bytes.Buffer
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
response.FailWithMessage(response.InternalServerError, err.Error(), c)
return
}
err = exec.Stream(remotecommand.StreamOptions{
Stdin: nil,
Stdout: &stdout,
Stderr: &stderr,
})
if err != nil {
response.FailWithMessage(response.InternalServerError, "Error obtaining file", c)
return
}
fmt.Println(stdout.String())
}
</code></pre>
<p>Execution Result Output</p>
<pre><code>total 0
lrwxrwxrwx 1 root root 7 Jun 1 2018 bin -> usr/bin
drwxr-xr-x 5 root root 360 Feb 16 16:39 dev
lrwxrwxrwx 1 root root 8 Jun 1 2018 sbin -> usr/sbin
drwxr-xr-x 2 root root 6 Apr 11 2018 srv
</code></pre>
<p>Expect the result</p>
<pre><code>"data": [
{
"perm": "drwxr-xr-x",
"mod_time": "2022-03-02 15:02:15",
"kind": "d",
"name": "temp",
"size": ""
},
]
</code></pre>
<p>Is there a good way, or a Golang third-party library, to handle this? Please let me know. Thank you.</p>
| gujiwork | <p>In a Kubernetes pod you can execute the <code>stat</code> Linux command instead of the <code>ls</code> command.</p>
<pre><code>$ stat yourFileOrDirName
</code></pre>
<p>The output of this command by default is like this:</p>
<pre class="lang-yaml prettyprint-override"><code> File: yourFileOrDirName
Size: 346 Blocks: 0 IO Block: 4096 directory
Device: 51h/82d Inode: 40431 Links: 1
Access: (0755/drwxr-xr-x) Uid: ( 1000/ username) Gid: ( 1000/ groupname)
Access: 2022-03-02 11:59:07.384821351 +0100
Modify: 2022-03-02 11:58:48.733821177 +0100
Change: 2022-03-02 11:58:48.733821177 +0100
Birth: 2021-12-21 11:12:05.571841723 +0100
</code></pre>
<p>But you can tweak its output like this:</p>
<pre><code>$ stat --printf="%n,%A,%y,%s" yourFileOrDirName
</code></pre>
<p>where <code>%n</code> - file name, <code>%A</code> - permission bits and file type in human readable form, <code>%y</code> - time of last data modification human-readable, <code>%s</code> - total size, in bytes. You can also choose any character as a delimiter instead of comma.</p>
<p>the output will be:</p>
<pre class="lang-yaml prettyprint-override"><code>yourFileOrDirName,drwxr-xr-x,2022-03-02 11:58:48.733821177 +0100,346
</code></pre>
<p>See more info about the <code>stat</code> command <a href="https://man7.org/linux/man-pages/man1/stat.1.html" rel="nofollow noreferrer">here</a>.</p>
<p>After you get such output, I believe you can easily 'convert' it to json format if you really need it.</p>
<p><strong>Furthermore</strong>, you can run the <code>stat</code> command like this:</p>
<pre><code>$ stat --printf="{\"data\":[{\"name\":\"%n\",\"perm\":\"%A\",\"mod_time\":\"%y\",\"size\":\"%s\"}]}" yourFileOrDirName
</code></pre>
<p>Or as @mdaniel suggested, since the command does not contain any shell variables, nor a <code>'</code>, the cleaner command is:</p>
<pre><code>stat --printf='{"data":[{"name":"%n","perm":"%A","mod_time":"%y","size":"%s"}]}' yourFileOrDirName
</code></pre>
<p>and get the DIY json output:</p>
<pre class="lang-json prettyprint-override"><code>{"data":[{"name":"yourFileOrDirName","perm":"drwxrwxr-x","mod_time":"2022-02-04 15:17:27.000000000 +0000","size":"4096"}]}
</code></pre>
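<p>If you would rather build the JSON in Go instead of in the shell, a minimal sketch could look like this (it assumes the <code>--printf="%n,%A,%y,%s"</code> format shown above with a trailing <code>\n</code> appended so each file ends up on its own line, and takes the <code>stdout.String()</code> value from the question's code):</p>
<pre><code>import "strings"

// FileInfo mirrors the structure the question expects in "data".
type FileInfo struct {
	Name    string `json:"name"`
	Perm    string `json:"perm"`
	ModTime string `json:"mod_time"`
	Size    string `json:"size"`
}

// parseStatOutput splits the comma-delimited stat output (one file per line) into FileInfo entries.
func parseStatOutput(out string) []FileInfo {
	var files []FileInfo
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		parts := strings.SplitN(line, ",", 4)
		if len(parts) != 4 {
			continue // skip empty or malformed lines
		}
		files = append(files, FileInfo{Name: parts[0], Perm: parts[1], ModTime: parts[2], Size: parts[3]})
	}
	return files
}
</code></pre>
<p>The resulting slice can then be passed to <code>json.Marshal</code> (or straight to your <code>response</code> helper) to produce the expected payload.</p>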
| mozello |
<p>I've started experimenting with Argocd as part of my cluster setup and set it up to watch a test repo containing some yaml files for a small application I wanted to use for the experiment. While getting to know the system a bit, I broke the repo connection and instead of fixing it I decided that I had what I wanted, and decided to do a clean install with the intention of configuring it towards my actual project.</p>
<p>I pressed the button in the web UI for deleting the application, which got stuck. After that I read that adding <code>spec.syncPolicy.allowEmpty: true</code> and removing the <code>metadata.finalizers</code> declaration from the application yaml file should help. This still did not allow me to remove the application resource.</p>
<p>I then ran an uninstall command with the official manifests/install.yaml as an argument, which cleaned up most resources installed, but left the application resource and the namespace. Command: <code>kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></p>
<p>I have tried to use <code>kubectl delete application NAME</code> with the <code>--force</code> flag and the <code>--cascade=orphans</code> flag, on the application resource as well as on the argocd namespace itself. Now I have both of them stuck at terminating without getting any further.</p>
<p>Now I'm proper stuck as I can't reinstall the argocd in any way I know due to the resources and namespace being marked for deletion, and I'm at my wits end as to what else I can try in order to get rid of the dangling application resource.</p>
<p>Any and all suggestions as to what to look into is much appreciated.</p>
| amsten | <p>If your problem is that the namespace cannot be deleted, the following two solutions may help you:</p>
<ol>
<li>Check which resources are stuck in the deletion process, delete these resources, and then delete the namespace</li>
<li>Edit the argocd namespace, check whether there is a finalizers field in the spec, and delete that field together with its contents (see the example commands below)</li>
</ol>
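<p>For example (a sketch only; the Application name is a placeholder and the second command assumes <code>jq</code> is available):</p>
<pre><code># drop the finalizers from the stuck Application resource
kubectl -n argocd patch application <app-name> --type merge -p '{"metadata":{"finalizers":null}}'

# clear the finalizers on the argocd namespace itself via the finalize subresource
kubectl get namespace argocd -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw /api/v1/namespaces/argocd/finalize -f -
</code></pre>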
<p>Hopefully it helped you.</p>
| ice coffee |
<p>A Kubelet has several endpoint paths it listens on, such as <code>/metrics</code>, <code>/metrics/cadvisor</code>, <code>/logs</code>, etc. One can easily query these endpoints by running <code>kubectl get --raw /api/v1/nodes/<node-name>/proxy/<path></code> (after running <code>kubectl proxy</code>).</p>
<p>My question is how can one obtain the list of all these paths that Kubelet is serving? A list can be found in the Kubelet's own code <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L84" rel="nofollow noreferrer">here</a>, but that's just a subset. There's for example <code>/pods</code> which is not on that list, but defined further down in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L342" rel="nofollow noreferrer">the code as well</a>. But there are others that aren't explicitly listed in the code, such as <code>/healthz</code>, which one guesses by looking at <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L333" rel="nofollow noreferrer">other lines of the code</a>. I'd also venture to believe that other addons or 3rd party products could result in the Kubelet exposing more paths.</p>
<p>I tried using <code>/healthz?verbose</code>, but it only returns basic information, and nothing near a list of paths:</p>
<pre><code>[+]ping ok
[+]log ok
[+]syncloop ok
healthz check passed
</code></pre>
<p>The Kubernetes API Server returns a very nice list of paths using <code>kubectl get --raw /</code> as seen below (truncated due to length). Is there something equivalent for Kubelet's own paths?</p>
<pre><code>{
"paths": [
"/.well-known/openid-configuration",
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/autoscaling/v2beta2",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
....
</code></pre>
| Mihai Albert | <p>Based on information from different sources, below are some endpoints served by the kubelet.</p>
<p>From the code of <a href="https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/kubelet/server/server.go#L85" rel="nofollow noreferrer">kubelet server</a>:</p>
<pre><code>/metrics
/metrics/cadvisor
/metrics/resource
/metrics/probes
/stats/
/logs/
/debug/pprof/
/debug/flags/v
</code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/kubelet/server/server.go#L342" rel="nofollow noreferrer">also</a>:</p>
<pre><code>/pods/*
</code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/kubelet/server/server.go#L409" rel="nofollow noreferrer">and</a>:</p>
<pre><code>/run/*
/exec/*
/attach/*
/portForward/*
/containerLogs/*
/configz
/runningpods/
</code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/kubelet/server/auth_test.go#L108" rel="nofollow noreferrer">here</a>:</p>
<pre><code>"/attach/{podNamespace}/{podID}/{containerName}": "proxy",
"/attach/{podNamespace}/{podID}/{uid}/{containerName}": "proxy",
"/configz": "proxy",
"/containerLogs/{podNamespace}/{podID}/{containerName}": "proxy",
"/debug/flags/v": "proxy",
"/debug/pprof/{subpath:*}": "proxy",
"/exec/{podNamespace}/{podID}/{containerName}": "proxy",
"/exec/{podNamespace}/{podID}/{uid}/{containerName}": "proxy",
"/healthz": "proxy",
"/healthz/log": "proxy",
"/healthz/ping": "proxy",
"/healthz/syncloop": "proxy",
"/logs/": "log",
"/logs/{logpath:*}": "log",
"/metrics": "metrics",
"/metrics/cadvisor": "metrics",
"/metrics/probes": "metrics",
"/metrics/resource": "metrics",
"/pods/": "proxy",
"/portForward/{podNamespace}/{podID}": "proxy",
"/portForward/{podNamespace}/{podID}/{uid}": "proxy",
"/run/{podNamespace}/{podID}/{containerName}": "proxy",
"/run/{podNamespace}/{podID}/{uid}/{containerName}": "proxy",
"/runningpods/": "proxy",
"/stats/": "stats",
"/stats/summary": "stats"
</code></pre>
<p>The asterisk indicates that the full request should be completed with some parameters. For example, for <code>/containerLogs/*</code> you add <code>/{podNamespace}/{podID}/{containerName}</code>:</p>
<pre><code>kubectl get --raw /api/v1/nodes/<node-name>/proxy/containerLogs/{podNamespace}/{podID}/{containerName}
</code></pre>
<p>Some information <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization" rel="nofollow noreferrer">from kubernetes site about kubelet API</a>:</p>
<pre><code>/stats/*
/metrics/*
/logs/*
/spec/*
</code></pre>
<p>Also you can look at this page from <a href="https://github.com/cyberark/kubeletctl/blob/master/API_TABLE.md" rel="nofollow noreferrer">kubeletctl</a>. It's a bit outdated, but may provide some useful information about the kubelet API and HTTP requests.</p>
<p>And this <a href="https://www.deepnetwork.com/blog/2020/01/13/kubelet-api.html" rel="nofollow noreferrer">article about the kubelet API</a> is good too.</p>
<p>In any case, it is recommended to check the Kubernetes documentation before using these endpoints, to see what is deprecated in current/old releases.</p>
<p>p.s. If you are interested in this topic, you can create an issue on <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">kubernetes GitHub page</a> to propose an improvement for kubelet documentation.</p>
| Andrew Skorkin |
<p>I tried creating a deployment with a minikube cluster connected to VirtualBox, but it results in the ImagePullBackOff error shown below (commands were passed in Windows PowerShell with admin rights).
I also tried Docker as the driver, with the same result. Help me out!!</p>
<pre><code>PS C:\Windows\system32> kubectl get pod
NAME                                  READY   STATUS             RESTARTS   AGE
mongo-express-98c6ff4b4-l7jmn         0/1     ImagePullBackOff   0          116m
mongodb-deployment-67dcfb9c9f-mfvxr   0/1     ImagePullBackOff   0          116m
</code></pre>
<pre><code>PS C:\Windows\system32> kubectl describe pod
Name: mongo-express-98c6ff4b4-l7jmn
Namespace: default
Priority: 0
Node: minikube/192.168.59.113
Start Time: Thu, 30 Jun 2022 19:10:41 +0530
Labels: app=mongo-express
pod-template-hash=98c6ff4b4
Annotations: <none>
Status: Pending
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/mongo-express-98c6ff4b4
Containers:
mongo-express:
Container ID:
Image: mongo-express
Image ID:
Port: 8081/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'> Optional: false
ME_CONFIG_MONGODB_ADMINPASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'> Optional: false
ME_CONFIG_MONGODB_SERVER: <set to the key 'database_url' of config map 'mongodb-configmap'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lp9nk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-lp9nk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 60m default-scheduler Successfully assigned default/mongo-express-98c6ff4b4-l7jmn to minikube
Warning Failed 58m (x6 over 59m) kubelet Error: ImagePullBackOff
Normal Pulling 58m (x4 over 59m) kubelet Pulling image "mongo-express"
Warning Failed 58m (x4 over 59m) kubelet Failed to pull image "mongo-express": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.0.2.3:53: no such host
Warning Failed 58m (x4 over 59m) kubelet Error: ErrImagePull
Warning Failed 29m (x2 over 36m) kubelet Failed to pull image "mongo-express": rpc error: code = Unknown desc = context deadline exceeded
Normal BackOff 19m (x141 over 59m) kubelet Back-off pulling image "mongo-express"
Name: mongodb-deployment-67dcfb9c9f-mfvxr
Namespace: default
Priority: 0
Node: minikube/192.168.59.113
Start Time: Thu, 30 Jun 2022 19:10:32 +0530
Labels: app=mongodb
pod-template-hash=67dcfb9c9f
Annotations: <none>
Status: Pending
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/mongodb-deployment-67dcfb9c9f
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'> Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft77v (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft77v:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 60m default-scheduler Successfully assigned default/mongodb-deployment-67dcfb9c9f-mfvxr to minikube
Warning Failed 58m (x6 over 60m) kubelet Error: ImagePullBackOff
Normal Pulling 58m (x4 over 60m) kubelet Pulling image "mongo"
Warning Failed 58m (x4 over 60m) kubelet Failed to pull image "mongo": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.0.2.3:53: no such host
Warning Failed 58m (x4 over 60m) kubelet Error: ErrImagePull
Warning Failed 34m kubelet Failed to pull image "mongo": rpc error: code = Unknown desc = context deadline exceeded
Normal BackOff 19m (x134 over 60m) kubelet Back-off pulling image "mongo"
</code></pre>
| Vj_raghav | <p>Try to pull the image first, then create the deployment.</p>
<pre><code>minikube image pull mongo
</code></pre>
<p><strong>UPD</strong>: Sometimes even <code>image pull</code> doesn't help. The minikube developers said that you can curl to check whether you can connect to the registry or not. There may also be ISP issues: temporarily switching my connection to mobile data and installing the needed pods worked for me.</p>
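<p>A quick connectivity check from inside the minikube VM could look like this (a sketch; the registry host is the one from the error message in the question):</p>
<pre><code>minikube ssh
nslookup registry-1.docker.io
curl -I https://registry-1.docker.io/v2/
</code></pre>
<p>If the DNS lookup fails inside the VM (as the "no such host" error suggests), restarting minikube with a different driver or fixing the VM's DNS is usually the next step.</p>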
| maksat |
<p><strong>Traefik version 2.5.6</strong></p>
<p>I have the following ingress settings:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/app-root: /users
traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
name: users
spec:
rules:
- host: dev.[REDUCTED]
http:
paths:
- backend:
service:
name: users-service
port:
number: 80
path: /users
pathType: Prefix
</code></pre>
<p>But when I call:</p>
<pre><code>curl -i http://dev.[REDUCTED]/users/THIS-SHOUD-BE-ROOT
</code></pre>
<p>I get in the pod, serving the service:</p>
<pre><code>error: GET /users/THIS-SHOUD-BE-ROOT 404
</code></pre>
<p>What can be the reason for that?</p>
| Michael A. | <p>Try to use <a href="https://doc.traefik.io/traefik/v2.5/user-guides/crd-acme/#traefik-routers" rel="nofollow noreferrer">Traefik Routers</a> as in the example below:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: users
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`dev.[REDUCTED]`) && PathPrefix(`/users`)
kind: Rule
services:
- name: users-service
port: 80
</code></pre>
| Bazhikov |
<p>I found in the Microsoft documentation a yaml that allows doing everything on all resources inside a namespace. I modified this yaml to leave out the delete verb and it works fine:</p>
<pre><code> kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: myaksrole_useraccess
namespace: mynamespace
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["*"]
verbs: ["create", "patch", "get", "update", "list"]
- apiGroups: ["batch"]
resources:
- jobs
- cronjobs
verbs: ["create", "patch", "get", "update", "list"]
</code></pre>
<p>My question is: how can I add delete only for the pods resource in this yaml?</p>
| Emanuele | <p>Let's check the <code>myaksrole_useraccess</code> Role from the original definition:</p>
<pre><code>kubectl describe role myaksrole_useraccess -n mynamespace
Name: myaksrole_useraccess
kind: Role
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
* [] [] [create patch get update list]
*.apps [] [] [create patch get update list]
cronjobs.batch [] [] [create patch get update list]
jobs.batch [] [] [create patch get update list]
*.extensions [] [] [create patch get update list]
</code></pre>
<p>Then we can add additional permission for the Pods resource. The updated Role definition is shown below.</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: myaksrole_useraccess
namespace: mynamespace
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["*"]
verbs: ["create", "patch", "get", "update", "list"]
- apiGroups: ["batch"]
resources:
- jobs
- cronjobs
verbs: ["create", "patch", "get", "update", "list"]
- apiGroups: [""]
resources:
- pods
verbs: ["delete", "create", "patch", "get", "update", "list"]
</code></pre>
<p>Apply the changes:</p>
<pre><code>kubectl apply -f myaksrole_useraccess.yaml
</code></pre>
<p>Check the <code>myaksrole_useraccess</code> Role again:</p>
<pre><code>kubectl describe role myaksrole_useraccess -n mynamespace
Name: myaksrole_useraccess
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
* [] [] [create patch get update list]
*.apps [] [] [create patch get update list]
cronjobs.batch [] [] [create patch get update list]
jobs.batch [] [] [create patch get update list]
*.extensions [] [] [create patch get update list]
pods [] [] [delete create patch get update list]
</code></pre>
| Andrew Skorkin |
<p>Now I am remote debugging my Java program in Kubernetes (v1.15.2) using kubectl port-forward like this:</p>
<pre><code>kubectl port-forward soa-report-analysis 5018:5018 -n dabai-fat
</code></pre>
<p>I can use IntelliJ IDEA to connect to my localhost port 5018 and remotely debug my pod in the Kubernetes cluster in a remote datacenter, but now I am facing a problem: every time the pod is upgraded I must change the pod name and start debugging again. Is there any way to keep a stable channel for debugging?</p>
| Dolphin | <p>For anyone who looks for ways to debug Java (and Go, NodeJS, Python, .NET Core) applications in Kubernetes, I could suggest looking at <strong>skaffold</strong>. <br/>
It is a simple CLI tool that uses the already existing build and deploy configuration that you have been working with.
There is no need for additional installation in the cluster, modification of the existing deployment configuration, etc.<br/>
Install CLI: <a href="https://skaffold.dev/docs/install/" rel="nofollow noreferrer">https://skaffold.dev/docs/install/</a><br/>
Open your project, and try:</p>
<pre><code>skaffold init
</code></pre>
<p>This will make skaffold create</p>
<p><strong>skaffold.yaml</strong></p>
<p>(the only needed config file for skaffold)</p>
<p>And then</p>
<pre><code>skaffold debug
</code></pre>
<p>Which will use your existing build and deploy config, to build a container and deploy it. If needed necessary arguments will be injected into the container, and port forwarding will start automatically.</p>
<p>For more info look at:
<a href="https://skaffold.dev/docs/workflows/debug/" rel="nofollow noreferrer">https://skaffold.dev/docs/workflows/debug/</a></p>
<p>This can provide a consistent way to debug your application without having to keep track of the current pod or deployment state all the time.</p>
| Rustam Lotsmanenko |
<p>I'm planning to deploy a small Kubernetes cluster (3x 32GB Nodes). I'm not experienced with K8S and I need to come up with some kind of resilient SQL database setup and CockroachDB seems like a great choice.</p>
<p>I wonder if it's possible to <em>relatively easy</em> deploy a configuration, where some CockroachDB instances (nodes?) are living inside the K8S cluster, but at the same time some other instances live outside the K8S cluster (2 on-premise VMs). All those CockroachDB would need to be considered a single CockroachDB cluster. It might be also worth noting that Kubernetes would be hosted in the cloud (eg. Linode).</p>
<p>By <em>relatively easy</em> I mean:</p>
<ul>
<li><em>simplish</em> to deploy</li>
<li>requiring little maintenance</li>
</ul>
<p><a href="https://mermaid.live/edit#pako:eNqNkU1vwjAMhv9K5DM9tGwS6nFjp62bNKReCIcsMTRSk1SpI4QQ_51klH10iOGT5fd9Yjveg3QKoYR167ayEZ7Yyzu3LEYfPjZedA17bENP6JfPswWTp3x1sijtUZJ29gv6BSbgNT7P8m8xxfzhRwGtuoYWY7S4GZ2O0ekF9DMZLfxms86j0T3etGZd5cu6Yvlq3O_uv1Hrqkhk8Ye8vz7pcBGWZVnqfqk4fFNUzwWYgEFvhFbx3Pskc6AGDXIoY9rqTUMcuD1EY-iUIHxSmpyHci3aHicgArnFzkooyQc8m-ZaxGXM4DocAUHgpow" rel="nofollow noreferrer"><img src="https://mermaid.ink/img/pako:eNqNkU1vwjAMhv9K5DM9tGwS6nFjp62bNKReCIcsMTRSk1SpI4QQ_51klH10iOGT5fd9Yjveg3QKoYR167ayEZ7Yyzu3LEYfPjZedA17bENP6JfPswWTp3x1sijtUZJ29gv6BSbgNT7P8m8xxfzhRwGtuoYWY7S4GZ2O0ekF9DMZLfxms86j0T3etGZd5cu6Yvlq3O_uv1Hrqkhk8Ye8vz7pcBGWZVnqfqk4fFNUzwWYgEFvhFbx3Pskc6AGDXIoY9rqTUMcuD1EY-iUIHxSmpyHci3aHicgArnFzkooyQc8m-ZaxGXM4DocAUHgpow" alt="" /></a></p>
| Xkonti | <p>Yes, it's straightforward to do a multi-cloud deployment of CRDB. This is one of the great advantages of CockroachDB. Simply run the <code>cockroach start</code> command on each of the VMs/pods running CockroachDB and they will form a cluster.</p>
<p>See this blog post/tutorial for more info: <a href="https://www.cockroachlabs.com/blog/multi-cloud-deployment/" rel="noreferrer">https://www.cockroachlabs.com/blog/multi-cloud-deployment/</a></p>
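<p>A rough sketch of what that start command looks like on each node (addresses and the certs directory are placeholders; the blog post above walks through the full setup, including making the Kubernetes pods reachable from the on-prem VMs):</p>
<pre><code>cockroach start \
  --certs-dir=certs \
  --advertise-addr=<this-node-address> \
  --join=<k8s-node-1>,<k8s-node-2>,<onprem-vm-1>,<onprem-vm-2> \
  --listen-addr=0.0.0.0:26257
</code></pre>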
| alyshan |
<p>I've recently started using kOps as a tool to provision Kubernetes clusters and, from what I've seen so far, it stores its CA key and certificates in its S3 bucket, which is fine.</p>
<p>But out of curiosity, would it be possible to store these in HashiCorp Vault instead, as opposed to S3?</p>
| Metro | <blockquote>
<p>But out of curiosity, would it be possible to store these in HashiCorp Vault instead, as opposed to S3?</p>
</blockquote>
<p>Yes. User <a href="https://stackoverflow.com/users/5343387/matt-schuchard" title="16,592 reputation">Matt Schuchard</a> has mentioned in the comment:</p>
<blockquote>
<p>Yes you can store them in the KV2 secrets engine, or use the PKI secrets engine to generate them instead.</p>
</blockquote>
<p>For more details look at this <a href="https://kops.sigs.k8s.io/state/" rel="nofollow noreferrer">kops documentation</a>. The most interesting part should be <a href="https://kops.sigs.k8s.io/state/#node-authentication-and-configuration" rel="nofollow noreferrer">Node authentication and configuration</a>:</p>
<blockquote>
<p>The vault store uses IAM auth to authenticate against the vault server and expects the vault auth plugin to be mounted on <code>/aws</code>.</p>
<p>Instructions for configuring your vault server to accept IAM authentication are at <a href="https://learn.hashicorp.com/vault/identity-access-management/iam-authentication" rel="nofollow noreferrer">https://learn.hashicorp.com/vault/identity-access-management/iam-authentication</a></p>
<p>To configure kOps to use the Vault store, add this to the cluster spec:</p>
</blockquote>
<pre><code>spec:
secretStore: vault://<vault>:<port>/<kv2 mount>/clusters/<clustername>/secrets
keyStore: vault://<vault>:<port>/<kv2 mount>/clusters/<clustername>/keys
</code></pre>
<p>Look also at this <a href="https://learn.hashicorp.com/tutorials/vault/approle" rel="nofollow noreferrer">hashicorp site</a>.</p>
| Mikołaj Głodziak |
<p>I have installed minikube on my Windows 10 machine. I was able to create a deployment and work with it. But when I stop minikube, everything (including the deployment) is lost. How can I run minikube as a startup service in Windows?</p>
| I.vignesh David | <p>I've reproduced your problem on <code>1.25.1</code> version - indeed, all resources that were deployed in the cluster are deleted on <code>minikube stop</code>.</p>
<p>As I already mentioned in the comments, this issue was raised <a href="https://github.com/kubernetes/minikube/issues/12655" rel="nofollow noreferrer">here</a>. It is fixed in the latest release; you would need to upgrade your Minikube installation to version <code>1.25.2</code> - basically, re-install it with the newest version available <a href="https://github.com/kubernetes/minikube/releases/tag/v1.25.2" rel="nofollow noreferrer">here</a>. Confirming from my side that as soon as I upgraded Minikube from 1.25.1 to 1.25.2, deployments and all other resources were present on the cluster after restarting Minikube.</p>
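<p>A possible upgrade sequence on Windows could look like this (a sketch that assumes Minikube was installed via Chocolatey; with a manual install you would simply download and run the newer installer instead):</p>
<pre><code># remove the old cluster and its cached state
minikube delete --all --purge

# upgrade the binary (Chocolatey example)
choco upgrade minikube

# start a fresh cluster with the new version
minikube start
</code></pre>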
| anarxz |
<p>I'm following the <a href="https://gateway.dask.org/install-kube.html" rel="nofollow noreferrer">instructions</a> to setup Dask on K8s Cluster. I'm on MacOS, have K8s running on Docker Desktop, <code>kubectl</code> version <code>1.22.5</code> and <code>helm</code> version <code>3.8.0</code>. After adding the repository, downloading default configuration, installing helm chart using command</p>
<pre><code>RELEASE=my-dask-gateway
NAMESPACE=dask-gateway
VERSION=0.9.0
helm upgrade --install \
--namespace $NAMESPACE \
--version $VERSION \
--values path/to/your/config.yaml \
$RELEASE \
dask/dask-gateway
</code></pre>
<p>generates following output/error</p>
<pre><code>"dask" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "dmwm-bigdata" chart repository
...Successfully got an update from the "dask" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "my-dask-gateway" does not exist. Installing it now.
Error: failed to install CRD crds/daskclusters.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
</code></pre>
<p>An older <a href="https://stackoverflow.com/questions/69054622/unable-to-install-crds-in-kubernetes-kind">post</a> suggests to either update the manifest or use older version of kubernetes. Does that mean dask is not compatible with recent versions of kubernetes?</p>
| F Baig | <p><strong>Posting community wiki answer for better visibility:</strong></p>
<p>This is fixed in the repo main. You could grab the CRDs from there, or wait for a release, which we are hoping to do soon. Otherwise, yes, you would need an older version of kubernetes for dask-gateway to work.</p>
| anarxz |
<p>I can route HTTP traffic (e.g. Elasticsearch and various dashboards) through Istio Gateway, but I can't get raw TCP traffic through. I have two examples below (postgres and redpanda). I have no trouble accessing the underlying services (<code>mypostgres.default.svc.cluster.local</code> and <code>three-node-cluster-0.three-node-cluster.redpanda-system.svc.cluster.local</code>) internally with postgres and kafka clients.</p>
<p>My Gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- 'mydomain.cloud'
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- 'mydomain.cloud'
tls:
mode: SIMPLE
credentialName: letsencrypt-staging-tls
- port:
number: 9092
name: redpanda
protocol: TCP
hosts:
- 'mydomain.cloud'
- port:
number: 5432
name: postgres
protocol: TCP
hosts:
- 'mydomain.cloud'
</code></pre>
<p>Postgres spec:</p>
<pre><code>apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
name: mypostgres
namespace: postgres
spec:
replicas: 3
image: postgres:13.2
database:
size: 50Gi
storageClassName: postgres
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgressecret
key: superUserPassword
- name: POSTGRES_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
name: postgressecret
key: replicationUserPassword
</code></pre>
<p>Virtual service:</p>
<pre><code>spec:
hosts:
- "*"
gateways:
- istio-system/gateway
tcp:
- match:
- port: 5432
route:
- destination:
host: mypostgres.default.svc.cluster.local
port:
number: 5432
</code></pre>
<p>Destination rule</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: postgres-destination-rule
namespace: default
spec:
host: mypostgres.default.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
</code></pre>
<p>Redpanda</p>
<pre><code>apiVersion: redpanda.vectorized.io/v1alpha1
kind: Cluster
metadata:
name: three-node-cluster
spec:
image: "vectorized/redpanda"
version: "latest"
replicas: 2
resources:
requests:
cpu: 1
memory: 2Gi
limits:
cpu: 1
memory: 2Gi
configuration:
rpcServer:
port: 33145
kafkaApi:
- port: 9092
pandaproxyApi:
- port: 8082
adminApi:
- port: 9644
developerMode: true
storage:
storageClassName: redpanda
</code></pre>
<p>Virtual service</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: redpanda-vts
namespace: redpanda-system
spec:
hosts:
- "*"
gateways:
- istio-system/gateway
tcp:
- match:
- port: 9092
route:
- destination:
host: three-node-cluster-0.three-node-cluster.redpanda-system.svc.cluster.local
port:
number: 9092
</code></pre>
<p>Destination rule:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: redpanda-destination-rule
namespace: redpanda-system
spec:
host: three-node-cluster-0.three-node-cluster.redpanda-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
</code></pre>
<p>Any ideas? I've tried playing around with the host names, using asterisks instead of domain names, but no effect. Getting TLS will be another day's fight, but now I'd just like to get some traffic through.</p>
<p>For example, the following works for RedPanda from inside the cluster with the <a href="https://kafka-python.readthedocs.io/en/master/install.html" rel="nofollow noreferrer">standard kafka-python client</a>:</p>
<pre><code>from kafka.admin import KafkaAdminClient, NewTopic
nodes = {'bootstrap.servers':'three-node-cluster-0.three-node-cluster.redpanda-system.svc.cluster.local, three-node-cluster-1.three-node-cluster.redpanda-system.svc.cluster.local'}
admin_client = KafkaAdminClient(
bootstrap_servers=nodes['bootstrap.servers'],
client_id='test'
)
topic_list = []
topic_list.append(NewTopic(name="test-topic", num_partitions=1, replication_factor=1))
admin_client.create_topics(new_topics=topic_list, validate_only=False)
</code></pre>
<p>Similarly, I would like to be able to do the following from outside K8s through Istio Gateway:</p>
<pre><code>from kafka.admin import KafkaAdminClient, NewTopic
nodes = {'bootstrap.servers':'mydomain.cloud/kafka'}
admin_client = KafkaAdminClient(
bootstrap_servers=nodes['bootstrap.servers'],
client_id='test'
)
topic_list = []
topic_list.append(NewTopic(name="test-topic", num_partitions=1, replication_factor=1))
admin_client.create_topics(new_topics=topic_list, validate_only=False)
</code></pre>
| Minsky | <p>Based on the documentation about <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/" rel="nofollow noreferrer">Istio Protocol Selection</a></p>
<blockquote>
<p>Istio supports proxying any TCP traffic. This includes HTTP, HTTPS, gRPC, as well as raw TCP protocols. In order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified.</p>
</blockquote>
<p>And the answer to your problem should be in <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection" rel="nofollow noreferrer">this fragment</a>:</p>
<blockquote>
<p>Protocols can be specified manually in the Service definition.</p>
<p>This can be configured in two ways:</p>
<ul>
<li>By the name of the port: <code>name: <protocol>[-<suffix>]</code>.</li>
<li>In Kubernetes 1.18+, by the <code>appProtocol</code> field: <code>appProtocol: <protocol></code>.</li>
</ul>
<p>Note that behavior at the Gateway differs in some cases as the gateway can terminate TLS and the protocol can be negotiated.</p>
</blockquote>
<p>Look at the example yaml:</p>
<blockquote>
<p>Below is an example of a Service that defines a <code>https</code> port by <code>appProtocol</code> and an <code>http</code> port by name:</p>
</blockquote>
<pre><code>kind: Service
metadata:
name: myservice
spec:
ports:
- number: 3306
name: database
appProtocol: https <-change here 'https' to 'tcp'
- number: 80
name: http-web
</code></pre>
<p>In your situation try to replace <code>appProtocol: https</code> and put <code>appProtocol: tcp</code> in your Service yaml</p>
<p>Bear in mind that server-first protocols, such as MySQL, are incompatible with automatic protocol selection. See <a href="https://istio.io/latest/docs/ops/deployment/requirements#server-first-protocols" rel="nofollow noreferrer">Server first protocols</a> for more information.</p>
| Mikołaj Głodziak |
<p>I have 3 VPSs with Docker installed. I have created a Docker image and I would like to know if there is a possibility to configure Docker or docker-compose in such a way that these 3 containers run only once a day, at random hours.</p>
<p>I have not found any way other than starting Docker from a bash script, called from the Linux crontab, which waits for a random amount of time. Are there better solutions?
Maybe k8s / k3s?</p>
| Marco Paggioro | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>Possible solutions to this problem are listed below.</p>
<p><strong>Docker</strong></p>
<p>Bash script, which is called from Linux crontab to start Docker.</p>
<p><strong>Kubernetes</strong></p>
<p>Since Docker / docker-compose doesn't have its own way of scheduling a container to run at a random time, a CronJob might be the easiest way to do this (see the sketch below).</p>
<p>More information about CronJobs is available in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">the official documentation</a></p>
| Andrew Skorkin |
<p>I am trying to make a Kubernetes deployment script using helm.
I created the following 2 jobs (I skipped the container template since I guess it does not matter):</p>
<p>templates/jobs/migrate.yaml</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-migrate
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "10"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
...
</code></pre>
<p>templates/jobs/seed.yaml</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-seed
namespace: {{ .Release.Namespace }}
spec:
...
</code></pre>
<p>First job is updating the database structure.<br />
Second job will reset the database contents and fill it with example data.</p>
<p>Since I did not add <code>post-install</code> hook to the seed job I was expecting that job to not run automatically but only when I manually ask it to run.<br />
But it not only ran automatically, it tried to run before migrate.</p>
<p>How can I define a job that I have to manually trigger for it to run?<br />
In vanilla kubernetes jobs run only when I explicitly execute their files using<br />
<code>kubectl apply -f job/seed-database.yaml</code></p>
<p>How can I do the same using helm?</p>
| HubertNNN | <p>Replying to your last comment and thanks to @HubertNNN for his idea:</p>
<blockquote>
<p>Can I run a suspended job multiple times? From documentation it seems
like a one time job that cannot be rerun like normal jobs</p>
</blockquote>
<p>It's a normal job; you just edit the yaml file with <code>.spec.suspend: true</code> and its <code>startTime</code>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
suspend: true
parallelism: 1
completions: 5
template:
spec:
...
</code></pre>
<blockquote>
<p>If all Jobs were created in the suspended state and placed in a pending queue, I can achieve priority-based Job scheduling by resuming Jobs in the right order.</p>
</blockquote>
<p>More information is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#suspending-a-job" rel="nofollow noreferrer">here</a></p>
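<p>To trigger a suspended Job manually you can then flip the flag, for example (using the job name from the snippet above):</p>
<pre><code>kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":false}}'
</code></pre>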
| Bazhikov |
<p>I am using Kubernetes with Helm 3.</p>
<p>It runs on CentOS Linux 7 (Core).</p>
<p>K8S (check by running: kubectl version):</p>
<p>git version (kubernetes): v1.21.6, go version: go1.16.9.</p>
<p>helm version: v3.3.4</p>
<p>helm version (git) go1.14.9.</p>
<p>I need to create a Job that is running after a Pod is created.</p>
<p>The pod yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: {{ include "test.fullname" . }}-mysql
labels:
app: {{ include "test.fullname" . }}-mysql
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-20"
"helm.sh/delete-policy": before-hook-creation
spec:
containers:
- name: {{ include "test.fullname" . }}-mysql
image: {{ .Values.mysql.image }}
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: "12345"
- name: MYSQL_DATABASE
value: test
</code></pre>
<p>The Job:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "test.fullname" . }}-migration-job
labels:
app: {{ include "test.fullname" . }}-migration-job
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
spec:
parallelism: 1
completions: 1
backoffLimit: 1
template: #PodTemplateSpec (Core/V1)
spec: #PodSpec (core/v1)
initContainers: # regular
- name: wait-mysql
image: bitnami/kubectl
imagePullPolicy: IfNotPresent
args:
- wait
- pod/{{ include "test.fullname" . }}-mysql
- --namespace={{ .Release.Namespace }}
- --for=condition=ready
- --timeout=120s
containers:
- name: {{ include "test.fullname" . }}
image: {{ .Values.myMigration.image }}
imagePullPolicy: IfNotPresent
command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
args: {{- toYaml .Values.image.cmd | nindent 12}}
</code></pre>
<p>MySQL is MySQL 5.6 image.</p>
<p>After writing the above, I also run <code>helm install test ./test --namespace test --create-namespace</code>.</p>
<p>Even when I changed the hook to pre-install (for both the Pod and the Job), the job never runs.</p>
<p>In both situations, I get messages like the following (and need to press - to exit; I don't want this behavior either):</p>
<blockquote>
<p>Pod test-mysql pending Pod test-mysql pending Pod
test-mysql pending Pod test-mysql running Pod
test-mysql running Pod test-mysql running Pod
test-mysql running ...</p>
</blockquote>
<p>In this example, when I put a 'bug' in the Job, for example <code>containersx</code> instead of <code>containers</code>, I don't get any notification that the syntax is wrong.</p>
<p>Maybe it is because MySQL is running (and not completed)? Can I force it to proceed to the next yaml declared by a hook? (Even though I declare the proper order for the Pod and the Job: the Pod should run before the Job.)</p>
<p>What is wrong, and how can I ensure the pod is created before the job, so that once the pod starts running, my job runs after that?</p>
<p>Thanks.</p>
| Eitan | <p>As per your configuration, it looks like you need to set <code>post-install</code> <a href="https://helm.sh/docs/topics/charts_hooks/#the-available-hooks" rel="nofollow noreferrer">hook</a> precisely for Job as it should execute after all resources are loaded into Kubernetes. On executing <code>pre-install</code> hook both on Pod and Job, it is run before the rest of the chart is loaded, which seems to prevent Job from starting.</p>
| anarxz |
<p>I have an image on Docker Hub that I am using in a Kubernetes deployment. I'm trying to debug the application, but whenever I make a change to the image, even with a change in the tag, the deployed app still uses the old image. It does update occasionally, but without any rhyme or reason. This is with the imagePullPolicy set to Always.</p>
<p>Here is the deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: myid/myapp:0.2.5
imagePullPolicy: Always
resources:
requests:
memory: "2Gi"
cpu: 1
ephemeral-storage: "2Gi"
limits:
memory: "2Gi"
cpu: 1
ephemeral-storage: "2Gi"
ports:
- containerPort: 3838
name: shiny-port
- containerPort: 8080
name: web-port
imagePullSecrets:
- name: myregistrykey
</code></pre>
<p>I deploy it using the command</p>
<p>kubectl create -f deployment.yaml</p>
<p>Thanks</p>
| bbcho | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>As mentioned in comments:</p>
<ul>
<li>you need to check which <code>tag</code> is used - the same as or different than in the previous version of the image;</li>
<li>update your Deployment with the actual image;</li>
<li>check, if your image available in Docker hub, otherwise you will get <code>ErrImagePull</code> status during pod checking.</li>
</ul>
<p>In case you just want to update an image, it is better to use <code>kubectl apply -f deployment.yaml</code> instead of <code>kubectl create -f deployment.yaml</code>.</p>
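<p>For example, to point the Deployment from the question at a new tag without editing the file, or to force fresh pulls of the same tag, something like this could be used (the new tag is a placeholder; the resource and container names are taken from the manifest above):</p>
<pre><code># switch the container to a new image tag
kubectl set image deployment/deployment frontend=myid/myapp:0.2.6

# or recreate the pods so imagePullPolicy: Always pulls the image again
kubectl rollout restart deployment/deployment
</code></pre>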
| Andrew Skorkin |
<p>I need to create and maintain some global variables accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.</p>
<p><strong>For example:</strong></p>
<ul>
<li>APM ENDPOINT</li>
<li>APM User/pass</li>
<li>RabbitMQ endpoint</li>
<li>MongoDB endpoint</li>
</ul>
<p>For whatever reason, when I change/migrate any global variable, I want to change it once for all running applications in the cluster (only a pod restart should be needed), but if I create a "global" ConfigMap and read it with envFrom, I need to change/update the ConfigMap in all namespaces.</p>
<p>Does someone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but I would need to adapt all applications to use Vault; maybe there is a better idea.</p>
<p>Thanks</p>
| mzibit | <p>There is no built-in solution in Kubernetes for this except for <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap" rel="nofollow noreferrer">creating a ConfigMap</a> and using <code>envFrom</code> to define all of the ConfigMap's data as Pod environment variables, which will indeed require updating it separately for each namespace. So using HashiCorp Vault is a better solution here; one more option can be trying to replicate the ConfigMap across namespaces with Kubernetes addons <a href="https://github.com/EmberStack/kubernetes-reflector" rel="nofollow noreferrer">like this</a>.</p>
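<p>For reference, the ConfigMap + <code>envFrom</code> pattern mentioned above looks roughly like this (all names and values are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: global-env
  namespace: my-namespace        # has to exist (or be replicated) in every namespace that uses it
data:
  APM_ENDPOINT: "https://apm.example.com"
  RABBITMQ_ENDPOINT: "amqp://rabbitmq.example.com:5672"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
spec:
  containers:
  - name: my-app
    image: my-app:latest
    envFrom:
    - configMapRef:
        name: global-env         # every key in the ConfigMap becomes an environment variable
</code></pre>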
| anarxz |
<p>Does anyone know how to configure Promtail to watch and tail custom log paths in a Kubernetes pod? I have a deployment that creates customized log files in a directory such as <code>/var/log/myapp</code>. I found some documentation <a href="https://github.com/jafernandez73/grafana-loki/blob/master/docs/promtail-setup.md" rel="nofollow noreferrer">here</a> that says to deploy Promtail as a sidecar to the container you want to collect logs from. I was hoping someone could explain how this method works in practice. Does it need to be done as a sidecar, or could it be done as a DaemonSet? Or, if you have an alternative solution that has proven to work, could you please show me an example?</p>
| FestiveHydra235 | <p>Posting comment as the community wiki answer for better visibility:</p>
<hr />
<p><em>The information below is taken from the README.md of the GitHub repo provided by atlee19:</em></p>
<p><strong>These docs assume</strong>:</p>
<ul>
<li><p>you have Loki and Grafana already deployed. Please refer to the official documentation for installation</p>
</li>
<li><p>The logfile you want to scrape is in JSON format</p>
</li>
</ul>
<p>This Helm chart deploys an application pod with 2 containers: a Golang app writing logs to a separate file, and a Promtail that reads that log file and sends it to Loki.</p>
<p>The file path can be updated via the <a href="https://github.com/giantswarm/simple-logger/blob/master/helm/values.yaml" rel="nofollow noreferrer">./helm/values.yaml</a> file.</p>
<p><code>sidecar.labels</code> is a map where you can add the labels that will be added to your log entry in Loki.</p>
<p>Example:</p>
<ul>
<li><code>Logfile</code> located at <code>/home/slog/creator.log</code></li>
<li>Adding labels
<ul>
<li><code>job: promtail-sidecar</code></li>
<li><code>test: golang</code></li>
</ul>
</li>
</ul>
<pre><code>sidecar:
logfile:
path: /home/slog
filename: creator.log
labels:
job: promtail-sidecar
test: golang
</code></pre>
| Bazhikov |
<p>I have a <code>ConfigMap</code> where I am including a file in its data attribute and I need to replace several strings from it. But I'm not able to divide it (<strong>the "replaces"</strong>) into <strong>several lines</strong> so that it doesn't get a giant line. How can I do this?</p>
<p>This is what I don't want:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
{{ (.Files.Glob "myFolder/*.json").AsConfig | indent 2 | replace "var1_enabled" (toString .Values.myVar1.enabled) | replace "var2_enabled" (toString .Values.myVar2.enabled) }}
</code></pre>
<p>This is what I'm trying to do:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
{{ (.Files.Glob "myFolder/*.json").AsConfig | indent 2 |
replace "var1_enabled" (toString .Values.myVar1.enabled) |
replace "var2_enabled" (toString .Values.myVar2.enabled) }}
</code></pre>
<p>What is the right syntax to do this?</p>
| Ninita | <blockquote>
<p>What is the right syntax to do this?</p>
</blockquote>
<p>It is well described in <a href="https://helm.sh/docs/chart_template_guide/yaml_techniques/#controlling-spaces-in-multi-line-strings" rel="nofollow noreferrer">this documentation</a>. There are many different ways to achieve your goal, it all depends on the specific situation. You have everything in that documentation. Look at the <a href="https://helm.sh/docs/chart_template_guide/yaml_techniques/#indenting-and-templates" rel="nofollow noreferrer">example</a> most connected to your current situation:</p>
<blockquote>
<p>When writing templates, you may find yourself wanting to inject the contents of a file into the template. As we saw in previous chapters, there are two ways of doing this:</p>
<ul>
<li>Use <code>{{ .Files.Get "FILENAME" }}</code> to get the contents of a file in the chart.</li>
<li>Use <code>{{ include "TEMPLATE" . }}</code> to render a template and then place its contents into the chart.</li>
</ul>
<p>When inserting files into YAML, it's good to understand the multi-line rules above. Often times, the easiest way to insert a static file is to do something like this:</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>myfile: |
{{ .Files.Get "myfile.txt" | indent 2 }}
</code></pre>
<blockquote>
<p>Note how we do the indentation above: <code>indent 2</code> tells the template engine to indent every line in "myfile.txt" with two spaces. Note that we do not indent that template line. That's because if we did, the file content of the first line would be indented twice.</p>
</blockquote>
<p>For more look also at the <a href="https://github.com/helm/helm/issues/5451" rel="nofollow noreferrer">similar problem on github</a> and <a href="https://stackoverflow.com/questions/50951124/multiline-string-to-a-variable-in-a-helm-template">question on stack</a>.</p>
<hr />
<p><strong>EDIT:</strong></p>
<blockquote>
<p>But I'm not able to divide it (<strong>the "replaces"</strong>) into <strong>several lines</strong> so that it doesn't get a giant line. How can I do this?</p>
</blockquote>
<p><strong>It is impossible to achieve. Go Template doesn't support newlines.</strong> For more see <a href="https://stackoverflow.com/questions/49816911/how-to-split-a-long-go-template-function-across-multiple-lines">this question</a>. and <a href="https://pkg.go.dev/text/template" rel="nofollow noreferrer">this documentation</a></p>
<blockquote>
<p>The input text for a template is UTF-8-encoded text in any format. "Actions"--data evaluations or control structures--are delimited by "{{" and "}}"; all text outside actions is copied to the output unchanged. Except for raw strings, actions may not span newlines, although comments can.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I need to add a label to all default rules that come with the Helm chart. I tried setting the label under <code>commonLabels</code> in the values file, to no avail. I also tried putting it as <code>external_labels</code> within the <code>defaultRules</code> stanza, again didn't do the trick. When I add the label to rules I define myself under <code>AdditionalAlerts</code>, it works fine. But I need it for all alerts.</p>
<p>I also added it under the "labels for default rules". The label got added to the metadata of each of the default rules, but I need it inside the spec of the rule, under the already existing label for "severity".</p>
<p>The end goal is to put the environment inside that label, e.g. TEST, STAGING and PRODUCTION. So if anyone has another way to accomplish this, by all means....</p>
| GID | <p>You can upgrade your values.yaml file for Prometheus with the necessary labels in the <code>additionalRuleLabels</code> section for <code>defaultRules</code>.</p>
<p>Below is an example based on the <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L70" rel="nofollow noreferrer">values.yaml file from the Prometheus Monitoring Community</a>:</p>
<pre><code>defaultRules:
## Additional labels for PrometheusRule alerts
additionalRuleLabels:
    additionalRuleLabel1: additionalRuleValue1
</code></pre>
<p>Result:
<a href="https://i.stack.imgur.com/ACq3K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ACq3K.png" alt="enter image description here" /></a></p>
| Andrew Skorkin |
<p>I created a Dockerfile for running Jupyter in Docker.</p>
<pre><code>FROM ubuntu:latest
FROM python:3.7
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
CMD ["jupyter", "notebook", "--allow-root", "--ip=0.0.0.0"]
</code></pre>
<p>My requirements.txt file looks like this:</p>
<pre><code>jupyter
git+https://github.com/kubernetes-client/python.git
</code></pre>
<p>I ran <code>docker build -t hello-jupyter .</code> and it builds fine. Then I ran <code>docker run -p 8888:8888 hello-jupyter</code> and it runs fine.</p>
<p>I'm able to open Jupyter notebook in a web browser (127.0.0.1:8888) when I run the Docker image hello-jupyter.</p>
<hr>
<p>Now I would like to run Jupyter as a Kubernetes deployment. I created this deployment.yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-jupyter-service
spec:
selector:
app: hello-jupyter
ports:
- protocol: "TCP"
port: 8888
targetPort: 8888
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-jupyter
spec:
replicas: 4
selector:
matchLabels:
app: hello-jupyter
template:
metadata:
labels:
app: hello-jupyter
spec:
containers:
- name: hello-jupyter
image: hello-jupyter
imagePullPolicy: Never
ports:
- containerPort: 8888
</code></pre>
<p>I ran this command in shell:</p>
<pre><code>$ kubectl apply -f deployment.yaml
service/hello-jupyter-service unchanged
deployment.apps/hello-jupyter unchanged
</code></pre>
<p>When I check my pods, I see crash loops</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-jupyter-66b88b5f6d-gqcff 0/1 CrashLoopBackOff 6 7m16s
hello-jupyter-66b88b5f6d-q59vj 0/1 CrashLoopBackOff 15 55m
hello-jupyter-66b88b5f6d-spvl5 0/1 CrashLoopBackOff 6 7m21s
hello-jupyter-66b88b5f6d-v2ghb 0/1 CrashLoopBackOff 6 7m20s
hello-jupyter-6758977cd8-m6vqz 0/1 CrashLoopBackOff 13 43m
</code></pre>
<p>The pods have crash loop as their status and I'm not able to open Jupyter in a web browser.</p>
<p>What is wrong with the deployment.yaml file? The deployment.yaml file simply runs the Docker image hello-jupyter in four different pods. Why does the Docker image run in Docker but not in Kubernetes pods?</p>
<p>Here is the log of one of my pods:</p>
<pre><code>$ kubectl logs hello-jupyter-66b88b5f6d-gqcff
[I 18:05:03.805 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/traitlets/traitlets.py", line 537, in get
value = obj._trait_values[self.name]
KeyError: 'port'
</code></pre>
<p>I do specify a port in my deployment.yaml file. I'm not sure why I get this error in the log.</p>
| ktm5124 | <p>There are many reasons for getting the <code>CrashLoopBackOff</code> error. In your case, it seems like either something in your deployment configuration or a lack of resources prevents the container from starting.</p>
<p>As I understand, you've built the Docker image locally and added it to your local Docker registry. Since <code>imagePullPolicy: Never</code> is specified and there is no <code>ErrImageNeverPull</code> error, there is no problem with the image registry between your local Docker and Kubernetes.</p>
<p>You can start by running the command <code>kubectl describe pod [name]</code> to get more information from the kubelet.</p>
<p>Otherwise, try deploying a single pod first instead of a Deployment with 4 replicas, to make sure that Kubernetes runs your image correctly.</p>
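<p>For example, a minimal single-Pod manifest reusing the image from the question could look like this (the pod name is just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-jupyter-test   # example name
spec:
  containers:
  - name: hello-jupyter
    image: hello-jupyter
    imagePullPolicy: Never
    ports:
    - containerPort: 8888
</code></pre>
<p>If this single pod also ends up in <code>CrashLoopBackOff</code>, the problem is in the image or its command rather than in the Deployment or Service definition.</p>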
| Bazhikov |
<p>I have a Kubernetes Cluster and I've been trying to forward logs to Splunk with this <a href="https://github.com/splunk/splunk-connect-for-kubernetes#prerequisites" rel="nofollow noreferrer">splunk-connect-for-kubernetes</a> repo which is essentially Splunk's own kubernetes-oriented configuration of fluentd.</p>
<p>I initially could see logs in Splunk but they appeared to just be related to the system components but not from the pods that I needed from.</p>
<p>I think I tracked the problem down to the global <code>values.yaml</code> file. I experimented a bit with the <code>fluentd path</code> and <code>containers path</code> and found that I likely needed to update the <code>containers pathDest</code> to the same file path as the pod logs.</p>
<p>It looks like something like this now:</p>
<pre><code>fluentd:
# path of logfiles, default /var/log/containers/*.log
path: /var/log/containers/*.log
# paths of logfiles to exclude. object type is array as per fluentd specification:
# https://docs.fluentd.org/input/tail#exclude_path
exclude_path:
# - /var/log/containers/kube-svc-redirect*.log
# - /var/log/containers/tiller*.log
# - /var/log/containers/*_kube-system_*.log (to exclude `kube-system` namespace)
# Configurations for container logs
containers:
# Path to root directory of container logs
path: /var/log
# Final volume destination of container log symlinks
pathDest: /app/logs
</code></pre>
<p>But now I can see in my the logs for my <code>splunk-connect</code> repeated logs like</p>
<pre><code>[warn]: #0 [containers.log] /var/log/containers/application-0-tcc-broker-0_application-0bb08a71919d6b.log unreadable. It is excluded and would be examined next time.
</code></pre>
| Hofbr | <p>I had a very similar problem once and changing the path in the values.yaml file helped to solve the problem. It is perfectly described in <a href="https://community.splunk.com/t5/All-Apps-and-Add-ons/splunk-connect-for-kubernetes-var-log-containers-log-unreadable/m-p/473943" rel="nofollow noreferrer">this thread</a>:</p>
<blockquote>
<p>Found the solution for my question -</p>
<p>./splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/values.yaml: path: <code>/var/log/containers/*.log </code>
Changed to:<br />
path: <code>/var/log/pods/*.log</code> works to me.</p>
</blockquote>
<p>The cited answer may not be readable. Just try changing <code>/var/log/containers/*.log</code> to <code>/var/log/pods/*.log</code> in your file.</p>
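<p>In other words, the relevant fragment of the global <code>values.yaml</code> from the question would become something like this (a sketch; the rest of the file stays unchanged):</p>
<pre><code>fluentd:
  # tail the pod log files directly instead of the /var/log/containers symlinks
  path: /var/log/pods/*.log
</code></pre>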
<p>See also <a href="https://stackoverflow.com/questions/51671212/fluentd-log-unreadable-it-is-excluded-and-would-be-examined-next-time">this similar question on stackoverflow</a>.</p>
| Mikołaj Głodziak |
<p>Recently my kafka producer running as a cronjob on a kubernetes cluster has started doing the following when pushing new messages to the queue:</p>
<pre><code>{"@version":1,"source_host":"<job pod name>","message":"[Producer clientId=producer-1] Resetting the last seen epoch of partition <topic name> to 4 since the associated topicId changed from null to JkTOJi-OSzavDEomRvAIOQ","thread_name":"kafka-producer-network-thread | producer-1","@timestamp":"2022-02-11T08:45:40.212+0000","level":"INFO","logger_name":"org.apache.kafka.clients.Metadata"}
</code></pre>
<p>This results in the producer running into a timeout:</p>
<pre><code>"exception":{"exception_class":"java.util.concurrent.ExecutionException","exception_message":"org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for <topic name>:120000 ms has passed since batch creation", stacktrace....}
</code></pre>
<p>The logs of the kafka consumer pod and kafka cluster pods don't show any out-of-the-ordinary changes.</p>
<p>Has anyone seen this behavior before and if yes, how do I prevent it?</p>
| sigma1510 | <p>Reason:</p>
<blockquote>
<p>the Java producer (API client) cannot connect to the Kafka broker</p>
</blockquote>
<p>Solution:</p>
<blockquote>
<p>On each broker, add the following line to the properties file</p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>host.name=<server IP>
</code></pre>
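<p>As a side note (an assumption about newer Kafka versions, not about your exact setup): <code>host.name</code> is a legacy broker property, and on recent brokers the equivalent settings in <code>server.properties</code> are <code>listeners</code>/<code>advertised.listeners</code>, for example:</p>
<pre class="lang-sh prettyprint-override"><code># substitute an address that producers/consumers can actually reach
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<broker-host-or-ip>:9092
</code></pre>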
| 90linux |
<p>I've been searching and every answer seems to be the same example (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a>). In a pod you can create an empty volume, then mount that into two containers and any content written in that mount will be seen on each container. While this is fine my use case is slightly different.</p>
<p>Container A
/opt/content</p>
<p>Container B
/data</p>
<p>Container A has an install of about 4G of data. What I would like to do is mount /opt/content into Container B at /content. This way the 4G of data is accessible to Container B at runtime and I don't have to copy content or specially build Container B.</p>
<p>My question, is this possible. If it is, what would be the proper pod syntax.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: two-containers
spec:
restartPolicy: Never
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: nginx-container
image: nginx
volumeMounts:
- name: shared-data
mountPath: /opt/content
- name: debian-container
image: debian
volumeMounts:
- name: shared-data
mountPath: /content
</code></pre>
| Stephen Paulin | <p>From my research and testing, the best I can tell is that within a Pod two containers cannot see each other's file systems. The volume mount will create a mount in the pod at the specified path for each container (as the example shows), and any content written to it after that point will be seen by both. This works great for logs and similar data.</p>
<p>In my context, this proves not to be possible, and creating this mount and then having Container A copy the 4G directory to the newly created mount is too time consuming to make this an option.</p>
<p>Best I can tell, the only way to do this is to create a Persistent Volume (or something similar) and mount that in Container B. This way Container A's contents are stored in the Persistent Volume and it can be easily mounted when needed. The only issue with this is that the Persistent Volume will have to be set up in every Kubernetes cluster, which is the pain point.</p>
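<p>A rough sketch of that approach, assuming a pre-populated claim named <code>content-pvc</code> (the claim name, size and access mode are made up for illustration and depend on your storage backend), mounted read-only into Container B:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: content-pvc          # hypothetical claim holding the 4G of content
spec:
  accessModes:
  - ReadWriteOnce            # or ReadOnlyMany/ReadWriteMany if the backend supports it
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: content-consumer
spec:
  volumes:
  - name: content
    persistentVolumeClaim:
      claimName: content-pvc
  containers:
  - name: debian-container
    image: debian
    volumeMounts:
    - name: content
      mountPath: /content
      readOnly: true
</code></pre>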
<p>If any of this is wrong and I just didn't find the right document please correct me. I would love to be able to do this.</p>
| Stephen Paulin |
<p>I've recently joined a new project in Kubernetes. The last team didn't seem to manage deployments well and some tasks are managed with single pods running with init containers in them.</p>
<p>So, for example we have namespace "test" and pods there - some of them were run manually, aside the deployment by other team members and contain <code>initContainers</code>. I need to find a particular pod with init container in it - get his name, manifest and so on.</p>
<p>The cheat sheet in Kubernetes docs suggests a solution with getting a container id:</p>
<pre><code>kubectl get pods -n test -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
</code></pre>
<p>It gives me an ID of <code>InitContainer</code>.</p>
<p>When I try to get name of it and respectively pod name I try to use this code snippet:</p>
<pre><code>kubectl get pod -n test -o jsonpath='{range .items[?(@.status.containerStatuses[].containerID=="docker://686bc30be6e870023dcf611f7a7808516e041c892a236e565ba2bd3e0569ff7a")]}{.metadata.name}{end}'
</code></pre>
<p>and it gives me nothing so far.</p>
<p>Is there more elegant and easy way of getting pod names with <code>initcontainers</code> in a particular namespace?</p>
<p>I also tried this solution:</p>
<pre><code>kubectl get pod -n test -o custom-columns='POD_NAME:metadata.name,INIT_NAME:spec.InitContainers[*].name'
</code></pre>
<p>but it returns nothing.</p>
<p>The solution I'am using now is parsing yaml output with "for" cycle in bash but it doesn't sound good to me.</p>
<p>Any suggestions?</p>
| sickb0y | <p>You need to improve your request with <code>initContainerStatuses</code> to find necessary information only for Init containers:</p>
<pre><code>kubectl get pod -n <namespace> -o jsonpath='{range .items[?(@.status.initContainerStatuses[].containerID=="docker://<container_id>")]}{.metadata.name}{end}'
</code></pre>
<p>For example:</p>
<pre><code>kubectl get pod -n kube-system -o jsonpath='{range .items[?(@.status.initContainerStatuses[].containerID=="docker://e235d512c3a5472c8f7de6e73c724317639c9132c07193
cb9")]}{.metadata.name}{end}'
weave-net-s26tf
</code></pre>
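<p>As a side note, the <code>custom-columns</code> attempt from the question most likely returned nothing because the field path is case-sensitive: the spec field is <code>initContainers</code>, not <code>InitContainers</code>. A variant that should list pod names together with their init container names (pods without init containers show <code>&lt;none&gt;</code> in the second column):</p>
<pre><code>kubectl get pod -n test -o custom-columns='POD_NAME:metadata.name,INIT_NAME:spec.initContainers[*].name'
</code></pre>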
| Andrew Skorkin |
<p>Currently, I have one Kubernetes with 2 namespaces: NS1 and NS2. I’m using <code>jboss/keycloak</code> Docker image.</p>
<p>I am operating 2 Keycloak instances in those 2 namespaces and I expect that will run independently.
But it is not true for Infinispan caching inside Keycloak. I got a problem that all sessions of KC in NS1 will be invalidated many times when the KC pod in NS2 is being stated “Crash Loopback”.</p>
<p>The logs said as following whenever the “Crash Loopback” KC pod in NS2 tries to restart:</p>
<pre><code>15:14:46,784 INFO [org.infinispan.CLUSTER] (remote-thread--p10-t412) [Context=clientSessions] ISPN100002: Starting rebalance with members [keycloak-abcdef, keycloak-qwerty], phase READ_OLD_WRITE_ALL, topology id 498
</code></pre>
<p><code>keycloak-abcdef</code> is the KC pod in NS1 and <code>keycloak-qwerty</code> is the KC pod in NS2. So, the KC pod in NS1 can see and be affected by KC pod from NS2.</p>
<p>After researching, I see that Keycloak uses Infinispan cache to manage session data and Infinispan uses JGroups to discover nodes with the default method PING. I am assuming that this mechanism is the root cause of the problem “invalidated session” because it will try to contact other KC pods in the same cluster (even different namespaces) to do something like synchronization.</p>
<p>Is there any way that can isolate the working of Infinispan in Keycloak between namespaces?</p>
<p>Thank you!</p>
| bkl | <p>Posting comment as the community wiki answer for better visibility</p>
<hr />
<p>I would use <code>JDBC_PING</code> for discovery, so only nodes which are using the same DB will be able to discover each other</p>
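<p>A minimal sketch of what that could look like with the WildFly-based <code>jboss/keycloak</code> image from the question (the environment variable names come from that image's documentation, so double-check them for your version; the datasource JNDI name below is an assumption):</p>
<pre><code>containers:
- name: keycloak
  image: jboss/keycloak
  env:
  # switch JGroups discovery from the default PING to JDBC_PING
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: JDBC_PING
  # assumption: the default Keycloak datasource JNDI name
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: datasource_jndi_name=java:jboss/datasources/KeycloakDS
</code></pre>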
| Bazhikov |
<p>I'm trying to add a Container Insight to my EKS cluster but running into a bit of an issue when deploying. According to my logs, I'm getting the following:</p>
<pre><code>[error] [output:cloudwatch_logs:cloudwatch_logs.2] CreateLogGroup API responded with error='AccessDeniedException'
[error] [output:cloudwatch_logs:cloudwatch_logs.2] Failed to create log group
</code></pre>
<p>The strange part about this is the role it seems to be assuming is the same role found within my EC2 worker nodes rather than the role for the service account I have created. I'm creating the service account and can see it within AWS successfully using the following command:</p>
<pre><code>eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name cloudwatch-agent --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
</code></pre>
<p>Despite the serviceaccount being created successfully, I continue to get my AccessDeniedException.</p>
<p>One thing I found was the logs work fine when I manually add the CloudWatchAgentServerPolicy to my worker nodes, however this is not the implementation I would like and instead would rather have an automative way of adding the service account and not touching the worker nodes directly if possible. The steps I followed can be found at the bottom of <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-prerequisites.html" rel="nofollow noreferrer">this documentation.</a></p>
<p>Thanks so much!</p>
| AHR | <p>For anyone running into this issue: within the quickstart yaml, there is a fluent-bit service account that must be removed from that file and created manually. For me I created it using the following command:</p>
<pre><code>eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name fluent-bit --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
</code></pre>
<p>Upon running this command and removing the fluent-bit service account from the yaml, delete and reapply all your amazon-cloudwatch namespace items and it should be working.</p>
| AHR |
<p>I have a GKE cluster with 1 node-pool and 2 nodes in it, one with node affinity to only accept pods of production and the other to development and testing pods. For financial purposes, I want to configure like a cronJob or something similar on the dev/test node so I can spend less money but I don't know if that's possible.</p>
| Gabrielle Ferreira | <p>Yes, you can add another node pool named <code>test</code> so that you have two node pools: one for <code>develop</code>/test workloads and one for production. You can also turn on <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#overview" rel="nofollow noreferrer">cluster autoscaling</a> in your development pool. This GKE feature saves money because it automatically resizes the node pool based on workload demand, and you can set a maximum node count to limit how far it scales up when demand increases.</p>
<p>Once you have configured the production pool, you can create the new test node pool with a fixed size of one node.</p>
<p>Then, use a node <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">selector</a> in your pods to make sure that they run in the production node pool, and you could use an <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">anti-affinity</a> rule to ensure that two of your pods cannot be scheduled on the same node.</p>
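<p>A minimal sketch of how that could look, assuming the production node pool is called <code>production-pool</code> and the pods carry an <code>app: my-app</code> label (both names are just examples; GKE labels every node with <code>cloud.google.com/gke-nodepool</code>):</p>
<pre><code>spec:
  # schedule these pods only on nodes from the production pool
  nodeSelector:
    cloud.google.com/gke-nodepool: production-pool
  # keep two replicas of the same app off the same node
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname
  containers:
  - name: my-app
    image: my-app:latest
</code></pre>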
| Leo |
<p>I'm building a Micro-services E-commerce project, I need to create a docker image for each server in my project and run them inside K8s cluster. After successfully creating images for all back-end server I tried creating a docker image for my React front-end app, every time I try creating the image this error happened.</p>
<p>Here is my docker configuration:</p>
<pre><code>FROM node:alpine
WORKDIR /src
COPY package*.json ./
RUN npm install --silent
COPY . .
CMD ["npm ","start"];
</code></pre>
<p>Here is the error:</p>
<pre><code>Error: Cannot find module '/src/npm '
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
</code></pre>
<p>Sometimes it throws an error like this:</p>
<pre><code>webpack output is served from content not from webpack is served from content not from webpack is served from /app/public docker
</code></pre>
| Imran Abdalla | <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>To resolve described issues, steps below need to be done.</p>
<ol>
<li><p>Upgrade Dockerfile:</p>
<pre><code>FROM node:alpine
WORKDIR /src
COPY package*.json ./
RUN npm install --silent
COPY . .
CMD ["npm","start"];
</code></pre>
</li>
<li><p>Use version 3.4.0 for <code>react-scripts</code></p>
</li>
<li><p>Add <code>stdin_open: true</code> to the docker-compose file (see the sketch after this list)</p>
</li>
</ol>
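<p>For the last step, a hedged <code>docker-compose</code> fragment (the service name <code>client</code> is just an example):</p>
<pre><code>services:
  client:
    build: .
    # keep stdin open so the react-scripts dev server does not exit immediately
    stdin_open: true
</code></pre>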
| Andrew Skorkin |
<p>I have an application deployed to kubernetes.
Here is techstack:
<em>Java 11, Spring Boot 2.3.x or 2.5.x, using hikari 3.x or 4.x</em></p>
<p>Using spring actuator to do healthcheck. Here is <code>liveness</code> and <code>readiness</code> configuration within application.yaml:</p>
<pre><code> endpoint:
health:
group:
liveness:
include: '*'
exclude:
- db
- readinessState
readiness:
include: '*'
</code></pre>
<p>what it does if DB is down -</p>
<ol>
<li>Makes sure <code>liveness</code> doesn't get impacted - meaning, application
container should keep on running even if there is DB outage.</li>
<li><code>readinesss</code> will be impacted making sure no traffic is allowed to hit the container.</li>
</ol>
<p><code>liveness</code> and <code>readiness</code> configuration in container spec:</p>
<pre><code>livenessProbe:
httpGet:
path: actuator/health/liveness
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: actuator/health/readiness
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 20
</code></pre>
<p>My application is started and running fine for few hours.</p>
<p><strong>What I did:</strong></p>
<p>I brought down DB.</p>
<p><strong>Issue Noticed:</strong></p>
<p>When DB is down, after 90+ seconds I see 3 more pods getting spinned up. When a pod is described I see Status and condition like below:</p>
<pre><code>Status: Running
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
</code></pre>
<p>when I list all running pods:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
application-a-dev-deployment-success-5d86b4bcf4-7lsqx 0/1 Running 0 6h48m
application-a-dev-deployment-success-5d86b4bcf4-cmwd7 0/1 Running 0 49m
application-a-dev-deployment-success-5d86b4bcf4-flf7r 0/1 Running 0 48m
application-a-dev-deployment-success-5d86b4bcf4-m5nk7 0/1 Running 0 6h48m
application-a-dev-deployment-success-5d86b4bcf4-tx4rl 0/1 Running 0 49m
</code></pre>
<p><strong>My Analogy/Finding:</strong></p>
<p>Per <code>ReadinessProbe</code> configuration: <code>periodSeconds</code> is set to 30 seconds and <code>failurethreshold</code> is defaulted to 3 per k8s documentation.</p>
<p>Per application.yaml <code>readiness</code> includes db check, meaning after every 30 seconds <code>readiness</code> check failed. When it fails 3 times, <code>failurethreshold</code> is met and it spins up new pods.</p>
<p>Restart policy is default to Always.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Why it spinned new pods?</li>
<li>Why it spinned specifically only 3 pods and not 1 or 2 or 4 or any number?</li>
<li>Does this has to do anything with <code>restartpolicy</code>?</li>
</ol>
| Shivraj | <ol>
<li>As you noted yourself, it spun up new pods after 3 failed attempts, according to <code>failureThreshold</code>. You can change your <code>restartPolicy</code> to <code>OnFailure</code>, which will restart the container only if it fails, or to <code>Never</code> if you don't want the container to be restarted at all. You can find the difference between these policies <a href="https://stackoverflow.com/a/40534364/16860542">here</a>. Note this:</li>
</ol>
<blockquote>
<p>The restartPolicy <strong>applies to all containers</strong> in the Pod. restartPolicy only refers to r<strong>estarts of the containers</strong> by the kubelet <strong>on the same node</strong>. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container.</p>
</blockquote>
<ol start="2">
<li><p>Share your full <code>Deployment</code> file; I suppose that you've set the <code>replicas</code> number to <code>3</code>.</p>
</li>
<li><p>Answered in the answer for the 1st question.</p>
</li>
</ol>
<p>Also note this, if this works for you:</p>
<blockquote>
<p>Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.</p>
<p>If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.</p>
</blockquote>
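<p>A sketch of what such a startup probe could look like next to the probes from the question (the threshold below is an assumption; size it to your slowest startup):</p>
<pre><code>startupProbe:
  httpGet:
    path: actuator/health/liveness
    port: 8443
    scheme: HTTPS
  periodSeconds: 10
  failureThreshold: 30   # allows up to ~5 minutes for the application to start
</code></pre>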
| Bazhikov |
<p>I deleted my cluster-admin role via kubectl using:</p>
<p><code>kubectl delete clusterrole cluster-admin</code></p>
<p>Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.</p>
| eLymar | <blockquote>
<p>Not sure what I expected, but now I don't have access to the cluster from my account.</p>
</blockquote>
<p>If none of the <code>kubectl</code> commands actually work, unfortunately you will not be able to create a new cluster role. The problem is that you won't be able to do anything without an admin role. You can try creating the <code>cluster-admin</code> role directly through the API (not using kubectl), but if that doesn't help you have to recreate the cluster.</p>
| Mikołaj Głodziak |
<p>We noticed that these errors started after the node-pool was autoscaled and the existing Nodes were replaced with new compute instances. This also happened during a maintenance window. We're using an NFS server. GCP Kubernetes Cluster version is 1.21.6</p>
<p>The issue appears to have only affect certain Nodes on the cluster. We've cordoned the Nodes where the mount error's and pods on the "healthy" nodes are working.</p>
<pre><code>"Unable to attach or mount volumes: unmounted volumes=[vol],
unattached volumes=[vol]: timed out waiting for the condition"
</code></pre>
<p>We're also seeing errors on the konnectivity-agent:</p>
<pre><code>"connection read failure" err="read tcp
10.4.2.34:43682->10.162.0.119:10250: use of closed network connection"
</code></pre>
<p>We believe the issue is when autoscaling is enabled, and new Nodes are introduced to the pool. The problem is it appears to be completely random. Sometimes the pods come up fine and others get the mount error.</p>
| westcoastdev | <p>This error indicates that the NFS workload can get stuck in a Terminating state. Moreover, some disk throttling might be observed on worker nodes.</p>
<p><strong>Solution:</strong></p>
<p>There are two possible workarounds for this issue.</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#force-deletion" rel="nofollow noreferrer">Force Deletion</a> of the NFS workload can sometimes mitigate the issue.
After deletion, you may also need to restart the Kubelet of the
worker node.</p>
</li>
<li><p>NFS versions v4.1 and v4.2 shouldn't be affected by this issue. The
NFS version is specified via <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">configuration</a> and doesn't require an
image change.</p>
</li>
</ul>
<p>Please change the NFS version as seen below:</p>
<pre><code>mountOptions:
- nfsvers=4.2
</code></pre>
<p><strong>Cause</strong></p>
<p>NFS v4.0 has a known issue which is a limitation of the NFS pod when there are too many connections at once. Because of that limitation, the Kubelet can't unmount the NFS volume from the pod and the worker node if the NFS container was deleted first.
Moreover, NFS mounts going stale when the server dies is a <a href="https://github.com/kubernetes/kubernetes/issues/72048" rel="nofollow noreferrer">known issue</a>. It is known that many of these stale mounts building up on a worker node can cause future NFS mounts to slow down. Please note that there can also be unmount issues related to this error.</p>
| Leo |
<p>I am using <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html" rel="nofollow noreferrer">ECK</a> to deploy Elasticsearch cluster on Kubernetes.</p>
<p>My Elasticsearch is working fine and it shows <code>green</code> as cluster. But when Enterprise search start and start creating <code>indexes</code> in Elasticsearch, after creating some indexes, it give error for timeout.</p>
<p><strong>pv.yaml</strong></p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch-master
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/nfs/kubernetes/elasticsearch/master/
...
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch-data
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/nfs/kubernetes/elasticsearch/data/
...
</code></pre>
<p><strong>multi_node.yaml</strong></p>
<pre><code>---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: bselastic
spec:
version: 8.1.2
nodeSets:
- name: masters
count: 1
config:
node.roles: ["master",
# "data",
]
xpack.ml.enabled: true
# Volumeclaim needed to add volume, it was giving error for not volume claim
# and its not starting pod.
volumeClaimTemplates:
- metadata:
name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
- name: data-node
count: 1
config:
node.roles: ["data", "ingest"]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
...
---
apiVersion: enterprisesearch.k8s.elastic.co/v1
kind: EnterpriseSearch
metadata:
name: enterprise-search-bselastic
spec:
version: 8.1.3
count: 1
elasticsearchRef:
name: bselastic
podTemplate:
spec:
containers:
- name: enterprise-search
env:
- name: JAVA_OPTS
value: -Xms2g -Xmx2g
- name: "elasticsearch.startup_retry.interval"
value: "30"
- name: allow_es_settings_modification
value: "true"
...
</code></pre>
<p>Apply these changes using below command.</p>
<p><code>kubectl apply -f multi_node.yaml -n deleteme -f pv.yaml</code></p>
<p>Check the Elasticsearch cluster status</p>
<pre><code># kubectl get es -n deleteme
NAME HEALTH NODES VERSION PHASE AGE
bselastic unknown 8.1.2 ApplyingChanges 47s
</code></pre>
<p>Check all pods</p>
<pre><code># kubectl get pod -n deleteme
NAME READY STATUS RESTARTS AGE
bselastic-es-data-node-0 0/1 Running 0 87s
bselastic-es-masters-0 1/1 Running 0 87s
enterprise-search-bselastic-ent-54675f95f8-9sskf 0/1 Running 0 86s
</code></pre>
<p>Elasticsearch cluster become green after 7+ min</p>
<pre><code>[root@1175014-kubemaster01 nilesh]# kubectl get es -n deleteme
NAME HEALTH NODES VERSION PHASE AGE
bselastic green 2 8.1.2 Ready 7m30s
</code></pre>
<p>enterprise search log</p>
<pre><code># kubectl -n deleteme logs -f enterprise-search-bselastic-ent-549bbcb9-rnhmc
Custom Enterprise Search configuration file detected, not overwriting it (any settings passed via environment will be ignored)
Found java executable in PATH
Java version detected: 11.0.14.1 (major version: 11)
Enterprise Search is starting...
[2022-04-25T16:34:22.282+00:00][7][2000][app-server][INFO]: Elastic Enterprise Search version=8.1.3, JRuby version=9.2.16.0, Ruby version=2.5.7, Rails version=5.2.6
[2022-04-25T16:34:23.862+00:00][7][2000][app-server][INFO]: Performing pre-flight checks for Elasticsearch running on https://bselastic-es-http.deleteme.svc:9200...
[2022-04-25T16:34:25.308+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Elasticsearch backend. Make sure it is running and healthy.
[2022-04-25T16:34:25.310+00:00][7][2000][app-server][INFO]: [pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:187: Connection refused (Connection refused) (Faraday::ConnectionFailed)
[2022-04-25T16:34:31.353+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Elasticsearch backend. Make sure it is running and healthy.
[2022-04-25T16:34:31.355+00:00][7][2000][app-server][INFO]: [pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:187: Connection refused (Connection refused) (Faraday::ConnectionFailed)
[2022-04-25T16:34:37.370+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Elasticsearch backend. Make sure it is running and healthy.
[2022-04-25T16:34:37.372+00:00][7][2000][app-server][INFO]: [pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:187: Connection refused (Connection refused) (Faraday::ConnectionFailed)
[2022-04-25T16:34:43.384+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Elasticsearch backend. Make sure it is running and healthy.
[2022-04-25T16:34:43.386+00:00][7][2000][app-server][INFO]: [pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:187: Connection refused (Connection refused) (Faraday::ConnectionFailed)
[2022-04-25T16:34:49.400+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Elasticsearch backend. Make sure it is running and healthy.
[2022-04-25T16:34:49.401+00:00][7][2000][app-server][INFO]: [pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:187: Connection refused (Connection refused) (Faraday::ConnectionFailed)
[2022-04-25T16:37:56.290+00:00][7][2000][app-server][INFO]: [pre-flight] Elasticsearch cluster is ready
[2022-04-25T16:37:56.292+00:00][7][2000][app-server][INFO]: [pre-flight] Successfully connected to Elasticsearch
[2022-04-25T16:37:56.367+00:00][7][2000][app-server][INFO]: [pre-flight] Successfully loaded Elasticsearch plugin information for all nodes
[2022-04-25T16:37:56.381+00:00][7][2000][app-server][INFO]: [pre-flight] Elasticsearch running with an active basic license
[2022-04-25T16:37:56.423+00:00][7][2000][app-server][INFO]: [pre-flight] Elasticsearch API key service is enabled
[2022-04-25T16:37:56.446+00:00][7][2000][app-server][INFO]: [pre-flight] Elasticsearch will be used for authentication
[2022-04-25T16:37:56.447+00:00][7][2000][app-server][INFO]: Elasticsearch looks healthy and configured correctly to run Enterprise Search
[2022-04-25T16:37:56.452+00:00][7][2000][app-server][INFO]: Performing pre-flight checks for Kibana running on http://localhost:5601...
[2022-04-25T16:37:56.482+00:00][7][2000][app-server][WARN]: [pre-flight] Failed to connect to Kibana backend. Make sure it is running and healthy.
[2022-04-25T16:37:56.486+00:00][7][2000][app-server][ERROR]: Could not connect to Kibana backend after 0 seconds.
[2022-04-25T16:37:56.488+00:00][7][2000][app-server][WARN]: Enterprise Search is unable to connect to Kibana. Ensure it is running at http://localhost:5601 for user deleteme-enterprise-search-bselastic-ent-user.
[2022-04-25T16:37:59.344+00:00][7][2000][app-server][INFO]: Elastic APM agent is disabled
{"timestamp": "2022-04-25T16:38:05+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2022-04-25T16:38:06+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2022-04-25T16:38:16+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2022-04-25T16:38:26+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2022-04-25T16:38:36+00:00", "message": "readiness probe failed", "curl_rc": "7"}
[2022-04-25T16:38:43.880+00:00][7][2000][app-server][INFO]: [db_lock] [installation] Status: [Starting] Ensuring migrations tracking index exists
{"timestamp": "2022-04-25T16:38:45+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2022-04-25T16:38:56+00:00", "message": "readiness probe failed", "curl_rc": "7"}
[2022-04-25T16:39:05.283+00:00][7][2000][app-server][INFO]: [db_lock] [installation] Status: [Finished] Ensuring migrations tracking index exists
[2022-04-25T16:39:05.782+00:00][7][2000][app-server][INFO]: [db_lock] [installation] Status: [Starting] Creating indices for 38 models
[2022-05-02T16:21:47.303+00:00][8][2000][es][DEBUG]: {
"request": {
"url": "https://bselastic-es-http.deleteme.svc:9200/.ent-search-actastic-oauth_applications_v2",
"method": "put",
"headers": {
"Authorization": "[FILTERED]",
"Content-Type": "application/json",
"x-elastic-product-origin": "enterprise-search",
"User-Agent": "Faraday v1.8.0"
},
"params": null,
"body": "{\"settings\":{\"index\":{\"hidden\":true,\"refresh_interval\":-1},\"number_of_shards\":1,\"auto_expand_replicas\":\"0-3\",\"priority\":250},\"mappings\":{\"dynamic\":\"strict\",\"properties\":{\"id\":{\"type\":\"keyword\"},\"created_at\":{\"type\":\"date\"},\"updated_at\":{\"type\":\"date\"},\"name\":{\"type\":\"keyword\"},\"uid\":{\"type\":\"keyword\"},\"secret\":{\"type\":\"keyword\"},\"redirect_uri\":{\"type\":\"keyword\"},\"scopes\":{\"type\":\"keyword\"},\"confidential\":{\"type\":\"boolean\"},\"app_type\":{\"type\":\"keyword\"}}},\"aliases\":{}}"
},
"exception": "/usr/share/enterprise-search/lib/war/lib/swiftype/es/client.class:28: Read timed out (Faraday::TimeoutError)\n",
"duration": 30042.3,
"stack": [
"lib/actastic/schema.class:172:in `create_index!'",
"lib/actastic/schema.class:195:in `create_index_and_mapping!'",
"shared_togo/lib/shared_togo.class:894:in `block in apply_actastic_migrations'",
"shared_togo/lib/shared_togo.class:892:in `block in each'",
"shared_togo/lib/shared_togo.class:892:in `block in apply_actastic_migrations'",
"lib/db_lock.class:182:in `with_status'",
"shared_togo/lib/shared_togo.class:891:in `apply_actastic_migrations'",
"shared_togo/lib/shared_togo.class:406:in `block in install!'",
"lib/db_lock.class:171:in `with_lock'",
"shared_togo/lib/shared_togo.class:399:in `install!'",
"config/application.class:102:in `block in Application'",
"config/environment.class:9:in `<main>'",
"config/environment.rb:1:in `<main>'",
"shared_togo/lib/shared_togo/cli/command.class:37:in `initialize'",
"shared_togo/lib/shared_togo/cli/command.class:10:in `run_and_exit'",
"shared_togo/lib/shared_togo/cli.class:143:in `run_supported_command'",
"shared_togo/lib/shared_togo/cli.class:125:in `run_command'",
"shared_togo/lib/shared_togo/cli.class:112:in `run!'",
"bin/enterprise-search-internal:15:in `<main>'"
]
}
[2022-04-25T16:55:21.340+00:00][7][2000][app-server][INFO]: [db_lock] [installation] Status: [Failed] Creating indices for 38 models: Error = Faraday::TimeoutError: Read timed out
Unexpected exception while running Enterprise Search:
Error: Read timed out at
</code></pre>
<p>Master node logs</p>
<pre><code># kubectl -n deleteme logs -f bselastic-es-masters-0
Skipping security auto configuration because the configuration file [/usr/share/elasticsearch/config/elasticsearch.yml] is missing or is not a regular file
{"@timestamp":"2022-04-25T16:55:11.051Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ent-search-actastic-search_relevance_suggestions-document_position_id-unique-constraint][0]]]).","previous.health":"YELLOW","reason":"shards started [[.ent-search-actastic-search_relevance_suggestions-document_position_id-unique-constraint][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[bselastic-es-masters-0][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"rnaZmz4kQwOBNbWau43wYA","elasticsearch.node.id":"YMyOM1umSL22ro86II6Ymw","elasticsearch.node.name":"bselastic-es-masters-0","elasticsearch.cluster.name":"bselastic"}
{"@timestamp":"2022-04-25T16:55:21.447Z", "log.level": "WARN", "message":"writing cluster state took [10525ms] which is above the warn threshold of [10s]; [skipped writing] global metadata, wrote metadata for [0] new indices and [1] existing indices, removed metadata for [0] indices and skipped [48] unchanged indices", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[bselastic-es-masters-0][generic][T#5]","log.logger":"org.elasticsearch.gateway.PersistedClusterStateService","elasticsearch.cluster.uuid":"rnaZmz4kQwOBNbWau43wYA","elasticsearch.node.id":"YMyOM1umSL22ro86II6Ymw","elasticsearch.node.name":"bselastic-es-masters-0","elasticsearch.cluster.name":"bselastic"}
{"@timestamp":"2022-04-25T16:55:21.448Z", "log.level": "INFO", "message":"after [10.3s] publication of cluster state version [226] is still waiting for {bselastic-es-masters-0}{YMyOM1umSL22ro86II6Ymw}{ljGkLdk-RAukc9NEJtQCVw}{192.168.88.213}{192.168.88.213:9300}{m}{k8s_node_name=1175027-kubeworker15.sb.rackspace.com, xpack.installed=true} [SENT_APPLY_COMMIT], {bselastic-es-data-node-0}{K88khDyfRwaGCBZwMKEaHA}{g9mXrT4WTumoj09W1OylYA}{192.168.88.214}{192.168.88.214:9300}{di}{k8s_node_name=1175027-kubeworker15.sb.rackspace.com, xpack.installed=true} [SENT_PUBLISH_REQUEST]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[bselastic-es-masters-0][generic][T#1]","log.logger":"org.elasticsearch.cluster.coordination.Coordinator.CoordinatorPublication","elasticsearch.cluster.uuid":"rnaZmz4kQwOBNbWau43wYA","elasticsearch.node.id":"YMyOM1umSL22ro86II6Ymw","elasticsearch.node.name":"bselastic-es-masters-0","elasticsearch.cluster.name":"bselastic"}
</code></pre>
<p>Which attribute we have to set in Enterprise search to increase timeout ? or is there any way to get debug log for Enterprise search ?</p>
| Nilesh | <p>You can try to increase the default global timeout parameter, for example with the Python Elasticsearch client:</p>
<pre><code>es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
</code></pre>
<p>This would help to give the cluster more time to respond.</p>
| Leo |
<p>I have been running Minio on a Kubernetes cluster since May. Everything worked fine. Since the last change, updating the ingress from Traefik to the NGINX ingress, I cannot log in to the Minio Console anymore.</p>
<p>I do not really know if this happened before or after the ingress update, but all in all I think this is not the reason.</p>
<p>The secret is still there in the cluster and it looks fine.</p>
<p>The common Minio login to browse the buckets works perfect. But not the Minio Console.</p>
<p>The pod is always writing in the pod log (Lens):</p>
<pre><code>2021-11-29 22:01:17.806356 I | 2021/11/29 22:01:17 operator.go:73: the server has asked for the client to provide credentials
2021-11-29 22:01:17.806384 I | 2021/11/29 22:01:17 error.go:44: original error: invalid Login
</code></pre>
<p>No word about an error, but always <code>Unauthorized</code> inside the login screen. Anybody here with a similar problem in the past?</p>
| IFThenElse | <p><strong>Solution 1:</strong></p>
<p>The auth issue can occur due to an expired <code>apiserver-kubelet-client.crt</code>. If it's expired, try to renew the cert and restart the apiserver.</p>
<p>In order to do this:</p>
<ul>
<li>check if the cert is expired</li>
<li>remove expired certificates(.crt)</li>
<li>execute <code>kubeadm alpha phase certs all</code></li>
</ul>
<p>Note this:</p>
<pre><code># for kube-apiserver
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
# for kubelet
--client-ca-file=/etc/kubernetes/pki/ca.crt
</code></pre>
<p><strong>Solution 2:</strong></p>
<p>When you deployed Minio on Kubernetes, you should have created Kubernetes manifests. You can try to delete them (service account, role, rolebinding) and create them once again:</p>
<ul>
<li>Remove Service Account:</li>
</ul>
<p><code>kubectl delete serviceaccount --namespace NAMESPACE_NAME SERVICEACCOUNT_NAME</code></p>
<ul>
<li>Remove Cluter Role Binding:</li>
</ul>
<p><code>kubectl delete clusterrolebinding CLUSTERROLEBINDING_NAME</code></p>
<ul>
<li>Remove Minio directory:</li>
</ul>
<p><code>rm -rf ./minio</code></p>
<ul>
<li>Create the Service Account, Role, RoleBinding:</li>
</ul>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: minio-serviceaccount
labels:
app: minio
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: minio-role
labels:
app: minio
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- "minio-keys"
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: minio-role-binding
labels:
app: minio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: minio-role
subjects:
- kind: ServiceAccount
name: minio-serviceaccount
</code></pre>
<blockquote>
<p>Make sure that the Minio pods can access the Minio keys stored in the previously created Secret or create new secrets.</p>
</blockquote>
<ul>
<li>Run helm init command:</li>
</ul>
<p><code>helm init --service-account=minio-serviceaccount</code></p>
<ul>
<li><p>Recreate your Minio pod</p>
</li>
<li><p>Reinstall the charts</p>
</li>
</ul>
| Bazhikov |
<p>we are currently still running our Kubernetes cluster in region <code>europe-west2</code> (London) but have to use a new ipaddress for an ingress on the cluster from <code>europe-west3</code> (Frankfurt).</p>
<p>After trying to deploy our new ingress on the cluster in region <code>europe-west2</code> I get the following error:</p>
<pre><code>the given static IP name xyz doesn't translate to an existing static IP.
</code></pre>
<p>I assume that ingress only has access to regional IP addresses in the same region.</p>
<p>I use the following annotation:</p>
<pre><code> annotations:
kubernetes.io/ingress.regional-static-ip-name: xyz
</code></pre>
<p>Does anybody have an idea how to use the IP address from <code>europe-west3</code> on an ingress in <code>europe-west2</code>?</p>
<p>Cheers</p>
| JSt | <p>I've tried to replicate your issue but couldn't find any solution.</p>
<p>However, note that from the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#use_an_ingress" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>Regional IP addresses do not work with Ingress.</p>
<p>To learn more about how to use Ingress to expose your applications to the internet, refer to the Setting up HTTP(S) Load Balancing with Ingress tutorial.</p>
</blockquote>
<p>And please refer to the similar issues and the useful answers under it:</p>
<ul>
<li><p><a href="https://stackoverflow.com/a/63257182/16860542">Issue 1</a></p>
</li>
<li><p><a href="https://stackoverflow.com/a/40164860/16860542">Issue 2</a></p>
</li>
</ul>
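<p>As a side note (and only if reserving a new address is acceptable, since the regional address from <code>europe-west3</code> itself cannot be reused by the GCE Ingress controller): a common approach is to reserve a <em>global</em> static IP and reference it with the <code>kubernetes.io/ingress.global-static-ip-name</code> annotation, for example:</p>
<pre><code># the address name is just an example
gcloud compute addresses create xyz-global --global
</code></pre>
<pre><code>annotations:
  kubernetes.io/ingress.global-static-ip-name: xyz-global
</code></pre>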
| Bazhikov |
<p>I have deploy my application in kubernetes using deployment.</p>
<ol>
<li>Whenever user gets login to application pod will generate session for that user.</li>
<li>To maintain session stickiness I have set session cookie using Nginx ingress annotations.</li>
<li>When the HPA scales down pods, application users face a logout problem when a pod gets terminated. If the ingress has generated a session using this pod, the user needs to log in again.</li>
<li>What I want is some sort of graceful termination of the connection: when a pod is in a terminating state it should keep serving existing sessions until the grace period ends.</li>
</ol>
| Akshay Gopani | <p>The answer from <a href="https://stackoverflow.com/users/5525824/harsh-manvar">Harsh Manvar</a> is great; however, I want to expand on it a bit :)</p>
<p>You can of course use <strong>terminationGracePeriodSeconds</strong> in the POD spec. Look at the example yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: my-image
terminationGracePeriodSeconds: 60
</code></pre>
<blockquote>
<p>At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds. It’s important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish.</p>
<p>If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately.</p>
<p>If your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML. For example, to change it to 60 seconds.</p>
</blockquote>
<p>For more look <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">here</a>.</p>
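<p>A minimal sketch of combining this with a <code>preStop</code> hook that simply waits, so that in-flight sessions can still be served before the container receives SIGTERM (the sleep length is an assumption and should be tuned to your traffic):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: my-container
    image: my-image
    lifecycle:
      preStop:
        exec:
          # give the ingress time to stop sending new requests to this pod
          command: ["sh", "-c", "sleep 30"]
</code></pre>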
<p>If you want to know how exactly looks like <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod lifecycle</a> see this link to the official documentation. The part about the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">termination of pods</a> should be most interesting. You will also have it described how exactly the termination takes place.</p>
<p>It is recommended that applications deployed on Kubernetes have a design that complies with the recommended standards.
One set of standards for modern, cloud-based applications is known as the <a href="https://12factor.net/" rel="nofollow noreferrer">Twelve-Factor App</a>:</p>
<blockquote>
<p>Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.
Some web systems rely on “sticky sessions” – that is, caching user session data in memory of the app’s process and expecting future requests from the same visitor to be routed to the same process. Sticky sessions are a violation of twelve-factor and should never be used or relied upon. Session state data is a good candidate for a datastore that offers time-expiration, such as Memcached or Redis.</p>
</blockquote>
| Mikołaj Głodziak |
<p>How would I start a specific number of replicas of the same image, when that number is defined at startup?</p>
<p>On startup I need to call an API endpoint which returns a number. I then want to use this number to deploy that number of replicas of a pod (with each pod being aware of what order it was started in, even after restarts, etc).</p>
<p>For example the API endpoint returns 15, and 15 replicas are started with each having an 'order' / index number of 1 - 15, and maintaining always having a single pod with an 'order' number for each number between 1-15.</p>
<p>I was thinking of using an init container to call the API endpoint, I can't find how to then start that number of replicas and pass the 'order' to the pod.</p>
| AnotherCat | <p>Your problem can be solved in several ways. You can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Statefulset</a> to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity" rel="nofollow noreferrer">identify your pods</a>, but you won't be able to number them from 1 to 15. Statefulset behaves slightly differently. Take a look at the <a href="https://github.com/kubernetes/kubernetes/blob/12f36302f9ee98093d0a01910e345825f1d3a86e/pkg/controller/statefulset/stateful_set_control.go#L319" rel="nofollow noreferrer">source code</a>:</p>
<pre><code>// for any empty indices in the sequence [0,set.Spec.Replicas) create a new Pod at the correct revision
for ord := 0; ord < replicaCount; ord++ {
if replicas[ord] == nil {
replicas[ord] = newVersionedStatefulSetPod(
currentSet,
updateSet,
currentRevision.Name,
updateRevision.Name, ord)
}
}
</code></pre>
<p>For a StatefulSet with X replicas, the numbering will start from 0 up through X-1 (see: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#ordinal-index" rel="nofollow noreferrer">Ordinal Index</a>).</p>
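<p>If an ordinal in the range 0..N-1 is acceptable instead of 1..N, here is a small sketch of how each StatefulSet pod can discover its own index at runtime, since the pod name always ends with the ordinal (e.g. <code>my-bot-3</code>):</p>
<pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
# inside the container, e.g. in an entrypoint script:
#   ORDINAL="${POD_NAME##*-}"   # "my-bot-3" -> "3"
</code></pre>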
<p>I think you might be interested in using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Cronjob</a> that runs your custom script periodically on a given schedule. This script can use the Discord Gateway Bot endpoint to determine the recommended number of shards and automatically scale up your bot when Discord recommends it. A good example of such an approach is <a href="https://github.com/Auttaja-OpenSource/Marver" rel="nofollow noreferrer">Marver - a K8s StatefulSet autoscaler</a>. Just take into account that this was made in 2018, so it will need to change to accommodate your Kubernetes version. Additionally, in order to use Marver (or other similar tools), the following requirements must be met:</p>
<blockquote>
<p>This project requires that you already be using Kubernetes, and assume you have some understand of how Kubernetes works. It also assumes that you have your bot set up to handle changes in the StatefulSet's replica count gracefully. Meaning: if we scale up, all existing shards will need to re-identify with Discord to present the new shard count, and update their local cache as necessary.</p>
</blockquote>
<p>Of course you can also use an <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">operator</a> as <a href="https://stackoverflow.com/users/4216641/turing85" title="15,090 reputation">Turing85</a> has mentioned in the comment:</p>
<blockquote>
<p>Sounds like a job for an <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">operator</a>. I would, however, highly advice against the approach of assigning an id to each pod since this, in essence, gives each pod an identity. This would, in return, mean that we have to use a StatefulSet, not a Deployment. Remember: pods are <a href="https://devops.stackexchange.com/questions/653/what-is-the-definition-of-cattle-not-pets">cattle, not pets</a>.</p>
</blockquote>
| Mikołaj Głodziak |
<p>Hey I'm trying to get a pipeline to work with kubernetes but I keep getting <code>ErrImagePull</code></p>
<p>Earlier I was getting something along the lines <code>authentication failed</code>.
I created a secret in the namespace of the pod and referring to it in the deployment file:</p>
<pre><code> imagePullSecrets:
- name: "registry-secret"
</code></pre>
<p>I still get <code>ErrImagePull</code> but now for different reasons. When describing the failed pod I get:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m46s default-scheduler Successfully assigned <project> to <server>
Normal Pulling 3m12s (x4 over 4m45s) kubelet Pulling image "<container_url>"
Warning Failed 3m12s (x4 over 4m45s) kubelet Failed to pull image "<container_url>": rpc error: code = Unknown desc = Requesting bear token: invalid status code from registry 403 (Forbidden)
Warning Failed 3m12s (x4 over 4m45s) kubelet Error: ErrImagePull
Warning Failed 3m (x6 over 4m45s) kubelet Error: ImagePullBackOff
Normal BackOff 2m46s (x7 over 4m45s) kubelet Back-off pulling image "<container_url>"
</code></pre>
<p>I guess the Registry is returning 403, but why? Does it mean the user in <code>registry-secret</code> is not allowed to pull the image?</p>
| iaquobe | <p>OP has posted in the comment that the problem is resolved:</p>
<blockquote>
<p>I found the error. So I had a typo and my secret was in fact not created in the correct namespace.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I am trying to call k8s api in one k8s pod. But hit the following permission issue:</p>
<pre><code>User "system:serviceaccount:default:flink" cannot list resource "nodes" in API group "" at the cluster scope.
</code></pre>
<p>In my yaml file, I already have specified the <code>Role</code> & <code>RoleBinding</code>. What do I miss here?</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flink
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zeppelin-server-role
rules:
- apiGroups: [""]
resources: ["pods", "services", "configmaps", "deployments", "nodes"]
verbs: ["create", "get", "update", "patch", "list", "delete", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["bind", "create", "get", "update", "patch", "list", "delete", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zeppelin-server-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: flink
roleRef:
kind: ClusterRole
name: zeppelin-server-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
| zjffdu | <p>You are deploying zeppelin-server on Kubernetes, right? Your yaml file with the service account looks good, I suppose; however, to be sure that this works, you should follow these steps:</p>
<ul>
<li><code>kubectl get clusterrole</code></li>
</ul>
<p>and you should get <code>zeppelin-server-role</code> role.</p>
<ul>
<li>check if your account '<strong>flink</strong>' has a binding to clusterrole "zeppelin-server-role"</li>
</ul>
<p><code>kubectl get clusterrolebinding</code></p>
<p>if there is no, you can create it by the following command:</p>
<p><code>kubectl create clusterrolebinding zeppelin-server-role-binding --clusterrole=zeppelin-server-role --serviceaccount=default:flink</code></p>
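<p>Equivalently, you can create the same binding declaratively with a manifest like this (based on the names used above):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: zeppelin-server-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: zeppelin-server-role
subjects:
- kind: ServiceAccount
  name: flink
  namespace: default
</code></pre>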
<ul>
<li>finally, check if you really act as this account:</li>
</ul>
<p><code>kubectl get deploy flink-deploy -o yaml</code></p>
<p>if you can't see the settings "serviceAccount" and "serviceAccountName" in the output, which looks something like this:</p>
<pre><code>...
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
...
</code></pre>
<p>then set the service account that you want the flink deployment to use:</p>
<p><code>kubectl patch deploy flink-deploy -p '{"spec":{"template":{"spec":{"serviceAccountName":"flink"}}}}'</code></p>
| Bazhikov |
<p>I wrote a <code>readiness_probe</code> for my pod by using a bash script. Readiness probe failed with Reason: Unhealthy but when I manually get in to the pod and run this command <code>/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi</code> bash script exits with code 0.
What could be the reason? I am attaching the code and the error below.</p>
<p><strong>Edit:</strong> Found out that the health variable is set to 000, which means the curl request inside the bash script timed out.</p>
<pre><code>readinessProbe:
exec:
command:
- /bin/bash
- '-c'
- |-
health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; fi
</code></pre>
<p>"kubectl describe pod {pod_name}" result:</p>
<pre><code>Name: rustici-engine-54cbc97c88-5tg8s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 12 Jul 2022 18:39:08 +0200
Labels: app.kubernetes.io/name=rustici-engine
pod-template-hash=54cbc97c88
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/rustici-engine-54cbc97c88
Containers:
rustici-engine:
Container ID: docker://f7efffe6fc167e52f913ec117a4d78e62b326d8f5b24bfabc1916b5f20ed887c
Image: batupaksoy/rustici-engine:singletenant
Image ID: docker-pullable://batupaksoy/rustici-engine@sha256:d3cf985c400c0351f5b5b10c4d294d48fedfd2bb2ddc7c06a20c1a85d5d1ae11
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 12 Jul 2022 18:39:12 +0200
Ready: False
Restart Count: 0
Limits:
memory: 350Mi
Requests:
memory: 350Mi
Liveness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=20
Readiness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=10
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whb8d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-whb8d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24s default-scheduler Successfully assigned default/rustici-engine-54cbc97c88-5tg8s to minikube
Normal Pulling 23s kubelet Pulling image "batupaksoy/rustici-engine:singletenant"
Normal Pulled 21s kubelet Successfully pulled image "batupaksoy/rustici-engine:singletenant" in 1.775919851s
Normal Created 21s kubelet Created container rustici-engine
Normal Started 20s kubelet Started container rustici-engine
Warning Unhealthy 4s kubelet Readiness probe failed:
Warning Unhealthy 4s kubelet Liveness probe failed:
</code></pre>
| batupaksoy | <p>The probe could be failing because the application is still starting up or is under load, so the request to <code>localhost:8080</code> times out, which matches the <code>000</code> status code you observed. To troubleshoot this, make sure the probe doesn't start firing before the app is up and running in your pod, and increase the initial delay and timeout of the readiness probe (and of the liveness probe), like in the following example:</p>
<pre><code>readinessProbe:
initialDelaySeconds: 10
periodSeconds: 2
timeoutSeconds: 10
</code></pre>
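<p>Combined with your original <code>exec</code> command, the whole probe could look like the sketch below; the exact timings are only a starting point to tune for your application's startup time:</p>
<pre><code>readinessProbe:
  exec:
    command:
    - /bin/bash
    - '-c'
    - |-
      health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
      if [[ $health -ne 401 ]]; then exit 1; fi
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 10
  failureThreshold: 10
</code></pre>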
<p>You can find more details about how to configure the Readlines Probe and Liveness Probe in this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">link</a>.</p>
| Leo |
<p>I have read through a bunch of guides showing how to monitor cpu and memory usage of pods in Kubernetes with Prometheus and most of them look something like this:</p>
<pre><code>rate(container_cpu_usage_seconds_total{pod=~"compute-.*", image!="", container!="POD"}[5m])
</code></pre>
<p>but I can't find any documentation on why the container label is there (it seems like it causes duplicated data) and why it is being avoided in many monitoring examples.
I know that this metric comes from the cadvisor component of Kubernetes, but the only docs I can find on these metrics are the short descriptions provided in the code <a href="https://github.com/google/cadvisor/blob/master/metrics/prometheus.go" rel="noreferrer">here</a>.</p>
<p>Does anyone know what this label is for and where there are more in depth documentation no these metrics?</p>
| nicktorba | <p>As @Ali Sattari already mentioned in the comment, the series labelled with <code>container="POD"</code> come from pause containers.</p>
<hr />
<p><strong>Pause containers</strong></p>
<p>The pause container starts first, before the other containers of the pod are scheduled. The purpose of the pause container (<code>container_name="POD"</code>) is to hold the network namespace for the pod, which the containers assigned to that pod then share. Because the image of the pause container is always present in Kubernetes, allocating the pod's network namespace is instantaneous. After the pause container has started, it has no further work to do.</p>
<p>By default, pause containers are hidden, but you can see them by running next command: <code>docker ps | grep pause</code></p>
<pre><code>$ docker ps | grep pause
3bb5065dd9ba k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kubernetes-bootcamp-fb5c67579-5rxjn_default_93ce94f8-b440-4b4f-9e4e-25f97be8196f_0
0627138518e1 k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_metrics-server-56c4f8c9d6-vf2zg_kube-system_93626697-8cd0-4fff-86d3-245c23d74a42_0
81ca597ed3ff k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_storage-provisioner_kube-system_dbdec6e5-d3ed-4967-a042-1747f8bdc39a_0
0d01130b158f k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kubernetes-dashboard-968bcb79-pxmzb_kubernetes-dashboard_b1265ad7-2bce-46aa-8764-d06d72856633_0
d8a159b6215e k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_dashboard-metrics-scraper-f6647bd8c-hqm6k_kubernetes-dashboard_bde40acc-a8ca-451a-9868-26e86ccafecb_0
294e81edf0be k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-74ff55c5b-84vr7_kube-system_28275e83-613a-4a09-8ace-13d6e831c1bf_0
2b3bfad1201b k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-zxjgc_kube-system_34f8158a-487e-4d00-80f1-37b67b72865e_0
d5542091730b k8s.gcr.io/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-scheduler-minikube_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0
b87163ed2c0a k8s.gcr.io/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-controller-manager-minikube_kube-system_57b8c22dbe6410e4bd36cf14b0f8bdc7_0
c97ed96ded60 k8s.gcr.io/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_etcd-minikube_kube-system_62a7db7bebf35458f2365f79293db6d3_0
4ab2d11317ed k8s.gcr.io/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-apiserver-minikube_kube-system_dc477bf6fc026f57469b47d9be68a88c_0
</code></pre>
<p>You can read more about pause containers <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">here</a>.</p>
<hr />
<p><strong>Pause containers in Prometheus</strong></p>
<p>In Prometheus examples you will therefore often see the filter <code>container_name!="POD"</code> (or <code>container!="POD"</code> on newer versions), because you usually want the resource usage of just the application containers that are doing the actual work, without the duplicated series coming from the pause containers.</p>
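<p>For example, a query that sums per-pod CPU usage while skipping both the pause-container series and the pod-level aggregate series (which has an empty <code>container</code> label) could look like this (label names are <code>container</code>/<code>pod</code> on recent Kubernetes versions and <code>container_name</code>/<code>pod_name</code> on older ones):</p>
<pre><code>sum by (pod) (
  rate(container_cpu_usage_seconds_total{container!="", container!="POD"}[5m])
)
</code></pre>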
| Andrew Skorkin |
<p>I have a digicert SSL cert that I want to install in GCP secrets and reference it to one of the ingress resources. Currently have four files</p>
<pre><code>privatekey.pem
{domain-name.crt}
DigiCertca.crt
Trustedroot.crt
</code></pre>
| Jainam Shah | <p>As @Blender Fox posted in the comment, in the cluster, you can import already-issued certificates.</p>
<p>Use <code>--from-file</code> or <code>--from-env-file</code> to generate a Secret from one or more files. The file must be in plaintext format; it is unimportant what extension it has.</p>
<p>The command looks like this:</p>
<pre><code>kubectl create secret SECRET_TYPE SECRET_NAME
--from-file PATH_TO_FILE1
--from-file PATH_TO_FILE2
</code></pre>
<p>Change the <em>PATH_TO_FILE1</em> and <em>PATH_TO_FILE2</em> in your case to the appropriate <em>.pem</em> and <em>.crt</em> files, like @Blender Fox indicated.</p>
<p>Or</p>
<p>If they are in a directory;</p>
<pre><code>kubectl create secret SECRET_TYPE SECRET_NAME \
--from-file PATH_TO_DIRECTORY
</code></pre>
<p>How to build secrets from files is described in GCP <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/secret#creating_secrets_from_files" rel="nofollow noreferrer">documentation</a>.</p>
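<p>If the goal is to terminate TLS on an Ingress, the more common pattern is a secret of type <code>tls</code>. Below is a sketch using the file names from the question (secret and namespace names are placeholders; the chain is concatenated leaf-first):</p>
<pre><code># combine the issued certificate with the intermediate and root
cat domain-name.crt DigiCertca.crt Trustedroot.crt > fullchain.crt

kubectl create secret tls my-tls-secret \
  --cert=fullchain.crt \
  --key=privatekey.pem \
  -n my-namespace
</code></pre>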
| Bryan L |
<p>I am working with Spring Boot microservices. I dockerized all my microservices and now I am trying to create the resources in a Kubernetes cluster. I get an error while creating the resources by running <code>kubectl apply</code>.</p>
<p>Here is my configuration file for Kubernetes:</p>
<p><strong>eureka-server.yml</strong></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: eureka-cm
data:
eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
name: eureka
labels:
app: eureka
spec:
ClusterIP: None
ports:
- port: 8761
name: eureka
selector:
app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: eureka
spec:
selector:
matchLabels:
app: eureka
serviceName: "eureka"
replicas: 1
template:
metadata:
labels:
app: eureka
spec:
containers:
- name: eureka
image: username/eureka-server:latest
imagePullPolicy: Always
ports:
- containerPort: 8761
env:
- name: EUREKA_SERVER_ADDRESS
valueFrom:
configMapKeyRef:
name: eureka-cm
key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
name: eureka-lb
labels:
app: eureka
spec:
selector:
app: eureka
type: NodePort
ports:
- port: 80
targetPort: 8761
</code></pre>
<p>Configuration file in spring boot for eureka server:</p>
<p><strong>application.yml</strong></p>
<pre><code>server:
port: 8761
eureka:
instance:
hostname: "${HOSTNAME}.eureka"
client:
registerWithEureka: false
fetchRegistry: false
service-url:
defaultZone: ${EUREKA_SERVER_ADDRESS}
server:
waitTimeInMsWhenSyncEmpty: 0
</code></pre>
<p>Here is the error I got while creating the resources in the Kubernetes cluster:</p>
<pre><code>error: error validating "./eureka-server.yml": error validating data: ValidationError(Service.spec): unknown field "ClusterIP" in io.k8s.api.core.v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
| Faheem azaz Bhanej | <p>Well the error message you get is essentially the answer to your question. The field <code>ClusterIP</code> does not exist, but it is actually called <code>clusterIP</code>, which you can easily find out by reading the <a href="https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec" rel="nofollow noreferrer">API reference</a>.</p>
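<p>Applied to your manifest, the headless Service section would become:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
  - port: 8761
    name: eureka
  selector:
    app: eureka
</code></pre>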
| yezper |
<p>Well, there is some question about the Ingress I'm pretty curious about, Does the Kubernetes Ingress support internal requests? For Example what if I want to proxy my requests through the Ingress Between my Microservices?</p>
<p>(The reason, why I'm asking about it, is actually because I want to enable SSL Communication between my Microservices, but don't really want to use Some Kind of. Istio to implement it, so I hope there is some more easier solutions for that :)</p>
<p>Thanks</p>
| CraZyCoDer | <p>I've come up with Cert-Manager eventually, because it turned out to be most optimal way in comparison with others. <a href="https://www.youtube.com/watch?v=hoLUigg4V18&t=501s" rel="nofollow noreferrer">Here is a link</a> to a tutorial on Youtube to set it up into your k8s cluster.</p>
| CraZyCoDer |
<ol>
<li><p>Using WordPress php-apache-7.4 as base image, I created a Docker file with few customisation and created an image. I am using the same docker-entrypoint.sh, wp-config-docker.php files from the Docker Hub official image.</p>
</li>
<li><p>Using the image when I create a container on Docker Desktop it works fine and I am able to load the WP page</p>
</li>
<li><p>I upload the same image to Docker Hub and from there and using that image created a pod on EKS cluster and I receive the error "exec /usr/local/bin/docker-entrypoint.sh: exec format error."</p>
</li>
</ol>
<p>I am using the files from the below repo
<a href="https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache" rel="noreferrer">https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache</a></p>
<p>Only Docker file in the above repo is modified to installed the memcached and copy wp-config.php. The other two files I am using without any changes.</p>
<p>I tried changing the docker-entrypoint.sh script to add <code>#!/bin/bash</code> as mentioned in some issue reported, also I tried to create a custom-entrypoint.sh to edit the original docker-entrypoint.sh script which was also suggested in another page but they didn't work.</p>
<p>custom-entrypoint.sh</p>
<pre><code>#!/bin/bash
sed -i -e 's/^exec "$@"/#exec "$@"/g' /usr/local/bin/docker-entrypoint.sh
source docker-entrypoint.sh
exec "$@"
</code></pre>
<p>Trying to fix this, only thing is confusing is on Docker Desktop when I create using the same image it runs the cont without any error.</p>
| Franklin Ashok | <p>As mentioned in the comment above by David Maze, the issue is due to building the image on a Mac M1 Pro (arm64) while the cluster nodes run amd64.</p>
<p>To fix this I need to add <code>FROM --platform=linux/amd64 <image>:<tag></code> in the Dockerfile and build, or alternatively pass the platform while running the build:</p>
<p><code>docker build --platform=linux/amd64 -t <image>:<tag> .</code></p>
<p>Both solutions will work. I added <code>FROM --platform=linux/amd64</code> to the Dockerfile and it's fixed now.</p>
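<p>If you want a single tag that works on both architectures, Docker Buildx can build and push a multi-arch image instead (assuming Buildx is available in your Docker installation; image name and tag are placeholders):</p>
<pre><code>docker buildx build --platform linux/amd64,linux/arm64 -t <image>:<tag> --push .
</code></pre>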
| Franklin Ashok |
<p>I have applications that have to use Turn Server. When I try to make all connections over the pods, I get a "Connection reset by peer" error on 6 out of 10 connections. The TURN address resolves over the host and provides access over ClusterIP. When I run this from a public IP address, there is no problem.</p>
<p>Turn YAML:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: default
name: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
# replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
template:
metadata:
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
hostNetwork: true
containers:
- name: coturn
image: coturn/coturn
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: STARTUP_SCRIPT
value: |
#! /bin/bash
echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
echo done
ports:
- name: turn-port1
containerPort: 3478
hostPort: 3478
protocol: UDP
- name: turn-port2
containerPort: 3478
hostPort: 3478
protocol: TCP
args:
# - --stun-only
- -v
- --user "test:test"
- --external-ip="$(detect-external-ip)/$MY_POD_IP"
- --realm="$(detect-external-ip)"
---
apiVersion: v1
kind: Service
metadata:
name: coturn
namespace: default
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
type: ClusterIP
ports:
- port: 3478
targetPort: 3478
protocol: UDP
name: turn-port1
- port: 3478
targetPort: 3478
protocol: TCP
name: turn-port2
selector:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
</code></pre>
<p>Log:</p>
<pre><code>0: IPv4. Connected from: 10.2.5.12:52224
0: IPv4. Connected to: 10.3.57.50:3478
0: allocate sent
0: allocate response received:
0: allocate sent
0: allocate response received:
0: success
0: IPv4. Received relay addr: public_ip:55179
0: clnet_allocate: rtv=0
0: refresh sent
0: refresh response received:
0: success
0: IPv4. Connected from: 10.2.5.12:52226
0: IPv4. Connected to: 10.3.57.50:3478
0: IPv4. Connected from: 10.2.5.12:52228
0: IPv4. Connected to: 10.3.57.50:3478
0: allocate sent
0: allocate response received:
0: allocate sent
0: allocate response received:
0: success
0: IPv4. Received relay addr: public_ip:52353
0: clnet_allocate: rtv=0
0: refresh sent
0: refresh response received:
0: success
0: allocate sent
0: allocate response received:
0: allocate sent
0: allocate response received:
0: success
0: IPv4. Received relay addr: public_ip:54002
0: clnet_allocate: rtv=0
0: refresh sent
0: refresh response received:
0: success
0: create perm sent: public_ip:54002
0: cp response received:
0: success
0: create perm sent: public_ip:52353
0: cp response received:
0: success
0: tcp connect sent
0: connection bind sent
recv: Connection reset by peer
</code></pre>
| James001 | <p>My theory about the issue: <code>connection reset by peer</code> means that a packet has been marked as invalid by conntrack (which can happen under load or with large payloads), and traffic going through the <code>ClusterIP</code> service then gets its connection reset. To mitigate this issue, you should upgrade your Kubernetes version to v1.15+, where the kube-proxy side of this problem was addressed. As a workaround, you can also apply this DaemonSet in your cluster, which relaxes conntrack TCP tracking on every node:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: startup-script
labels:
app: startup-script
spec:
  selector:
    matchLabels:
      app: startup-script
  template:
metadata:
labels:
app: startup-script
spec:
hostPID: true
containers:
- name: startup-script
image: gcr.io/google-containers/startup-script:v1
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
env:
- name: STARTUP_SCRIPT
value: |
#! /bin/bash
echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
echo done
</code></pre>
<p>You can find more details about this workaround using a Kubernetes service of type ClusterIP in this <a href="https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/" rel="nofollow noreferrer">guide</a>:</p>
| Leo |
<p>I have a strange situation where sometimes a request hits an ingress and sometimes hit another ingress. I know because it comes with a different SSL certificate, and when it happens, there is no log on the ingress.</p>
<p>Is there some way to debug this? Get the logs from the load balancer and see what happens, and which route it takes?</p>
| Rodrigo | <p>You need to enable load balancer logging using these steps.</p>
<ol>
<li>Go to the Load balancing page in the Cloud Console.</li>
<li>Click the name of your load balancer.</li>
<li>Click Edit edit.</li>
<li>Click Backend Configuration.</li>
<li>Select Create a backend service.</li>
<li>Complete the required backend service fields.</li>
<li>Click Enable logging.</li>
<li>Set a Sample rate fraction. You can set a rate to 0.0 through 1.0 (default).</li>
<li>Click Update to finish editing the backend service.</li>
<li>Click Update to finish editing the load balancer.</li>
</ol>
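<p>If you prefer the CLI, the same logging setting can be enabled on the backend service with <code>gcloud</code> (the backend service name is a placeholder; use <code>--region</code> instead of <code>--global</code> for a regional load balancer):</p>
<pre><code>gcloud compute backend-services update BACKEND_SERVICE_NAME \
  --enable-logging \
  --logging-sample-rate=1.0 \
  --global
</code></pre>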
<p>To view logs</p>
<ol>
<li>On Console, Go to Logs Exporer</li>
<li>Select GCE Forwarding Rule on Log Fields</li>
<li>Click on log time stamp to view log details.</li>
</ol>
<p>For a more complete guide you can refer to these pages:</p>
<p><a href="https://cloud.google.com/load-balancing/docs/health-checks#create-hc" rel="nofollow noreferrer">Health Checks</a></p>
<p><a href="https://cloud.google.com/load-balancing/docs/load-balancing-overview" rel="nofollow noreferrer">Cloud Load Balancing</a></p>
| JaysonM |
<p>I'm not asking how to create a rootless container from scratch. Rather, I've been given some software deployed as pre-built Docker container images that run as root by default. I need to modify these containers so they can be deployed on Kubernetes, which means I need to make these containers rootless. To be clear, I DO NOT have the source to these containers so I can't simply rebuild them from scratch.</p>
<p>I've found plenty of articles about building rootless containers in general, but they all assume you're building your containers from scratch. I've spent hours searching but can't find anything about modifying an existing container to be rootless.</p>
<p>I realize this might be a very open question, but I don't know all the things I need to take into consideration. Two things I've been able to gather is adding a line such as <code>USER 1000</code> to Dockerfile, and adjusting ownership and permissions on certain files and folders. Beyond that, I'm not sure what I need to do.</p>
| user16910040 | <p>Create users in the container and switch users;</p>
<ul>
<li>Add a new user, named <code>user</code>;</li>
<li>Let this user have root privileges (via sudo);</li>
<li>Set its password to <code>password</code>;</li>
<li>After the container is started, log in as <code>user</code> and go directly to the user's home directory.</li>
</ul>
<p>Put the following code snippet in the Dockerfile.</p>
<pre><code>RUN useradd --create-home --no-log-init --shell /bin/bash user \
    && adduser user sudo \
    && echo 'user:password' | chpasswd
USER user
WORKDIR /home/user
</code></pre>
<p>Use fixuid to modify the uid and gid of non-root users in the container;</p>
<p>After creating a non-root user with the above code, the user's uid and gid are generally 1000:1000.</p>
<p>Docker and the host share the same kernel, so there is only one set of uids and gids, managed by that kernel. In other words, when we execute a process as the newly created docker user (uid 1000) in the container, the host sees a process executed by whatever host user has uid 1000, and that host user may not be our account at all. It is as if one user impersonates another, which makes it difficult to trace activity back to the real user.</p>
<p>To work around this, you can specify the uid explicitly when adding the user, for example 1002:</p>
<pre><code>RUN addgroup --gid 1002 docker && \
adduser --uid 1002 --ingroup docker --home /home/docker --shell /bin/sh --gecos "" docker
</code></pre>
<p>A better solution is to use fixuid to switch the uid when the container starts:</p>
<pre><code>RUN useradd --create-home --no-log-init --shell /bin/bash user \
&& adduser user sudo \
&& echo 'user:password' | chpasswd
RUN USER=user && \
GROUP=docker && \
curl -SsL https://github.com/boxboat/fixuid/releases/download/v0.4.1/fixuid-0.4.1-linux-amd64.tar.gz | tar -C /usr/local/bin -xzf - && \
chown root:root /usr/local/bin/fixuid && \
chmod 4755 /usr/local/bin/fixuid && \
mkdir -p /etc/fixuid && \
printf "user: $USER\ngroup: $GROUP\n" > /etc/fixuid/config.yml
USER user:docker
ENTRYPOINT ["fixuid"]
</code></pre>
<p>At this time, you need to specify the uid and gid when starting the container. The command is as follows:</p>
<pre><code>docker run --rm -it -u $(id -u):$(id -g) image-name bash
</code></pre>
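<p>On the Kubernetes side you can then enforce (and let the kubelet verify) that the container runs as that non-root user, for example with a pod <code>securityContext</code>. The UID/GID must match the user baked into the image, and the image reference below is a placeholder:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rootless-example
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
  - name: app
    image: my-registry/prebuilt-image:tag   # placeholder
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
</code></pre>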
| HackerDev-Felix |
<p>I have been searching but I cannot find answer to my question.</p>
<p>What I am trying to do is to connect to remote shell of openshift container and create db dump, which works if i put username,password and db name by hand (real values).</p>
<p>I wish to execute this command to access env variables: (this command later will be part of bigger script)</p>
<pre><code> oc rsh mon-rs-nr-0 mongodump --host=rs/mon-rs-nr-0.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-1.xxx.svc.cluster.local,mon-rs-nr-2.mon-rs-nr.xxx.svc.cluster.local --username=$MONGODB_USER --password=$MONGODB_PASSWORD --authenticationDatabase=$MONGODB_DATABASE
</code></pre>
<p>But it is not working, I also tried different versions with echo etc. (env vars are not replaced to they values). Env vars are present inside container.</p>
<p>When I try</p>
<pre><code>oc rsh mon-rs-nr-0 echo "$MONGODB_PASSWORD"
</code></pre>
<p>I recieve</p>
<pre><code>$MONGODB_PASSWORD
</code></pre>
<p>But when i firstly connect to container and then execute command:</p>
<pre><code>C:\Users\xxxx\Desktop>oc rsh mon-rs-nr-0
$ echo "$MONGODB_PASSWORD"
mAYXXXXXXXXXXX
</code></pre>
<p>It works. However, I need to use it in the way I presented at the top. Does somebody know a workaround?</p>
| xbubus | <p>Thanks to @msaw328's comment, here is the solution:</p>
<pre><code>C:\Users\xxx\Desktop>oc rsh mon-rs-nr-0 bash -c "mongodump --host=rs/mon-rs-nr-0.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-1.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-2.mon-rs-nr.xxx.svc.cluster.local --username=$MONGODB_USER --password=$MONGODB_PASSWORD --authenticationDatabase=$MONGODB_DATABASE"
</code></pre>
<p>Output:</p>
<pre><code>Defaulted container "mongodb" out of: mongodb, mongodb-sidecar, mongodb-exporter
2021-08-20T11:01:12.268+0000 writing xxx.yyy to
2021-08-20T11:01:12.269+0000 writing xxx.ccc to
2021-08-20T11:01:12.269+0000 writing xxx.ddd to
2021-08-20T11:01:12.269+0000 writing xxx.eee to
2021-08-20T11:01:12.339+0000 done dumping xxx.eee (11 documents)
2021-08-20T11:01:12.339+0000 writing xxx.zzz to
2021-08-20T11:01:12.340+0000 done dumping xxx.ccc (24 documents)
2021-08-20T11:01:12.340+0000 writing xxx.bbb to
2021-08-20T11:01:12.340+0000 done dumping xxx.ddd (24 documents)
2021-08-20T11:01:12.340+0000 writing xxx.fff to
2021-08-20T11:01:12.436+0000 done dumping xxx.yyy (1000 documents)
2021-08-20T11:01:12.436+0000 writing xxx.ggg to
2021-08-20T11:01:12.436+0000 done dumping xxx.bbb (3 documents)
2021-08-20T11:01:12.437+0000 writing xxx.aaa to
2021-08-20T11:01:12.441+0000 done dumping xxx.fff (0 documents)
2021-08-20T11:01:12.441+0000 done dumping xxx.zzz (3 documents)
2021-08-20T11:01:12.447+0000 done dumping xxx.aaa(0 documents)
2021-08-20T11:01:12.449+0000 done dumping xxx.ggg (0 documents)
</code></pre>
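<p>For anyone running this from a Linux/macOS shell instead of Windows <code>cmd</code>: wrap the remote command in single quotes, so the variables are expanded inside the pod rather than by your local shell (host options omitted for brevity), e.g.:</p>
<pre><code>oc rsh mon-rs-nr-0 bash -c 'mongodump --username=$MONGODB_USER --password=$MONGODB_PASSWORD --authenticationDatabase=$MONGODB_DATABASE'
</code></pre>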
| xbubus |
<p>I have a role which has full privilege to access EKS, Ec2, IAM which is attached to an Ec2 Instance.</p>
<p>I am trying to access my EKS cluster from this Ec2 Instance. I did add the Ec2 instance arn like below to the Trusted relationship of the role which the instance assumes as well. However, still I get the error like below when trying to access the cluster using <code>kubectl</code> from cli inside the Ec2 instance.</p>
<p>I have tried below to obtain the kube config written to the instance hoe directory from which I execute these commands.</p>
<pre><code>aws sts get-caller-identity
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region --role-arn arn:aws:iam::XXXXXXXXXXXX:role/testrole
</code></pre>
<p>Error I'm getting:</p>
<pre><code>error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::769379794363:assumed-role/dev-server-role/i-016d7738c9cb84b96 is not authorized to perform: sts:AssumeRole on resource xxx
</code></pre>
| Vaishnav | <p>Community wiki answer for better visibility.</p>
<p>The problem is solved by following the tip from the comment:</p>
<blockquote>
<p>Don't specify <code>role-arn</code> if you want it to use the instance profile.</p>
</blockquote>
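<p>In practice that means running the command without <code>--role-arn</code>, so the credentials from the instance profile are used directly (cluster name and region are placeholders):</p>
<pre><code>aws eks update-kubeconfig --region aws-region --name eks-cluster-name
</code></pre>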
<p>OP has confirmed:</p>
<blockquote>
<p>thanks @jordanm that helped</p>
</blockquote>
| Mikołaj Głodziak |
<p>I've installed ArgoCD on my kubernetes cluster using</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>Now, how to remove it from the cluster totally?</p>
| Sparsh Jain | <p>You can delete the entire installation using this - <code>kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></p>
<p><a href="https://www.reddit.com/r/kubernetes/comments/kayu97/how_to_uninstall_argocd/gfdlufl/" rel="noreferrer">Reference</a></p>
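<p>Since the namespace itself was created separately, you can remove it afterwards as well to clean up everything left in it:</p>
<pre><code>kubectl delete namespace argocd
</code></pre>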
| Ashish Jain |
<p>My question is very short: <strong>how does the process look like to retrieve a ca certificate for an existing Kubernetes cluster to connect gitlab with this cluster?</strong></p>
<p>After studying the docs, everything is fine, but I don‘t understand which cluster certificate is meant.</p>
<p>Many thanks and have a nice day everyone!</p>
| andreas.teich | <p>In this <a href="https://docs.gitlab.com/ee/user/project/clusters/add_existing_cluster.html#how-to-add-an-existing-cluster" rel="nofollow noreferrer">gitlab documentation</a> you can find instructions on how to add an existing cluster to GitLab and what you need to do so.</p>
<blockquote>
<p><strong>CA certificate</strong> (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We use the certificate created by default.</p>
</blockquote>
<p>This is a certificate created by default inside the cluster.</p>
<p>All you have to do is get it and this is written in following steps:</p>
<blockquote>
<p>i. List the secrets with <code>kubectl get secrets</code>, and one should be named similar to <code>default-token-xxxxx</code>. Copy that token name for use below.
ii. Get the certificate by running this command:</p>
</blockquote>
<pre><code>kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
</code></pre>
<blockquote>
<p>If the command returns the entire certificate chain, you must copy the Root CA certificate and any intermediate certificates at the bottom of the chain. A chain file has following structure:</p>
</blockquote>
<pre><code> -----BEGIN MY CERTIFICATE-----
-----END MY CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN ROOT CERTIFICATE-----
-----END ROOT CERTIFICATE-----
</code></pre>
| kkopczak |
<p>This was working perfectly fine before but for some reason it no longer is, would appreciate if someone can help fix this:</p>
<p>My terraform code as follows, have replaced key info. with "<>" just for sharing publicly here:</p>
<p>Outer main.tf has this:</p>
<pre><code> module "<name>_service_account" {
source = "../modules/kubernetes/service-account"
name = "<name>-deployer"
}
# Create <name> platform namespace
resource "kubernetes_namespace" "<name>-platform" {
metadata {
name = "<name>-platform"
}
}
</code></pre>
<p>The service account main.tf module:</p>
<pre><code>resource "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "serviceaccount" {
metadata {
name = var.name
}
subject {
kind = "User"
name = "system:serviceaccount:kube-system:${var.name}"
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
data "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
data "kubernetes_secret" "serviceaccount" {
metadata {
name = data.kubernetes_service_account.serviceaccount.default_secret_name
namespace = "kube-system"
}
binary_data = {
"token": ""
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
</code></pre>
<p>My outputs.tf for the above module:</p>
<pre><code>output "secret_token" {
sensitive = true
value = lookup(data.kubernetes_secret.serviceaccount.binary_data, "token")
}
</code></pre>
<p>The error that I get in my terraform pipeline:</p>
<pre><code>│ Error: Unable to fetch service account from Kubernetes: serviceaccounts "<name>-deployer" not found
│
│ with module.<name>_service_account.data.kubernetes_service_account.serviceaccount,
│ on ../modules/kubernetes/service-account/main.tf line 27, in data "kubernetes_service_account" "serviceaccount":
│ 27: data "kubernetes_service_account" "serviceaccount" {
</code></pre>
| terraform-ftw | <p>Figured it out. This is a new environment/project and I still had the terraform refresh stage in the pipeline, which is why it couldn't find the service account. Removing that and just letting the plan and apply run first solved it.</p>
| terraform-ftw |
<p>I have installed RabbitMQ to my Kubernetes Cluster via Google Cloud Platform's marketplace.</p>
<p>I can connect to it fine in my other applications hosted in the Kubernetes Cluster, I can create queues and setup consumers from them without any problems too.</p>
<p>I can temporarily port forward port 15672 so that I can access the management user interface from my machine. I can login fine and I get a list of queues and exchanges when visiting their pages. But as soon as I select a queue or an exchange to load that specific item, I get a 404 response and the following message. I get them same when trying to add a new queue.</p>
<pre><code>Not found
The object you clicked on was not found; it may have been deleted on the server.
</code></pre>
<p>They definitely exist, because when I go back to the listing page, they're there. It's really frustrating as it would be nice to test my microservices by simply publishing a message to a queue using RabbitMQ management, but I'm currently blocked from doing so!</p>
<p>Any help would be appreciated, thanks!</p>
<p><strong>Edit</strong><br />
A screenshot provided for clarity (after clicking the queue in the list):
<a href="https://i.stack.imgur.com/nVww2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nVww2.png" alt="rabbitmq admin"></a></p>
<p>If I try to add a new queue, I don't get that message, instead I get a 405.</p>
| Lloyd Powell | <p>This is because the default virtual-host is '/'. RabbitMQ admin uses this in the URL when you access the exchanges/queues pages. URL encoded it becomes '%2F'. However, the Ingress Controller (in my case nginx) converts that back to '/' so the admin app can't find that URL (hence the 404).</p>
<p>The work-around I came up with was to change the <strong>default_vhost</strong> setting in rabbitmq to something without '/' in it (e.g. 'vhost').</p>
<p>In the bitnami rabbitmq <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq" rel="nofollow noreferrer">Helm chart</a> I'm using, this is configured using:</p>
<pre><code>rabbitmq:
extraConfiguration: |-
default_vhost = vhost
</code></pre>
<p>You do have to update your clients to explicitly specify this new virtual-host though as they generally default to using '/'. In Spring Boot this is as simple as adding:</p>
<pre><code>spring:
rabbitmq:
virtual-host: vhost
</code></pre>
| collers |
<p>I am creating a shovel with the RabbitMQ shovel plugin, and it works fine with one pod. However, we are running a Kubernetes cluster with multiple pods, and in case of a pod restart it creates a separate shovel instance on each pod independently, which causes duplicate message replication on the destination.</p>
<p>Detailed steps are below.</p>
<ol>
<li><p>We are deploying rabbit mq on Kubernetes cluster using helm chart.</p>
</li>
<li><p>After that we are creating shovel using Rabbit MQ Management UI. Once we are creating it from UI, shovels are working fine and not replicating data multiple time on destination.</p>
</li>
<li><p>When any pod get restarted, it create separate shovel instance. That start causing issue of duplicate message replication on destination from different shovel instance.</p>
</li>
<li><p>When we saw shovel status on Rabbit MQ UI then we found that, there are multiple instance of same shovel running on each pod.</p>
</li>
<li><p>When we start shovel from Rabbit MQ UI manually, then it will resolved this issue and only once instance will be visible in UI.</p>
</li>
</ol>
<p>So our conclusion is that, in case of a pod failure/restart, the shovel is not able to sync with a shovel that is already running on another node/pod. We are able to solve this issue by restarting the shovel from the UI, but that is not a valid approach for production.
We are not seeing this issue with queues and exchanges.</p>
<p>Can anyone help us here to resolve this issue.</p>
| Nilay Tiwari | <p>As we have lately seen similar problems, this seems to be an issue present since some 3.8.x version - <a href="https://github.com/rabbitmq/rabbitmq-server/discussions/3154" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-server/discussions/3154</a></p>
<p>it should be fixed as far as I have understood from version 3.8.20 on. see</p>
<p><a href="https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.19" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.19</a>
and
<a href="https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.20" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.20</a>
and
<a href="https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.9.2" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.9.2</a></p>
<p>I didn't have time yet to check whether this is really fixed in those versions.</p>
| BBQigniter |
<p>We have a 2 node K3S cluster with one master and one worker node and would like "reasonable availability" in that, if one or the other nodes goes down the cluster still works i.e. ingress reaches the services and pods which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to up nodes.</p>
<p><strong>Unfortunately, if the master goes down the worker will not serve any traffic (ingress).</strong></p>
<p>This is strange because all the service pods (which ingress feeds) on the worker node are running.</p>
<p>We suspect the reason is that key services such as the <code>traefik</code> ingress controller and <code>coredns</code> are only running on the master.</p>
<p>Indeed when we simulated a master failure, restoring it from a backup, none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.</p>
<p>We've tried to increase the number of replicas of the <code>traefik</code> and <code>coredns</code> deployment which helps a bit BUT:</p>
<ul>
<li>This gets lost on the next reboot</li>
<li>The worker still functions when the master is down but every 2nd ingress request fails
<ul>
<li>It seems the worker still blindly (round-robin) sends traffic to the non-existent master</li>
</ul>
</li>
</ul>
<p>We would appreciate some advice and explanation:</p>
<ul>
<li>Should not key services such as <code>traefik</code> and <code>coredns</code> be DaemonSets by default?</li>
<li>How can we change the service description (e.g. replica count) in a persistent way that does not get lost</li>
<li>How can we get intelligent traffic routing with ingress to only "up" nodes</li>
<li>Would it make sense to make this a 2-master cluster?</li>
</ul>
<p>UPDATE: Ingress Description:</p>
<pre><code>kubectl describe ingress -n msa
Name: msa-ingress
Namespace: msa
Address: 10.3.229.111,10.3.229.112
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates service.ourdomain.com,node1.ourdomain.com,node2.ourdomain.com
Rules:
Host Path Backends
---- ---- --------
service.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node1.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node2.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
Annotations: kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.middlewares: msa-middleware@kubernetescrd
Events: <none>
</code></pre>
| Marc | <p>Your goals seem achievable with a few Kubernetes features (not specific to Traefik):</p>
<ol>
<li><p>Make sure you have one replica of the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller's Pod</a> on each node, i.e. use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> as the installation method (see the sketch after this list).</p>
</li>
<li><p>To fix the error shown in the Ingress description, set the correct load balancer IP on the Ingress Controller's Service.</p>
</li>
<li><p>Set the external traffic policy to "Local". This assures that traffic is routed to local endpoints only (controller Pods running on the node accepting traffic from the load balancer).</p>
</li>
</ol>
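<p>On K3s, the bundled Traefik is managed through a Helm chart, so a persistent way to apply points 1 and 3 is a <code>HelmChartConfig</code> in <code>kube-system</code>. Below is a rough sketch, assuming the Traefik chart version shipped with your K3s supports the <code>deployment.kind</code> and <code>service.spec</code> values (check the chart documentation for your version):</p>
<pre><code>apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    deployment:
      kind: DaemonSet          # one ingress controller pod per node
    service:
      spec:
        externalTrafficPolicy: Local   # only route to nodes with a local controller pod
</code></pre>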
<blockquote>
<p><code>externalTrafficPolicy</code> - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: <code>Cluster</code> (default) and <code>Local</code>. <code>Cluster</code> obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. <code>Local</code> preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example
ports:
- port: 8765
targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
</code></pre>
<ol start="4">
<li>The backend Service referenced by the Ingress should use the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">external traffic policy</a> <code>externalTrafficPolicy: Local</code> too.</li>
</ol>
| Mykola |
<p>I'm trying to deploy a Kubernetes Operator using <a href="https://kopf.readthedocs.io/en/stable/" rel="nofollow noreferrer">Kopf</a> and I'm getting the following error:</p>
<pre><code>kopf._cogs.clients.errors.APIForbiddenError: ('exchangerates.operators.brennerm.github.io is forbidden: User "system:serviceaccount:default:exchangerates-operator" cannot list resource "exchangerates" in API group "operators.brennerm.github.io" at the cluster scope', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'exchangerates.operators.brennerm.github.io is forbidden: User "system:serviceaccount:default:exchangerates-operator" cannot list resource "exchangerates" in API group "operators.brennerm.github.io" at the cluster scope', 'reason': 'Forbidden', 'details': {'group': 'operators.brennerm.github.io', 'kind': 'exchangerates'}, 'code': 403})
</code></pre>
<p>What's confusing is that if I check the permissions granted to the Service Account it looks like it has the correct permissions:</p>
<pre><code>$ kubectl auth can-i list exchangerates --as=system:serviceaccount:default:exchangerates-operator
yes
$ kubectl auth can-i list exchangerates --as=system:serviceaccount:default:exchangerates-operator --all-namespaces
yes
</code></pre>
<p>Is there somewhere else I should be looking to troubleshoot the issue?</p>
| timcase | <p>User <a href="https://stackoverflow.com/users/857383/sergey-vasilyev" title="3,401 reputation">Sergey Vasilyev</a> has tested this configuration and mentioned in the comment:</p>
<blockquote>
<p>You are right, "*" works. I tried your repo locally with Minikube 1.24.0 & K8s 1.22.3 — it works, there are no permission errors. The operator and the setup are both correct. Similarly for K3d — it works. I assume it is something with your local setup or old images left somewhere.</p>
</blockquote>
<p>I also tested it. I ran it locally on Minikube and had no problems. Your setup looks fine and everything works. The problem may be with some dependencies in the image, or with Minikube leftovers. Bear in mind that Minikube is mainly used for testing and learning purposes, so some of its features might not be ideal. As for solving your problem, just try creating a new cluster.</p>
| Mikołaj Głodziak |
<p>On my kubernetes nodes there are</p>
<ol>
<li>prioritized pods</li>
<li>dispensable pods</li>
</ol>
<p>Therefore I would like to have QoS class of <code>Guaranteed</code> for the prioritized pods.
To achieve a <code>Guaranteed</code> class the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">cpu/memory requests/limits must meet some conditions</a>. Therefore:</p>
<blockquote>
<p>For every Container in the Pod, the CPU limit must equal the CPU
request</p>
</blockquote>
<p>But I would like to set a higher CPU limit than request, so that the prioritized pods can use every free CPU resources which are available.</p>
<p>Simple example: A Node with 4 cores has:</p>
<ul>
<li>1 prioritized pod with 2000 CPU request and 3900 CPU limit</li>
<li>3 dispensable pods with each 500 CPU request and limit.</li>
</ul>
<p>If the prioritized pod would have 2000 CPU request and limit 2 Cores are wasted because the dispensable pods don't use CPU most of the time.</p>
<p>If the prioritized pod would have 3900 CPU request and limit, I would need an extra node for the dispensable pods.</p>
<p><strong>Questions</strong></p>
<p>Is it possible to set explicitly the <code>Guaranteed</code> class to a pod even with difference CPU request and limit?</p>
<p>If it's not possible: Why is there no way to explicitly set the QoS class?</p>
<p><strong>Remarks</strong></p>
<p>There's an <a href="https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/" rel="nofollow noreferrer">system-cluster-critical</a> option. But I think this should only be used for critical k8s add-on pods but not for critical applications.</p>
| Matthias M | <blockquote>
<p>Is it possible to set explicitly the <code>Guaranteed</code> class to a pod even with difference CPU request and limit?</p>
</blockquote>
<p><strong>Yes, however you will need to use an additional plugin: <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling" rel="nofollow noreferrer">capacity-scheduling</a> used with <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass" rel="nofollow noreferrer"><code>PriorityClass</code></a>:</strong></p>
<blockquote>
<p>There is increasing demand to use Kubernetes to manage batch workloads (ML/DL). In those cases, one challenge is to improve cluster utilization while ensuring that each user has a reasonable amount of resources. The problem can be partially addressed by the Kubernetes <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">ResourceQuota</a>. The native Kubernetes ResourceQuota API can be used to specify the maximum overall resource allocation per namespace. The quota enforcement is done through an admission check. A quota consumer (e.g., a Pod) cannot be created if the aggregated resource allocation exceeds the quota limit. In other words, the overall resource usage is aggregated based on Pod's spec (i.e., cpu/mem requests) when it's created. The Kubernetes quota design has the limitation: the quota resource usage is aggregated based on the resource configurations (e.g., Pod cpu/mem requests specified in the Pod spec). Although this mechanism can guarantee that the actual resource consumption will never exceed the ResourceQuota limit, it might lead to low resource utilization as some pods may have claimed the resources but failed to be scheduled. For instance, actual resource consumption may be much smaller than the limit.</p>
</blockquote>
<hr />
<blockquote>
<p>Pods can be created at a specific <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority" rel="nofollow noreferrer">priority</a>. You can control a pod's consumption of system resources based on a pod's priority, by using the <code>scopeSelector</code> field in the quota spec.</p>
<p>A quota is matched and consumed only if <code>scopeSelector</code> in the quota spec selects the pod.</p>
<p>When quota is scoped for priority class using <code>scopeSelector</code> field, quota object is restricted to track only following resources:</p>
<ul>
<li><code>pods</code></li>
<li><code>cpu</code></li>
<li><code>memory</code></li>
<li><code>ephemeral-storage</code></li>
<li><code>limits.cpu</code></li>
<li><code>limits.memory</code></li>
<li><code>limits.ephemeral-storage</code></li>
<li><code>requests.cpu</code></li>
<li><code>requests.memory</code></li>
<li><code>requests.ephemeral-storage</code></li>
</ul>
</blockquote>
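<p>A minimal illustration of such a priority-scoped quota (the class name and limits are placeholders):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high
value: 1000000
globalDefault: false
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
</code></pre>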
<p>This plugin supports also <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling#additional-preemption-details" rel="nofollow noreferrer">preemption</a> (example for Elastic):</p>
<blockquote>
<p>Preemption happens when a pod is unschedulable, i.e., failed in PreFilter or Filter phases.</p>
<p>In particular for capacity scheduling, the failure reasons could be:</p>
<ul>
<li>Prefilter Stage</li>
<li>sum(allocated res of pods in the same elasticquota) + pod.request > elasticquota.spec.max</li>
<li>sum(allocated res of pods in the same elasticquota) + pod.request > sum(elasticquota.spec.min)</li>
</ul>
<p>So the preemption logic will attempt to make the pod schedulable, with a cost of preempting other running pods.</p>
</blockquote>
<p>Examples of yaml files and usage can be found <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling#design-details" rel="nofollow noreferrer">in the plugin description</a>.</p>
| Mikołaj Głodziak |
<p>I have a kubernetes cluster with a kafka zookeeper statefulset that works fine with one pod. However, when for performance reasons I try to scale the statefulset to three pods with the following command:</p>
<pre><code>kubectl scale statefulset <my_zookeper_statefulset> --replicas=3
</code></pre>
<p>The two new pods go into an Error and then a CrashLoopBackOff with the following logs:</p>
<pre><code>Detected Zookeeper ID 3
Preparing truststore
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/zookeeper/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore is complete
Looking for the right CA
No CA found. Thus exiting.
</code></pre>
<p>The certificate in question exists and is used by the existing pod without problem.
The same error occurs when I try to scale my kafka brokers.</p>
<p>Tl;dr: How do I scale kafka up without error?</p>
| sigma1510 | <p>When you run Kafka this way, you can't scale it by scaling the StatefulSet directly. Instead, you need to configure the <code>replicas</code> parameter in the resource that defines your cluster.
Example of <code>spec</code> properties for the <code>Kafka</code> resource:</p>
<pre><code>apiVersion: YourVersion
kind: Kafka
metadata:
name: my-cluster
spec:
kafka:
replicas: 3
</code></pre>
<p>Secondly, you can also create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">ReplicationController</a> for more convenient work with replicas.</p>
| Mykola |
<h2>Stack</h2>
<ul>
<li>Kubernetes (Digital Ocean)</li>
<li>Ingress Nginx Controller</li>
<li>React Frontend (static files)</li>
<li>NodeJS/Express Backend</li>
</ul>
<p>At certain point I need to load multiple images, more than a hundred, some are retrieved but many are not loaded. In the chrome console I get the following error:</p>
<pre><code>GET https://api.mydomain.com/images/products/F10014-soporte-caja-2.501.jpg net::ERR_HTTP2_SERVER_REFUSED_STREAM
</code></pre>
<p>This images are in a public express folder:</p>
<pre><code>let publicPath = path.join(__dirname, '/public')
console.log(`publicPath ${publicPath}`)
</code></pre>
<p>I looked at NodeJS and I couldn't find any errors. I also tried adding annotations in the <code>ingress-nginx</code> service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kubernetes.digitalocean.com/load-balancer-id: "e7f5dc8e-3723-11ec-8d3d-0242ac130003"
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
service.beta.kubernetes.io/do-loadbalancer-hostname: "mydomain.com"
labels:
helm.sh/chart: ingress-nginx-4.1.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>My questions are:</p>
<ol>
<li>Is the problem in the <code>ingress-nginx</code> controller?</li>
<li>Can this be solved?</li>
<li>Should I change my solution and place the files in another place?</li>
</ol>
<p>Let me know if you need information.</p>
| agusgambina | <p><strong>In short:</strong></p>
<blockquote>
<p>My questions are</p>
<ul>
<li>Is the problem in the ingress-nginx controller?</li>
</ul>
</blockquote>
<p><strong>Basically no</strong></p>
<blockquote>
<ul>
<li>Can this be solved?</li>
</ul>
</blockquote>
<p><strong>Yes</strong></p>
<blockquote>
<ul>
<li>Should I change my solution and place the files in another place?</li>
</ul>
</blockquote>
<p><strong>It depends :)</strong></p>
<h2>Explanation:</h2>
<p>First of all, you need to identify where the error is coming from. You received <code>ERR_HTTP2_SERVER_REFUSED_STREAM</code> for this request: <code>https://api.mydomain.com/images/products/F10014-soporte-caja-2.501.jpg</code>. It means the browser opened more concurrent HTTP/2 streams than the server allows, so the server refused some of them. How can you fix this? One option is to load the images in smaller batches instead of all at once. Another is to configure the nginx server that serves the pictures. See the <a href="http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Sets the maximum number of concurrent HTTP/2 streams in a connection.</p>
<p>Syntax: http2_max_concurrent_streams number;
Default: http2_max_concurrent_streams 128;
Context: http, server</p>
</blockquote>
<p>Here you can set a bigger value.</p>
<p>You can also set a bigger value in a file such as <code>/etc/nginx/conf.d/custom_proxy_settings.conf</code> with the line</p>
<pre><code>http2_max_concurrent_streams 256;
</code></pre>
<p>The exact name of the file isn't important; it just has to end with .conf and be mounted inside /etc/nginx/conf.d.</p>
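<p>Since you are running the ingress-nginx controller, the cleaner way to set this is usually through the controller's ConfigMap rather than a hand-mounted file; newer controller versions expose an <code>http2-max-concurrent-streams</code> option (verify that the key exists in the ConfigMap documentation for your controller version):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  http2-max-concurrent-streams: "256"
</code></pre>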
<p>Another solution could be disabling HTTP/2 and using HTTP/1.1 protocol, but it may be a security risk.</p>
<p>You have also asked:</p>
<blockquote>
<p>Should I change my solution and place the files in another place?</p>
</blockquote>
<p>You can, but it shouldn't be necessary.</p>
| Mikołaj Głodziak |
<p>I have my Hyperledger Fabric blockchain deployed on k8s in the <strong>namespace: hlf-blockchain</strong> and my client app is deployed in another <strong>namespace: hlf-app</strong>.</p>
<p>The cpp-profile template is below. The URLs have the form <em><code>grpcs://<service-name>.<namespace>:<port></code></em>, which enables cross-namespace communication.</p>
<pre><code>{
"name": "test-network",
"version": "1.0.0",
"client": {
"organization": "MyOrg",
"connection": {
"timeout": {
"peer": {
"endorser": "10000"
}
}
}
},
"organizations": {
"TboxOrg": {
"mspid": "MyOrg",
"peers": [
"peer0",
"peer1",
"peer2"
],
"certificateAuthorities": [
"ca-myorg"
]
}
},
"peers": {
"peer0": {
"url": "grpcs://peer0.hlf-blockchain:${P0PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer0",
"hostnameOverride": "peer0",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
},
"peer1": {
"url": "grpcs://peer1.hlf-blockchain:${P1PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer1",
"hostnameOverride": "peer1",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
},
"peer2-tbox": {
"url": "grpcs://peer2.hlf-blockchain:${P2PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer2",
"hostnameOverride": "peer2",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
}
},
"certificateAuthorities": {
"ca-tboxorg": {
"url": "https://ca-myorg.hlf-blockchain:${CAPORT}",
"caName": "ca-myorg",
"tlsCACerts": {
"pem": ["${CAPEM}"]
},
"httpOptions": {
"verify": false
}
}
}
}
</code></pre>
<p>From my client app using <strong>fabric-sdk-go</strong> I am able to connect to the network using the gateway. While invoking the chaincode I am getting the following error:</p>
<pre><code>Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection on target [peer0:7051]: connection is in TRANSIENT_FAILURE\nTransaction processing for endorser
</code></pre>
<p>I am able to invoke the transactions using the CLI command from the same <strong>namespace: hlf-blockchain</strong>.</p>
<p>My <strong>peer service configuration</strong>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: peer0
labels:
app: peer0
spec:
selector:
name: peer0
type: ClusterIP
ports:
- name: grpc
port: 7051
protocol: TCP
- name: event
port: 7061
protocol: TCP
- name: couchdb
port: 5984
protocol: TCP
</code></pre>
<p>I believe this error is due to a communication problem between the different namespaces, since the client app gets the peer addresses from the cpp-profile.</p>
<p>What's the correct way to configure the peer service or the cpp connection profile?</p>
| Niraj Kumar | <p>You are correct, the discovery service is returning network URLs that are unreachable from outside the <code>hlf-blockchain</code> namespace.</p>
<p>It is possible to run a Gateway client in a different namespace from the Fabric network. If you are using Kube DNS, each
of the fabric nodes can be referenced with a fully qualified host name <code><service-name>.<namespace>.svc.cluster.local</code>.</p>
<p>In order to connect a gateway client across namespaces, you will need to introduce the .svc.cluster.local
Fully Qualified Domain Name to the fabric URLs returned by discovery:</p>
<ul>
<li><p>In your TLS CA enrollments, make sure that the certificate signing requests include a valid Subject Alternate Name
with the FQDN. For example, if your peer0 TLS certificate is only valid for the host <code>peer0</code>, then the grpcs://
connection will be rejected in the TLS handshake when connecting to grpcs://peer0.hlf-blockchain.svc.cluster.local.</p>
</li>
<li><p>In the Gateway Client Connection Profile, use the FQDN when connecting to the discovery peers. In addition
to the peer <code>url</code> attribute, make sure to address host names in the <code>grpcOptions</code> stanzas.</p>
</li>
<li><p>Discovery will return the peer host names as specified in the core.yaml <code>peer.gossip.externalendpoint</code>
(<code>CORE_PEER_GOSSIP_EXTERNALENDPOINT</code> env) parameter. Make sure that this specifies the FQDN for all peers
visible to discovery.</p>
</li>
<li><p>Discovery will return the orderer host names as specified in the configtx.yaml organization <code>OrdererEndpoints</code> stanza.
Make sure that these URLs specify the FQDN for all orderers.</p>
</li>
</ul>
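<p>As an illustration of the connection-profile changes described in the bullets above, a peer entry rewritten with the FQDN might look like the sketch below (ports and names are taken from the question; adjust them to your environment):</p>
<pre><code>"peer0": {
  "url": "grpcs://peer0.hlf-blockchain.svc.cluster.local:7051",
  "tlsCACerts": {
    "pem": "${PEERPEM}"
  },
  "grpcOptions": {
    "ssl-target-name-override": "peer0.hlf-blockchain.svc.cluster.local",
    "hostnameOverride": "peer0.hlf-blockchain.svc.cluster.local",
    "request-timeout": 10000,
    "grpc.keepalive_time_ms": 60000
  }
}
</code></pre>
<p>Remember that this only helps if the peer's TLS certificate lists that FQDN as a Subject Alternative Name, as described in the first bullet.</p>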
<p>Regarding the general networking, make sure to double-check that the gateway client application has visibility and a
network route to the pods running fabric services in a different namespace. Depending on your Calico configuration
and Kube permissions, it's possible that traffic is getting blocked before it ever reaches the Fabric services.</p>
| Josh |
<p>I have a service providing an API that I want to only be accessible over <code>https</code>. I don't want <code>http</code> to redirect to <code>https</code> because that will expose credentials and the caller won't notice. Better to get an error response.</p>
<p>How to do I configure my ingress.yaml? Note that I want to maintain the default 308 redirect from <code>http</code> to <code>https</code> for other services in the same cluster.</p>
<p>Thanks.</p>
| David Tinker | <p>In the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">documentation</a>: you can read the following sentence about HTTPS enforcement through redirect:</p>
<blockquote>
<p>By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use <code>ssl-redirect: "false"</code> in the NGINX <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-redirect" rel="nofollow noreferrer">ConfigMap</a>.</p>
</blockquote>
<blockquote>
<p>To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.</p>
</blockquote>
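<p>For illustration, applying the annotation to a single Ingress could look like the minimal sketch below (host, secret and service names are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # disable the 308 http-to-https redirect for this resource only
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - api.example.com        # placeholder host
    secretName: api-tls      # placeholder TLS secret
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service   # placeholder service
            port:
              number: 80
</code></pre>
<p>Keep in mind that with the redirect disabled, a plain-HTTP request is simply proxied to the backend rather than rejected.</p>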
<p>You can also create two separate configurations: one with http and https and the other one only for http.</p>
<p>Using <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer"><code>kubernetes.io/ingress.class</code></a> annotation you can choose the ingress controller to be used.</p>
<blockquote>
<p>This mechanism also provides users the ability to run <em>multiple</em> NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic).</p>
</blockquote>
<p>See also <a href="https://stackoverflow.com/questions/56302606/kubernetes-ingress-nginx-how-can-i-disable-listening-on-https-if-no-tls-config">this</a> and <a href="https://stackoverflow.com/questions/50087544/disable-ssl-redirect-for-kubernetes-nginx-ingress/50087545">this</a> similar questions.</p>
| kkopczak |
<p>I have deployed a pod in a Kubernetes cluster that runs a Python script.</p>
<p>The problem is that I want to force Kubernetes to stop the container after the script completes its job, and not to re-create another pod.</p>
<p>Be aware that I have tried to use kind: Job, but it doesn't fulfill my need.</p>
<p>I tried two kinds: Job and Deployment.</p>
<p>With the Deployment, the pod first shows the status Completed and after that crashes with a CrashLoopBackOff error.</p>
<p>With the Job, the pod shows the status Completed, but I don't have a way to re-execute it in an automated way.</p>
<p>Do you have any suggestions about that?</p>
| Ahmed | <p>I have posted community wiki answer to summarise the topic.</p>
<p>User <a href="https://stackoverflow.com/users/213269/jonas" title="108,324 reputation">Jonas</a> has posted great suggestions:</p>
<blockquote>
<p>A kind <code>Job</code> does exactly this. Use <code>Job</code> and your problem is solved.</p>
</blockquote>
<blockquote>
<p>If you deploy with <code>kubectl create -f job.yaml</code> and your job has a <code>generateName:</code> instead of <code>name:</code>, a new <code>Job</code> will be created each time.</p>
</blockquote>
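<p>A minimal sketch of such a Job manifest (the image and command are placeholders for your Python script):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  # no fixed name: every `kubectl create -f job.yaml` generates a new Job
  generateName: my-script-
spec:
  backoffLimit: 0            # do not retry the pod if the script fails
  template:
    spec:
      restartPolicy: Never   # the pod is not re-created after the script exits
      containers:
      - name: script
        image: python:3.11                        # placeholder image
        command: ["python", "/app/script.py"]     # placeholder command
</code></pre>
<p>Note that <code>generateName</code> works with <code>kubectl create</code>, but not with <code>kubectl apply</code>.</p>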
<p>For more information look at the documentation about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a>. See also the information about <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#generated-values" rel="nofollow noreferrer">Generated values</a>.</p>
| Mikołaj Głodziak |
<p>We have created our own custom resource a.k.a CRD and we need to add support for rolling update, as K8s is supporting it for deployments etc we want to reuse such logic, is there any lib which we can use (maybe partially) which we can use to support it? Or maybe learn and follow the logic as we don't want to re-invent the wheel? Any reference/lib would be helpful.</p>
<p>I've struggled to find this <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">here</a>.</p>
| JME | <p>Posted community wiki answer to summarise the problem.</p>
<p><a href="https://stackoverflow.com/users/13906951/clark-mccauley" title="1,093 reputation">Clark McCauley</a> well suggested:</p>
<blockquote>
<p>You're probably looking for the logic contained <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L32-L67" rel="nofollow noreferrer">here</a>.</p>
</blockquote>
<p>This is a reference to the k8s source code, so you probably won't find a better source of ideas :)</p>
<pre><code>// rolloutRolling implements the logic for rolling a new replica set.
func (dc *DeploymentController) rolloutRolling(ctx context.Context, d *apps.Deployment, rsList []*apps.ReplicaSet) error {
newRS, oldRSs, err := dc.getAllReplicaSetsAndSyncRevision(ctx, d, rsList, true)
if err != nil {
return err
}
allRSs := append(oldRSs, newRS)
// Scale up, if we can.
scaledUp, err := dc.reconcileNewReplicaSet(ctx, allRSs, newRS, d)
if err != nil {
return err
}
if scaledUp {
// Update DeploymentStatus
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
// Scale down, if we can.
scaledDown, err := dc.reconcileOldReplicaSets(ctx, allRSs, controller.FilterActiveReplicaSets(oldRSs), newRS, d)
if err != nil {
return err
}
if scaledDown {
// Update DeploymentStatus
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
if deploymentutil.DeploymentComplete(d, &d.Status) {
if err := dc.cleanupDeployment(ctx, oldRSs, d); err != nil {
return err
}
}
// Sync deployment status
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
</code></pre>
| Mikołaj Głodziak |
<p>Are the resources in a kubernetes YAML manifest created in sequence?</p>
<p>Say I have a manifest file like so</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: default
data:
  prop.value: "1"
---
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: registry.k8s.io/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: PROP_VALUE
valueFrom:
configMapKeyRef:
name: app-config
key: prop.value
restartPolicy: Never
</code></pre>
<p>Will ConfigMap be created before Deployment, so Deployment can use the correct ConfigMap value?</p>
| Anugerah Erlaut | <p>Yes: <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/" rel="noreferrer">Manage Deployment</a></p>
<blockquote>
<p>The resources will be created in the order they appear in the file.</p>
</blockquote>
<p>But this should not matter too much in Kubernetes: if the Pod (or a Deployment's Pods) is created first, its containers will only start once the referenced ConfigMap exists; the kubelet keeps retrying until then.</p>
| HiroCereal |
<p>On a GKE cluster, I have client and server pods, with a client service and a server service.</p>
<p>My server service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server-cluster-ip-service
spec:
type: ClusterIP
selector:
app: server-deployment
ports:
- port: 5000
targetPort: 5000
</code></pre>
<p>When I access the client pod shell and run</p>
<pre><code>nslookup server-cluster-ip-service
</code></pre>
<p>I get</p>
<pre><code>Server: server-IP
Address: server-IP-Address
Name: server-cluster-ip-service.default.svc.cluster.local
Address: IPAddress
** server can't find server-cluster-ip-service.svc.cluster.local: NXDOMAIN
** server can't find server-cluster-ip-service.cluster.local: NXDOMAIN
** server can't find server-cluster-ip-service.svc.cluster.local: NXDOMAIN
** server can't find server-cluster-ip-service.cluster.local: NXDOMAIN
** server can't find server-cluster-ip-service.us-central1-c.c.my-cluster.internal: NXDOMAIN
** server can't find server-cluster-ip-service.google.internal: NXDOMAIN
** server can't find server-cluster-ip-service.us-central1-c.c.my-cluster: NXDOMAIN
** server can't find server-cluster-ip-service.google.internal: NXDOMAIN
** server can't find server-cluster-ip-service.c.my-cluster.internal: NXDOMAIN
** server can't find server-cluster-ip-service.c.my-cluster.internal: NXDOMAIN
</code></pre>
<p>The service is running on port 5000, because when I set up a pod with busybox, I can curl from that pod like so:</p>
<pre><code>curl server-cluster-ip-service:5000
</code></pre>
<p>And it returns json from my server.</p>
<p>After experimenting with what address to put in the fetch request in my client code, the only way I can get a 200 response is with this:</p>
<pre><code>const getAllUsers = async () => {
console.log("GETTING ALL USERS");
const response = await fetch("server-cluster-ip-service.default.svc.cluster.local", {
mode: 'cors',
headers: {
'Access-Control-Allow-Origin':'*'
}
});
const resp = await response
console.log("RESPONSE", resp)
const json = await response.json();
setUsers(json);
};
</code></pre>
<p><a href="https://i.stack.imgur.com/juP9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/juP9L.png" alt="enter image description here" /></a></p>
<p>which returns something that is not JSON and apparently not on port 5000,</p>
<p>whereas all attempts to query the service at port 5000 fail.</p>
<p>I have this running locally and it works fine. I have ruled out arm processor issues with my mac by building and pushing docker images in GKE from the cloud console. I am fairly confident this is a GKE issue, because the dns works locally, but why would it not work with GKE? I don't have any network policies I've set myself - could there be a node security group blocking access? I read about "shielding" as a node security policy configured at setup, but I don't know how to check if that's been configured?</p>
<p>Complete code below:</p>
<p>My server code is:</p>
<pre><code>const express = require("express");
const bodyParser = require("body-parser");
var cors = require("cors");
const PORT = 5000;
const app = express();
app.use(cors());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());
app.use(express.static("public"));
app.listen(PORT, function () {
console.log("listening on 5000");
});
app.get("/", (req, res) => {
console.log("PROCESSING GET USERS REQUEST");
const list = ["item1", "item2", "item3"];
res.json(list);
});
</code></pre>
<p>My client code is:</p>
<pre><code>import { useState, useEffect } from "react";
import "./editUser.css";
function EditUser() {
const [users, setUsers] = useState([]);
const getAllUsers = async () => {
console.log("GETTING ALL USERS");
const response = await fetch("http://server-cluster-ip-service:5000");
const json = await response.json();
setUsers(json);
};
useEffect(() => {
getAllUsers();
}, []);
return (
<div className="App">
<h1 data-position="header">Merr k8s testbed</h1>
<section data-position="quotes">
<h2>Console</h2>
<ul>
{users &&
users.map((user) => (
<li>
<h3>{user}</h3>
</li>
))}
</ul>
</section>
</div>
);
}
export default EditUser;
</code></pre>
<p>My client-deployment.yaml is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-deployment
labels:
app: client-deployment
component: web
spec:
replicas: 3
selector:
matchLabels:
app: client-deployment
template:
metadata:
labels:
app: client-deployment
component: web
spec:
containers:
- name: client
image: myDocker/k8s-client:latest
ports:
- containerPort: 3000
</code></pre>
<p>My server-deployment.yaml is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
labels:
app: server-deployment
spec:
replicas: 3
selector:
matchLabels:
app: server-deployment
template:
metadata:
labels:
app: server-deployment
spec:
containers:
- name: server
image: myDocker/k8s-server:latest
ports:
- containerPort: 5000
</code></pre>
<p>My client-cluster-ip-service.yaml is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
labels:
app: server-deployment
spec:
replicas: 3
selector:
matchLabels:
app: server-deployment
template:
metadata:
labels:
app: server-deployment
spec:
containers:
- name: server
image: myDocker/k8s-server:latest
ports:
- containerPort: 5000
</code></pre>
<p>My server-cluster-ip-service.yaml is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server-cluster-ip-service
spec:
type: ClusterIP
selector:
app: server-deployment
ports:
- port: 5000
targetPort: 5000
</code></pre>
| Davtho1983 | <p>I was able to see that you and jabbson concluded that the issue is probably with React. Just in case, let me share that a common root cause for this kind of issue is that DNS inside the Busybox image does not work properly, depending on the image version (I cannot see in the screenshots or the code which version you are using). Empirically, most cases work with the busybox image 1.28.4, so you can try using that version.</p>
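<p>A quick way to test DNS resolution from inside the cluster with that image version (assuming the Service and namespace from the question):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28.4 --restart=Never \
  -- nslookup server-cluster-ip-service.default.svc.cluster.local
</code></pre>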
<p>You can use the following URL’s thread as reference <a href="https://github.com/kubernetes/kubernetes/issues/66924" rel="nofollow noreferrer">dns can't resolve kubernetes.default and/or cluster.local</a></p>
| Nestor Daniel Ortega Perez |
<p>I have a Docker container with MariaDB running in Microk8s (running on a single Unix machine).</p>
<pre><code># Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: mariadb
spec:
selector:
matchLabels:
app: mariadb
template:
metadata:
labels:
app: mariadb
spec:
containers:
- name: mariadb
image: mariadb:latest
env:
- name: MARIADB_ROOT_PASSWORD
value: sa
ports:
- containerPort: 3306
</code></pre>
<p>These are the logs:</p>
<pre><code>(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
</code></pre>
<p>Now,</p>
<ul>
<li>connecting to port 3306 on the machine does not work.</li>
<li>connecting after exposing the pod with a service (any type) on port 8081 also does not work.</li>
</ul>
<p>How can I get the connection through?</p>
| DomJourneyman | <p>The answer was given in the comments section, but to clarify I am posting the solution here as a Community Wiki.</p>
<p>In this case, the problem with the connection was resolved by setting <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector" rel="nofollow noreferrer"><code>spec.selector</code></a> correctly.</p>
<blockquote>
<p>The <code>.spec.selector</code> field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (<code>app: nginx</code>).</p>
</blockquote>
<blockquote>
<p><code>.spec.selector</code> is a required field that specifies a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">label selector</a> for the Pods targeted by this Deployment.</p>
</blockquote>
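<p>For completeness, a ClusterIP Service whose selector matches the pod labels of the Deployment above would look roughly like this sketch (the Service name is arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  type: ClusterIP
  selector:
    app: mariadb        # must match .spec.template.metadata.labels of the Deployment
  ports:
  - port: 3306
    targetPort: 3306
</code></pre>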
| kkopczak |
<p>I am trying to run the Spark sample SparkPi Docker image on EKS. My Spark version is 3.0.<br />
I created the spark serviceaccount and role binding. When I submit the job, I get the error below:</p>
<pre class="lang-sh prettyprint-override"><code>2020-07-05T12:19:40.862635502Z Exception in thread "main" java.io.IOException: failure to login
2020-07-05T12:19:40.862756537Z at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:841)
2020-07-05T12:19:40.862772672Z at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:777)
2020-07-05T12:19:40.862777401Z at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:650)
2020-07-05T12:19:40.862788327Z at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2412)
2020-07-05T12:19:40.862792294Z at scala.Option.getOrElse(Option.scala:189)
2020-07-05T12:19:40.8628321Z at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2412)
2020-07-05T12:19:40.862836906Z at org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.configurePod(BasicDriverFeatureStep.scala:119)
2020-07-05T12:19:40.862907673Z at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$3(KubernetesDriverBuilder.scala:59)
2020-07-05T12:19:40.862917119Z at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
2020-07-05T12:19:40.86294845Z at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
2020-07-05T12:19:40.862964245Z at scala.collection.immutable.List.foldLeft(List.scala:89)
2020-07-05T12:19:40.862979665Z at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:58)
2020-07-05T12:19:40.863055425Z at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:98)
2020-07-05T12:19:40.863060434Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4(KubernetesClientApplication.scala:221)
2020-07-05T12:19:40.863096062Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4$adapted(KubernetesClientApplication.scala:215)
2020-07-05T12:19:40.863103831Z at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
2020-07-05T12:19:40.863163804Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:215)
2020-07-05T12:19:40.863168546Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:188)
2020-07-05T12:19:40.863194449Z at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
2020-07-05T12:19:40.863218817Z at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
2020-07-05T12:19:40.863246594Z at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
2020-07-05T12:19:40.863252341Z at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
2020-07-05T12:19:40.863277236Z at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
2020-07-05T12:19:40.863314173Z at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
2020-07-05T12:19:40.863319847Z at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-07-05T12:19:40.863653699Z Caused by: javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input: name
2020-07-05T12:19:40.863660447Z at com.sun.security.auth.UnixPrincipal.<init>(UnixPrincipal.java:71)
2020-07-05T12:19:40.863663683Z at com.sun.security.auth.module.UnixLoginModule.login(UnixLoginModule.java:133)
2020-07-05T12:19:40.863667173Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-07-05T12:19:40.863670199Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-07-05T12:19:40.863673467Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-07-05T12:19:40.86367674Z at java.lang.reflect.Method.invoke(Method.java:498)
2020-07-05T12:19:40.863680205Z at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
2020-07-05T12:19:40.863683401Z at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
2020-07-05T12:19:40.86368671Z at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
2020-07-05T12:19:40.863689794Z at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
2020-07-05T12:19:40.863693081Z at java.security.AccessController.doPrivileged(Native Method)
2020-07-05T12:19:40.863696183Z at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
2020-07-05T12:19:40.863698579Z at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
2020-07-05T12:19:40.863700844Z at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:815)
2020-07-05T12:19:40.863703393Z at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:777)
2020-07-05T12:19:40.86370659Z at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:650)
2020-07-05T12:19:40.863709809Z at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2412)
2020-07-05T12:19:40.863712847Z at scala.Option.getOrElse(Option.scala:189)
2020-07-05T12:19:40.863716102Z at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2412)
2020-07-05T12:19:40.863719273Z at org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.configurePod(BasicDriverFeatureStep.scala:119)
2020-07-05T12:19:40.86372651Z at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$3(KubernetesDriverBuilder.scala:59)
2020-07-05T12:19:40.863728947Z at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
2020-07-05T12:19:40.863731207Z at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
2020-07-05T12:19:40.863733458Z at scala.collection.immutable.List.foldLeft(List.scala:89)
2020-07-05T12:19:40.863736237Z at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:58)
2020-07-05T12:19:40.863738769Z at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:98)
2020-07-05T12:19:40.863742105Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4(KubernetesClientApplication.scala:221)
2020-07-05T12:19:40.863745486Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4$adapted(KubernetesClientApplication.scala:215)
2020-07-05T12:19:40.863749154Z at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
2020-07-05T12:19:40.863752601Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:215)
2020-07-05T12:19:40.863756118Z at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:188)
2020-07-05T12:19:40.863759673Z at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
2020-07-05T12:19:40.863762774Z at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
2020-07-05T12:19:40.863765929Z at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
2020-07-05T12:19:40.86376906Z at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
2020-07-05T12:19:40.863792673Z at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
2020-07-05T12:19:40.863797161Z at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
2020-07-05T12:19:40.863799703Z at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-07-05T12:19:40.863802085Z
2020-07-05T12:19:40.863804184Z at javax.security.auth.login.LoginContext.invoke(LoginContext.java:856)
2020-07-05T12:19:40.863806454Z at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
2020-07-05T12:19:40.863808705Z at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
2020-07-05T12:19:40.863811134Z at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
2020-07-05T12:19:40.863815328Z at java.security.AccessController.doPrivileged(Native Method)
2020-07-05T12:19:40.863817575Z at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
2020-07-05T12:19:40.863819856Z at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
2020-07-05T12:19:40.863829171Z at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:815)
2020-07-05T12:19:40.86385963Z ... 24 more
</code></pre>
<p>My deployments are:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: helios
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: spark
namespace: helios
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: spark-role-binding
namespace: helios
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
- kind: ServiceAccount
name: spark
namespace: helios
---
apiVersion: batch/v1
kind: Job
metadata:
name: spark-pi
namespace: helios
spec:
template:
spec:
containers:
- name: spark-pi
image: <registry>/spark-pi-3.0
command: [
"/bin/sh",
"-c",
"/opt/spark/bin/spark-submit \
--master k8s://https://<EKS_API_SERVER> \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.kubernetes.namespace=helios
--conf spark.executor.instances=2 \
--conf spark.executor.memory=2G \
--conf spark.executor.cores=2 \
--conf spark.kubernetes.container.image=<registry>/spark-pi-3.0 \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.jars.ivy=/tmp/.ivy
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar"
]
serviceAccountName: spark
restartPolicy: Never
</code></pre>
<p>The docker image is created using OOTB dockerfile provided in Spark installation.</p>
<pre><code>docker build -t spark:latest -f kubernetes/dockerfiles/spark/Dockerfile .
</code></pre>
<p>What am I doing wrong here? Please help.</p>
<p><strong>SOLUTION</strong><br />
Finally it worked after I commented out the line below from the Dockerfile.</p>
<pre><code>USER ${spark_uid}
</code></pre>
<p>Though the container is now running as root, at least it is working.</p>
| NumeroUno | <p>I had the same problem. I solved it by changing the k8s job.</p>
<p>Hadoop is failing to find a username for the user. You can see the problem by running <code>whoami</code> in the container, which yields <code>whoami: cannot find name for user ID 185</code>. The spark image <code>entrypoint.sh</code> contains <a href="https://github.com/apache/spark/blob/cef665004847c4cc2c5b0be9ef29ea5510c0922e/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L30" rel="noreferrer">code</a> to add the user to <code>/etc/passwd</code>, which sets a username. However <code>command</code> bypasses the <code>entrypoint.sh</code>, so instead you should use <code>args</code> like so:</p>
<pre><code> containers:
- name: spark-pi
image: <registry>/spark-pi-3.0
args: [
"/bin/sh",
"-c",
"/opt/spark/bin/spark-submit \
--master k8s://https://10.100.0.1:443 \
--deploy-mode cluster ..."
]
</code></pre>
| Kieran |
<p>I have a Jenkins stage that deploys a Kubernetes manifest file with the RollingUpdate strategy, and I configured the maxSurge and maxUnavailable values with 4 replica copies of the pod. While running Jenkins, my Kubernetes YAML file is applied, but after the deployment the new changes are not reflected.</p>
<p>In this case I need to log in to the worker node where my pod is running, stop the deployment, use the docker rmi command to remove the image, and finally pull the latest image to reflect the new changes.</p>
<p>jenkins file</p>
<pre><code>stage('Deploy') {
container('kubectl') {
withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
def kubectl
echo 'deploy to deployment!!'
if(gitBranch == "devops") {
kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=arn:aws:eks:eu-central-1:123456789101:cluster/my-modulus-cluster"
echo 'deploy to my-modulus-cluster!'
sh "kubectl apply -f ./infrastructure/dev/my.yaml -n default --record"
sh "kubectl rollout history deployment myapp"
}
}
}
}
</code></pre>
<p>Here the k8 manifest file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 2
selector:
matchLabels:
app: myapp
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myimage:latest
imagePullPolicy: Always
ports:
- containerPort: 3030
readinessProbe:
initialDelaySeconds: 1
periodSeconds: 2
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 1
httpGet:
host:
scheme: HTTPS
path: /
httpHeaders:
- name: Host
value: myhost
port: 3030
</code></pre>
| Gowmi | <p>Inside the deploy stage I have added an if condition to set the image:</p>
<pre><code>sh 'echo "Starting ams Deployment"'
sh '''
if kubectl get deployments | grep ams
then
kubectl set image deployment ams ams=myimage:latest
kubectl rollout restart deployment ams
else
kubectl apply -f my.yaml -n default
fi
'''
</code></pre>
| Gowmi |
<p>I have a MySQL database running in a Kubernetes cluster inside a pod. It was previously listing all the databases when I logged in through <code>mysql -u root -p</code> and then entered the password. But my application was not able to connect to that database and was showing <code>1045, "Access denied for user 'root'@'ipaddress' (using password: YES)"</code>; there was just one host, which is %, and the user was root.</p>
<p>I have updated the secrets as well and restarted the deployment, but it was still showing the above error.</p>
<p>Then I ran this command to grant all privileges to the root user:</p>
<pre><code>GRANT ALL ON root.* TO 'root'@'localhost' IDENTIFIED BY 'password';
</code></pre>
<p>It created one more host entry for root, which is localhost. Now when I try to log in with</p>
<pre><code>mysql -u root -p
</code></pre>
<p>it is not listing my databases and just showing</p>
<p><a href="https://i.stack.imgur.com/Cj2jC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cj2jC.png" alt="enter image description here" /></a></p>
<p>And now the host is localhost. What should I do to get my databases back?</p>
| bigDaddy | <p>In MySQL permissions are granted for "accounts" which consist of a user name and a host name <a href="https://dev.mysql.com/doc/refman/8.0/en/account-names.html" rel="nofollow noreferrer">[1].</a> So in terms of GRANTS:</p>
<pre><code>[email protected]
[email protected]
</code></pre>
<p>The above are two different users. The wildcard in terms of permissions is <code>%</code>. However <code>%</code> and <code>localhost</code> are mutually exclusive as explained <a href="https://stackoverflow.com/questions/10823854/using-for-host-when-creating-a-mysql-user">here</a>.</p>
<p>So having that in mind you would need to run something close to:</p>
<pre><code>CREATE USER 'root'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON your_database_name.* TO 'root'@'%';
</code></pre>
<p>This will enable connections coming from a different host. Please keep in mind that using the username root should be avoided; instead, use a dedicated user for your needs.</p>
| HiroCereal |
<p>Do you know which annotation we can use on GKE to make a LoadBalancer service internal? For example, Azure (and AWS) supports the following annotation (shown in the YAML code snippet) to make a LoadBalancer service internal. I couldn't find the equivalent of it on GKE. For example, one may naturally expect <strong>gcp-load-balancer-internal</strong> as the equivalent annotation on GKE; unfortunately it is not. Here is the Azure and AWS documentation for it; I am looking for the equivalent of it on GKE.</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">Azure: internal LoadBalancer</a></li>
<li><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/" rel="nofollow noreferrer">AWS: annotations</a></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
</code></pre>
| Satyan | <p>There are 2 annotations:</p>
<p>For GKE versions 1.17 and later, use the annotation:</p>
<pre><code>networking.gke.io/load-balancer-type: "Internal"
</code></pre>
<p>For earlier versions, use the annotation:</p>
<pre><code>cloud.google.com/load-balancer-type: "Internal"
</code></pre>
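<p>For illustration, a Service using the newer annotation might look like this sketch (name, selector and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-service              # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app                       # placeholder selector
  ports:
  - port: 80
    targetPort: 8080                  # placeholder ports
</code></pre>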
<p>In addition, here is some helpful official GCP documentation: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">Using an internal TCP/UDP load balancer</a> and <a href="https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-internal?hl=en#configure-a-network" rel="nofollow noreferrer">Setting up Internal HTTP(S) Load Balancing</a>.</p>
| Nestor Daniel Ortega Perez |
<p>My requirement is to optimize and secure a base image for NodeJS. I have tried building it on SCRATCH using a multi-stage Docker build, but the final container ends up in a crashed state.</p>
<p>I am looking for a sample Dockerfile that works on a SCRATCH base.</p>
| Mujahid Islam | <p>It's very much possible to build NodeJS applications on the Docker scratch image. Commands on scratch need to be thoroughly verified and must point to the right path of the node executable; otherwise the container will crash, since there is no shell or command-line interface in a scratch base.</p>
<p>Here is the dockerfile for sample NodeJS todo application and git reference.</p>
<pre><code>FROM siva094/node-scratch-static:1 as buildnode
#########################
#### Source code ########
########################
FROM alpine/git as codecheckout
WORKDIR /app
RUN git clone https://github.com/siva094/nodejs-todo.git
######################
#### Code Build #####
####################
FROM node:10-alpine as sourcecode
WORKDIR /app
COPY --from=codecheckout /app/nodejs-todo/ ./
RUN npm install --prod
###################
#### Target APP ###
##################
FROM scratch
COPY --from=buildnode /node/out/Release/node /node
COPY --from=sourcecode /app ./
ENV PATH "$PATH:/node"
EXPOSE 3000
ENTRYPOINT ["/node", "index.js"]
</code></pre>
<p>Git Reference - <a href="https://github.com/siva094/nodejs-todo" rel="nofollow noreferrer">https://github.com/siva094/nodejs-todo</a></p>
<p>Docker References:</p>
<p>NodeJS fully static build and NodeJS todo app</p>
<pre><code> docker pull siva094/node-fullystatic
docker pull siva094/nodejs-scratch-todo
</code></pre>
<p>Adding reference for building a static node.</p>
<p>source code URL - github.com/siva094/Dockers/blob/master/Dockerfile</p>
<pre><code>FROM node:alpine as builder
RUN apk --no-cache add --virtual native-deps \
g++ gcc libgcc libstdc++ linux-headers autoconf automake make nasm python git && \
npm install --quiet node-gyp -g
RUN npm install --quiet node-gyp -g
RUN git clone https://github.com/nodejs/node && \
cd node && \
./configure --fully-static --enable-static && \
make
FROM scratch
COPY --from=builder /node/out/Release/node /node
</code></pre>
| Siva Pilli |
<p>I have a pod with a single binary that runs multiple threads in the application. Is it possible in Kubernetes to assign a specific thread to a specific CPU core inside the pod? I am aware of a way to limit a pod to specific cores, but my requirement is to manage the thread-to-core mapping inside the pod. Thank you.</p>
| Vinay | <p>The lowest level at which you can set up <strong>CPU Policies</strong> is the Container level. We can divide a Kubernetes cluster into namespaces. If you create a Pod within a namespace that has a default CPU limit, and any container in that Pod does not specify its own CPU limit, then the control plane assigns the default CPU limit to that container. Visit this <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">Official Kubernetes Documentation</a> for more reference.</p>
<p>From the inside of the Container, the only possibility could be <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-tuna-cpu-tuning" rel="nofollow noreferrer">RedHat Tuna</a>. Tuna commands can target individual CPUs.</p>
<p>The last possibility is the <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy" rel="nofollow noreferrer"><strong>Static CPU policy</strong></a> which allows containers in Guaranteed pods with integer CPU requests access to exclusive CPUs on the node.</p>
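<p>For the static policy to take effect, the kubelet must be started with that policy and the container must fall into the Guaranteed QoS class with integer CPU values — roughly like this sketch (name, image and sizes are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pinned-app                    # placeholder name
spec:
  containers:
  - name: app
    image: my-registry/my-app:1.0     # placeholder image
    resources:
      requests:
        cpu: "2"                      # integer CPU count, identical to the limit
        memory: "2Gi"
      limits:
        cpu: "2"
        memory: "2Gi"
</code></pre>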
<p>Finally, the following <a href="https://stackoverflow.com/questions/53276398/kubernetes-cpu-multithreading">question</a> is useful for you, regarding multi threads and CPU assignments.</p>
| Nestor Daniel Ortega Perez |
<p>What is the purpose of the <code>kubectl delete -k <DIR></code> command ?</p>
<p>Does it destroy all the resources defined under that specific path?</p>
<p>Thanks</p>
| fmp | <p>In accordance with the comment posted by @DazWilkin:</p>
<p><strong>What is the purpose of the kubectl delete -k command?</strong></p>
<blockquote>
<p>Delete resources from a directory containing kustomization.yaml - e.g.
dir/kustomization.yaml</p>
</blockquote>
<p><strong>Does it destroy all the resources defined under that specific path?</strong></p>
<blockquote>
<p>Specifically, it deletes the resources that result from processing the
folder as a kustomization directory.</p>
</blockquote>
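<p>For example, given a directory <code>dir/</code> containing a <code>kustomization.yaml</code> such as the sketch below (the referenced manifests are placeholders), running <code>kubectl delete -k dir/</code> removes the resources that kustomize renders from it:</p>
<pre><code># dir/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
</code></pre>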
| Ismael Clemente Aguirre |
<p>Hello, I am new to Kubernetes and I need some help.</p>
<p>I want to use Kubernetes ingress paths for my 2 different Nuxt projects.</p>
<p>The first / path is working well, but my second /v1 path does not get resources like .css and .js.</p>
<p>My first deployment and service yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-1
labels:
app: nginx1
spec:
replicas: 1
selector:
matchLabels:
app: nginx1
template:
metadata:
labels:
app: nginx1
spec:
containers:
- name: nginx1
image: st/itmr:latest "can't show image"
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx1-svc
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: nginx1
</code></pre>
<p>My second deployment and service yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx2
labels:
app: nginx2
spec:
replicas: 1
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx2
image: st/itpd:latest "can't show image"
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx2-svc
spec:
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
selector:
app: nginx2
</code></pre>
<p>And there is the my ingress yaml file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: some.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx1-svc
port:
number: 80
- path: /v1
pathType: Prefix
backend:
service:
name: nginx2-svc
port:
number: 8080
</code></pre>
<p>I thought using nginx.ingress.kubernetes.io/rewrite-target: /$1 would work for me, but it does not.</p>
<p>I don't know where the problem is, so please help me.</p>
| kuroi_karasu | <p>To clarify I am posting a community wiki answer.</p>
<p>The problem here was resolved by switching the project path.</p>
<p>See more about ingress paths <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">here</a>.</p>
| kkopczak |
<p>I have created an nginx pod and an nginx ClusterIP service, and assigned an external IP to that service, as shown below:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-nginx ClusterIP 10.110.93.251 192.168.0.10 443/TCP,80/TCP,8000/TCP,5443/TCP 79m
</code></pre>
<p>In one of my application pods, I am running the command below to get the FQDN of that IP.</p>
<pre><code>>>> import socket
>>> socket.getfqdn('192.168.0.10')
'test-nginx.test.svc.cluster.local'
</code></pre>
<p>It returns the nginx service FQDN instead of my host machine's FQDN. Is there a way to block DNS resolution only for the external IP, or is there any other workaround for this problem?</p>
| prasanna kumar | <p>You assigned an external ip to a <code>ClusterIP</code> service in Kubernetes, so you can access your application from outside the Cluster, but you are concerned about the Pods having access to that external ip and want to block the dns resolution.</p>
<p>This is not the best approach to your issue; Kubernetes has several ways to expose services without compromising security. For what you want, a better option may be to implement an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> instead.
<a href="https://i.stack.imgur.com/loQjJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/loQjJ.png" alt="enter image description here" /></a></p>
<p>As you can see in the diagram, the Ingress routes the incoming traffic to the desired service based on configured rules, isolating the outside world from your service and only allowing specific traffic in. You can also implement features such as TLS termination for your HTTPS traffic, and it performs load balancing by default.</p>
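<p>A minimal sketch of such an Ingress for the <code>test-nginx</code> Service from the question (the host and the ingress class are assumptions, and an ingress controller must be installed in the cluster):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-ingress
spec:
  ingressClassName: nginx             # assumes an nginx ingress controller
  rules:
  - host: app.example.com             # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-nginx
            port:
              number: 80
</code></pre>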
<p>Even further, if your main concern is security within your Cluster, you can take a look at the <a href="https://istio.io/latest/about/service-mesh/" rel="nofollow noreferrer">Istio Service mesh</a>.</p>
| Gabriel Robledo Ahumada |
<p>I'm looking into a new update to my kubernetes cluster in Azure. However, I'm not sure how to do this. I have been able to build an ingress controller like this one:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "test.fullname" . -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "test.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ .port }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ .port }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>My values is the following:</p>
<pre><code>replicaCount: 1
image:
repository: test01.azurecr.io/test
tag: update1
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 2000
targetPort: http
protocol: TCP
ingress:
enabled: true
className: ""
annotations:
appgw.ingress.kubernetes.io/use-private-ip: 'true'
kubernetes.io/ingress.class: azure/application-gateway
hosts:
- host: test.com
paths:
- path: /test
pathType: Prefix
port: 80
tls: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
</code></pre>
<p>My pod is ready and it seems that the service is ready. However, the test.com domain is not working. I added a DNS record for my domain and used my cluster's IP to make sure the domain would be available. However, I still have an issue reaching the domain; the error message is the following:</p>
<pre><code>Connection timed out && This site can’t be reached
</code></pre>
<p>Does anyone know a better workaround for this?</p>
| Hvaandres | <p>In Kubernetes you have <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controllers</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resources. What you have is the definition of an Ingress, not an Ingress Controller. An Ingress will not work unless there is an Ingress Controller installed in your cluster.</p>
<p>However, in AKS (Azure Kubernetes Service), it is possible to bind your Ingress resources to an <a href="https://learn.microsoft.com/en-us/azure/application-gateway/overview" rel="nofollow noreferrer">Azure Application Gateway</a>, which is an Azure resource outside of your cluster.</p>
<p>To achieve this you need <a href="https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview" rel="nofollow noreferrer">AGIC</a> (Application Gateway Ingress Controller) which will be in charge of forwarding your Ingress configuration to the Application Gateway. You have already achieved this partially by adding these annotations on the Ingress resources you want to have configured there:</p>
<pre><code>annotations:
appgw.ingress.kubernetes.io/use-private-ip: 'true'
kubernetes.io/ingress.class: azure/application-gateway
</code></pre>
<p><strong>Summary</strong>:</p>
<p>You have two options:</p>
<ol>
<li>Install an Ingress Controller such as <a href="https://docs.nginx.com/nginx-ingress-controller/" rel="nofollow noreferrer">nginx</a> or <a href="https://doc.traefik.io/traefik/" rel="nofollow noreferrer">traefik</a> and adapt the annotations on your Ingress resources accordingly.</li>
<li>Make sure you have an Application Gateway deployed in your subscription, AGIC installed in your cluster, and all the configuration needed to allow AGIC to modify the Application Gateway.</li>
</ol>
<p>If it is the first time you are working with Ingresses and Azure, I strongly recommend you follow the first option.</p>
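<p>If you go with the first option, the community nginx controller is commonly installed with Helm along these lines (the repository URL and release name follow the upstream docs at the time of writing — verify them for your setup), after which your Ingress should reference it via <code>ingressClassName: nginx</code> instead of the Application Gateway annotations:</p>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>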
| HiroCereal |
<p>I was trying to run my pod as non root and also grant it some <a href="https://linux.die.net/man/7/capabilities" rel="nofollow noreferrer">capabilities</a>.<br />
This is my config:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: container-name
securityContext:
capabilities:
add: ["SETUID", "SYS_TIME"]
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1001
</code></pre>
<p>when I deploy my pod and connect to it I run <code>ps aux</code> and see:</p>
<pre><code>PID USER TIME COMMAND
1 root 0:32 node bla.js
205 root 0:00 /bin/bash
212 root 0:00 ps aux
</code></pre>
<p>I then do <code>cat /proc/1/status</code> and see:</p>
<pre><code>CapPrm: 0000000000000000
CapEff: 0000000000000000
</code></pre>
<p>Which means I have no capabilities for this container's process.<br />
The thing is that if I remove the <code>runAsNonRoot: true</code> flag from the <code>securityContext</code> I can see I <strong>do</strong> have multiple capabilities.<br />
Is there a way to run a pod as a non-root and still add some capabilities?</p>
| eladm26 | <p>This is the expected behavior. The capabilities are meant to divide the privileges traditionally associated with the superuser (root) into distinct units; a non-root user cannot enable or disable such capabilities, as that could create a security breach.</p>
<p>The <code>capabilities</code> feature in the <code>SecurityContext</code> key is designed to manage (either to limit or to expand) the Linux capabilities for the container's context; in a pod run as a root this means that the capabilities are inherited by the processes since these are owned by the root user; however, if the pod is run as a non-root user, it does not matter if the context has those capabilities enabled because the Linux Kernel will not allow a non-root user to set capabilities to a process.</p>
<p>This point can be illustrated very easily. If you run your container with the key <code>runAsNonRoot</code> set to <code>true</code> and add the capabilities as you did in the manifest shared, and then you exec into the Pod, you should be able to see those capabilities added to the context with the command:</p>
<pre><code>$ capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_time,cap_mknod,cap_audit_write,cap_setfcap+i
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_time,cap_mknod,cap_audit_write,cap_setfcap
</code></pre>
<p>But you will see the <code>CapPrm</code> or <code>CapEff</code> set to x0 in any process run by the user 1001:</p>
<pre><code>$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1001 1 0.0 0.0 4340 760 ? Ss 14:57 0:00 /bin/sh -c node server.js
1001 7 0.0 0.5 772128 22376 ? Sl 14:57 0:00 node server.js
1001 21 0.0 0.0 4340 720 pts/0 Ss 14:59 0:00 sh
1001 28 0.0 0.0 17504 2096 pts/0 R+ 15:02 0:00 ps aux
$ grep Cap proc/1/status
CapInh: 00000000aa0425fb
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 00000000aa0425fb
CapAmb: 0000000000000000
</code></pre>
| Gabriel Robledo Ahumada |
<p>I have a namespace where new short-lived pods (< 1 minute) are created constantly by Apache Airflow. I want all those new pods to be annotated with <code>aws.amazon.com/cloudwatch-agent-ignore: true</code> automatically, so that no CloudWatch metrics (container insights) are created for those pods.</p>
<p>I know that I can achieve that from airflow side with <a href="https://airflow.apache.org/docs/apache-airflow/stable/kubernetes.html#pod-mutation-hook" rel="nofollow noreferrer">pod mutation hook</a> but for the sake of the argument let's say that <strong>I have no control over the configuration of that airflow instance</strong>.</p>
<p>I have seen <code>MutatingAdmissionWebhook</code> and it seems that it could do the trick, but it is a considerable effort to set up. So I'm looking for a more off-the-shelf solution; I want to know if there is some "standard" admission controller that can handle this specific use case, without me having to deploy a web server and implement the API required by <code>MutatingAdmissionWebhook</code>.</p>
<p><strong>Is there any way to add that annotation from kubernetes side at pod creation time?</strong> The annotation must be there "from the beginning", not added 5 seconds later, otherwise the cwagent might pick it between the pod creation and the annotation being added.</p>
| RubenLaguna | <p>To clarify, I am posting a community wiki answer.</p>
<p>You need to use the <a href="https://github.com/aws/amazon-cloudwatch-agent/blob/dd1be96164c2cd6226a33c8cf7ce10a7f29547cf/plugins/processors/k8sdecorator/stores/podstore.go#L31" rel="nofollow noreferrer"><code>aws.amazon.com/cloudwatch-agent-ignore: true</code></a> annotation. This means that a pod that has this annotation will be ignored by <code>amazon-cloudwatch-agent</code> / <code>cwagent</code>.</p>
<p>Here is the excerpt of your solution how to add this annotation to Apache Airflow:</p>
<blockquote>
<p>(...) In order to force Apache Airflow to add the
<code>aws.amazon.com/cloudwatch-agent-ignore: true</code> annotation to the task/worker pods and to the pods created by the <code>KubernetesPodOperator</code> you will need to add the following to your helm <code>values.yaml</code> (assuming that you are using the "official" helm chart for airflow 2.2.3):</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>airflowPodAnnotations:
aws.amazon.com/cloudwatch-agent-ignore: "true"
airflowLocalSettings: |-
def pod_mutation_hook(pod):
pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
</code></pre>
<blockquote>
<p>If you are not using the helm chart then you will need to change the <code>pod_template_file</code> yourself to add the <code>annotation</code> and you will also need to modify the <code>airflow_local_settings.py</code> to include the <code>pod_mutation_hook</code>.</p>
</blockquote>
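<p>If you go the <code>pod_template_file</code> route, the relevant change is just the annotation under the pod metadata. A minimal sketch, where everything except the annotation is a placeholder (Airflow overrides the pod name at runtime and expects the worker container to be called <code>base</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: placeholder-name                           # replaced by Airflow at runtime
  annotations:
    aws.amazon.com/cloudwatch-agent-ignore: "true"  # the annotation cwagent looks for
spec:
  containers:
    - name: base                                    # Airflow's worker container name
      image: apache/airflow:2.2.3                   # placeholder image/tag
</code></pre>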
<p><a href="https://stackoverflow.com/questions/71438495/how-to-prevent-cloudwatch-container-insights-metrics-from-short-lived-kubernetes/71438496#71438496">Here</a> is the link to your whole answer.</p>
| kkopczak |
<p>I'm working on Kubernetes deployment services using minikube locally on my windows 10 machine, so when I expose my service which is an expressjs API I can reach it via: <code>localhost:3000</code></p>
<p><a href="https://i.stack.imgur.com/2s3ii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2s3ii.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/EFDnF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EFDnF.png" alt="enter image description here" /></a></p>
<p>I want to expose that service on the network so I can test the API from another device (my mobile phone). To do that, I installed Nginx and set up a reverse proxy to forward all incoming requests on port 80 to <code>localhost:3000/api</code>, but when I hit localhost it shows the default Nginx page.</p>
<p><a href="https://i.stack.imgur.com/Hcsld.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hcsld.png" alt="enter image description here" /></a></p>
<p>this is my nginx.conf</p>
<pre><code>#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://localhost:3000/api/;
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
</code></pre>
| Ali Ourag | <p>A few things could be happening here:</p>
<ol>
<li><p>After changing the Nginx configuration, the new settings are not applied until Nginx reloads them; run <code>nginx -s reload</code> (or stop it with <code>nginx -s stop</code> and run <code>start nginx</code> again) so the reverse proxy configuration actually takes effect.</p>
</li>
<li><p>Accidentally running multiple instances: if the command <code>start nginx</code> was run more than once, you will have one set of Nginx processes per invocation and <code>nginx -s stop</code> may not stop them all; in that case you will need to kill the processes from the Task Manager or restart Windows.</p>
</li>
<li><p>Be aware that Nginx for Windows is considered a <em>beta</em> version and it has some performance and operability issues, as stated in the following documentation [1].</p>
</li>
</ol>
<p>I recommend switching to a Linux system which fully supports Minikube and Nginx. Your scenario has been replicated on an Ubuntu VM and is working as expected.</p>
<p>[1] <a href="http://nginx.org/en/docs/windows.html" rel="nofollow noreferrer">http://nginx.org/en/docs/windows.html</a></p>
| Gabriel Robledo Ahumada |
<p><br />
In the application's responses we see duplicated <em>transfer-encoding</em> headers. <br />
Presumably because of that we get a 503 in the UI, while at the same time the application returns a 201 in the pod's logs. <br />
Besides <code>http code: 201</code>, both <code>transfer-encoding=chunked</code> and <code>Transfer-Encoding=chunked</code> headers appear in the logs, so that could be the reason for the 503. <br />
We've tried to remove <code>transfer-encoding</code> via an Istio virtual service and an envoy filter, but no luck.</p>
<p>Here are samples we tried:</p>
<p>VS definition:</p>
<pre><code>kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: test
namespace: my-ns
spec:
hosts:
- my-service
http:
- route:
- destination:
host: my-service
headers:
response:
remove:
- transfer-encoding
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: test
namespace: istio-system
spec:
gateways:
- wildcard-api-gateway
hosts:
- my-ns_domain
http:
- match:
- uri:
prefix: /operator/api/my-service
rewrite:
uri: /my-service
route:
- destination:
host: >-
my-service.my-ns.svc.cluster.local
port:
number: 8080
headers:
response:
remove:
- transfer-encoding
</code></pre>
<p>EnvoyFilter definition:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: test
namespace: istio-system
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
patch:
operation: ADD
value:
name: envoy.filters.http.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function envoy_on_response(response_handle)
response_handle:headers():remove("transfer-encoding")
end
</code></pre>
<p>In older envoy versions I see <code>envoy.reloadable_features.reject_unsupported_transfer_encodings=false</code> was a workaround. Unfortunately, it was deprecated.</p>
<p>Please advise what is wrong with VS/filter or is there any alternative to <code>reject_unsupported_transfer_encodings</code> option?</p>
<p>Istio v1.8.2 <br />
Envoy v1.16.1</p>
| Eugene G | <p>The decision so far: we created a requirement for the dev team to remove the duplicated chunked encoding.</p>
| Eugene G |
<p>In a LimitRange in k8s we can simply limit the RAM and the CPU; can we do that for ephemeral-storage as well?</p>
| yashod perera | <p>To set default requests and limits on ephemeral storage for each container in <code>mytest</code> namespace:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: storage-limit
namespace: mytest
spec:
limits:
- default:
ephemeral-storage: 2Gi
defaultRequest:
ephemeral-storage: 1Gi
type: Container
</code></pre>
<p>To change the scope to Pod, simply change it to <code>type: Pod</code>.</p>
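<p>Containers that declare their own values are not affected by these defaults. As an illustrative sketch (pod name and image are placeholders), an explicit request/limit overrides what the LimitRange would otherwise inject:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: storage-demo                   # placeholder name
  namespace: mytest
spec:
  containers:
    - name: app
      image: nginx                     # placeholder image
      resources:
        requests:
          ephemeral-storage: 500Mi     # overrides the 1Gi defaultRequest
        limits:
          ephemeral-storage: 1Gi       # overrides the 2Gi default limit
</code></pre>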
| Gleb Golubiatnikov |
<p>I'm trying to get my local development environment converted from <code>docker-compose</code> to a flavour of lightweight Kubernetes, since our production environments are all Kubernetes(-like). With the help of <a href="https://kompose.io" rel="nofollow noreferrer">Kompose</a> I could convert my settings. The part I struggle with, however, is the conversion of Docker volumes to a local developer-machine Volume and Volume Claim. Most examples I found cover drivers for S3 or NFS, which isn't what my development laptop offers.</p>
<p>So when I have in my <code>docker-compose.yml</code> a setting like this, what's the local development equivalent in k8s?</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.6"
services:
my_test:
image: "myregistry.com/my_test"
container_name: "demo"
hostname: "demo.acme.local"
stop_grace_period: 120s
cap_add:
- SYS_PTRACE
ports:
- 80:80
volumes:
- my_vol:/local/my_app
volumes:
my_vol:
name: "/local/containers/my_app"
external: false
</code></pre>
<p>I'm "only" looking for the volume part. The rest seems clear.<br />
Help is appreciated</p>
| stwissel | <p>Both solutions you found (<code>hostPath</code> and <code>emptyDir</code>) look good.</p>
<p>Here you have documentations to both ones:</p>
<hr />
<blockquote>
<p>A <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> volume mounts a file or directory from the host node's filesystem into your Pod.</p>
</blockquote>
<p>To the required <code>path</code> property, you can also specify a <code>type</code> for a <code>hostPath</code> volume.</p>
<blockquote>
<p><strong>NOTE</strong>:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as <strong>ReadOnly</strong>.</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-configuration-example" rel="nofollow noreferrer"><em>hostPath configuration example</em></a></li>
</ul>
<hr />
<blockquote>
<p>An <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer"><code>emptyDir</code></a> volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node.</p>
</blockquote>
<p>An <code>emptyDir</code> volume is initially empty. All containers in the pod can read and write the same files in the <code>emptyDir</code> volume, even though it can be mounted at the same or different paths in each container.</p>
<p>If the Pod is removed from a node for any reason, the data in the <code>emptyDir</code> is deleted <em>permanently</em>.</p>
<blockquote>
<p>Depending on your environment, <code>emptyDir</code> volumes are stored on whatever medium that backs the node such as disk or SSD, or network storage.</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir-configuration-example" rel="nofollow noreferrer"><em>emptyDir configuration example</em></a></li>
</ul>
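<p>As a rough local-development equivalent of the compose volume above, a <code>hostPath</code> volume could be wired in like the sketch below. The paths and names are taken from the question, the rest is assumed, and keep in mind the path has to exist on the cluster node (e.g. inside the minikube VM), not on your laptop:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-test                          # placeholder name
spec:
  containers:
    - name: my-test
      image: myregistry.com/my_test
      ports:
        - containerPort: 80
      volumeMounts:
        - name: my-vol
          mountPath: /local/my_app       # same mount path as in docker-compose
  volumes:
    - name: my-vol
      hostPath:
        path: /local/containers/my_app   # directory on the node, not your laptop
        type: DirectoryOrCreate          # create it on the node if missing
</code></pre>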
| kkopczak |
<p>What am I getting wrong in the manifest below?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /nginx
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>The error I am getting:</p>
<pre><code>error validating "ingress.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with
--validate=false
</code></pre>
| Subhajit Das | <p>According to <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>, in <code>networking.k8s.io/v1</code> you need to replace <code>serviceName</code> and <code>servicePort</code> with the nested <code>service.name</code> and <code>service.port</code> fields.</p>
<blockquote>
<p>Each HTTP rule contains (...) a list of paths (for example, <code>/testpath</code>), each of which has an associated backend defined with a <code>service.name</code> and a <code>service.port.name</code> or <code>service.port.number</code>. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.</p>
</blockquote>
<p>Here is your yaml file with corrections:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /nginx
backend:
service:
name: nginx
port:
number: 8080
</code></pre>
| kkopczak |
<p>Getting this error from AKS Jenkins agent pods. Any idea what the reason for this error could be?
These are the troubleshooting steps I tried:</p>
<ul>
<li>Reverted Jenkins to the old version => same error.</li>
<li>Upgraded Jenkins to the newest version, including all plugins in use => same error.</li>
<li>Downgraded the Jenkins Kubernetes and Kubernetes API plugins to a stable version, as per some suggestions on GitHub => same error.</li>
<li>Created a brand new cluster, installed Jenkins, and the job pods started giving the same error.</li>
</ul>
<p>How to fix this?</p>
<pre><code>18:23:33 [Pipeline] // podTemplate
18:23:33 [Pipeline] End of Pipeline
18:23:33 io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
18:23:33 at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:176)
18:23:33 at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:322)
18:23:33 at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:84)
18:23:33 at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:413)
18:23:33 at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:330)
18:23:33 at hudson.Launcher$ProcStarter.start(Launcher.java:507)
18:23:33 at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
18:23:33 at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
18:23:33 at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
18:23:33 at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
18:23:33 at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
18:23:33 at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
18:23:33 at jdk.internal.reflect.GeneratedMethodAccessor6588.invoke(Unknown Source)
18:23:33 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33 at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
18:23:33 at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
18:23:33 at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
18:23:33 at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
18:23:33 at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
18:23:33 at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
18:23:33 at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
18:23:33 at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
18:23:33 at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:158)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
18:23:33 at WorkflowScript.run(WorkflowScript:114)
18:23:33 at ___cps.transform___(Native Method)
18:23:33 at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
18:23:33 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
18:23:33 at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
18:23:33 at jdk.internal.reflect.GeneratedMethodAccessor210.invoke(Unknown Source)
18:23:33 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33 at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
18:23:33 at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
18:23:33 at com.cloudbees.groovy.cps.Next.step(Next.java:83)
18:23:33 at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
18:23:33 at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
18:23:33 at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
18:23:33 at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
18:23:33 at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
18:23:33 at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
18:23:33 at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
18:23:33 at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
</code></pre>
| rohit534 | <p><strong>EDIT:</strong> Doesn't seem to make any difference. Still getting 5000 ms timeouts, so not sure this method works (with Environment variables at least). It might work if you are actually able to change timeout, but I haven't got that figured out.</p>
<hr />
<p>Started seeing the same issue after updating Jenkins (and plugins) only - not the K8S cluster.</p>
<p>Except I'm getting 5000 instead of 7000 milliseconds.</p>
<pre><code>io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
</code></pre>
<p>Digging into the stacktrace and source on Github leads back to <a href="https://github.com/fabric8io/kubernetes-client/blob/677884ca88f3911f9b41ec919db152d441ad2cdd/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/Config.java#L135" rel="nofollow noreferrer">this</a> default timeout (which hasn't changed in 6 years), so somehow you seem to have a non-default value.</p>
<pre><code>public static final Long DEFAULT_WEBSOCKET_TIMEOUT = 5 * 1000L;
</code></pre>
<p>It seems it can <a href="https://github.com/fabric8io/kubernetes-client/blob/677884ca88f3911f9b41ec919db152d441ad2cdd/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/Config.java#L421" rel="nofollow noreferrer">be overridden with the environment variable</a> <strong>KUBERNETES_WEBSOCKET_TIMEOUT_SYSTEM_PROPERTY</strong> on the pod. I just tried raising mine to 10 s to see if it makes a difference.</p>
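<p>If you want to try the same thing, one place to set it is as an environment variable on the Jenkins controller container. This is only a rough fragment; I haven't verified the exact variable name or whether the value is interpreted as milliseconds, so check the fabric8 <code>Config</code> source linked above first:</p>
<pre class="lang-yaml prettyprint-override"><code># fragment of the Jenkins controller pod/StatefulSet spec (names are placeholders)
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      env:
        - name: KUBERNETES_WEBSOCKET_TIMEOUT_SYSTEM_PROPERTY  # name as mentioned above, verify in fabric8 Config
          value: "10000"                                      # assumed to be milliseconds (10 s)
</code></pre>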
<p>Might be worth a try. If so, it could indicate that the cluster API server is somehow slower to respond than expected. I'm not aware of anything that should have affected cluster performance around the time of the upgrade in my case, and since the default timeout hasn't changed for years, it seems odd. Maybe some of the code was refactored so that it no longer ignores timeouts / retries, etc. - only guessing.</p>
<p>EDIT: I'm running on a bare-metal cluster.</p>
| Dennis |
<p>I have 5M~ messages (7GB~ in total) on my backlogged GCP Pub/Sub subscription and want to pull as many of them as possible. I am using synchronous pull with the settings below, waiting for 3 minutes to let messages pile up, and then sending them to another DB.</p>
<pre><code> defaultSettings := &pubsub.ReceiveSettings{
MaxExtension: 10 * time.Minute,
MaxOutstandingMessages: 100000,
MaxOutstandingBytes: 128e6, // 128 MB
NumGoroutines: 1,
Synchronous: true,
}
</code></pre>
<p>The problem is that if I have around 5 pods on my Kubernetes cluster, the pods are able to pull nearly 90k~ messages in almost every round (3-minute period). However, when I increase the number of pods to 20, in the first or second round each pod is able to retrieve 90k~ messages, but after a while the pull request count drastically drops and each pod receives only 1k-5k~ messages per round. I have investigated the Go library's sync pull mechanism and know that without successfully acking messages you are not able to request new ones, so the pull request count may drop to avoid exceeding <code>MaxOutstandingMessages</code>; but I am scaling my pods down to zero and starting fresh pods while there are still millions of unacked messages in my subscription, and they still get a very low number of messages in 3 minutes, whether with 5 or 20 pods. After around 20-30 minutes they again receive 90k~ messages each and then drop to very low levels after a while (checking from the metrics page). Another interesting thing is that while my fresh pods receive a very low number of messages, my local computer connected to the same subscription gets 90k~ messages in each round.</p>
<p>I have read the quotas and limits page of Pub/Sub; bandwidth quotas are extremely high (240,000,000 kB per minute (4 GB/s) in large regions). I tried a lot of things but couldn't understand why the pull request count drops massively when I start fresh pods. Is there some connection or bandwidth limitation for Kubernetes cluster nodes on GCP or on the Pub/Sub side? Receiving messages in high volume is critical for my task.</p>
| mustafa.yavuz | <p>If you are using synchronous pull, I suggest using <a href="https://cloud.google.com/pubsub/docs/pull#streamingpull" rel="nofollow noreferrer"><code>StreamingPull</code></a> at your scale of Pub/Sub usage.</p>
<blockquote>
<p>Note that to achieve low message delivery latency with synchronous
pull, it is important to have many simultaneously outstanding pull
requests. As the throughput of the topic increases, more pull requests
are necessary. In general, asynchronous pull is preferable for
latency-sensitive applications.</p>
</blockquote>
<p>It is expected that, for a high throughput scenario and synchronous pull, there should always be many idle requests.</p>
<p>A synchronous pull request establishes a connection to one specific server (process). A high throughput topic is handled by many servers. Messages coming in will go to only a few servers, from 3 to 5. Those servers should have an idle process already connected, to be able to quickly forward messages.</p>
<p>This process conflicts with CPU-based scaling: idle connections don't cause CPU load. At the very least, there should be many more than 10 threads per pod to make CPU-based scaling work.</p>
<p>Also, you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/samples/container-pubsub-pull" rel="nofollow noreferrer"><code>Horizontal Pod Autoscaler (HPA)</code></a> configured for the Pub/Sub-consuming GKE pods. With the HPA, you can scale on CPU usage, as in the sketch below.</p>
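<p>For the HPA route, a minimal CPU-based HorizontalPodAutoscaler could look like this. The deployment name, replica bounds and threshold are placeholders, not taken from your setup:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-consumer-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-consumer            # placeholder: your consumer Deployment
  minReplicas: 5
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60     # scale out when average CPU goes above 60%
</code></pre>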
<p>My last recommendation would be to consider <a href="https://cloud.google.com/pubsub/docs/stream-messages-dataflow" rel="nofollow noreferrer"><code>Dataflow</code></a> for your workload, consuming from Pub/Sub.</p>
| Raul Saucedo |
<ul>
<li>minikube v1.25.1 on Microsoft Windows 10 Home Single Language 10.0.19043 Build 19043
<ul>
<li>MINIKUBE_HOME=C:\os\minikube\Minikube</li>
</ul>
</li>
<li>Automatically selected the virtualbox driver</li>
<li>Starting control plane node minikube in cluster minikube</li>
<li>Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</li>
<li>Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...</li>
<li>Failed to start virtualbox VM. Running "minikube delete" may fix it: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</li>
</ul>
<p>X Exiting due to HOST_VIRT_UNAVAILABLE: Failed to start host: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</p>
<ul>
<li>Suggestion: Virtualization support is disabled on your computer. If you are running minikube within a VM, try '--driver=docker'. Otherwise, consult your systems BIOS manual for how to enable virtualization.</li>
<li>Related issues:
<ul>
<li><a href="https://github.com/kubernetes/minikube/issues/3900" rel="noreferrer">https://github.com/kubernetes/minikube/issues/3900</a></li>
<li><a href="https://github.com/kubernetes/minikube/issues/4730" rel="noreferrer">https://github.com/kubernetes/minikube/issues/4730</a></li>
</ul>
</li>
</ul>
| Pramod Pant | <p>Try the command below - it should work. I faced a similar issue on my laptop.
I tried multiple ways to resolve it, but nothing worked for me.
Also, the error message states that VT-X/AMD-V must be enabled in the BIOS (which is mandatory), but that option could not be found in my BIOS settings.
Running the command below made minikube start normally.</p>
<p><strong>minikube start --no-vtx-check</strong></p>
<p>Refer to this thread:
<a href="https://www.virtualbox.org/ticket/4032" rel="noreferrer">https://www.virtualbox.org/ticket/4032</a></p>
| Guru Tata |
<p>I have two namespaces: <code>n1</code> (for running EC2 instances) and <code>fargate</code> (connected to Fargate profile).<br />
There is a <code>data-processor</code> service account in n1.<br />
I'd like to allow the data-processor account to run pods in the <code>fargate</code> namespace.</p>
<p>Now, I'm getting the following error:</p>
<pre><code>Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://<cluster-id>.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/fargate/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:n1:data-processor" cannot create resource "pods" in API group "" in the namespace "fargate".
</code></pre>
| Lesha Pipiev | <p>You haven't provided any of the roles or rolebindings so we can't see what permissions you have set already, but if you apply the following manifest it should work:</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: data-processor-role
rules:
- apiGroups: ['']
resources: ['pods']
verbs: ['get', 'watch', 'list', 'create', 'patch', 'update', 'delete']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: data-processor-rolebinding
namespace: fargate
subjects:
- kind: ServiceAccount
name: data-processor
namespace: n1
roleRef:
kind: ClusterRole
name: data-processor-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>That should give your data-processor service account read/write permissions on pods in the fargate namespace.</p>
| banbone |
<p>So we are in the process of moving from yarn 1.x to yarn 2 (yarn 3.1.1) and I'm getting a little confused on how to configure yarn in my CI/CD config. As of right now our pipeline does the following to deploy to our kubernetes cluster:</p>
<p>On branch PR:</p>
<ol>
<li><p>Obtain branch repo in gitlab runner</p>
</li>
<li><p>Lint</p>
</li>
<li><p>Run jest</p>
</li>
<li><p>Build with environment variables, dependencies, and devdependencies</p>
</li>
<li><p>Publish image to container registry with tag <code>test</code></p>
<p>a. If successful, allow merge to main</p>
</li>
<li><p>Kubernetes watches for updates to test and deploys a test pod to cluster</p>
</li>
</ol>
<p>On merge to main:</p>
<ol>
<li>Obtain main repo in gitlab runner</li>
<li>Lint</li>
<li>Run jest</li>
<li>Build with environment variables and dependencies</li>
<li>Publish image to container registry with tag <code>latest</code></li>
<li>Kubernetes watches for updates to latest and deploys a staging pod to cluster</li>
</ol>
<p>(NOTE: For full-blown production releases we will be using the release feature to manually deploy releases to the production server)</p>
<p>The issue is that we are using yarn 2 with zero-installs, and in the past we have been able to prevent the production environment from using any dev dependencies by running <code>yarn install --production</code>. In yarn 2 this command is deprecated.</p>
<p>Is there any ideal solution to prevent dev dependencies from being installed in production? I've seen some posts mention using workspaces, but that seems to be more tailored towards monorepos where there is more than one application.</p>
<p>Thanks in advance for any help!</p>
| liamlows | <p>I had the same question and came to the same conclusion as you. I could not find an easy way to perform a production build on yarn 2. Yarn Workspaces comes closest but I did find the paragraph below in the documentation:</p>
<blockquote>
<p>Note that this command is only very moderately useful when using zero-installs, since the cache will contain all the packages anyway - meaning that the only difference between a full install and a focused install would just be a few extra lines in the .pnp.cjs file, at the cost of introducing an extra complexity.</p>
</blockquote>
<p>From: <a href="https://yarnpkg.com/cli/workspaces/focus#options-production" rel="nofollow noreferrer">https://yarnpkg.com/cli/workspaces/focus#options-production</a></p>
<p>Does that mean that there essentially is no production install? It would be nice if that was officially addressed somewhere but this was the closest I could find.</p>
<p>Personally, I am using NextJS and upgraded my project to Yarn 2. The features of Yarn 2 seem to work (no node_modules folder) but I can still use <code>yarn build</code> from NextJS to create a production build with output in the <code>.next</code> folder.</p>
| Ezra Friedlander |
<p>I am new to Kubernetes and currently deploy an application on AWS EKS.</p>
<p>I want to configure a <code>Service</code> of my K8s cluster deployed on AWS EKS.</p>
<p>Here is the description of my issue: I did an experiment. I spun up <strong>2</strong> Pods running the same web application and exposed them using a Service (of <code>LoadBalancer</code> type). Then I got the external IP of that Service and found that the requests I sent were not distributed evenly to every <code>Pod</code> under the <code>Service</code> I created. To be more precise, I sent <strong>3</strong> requests and all three requests were processed by <strong>the same Pod</strong>.</p>
<p>Therefore, I want to configure the load balancer algorithm to be <code>round robin</code> or <code>least_connection</code> to resolve this issue.</p>
<p>I asked a similar question before and was advised to try the IPVS mode of <code>kube-proxy</code>, but I did not get detailed instructions on how to apply that config, and did not find any useful material online. If the IPVS mode is a feasible solution to this issue, please provide some detailed instructions.</p>
<p>Thanks!</p>
| Xiao Ma | <p>I had the exact same issue, and while using the <code>-H "Connection: close"</code> header when testing externally load-balanced connections helps, I still wanted inter-service communication to benefit from having IPVS with <code>rr</code> or <code>sed</code>.</p>
<p>To summarize, you will need to set up the following dependencies on the nodes. I would suggest adding these to your cloud config.</p>
<pre><code>#!/bin/bash
sudo yum install -y ipvsadm
sudo ipvsadm -l
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack_ipv4
</code></pre>
<p>Once that is done, you will need to edit your <code>kube-proxy-config</code> in the <code>kube-system</code> namespace to have <code>mode: ipvs</code> and <code>scheduler: <desired lb algo></code>, as sketched below.</p>
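<p>The <code>kube-proxy-config</code> ConfigMap wraps a <code>KubeProxyConfiguration</code> object; after the change, the relevant part would look roughly like this (only the touched fields are shown, leave the rest of the generated config as-is):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"       # switch from the default iptables mode
ipvs:
  scheduler: "rr"  # or one of the other algorithms listed below
</code></pre>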
<p>Lastly, you will need to update the container commands for the <code>kube-proxy</code> daemonset with the appropriate proxy-mode flag <code>--proxy-mode=ipvs</code> and <code>--ipvs-scheduler=<desired lb algo></code>.</p>
<p>Following are the available lb algos for IPVS:</p>
<pre><code>rr: round-robin
lc: least connection
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
</code></pre>
<p>Source: <a href="https://medium.com/@selfieblue/how-to-enable-ipvs-mode-on-aws-eks-7159ec676965" rel="nofollow noreferrer">https://medium.com/@selfieblue/how-to-enable-ipvs-mode-on-aws-eks-7159ec676965</a></p>
| sesl |
<p>We are trying to configure local storage in Rancher, and the storage provisioner was configured successfully.
But when I create a PVC using the local-storage StorageClass, it stays in Pending state with the below error.</p>
<pre><code> Normal ExternalProvisioning 4m31s (x62 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 External provisioner is provisioning volume for claim "local-path-storage/test-pod-pvc-local"
Warning ProvisioningFailed 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 failed to provision volume with StorageClass "local-path": configuration error, no node was specified
[root@n01-deployer local]#
</code></pre>
<p>sc configuration</p>
<pre><code>[root@n01-deployer local]# kubectl edit sc local-path
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-path"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
creationTimestamp: "2022-02-07T16:12:58Z"
name: local-path
resourceVersion: "1501275"
uid: e8060018-e4a8-47f9-8dd4-c63f28eef3f2
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: Immediate
</code></pre>
<p>PVC configuration</p>
<pre><code>[root@n01-deployer local]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: local-path-storage
name: test-pod-pvc-local-1
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
</code></pre>
<p>I have mounted the local volume on all the worker nodes, but my PVC is still not getting created. Can someone please help me solve this issue?</p>
| SARATH CHANDRAN | <p>The key to your problem was updating PSP.</p>
<p>I would like to add something about PSP:</p>
<p>According to <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/podsecuritypolicy" rel="nofollow noreferrer">this documentation</a> and <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">this blog</a>:</p>
<blockquote>
<p>As of Kubernetes <strong>version 1.21</strong>, PodSecurityPolicy (beta) is deprecated. The Kubernetes project aims to shut the feature down in <strong>version 1.25</strong>.</p>
</blockquote>
<p>However, I haven't found any such information in Rancher's case (the documentation is up to date).</p>
<blockquote>
<p>Rancher ships with two default Pod Security Policies (PSPs): the <code>restricted</code> and <code>unrestricted</code> policies.</p>
</blockquote>
<hr />
<p>See also:</p>
<ul>
<li><a href="https://codilime.com/blog/the-benefits-of-pod-security-policy-a-use-case/" rel="nofollow noreferrer"><em>The benefits of Pod Security Policy</em></a></li>
<li><a href="https://docs.bitnami.com/tutorials/secure-kubernetes-cluster-psp/" rel="nofollow noreferrer"><em>Secure Kubernetes cluster PSP</em></a></li>
<li><a href="https://rancher.com/docs/rancher/v2.6/en/cluster-admin/pod-security-policies/" rel="nofollow noreferrer"><em>Pod Security Policies</em></a></li>
</ul>
| kkopczak |
<p>I have deployed a nginx ingress in my baremetal k8s cluster - using metallb. Previously I did it with the</p>
<blockquote>
<p>apiVersion: networking.k8s.io/v1beta1</p>
</blockquote>
<p>tag and it worked fine. However, that tag is no longer supported so I'm using</p>
<blockquote>
<p>apiVersion: networking.k8s.io/v1</p>
</blockquote>
<p>I have 2 frontends, both in React, and several backends (about 7 of them).</p>
<p>I have 2 issues:</p>
<ol>
<li>How do I deploy the two React frontends with the default path "/",
since I need to use react-router?
N.B. They both have "/", "/login", "/logout", ... paths. It is hosted locally, so there is no hostname.
My YAML file looks like this:</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web1-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web1-service
port:
number: 4000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web2-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web2-service
port:
number: 5000
</code></pre>
<p>If I run this, only one will work and it is very slow, perhaps because queries go to both backends or something. How do I solve this?</p>
<ol start="2">
<li>One of my backends is a maptiler. It does not load correctly, i.e. running it outside the cluster it loads the CSS and files properly, but when adding it to the cluster, it has <a href="https://i.stack.imgur.com/NBMCO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NBMCO.png" alt="errors" /></a>
The unexpected '<' is for the tag in the container.</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-map
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /styles/*
pathType: Exact
backend:
service:
name: map-service
port:
number: 7000
ingressClassName: nginx
</code></pre>
<p>Is there something wrong with the order?</p>
| Denn | <p>Check whether you are using the correct syntax; I made the following mistake and ran into this same problem.
I wrote it like this:</p>
<pre><code>...
ingressClassName: nginx
rules:
- host: test.webapi-auth.mpbr.renola.ru
- http:
paths:
- backend:
service:
name: service1
port:
number: 80
path: /(.*)
pathType: Prefix
...
</code></pre>
<p>but it needed to be:</p>
<pre><code>...
ingressClassName: nginx
rules:
- host: test.webapi-auth.mpbr.renola.ru
http:
paths:
- backend:
service:
name: service1
port:
number: 80
path: /(.*)
pathType: Prefix
...
</code></pre>
<p><strong>note that there should not be a "-" character before http</strong></p>
| a_poluyanov |
<p>I'm working with Azure Kubernetes Service (AKS) and want to ensure that my network security is in line with industry standards and best practices. By default, AKS generates a set of network security group without specific rules, but I'd like to add additional rules to improve security.</p>
<p>What are the common security rules that should be added to AKS's default network security rules to enhance security and ensure compliance with industry standards? and any other best practices for securing an AKS cluster.</p>
<p>I'd appreciate any guidance or examples on how to implement these rules effectively within an AKS cluster.</p>
<p>Denying All Inbound by Default would block access to images' repositories.
Would you allow port 443 for image pull from Docker hub for example?</p>
| cloudspawn | <p>You can add a network security group rule to allow outbound HTTPS (443) traffic, as shown below:</p>
<p><img src="https://i.imgur.com/tnACYeP.png" alt="enter image description here" /></p>
<p>You can allow traffic for SSH, HTTP, HTTPS, and other protocols that are required for your applications to function, as shown below.</p>
<p><img src="https://i.imgur.com/jb99see.png" alt="enter image description here" /></p>
<p>In the AKS cluster configuration, under Networking, allow inbound traffic only from specific IP addresses. This lets you restrict access to your AKS cluster to specific clients, such as your on-premises network or other trusted Azure resources.</p>
<p><img src="https://i.imgur.com/y8EF2xs.png" alt="enter image description here" /></p>
<p>Deny all inbound traffic by default and only allow traffic that is explicitly required for your applications to function. This will prevent unauthorized access to your cluster.</p>
<p>Regarding allowing port 443 for image pulls from Docker Hub: it is recommended to allow only the necessary ports and addresses required to control egress traffic in Kubernetes.</p>
<p><em><strong>Reference</strong></em>:</p>
<p><a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/outbound-rules-control-egress.md" rel="nofollow noreferrer">azure-docs/articles/aks/outbound-rules-control-egress.md at main · MicrosoftDocs/azure-docs · GitHub</a></p>
| Imran |
<p>I have a number of <code>.yml</code> files with deployments and their services and need to set <code>.spec.progressDeadlineSeconds</code> on all of them, preferably from one common place.</p>
| noname7619 | <p>Some <code>yq</code> with a simple <code>bash</code> script should do the job:</p>
<pre><code>#!/usr/bin/env bash
set -euo pipefail
ALL_FILES=$(ls | grep yml)
for file in $ALL_FILES; do
echo "File $file"
  yq -i '.spec += {"progressDeadlineSeconds": 10}' $file
done
</code></pre>
| Michał Lewndowski |
<p>I am running a GKE Cluster with the GCP Load Balancer as my Ingress Controller.</p>
<p>However, I noticed that a specific request to the service hosted in this GKE cluster was being rejected with a 502 error.</p>
<p>I checked the GCP Loadbalancer logs and I was able to see a return with <code>statusDetails: "failed_to_pick_backend"</code>.</p>
<p>The health field says that my backend is healthy. Just to make sure, I changed the health check type from HTTP to TCP to see if anything would change, but it stayed green.</p>
<p>So, what could I be missing if my GCP load balancer says that my backend is reachable but at the same time returns a <code>failed_to_pick_backend</code> message?</p>
<p>I really appreciate some help on that.</p>
<p>My ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app
annotations:
kubernetes.io/ingress.global-static-ip-name: my-app
networking.gke.io/managed-certificates: my-app-managed-cert
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "false"
spec:
defaultBackend:
service:
name: my-app-svc
port:
number: 3000
</code></pre>
<p>My service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app-svc
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 3000
targetPort: 3000
type: NodePort
</code></pre>
<p>Running <code>kubectl describe pod <my-pod></code> I can see</p>
<pre><code>Conditions:
Type Status
cloud.google.com/load-balancer-neg-ready True
Initialized True
Ready True
ContainersReady True
PodScheduled True
</code></pre>
| Leonardo Henrique | <p>Can you verify the timer of your health check? If it is set to 1 second, or lower than the health checks that were set to a higher value, the <code>failed_to_pick_backend</code> error will normally occur.</p>
<p>I recommend changing the timer to the default value of 5 seconds or higher and then testing a new deployment. You can check more details about the health check timers at this link [1].</p>
<p>[1] <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/health-check-concepts</a></p>
| Marvin Lucero |
<p>We are using fargate type EKS pods and we would like to setup customized fluent bit logging, which will capture logs from different specified paths in different services containers and outputs them to cloudwatch.</p>
<p>For example,</p>
<p>Service 1 keeps logs at the below paths:</p>
<pre><code>/var/log/nginx/*
/usr01/app/product/logs/*
</code></pre>
<p>Service 2 keeps logs at the below paths:</p>
<pre><code>/jboss/product/logs/*
/birt/1.0.0/deploy/birt.war/logs"
/birt/1.0.0/logs/*
</code></pre>
<p>and so on ...</p>
<p>Therefore, we want to customize the Fluent Bit ConfigMap to capture logs generated at different paths for different services, and they should be prefixed with the service name. Please help us achieve that.</p>
| Nitin G | <p>As explained in the documentation for Fargate logging, you cannot define input blocks for fluent-bit configuration.</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html</a></p>
<blockquote>
<p>In a typical Fluent Conf the main sections included are Service,
Input, Filter, and Output. The Fargate log router however, only
accepts:</p>
<p>The Filter and Output sections and manages the Service and Input
sections itself.</p>
<p>The Parser section.</p>
<p>if you provide any other section than Filter, Output, and Parser, the
sections are rejected.</p>
</blockquote>
<p>You can try a sidecar container approach (which is a little more expensive) for your use case:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: example
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: varlog
mountPath: /var/log/nginx
- name: fluent-bit
image: amazon/aws-for-fluent-bit:2.14.0
env:
- name: HOST_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
limits:
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
- name: varlog
mountPath: /var/log/nginx
readOnly: true
volumes:
- name: fluent-bit-config
configMap:
name: fluent-bit-config
- name: varlog
emptyDir: {}
terminationGracePeriodSeconds: 10
</code></pre>
<p>In this approach you use a fluentbit sidecar container and specify your custom settings for the input blocks. You can use this simple fluent-bit configmap as an example to test:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: example
labels:
k8s-app: fluent-bit
data:
fluent-bit.conf: |
[SERVICE]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE application-log.conf
application-log.conf: |
[INPUT]
Name tail
Path /var/log/nginx/*.log
[OUTPUT]
Name stdout
Match *
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%LZ
</code></pre>
<p>You can test this approach by running the example above and sending an http request to the nginx pod. After that you should see in the fluent-bit sidecar logs something similar to this:</p>
<pre><code>AWS for Fluent Bit Container Image Version 2.14.0
Fluent Bit v1.7.4
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2021/09/27 19:06:06] [ info] [engine] started (pid=1)
[2021/09/27 19:06:06] [ info] [storage] version=1.1.1, initializing...
[2021/09/27 19:06:06] [ info] [storage] in-memory
[2021/09/27 19:06:06] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/09/27 19:06:06] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2021/09/27 19:06:06] [ info] [sp] stream processor started
[2021/09/27 19:06:06] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1592225 watch_fd=1 name=/var/log/nginx/access.log
[2021/09/27 19:06:06] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1592223 watch_fd=2 name=/var/log/nginx/error.log
[0] tail.0: [1632769609.617430221, {"log"=>"10.42.0.139 - - [27/Sep/2021:19:06:49 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget" "-""}]
[0] tail.0: [1632769612.241034110, {"log"=>"10.42.0.139 - - [27/Sep/2021:19:06:52 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget" "-""}]
</code></pre>
| Emidio Neto |
<p>Below is my kubernetes file and I need to do two things</p>
<ol>
<li>I need to mount a folder containing a file</li>
<li>I need to mount a file with a startup script</li>
</ol>
<p>I have both files in my local /tmp/zoo folder, but the files never appear in /bitnami/zookeeper inside the pod.</p>
<p>The below is the updated Service,Deployment,PVC and PV</p>
<h2>kubernetes.yaml</h2>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
labels:
io.kompose.service: zookeeper
name: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
io.kompose.service: zookeeper
type: NodePort
status:
loadBalancer: {}
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: zookeeper
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zoo
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
#hostPath:
#path: /tmp/tmp1
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: bitnamidockerzookeeper-zookeeper-data
type: local
name: bitnamidockerzookeeper-zookeeper-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: v1
kind: PersistentVolume
metadata:
name: foo
spec:
storageClassName: manual
claimRef:
name: bitnamidockerzookeeper-zookeeper-data
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostPath:
path: /tmp/tmp1
status: {}
kind: List
metadata: {}
</code></pre>
| Madhuri Devidi | <p>A Service cannot be assigned a volume. In line 4 of your YAML, you specify "Service" when it should be "Pod", and every Kubernetes resource must have a name, which you can add under metadata. That should fix the simple part of the problem.</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Pod #POD
metadata:
name: my-pod #A RESOURCE NEEDS A NAME
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zookeeper
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
</code></pre>
<p>Now, I don't know what you're using, but hostPath is really only suitable for a local cluster like Minikube; in production things change drastically. If everything is local, you need to have the directory "/tmp/zoo" on the node. NOTE: not on your local PC, but inside the node. For example, if you use minikube, run <code>minikube ssh</code> to enter the node and copy "/tmp/zoo" there. An excellent guide to this is given in the official Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a></p>
| Alonso Valdivia |