<p>I'm deploying a <strong>gRPC</strong> application that uses <strong>TLS</strong> (<strong>grpcs</strong> to be precise). The app is deployed to a <strong>Kubernetes</strong> cluster in the <strong>AWS</strong> cloud created using the <strong>Kops</strong> tool. I'm using self-signed certs for auth (for now). The problem is I can't find any guideline on <strong>how to properly expose such a service to the outside world</strong>. There are bits and pieces here and there, but nothing that seems to do what I want. An additional level of complexity: I need to expose several ports on the same service, so I can't use Ingress rules for k8s, as my client application restricts using any paths in URLs and requires me to specify only host and port. Inside the cluster everything is set up and works as expected, with authorization and proper DNS resolution.</p>
<p>So ideally I want:</p>
<ul>
<li>A solution that proxies grpcs requests to a specific service in the cluster.</li>
<li>Or a solution that introduces a load balancer that does the same.</li>
</ul>
<p>Thank you very much for your time!</p>
| Ilya | <p>OK, that was enough to fix the issue. Notice the annotations section.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "9999"
spec:
ports:
- port: 9999
targetPort: 9999
protocol: TCP
selector:
app: my-service
type: LoadBalancer
</code></pre>
<p>Then I had to add an A record with the balancer's DNS name to Route 53, and it worked.</p>
| Ilya |
<p>I have my own cluster, which has a control plane and a worker node. I am trying to connect to PostgreSQL in the cluster using <code>psql -h <ingress-url> -p 5432 -U postgres -W</code>, but it returns an error:</p>
<pre><code>psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
</code></pre>
<p>But <code>curl <ingress-url></code> responds like this:</p>
<pre><code>2022-05-27 04:00:50.406 UTC [208] LOG: invalid length of startup packet
</code></pre>
<p>That response means my request has reached the PostgreSQL server, so why can't I connect to it?</p>
<p>Here are my resources:</p>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: preflight
                port:
                  number: 5432
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      helm.sh/chart: "preflight"
      helm.sh/version: "0.1.0"
  template:
    metadata:
      labels:
        helm.sh/chart: "preflight"
        helm.sh/version: "0.1.0"
    spec:
      containers:
        - name: preflight
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: preflight
                  key: postgresPassword
          ports:
            - containerPort: 5432
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: preflight
  labels:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
spec:
  type: ClusterIP
  selector:
    helm.sh/chart: "preflight"
    helm.sh/version: "0.1.0"
  ports:
    - port: 5432
      targetPort: 5432
</code></pre>
<p>The ConfigMap value is <code>POSTGRES_PASSWORD=postgres</code>.</p>
| wwDig | <p>The Nginx ingress controller is an HTTP proxy. You are trying to route PostgreSQL traffic over HTTP, and that simply can't work.</p>
<p>What you need to do is expose a TCP service through nginx ingress controller. See <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">this page</a>.</p>
<p>In a nutshell, you need to create a configmap like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: <namespace where you deployed ingress controller>
data:
  5432: "default/preflight:5432"
</code></pre>
<p>then ensure your nginx ingress controller starts with the <code>--tcp-services-configmap=tcp-services</code> flag.</p>
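<p>For reference, this is roughly what that flag looks like in the controller's Deployment spec. The container name, image tag and the <code>$(POD_NAMESPACE)</code> variable follow the usual ingress-nginx manifests and are assumptions here, so adjust them to your install:</p>
<pre><code>containers:
  - name: controller                                              # typical container name (assumption)
    image: registry.k8s.io/ingress-nginx/controller:v1.8.0        # hypothetical tag
    args:
      - /nginx-ingress-controller
      - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services    # points at the ConfigMap above
</code></pre>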
<p>Finally, ensure the nginx ingress controller Service (the one with type == <code>LoadBalancer</code>) exposes port 5432:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  ...
spec:
  type: LoadBalancer
  ports:
    ...
    - name: pgsql
      port: 5432
      targetPort: 5432
      protocol: TCP
  ...
</code></pre>
<p>Please note that your provider's Load Balancer should support TCP backends (most should do, but worth mentioning).</p>
| whites11 |
<p>I restarted my system today. After that, my main system and the web browser are not connecting to show the Kubernetes GUI.</p>
<p>When I ran the command <code>systemctl status kube-apiserver.service</code>, it gave the output shown below:</p>
<pre><code>kube-apiserver.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
</code></pre>
<p>How can <code>api-server</code> be restarted?</p>
| Deepak Nayak | <p><strong>Did you download and install the <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#download-and-install-the-kubernetes-controller-binaries" rel="noreferrer">Kubernetes Controller Binaries</a> directly?</strong></p>
<p>1 ) <strong>If so</strong>, check if the <code>kube-apiserver.service</code> systemd unit file exists:</p>
<pre><code>cat /etc/systemd/system/kube-apiserver.service
</code></pre>
<p>2 ) <strong>If not</strong>, you probably installed K8S with <a href="/questions/tagged/kubeadm" class="post-tag" title="show questions tagged 'kubeadm'" rel="tag">kubeadm</a>. <br>
<strong>With this setup the <em>kubeapi-server</em> is running as a pod on the master node</strong>:</p>
<pre><code>kubectl get pods -n kube-system

NAME                                 READY   STATUS
coredns-f9fd979d6-jsn6w              1/1     Running ..
coredns-f9fd979d6-tv5j6              1/1     Running ..
etcd-master-k8s                      1/1     Running ..
kube-apiserver-master-k8s            1/1     Running ..   #<--- Here
kube-controller-manager-master-k8s   1/1     Running ..
kube-proxy-5kzbc                     1/1     Running ..
kube-scheduler-master-k8s            1/1     Running ..
</code></pre>
<p><strong>And not as a systemd service.</strong></p>
<p>So, because you can't restart pods in K8S you'll have to delete it:</p>
<pre><code>kubectl delete pod/kube-apiserver-master-k8s -n kube-system
</code></pre>
<p>And a new pod will be created immediately.</p>
<hr />
<p>(*) When you run <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="noreferrer">kubeadm init</a> you should see the creation of the manifests for the control plane <a href="https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/" rel="noreferrer">static Pods</a>:</p>
<pre><code>.
.
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
.
.
</code></pre>
<p>The corresponding yamls:</p>
<pre><code>ubuntu@master-k8s:/etc/kubernetes/manifests$ ls -la
total 24
drwxr-xr-x 2 root root 4096 Oct 14 00:13 .
drwxr-xr-x 4 root root 4096 Sep 29 02:30 ..
-rw------- 1 root root 2099 Sep 29 02:30 etcd.yaml
-rw------- 1 root root 3863 Oct 14 00:13 kube-apiserver.yaml <----- Here
-rw------- 1 root root 3496 Sep 29 02:30 kube-controller-manager.yaml
-rw------- 1 root root 1384 Sep 29 02:30 kube-scheduler.yaml
</code></pre>
<p>And the <em>kube-apiserver</em> spec:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.100.102.5:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.100.102.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    .
    .
    .
</code></pre>
| Rotem jackoby |
<p>Node.js has introduced the Cluster module to scale up applications for performance optimization. We have Kubernetes doing the same thing.</p>
<p>I'm confused about whether both serve the same purpose. My assumption is that clustering can spawn up to a maximum of 8 processes (if there are 4 CPU cores with 2 threads each), while there is no such limitation in Kubernetes.</p>
| Rajneesh Mishra | <p>Kubernetes and the Node.js <a href="https://nodejs.org/api/cluster.html" rel="nofollow noreferrer">Cluster module</a> operate at different levels.</p>
<p>Kubernetes is in charge of orchestrating containers (amongst <em>many</em> other things). From its perspective, there are resources to be allocated, and deployments that require or use a specific amount of resources.</p>
<p>The Node.js Cluster module behaves as a load-balancer that forks N times and spreads the requests between the various processes it owns, all within the limits defined by its environment (CPU, RAM, Network, etc).</p>
<p>In practice, Kubernetes has the possibility to spawn additional Node.js containers (scaling horizontally). On the other hand, Node.js can only grow within its environment (scaling vertically). You can read about this <a href="https://en.wikipedia.org/wiki/Scalability#Horizontal_(scale_out)_and_vertical_scaling_(scale_up)" rel="nofollow noreferrer">here</a>.</p>
<p>While from a performance perspective both approaches might be relatively similar (you can use the same number of cores in both cases), the problem with vertically scaling <em>on a single machine</em> is that you lose the high-availability aspect that Kubernetes provides. On the other hand, if you decide to deploy several Node.js containers on different machines, you are much better prepared for the day one of them goes down.</p>
| aymericbeaumet |
<p>Trying to plan out a deployment for an application and am wondering if it makes sense to have multiple containers in a pod vs putting them in separate pods. I expect one of the containers to potentially be operating near its allocated memory limit. My understanding is that this presents the risk of this container getting OOMKilled. If that's the case, would it restart the entire pod (so the other container in the pod is restarted as well) or will it only restart the OOMKilled container?</p>
| Sakeeb Hossain | <p>No, only the specific container.</p>
<p>For the whole pod to be recreated there needs to be a change in the Pod's <code>ownerObject</code> (typically a <code>ReplicaSet</code>) or a scheduling decision by <code>kube-scheduler</code>.</p>
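<p>As an illustration (the names, images and limits below are hypothetical), in a Pod like this only the container that exceeds its memory limit is OOMKilled and restarted by the kubelet, while its sibling keeps running:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers            # hypothetical example
spec:
  containers:
    - name: api                   # the container operating near its limit
      image: my-api:latest        # hypothetical image
      resources:
        limits:
          memory: "256Mi"
    - name: sidecar               # keeps running if 'api' is OOMKilled
      image: busybox
      command: ["sleep", "3600"]
      resources:
        limits:
          memory: "64Mi"
</code></pre>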
| whites11 |
<p>I'm deploying a test application onto kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.</p>
<p>I'm passing in these details using two methods - a <code>ConfigMap</code> and a <code>Secret</code>. The username (<code>DB_USERNAME</code>) and connection url (<code>DB_URL</code>) are passed via a <code>ConfigMap</code>, while the DB password is passed in as a secret (<code>DB_PASSWORD</code>).</p>
<p>My issue is that while the values passed via <code>ConfigMap</code> are fine, the <code>DB_PASSWORD</code> from the secret appears jumbled - like there's some encoding issue (see image below).</p>
<p><a href="https://i.stack.imgur.com/jmB4W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jmB4W.png" alt="DB_PASSWORD not showing up properly" /></a></p>
<p>My deployment yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          envFrom:
            - configMapRef:
                name: gweb-cm
            - secretRef:
                name: password
</code></pre>
<p>My <code>ConfigMap</code> and <code>Secret</code> yaml</p>
<pre><code>apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
</code></pre>
<p>Not sure if I'm missing something in my Secret definition?</p>
| tvicky4j247 | <p>The secret value should be base64 encoded. Instead of <code>test</code>, use the output of</p>
<pre class="lang-bash prettyprint-override"><code>echo -n 'test' | base64
</code></pre>
<p>P.S. the Secret's type should be <code>Opaque</code>, not <code>Generic</code></p>
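<p>Put together, the corrected Secret would look roughly like this (the base64 string is simply the encoding of <code>test</code>):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA==   # output of: echo -n 'test' | base64
</code></pre>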
| Grisha Levit |
<p>I have an existing generic Kubernetes secret that I exported as YAML (using <code>kubectl get secret -o yaml > secret.yaml</code>), which looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Secret
apiVersion: v1
type: Opaque
metadata:
name: some-secret-key-files
data:
host1.example.com.key: c2VjcmV0IG51bWJlciBvbmUK
host2.example.com.key: c2VjcmV0IG51bWJlciB0d28K
</code></pre>
<p>Now I have a new key file named <code>host3.example.com.key</code>, with these contents:</p>
<pre><code>secret number three
</code></pre>
<p>What is easiest way to add the contents of this file base64-encoded to <code>secret.yaml</code>, so that in the end it looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Secret
apiVersion: v1
type: Opaque
metadata:
name: some-secret-key-files
data:
host1.example.com.key: c2VjcmV0IG51bWJlciBvbmUK
host2.example.com.key: c2VjcmV0IG51bWJlciB0d28K
host3.example.com.key: c2VjcmV0IG51bWJlciB0aHJlZQo=
</code></pre>
| hvtilborg | <p>In the end, exporting the secret to a YAML file was not needed at all. With <code>kubectl patch secret</code> this can be done 'online' like this:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch secret some-secret-key-files --patch="{\"data\": {\"host3.example.com.key\": \"$(base64 -w0 host3.example.com.key)\"}}"
</code></pre>
<p>This will add a new file entry to the existing secret <code>some-secret-key-files</code>, and use <code>base64(1)</code> to base64 encode the contents of the local <code>host3.example.com.key</code> file.</p>
| hvtilborg |
<p>I am trying to set up Kubernetes on my bare-metal cluster using <code>kubeadm</code>, but during initialization with <code>kubeadm init</code> I get the following error:</p>
<pre><code>[root@server docker]# kubeadm init
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING HTTPProxy]: Connection to "https://192.111.141.4" uses proxy "http://lab:[email protected]:3122". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://lab:[email protected]:3122". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0827 16:33:00.426176 34482 kernel_validator.go:81] Validating kernel version
I0827 16:33:00.426374 34482 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server.test.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.111.141.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server.test.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server.test.com localhost] and IPs [192.111.141.4 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.2
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
- k8s.gcr.io/kube-scheduler-amd64:v1.11.2
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
</code></pre>
<p>The preflight images are also present on my system, but I still get this error. After the statement <code>[init] this might take a minute or longer if the control plane images have to be pulled</code>, kubeadm waits about 5-10 minutes before giving this error. What is the cause of this error?</p>
| rishi007bansod | <p>I would suggest:</p>
<p>1 ) If you already ran <code>kubeadm init</code> on this node - try running <code>kubeadm reset</code>.</p>
<p>2 ) Try running <code>kubeadm init</code> with the latest version or with a specific version by adding <code>--kubernetes-version=X.Y.Z</code> (changing from <code>v1.19.2</code> to <code>v1.19.3</code> solved the issue for me).</p>
<p>3 ) Try all actions related to Kubelet - like @cgrim suggested.</p>
<p>4 ) Check the firewall rules (don't stop firewalld): <code>sudo firewall-cmd --zone public --list-all</code> and open just the relevant ports.</p>
| Rotem jackoby |
<p>I need to create a deployment descriptor "A" YAML in which I can find the endpoint IP address of a pod (that belongs to a deployment "B"). There is an option to use the Downward API, but I don't know if I can use it in that case.</p>
| Angel | <p>What you are looking for is a <code>Headless service</code> (see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">documentation</a>).</p>
<p>With a headless service, the service will not have its own IP address. If you specify a selector for the service, the DNS service will return the pods' IPs when you query the service's name.</p>
<p>Quoting the documentation:</p>
<blockquote>
<p>For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return A records (IP addresses) that point directly
to the Pods backing the Service.</p>
</blockquote>
<p>In order to create a headless service, simply set the <code>.spec.clusterIP</code> field to <code>None</code> and specify the selector as you would normally do with a traditional service.</p>
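<p>A minimal sketch of such a headless service (the name and label are hypothetical and should match the pods of deployment "B"):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: deployment-b-headless    # hypothetical name
spec:
  clusterIP: None                # this is what makes the service headless
  selector:
    app: deployment-b            # must match the pod labels of deployment "B"
  ports:
    - port: 80
      targetPort: 80
</code></pre>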
| whites11 |
<p>How do I configure a Network Security Group for a load balancer in Azure Kubernetes? Basically, I want to restrict incoming external traffic to the service to a range of IP addresses only.</p>
| SaiNageswar S | <p>According to the <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#restrict-inbound-traffic-to-specific-ip-ranges" rel="nofollow noreferrer">Azure documentation</a> you should be able to specify the IP addresses you want to allow in the <code>spec/loadBalancerSourceRanges</code> field of your <code>Service</code>, such as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-lb-name
spec:
  type: LoadBalancer
  ...
  loadBalancerSourceRanges:
    - 1.2.3.0/16
</code></pre>
| whites11 |
<p>I have successfully deployed <code>efs-provisioner</code> following the steps outlined in <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">efs-provisioner</a>.</p>
<p><a href="https://i.stack.imgur.com/Y4AHo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y4AHo.png" alt="enter image description here"></a></p>
<p>But the PVC is hanging in the <code>Pending</code> state, displaying this message:</p>
<p><code>waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator</code>.</p>
<p><a href="https://i.stack.imgur.com/vhFAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vhFAw.png" alt="enter image description here"></a></p>
<p>What could be the reason why the PVC is not created properly?</p>
| alphanumeric | <p>The solution was described by ParaSwarm posted <a href="https://github.com/kubernetes-incubator/external-storage/issues/754#issuecomment-418207930" rel="nofollow noreferrer">here</a> </p>
<blockquote>
<p><em>"...The quick fix is to give the cluster-admin role to the default service account. Of course, depending on your environment and
security, you may need a more elaborate fix. If you elect to go the
easy way, you can simply apply this:"</em></p>
</blockquote>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-admin-rbac (or whatever)
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>
| alphanumeric |
<p>We have hit a strange issue in GKE on GCP where we have a few seconds to a minute of intermittent HTTP 500/520/525 errors trying to access our API every 6h10m, give or take a couple of minutes, and our logs haven't given us much to go on yet.</p>
<p>Our pipeline looks like:</p>
<pre><code>user request -> CloudFlare -> GKE nginx LoadBalancer (ssl termination) -> GKE router pod -> API
</code></pre>
<p>Hitting CloudFlare or the GKE loadbalancer directly shows the same error, so seems like the issue is within our GCP setup somewhere.</p>
<p>In the past I've run into a <a href="https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/87" rel="nofollow noreferrer">CloudSQL Proxy issue</a> where it renews an SSL cert every hour and caused very predictable, very brief outages.</p>
<p>Does GKE have a similar system we might be running into where it does something every 6h that is causing these errors for us?</p>
<p>Pingdom report:
<a href="https://i.stack.imgur.com/xdZWy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xdZWy.png" alt="brief outage every 6h10m"></a></p>
| xref | <p>The problem turned out to be that only 1 of the 2 <a href="https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules" rel="nofollow noreferrer">required healthcheck IPs</a> for internal load balancing was whitelisted. Not sure how that caused the error to be so clockwork, but updating our firewall rules has stopped the issue. Hope that helps someone in the future!</p>
| xref |
<p>I am configuring Jenkins on a Kubernetes system. Building works fine, but in order to deploy we need to call kubectl or helm. Currently, I am using:</p>
<ul>
<li>lachlanevenson/k8s-kubectl:v1.8.8</li>
<li>lachlanevenson/k8s-helm:latest</li>
</ul>
<p>It fails and throws the exception: "Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:jenkins:default" cannot list pods in the namespace "jenkins""</p>
<p>The jenkins script is simple:</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true)
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
  ]) {
  node(label) {
    stage('Run kubectl') {
      container('kubectl') {
        sh "kubectl get pods"
      }
    }
  }
}
</code></pre>
<p>Could you please let me know what is wrong?</p>
<p>Thanks,</p>
| Jacky Phuong | <p>The Kubernetes (k8s) master, as of Kubernetes v1.8, by default implements <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">role-based access control (RBAC)</a> security controls on accesses to its API. The RBAC controls limit access to the k8s API by your workloads to only those resources and methods which you have explicitly permitted.</p>
<p>You should create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#role-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">role</a> which permits access to the <code>pod</code> resource's <code>list</code> verb (and any other resources you require<sup>1</sup>), create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#serviceaccount-v1-core" rel="nofollow noreferrer">service account</a> object, and finally create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#rolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">role binding</a> which assigns the role to the service account.</p>
<p>Finally, provide the service account to your Jenkins deployment by supplying its name in the <code>serviceAccountName</code> property of the Pod template. Ensure <code>automountServiceAccountToken</code> is <code>true</code> to have k8s install an API key into your Pod. Attempts to access the k8s API using the native k8s API wrappers and libraries should find this key and automatically authenticate your requests.</p>
<p><sup>1</sup><sub>If you are planning to make deployments from Jenkins, you will certainly require more than the ability to list Pods, as you will be required to mutate objects in the system. However, if you use Helm, it is Helm's Tiller pod which influences the downstream k8s objects for your deployments, so the set of permissions you require for the Helm Tiller and for Jenkins to communicate with the Tiller will vary.</sub></p>
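<p>A minimal sketch of those three objects, assuming Jenkins runs in the <code>jenkins</code> namespace (the names are hypothetical, and a real deployment pipeline will need more verbs and resources than shown here):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-pods              # hypothetical name
  namespace: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer          # hypothetical name
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer-binding
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-pods
subjects:
  - kind: ServiceAccount
    name: jenkins-deployer
    namespace: jenkins
</code></pre>
<p>The pipeline's pod template would then reference this service account instead of <code>default</code>.</p>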
| Cosmic Ossifrage |
<p>I was investigating certain things about <code>cert-manager</code>.</p>
<p><code>TLS certificates</code> are automatically recreated by cert-manager.</p>
<p>I need to somehow <strong>deregister a domain / certificate</strong> from being regenerated. I guess I would need to tell cert-manager not to take care of a given domain anymore.</p>
<p>I do not have any clue how to do that right now. Can someone help?</p>
| newar68 | <p><code>cert-manager</code> is an application implemented using the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">operator pattern</a>.</p>
<p>In one sentence, it watches for a <code>Custom Resource</code> (<code>CR</code> for short) named <code>Certificate</code> in the Kubernetes API and it creates and updates <code>Secrets</code> resources to store certificate data.</p>
<p>If you delete the <code>Secret</code> resource but don't delete the <code>Certificate</code> CR, <code>cert-manager</code> will recreate the secret for you.</p>
<p>The right way of "deregistering a domain", or better said, "making cert-manager not generate a certificate for a domain any more", is to delete the <code>Certificate</code> CR related to your domain.</p>
<p>To get a list of all the <code>Certificate</code> CRs in your cluster you can use <code>kubectl</code></p>
<pre><code>kubectl get certificate -A
</code></pre>
<p>Once you have found the <code>Certificate</code> related to the domain you want to delete, simply delete it:</p>
<pre><code>kubectl -n <namespace> delete certificate <certificate name>
</code></pre>
<p>Once you have deleted the <code>Certificate</code> CR, you might also want to delete the <code>Secret</code> containing the TLS cert one more time. This time <code>cert-manager</code> will not recreate it.</p>
| whites11 |
<p>I wanted to bring up zookeeper using <code>helm install .</code>, but it says <code>Error: release <servicename> failed: services "zookeeper" already exists</code>. I don't see anything if I execute <code>helm list</code> either. Before installing the service, I checked using <code>helm list</code> whether it already exists, and it doesn't.</p>
<p>How to check the reason for failure? </p>
| Bitswazsky | <p>I think that the simplest solution is to <strong>add the <code>--debug</code> flag for the installation command</strong>:</p>
<pre><code>helm install chart my-chart --debug
</code></pre>
<p>Or if you prefer:</p>
<pre><code>helm upgrade --install chart my-chart --debug
</code></pre>
<p>It displays all the resources which are created one by one, and also the related errors which occurred during installation.</p>
| Rotem jackoby |
<p>I am following a tutorial regarding RBAC. I think I understand the main idea, but I don't get why this is failing:</p>
<pre class="lang-sh prettyprint-override"><code>kc auth can-i "*" pod/compute --as [email protected]
no
kc create clusterrole deploy --verb="*" --resource=pods --resource-name=compute
clusterrole.rbac.authorization.k8s.io/deploy created
kc create clusterrolebinding deploy [email protected] --clusterrole=deploy
clusterrolebinding.rbac.authorization.k8s.io/deploy created
# this tells me that [email protected] should be able to create a pod named compute
kc auth can-i "*" pod/compute --as [email protected]
yes
# but it fails when trying to do so
kc run compute --image=nginx --as [email protected]
Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot create resource "pods" in API group "" in the namespace "default"
</code></pre>
<p>The namespace name should be irrelevant AFAIK, since this is a ClusterRole.</p>
| Simon Ernesto Cardenas Zarate | <p>Restricting the <code>create</code> permission to a specific resource name is not supported.</p>
<p>This is from the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>Note: You cannot restrict create or deletecollection requests by resourceName. For create, this limitation is because the object name is not known at authorization time.</p>
</blockquote>
<p>This means the <code>ClusterRole</code> you created doesn't allow you to create any Pod.
You need to have another <code>ClusterRole</code> assigned where you don't specify the resource name.</p>
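<p>For example, a separate role granting <code>create</code> on pods (without any <code>resourceNames</code> restriction) could look roughly like this; the name is hypothetical:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-creator               # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create"]             # no resourceNames, so create is permitted
</code></pre>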
| whites11 |
<p>I apologize for my inexperience, but I need to ask a question and I believe this is the right place. I set up an HA k3s cluster with 1 master and 2 replicas, but I've been told that I must put a load balancer in front of this infrastructure to avoid service disruption in case of failovers. I would like to use a high-availability configuration for the load balancer too, as I would like to avoid single points of failure. Does anybody know about open-source tools providing a highly available load balancer? Thank you in advance.</p>
| Gipy | <p>If you are on bare metal, one open source tool that implements the Load Balancer feature in kubernetes and AFAIK supports k3s is <code>metallb</code>.
Take a look at the <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">official documentation</a> to get started.</p>
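<p>As a rough sketch, a layer-2 MetalLB setup might look like the ConfigMap below. The address range is just an example from a private LAN, and note that newer MetalLB releases configure this through CRDs instead of a ConfigMap, so check the documentation for your version:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # example range on your LAN, handed out to LoadBalancer services
</code></pre>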
| whites11 |
<p>Recently <a href="https://github.com/helm/charts/tree/b9278fa98cef543f5473eb55160eaf45833bc74e/stable/prometheus-operator" rel="nofollow noreferrer">prometheus-operator</a> chart is deprecated and the chart has been renamed <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#configuration" rel="nofollow noreferrer">kube-prometheus-stack</a> to more clearly reflect that it installs the kube-prometheus project stack, within which Prometheus Operator is only one component.</p>
<p>I checked both the old and the new chart and also read all the related documentation, but I couldn't find the changes in the files (especially in <code>values.yaml</code>). Could someone explain what exactly I should do to migrate from prometheus-operator to kube-prometheus-stack?</p>
| sasan | <p><strong>March 18, 2021</strong>.</p>
<p>I migrated from <em>prometheus-operator</em> to <em>kube-prometheus-stack</em> and faced some difficulties.</p>
<p>Below is a list of errors which I encountered and the steps I took to work around them.</p>
<hr />
<p><strong>Error 1</strong>: <code>unknown field "metricRelabelings"</code>.</p>
<p><strong>Solution</strong>: Comment out all appearances.</p>
<hr />
<p><strong>Error 2</strong>: <code>unknown field "relabelings"</code>.</p>
<p><strong>Solution</strong>: Comment out all appearances.</p>
<hr />
<p><strong>Error 3</strong>: <code>unknown field "selector" in com.coreos.monitoring.v1.Alertmanager.spec.storage.volumeClaimTemplate</code>.</p>
<p><strong>Solution</strong>: Comment out all this specific field under the <code>volumeClaimTemplate</code>.</p>
<hr />
<p><strong>Error 4</strong>: <code>unknown field "shards" in com.coreos.monitoring.v1.Prometheus.spec</code>.</p>
<p><strong>Solution</strong>: Comment out the specific location, or follow suggestions from <a href="https://github.com/prometheus-community/helm-charts/issues/579" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p><strong>Error 5</strong>: <code>prometheus-kube-stack unknown fields probenamespaceselector and probeSelector</code>.</p>
<p><strong>Solution</strong>: As mentioned in <a href="https://github.com/prometheus-community/helm-charts/issues/250#issuecomment-740813317" rel="nofollow noreferrer">here</a>, Delete all CRDs:</p>
<pre><code>kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
</code></pre>
<p>And ran <code>helm install</code> again.</p>
<hr />
<p>Link to the chart <a href="https://github.com/prometheus-community/helm-charts/issues" rel="nofollow noreferrer">issues page</a> in Github.</p>
| Rotem jackoby |
<p>I have a running cluster with a single master. A load balancer (kube.company.com) is configured to accept traffic at 443 and forward it to the k8s master at 6443.</p>
<p>I tried to change my ~/.kube/config <code>server</code> field definition from $masterIP:6443 to kube.company.com:443.</p>
<p>It throws the error x509: certificate signed by unknown authority.</p>
<p>I guess there is some configuration that should be done to make this work; I just can't find it in the official docs.</p>
<p>This is a bare-metal setup using k8s version 1.21.2 with containerd in a RHEL environment. The load balancer is nginx. The cluster is installed via kubeadm.</p>
| letthefireflieslive | <p>When using <code>kubeadm</code> to deploy a cluster, if you want to use a custom name to access the <code>Kubernetes API Server</code>, you need to specify the <code>--apiserver-cert-extra-sans</code> flag of <code>kubeadm init</code>.</p>
<blockquote>
<p>Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.</p>
</blockquote>
<p>This is untested, but theoretically, if you want to do this on an existing cluster, you should be able to log in to <code>every master node</code> and run this:</p>
<pre><code># remove current apiserver certificates
sudo rm /etc/kubernetes/pki/apiserver.*
# generate new certificates
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<your custom dns name here>
</code></pre>
| whites11 |
<p>When I deploy a docker image to <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Kubernetes Engine</a>,</p>
<p><a href="https://i.stack.imgur.com/LtwaW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtwaW.png" alt="enter image description here"></a>
the pods can't initialize; the pods are just making a simple GET request to <a href="https://jsonplaceholder.typicode.com/" rel="nofollow noreferrer">https://jsonplaceholder.typicode.com/</a>
<a href="https://i.stack.imgur.com/qtxIt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtxIt.png" alt="code"></a></p>
<p>I get an error message <code>certificate signed by unknown authority</code></p>
<p><a href="https://i.stack.imgur.com/jeX7D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeX7D.png" alt="enter image description here"></a></p>
| John Balvin Arias | <p>From the comments in your question, I expect you are running up against the common problem of Alpine base images not being populated with the <code>ca-certificates</code> package, which contains a number of root CA certificates to anchor your root of trust.</p>
<p>Add the following command to your <code>Dockerfile</code> to ensure these are installed in the produced image:</p>
<pre><code>RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
</code></pre>
<p>(we run multiple operations in a single <code>RUN</code> step to <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run" rel="noreferrer">avoid introducing unnecessary bloat</a> in the layers of your final image).</p>
<p>Base images which include the CA certificates package are also available in the container registry (although with this statement I make no claims as to their suitability or provenance).</p>
| Cosmic Ossifrage |
<p>I have a service exposed on the Kubernetes cluster and an ingress controller routing traffic to it, but it is returning 503. Everything is connected and configured, and the service works when accessed via its external IP, but when accessed through the ingress it returns 503. Here is the Ingress YAML:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: yucreat-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - yuccoin.org
      secretName: echo-tls
  rules:
    - host: yuccoin.org
      http:
        paths:
          - backend:
              serviceName: yucdocker1
              servicePort: 80
</code></pre>
<p>Working svc YAML:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: yucdocker1
  namespace: default
  uid: 13bb70c3-8....
  resourceVersion: '3449902'
  creationTimestamp: '2021-05-05T13:55:53Z'
  labels:
    app: yucdocker1
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: e25202f7-8422-4...
  managedFields:
    - manager: kubectl-expose
      operation: Update
      apiVersion: v1
      time: '2021-05-05T13:55:53Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:app': {}
        'f:spec':
          'f:externalTrafficPolicy': {}
          'f:ports':
            .: {}
            'k:{"port":8080,"protocol":"TCP"}':
              .: {}
              'f:port': {}
              'f:protocol': {}
              'f:targetPort': {}
          'f:selector':
            .: {}
            'f:app': {}
          'f:sessionAffinity': {}
    - manager: digitalocean-cloud-controller-manager
      operation: Update
      apiVersion: v1
      time: '2021-05-05T13:58:25Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubernetes.digitalocean.com/load-balancer-id': {}
    - manager: k8saasapi
      operation: Update
      apiVersion: v1
      time: '2021-05-21T08:24:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:type': {}
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 4500
  selector:
    app: yucdocker1
  clusterIP: 10.245.171.173
  clusterIPs:
    - 10.245.171.173
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
</code></pre>
<p>So, I've already tried to change svc from loadBalancer to clusterIP... and nothing worked, any help would be welcome.</p>
| Himanshu | <p>Your ingress is sending traffic to the service on port 80:</p>
<pre><code>- backend:
    serviceName: yucdocker1
    servicePort: 80
</code></pre>
</code></pre>
<p>But your service exposes port 8080 only:</p>
<pre><code>spec:
  ports:
    - protocol: TCP
      port: 8080
</code></pre>
<p>The ports have to match for your ingress to work properly.</p>
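<p>For example, pointing the ingress backend at the port the service actually exposes would look like this:</p>
<pre><code>- backend:
    serviceName: yucdocker1
    servicePort: 8080   # matches the port exposed by the service
</code></pre>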
| whites11 |
<p>We have two clusters. cluster1 has a namespace test1 and a service running as ClusterIP.
We have to call that service from another cluster (cluster2), from namespace dev1.</p>
<p>I have defined an ExternalName service in cluster2 pointing to another ExternalName service in cluster1,
and the ExternalName service in cluster1 points to the original service running as ClusterIP.</p>
<p>In cluster2:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: service
  namespace: dev1
  labels:
    app: service
spec:
  selector:
    app: service
  type: ExternalName
  sessionAffinity: None
  externalName: service2.test.svc.cluster.local
status:
  loadBalancer: {}
</code></pre>
<p>In cluster1:Externalname service</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: service2
  namespace: test1
  labels:
    app: service
spec:
  selector:
    app: service
  type: ExternalName
  sessionAffinity: None
  externalName: service1.test1.svc.cluster.local
status:
  loadBalancer: {}
</code></pre>
<p>in cluster1 clusterip service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: service1
  namespace: test1
  labels:
    app: service1
spec:
  ports:
    - name: http
      protocol: TCP
      port: 9099
      targetPort: 9099
  selector:
    app: service1
  clusterIP: 102.11.20.100
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
</code></pre>
<p>But there is no hit to the service in cluster1. I tried adding spec:port:9099 to the ExternalName services as well, but it still does not work.</p>
<p>What could be the reason? There is nothing specific in the logs either.</p>
| abindlish | <p>This is not what <code>ExternalName</code> services are for.</p>
<p><code>ExternalName</code> services are used to have a cluster internal service name that forwards traffic to another (internal or external) DNS name. In practice what an <code>ExternalName</code> does is create a CNAME record that maps the external DNS name to a cluster-local name. It does not expose anything out of your cluster. See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">documenation</a>.</p>
<p>What you need to do is expose your services outside of your kubernetes clusters and they will become usable from the other cluster as well.</p>
<p>There are different ways of doing this.
For example:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort service</code></a>: when using a NodePort, your service will be exposed on each node in the cluster on a random high port (by default in the 30000-32767 range). If your firewall allows traffic to such port you could reach your service from using that port.</li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer service</code></a>: if you are running kubernetes in an environment that supports Load Balancer allocation you could expose your service to the internet using a load balancer.</li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a>: if you have an ingress controller running in your cluster you could expose your workload using an <code>Ingress</code>.</li>
</ul>
<p>On the other cluster, you could simply reach the service exposed.</p>
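<p>For instance, a NodePort version of <code>service1</code> might look roughly like this (the <code>nodePort</code> value is only an example):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: service1
  namespace: test1
spec:
  type: NodePort
  selector:
    app: service1
  ports:
    - name: http
      protocol: TCP
      port: 9099
      targetPort: 9099
      nodePort: 30999   # example value; must be in the default 30000-32767 range
</code></pre>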
| whites11 |
<p>I have a k8s cluster with 3 masters and 10 workers. As I know, physically we can send requests to any node in a k8s cluster. But what's better? My clients' requests reach my load balancer, and I can route them to any kind of node I want.
Should I send client requests to master nodes or worker nodes, and why?
Of course, I've seen this post, and it does not answer my question:
<a href="https://stackoverflow.com/questions/49266357/k8s-should-traffic-goes-to-master-nodes-or-worker-nodes">k8s should traffic goes to master nodes or worker nodes?</a></p>
| ehsan shirzadi | <p>I don't think there is a strict rule about this, it really depends on the use case you have. But I think it is much better to send ingress traffic to worker nodes rather than master nodes. This for two reasons:</p>
<ol>
<li><p>The goal of master nodes is to keep the kubernetes cluster functional and working at all times.
Overloading masters with ingress traffic could potentially make them slower or even unable to do what they're meant to.</p>
</li>
<li><p>Another, maybe secondary, reason to send ingress traffic to workers only is that by default cluster autoscaler acts on workers rather than masters so you might end up with overloaded clusters that don't autoscale because part of the load is going to Masters instead of workers.</p>
</li>
</ol>
| whites11 |
<p>I have deployed minikube on a Windows VM, and the minikube VM is created on VirtualBox with a host-only IP.</p>
<p>I have deployed the Kubernetes dashboard with NodePort IP so I can access it from outside the cluster. The svc is as follows:</p>
<pre><code>PS C:\Users\XXX\Desktop\ingress> kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.111.167.61 <none> 8000/TCP 5d20h
kubernetes-dashboard NodePort 10.111.220.57 <none> 443:30613/TCP 5d20h
</code></pre>
<p>With the help of the minikube ingress addon, I installed the Ingress controller which is of Nginx. Its svc details are as follows:</p>
<pre><code>PS C:\Users\XXX\Desktop\ingress> kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.98.29.41 <none> 80:32628/TCP,443:31194/TCP 5d20h
ingress-nginx-controller-admission ClusterIP 10.96.35.36 <none> 443/TCP 5d20h
</code></pre>
<p>Then I have created an Ingress Rule for my dashboard application as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/dashboard)$ $1/ permanent;
spec:
  rules:
    - host: k8s.dashboard.com
      http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
</code></pre>
<p>But now, when I am trying to access the dashboard with the URL <code>https://k8s.dashboard.com/dashboard</code>, I am facing a 404 Not Found error. I also tried multiple URLs to access the dashboard, such as:</p>
<pre><code>https://k8s.dashboard.com:30613/dashboard
http://k8s.dashboard.com:30613/dashboard
https://k8s.dashboard.com/dashboard
</code></pre>
<p>But this URL is working for me: <code>https://k8s.dashboard.com:30613</code>.
I have added the minikube IP to the hosts file on the Windows machine.
The Ingress rule describe output is as follows:</p>
<pre><code>PS C:\Users\XXX\Desktop\ingress> kubectl describe ingress -n kubernetes-dashboard
Name: dashboard-ingress
Namespace: kubernetes-dashboard
Address: 192.168.56.100
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
k8s.dashboard.com
/dashboard kubernetes-dashboard:443 (172.17.0.4:8443)
Annotations: ingress.kubernetes.io/configuration-snippet: rewrite ^(/dashboard)$ $1/ permanent;
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/add-base-url: true
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/force-ssl-redirect: false
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 26m (x16 over 5d20h) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>Any help regarding this would be appreciated. Thanks.</p>
<p><code>EDITED</code>
My ingress controller logs are as follows:</p>
<pre><code>192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /dashboard HTTP/2.0" 200 746 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 418 0.019 [kubernetes-dashboard-kubernetes-dashboard-443] [] 172.17.0.4:8443 746 0.018 200 1a2793052f70031c6c9fa59b0d4374d1
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /styles.aa1f928b22a88c391404.css HTTP/2.0" 404 548 "https://k8s.dashboard.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 101 0.002 [upstream-default-backend] [] 127.0.0.1:8181 548 0.002 404 1974258442f8b4c46d8badd1dda3e3f5
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /runtime.2a456dd93bf6c4890676.js HTTP/2.0" 404 548 "https://k8s.dashboard.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 49 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 96c17c52e6337f29dd8b2b2b68b088ac
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /polyfills.f4f05ad675be9638106e.js HTTP/2.0" 404 548 "https://k8s.dashboard.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 40 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 096ae29cb168523aa9191f27a967e47a
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /scripts.128068f897fc721c4673.js HTTP/2.0" 404 548 "https://k8s.dashboard.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 38 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 728f73f75276167b387dc87a69b65a72
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] "GET /en.main.09bf52db2dbc808e7279.js HTTP/2.0" 404 548 "https://k8s.dashboard.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" 38 0.014 [upstream-default-backend] [] 127.0.0.1:8181 548 0.014 404 b11e5ae324a828508d488816306399c2
</code></pre>
<p>and these are the dashboard logs:</p>
<pre><code>172.17.0.1 - - [16/Jun/2021:06:59:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
172.17.0.1 - - [16/Jun/2021:06:59:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
172.17.0.1 - - [16/Jun/2021:07:00:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.2.0"
172.17.0.1 - - [16/Jun/2021:07:00:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
172.17.0.1 - - [16/Jun/2021:07:00:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
172.17.0.1 - - [16/Jun/2021:07:00:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
172.17.0.1 - - [16/Jun/2021:07:00:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.2.0"
172.17.0.1 - - [16/Jun/2021:07:00:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.20"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2021-06-16T07:00:41Z"}
</code></pre>
| Sakar Mehra | <p>According to <a href="https://github.com/kubernetes/dashboard/issues/5017" rel="nofollow noreferrer">this issue</a> this is a limitation/bug of the kubernetes dashboard.</p>
<p>They suggest using this config as a workaround:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    # Add https backend protocol support for ingress-nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding "";
      sub_filter '<base href="/">' '<base href="/dashboard/">';
      sub_filter_once on;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: my.example.com
      http:
        paths:
          - path: /dashboard(/|$)(.*)
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
</code></pre>
| whites11 |
<p>According to Kubernetes documentation</p>
<blockquote>
<p>The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.</p>
<p>Annotations, like labels, are key/value maps</p>
</blockquote>
<p>Then there is <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set" rel="nofollow noreferrer">a detailed explanation</a> on the syntax of the annotation keys. But it says nothing about the value part.</p>
<p>Where can I find more about the allowed length and character set for the value of an annotation in Kubernetes?</p>
| Thomas | <p><a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/validation/objectmeta.go#L47" rel="nofollow noreferrer">Here</a> you can find the code that validates annotations in current master:</p>
<pre class="lang-golang prettyprint-override"><code>func ValidateAnnotations(annotations map[string]string, fldPath *field.Path) field.ErrorList {
allErrs := field.ErrorList{}
for k := range annotations {
for _, msg := range validation.IsQualifiedName(strings.ToLower(k)) {
allErrs = append(allErrs, field.Invalid(fldPath, k, msg))
}
}
if err := ValidateAnnotationsSize(annotations); err != nil {
allErrs = append(allErrs, field.TooLong(fldPath, "", TotalAnnotationSizeLimitB))
}
return allErrs
}
</code></pre>
<p>The <code>keys</code> are validated according to the rules that you mentioned.
The only validation applied to the values is the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/validation/objectmeta.go#L60" rel="nofollow noreferrer">total length of all annotations</a> (size of keys + size of values for all annotations) that can't be longer than 256 kB.</p>
<pre class="lang-golang prettyprint-override"><code>const TotalAnnotationSizeLimitB int = 256 * (1 << 10) // 256 kB
...
func ValidateAnnotationsSize(annotations map[string]string) error {
var totalSize int64
for k, v := range annotations {
totalSize += (int64)(len(k)) + (int64)(len(v))
}
if totalSize > (int64)(TotalAnnotationSizeLimitB) {
return fmt.Errorf("annotations size %d is larger than limit %d", totalSize, TotalAnnotationSizeLimitB)
}
return nil
}
</code></pre>
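<p>In practice that means a value like the one below is valid; only the combined size of all annotation keys and values on the object is limited to 256 kB (the object and annotation names here are made up):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: annotation-example          # hypothetical object
  annotations:
    example.com/notes: |
      Any UTF-8 text is fine here: spaces, punctuation, even newlines.
      Only the total size of all annotations on the object is checked.
</code></pre>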
| whites11 |
<p>I have an ASP.NET Core application. When I run it in a Docker container, it works fine end to end. Then I move the image to Azure AKS, create a load balancer, and browse the web application using the IP with no issues. But when I create an ingress, the website loads, and register/forgot-password work on the home page, but on clicking login I get a 502 Bad Gateway error. I tried looking at the logs using <code>kubectl logs pod -follow</code>, but no error popped up.</p>
<p>I have already tried changing images, recreating the ingress, and running the code locally. The error only comes when I click the login button in Azure AKS, accessing it via the ingress; accessing the same pod using the load balancer doesn't replicate the issue.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: devthecrmwebsite
spec:
replicas: 1
template:
metadata:
labels:
app: devthecrmwebsite
spec:
containers:
- name: devthecrmwebsite
image: somewhere.azurecr.io/thecrmwebsite:latest
ports:
- containerPort: 80
imagePullPolicy: Always
imagePullSecrets:
- name: acr-auth
---
apiVersion: v1
kind: Service
metadata:
name: devthecrmwebsite
spec:
ports:
- name: http-port
port: 8081
targetPort: 80
selector:
app: devthecrmwebsite
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: devthecrmwebsite
labels:
app: devthecrmwebsite
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
spec:
rules:
- host: devthecrmwebsite.ac2d980d4f3a4397a96b.southeastasia.aksapp.io
http:
paths:
- backend:
serviceName: devthecrmwebsite
servicePort: 8081
path: /
</code></pre>
 | RB. | <p>I would suggest using a wildcard in the path. If you plan to run this in production, use the Nginx Ingress controller rather than the HTTP application routing add-on.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/http-application-routing</a></p>
<pre><code> http:
paths:
- backend:
serviceName: devthecrmwebsite
servicePort: 80
path: /(.*)
</code></pre>
| Anass Kartit |
<p>I have a kubernetes service on azure and it has own virtual network.My local network is using pfsense for gateway and has a public ip.Can i define static route between azure and my local network for communication kubernetes nodes and my local machines?If yes how ?</p>
<p>I know i can use VPN gateway or LoadBalancer but i am wondering about just static routing or some solution like that.</p>
 | akuscu | <p>To connect to an Azure VNet you need a VPN (Point-to-Site or Site-to-Site) or Azure ExpressRoute. If you want to connect to a pod directly, you have to use port forwarding or an ingress controller.</p>
| Anass Kartit |
<p>Kubernetes version:
V1.22.2</p>
<p>Cloud Provider Vsphere version 6.7</p>
<p>Architecture:</p>
<ul>
<li>3 Masters</li>
<li>15 Workers</li>
</ul>
<p>What happened:
One of the pods went down for some "unknown" reason, and when we tried to bring it back up, it couldn't attach the existing PVC.
This only happened to one specific pod; all the others didn't have any kind of problem.</p>
<p>What did you expect to happen:
Pods should dynamically assume PVCs</p>
<p>Validation:
First step: The connection to Vsphere has been validated, and we have confirmed that the PVC exists.
Second step: The Pod was restarted (Statefulset 1/1 replicas) to see if the pod would rise again and assume the pvc, but without success.
Third step: Made a restart to the services (kube-controller, kube-apiserve, etc)
Last step: All workers and masters were rebooted but without success, each time the pod was launched it had the same error ""Multi-Attach error for volume "pvc......" Volume is already exclusively attached to one node and can't be attached to another""</p>
<p>When I delete a pod and try to recreate it, I get this warning:
Multi-Attach error for volume "pvc-xxxxx" The volume is already exclusively attached to a node
and cannot be attached to another</p>
<p>Anything else we need to know:
I have a cluster (3 master and 15 nodes)</p>
<p>Temporary resolution:
Erase the existing PVC and launch the pod again to recreate the PVC.
Since this is data, it is not the best solution to delete the existing PVC.</p>
<blockquote>
<p><strong>Multi-Attach error for volume "pvc-xxx" Volume is already
exclusively attached to one node and can't be attached to another</strong></p>
</blockquote>
<p><a href="https://i.stack.imgur.com/Q68qn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q68qn.png" alt="ERROR" /></a></p>
| Ruben Gonçalves | <p>A longer term solution is referring to 2 facts:</p>
<ol>
<li><p>You're using <code>ReadWriteOnce</code> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> where the volume can be mounted as read-write by a single node.</p>
</li>
<li><p>Pods might be schedule by K8S Scheduler on a different node for multiple reason.</p>
</li>
</ol>
<p>Consider switching to <code>ReadWriteMany</code> where the volume can be mounted as read-write by many nodes.</p>
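<p>A minimal sketch of such a claim (the name, size and storage class are placeholders - note that only backends that actually support multi-node access, such as NFS or CephFS, can honor <code>ReadWriteMany</code>; a vSphere block volume typically cannot):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-rwx          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany       # instead of ReadWriteOnce
  storageClassName: nfs   # assumption: a provisioner that supports RWX
  resources:
    requests:
      storage: 10Gi
</code></pre>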
| Rotem jackoby |
<p>Hi, I installed Kubernetes using kubeadm on CentOS.
When I create a deployment with a Service of type LoadBalancer in the YAML file, the external IP for the Kubernetes LB stays stuck in the <code>Pending</code> state.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m
service LoadBalancer 10.101.168.76 <pending> 80:32225/TCP 4m52s
</code></pre>
| J Jedidiah | <p>Please try to run:</p>
<pre><code>kubectl describe svc <service-name>
</code></pre>
<p>And check for errors / warnings.</p>
<p>An example of a possible error is described under the events field in the example output below - (<strong>SyncLoadBalancerFailed - could not find any suitable subnets for creating the ELB</strong>):</p>
<pre><code>Name: some-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector: app=some
Type: LoadBalancer
IP: 10.100.91.19
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31022/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 68s (x8 over 11m) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 67s (x8 over 11m) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
</code></pre>
| Rotem jackoby |
<p>I want to know if there is a way to use autoscalers in Kubernetes with pods created directly from a pod YAML file, rather than pods created as part of a higher-level controller like a Deployment or ReplicaSet?</p>
| Saeid Ghafouri | <p>The short answer to your question is no.</p>
<p><code>Horizontal Pod Autoscaler</code> changes the number of replicas of a <code>Deployment</code> reacting to changes in load utilization. So you need a <code>Deployment</code> for it to work.</p>
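<p>If you wrap your pod template in a <code>Deployment</code>, a regular HPA can then target it. A minimal sketch (names and thresholds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment      # HPA scales a controller, never bare Pods
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
</code></pre>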
<p>Regarding <code>Vertical Pod Autoscaler</code>, I think it should work with standalone pods (pods not managed by a controller) as well, but only at Pod creation time. In fact, I read the following statement in the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#known-limitations" rel="nofollow noreferrer">Known limitations</a> section of the README:</p>
<blockquote>
<p>VPA does not evict pods which are not run under a controller. For such
pods Auto mode is currently equivalent to Initial.</p>
</blockquote>
<p>That sentence make me conclude that VPA should work on Pods not backed by a Controller, but in a limited way. In fact, the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start" rel="nofollow noreferrer">documentation about <code>Initial</code> mode</a> states:</p>
<blockquote>
<p>VPA only assigns resource requests on pod creation and never changes
them later.</p>
</blockquote>
<p>Making it basically useless.</p>
| whites11 |
<p>I created a simple <strong>EKS</strong> cluster on aws as described in <a href="https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started" rel="nofollow noreferrer">https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started</a>.</p>
<p>In this cluster I created an <strong>nginx deployment</strong> and a <strong>service</strong> of type <strong>LoadBalancer</strong> as described below.
The configuration works locally on minikube.</p>
<p>On AWS I can see that pod and service are started, the service has an external ip, I can access the pod with kubectl port-forward and I can ping the LoadBalancer.</p>
<p>However I cannot access the Loadbalancer via the browser via <a href="http://a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com:3001" rel="nofollow noreferrer">http://a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com:3001</a><br>
I'm getting a <code>This site can’t be reached</code></p>
<p>Any idea where I should look into?</p>
<p>NGinx Deployment</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
labels:
run: nginx
name: nginx
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
run: nginx
template:
metadata:
creationTimestamp: null
labels:
run: nginx
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
</code></pre>
<p>NGinx Service</p>
<pre><code>{
"kind":"Service",
"apiVersion":"v1",
"metadata":{
"name":"nginx",
"labels":{
"app":"nginx"
}
},
"spec":{
"ports": [
{
"port":3001,
"targetPort":80
}
],
"selector":{
"run":"nginx"
},
"type": "LoadBalancer"
}
}
</code></pre>
<p>Checks</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 1h
nginx LoadBalancer 172.20.48.112 a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com 3001:31468/TCP 45m
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-768979984b-vqz94 1/1 Running 0 49m
kubectl port-forward pod/nginx-768979984b-vqz94 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
ping a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com
PING a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com (62.138.238.45) 56(84) bytes of data.
64 bytes from 62.138.238.45 (62.138.238.45): icmp_seq=1 ttl=250 time=7.21 ms
</code></pre>
<p>Service description</p>
<pre><code>Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"ports":[{"port...
Selector: run=nginx
Type: LoadBalancer
IP: 172.20.48.112
LoadBalancer Ingress: a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com
Port: <unset> 3001/TCP
TargetPort: 80/TCP
NodePort: <unset> 31468/TCP
Endpoints: 10.0.0.181:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 57m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 57m service-controller Ensured load balancer
</code></pre>
| christian | <p>Please try the 3 steps below:</p>
<ol>
<li><p>Check again that the selectors and labels were set correctly between the Service and the Deployment.</p>
</li>
<li><p>Inside AWS, Go to "<strong>Instances</strong>" tab of the Load Balancer (Probably Classic) that was created, and check the <strong>Status</strong> and <strong>Healty</strong> state of all the instances which are related to the LB:</p>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/7BgZF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7BgZF.png" alt="enter image description here" /></a></p>
<p>If the status is not "InService" or State is not "Healthy" - Check the security group of those instances: <br> <strong>The NodePort (31468 in your case) should be open</strong> to accept traffic.</p>
<ol start="3">
<li>View the pod logs with <code>kubectl logs <pod-name></code>.</li>
</ol>
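<p>For step 2, a rough example of opening that NodePort with the AWS CLI (the security group ID is a placeholder, and in practice you would restrict the source CIDR - for example to the ELB's own security group - rather than opening it to the world):</p>
<pre><code>aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 31468 \
    --cidr 0.0.0.0/0
</code></pre>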
| Rotem jackoby |
<p>When running a Kubernetes job I've set <code>spec.spec.restartPolicy: OnFailure</code> and <code>spec.backoffLimit: 30</code>. When a pod fails it's sometimes doing so because of a hardware incompatibility (matlab segfault on some hardware). Kubernetes is restarting the pod each time on the same node, having no chance of correcting the problem.</p>
<blockquote>
<p>Can I instruct Kubernete to try a different node on restart?</p>
</blockquote>
| David Parks | <p>Once <em>Pod</em> is scheduled it cannot be moved to another <em>Node</em>.</p>
<p>The <em>Job</em> controller can create a new <em>Pod</em> if you specify <code>spec.spec.restartPolicy: Never</code>.
There is a chance that this new <em>Pod</em> will be scheduled on different <em>Node</em>.</p>
<p>I did a quick experiment with <code>podAntiAffinity</code>, but it looks like it's ignored by the scheduler (which makes sense, as the previous Pod is in Error state).</p>
<p>BTW: If you can add labels to failing nodes it will be possible to avoid them by using <code>nodeSelector: <label></code>.</p>
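<p>A minimal sketch combining both ideas - <code>restartPolicy: Never</code> so the Job controller creates a fresh Pod, plus node affinity to steer away from nodes you have labelled as bad (the label key/value and image are assumptions, not something that exists in your cluster):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: matlab-job                       # hypothetical name
spec:
  backoffLimit: 30
  template:
    spec:
      restartPolicy: Never               # failed Pods are replaced, not restarted in place
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: hardware            # assumed label added to the incompatible nodes
                operator: NotIn
                values: ["incompatible"]
      containers:
      - name: worker
        image: my-matlab-image:latest    # placeholder image
</code></pre>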
| kupson |
<p>I have been trying to deploy my Kubernetes service, but it says still pending and keep pending. I waited 1 day and half but still pending.</p>
<p><a href="https://i.stack.imgur.com/xItDM.png" rel="nofollow noreferrer">Kubernetes service</a></p>
<h3 id="static-site-service.yaml-kxr2">Static-Site-Service.yaml</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: static-site-service
annotations:
imageregistry: "https://hub.docker.com/"
labels:
app: static-site
spec:
type: LoadBalancer
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: static-site
sessionAffinity: None
</code></pre>
| Thinh Nguyen | <p>According to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers" rel="nofollow noreferrer">the documentation</a>:</p>
<blockquote>
<p>It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.</p>
<p>When the Service type is set to LoadBalancer, Kubernetes provides functionality equivalent to type equals ClusterIP to pods within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes pods. The Kubernetes service controller automates the creation of the external load balancer, health checks (if needed), firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service object.</p>
</blockquote>
<p>In other words your kubernetes cluster need to know how to set up a Load Balancer, because it's a feature provided by the environment where the cluster is running rather than by kubernetes itself.</p>
<p>For example, if you are running kubernetes on AWS and your <code>Controller Manager</code> is set up correctly, every time you create a <code>Service</code> with type <code>LoadBalancer</code> a new ELB will be created for you.</p>
<p>The same happens with all other supported cloud providers (Azure, GCE, Digital Ocean and others).</p>
<p>If you are running on-premise or on an unsupported cloud provider, nothing will create a <code>LoadBalancer</code> for you and the service will be <code>pending</code> forever as long as you don't set up a dedicated solution such as <code>MetalLB</code> just to name a random one.</p>
<p>I suggest reading <a href="https://banzaicloud.com/blog/load-balancing-on-prem/" rel="nofollow noreferrer">this blog post</a> for more details.</p>
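<p>If you just need to reach the application without provisioning an external load balancer (for example on a local or bare-metal cluster), a <code>NodePort</code> Service is a quick alternative. A sketch based on your manifest (the <code>nodePort</code> value is only an example and must fall in the default 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: static-site-service
  labels:
    app: static-site
spec:
  type: NodePort            # reachable at <node-ip>:<nodePort>, no cloud integration needed
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30080         # example value
  selector:
    app: static-site
</code></pre>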
| whites11 |
<p>I am trying to update the entrypoint in a specific container.</p>
<p>The structure is:
statefulset -> list of pods -> specific pod -> specific container</p>
<p>I tried to do that using the javascript client and got the following:</p>
<pre><code> body: {
kind: 'Status',
apiVersion: 'v1',
metadata: {},
status: 'Failure',
message: 'Pod "name-0" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n' +
' core.PodSpec{\n' +
</code></pre>
<p>I did the following:</p>
<pre><code> for(let c of pod.spec.containers){
if(c.name === 'name'){
console.log('in name contianer');
c.args = ['test;sleep 3600'];
}
}
await coreV1Api.replaceNamespacedPod(podName,namespace, pod);
</code></pre>
<p>This works if I update the StatefulSet args, but I need it only for a specific pod.</p>
<p>is it possible?</p>
| Tuz | <p>This is not possible. The whole point of using a <code>StatefulSet</code> is to have a bunch of <code>Pods</code> that are basically identical. It is the precise goal of the <code>Controller Manager</code> to reconcile the <code>StatefulSet</code> resource and ensure there are <code>replicas</code> number of <code>Pods</code> that match the <code>StatefulSet</code> spec.</p>
<p>If you want to have different pods you need to have different StatefulSets.</p>
| whites11 |
<p>I understood kube-proxy can run in iptables or ipvs mode. Also, calico sets up iptables rules.</p>
<p>But does calico iptables rules are only installed when kube proxy is running in iptables mode OR these iptables rules are installed irrespective to kube-proxy mode?</p>
| Amit | <p>According to the <a href="https://docs.projectcalico.org/networking/enabling-ipvs" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Calico ipvs support is activated automatically if Calico detects that
kube-proxy is running in that mode.</p>
</blockquote>
| whites11 |
<p>I have been trying to figure this out for a few hours. I can't see what is causing this issue. It appears to be something with line 10 of the YAML. I have tried with and without quotes, and starting a new file in case there were some corrupt values.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: "foyer-api"
namespace: "foyer-api"
spec:
replicas: 3
selector:
matchLabels:
app: "foyer-api"
template:
metadata:
labels:
app: "foyer-api"
spec:
containers:
- image: foyer-api
imagePullPolicy: Always
name: "foyer-api"
ports:
- containerPort: 80
</code></pre>
 | Michael McDermott | <p>This error mostly occurs when copy-and-pasting into a YAML file, which can introduce syntax errors (for example tabs or inconsistent indentation).</p>
<p>Consider pasting the YAML into a YAML linter (for example <a href="http://www.yamllint.com/" rel="nofollow noreferrer">this</a> one), which in some cases helps identify the problem more quickly.</p>
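<p>You can also let kubectl (or the API server) validate the manifest without creating anything, which usually points at the offending field (on reasonably recent kubectl versions; the file name below stands in for your own manifest):</p>
<pre><code># client-side schema/syntax check
kubectl apply --dry-run=client -f deployment.yaml

# server-side validation against the cluster's API
kubectl apply --dry-run=server -f deployment.yaml
</code></pre>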
| Rotem jackoby |
<p>I have a nginx deployment in kubernetes that I would like to run commands from while it is being CURLed by another pod. It appears that while a nginx pod is executing a deployment command, it is unable to be CURLed.</p>
<p>For example: if there are two pods, <code>nginx-1</code> and <code>nginx-2</code>. <code>nginx-1</code> is repeatedly running CURLs to <code>nginx-2</code>, while <code>nginx-2</code> is repeatedly running it's own commands, the CURLs fail with <code>Connection Refused</code>.</p>
<p>Deployment Snippets:
Note: env `${TARGET_HTTP_ADDR} is declared in deployment.</p>
<ul>
<li><p><code>nginx-1</code>:</p>
<pre><code> command: [ "/bin/sh", "-c", "--" ]
args: [ "while sleep 30; do curl -v --head ${TARGET_HTTP_ADDR}; done"]
</code></pre>
</li>
<li><p><code>nginx-2</code>:</p>
<pre><code> command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do echo hello; sleep 10; done" ]
</code></pre>
</li>
</ul>
<p>Error resp:</p>
<pre><code>* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55ad4f13df50)
* connect to <IP> port 8080 failed: Connection refused
* Failed to connect to nginx-2 80: Connection refused
* Closing connection 0
* curl: (7) Failed to connect to nginx-2 port 80: Connection refused
</code></pre>
<p>Why does this occur and is there any way around this? It appears that the deployment command does not allow the pod to respond to CURLs.</p>
<p>If the loop was run within a shell of <code>nginx-2</code>, the CURLs from <code>nginx-1</code> work.</p>
 | AnthonyT | <p>With 'command' and 'args' you are overwriting the usual Docker entrypoint. So in your pods there is no nginx process running anymore, and nothing can answer the cURL.
Maybe you should have two containers in one pod: the first container with your nginx, the second one just a busybox image where you do the curl.</p>
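<p>A rough sketch of what that could look like for <code>nginx-2</code> (names, labels and image tags are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      containers:
      - name: nginx            # entrypoint left untouched, so nginx actually listens on port 80
        image: nginx
        ports:
        - containerPort: 80
      - name: sidecar          # runs your loop next to nginx in the same pod
        image: busybox
        command: ["/bin/sh", "-c", "while true; do echo hello; sleep 10; done"]
</code></pre>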
| Johnson |
<p>I want to customize the certificate validity duration and renewal throughout the cluster. I guess doing that with a ClusterIssuer is feasible. Is there a way to do so?</p>
| Aman | <p>You can specify the duration of a self signed certificate by specifying the <code>duration</code> field in the <code>Certificate</code> CR:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: example
spec:
duration: 24h
...
</code></pre>
<p>You can control how long before the certificate expires it gets renewed using the <code>renewBefore</code> field:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: example
spec:
renewBefore: 12h
...
</code></pre>
<p>Details in <a href="https://docs.cert-manager.io/en/release-0.8/reference/certificates.html#certificate-duration-and-renewal-window" rel="nofollow noreferrer">the documentation</a>.</p>
| whites11 |
<p>I have a cron job that continues to run though I have no deployments or jobs. I am running minikube:</p>
<pre><code>$ kubectl get deployments
No resources found in default namespace.
$ kubectl delete pods --all && kubectl delete jobs --all && get deployments
pod "hello-27125612-lmcb5" deleted
pod "hello-27125613-w5ln9" deleted
pod "hello-27125614-fz84r" deleted
pod "hello-27125615-htf4z" deleted
pod "hello-27125616-k5czn" deleted
pod "hello-27125617-v79hx" deleted
pod "hello-27125618-bxg52" deleted
pod "hello-27125619-d6wps" deleted
pod "hello-27125620-66b65" deleted
pod "hello-27125621-cj8m9" deleted
pod "hello-27125622-vx5kp" deleted
pod "hello-27125623-xj7nj" deleted
job.batch "hello-27125612" deleted
job.batch "hello-27125613" deleted
job.batch "hello-27125614" deleted
...
$ kb get jobs
No resources found in default namespace.
$ kb get deployments
No resources found in default namespace.
$ kb get pods
No resources found in default namespace.
</code></pre>
<p>Yet a few seconds later:</p>
<pre><code>$ kb get jobs
NAME COMPLETIONS DURATION AGE
hello-27125624 0/1 79s 79s
hello-27125625 0/1 19s 19s
</code></pre>
<p>Get the job:</p>
<pre><code>$ kubectl get job hello-27125624 -oyaml
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2021-07-29T05:44:00Z"
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
name: hello-27125624
namespace: default
ownerReferences:
- apiVersion: batch/v1
blockOwnerDeletion: true
controller: true
kind: CronJob
name: hello
uid: 32be2372-d827-4971-a659-129823de18e2
resourceVersion: "551585"
uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
spec:
backoffLimit: 6
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
spec:
containers:
- command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
image: kahunacohen/hello-kube:latest
imagePullPolicy: IfNotPresent
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
active: 1
startTime: "2021-07-29T05:44:00Z"
</code></pre>
<p>I tried this:</p>
<pre><code>$ kubectl get ReplicationController
No resources found in default namespace.
</code></pre>
<p>Here is the pod running the job:</p>
<pre><code>$ kubectl get pod hello-27125624-kc9zw -oyaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-07-29T05:44:00Z"
generateName: hello-27125624-
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
name: hello-27125624-kc9zw
namespace: default
ownerReferences:
- apiVersion: batch/v1
blockOwnerDeletion: true
controller: true
kind: Job
name: hello-27125624
uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
resourceVersion: "551868"
uid: f0c10049-b3f9-4352-9201-774dbd91d7c3
spec:
containers:
- command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
image: kahunacohen/hello-kube:latest
imagePullPolicy: IfNotPresent
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-7cw4q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-7cw4q
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
message: 'containers with unready status: [hello]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
message: 'containers with unready status: [hello]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
status: "True"
type: PodScheduled
containerStatuses:
- image: kahunacohen/hello-kube:latest
imageID: ""
lastState: {}
name: hello
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "kahunacohen/hello-kube:latest"
reason: ImagePullBackOff
hostIP: 192.168.49.2
phase: Pending
podIP: 172.17.0.2
podIPs:
- ip: 172.17.0.2
qosClass: BestEffort
startTime: "2021-07-29T05:44:00Z"
</code></pre>
<p>How do I track down who is spawning these jobs and how do I stop it?</p>
| Aaron | <p>These pods are managed by <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">cronjob controller</a>.</p>
<p>Use <code>kubectl get cronjobs</code> to list them.</p>
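<p>To stop new Jobs from being spawned, delete the owning CronJob - in your case it shows up as <code>hello</code> in the Job's <code>ownerReferences</code>:</p>
<pre><code>kubectl get cronjobs
kubectl delete cronjob hello
</code></pre>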
| rkosegi |
<p>In Kubernetes, we have ClusterIP/NodePort/LoadBalancer as the <strong>service</strong> types to expose pods.
When <strong>multiple</strong> endpoints are bound to one service (like a deployment), what is the policy Kubernetes uses to route the traffic to one of the endpoints? Will it always try to respect a <code>load balancing</code> policy, or is the selection random?</p>
| cherishty | <p>Kubernetes uses <a href="https://en.wikipedia.org/wiki/Iptables" rel="noreferrer">iptables</a> to distribute traffic across a set of pods, as officially <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">explained by kubernetes.io</a>. Basically what happens is when you create a <code>kind: service</code> object, K8s creates a virtual ClusterIP and instructs the kube-proxy daemonset to update iptables on each node so that requests matching that virtual IP will get load balanced across a set of pod IPs. The word "virtual" here means that ClusterIPs, unlike pod IPs, are not real IP addresses allocated by a network interface, and are merely used as a "filter" to match traffic and forward them to the right destination.</p>
<p>Kubernetes documentation says the load balancing method by default is round robin, but this is not entirely accurate. If you look at iptables on any of the worker nodes, you can see that for a given service <code>foo</code> with ClusterIP of 172.20.86.5 and 3 pods, the [overly simplified] iptables rules look like this:</p>
<pre><code>$ kubectl get service foo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo ClusterIP 172.20.86.5 <none> 443:30937/TCP 12m
</code></pre>
<pre><code>Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-4NIQ26WEGJLLPEYD tcp -- anywhere 172.20.86.5 /* default/foo:https cluster IP */ tcp dpt:https
</code></pre>
<p>This <code>KUBE-SERVICES</code> chain rule looks for all traffic whose <code>destination</code> is 172.20.86.5, and applies rules defined in another chain called <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code>:</p>
<pre><code>Chain KUBE-SVC-4NIQ26WEGJLLPEYD (2 references)
target prot opt source destination
KUBE-SEP-4GQBH7D5EV5ANHLR all -- anywhere anywhere /* default/foo:https */ statistic mode random probability 0.33332999982
KUBE-SEP-XMNJYETXA5COSMOZ all -- anywhere anywhere /* default/foo:https */ statistic mode random probability 0.50000000000
KUBE-SEP-YGQ22DTWGVO4D4MM all -- anywhere anywhere /* default/foo:https */
</code></pre>
<p>This chain uses <code>statistic mode random probability</code> to randomly send traffic to one of the three chains defined (since I have three pods, I have three chains here each with 33.3% chance of being chosen to receive traffic). Each one of these chains is the final rule in sending the traffic to the backend pod IP. For example looking at the first one:</p>
<pre><code>Chain KUBE-SEP-4GQBH7D5EV5ANHLR (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere /* default/foo:https */ tcp to:10.100.1.164:12345
</code></pre>
<p>the <code>DNAT</code> directive forwards packets to IP address 10.100.1.164 (real pod IP) and port 12345 (which is what <code>foo</code> listens on). The other two chains (<code>KUBE-SEP-XMNJYETXA5COSMOZ</code> and <code>KUBE-SEP-YGQ22DTWGVO4D4MM</code>) are similar except each will have a different IP address.</p>
<p>Similarly, if your service type is <code>NodePort</code>, Kubernetes assigns a random port (from 30000-32767 by default) on the node. What's interesting here is that there is no process on the worker node actively listening on this port - instead, this is yet another iptables rule to match traffic and send it to the right set of pods:</p>
<pre><code>Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-SVC-4NIQ26WEGJLLPEYD tcp -- anywhere anywhere /* default/foo:https */ tcp dpt:30937
</code></pre>
<p>This rule matches inbound traffic going to port 30937 (<code>tcp dpt:30937</code>), and forwards it to chain <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code>. But guess what: <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code> is the same exact chain that cluster ip 172.20.86.5 matches on and sends traffic to, as shown above.</p>
| Arian Motamedi |
<p>How can I use <code>kubectl</code> to list all the installed operators in my cluster? For instance running:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.9/kubegres.yaml
</code></pre>
<p>installs the <strong>Kubegres</strong> (Postgres cluster provider) operator, but then how do I actually see that in a list of operators. Equally important to that, how do I uninstall the operator from my cluster via <code>kubectl</code>, or is that not possible to do?</p>
| hotmeatballsoup | <p>Unless you are using <a href="https://github.com/operator-framework/operator-lifecycle-manager" rel="noreferrer">OLM</a> to manage operator, there is no universal way to get rid of it.</p>
<p>Some operator might be installed using <a href="https://helm.sh" rel="noreferrer">Helm</a>, then it's just matter of <code>helm delete ...</code></p>
<p>You can always try to remove it using</p>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.9/kubegres.yaml
</code></pre>
<p>Generally speaking, to remove something, use the same tool that you used for installation.</p>
| rkosegi |
<p>Is there any default options in kubernetes, to trigger some actions when the resource gets deleted from the cluster?</p>
| Karthiknathan.C | <p>The default Kubernetes way of doing this is to use an operator.</p>
<p>In a nutshell, you have a software running that is watching resources (<code>Namespaces</code> in your case) and react when some namespace changes (deleted in your case).</p>
<p>You might want to add <code>finalizers</code> to Namespaces for proper cleanup.</p>
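<p>A finalizer is just an entry in the object's metadata; a minimal sketch (the finalizer name is an arbitrary example - your operator would watch for the <code>deletionTimestamp</code>, run its cleanup, then remove the finalizer so the Namespace can actually be deleted):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  finalizers:
  - example.com/cleanup     # blocks deletion until your controller removes it
</code></pre>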
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">the documentation</a> for more details.</p>
| whites11 |
<p>On an Azure AKS cluster with the Calico network policies plugin enabled, I want to:</p>
<ol>
<li>by default block all incoming traffic.</li>
<li>allow all traffic within a namespace (from a pod in a namespace, to another pod in the <strong>same</strong> namespace.</li>
</ol>
<p>I tried something like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny.all
namespace: test
spec:
podSelector: {}
policyTypes:
- Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow.same.namespace
namespace: test
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
policyTypes:
- Ingress
</code></pre>
<p>But is seems to block traffic between two deployments/pods in the same namespace. What am I doing wrong, am I misreading the documentation?</p>
<p>Perhaps it is good to mention that the above setup seems to work on an AWS EKS based Kubernetes cluster.</p>
| Wouter | <p>After investigation it turned out that:</p>
<ol>
<li>I used terraform to create a k8s cluster with two node pools. System, and worker. (Note: that this is (not yet) possible in the GUI).</li>
<li>Both node pools are in different subnets (system subnet, and worker subnet).</li>
<li>AKS configures kubeproxy to masquerade traffic that goes outside the system subnet.</li>
<li>Pods are deployed on the worker node, and thus use the worker subnet. All traffic that they send outside the node they are running on, is masqueraded.</li>
<li>Calico managed iptables drop the masqueraded traffic. I did not look into more details here.</li>
<li>However, if I change the kubeproxy masquerade setting to either a larger CIDR range, or remove it all together, it works. Azure however resets this setting after a while.</li>
</ol>
<p>In conclusion. I tried to use something that is not yet supported by Azure. I now use a single (larger) subnet for both node pools.</p>
| Wouter |
<p>I'm attempting to run <code>kubeadm init</code> on a brand new VM initialised for running Kubernetes.</p>
<p>I'm following along some course notes so all should be fine but am getting:</p>
<pre><code>vagrant@c1-master1:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c1-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c1-master1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c1-master1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>but <code>kubelet</code> seems OK:</p>
<pre><code>vagrant@c1-master1:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2019-11-22 15:15:52 UTC; 20min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 11188 (kubelet)
Tasks: 15 (limit: 547)
CGroup: /system.slice/kubelet.service
└─11188 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -
</code></pre>
<p>Any suggestions what the problem might be or where to start debugging this?</p>
| Snowcrash | <p>I would suggest the steps below:</p>
<p>1 ) Try running <code>kubeadm reset</code>.</p>
<p>2 ) If it doesn't help - try running <code>kubeadm init</code> again with latest version or with the specific version by adding the <code>--kubernetes-version=X.Y.Z</code> flag.</p>
<p>3 ) Try restarting Kubelet.</p>
<p>4 ) Check the node's firewall rules and open just the relevant ports.</p>
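<p>5 ) The preflight warning in your output ("detected cgroupfs as the Docker cgroup driver. The recommended driver is systemd") is also worth addressing - a cgroup driver mismatch between Docker and the kubelet is a common reason for the control plane pods never coming up. A possible fix (an assumption, verify against your setup) is switching Docker to the systemd driver in <code>/etc/docker/daemon.json</code>, then restarting Docker and running <code>kubeadm reset</code> / <code>kubeadm init</code> again:</p>
<pre><code>{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
</code></pre>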
| Rotem jackoby |
<p>My ingress.yml file is bellow</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "example-issuer"
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-service
servicePort: http
path: /
tls:
- secretName: example-tls-cert
hosts:
- example.com
</code></pre>
<p>After changing apiVersion: networking.k8s.io/v1beta1 to networking.k8s.io/v1 getting bellow error.</p>
<p>error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend)</p>
 | Parnit Das | <p>Try the configuration below, which uses the <code>networking.k8s.io/v1</code> backend syntax:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "example-issuer"
spec:
rules:
- host: example.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: example-service
port:
number: 80
tls:
- secretName: example-tls-cert
hosts:
- example.com
</code></pre>
| Nanhe Kumar |
<p>Is there any kubernetes api or kubectl command to delete older docker images that is lying on the device.</p>
<p>I know we can delete by using docker rm image but i want to do remotely through API.</p>
<p>Any alternative?</p>
| Jayashree Madanala | <p>The <code>kubelet</code> removes unused images automatically when the docker disk fullness reaches a configurable threshold.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images</a> for details on how that works.</p>
<p>There is no API-lead way to forcefully delete an image.</p>
<p>If you really need to manually clean up an image from nodes, you could run a container that connects to the docker daemon and runs <code>docker rmi <image></code> there, but it smells like an antipattern to me.</p>
| whites11 |
<p>I have been trying to use the <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">helm hook</a> example from this page. I had just changed the command to <code>touch /tmp/flag</code> from the first example. But that doesn't seem to get executed; I couldn't find any flag file in /tmp. Is there any way I can attach the stdout of the hook command and visualize its output?</p>
 | Shivendra Mishra | <p>Helm hooks run in their own containers.</p>
<p>So your file <code>/tmp/flag</code> is touched inside the hook container, which is then discarded upon completion.</p>
<p>Use a PVC to share a filesystem between the hook Pod and your application Pod.</p>
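<p>A rough sketch of that approach (the PVC name is a placeholder that your application pod would also mount; the hook annotations are the standard Helm ones):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: touch-flag-hook
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  restartPolicy: Never
  containers:
  - name: touch-flag
    image: busybox
    command: ["/bin/sh", "-c", "touch /shared/flag"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: my-shared-pvc   # assumed claim, mounted by the application pod too
</code></pre>
<p>To see the hook's stdout, run <code>kubectl logs</code> on the hook pod while it exists, or drop the <code>hook-delete-policy</code> annotation so the completed pod sticks around for inspection.</p>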
| rkosegi |
<p>I'm trying to understand basic Kubernetes concepts but its documentation a bit confusing, as for me.</p>
<p>For example, the <code>Replication Controller</code> is mentioned in the <a href="https://kubernetes.io/docs/concepts/overview/components/?origin_team=T08E6NNJJ#kube-controller-manager" rel="nofollow noreferrer">kube-controller-manager</a>.</p>
<p>At the same time, <a href="https://kubernetes.io/docs/concepts/?origin_team=T08E6NNJJ#kubernetes-objects" rel="nofollow noreferrer">Kubernetes Concepts</a> page says about <code>ReplicaSet</code> object.</p>
<p>And only after some googling I found <a href="https://towardsdatascience.com/key-kubernetes-concepts-62939f4bc08e" rel="nofollow noreferrer">this post</a> on Medium:</p>
<blockquote>
<p>Replication Controllers perform the same function as ReplicaSets, but Replication Controllers are old school. ReplicaSets are the smart way to manage replicated Pods in 2019.</p>
</blockquote>
<p>And this is not mentioned anywhere in the official docs.</p>
<p>Can somebody please explain to me about <strong><em>Endpoints</em></strong> and <strong><em>Namespace Controllers</em></strong>?</p>
<p>Are they still "valid" Controllers - or they are also outdated/replaced by some other controller/s?</p>
| setevoy | <h3>Replica Controller Vs Replica Set</h3>
<p>The functionality of the Replication Controller and the ReplicaSet is essentially the same - both are responsible for making sure that <strong>X</strong> pods whose labels match their label selector are running in the cluster.<br>
(Where <strong>X</strong> is the value that is specified in the <code>spec.replicas</code> field in the Replica Controller / Replica Set yaml).</p>
<p>ReplicaSet is a replacement for the Replica controller and supports richer expressions for the label selector.
You can choose between 4 values of operators <code>In, NotIn, Exists, DoesNotExist</code> - see <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement" rel="nofollow noreferrer">Set-based requirement</a>.</p>
<p><strong>A rule of thumb:</strong> When you see Replica Controller is mentioned in one the docs or other tutorials - refer to it as ReplicaSet AND consider using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> instead.</p>
<hr />
<h3>Regarding Endpoints and Namespace Controllers</h3>
<p>The K8S control plane contains multiple controllers - each controller watches a desired state of the resource that it responsible for (Pods, Endpoints, Namespaces etc') via an infinite control loop - also called <a href="https://coreos.com/kubernetes/docs/latest/replication-controller.html#the-reconciliation-loop-in-detail" rel="nofollow noreferrer">Reconciliation Loop</a>.</p>
<p>When a change is made to the desired state (by external client like kubectl) the reconciliation loop detects this and attempts to mutate the existing state in order to match the desired state.</p>
<p>For example, if you increase the value of the <code>replicas</code> field from 3 to 4, the ReplicaSet controller would see that one new instance needs to be created and will make sure it is scheduled in one of the nodes on the cluster. This reconciliation process applies to any modified property of the pod template.</p>
<p>K8S supports the following controllers (at least those which I'm familiar with):</p>
<pre><code>1 ) ReplicaSet controller.
2 ) DaemonSet controller.
3 ) Job controller.
4 ) Deployment controller.
5 ) StatefulSet controller.
6 ) Service controller.
7 ) Node controller.
8 ) Endpoints controller.  # <---- Yes - it's a valid controller.
9 ) Namespace controller.  # <---- Yes - it's a valid controller.
10 ) Serviceaccounts controller.
11 ) PersistentVolume controller.
12 ) More?
</code></pre>
<p>All resides in the control plane under a parent unit which is called the 'Controller Manager'.</p>
<hr />
<h3>Additional point</h3>
<p>There is also a small difference in the syntax between Replica Controller:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 3
selector:
app: nginx
</code></pre>
<p>And the ReplicaSet which contains <code>matchLabels</code> field under the <code>selector</code>:</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: nginx
spec:
replicas: 3
selector:
matchLabels: #<-- This was added
tier: nginx
</code></pre>
| Rotem jackoby |
<p>I would like to validate deployments based on custom logic before they are scaled.
I created an admission webhook to do that, but unfortunately the scale operation is not detected by the webhook.</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: deployment-validator
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: example-name
namespace: example-namespace
path: /validate-deployment
port: 9443
failurePolicy: Ignore
matchPolicy: Equivalent
name: validation.deploy.example-domain.com
namespaceSelector: {}
objectSelector: {}
rules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- '*'
resources:
- deployment
scope: '*'
sideEffects: None
timeoutSeconds: 10
</code></pre>
<p>If I CREATE or UPDATE the deployment, the action is detected by the webhook server, also if I PATCH (kubectl patch ...).
Unfortunately if I use kubectl scale ..., the webhook server does not detect the action, and I'm unable to validate the request.</p>
<p>How can I resolve this issue?</p>
| Miklós | <p>When you run <code>kubectl scale</code> you are not actually patching the <code>Deployment</code> resource, but you are editing a subresource named <code>Scale</code> instead.</p>
<p>This is the API doc entry of the scale call: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#replace-scale-deployment-v1-apps" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#replace-scale-deployment-v1-apps</a></p>
<pre><code>PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
</code></pre>
<p>Also, I think you need the plural name for your resource.
So you might have to change the rule in your admission controller like this:</p>
<pre><code> rules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- '*'
resources:
- deployments/scale
scope: '*'
</code></pre>
<p>and that should work.</p>
| whites11 |
<p>I have two kubernetes pods running via Rancher:</p>
<p>#1 - busybox
#2 - dnsutils</p>
<p>From the pod #1:</p>
<pre><code>/ # cat /etc/resolv.conf
nameserver 10.43.0.10
search testspace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>and then</p>
<pre><code>/ # nslookup kubernetes.default
Server: 10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
/ # nslookup kubernetes.default
Server: 10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
/ # nslookup kubernetes.default
Server: 10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.43.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<p>so sometimes it works but mostly not.</p>
<p>then from the pod #2:</p>
<pre><code>nameserver 10.43.0.10
search testspace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>and then:</p>
<pre><code>/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached
/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached
/ # nslookup kubernetes.default
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.43.0.1
;; connection timed out; no servers could be reached
</code></pre>
<p>so it mostly doesn't work.</p>
<p>The same problem is when I try to reach any external hostname.</p>
<p>Also tried to troubleshoot based on article from <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">here</a></p>
<p>ConfigMap:</p>
<pre><code>kubectl -n kube-system edit configmap coredns
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
Corefile: |
.:53 {
log
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . "/etc/resolv.conf"
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . \"/etc/resolv.conf\"\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"coredns","namespace":"kube-system"}}
creationTimestamp: "2020-08-07T19:28:25Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:Corefile: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
manager: kubectl
operation: Update
time: "2020-08-24T19:22:17Z"
name: coredns
namespace: kube-system
resourceVersion: "4118524"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 1f3615b0-9349-4bc5-990b-7fed31879fa2
~
</code></pre>
<p>Any thought on that?</p>
 | JackTheKnife | <p>It turned out that the <code>kube-dns</code> service was not able to reach the CoreDNS pods:</p>
<pre><code>> kubectl get svc -o wide --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 24d k8s-app=kube-dns
</code></pre>
<p>When the CoreDNS pod on one node was queried directly, it was able to resolve:</p>
<pre><code>/ # nslookup google.com 10.42.1.18
Server: 10.42.1.18
Address: 10.42.1.18#53
Non-authoritative answer:
Name: google.com
Address: 172.217.10.110
Name: google.com
Address: 2607:f8b0:4006:802::200e
</code></pre>
<p>and another node was not:</p>
<pre><code>/ # nslookup google.com 10.42.2.37
;; connection timed out; no servers could be reached
</code></pre>
<p>which likely created problems for the <code>kube-dns</code> service.</p>
<p>In this case I decided to rebuild the problematic node and the problem went away.</p>
| JackTheKnife |
<p>Small question regarding MongoDB please.</p>
<p>I am currently using the version <strong>4.4.18</strong> of MongoDB</p>
<p>I am deploying it using this manifest in Kubernetes, and no problem at all, everything is working fine, very happy.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-config
data:
mongo.conf: |
storage:
dbPath: /data/db
ensure-users.js: |
const targetDbStr = 'test';
const rootUser = cat('/etc/k8-test/admin/MONGO_ROOT_USERNAME');
const rootPass = cat('/etc/k8-test/admin/MONGO_ROOT_PASSWORD');
const usersStr = cat('/etc/k8-test/MONGO_USERS_LIST');
const adminDb = db.getSiblingDB('admin');
adminDb.auth(rootUser, rootPass);
print('Successfully authenticated admin user');
const targetDb = db.getSiblingDB(targetDbStr);
const customRoles = adminDb
.getRoles({rolesInfo: 1, showBuiltinRoles: false})
.map(role =&gt; role.role)
.filter(Boolean);
usersStr
.trim()
.split(';')
.map(s =&gt; s.split(':'))
.forEach(user =&gt; {
const username = user[0];
const rolesStr = user[1];
const password = user[2];
if (!rolesStr || !password) {
return;
}
const roles = rolesStr.split(',');
const userDoc = {
user: username,
pwd: password,
};
userDoc.roles = roles.map(role =&gt; {
if (!~customRoles.indexOf(role)) {
return role;
}
return {role: role, db: 'admin'};
});
try {
targetDb.createUser(userDoc);
} catch (err) {
if (!~err.message.toLowerCase().indexOf('duplicate')) {
throw err;
}
}
});
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: mongo
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 30
containers:
- name: mongo
image: docker.io/mongo:4.4.18
# image: docker.io/mongo:6.0
command: ["/bin/sh"]
args: ["-c", "mongod --replSet=rs0 --bind_ip_all"]
resources:
limits:
cpu: 1000m
memory: 1G
requests:
cpu: 100m
memory: 1G
ports:
- containerPort: 27017
name: mongo
protocol: TCP
volumeMounts:
- name: data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: nfs-1
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
name: mongo
clusterIP: None
</code></pre>
<p>Now, I just want to bump the version up to <strong>6.0</strong>, literally just replacing this one line (the one commented out), leaving everything else exactly the same.</p>
<p>I then deploy this new version, and unfortunately, this happens.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/mongo-0 0/1 CrashLoopBackOff 1 (10s ago) 24s
</code></pre>
<p>When tailing the log, I do see:</p>
<pre><code>{"t":{"$date":"2022-12-07T06:50:10.048+00:00"},"s":"F", "c":"CONTROL", "id":20573, "ctx":"initandlisten","msg":"Wrong mongod version","attr":{"error":"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \"featureCompatibilityVersion\", version: \"4.2\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures."}}
</code></pre>
<p>I went to read the docs, but they are mainly about migrating to 5.0. May I ask what I am missing for 6.0, please?</p>
<p>Thank you</p>
| PatPanda | <p>The error occurs because the MongoDB version is not compatible with the data files mounted to <code>/data/db</code>.</p>
<p>The gap between versions is too large. As @WernfriedDomscheit commented, you will need to upgrade it in steps:</p>
<ul>
<li>4.4 to 5.0 <a href="https://www.mongodb.com/docs/v6.0/release-notes/5.0-upgrade-replica-set/" rel="nofollow noreferrer">https://www.mongodb.com/docs/v6.0/release-notes/5.0-upgrade-replica-set/</a></li>
<li>5.0 to 6.0 <a href="https://www.mongodb.com/docs/v6.0/release-notes/6.0-upgrade-replica-set/" rel="nofollow noreferrer">https://www.mongodb.com/docs/v6.0/release-notes/6.0-upgrade-replica-set/</a></li>
</ul>
<p>If the dataset size allows, you can shortcut it by <a href="https://www.mongodb.com/docs/database-tools/mongodump/" rel="nofollow noreferrer">backing up your data</a> from v4.4, starting v6.0 with a new, empty volume mounted to <code>/data/db</code>, and <a href="https://www.mongodb.com/docs/database-tools/mongorestore/" rel="nofollow noreferrer">restoring</a> the database from the backup.</p>
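<p>For illustration, a minimal sketch of that shortcut with the standard database tools (host name, port and paths are placeholders; add <code>--username</code>/<code>--password</code> if authentication is enabled):</p>
<pre><code># 1. Dump everything from the running 4.4 instance
mongodump --host mongo-0.mongo --port 27017 --out /backup/dump

# 2. Start the 6.0 StatefulSet against a fresh, empty volume, then restore
mongorestore --host mongo-0.mongo --port 27017 /backup/dump
</code></pre>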
| Alex Blex |
<p>I have an architecture with multiple pods subscribing to a GCP Topic.</p>
<p>Every pod handles messages while it's up but is not interested in receiving messages it missed when it was not up.</p>
<p>In ActiveMQ these were non-persistent messages, but I don't see the equivalent in GCP.
The only thing I have thought of is message lifetime, with a minimum of 10 minutes.</p>
<p>Is this possible in GCP and where can it be configured ?</p>
| pmpm | <p>There is no option in Cloud Pub/Sub to disable storage. You have two options.</p>
<ol>
<li><p>As you suggest, set the message retention duration to the minimum, 10 minutes. This does mean you'll get messages that are up to ten minutes old when the pod starts up. The disadvantage to this approach is that if there is an issue with your pod and it falls more than ten minutes behind in processing messages, then it will not get those messages, even when it is up.</p>
</li>
<li><p>Use the <a href="https://cloud.google.com/pubsub/docs/replay-overview#seek_to_a_time" rel="nofollow noreferrer">seek operation</a> and seek to seek forward to the current timestamp. When a pod starts up, the first thing it could do is issue a Seek command that would acknowledge all messages before the provided timestamp. Note that this operation is eventually consistent and so it is possible you may still get some older messages when you initially start your subscriber (or, if you are using push, once your endpoint is up). Also keep in mind that the seek operation is an administrative operation and therefore is limited to <a href="https://cloud.google.com/pubsub/quotas" rel="nofollow noreferrer">6,000 operations a minute</a> (100 operations per second). Therefore, if you have a lot of pods that are often restarting, you may run into this limit.</p>
</li>
</ol>
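<p>As an illustration of option 2, a minimal sketch using the gcloud CLI (the subscription name <code>my-sub</code> is a placeholder); seeking to the current time acknowledges everything published before it:</p>
<pre><code># run this when the pod starts up
gcloud pubsub subscriptions seek my-sub --time=$(date -u +%Y-%m-%dT%H:%M:%SZ)
</code></pre>
<p>The same seek call is also available in the client libraries if you prefer to issue it from the pod's startup code instead of shelling out.</p>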
| Kamal Aboul-Hosn |
<p>So I'm trying to bring up my kubernetes dashboard (remote server) but I'm having issues. How do I resolve this issue?</p>
<ol>
<li>using <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a></li>
</ol>
<blockquote>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml</a></p>
</blockquote>
<ol start="2">
<li>Created a ServiceAccount</li>
</ol>
<blockquote>
<p>kubectl create serviceaccount dashboard-admin-sa</p>
</blockquote>
<ol start="3">
<li>Created an RBAC profile</li>
</ol>
<blockquote>
<p>kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa</p>
</blockquote>
<p>When I load the page I get this not the kubernetes dashboard</p>
<pre><code>{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/crd-informer-synced",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/livez",
"/livez/etcd",
"/livez/log",
"/livez/ping",
"/livez/poststarthook/crd-informer-synced",
"/livez/poststarthook/generic-apiserver-start-informers",
"/livez/poststarthook/start-apiextensions-controllers",
"/livez/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/readyz",
"/readyz/etcd",
"/readyz/log",
"/readyz/ping",
"/readyz/poststarthook/crd-informer-synced",
"/readyz/poststarthook/generic-apiserver-start-informers",
"/readyz/poststarthook/start-apiextensions-controllers",
"/readyz/poststarthook/start-apiextensions-informers",
"/readyz/shutdown",
"/version"
]
}
</code></pre>
<p><strong>Details:</strong></p>
<blockquote>
<p>kubectl config view</p>
</blockquote>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://100.xx.xx.x27:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<blockquote>
<p>kubectl get svc --all-namespaces</p>
</blockquote>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h19m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 7h19m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.110.162.231 <none> 8000/TCP 84m
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.104.136.25 <none> 443/TCP 84m
</code></pre>
<blockquote>
<p>kubectl get pods --all-namespaces</p>
</blockquote>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-jk8ql 1/1 Running 1 7h27m
kube-system coredns-66bff467f8-wxsnf 1/1 Running 1 7h27m
kube-system etcd-ip-100-xx-xx-x27 1/1 Running 1 7h28m
kube-system kube-apiserver-ip-100-xx-xx-x27 1/1 Running 1 7h28m
kube-system kube-controller-manager-ip-100-xx-xx-x27 1/1 Running 1 7h28m
kube-system kube-proxy-vbddf 1/1 Running 1 7h27m
kube-system kube-scheduler-ip-100-xx-xx-x27 1/1 Running 1 7h28m
kube-system weave-net-cfk2m 2/2 Running 3 7h27m
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-fwljp 1/1 Running 0 93m
kubernetes-dashboard kubernetes-dashboard-7f99b75bf4-x2hpq 1/1 Running 0 93m
</code></pre>
| Lacer | <p>Here is a really good guide that I would suggest following for setting up the Kubernetes dashboard - <a href="https://jhooq.com/setting-up-kubernetes-dashboard/#kubernetes-dashboard-local-cluster" rel="nofollow noreferrer">https://jhooq.com/setting-up-kubernetes-dashboard/#kubernetes-dashboard-local-cluster</a></p>
<p>But what I see here is:</p>
<ol>
<li>Keep the <code>kubectl proxy</code> running, otherwise you will not be able to access the dashboard and it might result in an HTTP 404.</li>
<li>Also check the validity of the token.</li>
<li>Check the service account; here is what I used for the service account:</li>
</ol>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
EOF
</code></pre>
<ol start="4">
<li>ClusterRoleBinding</li>
</ol>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
EOF
</code></pre>
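<p>To cover points 1 and 2 above, something like the following retrieves the token and keeps the proxy running (a sketch for clusters where the service account token secret is created automatically; the grep/awk part is just a convenience):</p>
<pre><code># print the admin-user token to paste into the dashboard login screen
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

# keep this running while you use the dashboard
kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
</code></pre>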
<p>I hope this solves your issue. If not, please check the guide and compare it against the steps you followed while setting up the dashboard.</p>
| Rahul Wagh |
<p>As I can see in the diagram below, I figure that in Kubernetes we have two <strong>load balancers</strong>: one load-balances between nodes and the other load-balances between pods.</p>
<p>If I use them both, I have two <strong>load balancers</strong>.</p>
<p>Imagine a user wants to connect to <code>10.32.0.5</code>: Kubernetes sends the request to <code>node1(10.0.0.1)</code> and after that forwards it to the pod <code>(10.32.0.5)</code> on <code>node3(10.0.0.3)</code>, but this is wasteful because the best route would be to send the request to <code>node3(10.0.0.3)</code> directly.</p>
<p>Why is NodePort insufficient for load balancing?</p>
<p>Why is NodePort not a LoadBalancer? (It load-balances between pods on different nodes, so why do we need another load balancer?)</p>
<p>Note: I know that if I use NodePort and the node goes down it creates a problem, but I can use keepalived for that. The question is:</p>
<p>why do we need to load-balance between nodes? keepalived attracts all requests to one IP.
Why do we have two load balancers?<a href="https://i.stack.imgur.com/pIRU3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pIRU3.png" alt="enter image description here"></a></p>
| yasin lachini | <p>Whether you have two load balancers depends on your setup.</p>
<p>In your example you have 3 nginx pods and 1 nginx service to access the pods. The service builds an abstraction layer, so you don't have to know how many pods there are and what IP addresses they have. You just have to talk to the service and it will loadbalance to one of the pods (<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">docs</a>).</p>
<p>It now depends on your setup how you access the service:</p>
<ul>
<li>you might want to publish the service via NodePort. Then you can directly access the service on a node.</li>
<li>you might also publish it via LoadBalancer. This gives you another level of abstraction and the caller needs to know less about the actual setup of your cluster.</li>
</ul>
<p>See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">docs</a> for details.</p>
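<p>For illustration, a minimal sketch of publishing such a service (reusing the <code>app: nginx</code> selector from the question); switching <code>type</code> between <code>NodePort</code> and <code>LoadBalancer</code> is the only change needed:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort        # or LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # optional, only meaningful for NodePort
</code></pre>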
| D-rk |
<p>I use <code>kubectl rollout restart deployment mydeployment-container</code> as an ad-hoc way to spin up a new container without starting a whole pod over again.</p>
<p>Another behavior I like about this command is that it brings up the new container and gracefully switches once the readiness probe passes before terminating the old container.</p>
<p>Except when using Persistent Volumes Claims:</p>
<p>When I try to use the same command, my new container stays in ContainerCreating with a <code>Multi-Attach error</code>. It seems as though the persistent volume claim is effectively blocking the creation of a new container destined to use said persistent volume claim!</p>
<p>Yes, they are configured for <code>ReadWriteOnce</code> for internal policy reasons, but no, I can't change that.</p>
<p>I come here because I'm hoping to learn a new workflow that will come close to the one I currently use.</p>
| qwerty10110 | <p>What is your <code>strategy</code>? If you replace <code>RollingUpdate</code> with <code>Recreate</code>, Kubernetes will destroy the original pod before starting its successor, thus detaching the volume before it is mounted again.</p>
<pre><code>strategy:
type: Recreate
</code></pre>
| Bimal |
<p>I would love to have a way to use kubectl to filter exceptions from a bunch of pods.</p>
<p>For example, suppose I have the following pods:</p>
<pre><code>Service-1-1
Service-1-2
Service-1-3
Service-1-4
</code></pre>
<p>Until now I moved one by one and executed:</p>
<pre><code>k logs Service-1-1 | grep exception
k logs Service-1-2 | grep exception
k logs Service-1-3 | grep exception
k logs Service-1-4 | grep exception
</code></pre>
<p>Can I have combined filtering for all of the pods at once?</p>
| dushkin | <p>If the pods have a subset of labels you can filter them with, then you can use a label selector to do so.</p>
<p>Let's say all 4 pods have the label: <code>app: service</code> (and no other pods have the same label) then you can run:</p>
<pre><code>kubectl logs -l app=service | grep exception
</code></pre>
<p>See documentation here: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs</a></p>
| whites11 |
<p>I am doing a POC around the kubernetes Go Operator to perform some asynchronous actions in an application, and I expect to get a callback from a python application into the Operator, which can then go ahead and update the resource metadata, such as making changes to the Resource status fields.</p>
<p>I know that the controller used by the Kubernetes Go Operator SDK uses an API Server running on a specific port. But can that be used as a custom API server where I can set up paths for the webhook to work on?</p>
<p>Example of an expected callback API:</p>
<pre><code>curl -XPOST http://cyber-operator.svc/application/updateClusterState
</code></pre>
<p>I expect to run a procedure inside the operator when this API is called.</p>
<p>I searched the documentation and could not find something relevant. Can I run a separate API Server in the Operator? I am fine if it has to listen to a different port than the in-built controller.</p>
| Aditya Aggarwal | <p>operator-sdk doesn't start an API server; usually it list-watches k8s resources and reconciles them, unless you explicitly add a validating/mutating webhook (<a href="https://github.com/operator-framework/operator-sdk/blob/7e029625dde8f0d4cb88ac914af4deb7f5f85c4a/website/content/en/docs/building-operators/golang/webhooks.md" rel="nofollow noreferrer">https://github.com/operator-framework/operator-sdk/blob/7e029625dde8f0d4cb88ac914af4deb7f5f85c4a/website/content/en/docs/building-operators/golang/webhooks.md</a>).</p>
<p>Even if it were possible, I suggest not doing this; just create a new HTTP server listening on a separate port.</p>
| jxiewei |
<p>I see there are 2 separate metrics <code>ApproximateNumberOfMessagesVisible</code> and <code>ApproximateNumberOfMessagesNotVisible</code>.</p>
<p>Using number of messages visible causes processing pods to get triggered for termination immediately after they pick up the message from queue, as they're no longer visible. If I use number of messages not visible, it will not scale up.</p>
<p>I'm trying to scale a kubernetes service using horizontal pod autoscaler and external metric from SQS. Here is template external metric:</p>
<pre><code>apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
name: metric-name
spec:
name: metric-name
queries:
- id: metric_name
metricStat:
metric:
namespace: "AWS/SQS"
metricName: "ApproximateNumberOfMessagesVisible"
dimensions:
- name: QueueName
value: "queue_name"
period: 60
stat: Average
unit: Count
returnData: true
</code></pre>
<p>Here is HPA template:</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
name: hpa-name
spec:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: deployment-name
minReplicas: 1
maxReplicas: 50
metrics:
- type: External
external:
metricName: metric-name
targetAverageValue: 1
</code></pre>
<p>The problem would be solved if I could define another custom metric that is the sum of these two metrics. How else can I solve this problem?</p>
| Chakradar Raju | <p>We used a lambda to fetch the two metrics and publish a custom metric that is the sum of messages in flight and waiting, and triggered this lambda using CloudWatch Events at whatever frequency you want: <a href="https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create" rel="nofollow noreferrer">https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create</a></p>
<p>Here is lambda code for reference:</p>
<pre><code>const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({region: ''}); // fill region here
const sqs = new AWS.SQS();
const SQS_URL = ''; // fill queue url here
async function getSqsMetric(queueUrl) {
var params = {
QueueUrl: queueUrl,
AttributeNames: ['All']
};
return new Promise((res, rej) => {
sqs.getQueueAttributes(params, function(err, data) {
if (err) rej(err);
else res(data);
});
})
}
function buildMetric(numMessages) {
return {
Namespace: 'yourcompany-custom-metrics',
MetricData: [{
MetricName: 'mymetric',
Dimensions: [{
Name: 'env',
Value: 'prod'
}],
Timestamp: new Date(),
Unit: 'Count',
Value: numMessages
}]
}
}
async function pushMetrics(metrics) {
await new Promise((res) => cloudwatch.putMetricData(metrics, (err, data) => {
if (err) {
console.log('err', err, err.stack); // an error occurred
res(err);
} else {
console.log('response', data); // successful response
res(data);
}
}));
}
exports.handler = async (event) => {
console.log('Started');
const sqsMetrics = await getSqsMetric(SQS_URL).catch(console.error);
var queueSize = null;
if (sqsMetrics) {
console.log('Got sqsMetrics', sqsMetrics);
if (sqsMetrics.Attributes) {
queueSize = parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessages) + parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessagesNotVisible);
console.log('Pushing', queueSize);
await pushMetrics(buildMetric(queueSize))
}
} else {
console.log('Failed fetching sqsMetrics');
}
const response = {
statusCode: 200,
body: JSON.stringify('Pushed ' + queueSize),
};
return response;
};
</code></pre>
| Chakradar Raju |
<p>I've read the kubernetes and minikube docs and <strong>it's not explicit whether the minikube implementation supports automatic log rotation</strong> (deleting the pod logs periodically) in order to prevent the memory from being overloaded by the logs.</p>
<p>I'm not talking about the various centralized logging stacks used to collect, persist and analyze logs, but the standard pod log management of minikube.</p>
<p>In kubernetes official documentation is specified:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log rotation, so that logs don’t consume all available storage on the node. Kubernetes currently is not responsible for rotating logs, but rather a deployment tool should set up a solution to address that. For example, in Kubernetes clusters, deployed by the kube-up.sh script, there is a logrotate tool configured to run each hour. You can also set up a container runtime to rotate application’s logs automatically, for example by using Docker’s log-opt. In the kube-up.sh script, the latter approach is used for COS image on GCP, and the former approach is used in any other environment. In both cases, by default rotation is configured to take place when log file exceeds 10MB.</p>
</blockquote>
<p>Of course, if we're not on GCP and we don't use kube-up.sh to start the cluster (or we don't use Docker as the container tool) but spin up our cluster with Minikube, what happens?</p>
| Alessandro Argentieri | <p>As per the implementation</p>
<blockquote>
<p>Minikube now uses systemd which has built in log rotation</p>
</blockquote>
<p>Refer to this <a href="https://github.com/kubernetes/minikube/issues/700#issuecomment-272249692" rel="nofollow noreferrer">issue</a>.</p>
| Bimal |
<p>I have the following service:</p>
<pre class="lang-sh prettyprint-override"><code># kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
</code></pre>
<p>When I perform a curl to the IP address of the service it works fine:</p>
<pre class="lang-sh prettyprint-override"><code># curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
</code></pre>
<p>I created an ingress so I can access the service from outside:</p>
<pre class="lang-sh prettyprint-override"><code># kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
</code></pre>
<p>When performing a curl to either 10.32.200.4, 10.32.200.7 or 10.32.200.8, I am getting an openresty 502 Bad Gateway response:</p>
<pre class="lang-sh prettyprint-override"><code>$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
</code></pre>
<p>When tailing the pod logs, I am seeing the following when performing the curl command:</p>
<pre class="lang-sh prettyprint-override"><code># kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
</code></pre>
<p>My ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: elasticsearch
name: ingress-elasticsearch
namespace: logging
spec:
rules:
- host: elasticsearch.host.com
http:
paths:
- backend:
serviceName: es-kib-opendistro-es-client-service
servicePort: 9200
path: /
tls:
- hosts:
- elasticsearch.host.com
secretName: cred-secret
status:
loadBalancer:
ingress:
- ip: 10.32.200.4
- ip: 10.32.200.7
- ip: 10.32.200.8
</code></pre>
<p>My service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: es-kib-opendistro-es
chart: opendistro-es-1.9.0
heritage: Tiller
release: es-kib
role: client
name: es-kib-opendistro-es-client-service
namespace: logging
spec:
clusterIP: 10.233.19.199
ports:
- name: http
port: 9200
protocol: TCP
targetPort: 9200
- name: transport
port: 9300
protocol: TCP
targetPort: 9300
- name: metrics
port: 9600
protocol: TCP
targetPort: 9600
- name: rca
port: 9650
protocol: TCP
targetPort: 9650
selector:
role: client
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>What is wrong with my setup?</p>
| C-nan | <p>By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.</p>
<p>Your backend service is expecting HTTPS requests though, so you need to tell the nginx ingress controller to use HTTPS.</p>
<p>You can do so by adding an annotation to the <code>Ingress</code> resource like this:</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>
<p>Details about this annotation are in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">the documentation</a>:</p>
<blockquote>
<p>Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI</p>
<p>By default NGINX uses HTTP.</p>
</blockquote>
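<p>Applied to the <code>Ingress</code> from the question, the metadata section would look roughly like this:</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>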
| whites11 |
<p>I set up Kubernetes v1.20.1 with <code>containerd</code> instead of Docker. Now I am failing to pull Docker images from my private registry (Harbor).</p>
<p>I already changed the /etc/containerd/config.toml like this:</p>
<pre><code>[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.foo.com"]
endpoint = ["https://registry.foo.com"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com".auth]
username = "admin"
password = "Harbor12345"
</code></pre>
<p>But this did not work. The pull failed with the message:</p>
<pre><code>Failed to pull image "registry.foo.com/library/myimage:latest": rpc error: code = Unknown
desc = failed to pull and unpack image "registry.foo.com/library/myimage:latest": failed to
resolve reference "registry.foo.com/library/myimage:latest": unexpected status code
[manifests latest]: 401 Unauthorized
</code></pre>
<p>My Harbor registry is available via HTTPS with a Let's Encrypt certificate. So https should not be the problem here.</p>
<p>Even when I tried to create a docker-registry secret, it did not work:</p>
<pre><code>kubectl create secret docker-registry registry.foo.com --docker-server=https://registry.foo.com --docker-username=admin --docker-password=Harbor12345 [email protected]
</code></pre>
<p>Can anybody give me an example of how to configure a private registry in Kubernetes with containerd?</p>
| Ralph | <p>Set <code>imagePullSecrets</code> in the pod/deployment specification:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: <your-private-image>
imagePullSecrets:
- name: registry.foo.com
</code></pre>
<p>More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
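<p>Two things worth checking on the node as well (a sketch; availability of the <code>--creds</code> flag may vary with your crictl version): containerd must be restarted after editing <code>config.toml</code>, and you can pull manually to verify the credentials:</p>
<pre><code># restart containerd so the registry auth in config.toml takes effect
sudo systemctl restart containerd

# optional sanity check from the node itself
sudo crictl pull --creds admin:Harbor12345 registry.foo.com/library/myimage:latest
</code></pre>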
| Dávid Molnár |
<p>I am trying to get Kafka topic lag into Prometheus and finally to the APIServer in order to utilize an external metrics HPA for my application.</p>
<p>I am getting the error <strong>no metrics returned from external metrics API</strong></p>
<pre><code>70m Warning FailedGetExternalMetric horizontalpodautoscaler/kafkademo-hpa unable to get external metric default/kafka_lag_metric_sm0ke/&LabelSelector{MatchLabels:map[string]string{topic: prices,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
66m Warning FailedComputeMetricsReplicas horizontalpodautoscaler/kafkademo-hpa invalid metrics (1 invalid out of 1), first error is: failed to get external metric kafka_lag_metric_sm0ke: unable to get external metric default/kafka_lag_metric_sm0ke/&LabelSelector{MatchLabels:map[string]string{topic: prices,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
</code></pre>
<p>This happens <strong>even though</strong> I can see the following output when querying the external API:</p>
<pre><code>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "kafka_lag_metric_sm0ke",
"singularName": "",
"namespaced": true,
"kind": "ExternalMetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>Here's the set-up:</p>
<ul>
<li>Kafka: v2.7.0</li>
<li>Prometheus: v2.26.0</li>
<li>Prometheus Adapter: v0.8.3</li>
</ul>
<p><strong>Prometheus Adapter Values</strong></p>
<pre><code>rules:
external:
- seriesQuery: 'kafka_consumergroup_group_lag{topic="prices"}'
resources:
template: <<.Resource>>
name:
as: "kafka_lag_metric_sm0ke"
metricsQuery: 'avg by (topic) (round(avg_over_time(<<.Series>>{<<.LabelMatchers>>}[1m])))'
</code></pre>
<p><strong>HPA</strong></p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: kafkademo-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: kafkademo
minReplicas: 3
maxReplicas: 12
metrics:
- type: External
external:
metricName: kafka_lag_metric_sm0ke
metricSelector:
matchLabels:
topic: prices
targetValue: 5
</code></pre>
<p><strong>HPA information</strong></p>
<pre><code>kubectl describe hpa kafkademo-hpa
Name: kafkademo-hpa
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sat, 17 Apr 2021 20:01:29 +0300
Reference: Deployment/kafkademo
Metrics: ( current / target )
"kafka_lag_metric_sm0ke" (target value): <unknown> / 5
Min replicas: 3
Max replicas: 12
Deployment pods: 3 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetExternalMetric the HPA was unable to compute the replica count: unable to get external metric default/kafka_lag_metric_sm0ke/&LabelSelector{MatchLabels:map[string]string{topic: prices,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 70m (x335 over 155m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get external metric kafka_lag_metric_sm0ke: unable to get external metric default/kafka_lag_metric_sm0ke/&LabelSelector{MatchLabels:map[string]string{topic: prices,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
Warning FailedGetExternalMetric 2m30s (x366 over 155m) horizontal-pod-autoscaler unable to get external metric default/kafka_lag_metric_sm0ke/&LabelSelector{MatchLabels:map[string]string{topic: prices,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
</code></pre>
<p><strong>-- Edit 1</strong></p>
<p>When I query the default namespace I get this:</p>
<pre><code>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/namespaces/default/kafka_lag_metric_sm0ke |jq
{
"kind": "ExternalMetricValueList",
"apiVersion": "external.metrics.k8s.io/v1beta1",
"metadata": {},
"items": []
}
</code></pre>
<p>I can see that the "items" field is empty. What does this mean?</p>
<p>What I don't seem to comprehend is the chain of events that happens behind the scenes.</p>
<p>AFAIK this is what happens. <em><strong>Is this correct?</strong></em></p>
<ul>
<li>prometheus-adapter queries Prometheus, executes the seriesQuery, computes the metricsQuery and creates "kafka_lag_metric_sm0ke"</li>
<li>It registers an endpoint with the api server for external metrics.</li>
<li>The API Server will periodically update its stats based on that endpoint.</li>
<li>The HPA checks "kafka_lag_metric_sm0ke" from the API server and performs the scaling according to the supplied values.</li>
</ul>
<p>I also don't seem to understand the significance of namespaces in all this. I can see that the stat is namespaced. Does that mean that there will be 1 stat per namespace? How does that make sense?</p>
| sm0ke21 | <p>In a long tradition of answering my own questions after I ask them, here's what's wrong with the above configuration.</p>
<p>The error lies in the prometheus-adapter yaml:</p>
<pre><code>rules:
external:
- seriesQuery: 'kafka_consumergroup_group_lag{topic="prices"}'
resources:
template: <<.Resource>>
name:
as: "kafka_lag_metric_sm0ke"
metricsQuery: 'avg by (topic) (round(avg_over_time(<<.Series>>{<<.LabelMatchers>>}[1m])))'
</code></pre>
<p>I removed <code><<.LabelMatchers>></code> and now it works:</p>
<pre><code>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/namespaces/default/kafka_lag_metric_sm0ke |jq
{
"kind": "ExternalMetricValueList",
"apiVersion": "external.metrics.k8s.io/v1beta1",
"metadata": {},
"items": [
{
"metricName": "kafka_lag_metric_sm0ke",
"metricLabels": {
"topic": "prices"
},
"timestamp": "2021-04-21T16:55:18Z",
"value": "0"
}
]
}
</code></pre>
<p>I am still unsure as to why it works. I know that <code><<.LabelMatchers>></code> in this case will be substituted with something that doesn't produce a valid query, but I don't know what it is.</p>
| sm0ke21 |
<p>I created a StatefulSet on GKE, and it provisioned a bunch of GCE disks that are attached to the pods that belong to that StatefulSet. Suppose I scale the StatefulSet to 0: the constituent pods are destroyed and the disks are released. When I scale back up, the disks are reattached and mounted inside the correct pods. </p>
<p>My questions are:</p>
<ul>
<li>How does Kubernetes keep track of which GCE disk to reconnect to which StatefulSet pod?</li>
<li>Suppose I want to restore a StatefulSet Pod's PV from a snapshot. How can I get Kubernetes to use the disk that was created from the snapshot, instead of old disk?</li>
</ul>
| Dmitry Minkovsky | <p>When you scale the StatefulSet to 0 replicas, the pods get destroyed but the persistent volumes and persistent volume claims are kept. The association with the GCE disk is written inside the PersistentVolume object. When you scale the StatefulSet up again, pods are assigned to the correct PV and thus get the same volume from GCE.</p>
<p>In order to change the persistent volume - GCE disk association after a snapshot restore, you need to edit the PV object.</p>
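<p>As an illustration of that last step, a sketch of inspecting and repointing the PV (names are placeholders; depending on your cluster version you may have to recreate the PV instead of patching it in place):</p>
<pre><code># which GCE disk does the PV currently use?
kubectl get pv <pv-name> -o jsonpath='{.spec.gcePersistentDisk.pdName}'

# point it at the disk created from the snapshot
kubectl patch pv <pv-name> --type merge \
  -p '{"spec":{"gcePersistentDisk":{"pdName":"disk-from-snapshot"}}}'
</code></pre>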
| whites11 |
<p>Currently, I am facing an issue where K8s scales up new pods on an old deployment and Rancher shows them stuck in scheduling onto the K8s worker nodes. They eventually get scheduled, but it takes some time; as I understand it, this is the scheduler waiting to find a node that fits the resource request.
In the Event section of that deployment, it shows:</p>
<p><code>Warning FailedScheduling 0/8 nodes are available: 5 Insufficient memory, 3 node(s) didn't match node selector.</code></p>
<p>Then I go to the Nodes tab to check if there is any lack of memory on the worker nodes, and it shows my worker nodes like this:</p>
<pre><code>STATE NAME ROLES VERSION CPU RAM POD
Active worker-01 Worker v1.19.5 14/19 Cores 84.8/86.2 GiB 76/110
Active worker-02 Worker v1.19.5 10.3/19 Cores 83.2/86.2 GiB 51/110
Active worker-03 Worker v1.19.5 14/19 Cores 85.8/86.2 GiB 63/110
Active worker-04 Worker v1.19.5 13/19 Cores 84.4/86.2 GiB 53/110
Active worker-05 Worker v1.19.5 12/19 Cores 85.2/86.2 GiB 68/110
</code></pre>
<p>But when I go into each server and check memory with the top and free commands, they output a similar result, like this one on the worker-01 node:</p>
<pre><code>top:
Tasks: 827 total, 2 running, 825 sleeping, 0 stopped, 0 zombie
%Cpu(s): 34.9 us, 11.9 sy, 0.0 ni, 51.5 id, 0.0 wa, 0.0 hi, 1.7 si, 0.0 st
KiB Mem : 98833488 total, 2198412 free, 81151568 used, 15483504 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 17101808 avail Mem
free -g:
total used free shared buff/cache available
Mem: 94 77 1 0 14 16
Swap: 0 0 0
</code></pre>
<p>So the memory available on the nodes is about 16-17 GB, but new pods still cannot be scheduled onto them. My question is what causes this conflict in the memory numbers: is the difference between 86.2 GB (in the Rancher GUI) and 94 GB (on the server) reserved for the OS and other processes? And why does Rancher show the K8s workload currently taking about 83-85 GB while on the server the memory available is about 16-17 GB? Is there any way to dig deeper into this?</p>
<p>I'm still learning K8s, so please explain in detail if you can, or point me to topics that cover this.</p>
<p>Thanks in advance!</p>
| UglyPrince | <p>It doesn't matter what the actual resource consumption on the worker nodes is.
What really matters is the resource requests.</p>
<p>Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource.</p>
<p>Read more about <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Resource Management for Pods and Containers</a></p>
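<p>For example, the scheduler only counts the requested amounts below, regardless of what the container actually uses at runtime (the values are just placeholders):</p>
<pre><code>resources:
  requests:
    memory: "2Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "1"
</code></pre>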
<blockquote>
<p>but my wonder is why it shows almost full of 86.2 GB when the actual
memory is 94GB</p>
</blockquote>
<p>Use <code>kubectl describe node <node name></code> to see how much memory has been made available to the <code>kubelet</code> on a particular node.</p>
<p>You will see something like</p>
<pre><code>Capacity:
cpu: 8
ephemeral-storage: 457871560Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32626320Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 457871560Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32626320Ki
pods: 110
......
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
</code></pre>
<blockquote>
<p>K8s workload currently takes about 83-85 GB but in server the memory
available is about 16-17GB.</p>
</blockquote>
<p>From the output of <code>free</code> in the question, this is not really true:</p>
<p><code>KiB Mem : 98833488 total, 2198412 free, 81151568 used, 15483504 buff/cache</code></p>
<p><code>2198412 free</code>, which is ~2GB, and you have ~15GB in buff/cache.</p>
<p>You can use <code>cat /proc/meminfo</code> to get more details about OS-level memory info.</p>
| rkosegi |
<p>Is it possible to retrieve all pods without including jobs?</p>
<pre><code>kubectl get pods
pod1 1/1 Running 1 28d
pod2 1/1 Running 1 28d
pods3 0/1 Completed 0 30m
pod4 0/1 Completed 0 30m
</code></pre>
<p>I don't want to see jobs, but only the other pods.<br>
I don't want to fetch them based on "Running" state, because I would like to verify whether all the deployments I am trying to install are "deployed".<br>
Based on that, I wanted to use the following command, but it also fetches the jobs I am trying to exclude:</p>
<pre><code>kubectl wait --for=condition=Ready pods --all --timeout=600s
</code></pre>
| Prisco | <p>Add a special label (e.g. <code>kind=pod</code>) to your job pods. Then use <code>kubectl get pods -l kind!=pod</code>.</p>
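<p>Assuming the job pods carry that label, the wait command from the question can then exclude them with the same selector (a sketch):</p>
<pre><code>kubectl wait --for=condition=Ready pod -l 'kind!=pod' --timeout=600s
</code></pre>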
| Dávid Molnár |
<p>I am currently creating a helm chart for my fullstack application, and I would like to install a nats helm chart to it as a dependency. I know I can add it as a dependency in the Charts.yml file, but how can I provide the nats chart with a Values.yml to override the default nats chart values? What I would like is that when I do a helm install for my application, it also installs the nats dependency, but with a custom values.yml.</p>
<p>chart.yml dependency section</p>
<pre><code>dependencies:
- name: nats
repository: "https://nats-io.github.io/k8s/helm/charts"
version: "0.11.0"
</code></pre>
<p>Then I run <code>helm dependency upgrade</code>. This creates a .tgz under my subcharts as follows</p>
<p><a href="https://i.stack.imgur.com/EtggE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EtggE.png" alt="enter image description here" /></a></p>
<p>I would like to override the default values.yml which are present in the nats chart. I tried by adding the following to the parent values.yml but it did not work (as taken from <a href="https://docs.nats.io/running-a-nats-service/introduction/running/nats-kubernetes/helm-charts#clustering" rel="nofollow noreferrer">https://docs.nats.io/running-a-nats-service/introduction/running/nats-kubernetes/helm-charts#clustering</a>)</p>
<pre><code>nats:
cluster:
enabled: true
replicas: 5
tls:
secret:
name: nats-server-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
</code></pre>
<p>Do I need to unpack the chart for it to work?</p>
| arthhhhh | <p>This is documented on the <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">Helm website</a>:</p>
<p><em>Overriding Values from a Parent Chart</em></p>
<p>so in your top level chart's <code>values.yaml</code>, use construct like this:</p>
<pre><code>mysubchart: # this is dependent sub-chart
subchart-value-key: .... # this is key in subchart's values.yaml
</code></pre>
<p><strong>UPDATE</strong></p>
<p>In your case</p>
<pre class="lang-yaml prettyprint-override"><code>nats:
cluster:
enabled: true
replicas: 5
</code></pre>
<p>works if I do <code>helm template .</code>, rendered statefullset seems to reflect values correctly:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: RELEASE-NAME-nats
namespace: "default"
labels:
helm.sh/chart: nats-0.11.0
app.kubernetes.io/name: nats
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.6.5"
app.kubernetes.io/managed-by: Helm
spec:
selector:
matchLabels:
app.kubernetes.io/name: nats
app.kubernetes.io/instance: RELEASE-NAME
replicas: 5 # This value has been taken from values.yaml
</code></pre>
<p>So what exactly is not working?</p>
| rkosegi |
<p>I have this basic Dockerfile:</p>
<pre><code>FROM nginx
RUN apt-get -y update && apt install -y curl
</code></pre>
<p>In the master node of my Kubernetes cluster I build that image:</p>
<pre><code>docker build -t cnginx:v1 .
</code></pre>
<p><code>docker images</code> shows that the image has been correctly generated:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cgninx v1 d3b1b19d069e 39 minutes ago 141MB
</code></pre>
<p>I use this deployment referencing this custom image:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: cnginx
image: cnginx:v1
imagePullPolicy: Never
ports:
- containerPort: 80
nodeSelector:
nodetype: webserver
</code></pre>
<p>However the image is not found: </p>
<pre><code>NAME READY STATUS RESTARTS AGE
nginx-deployment-7dd98bd746-lw6tp 0/1 ErrImageNeverPull 0 4s
nginx-deployment-7dd98bd746-szr9n 0/1 ErrImageNeverPull 0 4s
</code></pre>
<p>Describe pod info:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned nginx-deployment-7dd98bd746-szr9n to kubenode2
Normal SuccessfulMountVolume 1m kubelet, kubenode2 MountVolume.SetUp succeeded for volume "default-token-bpbpl"
Warning ErrImageNeverPull 9s (x9 over 1m) kubelet, kubenode2 Container image "cnginx:v1" is not present with pull policy of Never
Warning Failed 9s (x9 over 1m) kubelet, kubenode2 Error: ErrImageNeverPull
</code></pre>
<p>I have also tried using the default imagePullPolicy, and some other things such as tagging the image with latest...</p>
<p>So, how can I make Kubernetes use a locally generated docker image?</p>
| codependent | <p>Your PODs are scheduled on your worker nodes. Since you set <code>imagePullPolicy</code> to <code>Never</code> you need to make your image available to both nodes. In other words, you need to build it on both nodes as you did on the master.</p>
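<p>If you don't want to rebuild on every node, one way to copy the locally built image across is to stream it over SSH (a sketch; the SSH user and host name are placeholders):</p>
<pre><code>docker save cnginx:v1 | ssh user@kubenode2 'docker load'
</code></pre>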
<p>As a sidenote, it would probably be easier in the long term if you set up a custom Docker registry and push your images there.</p>
| whites11 |
<p>Lets say I have two computers; A and B. I created a kubernetes cluster using kops on AWS from computer A. How do I access the API of that cluster (like I do <code>kubectl get nodes</code>, it gives me the nodes of that cluster) using computer B? </p>
| Harith | <p>You need to configure <code>kubectl</code> by defining a configuration file.</p>
<p>Since you are using <code>kops</code> you can use the instructions they're giving you here:</p>
<p><a href="https://github.com/kubernetes/kops/blob/master/docs/kubectl.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/kubectl.md</a></p>
<pre><code>export KOPS_STATE_STORE=s3://<somes3bucket>
NAME=<kubernetes.mydomain.com>
/path/to/kops export kubecfg ${NAME}
</code></pre>
<p>You need to run the above instructions on <code>computer B</code> and it has to be correctly configured to have access to the <code><somes3bucket></code> bucket.</p>
<p>What the command will do is create a configuration file that holds the URL of your <code>apiserver</code> and the authentication certificates. If you are on a unix-like environment, that file will be created in <code>$HOME/.kube/config</code>.</p>
| whites11 |
<h1>Kubernetes Cluster Upgrades <em>The Hard Way</em></h1>
<p>What are the (high-level) steps required to upgrade a HA kubernetes cluster? </p>
<p>In the spirit of "Kubernetes the Hard Way", what are the manual steps that would form the basis of an automated process to achieve an upgrade of:</p>
<ul>
<li>the control plane components?</li>
<li>the worker components?</li>
</ul>
<p>The official docs assume the use of kubeadm, which is outside the scope of this question.</p>
| alansigudo | <p>It depends on what your current installation looks like. If the control plane components run as static pods, you need to update the YAML files in the <strong><em>/etc/kubernetes/manifests</em></strong> folder. If they run as systemd services, you need to install the latest version and reload the service.</p>
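<p>As a rough sketch of both paths (version numbers and file names are placeholders): for static pods, bump the image tag in the manifest and the kubelet will recreate the pod; for systemd-managed components, install the new binary and restart the unit:</p>
<pre><code># static pod (control plane component)
sudo sed -i 's/kube-apiserver:v1.17.0/kube-apiserver:v1.18.0/' \
  /etc/kubernetes/manifests/kube-apiserver.yaml

# systemd-managed component (e.g. kubelet on a worker)
sudo systemctl daemon-reload && sudo systemctl restart kubelet
</code></pre>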
| Bimal |
<p>I've Elasticsearch running on Kubernetes cluster (exposed using NodePort)</p>
<pre><code>helm install --name my-elasticsearch stable/elasticsearch
kubectl get svc
my-elasticsearch-client NodePort 10.123.40.199 <none> 9200:31429/TCP 107m
my-elasticsearch-discovery ClusterIP None <none> 9300/TCP 107m
</code></pre>
<p>I'm trying to connect to it remotely from my Spring Boot app running in my IDE.</p>
<p>My Spring boot <code>application.properties</code>
Note I'm using a K8s Host IP and the my-elasticsearch-client NodePort</p>
<pre><code>spring.data.elasticsearch.cluster-name=elasticsearch
spring.data.elasticsearch.cluster-nodes=10.123.45.147:31429
spring.data.elasticsearch.repositories.enabled=true
</code></pre>
<p>I can also reach the cluster node at <a href="http://10.134.39.147:31429/" rel="nofollow noreferrer">http://10.134.39.147:31429/</a> in the browser:</p>
<pre><code>{
"name" : "my-elasticsearch-client-797bc4dff6-wz2gq",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Js3zbqs_SJikVb42ZLGvpw",
"version" : {
"number" : "6.5.1",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "8c58350",
"build_date" : "2018-11-16T02:22:42.182257Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
</code></pre>
<p>GET _cluster/health</p>
<pre><code>{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 2,
"active_primary_shards" : 6,
"active_shards" : 12,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
</code></pre>
<p><strong>The problem</strong> is I'm hitting <em>org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available</em></p>
<p>I can confirm Elasticsearch is up because I can access data via Kibana running on the same cluster.</p>
<pre><code>2019-02-06 16:25:46.936 INFO 66980 --- [ restartedMain] o.s.d.e.c.TransportClientFactoryBean : Adding transport node : 10.123.45.147:31429
2019-02-06 16:26:18.330 INFO 66980 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2019-02-06 16:26:18.343 ERROR 66980 --- [ restartedMain] .d.e.r.s.AbstractElasticsearchRepository : failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{mT2_e3FzSTiDu3UHzRbOVg}{10.123.45.147}{10.123.45.147:31429}]
2019-02-06 16:26:18.536 INFO 66980 --- [ restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-02-06 16:26:18.744 INFO 66980 --- [ restartedMain] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2019-02-06 16:26:18.794 INFO 66980 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-02-06 16:26:18.796 INFO 66980 --- [ restartedMain] c.a.o.c.CustomerJourneyApplication : Started CustomerJourneyApplication in 35.786 seconds (JVM running for 37.054)
hello world, I have just started up
2019-02-06 16:26:18.861 ERROR 66980 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{mT2_e3FzSTiDu3UHzRbOVg}{10.123.45.147}{10.123.45.147:31429}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:381) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:407) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:396) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:577) ~[spring-data-elasticsearch-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:156) ~[spring-data-elasticsearch-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.index(AbstractElasticsearchRepository.java:175) ~[spring-data-elasticsearch-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:359) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:200) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:644) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:608) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.lambda$invoke$3(RepositoryFactorySupport.java:595) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:595) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) ~[spring-data-commons-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at com.sun.proxy.$Proxy84.index(Unknown Source) ~[na:na]
at com.avaya.oceana.customerjourney.service.JourneyService.putJourney(JourneyService.java:40) ~[classes/:na]
at com.avaya.oceana.customerjourney.CustomerJourneyApplication.addJourneys(CustomerJourneyApplication.java:81) ~[classes/:na]
at com.avaya.oceana.customerjourney.CustomerJourneyApplication.doSomethingAfterStartup(CustomerJourneyApplication.java:39) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:259) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:179) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:142) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:398) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:355) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE]
at org.springframework.boot.context.event.EventPublishingRunListener.running(EventPublishingRunListener.java:105) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.SpringApplicationRunListeners.running(SpringApplicationRunListeners.java:78) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:332) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE]
at com.avaya.oceana.customerjourney.CustomerJourneyApplication.main(CustomerJourneyApplication.java:26) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.1.2.RELEASE.jar:2.1.2.RELEASE]
2019-02-06 16:26:18.864 INFO 66980 --- [ restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
</code></pre>
| DarVar | <p>I found the issue.</p>
<p>The Elasticsearch client <code>TransportClient</code> that <code>spring-boot-starter-data-elasticsearch</code> auto-configures is deprecated. It gets auto-configured and needs to connect to the 9300 "transport" API, but that port is only exposed through a headless service meant for internal Kubernetes discovery, so there's no way of exposing it.</p>
<p>So, I've switched to the newer <code>RestHighLevelClient</code>:
<a href="https://www.elastic.co/guide/en/elasticsearch/client/java-rest/6.3/java-rest-high.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/client/java-rest/6.3/java-rest-high.html</a></p>
<p>Also, I needed to set <code>management.health.elasticsearch.enabled=false</code>, since the health check was trying to connect to an Elasticsearch instance on localhost.</p>
<p>I just need to figure out if <code>RestHighLevelClient</code> can be used by spring-data <code>ElasticsearchRepository</code></p>
| DarVar |
<p>We have a Kubernetes service whose pods take some time to warm up on their first requests. Basically, the first incoming requests read some cached values from Redis and might take a bit longer to process. When these newly created pods become ready and receive full traffic, they might be fairly unresponsive for up to 30 seconds, before everything is correctly loaded from Redis and cached.</p>
<p>I know we should definitely restructure the application to prevent this; unfortunately, that is not feasible in the near future (we are working on it). </p>
<p>It would be great if it were possible to reduce the weight of the newly created pods, so they would receive 1/10 of the traffic in the beginning, with the weight increasing as time passes. This would also be great for newly deployed versions of our application, to see if they behave correctly.</p>
| Vojtěch | <p>Why do you need the cache loading to happen on the first call, instead of doing it in a warm-up step hooked to the readiness probe? Another option is to make use of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#using-init-containers" rel="nofollow noreferrer">init containers</a> in Kubernetes. A sketch of the readiness-probe approach is shown below.</p>
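<p>A minimal sketch of the readiness-probe approach, assuming the application exposes a hypothetical <code>/warmup</code> endpoint that loads the Redis cache and only returns 200 once it is done (the endpoint, port and timings are assumptions to adapt to your app):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest        # your application image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /warmup           # hypothetical endpoint that loads/checks the Redis cache
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 12      # allows roughly 60s of warm-up before the probe gives up
</code></pre>
<p>Until the probe succeeds, the pod is not added to the Service endpoints, so it receives no traffic while it is still warming up.</p>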
| Bimal |
<p>If I attempt to create a Pod using the latest version of an image from Azure Container Registry, I get an "ImagePullBackOff" error. If I explicitly specify the image version, the Pod is created successfully. </p>
<p>I have created a new Image Pull Secret and can confirm that Kubernetes is pulling the image from ACR, as the Pod starts up successfully when setting the version. When I set the image name to ACR_ImageName:latest (or when omitting :latest, or when setting the imagePullPolicy to "Always"), the pod fails to create, reporting the following error: <em>note that I have replaced the ACR name and image name</em></p>
<blockquote>
<p>Warning Failed 27m (x3 over 28m) kubelet, aks-agentpool-15809833-vmss000007<br>
Failed to pull image "acrPath/imageName": [rpc error: code = Unknown desc = Error response from daemon:
manifest for acrPath/imageName:latest not found: manifest unknown: manifest unknown, rpc error: code = Unknown desc = Error response from daemon:
manifest for acrPath/imageName:latest not found: manifest unknown: manifest unknown, rpc error: code = Unknown desc = Error response from daemon:
Get <a href="https://acrPath/imageName/manifests/latest" rel="nofollow noreferrer">https://acrPath/imageName/manifests/latest</a>: unauthorized: authentication required]</p>
</blockquote>
<h1>This DOES NOT work</h1>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: k8spocfrontend
labels:
app: k8spocfrontend
type: frontend
spec:
containers:
- name: k8spocfrontend
image: dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:latest
imagePullPolicy: "Always"
imagePullSecrets:
- name: acr-auth-poc
</code></pre>
<h1>This works</h1>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: k8spocfrontend
labels:
app: k8spocfrontend
type: frontend
spec:
containers:
- name: k8spocfrontend
image: dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:2617
imagePullPolicy: "Always"
imagePullSecrets:
- name: acr-auth-poc
</code></pre>
<p>As you can see from the pods below, the pod gets created when setting the version tag.</p>
<blockquote>
<p>k8spocfront-5dbf7544f8-ccnxj | 2/2 | Running | 0 | 33m</p>
</blockquote>
| Tinus | <p>It looks like you don't have an image tagged <strong>latest</strong> in the registry. Normally you explicitly overwrite the <code>latest</code> tag so that it points at the newest version of the image.</p>
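<p>To confirm, you can list the tags that exist in the registry and, if needed, push a <code>latest</code> tag yourself. A rough sketch using the names from the question (double-check the <code>az</code> flags against your CLI version):</p>
<pre><code># list the tags that exist for the repository
az acr repository show-tags --name dteuwhorizonacrhorizonmain --repository k8spocfront --output table

# re-tag an existing version as latest and push it
docker pull dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:2617
docker tag dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:2617 dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:latest
docker push dteuwhorizonacrhorizonmain.azurecr.io/k8spocfront:latest
</code></pre>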
| Bimal |
<p>I have applied a config file like this <code>kubectl apply -f deploy/mysql.yml</code></p>
<p>How can I unapply this specific config?</p>
| Noitidart | <p>Use the <code>kubectl delete</code> command:</p>
<pre><code>kubectl delete -f deploy/mysql.yml
</code></pre>
| Bimal |
<p>Trying to get started with skaffold, I'm hitting lots of issues. So I went back to basics and tried to get the examples running:</p>
<ol>
<li>cloned <a href="https://github.com/GoogleContainerTools/skaffold" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold</a></li>
<li>ran <code>cd ~/git/skaffold/examples/getting-started/</code> and then tried to get going;</li>
</ol>
<pre><code>$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Not found. Building
Building [skaffold-example]...
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM golang:1.12.9-alpine3.10 as builder
---> e0d646523991
Step 2/8 : COPY main.go .
---> Using cache
---> fb29e25db0a3
Step 3/8 : ARG SKAFFOLD_GO_GCFLAGS
---> Using cache
---> aa8dd4cbab42
Step 4/8 : RUN go build -gcflags="${SKAFFOLD_GO_GCFLAGS}" -o /app main.go
---> Using cache
---> 9a666995c00a
Step 5/8 : FROM alpine:3.10
---> be4e4bea2c2e
Step 6/8 : ENV GOTRACEBACK=single
---> Using cache
---> bdb74c01e0b9
Step 7/8 : CMD ["./app"]
---> Using cache
---> 15c248dd54e9
Step 8/8 : COPY --from=builder /app .
---> Using cache
---> 73564337b083
Successfully built 73564337b083
Successfully tagged aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
The push refers to repository [docker.io/aliwatters/skaffold-example]
37806ae41d23: Preparing
1b3ee35aacca: Preparing
37806ae41d23: Pushed
1b3ee35aacca: Pushed
v1.18.0-2-gf0bfcccce: digest: sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac size: 739
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Deploy Failed. Could not connect to cluster microk8s due to "https://127.0.0.1:16443/version?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1"). Check your connection for the cluster.
</code></pre>
<p>So this isn't making sense to me, the <a href="https://127.0.0.1:16443/version?timeout=32s" rel="nofollow noreferrer">https://127.0.0.1:16443/version?timeout=32s</a> is kubectl by the looks of it and it has a self-signed cert (viewed in the browser)</p>
<pre><code>$ snap version
snap 2.48.2
snapd 2.48.2
series 16
ubuntu 20.04
kernel 5.4.0-60-generic
$ snap list
# ...
microk8s v1.20.1 1910 1.20/stable canonical✓ classic
# ...
$ microk8s kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:48:42Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-34+e7db93d188d0d1", GitCommit:"e7db93d188d0d12f2fe5336d1b85cdb94cb909d3", GitTreeState:"clean", BuildDate:"2021-01-11T23:50:46Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
$ skaffold version
v1.18.0
$ docker version
Client: Docker Engine - Community
Version: 19.03.4-rc1
...
</code></pre>
<p>Where do I start with debugging this?</p>
<p>Thanks for any ideas!</p>
| Ali W | <p>Solved via the github issues <a href="https://github.com/GoogleContainerTools/skaffold/issues/5283" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold/issues/5283</a> (thx briandealwis)</p>
<p>A combination of removing the snap alias and exporting the kubeconfig was needed, so skaffold can understand my setup.</p>
<pre><code>$ sudo snap unalias kubectl
$ sudo snap install kubectl --classic
$ microk8s.kubectl config view --raw > $HOME/.kube/config
$ skaffold dev --default-repo=<your-docker-repository>
</code></pre>
<p>Full output</p>
<pre><code>$ sudo snap unalias kubectl
# just in case
ali@stinky:~/git/skaffold/examples/getting-started (master)$ sudo snap install kubectl --classic
kubectl 1.20.2 from Canonical✓ installed
ali@stinky:~/git/skaffold/examples/getting-started (master)$ which kubectl
/snap/bin/kubectl
ali@stinky:~/git/skaffold/examples/getting-started (master)$ microk8s.kubectl config view --raw > $HOME/.kube/config
ali@stinky:~/git/skaffold/examples/getting-started (master)$ skaffold dev --default-repo=aliwatters
Listing files to watch...
- skaffold-example
Generating tags...
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce
Checking cache...
- skaffold-example: Found Remotely
Tags used in deployment:
- skaffold-example -> aliwatters/skaffold-example:v1.18.0-2-gf0bfcccce@sha256:a8defaa979650baea27a437318a3c4cd51c44397d6e2c1910e17d81d0cde43ac
Starting deploy...
- pod/getting-started created
Waiting for deployments to stabilize...
Deployments stabilized in 23.793238ms
Press Ctrl+C to exit
Watching for changes...
[getting-started] Hello world!
[getting-started] Hello world!
[getting-started] Hello world!
# ^C
Cleaning up...
- pod "getting-started" deleted
</code></pre>
| Ali W |
<p>I have a ConfigMap which I need to read from Kubernetes via the API.</p>
<p>I created a ClusterRole:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zrole
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list"]
</code></pre>
<p>and a ClusterRoleBinding:</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: z-role-binding
subjects:
- kind: Group
name: system:serviceaccounts
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: zrole
</code></pre>
<p>Config Map</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: z-config
namespace: fdrs
data:
avr: client1
fuss: xurbz
</code></pre>
<p>The <a href="https://github.com/kubernetes/client-go/blob/master/kubernetes/typed/core/v1/configmap.go" rel="nofollow noreferrer">code</a> is used like</p>
<p>clientSet.CoreV1().ConfigMaps(uNamespcae)</p>
<p>When I run the code locally (and provide the kubeconfig to the Go API) I am able to get the ConfigMap data.
However, when I run the code inside the cluster I get the error <code>invalid token</code>. Any idea what I am missing here? </p>
| Jon lib | <p>Check <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#pod-v1-core" rel="nofollow noreferrer"><code>automountServiceAccountToken</code></a> in the pod spec. By default it's set to <code>true</code>, but maybe you have it disabled.</p>
<p>Use the official GO client. It reads the correct configuration and tokens by default. <a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go</a></p>
<p>If you don't use it, then use the correct configuration:
<a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api-1" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api-1</a></p>
<p>Check the token in the pod: <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> and use the <code>kubernetes</code> service.</p>
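<p>For reference, here is a minimal sketch of reading that ConfigMap with the in-cluster configuration (namespace <code>fdrs</code> and name <code>z-config</code> taken from the question; note that in client-go versions before v0.18 the <code>Get</code> call does not take a <code>context</code> argument):</p>
<pre><code>package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build the config from the service account token and CA mounted into the pod.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Read the ConfigMap from the question (namespace fdrs, name z-config).
	cm, err := clientset.CoreV1().ConfigMaps("fdrs").Get(context.TODO(), "z-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cm.Data)
}
</code></pre>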
| Dávid Molnár |
<p>Just curious if it is possible to execute a command inside <code>minikube</code> without doing <code>minikube ssh</code> and then executing the command.</p>
<p>Something like:</p>
<p><code>minikube ssh exec -it <command></code></p>
| cjones | <p>According to the minikube documentation (<a href="https://minikube.sigs.k8s.io/docs/reference/commands/ssh/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/reference/commands/ssh/</a>) there is no such option.</p>
| Dávid Molnár |
<p>I am using <code>kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml</code> to create deployment.</p>
<p>I want to create deployment in my namespace <code>examplenamespace</code>.</p>
<p>How can I do this?</p>
| favok20149 | <p>There are three possible solutions.</p>
<ol>
<li>Specify namespace in the <code>kubectl</code> <code>apply</code> or <code>create</code> command:</li>
</ol>
<pre><code>kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n my-namespace
</code></pre>
<ol start="2">
<li>Specify namespace in your <code>yaml</code> files:</li>
</ol>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
namespace: my-namespace
</code></pre>
<ol start="3">
<li>Change default namespace in <code>~/.kube/config</code>:</li>
</ol>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: "k8s-dev-cluster-01"
cluster:
server: "https://example.com/k8s/clusters/abc"
namespace: "my-namespace"
</code></pre>
| Dávid Molnár |
<p>One of the points in the <code>kubectl</code> <a href="https://kubernetes.io/docs/reference/kubectl/conventions/#best-practices" rel="nofollow noreferrer">best practices section in Kubernetes Docs</a> state below:</p>
<blockquote>
<p>Pin to a specific generator version, such as <code>kubectl run
--generator=deployment/v1beta1</code></p>
</blockquote>
<p>But then a little down in the doc, we get to learn that except for Pod, the use of <code>--generator</code> option is deprecated and that it would be removed in future versions.</p>
<p>Why is this being done? Doesn't generator make life easier in creating a template file for resource definition of deployment, service, and other resources? What alternative is the kubernetes team suggesting? This isn't there in the docs :(</p>
| karthiks | <p>For deployment you can try</p>
<pre><code>kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
</code></pre>
<p><strong>Note</strong>: <code>kubectl run --generator</code> values other than <code>run-pod/v1</code> are deprecated in v1.12.</p>
| Bimal |
<p>I have a Kubernetes cluster on a local machine and one Raspberry Pi. In order to test the cluster I created an nginx deployment and a service that I want to access as a NodePort. But, God knows why, I can't access said service. Below are my deployment and service files.</p>
<p><code>kubectl get nodes</code></p>
<pre><code>NAME STATUS ROLES AGE VERSION
anima Ready master 7d5h v1.16.1
bahamut Ready <none> 7d4h v1.16.1
</code></pre>
<p>My service and deployment files:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
nodeSelector:
kubernetes.io/hostname: bahamut
</code></pre>
<pre><code>
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- port: 3030
targetPort: 80
type: NodePort
</code></pre>
<p>After <code>kubectl get pods -o wide</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-67c8c4b564-6x7g5 1/1 Running 0 6m21s 10.244.1.13 bahamut <none> <none>
</code></pre>
<p>My Deployments, <code>kubectl get deployments -o wide</code>:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1/1 1 1 7m55s nginx nginx:latest app=nginx
</code></pre>
<p>My Services, <code>kubectl get svc -o wide</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d5h <none>
nginx NodePort 10.102.203.77 <none> 3030:30508/TCP 8m54s app=nginx
</code></pre>
<p>And finally, <code>kubectl get endpoints -o wide</code>:</p>
<pre><code>NAME ENDPOINTS AGE
kubernetes 192.168.25.4:6443 7d5h
nginx 10.244.1.13:80 9m41s
</code></pre>
<p>My Kubernetes master local IP is <code>192.168.25.4</code> and my raspberry ip is <code>192.168.25.6</code>. After deploying the service I tried:</p>
<pre><code>curl 192.168.25.6:3030
curl: (7) Failed to connect to 192.168.25.6 port 3030: Connection refused
curl 192.168.25.6:80
curl: (7) Failed to connect to 192.168.25.6 port 80: Connection refused
curl 192.168.25.6:30508 (hangs)
</code></pre>
<p>Also tried using the master node IP, the Service IP and the listed Cluster IP, but nothing works.</p>
<p><strong>EDIT</strong></p>
<p>It works if I use <code>hostNetwork=true</code> on the deployment and access it using the node local IP on the container port, but obviously that's not what I want. I want to understand why Kubernetes isn't letting me access the container through the service.</p>
| alessandrocb | <p>NodePort exposes the Service on each Node’s IP at a static port (the NodePort); in your case it is <code>30508</code>. Please see more details <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a>. </p>
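<p>For the Service shown in the question, that means hitting a node IP on the NodePort, not on port 3030 or 80, for example:</p>
<pre><code># node IPs from the question + the assigned NodePort
curl http://192.168.25.6:30508
curl http://192.168.25.4:30508
</code></pre>
<p>If the node port still hangs, the traffic is likely being dropped before it reaches the pod (for example by the CNI or a firewall), which is what the bare-metal guide linked below discusses.</p>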
<p>And <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">this</a> will share more details on the bare-metal clusters.</p>
| Bimal |
<p>I am using the awesome Powerlevel9k theme for my Zsh.</p>
<p>I defined a custom kubecontext element to show my kubernetes cluster (context) and namespace (see code below).</p>
<p>While I conditionally set the foreground color through the color variable I would like to set the background color instead to be able to better see when I work on the production cluster.
Is that somehow possible with Powerlevel9k? All I could find is that I can set the background color of the prompt element statically with <code>POWERLEVEL9K_CUSTOM_KUBECONTEXT_BACKGROUND='075'</code></p>
<pre><code># Kubernetes Current Context/Namespace
custom_prompt_kubecontext() {
local kubectl_version="$(kubectl version --client 2>/dev/null)"
if [[ -n "$kubectl_version" ]]; then
# Get the current Kuberenetes context
local cur_ctx=$(kubectl config view -o=jsonpath='{.current-context}')
cur_namespace="$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${cur_ctx}\")].context.namespace}")"
# If the namespace comes back empty set it default.
if [[ -z "${cur_namespace}" ]]; then
cur_namespace="default"
fi
local k8s_final_text="$cur_ctx/$cur_namespace"
local color='%F{black}'
[[ $cur_ctx == "prod" ]] && color='%F{196}'
echo -n "%{$color%}\U2388 $k8s_final_text%{%f%}" # \U2388 is Kubernetes Icon
#"$1_prompt_segment" "$0" "$2" "magenta" "black" "$k8s_final_text" "KUBERNETES_ICON"
fi
}
POWERLEVEL9K_CUSTOM_KUBECONTEXT="custom_prompt_kubecontext"
# Powerlevel9k configuration
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(context dir vcs custom_kubecontext)
</code></pre>
<p>Here is a screenshot of the current setup in action:</p>
<p><a href="https://i.stack.imgur.com/2cAFI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cAFI.png" alt="screenshot showing the different kubecontext prompt colors depending on the current kubecontext"></a></p>
| Hedge | <p>Disclaimer: I'm the author of powerlevel10k.</p>
<p>No, this is not possible in powerlevel9k. It is, however, possible in powerlevel10k. Powerlevel10k is backward compatible with powerlevel9k configuration, meaning that you won't have to change any <code>POWERLEVEL9K</code> parameters if you decide to switch.</p>
<p>Powerlevel10k has several advantages over its predecessor:</p>
<ol>
<li>It's over 10 times faster.</li>
<li>It has a builtin configuration wizard. Type <code>p10k configure</code> to access it.</li>
<li>It has many new features. One of them is relevant to you. The builtin <code>kubecontext</code> supports <em>context classes</em> that allow you to style this prompt segment differently depending on which kubernetes context is currently active. Here's the excerpt from the configuration that <code>p10k configure</code> generates:</li>
</ol>
<pre><code># Kubernetes context classes for the purpose of using different colors, icons and expansions with
# different contexts.
#
# POWERLEVEL9K_KUBECONTEXT_CLASSES is an array with even number of elements. The first element
# in each pair defines a pattern against which the current kubernetes context gets matched.
# More specifically, it's P9K_CONTENT prior to the application of context expansion (see below)
# that gets matched. If you unset all POWERLEVEL9K_KUBECONTEXT_*CONTENT_EXPANSION parameters,
# you'll see this value in your prompt. The second element of each pair in
# POWERLEVEL9K_KUBECONTEXT_CLASSES defines the context class. Patterns are tried in order. The
# first match wins.
#
# For example, given these settings:
#
# typeset -g POWERLEVEL9K_KUBECONTEXT_CLASSES=(
# '*prod*' PROD
# '*test*' TEST
# '*' DEFAULT)
#
# If your current kubernetes context is "deathray-testing/default", its class is TEST
# because "deathray-testing/default" doesn't match the pattern '*prod*' but does match '*test*'.
#
# You can define different colors, icons and content expansions for different classes:
#
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_FOREGROUND=0
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_BACKGROUND=2
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_VISUAL_IDENTIFIER_EXPANSION='⭐'
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_CONTENT_EXPANSION='> ${P9K_CONTENT} <'
typeset -g POWERLEVEL9K_KUBECONTEXT_CLASSES=(
# '*prod*' PROD # These values are examples that are unlikely
# '*test*' TEST # to match your needs. Customize them as needed.
'*' DEFAULT)
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_FOREGROUND=7
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_BACKGROUND=5
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_VISUAL_IDENTIFIER_EXPANSION='⎈'
</code></pre>
<p>You can also customize the text content of <code>kubecontext</code>. You'll find more info in <code>~/.p10k.zsh</code> once you run <code>p10k configure</code>. Oh, and <code>kubecontext</code> is about 1000 times faster in powerlevel10k.</p>
| Roman Perepelitsa |
<p>I want to design an AWS architecture like this, but I am not sure how to handle high-bandwidth (>100GB) traffic.
A Kubernetes cluster with lots of microservices, both frontend and backend, and an LB in front of the worker nodes. K8s replicas can scale for high-bandwidth traffic.
My question is: where should I create the Kubernetes cluster? I know there are no bandwidth constraints in a public subnet, but an AWS NAT Gateway has bandwidth constraints. What is the approach big companies take to serve high bandwidth through a NAT Gateway? Or should I put my K8s cluster in the public subnet itself?
Any help is appreciated. Thanks</p>
| jithin raj | <p>If the burst bandwidth of a NAT Gateway doesn't meet your requirements (currently 45Gbps), you will most likely have to configure a NAT instance.</p>
<p>(Bear in mind you can have one NAT Gateway per AZ)</p>
<p>The bandwidth of a NAT instance is dependent upon the the instance type you use.</p>
<p>There is more information about the comparison <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html" rel="nofollow noreferrer">here</a> </p>
<p>I would stay away from deploying your services in a public subnet unless it's absolutely necessary.</p>
| GreenyMcDuff |
<p>I'm configuring loki-distributed on a kubernetes cluster via helm, but I'm getting the following error:</p>
<p><code>failed to create memberlist: Failed to get final advertise address: no private IP address found, and explicit IP not provided.</code></p>
<p>I found only one solution on the internet, which was on a forum about Grafana Tempo, in which I added the following snippet:</p>
<pre><code>loki:
extraMemberlistConfig:
bind_addr:
- ${MY_POD_IP}
backend:
replicas: 2
persistence:
size: 1gi
storageClass: nfs
extraArgs:
- -config.expand-env=true
extraEnv:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
</code></pre>
<p>However, with this configuration, some requests still fail with the following error:
"too many unhealthy instances in the ring"</p>
<p>My current custom-values.yaml is:</p>
<pre><code>loki:
auth_enabled: false
persistence:
enabled: true
storageClass: nfs
size: 2Gi
limits_config:
max_global_streams_per_user: 10000
max_query_series: 10000
retention_period: 30d
storage:
bucketNames:
chunks: loki
ruler: test
admin: test
type: s3
s3:
accessKeyId: lokiaccess
endpoint: minio.databases.svc.cluster.local
insecure: true
s3ForcePathStyle: true
secretAccessKey: *****
storage_config:
boltdb_shipper:
active_index_directory: /var/loki/index
cache_location: /var/loki/index_cache
resync_interval: 5s
shared_store: s3
aws:
s3: http://lokiaccess:******@minio.databases.svc.cluster.local/loki
s3forcepathstyle: true
write:
replicas: 2
persistence:
size: 1Gi
storageClass: nfs
backend:
replicas: 2
persistence:
size: 1Gi
storageClass: nfs
read:
replicas: 2
persistence:
size: 1Gi
storageClass: nfs
table_manager:
retention_deletes_enabled: true
retention_period: 30d
monitoring:
lokiCanary:
enabled: false
selfMonitoring:
enabled: false
test:
enabled: false
</code></pre>
<p>Has anyone experienced this and have a solution?</p>
| SrCabra | <p>You can check the ring members via the API, for example if you do a port forward to loki-gateway, you can open these links on a web browser:</p>
<ul>
<li>Check the member list:
<a href="http://localhost:8080/memberlist" rel="nofollow noreferrer">http://localhost:8080/memberlist</a></li>
<li>Check the datasource (what Grafana actually does for testing the DS):
<a href="http://localhost:8080/loki/api/v1/labels" rel="nofollow noreferrer">http://localhost:8080/loki/api/v1/labels</a></li>
</ul>
<p>If your pods are not in the memberlist, make sure you add the <code>extraArgs</code> and <code>extraEnv</code> to each <code>backend</code>, <code>write</code>, <code>read</code> block.</p>
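<p>For example, a sketch of the <code>write</code> block with those settings added (mirroring the values from the question; do the same for <code>read</code> and <code>backend</code>):</p>
<pre><code>write:
  replicas: 2
  extraArgs:
    - -config.expand-env=true
  extraEnv:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  persistence:
    size: 1Gi
    storageClass: nfs
</code></pre>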
<p>Here's the catch: You need at least 2 <code>write</code> pods to not get the <code>"too many unhealthy instances in the ring"</code>.</p>
| kntx |
<p>I have generated a bunch of namespaces like below and now I want to delete only these namespaces without deleting the kube-system namespace. I tried with grep but had no success:</p>
<blockquote>
<p>kubectl delete namespaces | grep "gatling*"
error: resource(s) were provided, but no name, label selector, or --all flag specified</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/UNL86.png" rel="nofollow noreferrer">Multiple namespaces</a></p>
| mark liberhman | <p>First get the names of the namespaces you want to delete:</p>
<pre><code>kubectl get namespaces --no-headers=true -o custom-columns=:metadata.name | grep gatling
</code></pre>
<p>With <code>-o custom-columns=:metadata.name</code> we output only the names of the services. The output is piped to <code>grep</code>, which filters them by looking for <code>gatling</code>.</p>
<p>Then run the delete command for each line with <code>xargs</code>:</p>
<pre><code>kubectl get namespaces --no-headers=true -o custom-columns=:metadata.name | grep gatling | xargs kubectl delete namespace
</code></pre>
| Dávid Molnár |
<p>I have GitLab cloud connected to a k8s cluster running on Google (GKE).
The cluster was created via Gitlab cloud.</p>
<p>I want to customise the <code>config.toml</code> because I want to <em>fix</em> the cache on k8s as suggested in <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/1906" rel="noreferrer">this issue</a>.</p>
<p>I found the <code>config.toml</code> configuration in the <code>runner-gitlab-runner</code> ConfigMap.
I updated the ConfigMap to contain this <code>config.toml</code> setup:</p>
<pre><code> config.toml: |
concurrent = 4
check_interval = 3
log_level = "info"
listen_address = '[::]:9252'
[[runners]]
executor = "kubernetes"
cache_dir = "/tmp/gitlab/cache"
[runners.kubernetes]
memory_limit = "1Gi"
[runners.kubernetes.node_selector]
gitlab = "true"
[[runners.kubernetes.volumes.host_path]]
name = "gitlab-cache"
mount_path = "/tmp/gitlab/cache"
host_path = "/home/core/data/gitlab-runner/data"
</code></pre>
<p>To apply the changes I deleted the <code>runner-gitlab-runner-xxxx-xxx</code> pod so a new one gets created with the updated <code>config.toml</code>.</p>
<p>However, when I look into the new pod, the <code>/home/gitlab-runner/.gitlab-runner/config.toml</code> now contains 2 <code>[[runners]]</code> sections:</p>
<pre><code>listen_address = "[::]:9252"
concurrent = 4
check_interval = 3
log_level = "info"
[session_server]
session_timeout = 1800
[[runners]]
name = ""
url = ""
token = ""
executor = "kubernetes"
cache_dir = "/tmp/gitlab/cache"
[runners.kubernetes]
host = ""
bearer_token_overwrite_allowed = false
image = ""
namespace = ""
namespace_overwrite_allowed = ""
privileged = false
memory_limit = "1Gi"
service_account_overwrite_allowed = ""
pod_annotations_overwrite_allowed = ""
[runners.kubernetes.node_selector]
gitlab = "true"
[runners.kubernetes.volumes]
[[runners.kubernetes.volumes.host_path]]
name = "gitlab-cache"
mount_path = "/tmp/gitlab/cache"
host_path = "/home/core/data/gitlab-runner/data"
[[runners]]
name = "runner-gitlab-runner-xxx-xxx"
url = "https://gitlab.com/"
token = "<my-token>"
executor = "kubernetes"
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.kubernetes]
host = ""
bearer_token_overwrite_allowed = false
image = "ubuntu:16.04"
namespace = "gitlab-managed-apps"
namespace_overwrite_allowed = ""
privileged = true
service_account_overwrite_allowed = ""
pod_annotations_overwrite_allowed = ""
[runners.kubernetes.volumes]
</code></pre>
<p>The file <code>/scripts/config.toml</code> is the configuration as I created it in the ConfigMap.
So I suspect the <code>/home/gitlab-runner/.gitlab-runner/config.toml</code> is somehow updated when registering the Gitlab-Runner with the Gitlab cloud.</p>
<p>If changing the <code>config.toml</code> via the ConfigMap does not work, how should I then change the configuration? I cannot find anything about this in GitLab or the GitLab documentation.</p>
| Joost den Boer | <p>Inside the mapping you can try to append the volume and the extra configuration parameters:</p>
<pre><code># Add docker volumes
cat >> /home/gitlab-runner/.gitlab-runner/config.toml << EOF
[[runners.kubernetes.volumes.host_path]]
name = "var-run-docker-sock"
mount_path = "/var/run/docker.sock"
EOF
</code></pre>
<p>I did the runner deployment using a helm chart; I guess you did the same. In the following link you will find more information about the approach I mention: <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/2578" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/issues/2578</a></p>
<p>If, after appending the config, your pod is not able to start, check the logs. I did test the appending approach and had some errors like "Directory not Found"; it was because I was appending to the wrong path, but after fixing those issues the runner works fine.</p>
| Ariskay |
<p>I'm attempting to expose a server on port 80 via kubernetes.</p>
<p>Start minikube :</p>
<pre><code>minikube start
</code></pre>
<p>Create a deployment by running the command</p>
<pre><code>"kubectl create deployment apache --image=httpd:2.4"
</code></pre>
<p>Create a service by running the command</p>
<pre><code>"kubectl create service nodeport apache --tcp=80:80"
kubectl get svc
</code></pre>
<p>returns :</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache NodePort 10.105.48.77 <none> 80:31619/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43s
</code></pre>
<p>I've attempted to open 10.105.48.77 & 10.96.0.1 on port 80 but the service is not running.</p>
<p>How to start a simple http server on port 80 via kubernetes that will serve requests to that same port ?</p>
| blue-sky | <p><code>NodePort</code> has a range of <code>30000-32767</code>. Your log shows <code>31619</code> is assigned, so you may try that. If you really want port 80 you will need another type of service, for example <code>LoadBalancer</code>. You can also use <code>port-forward</code> to forward your local port <code>80</code> to the <code>apache</code> pod.</p>
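<p>A quick sketch of both options, using the service name from the question:</p>
<pre><code># access the assigned NodePort (30000-32767 range)
minikube service apache --url     # prints something like http://minikube-ip:31619

# or forward a local port to the service (use a local port above 1024 unless running as root)
kubectl port-forward service/apache 8080:80
curl http://localhost:8080
</code></pre>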
| Derrick Wong |
<p>I have a bash script in a Docker image to which I can pass a command line argument through <code>docker run</code> (having specified the bash script in <code>ENTRYPOINT</code> and a default parameter in <code>CMD</code> like in this <a href="https://stackoverflow.com/a/40312311/7869068">answer</a>). So I would run something like</p>
<pre><code>docker run my_docker_test argument_1
</code></pre>
<p>Now I would like to deploy multiple (ca. 50) containers to OpenShift or Kubernetes, each with a different value of the argument. I understand that in Kubernetes I could specify the <code>command</code> and <code>args</code> in the object configuration yaml file. Is there a possibility to pass the argument directly from the command line like in <code>docker run</code>, e.g. passing to <code>kubectl</code> or <code>oc</code>, without the need to create a new yaml file each time I want to change the value of the argument?</p>
| Daniel P | <p>The right answer is from @Jonas but you can also use environment variables in your yaml file as stated <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="noreferrer">below</a>:</p>
<blockquote>
<p>As an alternative to providing strings directly, you can define arguments by using environment variables</p>
</blockquote>
<pre><code>env:
- name: ARGUMENT
value: {{ argument_1 }}
args: ["$(ARGUMENT)"]
</code></pre>
<p>Where <code>{{ argument_1 }}</code> is a placeholder for the value you want to pass (for example, substituted by a templating tool before applying the file).</p>
| georgeos |
<p>Initially I tested with mode tcp and that works. The haproxy tcp passthrough config is below:</p>
<pre><code>frontend https_in
bind *:443
mode tcp
option forwardfor
option tcplog
log global
default_backend https_backend
backend https_backend
mode tcp
server s1 10.21.0.60:31390 check
server s2 10.21.0.169:31390 check
server s3 10.21.0.173:31390 check
</code></pre>
<p>How should a mode http configuration look like? I need to decrypt traffic, inject some headers (like forwarded-for) and encrypt it again, sending it to ssl istio ingress-gateway backend.</p>
<p>My configuration attempts were many (and unsuccessful); here is one snapshot:</p>
<pre><code>frontend https_in
mode http
bind *:443 ssl crt /etc/haproxy/prod.pem crt /etc/haproxy/dev.pem crt /etc/haproxy/stg.pem no-sslv3
option http-server-close
option forwardfor
reqadd X-Forwarded-Proto:\ https
reqadd X-Forwarded-Port:\ 443
rspadd Strict-Transport-Security:\ max-age=15768000
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
acl acl_app1 req_ssl_sni -i mydomain.test
use_backend https_backend if acl_app1
backend https_backend
mode http
server s1 10.21.0.60:31390 check ssl verify none
</code></pre>
<p>In haproxy logs i see</p>
<pre><code>haproxy[12734]: Server https_backend/s1 is DOWN, reason: Layer6 invalid response, info: "SSL handshake failure (Connection reset by peer)", check duration: 1ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy[12734]: Server https_backend/s1 is DOWN, reason: Layer6 invalid response, info: "SSL handshake failure (Connection reset by peer)", check duration: 1ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy[12734]: backend https_backend has no server available!
</code></pre>
<p>If I remove the check and still try to query haproxy:</p>
<pre><code>haproxy[13381]: https_in~ https_in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
</code></pre>
<p>I cannot figure out the SNI settings to pass from haproxy into istio to make it work.</p>
<p>I cannot find anything useful in the logs from envoyproxy and istio-ingressgateway on debug loglevel either.</p>
| strzelecki.maciek | <p>It's been a while since you posted this problem, but I encountered the same issue. So I was able to verify the HAProxy error using the <code>openssl s_client</code> command.</p>
<p><code>openssl s_client -connect ip:port</code> would return a <code>write:errno=104</code> which means it's a <code>Connection reset by peer</code> error.</p>
<p>But when I supplied the server name, I was able to successfully connect to the backend: <code>openssl s_client -connect ip:port -servername server.name</code>.</p>
<p>After digging around in the backend option from HAProxy, I stumbled on the <a href="https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.2-check-sni" rel="nofollow noreferrer">check-sni</a> option, which mentions:</p>
<blockquote>
<p>This option allows you to specify the SNI to be used when doing health checks
over SSL. It is only possible to use a string to set . ...</p>
</blockquote>
<p>So I use this option to configure the backend server as follows</p>
<pre><code>server lokomotive-contour ip:port ssl verify none check-sni server.name sni str(server.name) check
</code></pre>
<p>The critical part is the <code>check-sni</code> option like mentioned before; this configures the SNI during the health checks.</p>
<p>But I also found the <code>sni str()</code> option to be necessary for getting the regular traffic routing through this backend to work properly.</p>
<p>ps: make sure you are on HAProxy 1.8 or later since the <code>check-sni</code> is only supported starting from 1.8 😉</p>
<p>Hope my answer can help you in the future</p>
| Niels |
<p>We are using an Azure DevOps agent configured in an AKS cluster with KEDA ScaledJobs. The AKS node pool SKU is Standard_E8ds_v5 (1 instance) and we are using a persistent volume mounted on an Azure disk.</p>
<p>The ScaledJob spec is as below.</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
annotations:
name: azdevops-scaledjob
namespace: ado
spec:
failedJobsHistoryLimit: 5
jobTargetRef:
template:
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: kubernetes.azure.com/mode
operator: In
values:
- mypool
- key: topology.disk.csi.azure.com/zone
operator: In
values:
- westeurope-1
weight: 2
containers:
- env:
- name: AZP_URL
value: https://azuredevops.xxxxxxxx/xxxxxxx/organisation
- name: AZP_TOKEN
value: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- name: AZP_POOL
value: az-pool
image: xxxxxxxxxxxxxx.azurecr.io/vsts/dockeragent:xxxxxxxxx
imagePullPolicy: Always
name: azdevops-agent-job
resources:
limits:
cpu: 1500m
memory: 6Gi
requests:
cpu: 500m
memory: 3Gi
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /mnt
name: ado-cache-storage
volumes:
- name: ado-cache-storage
persistentVolumeClaim:
claimName: azure-disk-pvc
maxReplicaCount: 8
minReplicaCount: 1
pollingInterval: 30
successfulJobsHistoryLimit: 5
triggers:
- metadata:
organizationURLFromEnv: AZP_URL
personalAccessTokenFromEnv: AZP_TOKEN
poolID: "xxxx"
type: azure-pipelines
</code></pre>
<p>But we noticed a strange behavior when trying to trigger a build. The error message in the pipeline is:</p>
<pre><code>"We stopped hearing from agent azdevops-scaledjob-xxxxxxx. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error".
</code></pre>
<p>The pipeline will hang and continue without error, but in the backend the pod is already in an error state. So we have to cancel the pipeline each time this occurs and initiate a new build, so that the pipeline gets scheduled to an available pod.</p>
<p>On describing the pod which is in error state, we could identify this.</p>
<pre><code>azdevops-scaledjob-6xxxxxxxx-b 0/1 Error 0 27h
</code></pre>
<p>Pod has error as below.</p>
<pre><code>Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: ephemeral-storage. Container azdevops-agent-job was using 23001896Ki, which exceeds its request of 0.
</code></pre>
| Vowneee | <p>I have set <code>safe-to-evict</code> to false, so AKS won't relocate the pod/job because of a node <code>downscale</code>.</p>
<p>The drawback here is that AKS can stay with more nodes than needed. So you must ensure the pod/job won't be there forever.</p>
<pre><code>spec:
jobTargetRef:
template:
metadata:
annotations:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
</code></pre>
<p>Another possibility is to change the node downscale timeout</p>
<p>Terraform code</p>
<pre><code> auto_scaler_profile {
scale_down_unneeded = "90m"
}
</code></pre>
| Thalles Noce |
<p>I have set up a <code>php:7.4-apache</code> pod in k8s, and <code>curl</code>ing to any domain works only if a <code>.</code> is appended to the end of the domain name.</p>
<p>Standalone docker container works as expected.</p>
<p>For example:</p>
<pre><code>root@testpod1-67655784f8-lbzlw:/var/www/html# curl -I https://www.google.com.sg.
HTTP/2 200
content-type: text/html; charset=ISO-8859-1
p3p: CP="This is not a P3P policy! See g.co/p3phelp for more info."
date: Mon, 15 Mar 2021 07:28:29 GMT
server: gws
x-xss-protection: 0
x-frame-options: SAMEORIGIN
expires: Mon, 15 Mar 2021 07:28:29 GMT
cache-control: private
set-cookie: 1P_JAR=2021-03-15-07; expires=Wed, 14-Apr-2021 07:28:29 GMT; path=/; domain=.google.com.sg; Secure
set-cookie: NID=211=diZZqWJ8q_Z2Uv76GGJB3hCVZgW3DJdshJC6046-lim-eupG0XaiLz9jtCGdrYJ0H06ihwwuB8QSTWyDX1oJ5bn-s_NdSn0qnPCc3YFl-lgi1fHRc3PQ-Zzm43c1WC462MOLDniIpRsWd8ixCxGcmCK6OE7l7dyI_mh72DdKYSM; expires=Tue, 14-Sep-2021 07:28:29 GMT; path=/; domain=.google.com.sg; HttpOnly
alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
</code></pre>
<pre><code># kubectl logs --follow -n kube-system --selector 'k8s-app=kube-dns'
[INFO] 10.244.0.11:51529 - 65397 "AAAA IN www.google.com.sg. udp 35 false 512" NOERROR qr,rd,ra 80 0.003877824s
[INFO] 10.244.0.11:51529 - 62826 "A IN www.google.com.sg. udp 35 false 512" NOERROR qr,rd,ra 68 0.00382946s
</code></pre>
<pre><code>root@testpod1-67655784f8-lbzlw:/var/www/html# curl -I https://www.google.com.sg
curl: (35) error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
</code></pre>
<pre><code># kubectl logs --follow -n kube-system --selector 'k8s-app=kube-dns'
[INFO] 10.244.0.11:41210 - 18404 "AAAA IN www.google.com.sg.production.svc.cluster.local. udp 64 false 512" NXDOMAIN qr,aa,rd 157 0.000227919s
[INFO] 10.244.0.11:41210 - 44759 "A IN www.google.com.sg.production.svc.cluster.local. udp 64 false 512" NXDOMAIN qr,aa,rd 157 0.000222998s
[INFO] 10.244.0.11:37292 - 52263 "AAAA IN www.google.com.sg.svc.cluster.local. udp 53 false 512" NXDOMAIN qr,aa,rd 146 0.000149362s
[INFO] 10.244.0.11:37292 - 6177 "A IN www.google.com.sg.svc.cluster.local. udp 53 false 512" NXDOMAIN qr,aa,rd 146 0.000220946s
[INFO] 10.244.0.11:33258 - 6845 "AAAA IN www.google.com.sg.cluster.local. udp 49 false 512" NXDOMAIN qr,aa,rd 142 0.00012002s
[INFO] 10.244.0.11:33258 - 51638 "A IN www.google.com.sg.cluster.local. udp 49 false 512" NXDOMAIN qr,aa,rd 142 0.000140393s
[INFO] 10.244.0.11:42947 - 8517 "A IN www.google.com.sg.xxxx.com. udp 46 false 512" NOERROR qr,rd,ra 144 0.006529064s
[INFO] 10.244.0.11:42947 - 57930 "AAAA IN www.google.com.sg.xxxx.com. udp 46 false 512" NOERROR qr,rd,ra 209 0.00684084s
</code></pre>
<p>Pods's /etc/resolv.conf</p>
<pre><code>root@testpod1-67655784f8-lbzlw:/var/www/html# cat /etc/resolv.conf
nameserver 10.96.0.10
search production.svc.cluster.local svc.cluster.local cluster.local xxxx.com
options ndots:5
</code></pre>
| user2176499 | <p>This is the expected behavior:
<code>www.google.com.sg.</code> is a fully qualified name while <code>www.google.com.sg</code> is not.</p>
<p>The problem is with your <code>ndots</code> option value; read the following from the resolv.conf man page:</p>
<blockquote>
<p>ndots:n</p>
<p>sets a threshold for the number of dots which must appear in a name
before an initial absolute query will be made. The default for n is 1,
meaning that if there are any dots in a name, the name will be tried
first as an absolute name before any search list elements are appended
to it.</p>
</blockquote>
<p>Basically, reducing the <code>ndots</code> value in your example to 3 would allow <code>curl</code> to work in the pod.</p>
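<p>A minimal sketch of setting that per pod via <code>dnsConfig</code> (the pod name here is just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: php-apache
spec:
  containers:
  - name: php-apache
    image: php:7.4-apache
  dnsConfig:
    options:
    - name: ndots
      value: "3"
</code></pre>
<p>With a lower <code>ndots</code>, external names like <code>www.google.com.sg</code> (3 dots) are resolved as absolute names first, instead of first being appended to the search domains.</p>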
<p>Here is a good read about this topic: <a href="https://mrkaran.dev/posts/ndots-kubernetes/" rel="nofollow noreferrer">https://mrkaran.dev/posts/ndots-kubernetes/</a></p>
| mrbm |
<p>I started using <a href="https://k8slens.dev/" rel="noreferrer">Lens</a> and noticed that it gives you some warnings when the pods inside the nodes have limits higher than the actual capacity.
<a href="https://i.stack.imgur.com/Txase.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Txase.png" alt="A graphic visualization from Lens where you can see a warning telling that the specified limits are higher than the node capacity" /></a></p>
<p>So I tried to get this information with <em>kubectl</em> but I'm new to <em>jsonpath</em> and I just managed to get the raw info using something like this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -o=jsonpath='{.items..resources.limits}' -A
</code></pre>
<p>That produces something like this:</p>
<pre class="lang-json prettyprint-override"><code>{"cpu":"200m","memory":"1Gi"} {"cpu":"200m","memory":"1Gi"} {"cpu":"200m","memory":"512Mi"} {"cpu":"500m","memory":"250Mi"} {"memory":"170Mi"} {"memory":"170Mi"} {"cpu":"2","memory":"2Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"1","memory":"1Gi"} {"cpu":"1","memory":"1Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"100m","memory":"128Mi"} {"cpu":"100m","memory":"128Mi"} {"cpu":"500m","memory":"600Mi"} {"cpu":"1","memory":"1Gi"} {"cpu":"100m","memory":"25Mi"} {"cpu":"100m","memory":"25Mi"}
</code></pre>
<p>So, my questions are: how can I sum all these values? And will these values be accurate, or am I missing any other query? I've checked using LimitRange and the values I got seem to be correct; the results include the limits set by the LimitRange configuration.</p>
| jmservera | <p>You can use a kubectl plugin to list/sort pods with their CPU limits:</p>
<pre><code>kubectl resource-capacity --sort cpu.limit --util --pods
</code></pre>
<p><a href="https://github.com/robscott/kube-capacity" rel="noreferrer">https://github.com/robscott/kube-capacity</a></p>
| guoqiao |
<p>I understand the difference between Declarative and Imperative Management, well explained in this thread <a href="https://stackoverflow.com/questions/47369351/kubectl-apply-vs-kubectl-create">kubectl apply vs kubectl create?</a> and in the official doc <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/</a></p>
<p>But my residual doubt is that even in the Declarative Management a (manual) command like </p>
<blockquote>
<p>kubectl scale</p>
</blockquote>
<p>although persists further </p>
<blockquote>
<p>kubectl apply</p>
</blockquote>
<p>commands still "won't survive" a cluster restart (since its configuration change is stored in the cluster store, like <em>etcd</em>), right? If so, shouldn't we make changes only to the </p>
<blockquote>
<p>object configuration file</p>
</blockquote>
<p>and redeploy through </p>
<blockquote>
<p>kubectl apply</p>
</blockquote>
<p>command?
thanks</p>
| toto' | <p>As far as I understand, <code>kubectl scale</code> changes the replica count on the scaled object (e.g. the <code>Deployment</code>/<code>ReplicaSet</code> configuration stored in <code>etcd</code>). So it'll survive a restart.</p>
<p>However, you should store your configuration objects in a version control system (git). If you execute commands like <code>kubectl scale</code>, that won't update the copies stored in git. The next usage of those configuration files will override the values previously set.</p>
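<p>A small illustration of that interaction (the deployment and file names are hypothetical):</p>
<pre><code># imperative change: stored in the cluster, survives restarts, but not reflected in git
kubectl scale deployment my-app --replicas=5

# the next declarative apply re-imposes whatever replica count my-app.yaml declares
kubectl apply -f my-app.yaml
</code></pre>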
| Dávid Molnár |
<p>I'm attempting to run Minikube in a VMWare Workstation guest, running Ubuntu 18.04.</p>
<p><code>kubectl version</code> results in:</p>
<p><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></p>
<p><code>minikube version</code> results in:</p>
<pre><code>minikube version: v0.29.0
</code></pre>
<p>I have enabled Virtualize Intel VT-x/EPT or AMD-V/RVI on the VMWare guest configuration. I have 25GB of hard drive space. Yet, regardless of how I attempt to start Minikube, I get the following error:</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E1005 11:02:32.495579 5913 start.go:168] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2018-10-05T09:02:29.926633Z qemu-system-x86_64: error: failed to set MSR 0x38d to 0x0
qemu-system-x86_64: /build/qemu-11gcu0/qemu-2.11+dfsg/target/i386/kvm.c:1807: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.').
Retrying.
</code></pre>
<p>Commands I've tried:</p>
<pre><code>minikube start --vm-driver=kvm2
minikube start --vm-driver=kvm
minikube start --vm-driver=none
</code></pre>
<p>All result in the same thing.</p>
<p>I notice that on the Ubuntu guest, the network will shortly disconnect and re-connect when I run <code>minikube start</code>. Is it a problem with the network driver? How would I debug this?</p>
| cbll | <p>I observed a similar issue on Ubuntu 18.04.1 VM (Intel), the solution I found is:</p>
<ol>
<li>Run this from the console:</li>
</ol>
<pre><code>$ sudo cat > /etc/modprobe.d/qemu-system-x86.conf << EOF
options kvm_intel nested=1 enable_apicv=n
options kvm ignore_msrs=1
EOF
</code></pre>
<ol start="2">
<li>Reboot the VM</li>
</ol>
| alexey |
<p>I am trying to deploy a pyspark application on k8s (with minikube) and followed the instructions from here: <a href="https://spark.apache.org/docs/2.4.6/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/2.4.6/running-on-kubernetes.html</a></p>
<p>I've built the images using docker tools and pushed it to my registry as well. Later I invoke spark-submit like this:</p>
<pre><code>./bin/spark-submit --master k8s://https://127.0.0.1:49154 --deploy-mode cluster --name pyspark-on-k8s --conf spark.executor.instances=1 --conf spark.kubernetes.driver.container.image=jsoft88/conda_spark:2.4.6 --conf spark.kubernetes.executor.container.image=jsoft88/conda_spark:2.4.6 --conf spark.kubernetes.pyspark.pythonVersion=3 --conf spark.kubernetes.driverEnv.PYSPARK_DRIVER_PYTHON=/opt/miniconda3/envs/spark_env/bin/python --conf spark.kubernetes.driverEnv.PYSPARK_PYTHON=/opt/miniconda3/envs/spark_env/bin/python --conf spark.kubernetes.driverEnv.PYTHON_VERSION=3.7.3 /home/bitnami/spark-sample/app/main/sample_app.py --top 10
</code></pre>
<p>The *.driverEnv are just attempts that I made, because by default it is not using this python version, but python 3.8.5, which causes spark to throw an error like this:</p>
<pre><code>++ id -u
+ myuid=0
++ id -g
+ mygid=0
+ set +e
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/bash
+ set -e
+ '[' -z root:x:0:0:root:/root:/bin/bash ']'
+ SPARK_K8S_CMD=driver-py
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '--top 10' ']'
+ PYSPARK_ARGS='--top 10'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' 3 == 2 ']'
+ '[' 3 == 3 ']'
++ python3 -V
+ pyv3='Python 3.8.5'
+ export PYTHON_VERSION=3.8.5
+ PYTHON_VERSION=3.8.5
+ export PYSPARK_PYTHON=python3
+ PYSPARK_PYTHON=python3
+ export PYSPARK_DRIVER_PYTHON=python3
+ PYSPARK_DRIVER_PYTHON=python3
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@" $PYSPARK_PRIMARY $PYSPARK_ARGS)
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=172.17.0.3 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner file:/home/bitnami/spark-sample/app/main/sample_app.py --top 10
21/03/04 12:55:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
File "/home/bitnami/spark-sample/app/main/sample_app.py", line 4, in <module>
from pyspark.sql import DataFrame, SparkSession, functions
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/opt/spark/python/lib/pyspark.zip/pyspark/__init__.py", line 51, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/opt/spark/python/lib/pyspark.zip/pyspark/context.py", line 31, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/opt/spark/python/lib/pyspark.zip/pyspark/accumulators.py", line 97, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/opt/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 72, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/opt/spark/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 145, in <module>
File "/opt/spark/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 126, in _make_cell_set_template_code
TypeError: an integer is required (got type bytes)
</code></pre>
<p>The idea is to have a conda environment inside the container, with the application installed in that environment, so I extended the docker image generated by the <code>docker-image-tool.sh</code> provided in the spark binaries. My dockerfile looks like this:</p>
<pre><code>FROM jsoft88/spark-py:2.4.6
ENV PATH="/opt/miniconda3/bin:${PATH}"
ARG PATH="/opt/miniconda3/bin:${PATH}"
WORKDIR /home/bitnami
RUN apt update -y && apt install wget -y && wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
RUN chmod +x ./miniconda.sh
RUN ./miniconda.sh -b -f -p /opt/miniconda3
RUN rm -f miniconda.sh
RUN /opt/miniconda3/bin/conda init bash
COPY . /home/bitnami/spark-sample
RUN conda config --add channels conda-forge
RUN conda create --name spark_env --file /home/bitnami/spark-sample/requirements.txt --yes python=3.7.3
RUN . /opt/miniconda3/etc/profile.d/conda.sh && conda activate spark_env && cd /home/bitnami/spark-sample && pip install .
</code></pre>
<p>Requirements.txt:</p>
<pre><code>python==3.7.3
pyspark==2.4.6
pytest
</code></pre>
| Jorge Cespedes | <p>Well, turns out that in spark 2.4.6, virtual environments are not supported in K8s:</p>
<p><a href="https://github.com/apache/spark/blob/v2.4.6/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/bindings/python/Dockerfile" rel="nofollow noreferrer">https://github.com/apache/spark/blob/v2.4.6/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/bindings/python/Dockerfile</a></p>
<pre><code># TODO: Investigate running both pip and pip3 via virtualenvs
</code></pre>
<p>So I went ahead and introduced some hacks in the bindings, which is fully documented in my personal repo: <a href="https://github.com/jsoft88/pyspark-conda-k8s" rel="nofollow noreferrer">https://github.com/jsoft88/pyspark-conda-k8s</a>.</p>
<p>Basically, it was about modifying the entrypoint.sh provided by the Spark <code>docker-image-tool.sh</code> and adding the required lines for the conda environment.</p>
| Jorge Cespedes |
<p>Files stored on a PV (persistent volume) by the pod's application are not visible on the host machine. The configuration shows no errors. The config is a single PV, PVC, and pod.
I am quite new to this environment.</p>
<p>pv:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: notowania-pv
spec:
storageClassName: manual
capacity:
storage: 10Gi #Size of the volume
accessModes:
- ReadWriteOnce #type of access
hostPath:
path: "/home/user1684/dane" #host location
</code></pre>
<p>pv status:</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
notowania-pv 10Gi RWO Retain Bound default/notowania-pv manual 22m
</code></pre>
<p>pvc:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: notowania-pv
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>pvc status:</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
notowania-pv Bound notowania-pv 10Gi RWO manual 24m
</code></pre>
<p>pod:</p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "notowania"
namespace: "default"
labels:
app: "notowania"
spec:
replicas: 1
selector:
matchLabels:
app: "notowania"
template:
metadata:
labels:
app: "notowania"
spec:
containers:
- name: "selenium-docker-sha256"
image: "eu.gcr.io/firstapp-249912/selenium_docker@sha256:da15e666c3472e93979d821c912c2855951e579a91238f35f0e339b85343ed6b"
volumeMounts:
- name: notowania
mountPath: /notowania
volumes:
- name: notowania
persistentVolumeClaim:
claimName: notowania-pv
</code></pre>
<p>pod status:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
notowania-79d68c8c56-79q55 1/1 Running 0 25m
</code></pre>
<p>files on pod:</p>
<pre><code>user1684@cloudshell:~ (firstapp-249912)$ kubectl exec -it notowania-79d68c8c56-79q55 -- /bin/bash
root@notowania-79d68c8c56-79q55:/usr/src/app# ll /notowania/
total 8
drwxr-xr-x 2 root root 4096 Sep 18 12:54 ./
drwxr-xr-x 1 root root 4096 Sep 18 12:51 ../
-rw-r--r-- 1 root root 0 Sep 18 12:54 aaa
</code></pre>
<p>files on host:</p>
<pre><code>user1684@cloudshell:~ (firstapp-249912)$ pwd
/home/user1684
user1684@cloudshell:~ (firstapp-249912)$ ll dane
total 8
drwxr-xr-x 2 user1684 user1684 4096 Sep 17 23:13 ./
drwxr-xr-x 15 user1684 user1684 4096 Sep 18 14:47 ../
</code></pre>
<p>So I have no idea why 'aaa' is not visible on the host machine in Google Cloud, as I think the 'aaa' file should be there.</p>
| Wojtas.Zet | <p>I think the issue comes down to which host you examine the directory contents on.</p>
<ul>
<li>You've executed the last command on the &quot;cloudshell&quot; VM, which is only meant for interacting with GCP and is not a cluster node.</li>
<li>You should instead inspect the directory on a cluster node.</li>
<li>To inspect the state of a cluster node, you can do something like this:</li>
</ul>
<pre><code>$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-tt-test1-default-pool-a4cf7d86-rgt2 europe-west3-a g1-small 10.156.0.2 1.2.3.4 RUNNING
$ gcloud compute ssh gke-tt-test1-default-pool-a4cf7d86-rgt2
user@gke-tt-test1-default-pool-a4cf7d86-rgt2 ~ $ ls -la /home/user1684/dane
total 8
drwxr-xr-x 2 root root 4096 Sep 18 14:16 .
drwxr-xr-x 3 root root 4096 Sep 18 14:12 ..
-rw-r--r-- 1 root root 0 Sep 18 14:16 aaa
</code></pre>
| Tomasz Tarczynski |
<p>I have a GKE cluster with autoscaling enabled, and a single node pool. This node pool has a minimum of 1 node, and maximum of 5. When I have been testing the autoscaling of this cluster it has correctly scaled up (added a new node) when I added more replicas to my deployment. When I removed my deployment I would have expected it to scale down, but looking at the logs it is failing because it cannot evict the kube-dns deployment from the node:</p>
<pre><code>reason: {
messageId: "no.scale.down.node.pod.kube.system.unmovable"
parameters: [
0: "kube-dns-7c976ddbdb-brpfq"
]
}
</code></pre>
<p>kube-dns isn't being run as a daemonset, but I do not have any control over that as this is a managed cluster.</p>
<p>I am using Kubernetes 1.16.13-gke.1.</p>
<p>How can I make the cluster node pool scale down?</p>
| rj93 | <p>The autoscaler will not evict pods from the kube-system namespace unless they are a daemonset OR they have a PodDisruptionBudget.</p>
<p>For kube-dns, as well as kube-dns-autoscaler and a few other GKE-managed Deployments in kube-system, you need to add a PodDisruptionBudget.</p>
<p>e.g:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
annotations:
labels:
k8s-app: kube-dns
name: kube-dns-bbc
namespace: kube-system
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
</code></pre>
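<p>The same approach works for the other kube-system Deployments the autoscaler complains about. As a sketch only (the <code>k8s-app: kube-dns-autoscaler</code> label is an assumption; verify the actual pod labels with <code>kubectl -n kube-system get pods --show-labels</code> before applying), a budget for kube-dns-autoscaler could look like this:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-dns-autoscaler-pdb      # hypothetical name
  namespace: kube-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns-autoscaler   # assumed label, check your cluster first
</code></pre>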
| Félix Cantournet |
<p>I have an up&running SolrCloud v8.11 cluster on Kubernetes, with solr-operator.</p>
<p>Backups are enabled on an S3 bucket.</p>
<p>How can I correctly write the request to perform a <code>RESTORE</code> of a backup stored in an S3 bucket?</p>
<p>I'm unable to figure out what values I should provide for the <code>location</code> and the <code>snapshotName</code> in the <code>Restore API</code> request made to Solr.</p>
<p>In order to discover those values, I tried to execute the <code>LISTBACKUP</code> action, but in this case the <code>location</code> value is also wrong...</p>
<pre class="lang-sh prettyprint-override"><code>$ curl https://my-solrcloud.example.org/solr/admin/collections\?action=LISTBACKUP\&name=collection-name\&repository=collection-backup\&location=my-s3-bucket/collection-backup
{
"responseHeader":{
"status":400,
"QTime":70},
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"specified location s3:///my-s3-bucket/collection-backup/ does not exist.",
"code":400}}
## The Log in cluster writes:
org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist. => org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
</code></pre>
<p>After all, the recurring backup works as expected, but sooner or later a <code>RESTORE action</code> will be performed, and it's not clear how it could be done correctly.</p>
<p>Thank you in advance.</p>
| Fernando Aspiazu | <p>A bit late, but I came across this question while searching for the same answer. There was <a href="https://lists.apache.org/[email protected]:2022-2:S3%20backup" rel="nofollow noreferrer">a thread on the mailing list</a> that helped me to figure out how this is supposed to work.</p>
<p>I found the documentation on this pretty confusing, but the <code>location</code> seems to be <em>relative to the backup repository</em>. So, the <code>repository</code> argument already accounts for the bucket name, and the <code>name</code> argument would be the name of the backup you are attempting to list. Solr then builds the S3 path as <code>{repository bucket} + {location} + {backup name}</code>. So, location should simply be: <code>/</code></p>
<p>Assume you've set up a <code>backupRepository</code> for the SolrCloud deployment like the following:</p>
<pre><code>backupRepositories:
- name: "my-backup-repo"
s3:
region: "us-east-1"
bucket: "my-s3-bucket"
</code></pre>
<p>and you have created a SolrBackup like the following:</p>
<pre><code>---
apiVersion: solr.apache.org/v1beta1
kind: SolrBackup
metadata:
name: "my-collection-backup"
spec:
repositoryName: "my-backup-repo"
solrCloud: "my-solr-cloud"
collections:
- "my-collection"
</code></pre>
<p>The full cURL command for LISTBACKUP would be:</p>
<pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=LISTBACKUP \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/
</code></pre>
<p>Similarly for the RESTORE command:</p>
<pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=RESTORE \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/ \
-d collection=my-collection-restore
</code></pre>
| Matthew Hanlon |
<p>I am using Docker for Windows (docker-desktop), which ships with a small single-node Kubernetes instance. I have a scenario where my pods need to communicate with some external services running on the same localhost (Windows 10 machine), but outside of the k8s cluster.</p>
<p>I know that I can use <code>kubernetes.docker.internal</code> from within the cluster to reach my <code>node/localhost</code>. But unfortunately the pods have a default connection string baked into the image which I don't want to change; say the pods by default try to connect to a DNS name, &quot;my-server&quot;. So in my scenario, I want to define a K8s Service named &quot;my-server&quot; which has an Endpoint reference to kubernetes.docker.internal, so that kube-proxy will route that correctly to my localhost, which is my Windows 10 machine.</p>
<p>Is this somehow possible? I have already checked <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">this</a> solution, but it talks about external services running on some other node or cloud. I am also considering the local machine hostname as an ExternalName, but this is not entirely reliable in dns resolution for my usecase. So I really want to use the <code>kubernetes.docker.internal</code> as a service endpoint. Any thoughts?</p>
| Vipin P | <p>One trick I've used was to forward an internal IP address back to 127.0.0.1.
It worked perfectly.</p>
<pre class="lang-sh prettyprint-override"><code># linux
sudo iptables -t nat -A OUTPUT -p all -d 10.0.100.100 -j DNAT --to-destination 127.0.0.1
# osx
sudo ifconfig lo0 alias 10.0.100.100
</code></pre>
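<p>To tie this back to the &quot;my-server&quot; name from the question, one option (a sketch, untested on docker-desktop; the port 5432 is just a placeholder) is the standard Kubernetes pattern of a Service without selectors plus a manually managed Endpoints object that points at the aliased IP from above. Endpoints may not use loopback addresses, which is exactly why an extra IP is aliased instead of using 127.0.0.1 directly:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-server            # the DNS name the pods already expect
spec:
  ports:
    - port: 5432             # placeholder, repeat per port you need
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-server            # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.100.100     # the IP aliased/forwarded to 127.0.0.1 above
    ports:
      - port: 5432
</code></pre>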
| Segev -CJ- Shmueli |
<p>I am learning to use k8s and I have a problem. I have been able to perform several deployments with the same YAML without problems. My problem is that when I mount the secret volume, the directory with the variables is created, but the application does not detect them as environment variables.</p>
<p>my secret</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
- name: secret-volumen
mountPath: /etc/secret/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
- name: secret-volumen
secret:
secretName: authentications-sercret
> [email protected] start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: undefined, <-- not load
PASSWORD: undefined,<-- not load
HOST: 'db-service',
PORT: '5432'
}
</code></pre>
<p>If I add them manually, it recognizes them:</p>
<pre><code> env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_PASSWORD
> [email protected] start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: 'insertmendoza', <-- work
PASSWORD: 'jKNP9ZdtELMm6K5', <-- work
HOST: 'db-service',
PORT: '5432'
}
listening queue
listening on *:8000
</code></pre>
<p>The secrets do exist in the directory where I mount them:</p>
<pre><code>/etc/secret # ls
DB_PASSWORD DB_USERNAME SECRET_KEY TOKEN_EXPIRES_IN
/etc/secret # cat DB_PASSWORD
jKNP9ZdtELMm6K5/etc/secret #
</code></pre>
<h5>EDIT</h5>
<p>My quick solution is:</p>
<pre><code>envFrom:
- configMapRef:
name: authentications-config
- secretRef: <<--
name: authentications-sercret <<--
</code></pre>
<p>I hope this helps you. Greetings from Argentina, Insert Mendoza</p>
| Nelson Javier Avila | <p>If I understand the problem correctly, you aren't getting the secrets loaded into the environment. It looks like you're loading them incorrectly; use the <code>envFrom</code> form as documented <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables" rel="nofollow noreferrer">here</a>.</p>
<p>Using your example it would be:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
- secretRef:
name: authentications-sercret
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
</code></pre>
<p>Note that the secret volume and its mount were removed, and the <code>secretRef</code> section was added. Those keys should now be exported as environment variables in your pod.</p>
| Joshua Hansen |
<p>I am using the nginx ingress controller with an AKS cluster.</p>
<p>The problem I have at the moment is that whenever a query string is added to a URL, nginx returns a 502 bad gateway.</p>
<p>The ingress is:</p>
<pre><code>apiVersion: networking.k8s.io/v1   # apiVersion assumed; matches the ingressClassName/pathType fields below
kind: Ingress
metadata:
name: myapp
namespace: dh2
labels:
helm.sh/chart: myapp-0.1.0
app.kubernetes.io/name: myapp
app.kubernetes.io/instance: myapp-1680613380
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Helm
spec:
ingressClassName: nginx
tls:
- hosts:
- "example.com"
secretName: qa-tls
rules:
- host: "example.com"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-1680613380
port:
number: 80
</code></pre>
<p>When I access <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a> everything works as expected. However if I pass any query string in the URL such as <a href="http://example.com/login?=1" rel="nofollow noreferrer">http://example.com/login?=1</a> I receive a 502.</p>
<p>I have tried to use Regex with the following annotation and path:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1
...
path: /(.*)
</code></pre>
<p>But this also fails to work. What am I missing?</p>
| David Hirst | <p>It turns out that Rafael was onto something.</p>
<p>Having checked the controller logs, it seems that an error was being thrown about the length of the URL.</p>
<p>Naturally, I would have thought that nginx should then have thrown a 414 error, but it instead resorted to a 502. Since the issue is not with my ingress code, Turing is correct that this should now be moved to ServerFault.</p>
<p>However, all I needed to do was increase the buffer sizes in the nginx controller ConfigMap, since it appears nginx has very small buffer sizes by default.</p>
<p>I simply added the following under data:</p>
<pre><code> proxy-buffer-size: "16k"
large-client-header-buffers: "4 16k"
</code></pre>
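<p>For context, a sketch of where those keys live: they go in the ingress-nginx controller's ConfigMap. The name <code>ingress-nginx-controller</code> and namespace <code>ingress-nginx</code> below are assumptions that depend on how the controller was installed, so check with <code>kubectl get configmap -A | grep nginx</code> first:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name, match your install
  namespace: ingress-nginx         # assumed namespace
data:
  proxy-buffer-size: &quot;16k&quot;
  large-client-header-buffers: &quot;4 16k&quot;
</code></pre>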
| David Hirst |
<p>First, what happened:
We updated a ConfigMap (a key changed), then updated the Deployment to use the new key.
Both updates were successful.
After they finished, we checked the events and found a volume mounting error caused by a reference to the old key.</p>
<p>Below is how I investigated the error, and my reasoning.
At first I thought that, since the error referred to the old key, a pod must have crashed after I updated the ConfigMap but before I updated the Deployment, because volume mounting only happens when a pod starts; now I'm not so sure about that.</p>
<p>Then I checked the events again, and there was no crash event.</p>
<p>My question is:
Is there anything other than a crash that causes a volume to be mounted?
If not, what could be the possible reason?</p>
| newme | <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically</a></p>
<p>When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period (1 minute by default) + ttl of ConfigMaps cache (1 minute by default) in kubelet. You can trigger an immediate refresh by updating one of the pod’s annotations.</p>
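<p>The quoted docs refer to annotating the running Pod so the kubelet refreshes the mounted ConfigMap sooner. A related, commonly used pattern (a sketch with made-up names) is to bump an annotation in the Deployment's pod template whenever the ConfigMap changes, which forces a rolling restart and sidesteps the cache delay entirely:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        checksum/config: &quot;2&quot;         # bump this value whenever the ConfigMap changes
    spec:
      containers:
        - name: my-app
          image: my-app:latest       # placeholder image
          volumeMounts:
            - name: config
              mountPath: /etc/config
      volumes:
        - name: config
          configMap:
            name: my-config          # hypothetical ConfigMap name
</code></pre>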
| newme |