Question | Answer
---|---|
<p>I was trying to restrict IAM users with RBAC on an AWS EKS cluster and mistakenly updated the ConfigMap "aws-auth" in the kube-system namespace. This removed all access to the EKS cluster.</p>
<p>I missed adding the <strong>groups:</strong> key in the ConfigMap for the user.</p>
<p>I tried granting full admin access to the user/role that is last mentioned in the ConfigMap, but no luck.</p>
<p>Any idea on how to recover access to the cluster would be highly appreciated.</p>
<p>The config-map.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
</code></pre>
|
<p>I found a workaround for this issue:</p>
<p>The IAM user who created the EKS cluster has full access to the cluster by default, regardless of the aws-auth ConfigMap. Since that IAM user had been deleted, we re-created it; an IAM user created with the same name gets the same ARN as before.</p>
<p>Once we created credentials (access &amp; secret keys) for that user, we got back access to the EKS cluster. After that, we modified the ConfigMap as required.</p>
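<p>For reference, a minimal sketch of what the corrected ConfigMap could look like once the missing <strong>groups:</strong> key is added; the <code>system:masters</code> group shown here is an assumption that grants full admin rights:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
      groups:
        # assumption: grant cluster-admin-equivalent access
        - system:masters
</code></pre>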
|
<p>I am working on a demo project using InfluxDB, Telegraf and Grafana running in a Kubernetes minikube under my local Linux user. I do not have root or sudo rights on the machine. I set everything up so that minikube is running fine and I can see the exposed Grafana service in a web browser running on my local machine (URL <code>http://<mini pod IP>:3000</code>).</p>
<p>Now I would like to make this service available to the outside world, so Grafana can be accessed by my colleagues. I played around with ingress, but got stuck: my PC is not registered at our company’s DNS server, so I guess I do not have the option to use a host URL with the ingress, and instead have to use the IP of my PC, so that requests to the Grafana service from the outside world (company domain, not internet) look like <code>http://<pc IP address>:3000</code></p>
<p>Is this possible? I am fine with a Kubernetes solution, with proxy configuration for my local user, or with another solution that runs under a local user.</p>
|
<p>You could convert the service type from ClusterIP to NodePort; then you can access the service using your host/PC IP.</p>
<pre><code>kubectl edit svc/grafana-service
or
kubectl port-forward --address 0.0.0.0 pod/<grafana pod name> 3000:3000 &
</code></pre>
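<p>A minimal sketch of what the edited Service could look like after switching to NodePort; the service name, selector and <code>nodePort</code> value are assumptions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: grafana-service   # assumed name
spec:
  type: NodePort          # changed from ClusterIP
  selector:
    app: grafana          # assumed label
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30300     # must fall in the default 30000-32767 range
</code></pre>
<p>Grafana would then be reachable at <code>http://<pc IP address>:30300</code> from the company network, provided the node port is not blocked by a firewall.</p>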
|
<p>Is it possible to load balance ingress traffic based on the resource availability on the pods? The closest I can find is <a href="https://nginx.org/en/docs/http/ngx_http_upstream_module.html#least_time" rel="nofollow noreferrer">Nginx's Least Time load balance method</a> which uses lowest average latency.</p>
<p>I'm specifically looking for a way to route to the pod with the most average memory availability.</p>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already mentioned in the comments, that approach is not recommended: the main issue with resource-based load balancing is that the load information becomes stale by the time you make the routing decision. See the sources below:</p>
<ul>
<li><p><a href="https://serverfault.com/questions/400899/why-dont-load-balancer-algorithms-allow-selection-of-worker-machines-based-on/400939">Why don't load balancer algorithms allow selection of worker machines based on current CPU or memory usage</a></p>
</li>
<li><p><a href="https://www.cs.utexas.edu/users/dahlin/papers/tpds-loadBalance00.pdf" rel="nofollow noreferrer">Interpreting Stale Load Information</a></p>
</li>
</ul>
|
<p>I can't access a pod that is scheduled on another node, but I can access pods scheduled on the current node. The same applies the other way around: when I am on the other node, I can only access pods scheduled on that node and can't access pods scheduled elsewhere. In addition, the route rules on the current node differ from the other nodes (in fact, all three nodes in my cluster have different route rules). Some info is listed below:</p>
<p>on the master node 172.16.5.150:</p>
<pre><code>[root@localhost test-deploy]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.16.5.150 Ready <none> 9h v1.16.2
172.16.5.151 Ready <none> 9h v1.16.2
172.16.5.152 Ready <none> 9h v1.16.2
[root@localhost test-deploy]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-controller-5qvwn 1/1 Running 0 46m
default nginx-controller-kgjwm 1/1 Running 0 46m
kube-system calico-kube-controllers-6dbf77c57f-kcqtt 1/1 Running 0 33m
kube-system calico-node-5zdt7 1/1 Running 0 33m
kube-system calico-node-8vqhv 1/1 Running 0 33m
kube-system calico-node-w9tq8 1/1 Running 0 33m
kube-system coredns-7b6b59774c-lzfh7 1/1 Running 0 9h
[root@localhost test-deploy]#
[root@localhost test-deploy]# kcp -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-controller-5qvwn 1/1 Running 0 23m 192.168.102.135 172.16.5.151 <none> <none>
nginx-controller-kgjwm 1/1 Running 0 23m 192.168.102.134 172.16.5.150 <none> <none>
[root@localhost test-deploy]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens32
172.0.0.0 0.0.0.0 255.0.0.0 U 100 0 0 ens32
192.168.102.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.102.129 0.0.0.0 255.255.255.255 UH 0 0 0 calia42aeb87aa8
192.168.102.134 0.0.0.0 255.255.255.255 UH 0 0 0 caliefbc513267b
[root@localhost test-deploy]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.10.0.1 <none> 443/TCP 9h
nginx-svc ClusterIP 10.10.189.192 <none> 8088/TCP 23m
[root@localhost test-deploy]# curl 192.168.102.135
curl: (7) Failed to connect to 192.168.102.135: Invalid argument
[root@localhost test-deploy]# curl 192.168.102.134
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost test-deploy]# curl 10.10.189.192:8088
curl: (7) Failed connect to 10.10.189.192:8088; No route to host
[root@localhost test-deploy]# curl 10.10.189.192:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost test-deploy]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:4b:76:b7 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.150/8 brd 172.255.255.255 scope global noprefixroute ens32
valid_lft forever preferred_lft forever
inet6 fe80::92f8:9957:1651:f41/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 12:00:37:16:be:95 brd ff:ff:ff:ff:ff:ff
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether b2:9f:49:ff:31:3f brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/32 brd 10.10.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.0.200/32 brd 10.10.0.200 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.189.192/32 brd 10.10.189.192 scope global kube-ipvs0
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.102.128/32 brd 192.168.102.128 scope global tunl0
valid_lft forever preferred_lft forever
6: calia42aeb87aa8@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
7: caliefbc513267b@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>On the other node 172.16.5.151:</p>
<pre><code>[root@localhost ~]# curl 10.10.189.192:8088
curl: (7) Failed connect to 10.10.189.192:8088; No route to host
[root@localhost ~]# curl 10.10.189.192:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens192
172.16.5.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192
192.168.102.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.102.135 0.0.0.0 255.255.255.255 UH 0 0 0 cali44ab0f7df0f
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:38:a2:95 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.151/24 brd 172.16.5.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet6 fe80::e24a:6e5c:3a44:a7ee/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 76:91:46:b1:06:a7 brd ff:ff:ff:ff:ff:ff
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 1a:0d:f4:cf:ab:69 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/32 brd 10.10.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.0.200/32 brd 10.10.0.200 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.189.192/32 brd 10.10.189.192 scope global kube-ipvs0
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.102.128/32 brd 192.168.102.128 scope global tunl0
valid_lft forever preferred_lft forever
8: cali44ab0f7df0f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
|
<p>The route table doesn't have a route for the tunl0 interface. You can set the environment variable IP_AUTODETECTION_METHOD in the calico.yaml file under the calico-node container section.</p>
<pre><code>Example:

containers:
  - name: calico-node
    image: xxxxxxx
    env:
      - name: IP_AUTODETECTION_METHOD
        value: interface=ens192
</code></pre>
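<p>Alternatively (not part of the original answer, just a sketch), the variable can be set on the already running DaemonSet instead of re-applying calico.yaml:</p>
<pre><code># Set the autodetection method on the calico-node DaemonSet and wait for the rollout
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=ens192
kubectl rollout status daemonset/calico-node -n kube-system
</code></pre>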
|
<p>I have this script to build a CI/CD pipeline using GitHub Actions. The pipeline's flow is GitHub -> Docker Hub -> IBM Cloud Kubernetes.
I have been facing this issue for 2-3 days when running the Kubernetes <code>set image</code> command. I have tried many things but no luck, and I'm kind of new to GitHub Actions and Kubernetes. My script:</p>
<pre><code>on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub
        run: echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
      - name: Check docker login status
        if: failure()
        run: echo Docker login process failed. Please check the GitHub secrets and your credentials.
      # - name: Build the Docker image
      #   run: docker build -t ${{ secrets.DOCKER_REPO }}:${GITHUB_SHA::8} .
      # - name: Publish to Docker Hub
      #   run: docker push ${{ secrets.DOCKER_USERNAME }}/nodedemoapp:${GITHUB_SHA::8}
      - name: Install IBM Cloud CLI
        run: curl -sL https://ibm.biz/idt-installer | bash
      - name: Login to IBM Cloud
        run: ibmcloud login -u ${{ secrets.IBMCLOUD_USER }} -p ${{ secrets.IBMCLOUD_PWD }} -r au-syd
      - name: Check IBM Cloud login status
        if: failure()
        run: echo IBM Cloud login process failed. Please check the GitHub secrets and your credentials.
      - name: Select Cloud Cluster
        run: ibmcloud cs cluster-config <cluster-name>
      - name: Deploy to Cluster / Set Docker Image tag
        uses: steebchen/kubectl@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
          DOCKER_REPO: ${{ secrets.DOCKER_REPO }}
        with:
          args: set image --record deployment/demo-nodeapp nodeapp=$DOCKER_REPO:dd317a15
      # - name: Verify Deployment
      #   uses: steebchen/kubectl@master
      #   env:
      #     KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
      #     DEPLOYMENT_NAME: ${{ secrets.DEPLOYMENT_NAME }}
      #   with:
      #     args: '"rollout status deployment/$DEPLOYMENT_NAME"'
</code></pre>
<p>And this is the error I'm facing while executing it:</p>
<pre><code>Deploy to Cluster / Set Docker Image tag1s
Run steebchen/kubectl@master
/usr/bin/docker run --name dfb4c0a53d7da9d4a0da4645e42a044b525_8e2c1f --label 488dfb --workdir /github/workspace --rm -e KUBE_CONFIG_DATA -e DOCKER_REPO -e INPUT_ARGS -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/nodeApp/nodeApp":"/github/workspace" 488dfb:4c0a53d7da9d4a0da4645e42a044b525 set image --record deployment/*** ***=$DOCKER_REPO:dd317a15
error: unable to read certificate-authority /tmp/ca-syd01-launchpad.pem for launchpad/bl7ncphs04difh8d6be0 due to open /tmp/ca-syd01-launchpad.pem: no such file or directory
##[error]Docker run failed with exit code 1
</code></pre>
<p><a href="https://i.stack.imgur.com/3WLNM.png" rel="nofollow noreferrer">Error in GitHub Action</a></p>
<p>Thanks in advance.</p>
|
<p>I think you need to check the <code>ibmcloud cs cluster-config <cluster-name></code> output to ensure it's getting what you want. Unless you set the <code>IKS_BETA_VERSION=1</code> env var before getting the cluster config, you'll get kubeconfig and cert data downloaded to your local machine at $HOME/.bluemix/plugins/container-service/clusters/. If you use the <code>IKS_BETA_VERSION=1</code> env var, then you'll get the context/cluster/auth info in the default kubeconfig location of $HOME/.kube/config.</p>
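<p>A hedged sketch of what the workflow step could look like with the env var set; whether your CLI version honors it exactly this way is an assumption worth verifying:</p>
<pre><code>- name: Select Cloud Cluster
  run: |
    export IKS_BETA_VERSION=1
    ibmcloud cs cluster-config <cluster-name>
    kubectl config current-context   # verify the context was written to ~/.kube/config
</code></pre>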
|
<p>I have installed <strong>kubectl</strong> and <strong>minikube</strong> on my Windows environment, but when I run <strong>minikube start</strong> it creates the VM on VirtualBox and then fails with this error while preparing Kubernetes on Docker.</p>
<pre><code>C:\Users\asusstrix>minikube start
* minikube v1.6.0 on Microsoft Windows 10 Home 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [])
* Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
*
X Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\asusstrix/.kube/config: error acquiring lock for C:\Users\asusstrix/.kube/config: timeout acquiring mutex
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
|
<p>According to the official documentation:</p>
<blockquote>
<p>To confirm successful installation of both a hypervisor and Minikube,
you can run the following command to start up a local Kubernetes
cluster: </p>
<p><code>minikube start --vm-driver=<driver_name></code></p>
<p>For setting the --vm-driver with minikube start, enter the name of the
hypervisor you installed in lowercase letters where <code><driver_name></code> is
mentioned below. A full list of --vm-driver values is available in the
<a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver" rel="nofollow noreferrer">specifying the VM driver
documentation</a>.</p>
</blockquote>
<p>So in your case it would be: <code>minikube start --vm-driver=virtualbox</code></p>
<p>If you want to make sure your previous steps were correct, you can go through the whole <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">tutorial</a>.</p>
<p>Please let me know if that helped. </p>
<p><strong>EDIT:</strong></p>
<p>There is a <a href="https://github.com/kubernetes/minikube/issues/6058" rel="nofollow noreferrer">Github thread</a> showing the same issue.</p>
<p>Basically you should still use <code>minikube start --vm-driver=<driver_name></code>, but it will not work with v1.6.0 yet. Consider downgrading to v1.5.2 instead.</p>
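<p>A minimal sketch of the clean-up and retry, assuming you have downgraded the minikube binary to v1.5.2:</p>
<pre><code>minikube delete                          # remove the half-created VM and cached state
minikube start --vm-driver=virtualbox    # start again with an explicit driver
</code></pre>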
|
<p>So I'm just trying to get a web app running on GKE experimentally to familiarize myself with Kubernetes and GKE. </p>
<p>I have a StatefulSet (Postgres) with a persistent volume / persistent volume claim which is mounted to the Postgres pod as expected. The problem I'm having is making the Postgres data persist. If I mount the PV at <code>/var/lib/postgresql</code> the data gets overridden with each pod update. If I mount at <code>/var/lib/postgresql/data</code> I get the warning:</p>
<p><code>initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.</code></p>
<p>Using Docker alone, having the volume mount point at <code>/var/lib/postgresql/data</code> works as expected and the data persists, but I don't know what to do now in GKE. How does one set this up properly?</p>
<p>Setup file:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sm-pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "postgis-db"
  namespace: "default"
  labels:
    app: "postgis-db"
spec:
  serviceName: "postgis-db"
  replicas: 1
  selector:
    matchLabels:
      app: "postgis-db"
  template:
    metadata:
      labels:
        app: "postgis-db"
    spec:
      terminationGracePeriodSeconds: 25
      containers:
        - name: "postgis"
          image: "mdillon/postgis"
          ports:
            - containerPort: 5432
              name: postgis-port
          volumeMounts:
            - name: sm-pd-volume
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: sm-pd-volume
          persistentVolumeClaim:
            claimName: sm-pd-volume-claim
</code></pre>
|
<p>You are getting this error because the Postgres pod is using the mount point itself as its data directory, which is not recommended.</p>
<p>You have to create a subdirectory to resolve this issue in the StatefulSet manifest:</p>
<pre><code>volumeMounts:
  - name: sm-pd-volume
    mountPath: /var/lib/postgresql/data
    subPath: data
</code></pre>
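<p>To verify the change, something like the following could be used (the manifest file name is a hypothetical placeholder; the pod name follows from the StatefulSet above):</p>
<pre><code>kubectl apply -f postgis-db.yaml                 # hypothetical file name
kubectl rollout status statefulset/postgis-db
kubectl exec -it postgis-db-0 -- ls /var/lib/postgresql/data   # data now lives in the "data" subPath of the volume
</code></pre>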
|
<p>I have following setup deployed on an Azure Kubernetes Services (K8S version 1.18.14) cluster:</p>
<ul>
<li>Nginx installed via <a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="nofollow noreferrer">helm chart</a> and scaled down to a single instance. It is deployed in namespace "ingress".</li>
<li>A simple stateful application (App A) deployed in a separate namespace with 5 replicas. The "statefulness" of the application is represented by a single random int generated at startup. The application exposes one http end point that just returns the random int. It is deployed in namespace "test".</li>
<li>service A of type ClusterIP exposing the http port of App A and also deployed in namespace "test":</li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: stateful-service
  namespace: "test"
spec:
  selector:
    app: stateful
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: ClusterIP
</code></pre>
<ul>
<li>service B of type "ExternalName" (proxy service) pointing to the cluster name Service A deployed in namespace "ingress":</li>
</ul>
<pre><code>apiVersion: "v1"
kind: "Service"
metadata:
name: "stateful-proxy-service"
namespace: "ingress"
spec:
type: "ExternalName"
externalName: "stateful-service.test.svc.cluster.local"
</code></pre>
<ul>
<li>ingress descriptor for the application with sticky sessions enabled:</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  name: "ingress-stateful"
  namespace: "ingress"
spec:
  rules:
    - host: stateful.foo.bar
    - http:
        paths:
          - path: /
            backend:
              serviceName: "stateful-proxy-service"
              servicePort: 80
</code></pre>
<p>The issue is that sticky sessions are not working correctly with this setup. The "route" cookie is issued but does not guarantee "stickiness". Requests are dispatched to different pods of the backend service even though the same sticky session cookie is sent. To be precise, the pod changes every 100 requests, which seems to be the default round-robin setting - it is the same without sticky sessions enabled.</p>
<p>I was able to make sticky sessions work when everything is deployed in the same namespace and no "proxy" service is used. Then it is OK - requests carrying the same "route" cookie always land on the same pod.</p>
<p>However my setup uses multiple namespaces, and using a proxy service is the recommended way of using ingress with applications deployed in other namespaces.</p>
<p>Any ideas how to resolve this?</p>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>There are two ways to resolve this issue:</p>
<ol>
<li><p>Common approach: Deploy your Ingress rules in the same namespace where the app that they configure resides.</p>
</li>
<li><p>Potentially tricky approach: try to use the <code>ExternalName</code> type of Service. You can define ingress and a service with <code>ExternalName</code> type in namespace A, while the <code>ExternalName</code> points to DNS of the service in namespace B. There are two well-written answers explaining this approach in more detail:</p>
</li>
</ol>
<ul>
<li><p><a href="https://stackoverflow.com/a/51899301/11560878">aurelius' way</a></p>
</li>
<li><p><a href="https://stackoverflow.com/a/61753491/11560878">garlicFrancium's way</a></p>
</li>
</ul>
<p>Notice the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">official docs</a> and bear in mind that:</p>
<blockquote>
<p><strong>Warning:</strong> You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the
hostname used by clients inside your cluster is different from the
name that the ExternalName references.</p>
<p>For protocols that use hostnames this difference may lead to errors or
unexpected responses. HTTP requests will have a <code>Host:</code> header that
the origin server does not recognize; TLS servers will not be able to
provide a certificate matching the hostname that the client connected
to.</p>
</blockquote>
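<p>For the first (common) approach, a minimal sketch of the Ingress moved into the application's namespace and pointing at the ClusterIP service directly, reusing the names from the question:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-stateful
  namespace: test               # same namespace as the app and its service
  annotations:
    kubernetes.io/ingress.class: internal
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - host: stateful.foo.bar
      http:
        paths:
          - path: /
            backend:
              serviceName: stateful-service   # no ExternalName hop needed
              servicePort: 80
</code></pre>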
|
<p>As part of a university course I had to deploy an application to IBM Kubernetes.
I have a <em>pay-as-you-go</em> type of account with my credit card attached to it.</p>
<p>I deployed the application to the cluster (the paid tier with a public IP), and after a few days, once the demonstration was done, the cluster was not needed anymore.
The cluster was configured to use dynamic provisioning with persistent storage via <a href="https://cloud.ibm.com/docs/containers?topic=containers-block_storage" rel="nofollow noreferrer">ibmcloud-block-storage-plugin</a>.</p>
<p>The problem is that the cluster provisioned tens of discs, and after I removed it using the IBM Cloud UI (with the option to remove all persistent volumes checked), the discs are still displayed as active.</p>
<p>Result of invoking <code>ibmcloud sl block volume-list</code>:</p>
<pre><code>77394321 SL02SEL1854117-1 dal13 endurance_block_storage 20 - 161.26.114.100 0 1
78180815 SL02SEL1854117-2 dal10 endurance_block_storage 20 - 161.26.98.107 0 1
78180817 SL02SEL1854117-3 dal10 endurance_block_storage 20 - 161.26.98.107 1 1
78180827 SL02SEL1854117-4 dal10 endurance_block_storage 20 - 161.26.98.106 3 1
78180829 SL02SEL1854117-5 dal10 endurance_block_storage 20 - 161.26.98.108 2 1
78184235 SL02SEL1854117-6 dal10 endurance_block_storage 20 - 161.26.98.88 4 1
78184249 SL02SEL1854117-7 dal10 endurance_block_storage 20 - 161.26.98.86 5 1
78184285 SL02SEL1854117-8 dal10 endurance_block_storage 20 - 161.26.98.107 6 1
78184289 SL02SEL1854117-9 dal10 endurance_block_storage 20 - 161.26.98.105 7 1
78184457 SL02SEL1854117-10 dal10 endurance_block_storage 20 - 161.26.98.85 9 1
78184465 SL02SEL1854117-11 dal10 endurance_block_storage 20 - 161.26.98.88 8 1
78184485 SL02SEL1854117-12 dal10 endurance_block_storage 20 - 161.26.98.86 10 1
78184521 SL02SEL1854117-13 dal10 endurance_block_storage 20 - 161.26.98.106 0 1
78184605 SL02SEL1854117-14 dal10 endurance_block_storage 20 - 161.26.98.87 1 1
78184643 SL02SEL1854117-15 dal10 endurance_block_storage 20 - 161.26.98.85 2 1
78184689 SL02SEL1854117-16 dal10 endurance_block_storage 20 - 161.26.98.87 3 1
78184725 SL02SEL1854117-17 dal10 endurance_block_storage 20 - 161.26.98.108 11 1
[ ... more entries there ... ]
</code></pre>
<p>All of those discs were created using the default IBM bronze block storage class for Kubernetes clusters and have the standard <code>Delete</code> reclaim policy set (so they should have been deleted automatically).</p>
<p>When I try to delete any of those with <code>ibmcloud sl block volume-cancel --immediate --force 77394321</code> I get:</p>
<pre><code>Failed to cancel block volume: 77394321.
No billing item is found to cancel.
</code></pre>
<p>What's more, IBM Cloud displays those discs as active and there's no option to delete them (the option in the menu is grayed out):</p>
<p><a href="https://i.stack.imgur.com/ZvJW3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZvJW3.png" alt="Screenshot of grayed delete button"></a></p>
<p>I don't want to get billed for more than <strong>40 x 20GB</strong> discs, as the cluster didn't even need that many resources (the fault was in badly defined Kubernetes configs).</p>
<p>What is the correct way to remove the discs, or is it only a delay on IBM Cloud's side and everything will be fine with my billing (my billing shows only around <code>$19</code> for the cluster's public IP, nothing more)?</p>
<p><strong>Edit</strong>
It seems that after some time the problem was resolved (I created a ticket, but don't know if the sales team solved the problem. Probably it was just enough to wait, as @Sandip Amin suggested in the comments).</p>
|
<p>Opening a support case would probably be the best course of action here as we'll likely need some account info from you to figure out what happened (or rather, why the expected actions <em>didn't</em> happen). </p>
<p>Log into the cloud and visit <a href="https://cloud.ibm.com/unifiedsupport/supportcenter" rel="nofollow noreferrer">https://cloud.ibm.com/unifiedsupport/supportcenter</a> (or click the Support link in the masthead of the page). If you'll comment back here with your case number, I'll help follow up on it. </p>
|
<p>I have a kubernetes deployment using environment variables and I wonder how to set dynamic endpoints in it.</p>
<p>For the moment, I use</p>
<pre><code>$ kubectl get ep rtspcroatia
NAME ENDPOINTS AGE
rtspcroatia 172.17.0.8:8554 3h33m
</code></pre>
<p>And copy/paste the endpoint's value into my deployment.yaml. For me, it's not the right way to do it, but I can't find another method.</p>
<p>Here is a part of my deployment.yaml :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: person-cam0
  name: person-cam0
spec:
  template:
    metadata:
      labels:
        io.kompose.service: person-cam0
    spec:
      containers:
        - env:
            - name: S2_LOGOS_INPUT_ADDRESS
              value: rtsp://172.17.0.8:8554/live.sdp
          image: ******************
          name: person-cam0
</code></pre>
<p>EDIT : And the service of the rtsp container</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: rtspcroatia
  name: rtspcroatia
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8551
      targetPort: 8554
  selector:
    io.kompose.service: rtspcroatia
</code></pre>
<p>Can you help me to have something like :</p>
<pre><code>containers:
  - env:
      - name: S2_LOGOS_INPUT_ADDRESS
        value: rtsp://$ENDPOINT_ADDR:$ENDPOINT_PORT/live.sdp
</code></pre>
<p>Thank you !</p>
|
<p>You could set a dynamic endpoint value like "POD_IP:SERVICE_PORT", as shown in the sample YAML below.</p>
<pre><code>containers:
  - env:
      - name: MY_ENDPOINT_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: S2_LOGOS_INPUT_ADDRESS
        # Kubernetes expands $(VAR) references against the container's environment
        value: rtsp://$(MY_ENDPOINT_IP):$(RTSPCROATIA_SERVICE_PORT)/live.sdp
</code></pre>
|
<p>As per these two issues on <code>ingress-nginx</code> Github, it seems that the only way to get grpc/http2 working on port 80 without TLS is by using a custom config template:</p>
<ol>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/6313" rel="nofollow noreferrer">ingress does not supporting http2 at port 80 without tls #6313</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/6736" rel="nofollow noreferrer">Add new annotation to support listen 80 http2 #6736</a></li>
</ol>
<p>Unfortunately I could not find any straightforward examples on how to set up a custom nginx-ingress config. Here are the links I tried:</p>
<ol>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/" rel="nofollow noreferrer">Custom NGINX template</a></li>
<li><a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/examples/custom-templates" rel="nofollow noreferrer">Custom Templates</a></li>
</ol>
<p>Can anyone help me with the exact steps and config on how to get grpc/http2 working with nginx-ingress on port 80 <strong>without TLS</strong>?</p>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already mentioned in the comments, the steps to make it work are as follows:</p>
<ol>
<li><p>Launch a separate nginx controller in an empty namespace to avoid issues with the main controller.</p>
</li>
<li><p>Create custom templates, using <a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/internal/configs/version1" rel="nofollow noreferrer">these</a> as a reference.</p>
</li>
<li><p>Put them in a <code>ConfigMap</code> like <a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/examples/custom-templates#example" rel="nofollow noreferrer">this</a>.</p>
</li>
<li><p>Mount the templates into the controller pod as in this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/#custom-nginx-template" rel="nofollow noreferrer">example</a>.</p>
</li>
</ol>
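<p>A rough sketch of steps 3 and 4, assuming a template file named <code>nginx.tmpl</code> and a controller running in a namespace called <code>ingress-custom</code>:</p>
<pre><code># Put the custom template into a ConfigMap
kubectl create configmap nginx-template --from-file=nginx.tmpl -n ingress-custom
</code></pre>
<p>Then the template is mounted into the controller pod over the default one:</p>
<pre><code># Volume and mount added to the controller Deployment (sketch)
volumeMounts:
  - name: nginx-template-volume
    mountPath: /etc/nginx/template
    readOnly: true
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
</code></pre>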
|
<p>I want to access the server via this route, but it does not work: <code>https://ticketing.dev/api/user/currentuser</code></p>
<p>I created <code>skaffold.yaml</code> in the root folder:</p>
<pre><code>apiVersion: skaffold/v2beta11
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: kia9372/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
</code></pre>
<p>And I created a folder named <code>infra</code>, and inside it a folder named <code>k8s</code>. In this folder I created two files:</p>
<p><strong>A :</strong> <code>auth-depl.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: kia9372/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 4000
      targetPort: 4000
</code></pre>
<p><strong>B :</strong> <code>ingress-srv.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: ticketing.dev
    - http:
        paths:
          - path: /api/user/?(.*)
            backend:
              service:
                name: auth-srv
                port:
                  number: 4000
</code></pre>
<p>And in <code>/etc/hosts</code> I wrote this:</p>
<pre><code> # The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 localhost
127.0.1.1 mr-programmer
127.0.1.1 ticketing.dev
</code></pre>
<p>But now I have a problem: when I go to the route <code>https://ticketing.dev/api/user/currentuser</code> it does not show me anything. I tested the server separately from Kubernetes via <code>https://localhost:4000/api/user/currentuser</code> and it works.</p>
<p><strong>What's the problem? How can I solve it?</strong></p>
|
<p><code>Solution</code>:</p>
<ol>
<li>Go to your terminal</li>
<li>Run <code>minikube ip</code> - you will get the minikube IP (example: 172.17.0.2)</li>
<li>Edit <code>/etc/hosts</code>:</li>
</ol>
<p>Change <code>127.0.1.1 ticketing.dev</code> to
<code>172.17.0.2 (minikube ip) ticketing.dev</code></p>
<p>You cannot use the local IP address (127.0.1.1) here; you should use the minikube IP address (172.17.0.2), because you are using minikube.</p>
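<p>To confirm the mapping afterwards, something along these lines could be used (assuming the ingress controller is exposed on the minikube IP and uses a self-signed dev certificate):</p>
<pre><code>minikube ip                                          # e.g. 172.17.0.2
curl -kv https://ticketing.dev/api/user/currentuser  # -k to accept the self-signed cert
</code></pre>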
|
<p>I have a 3-node Kubernetes cluster on 1.11, deployed with kubeadm and Weave (CNI) version 2.5.1. I am providing a Weave CIDR with an IP range of 128 IPs. After two reboots of the nodes, some of the pods are stuck in the <code>containerCreating</code> state.</p>
<p>Once you run <code>kubectl describe pod <pod_name></code> you will see the following errors:</p>
<pre><code>Events:
Type Reason Age From Message
----     ------                   ----                ----                -------
Normal SandboxChanged 20m (x20 over 1h) kubelet, 10.0.1.63 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 30s (x25 over 1h) kubelet, 10.0.1.63 Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
</code></pre>
<p>If I check how many containers are running and how many IP addresses are allocated to those, I can see 24 containers:</p>
<pre><code>[root@ip-10-0-1-63 centos]# weave ps | wc -l
26
</code></pre>
<p>The total number of IPs allocated to Weave on that node is 42.</p>
<pre><code>[root@ip-10-0-1-212 centos]# kubectl exec -n kube-system -it weave-net-6x4cp -- /home/weave/weave --local status ipam
Defaulting container name to weave.
Use 'kubectl describe pod/weave-net-6x4cp -n kube-system' to see all of the containers in this pod.
6e:0d:f3:d7:f5:49(10.0.1.63) 42 IPs (32.8% of total) (42 active)
7a:24:6f:3c:1b:be(10.0.1.212) 40 IPs (31.2% of total)
ee:00:d4:9f:9d:79(10.0.1.43) 46 IPs (35.9% of total)
</code></pre>
<p>You can see all 42 IPs are active, so no more IPs are available to allocate to new containers. But out of those 42 only 26 are actually allocated to containers; I am not sure where the remaining IPs are. It is happening on all three nodes.</p>
<p>Here is the output of weave status for your reference:</p>
<pre><code>[root@ip-10-0-1-212 centos]# weave status
Version: 2.5.1 (version 2.5.2 available - please upgrade!)
Service: router
Protocol: weave 1..2
Name: 7a:24:6f:3c:1b:be(10.0.1.212)
Encryption: disabled
PeerDiscovery: enabled
Targets: 3
Connections: 3 (2 established, 1 failed)
Peers: 3 (with 6 established connections)
TrustedSubnets: none
Service: ipam
Status: waiting for IP(s) to become available
Range: 192.168.13.0/25
DefaultSubnet: 192.168.13.0/25
</code></pre>
<p>If you need any more information, I would be happy to provide it. Any clue?</p>
|
<p>I guess that 16 IPs have been reserved for pod reuse purposes. These are the maximum pods per node based on CIDR ranges:</p>
<pre><code> Maximum Pods per Node CIDR Range per Node
8 /28
9 to 16 /27
17 to 32 /26
33 to 64 /25
65 to 110 /24
</code></pre>
|
<p>I use <code>kubectl exec -it</code> for logging into a single Kubernetes pod.</p>
<p>Is there any way to log in to multiple pods in a cluster, at the same time with a single command (just like <code>csshX</code>)?</p>
|
<p>There is a plugin that could help you with this. It's called <a href="https://github.com/predatorray/kubectl-tmux-exec" rel="nofollow noreferrer">kubectl-tmux-exec</a>:</p>
<blockquote>
<p>A kubectl plugin that controls multiple pods simultaneously using
<a href="https://github.com/tmux/tmux" rel="nofollow noreferrer">Tmux</a>.</p>
<p>It is to <code>kubectl exec</code> as <code>csshX</code> or <code>pssh</code> is to <code>ssh</code>.</p>
<p>Instead of exec bash into multiple pod's containers one-at-a-time,
like <code>kubectl exec pod{N} /bin/bash</code>.</p>
<p>You can now use</p>
<pre><code>kubectl tmux-exec -l app=nginx /bin/bash
</code></pre>
</blockquote>
<p>All necessary details regarding <a href="https://github.com/predatorray/kubectl-tmux-exec#installation" rel="nofollow noreferrer">Installation</a> and <a href="https://github.com/predatorray/kubectl-tmux-exec#usage" rel="nofollow noreferrer">Usage</a> can be found in the linked docs.</p>
|
<p>I have installed prometheus-operator via Helm and now want to set a custom alert rule. Email notifications are set up, but currently I'm getting every notification; I want to "silence" those so I only get emails for custom alerts.</p>
<p>alertmanager.yaml:</p>
<pre><code>global:
resolve_timeout: 5m
route:
receiver: 'email-alert'
group_by: ['job']
routes:
- receiver: 'email-alert'
match:
alertname: etcdInsufficientMembers
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receivers:
- name: email-alert
email_configs:
- to: [email protected]
from: [email protected]
# Your smtp server address
smarthost: smtp.office365.com:587
auth_username: [email protected]
auth_identity: [email protected]
auth_password: pass
</code></pre>
<p>Above file is applied sucessfully,</p>
<p>I added the following lines at the end of the above file, as referenced <a href="https://stackoverflow.com/questions/49472375/alert-when-docker-container-pod-is-in-error-or-carshloopbackoff-kubernetes">here</a>:</p>
<pre><code># Example group with one alert
groups:
  - name: example-alert
    rules:
      # Alert about restarts
      - alert: RestartAlerts
        expr: count(kube_pod_container_status_restarts_total) > 0
        for: 1s
        annotations:
          summary: "More than 5 restarts in pod {{ $labels.pod-name }}"
          description: "{{ $labels.container-name }} restarted (current value: {{ $value }}s) times in pod {{ $labels.pod-namespace }}/{{ $labels.pod-name }}"
</code></pre>
<p>And then in pod logs i'm getting this:</p>
<pre><code>="Loading configuration file failed" file=/etc/alertmanager/config/alertmanager.yaml err="yaml: unmarshal errors:\n line 28: field groups not found in type config.plain"
</code></pre>
|
<p>Solved. First, list all available rules:</p>
<pre><code> kubectl -n monitoring get prometheusrules
NAME AGE
prometheus-prometheus-oper-alertmanager.rules 29h
prometheus-prometheus-oper-etcd 29h
prometheus-prometheus-oper-general.rules 29h
prometheus-prometheus-oper-k8s.rules 29h
prometheus-prometheus-oper-kube-apiserver-error 29h
prometheus-prometheus-oper-kube-apiserver.rules 29h
prometheus-prometheus-oper-kube-prometheus-node-recording.rules 29h
prometheus-prometheus-oper-kube-scheduler.rules 29h
prometheus-prometheus-oper-kubernetes-absent 29h
prometheus-prometheus-oper-kubernetes-apps 29h
prometheus-prometheus-oper-kubernetes-resources 29h
prometheus-prometheus-oper-kubernetes-storage 29h
prometheus-prometheus-oper-kubernetes-system 29h
prometheus-prometheus-oper-kubernetes-system-apiserver 29h
prometheus-prometheus-oper-kubernetes-system-controller-manager 29h
prometheus-prometheus-oper-kubernetes-system-kubelet 29h
prometheus-prometheus-oper-kubernetes-system-scheduler 29h
prometheus-prometheus-oper-node-exporter 29h
prometheus-prometheus-oper-node-exporter.rules 29h
prometheus-prometheus-oper-node-network 29h
prometheus-prometheus-oper-node-time 29h
prometheus-prometheus-oper-node.rules 29h
prometheus-prometheus-oper-prometheus 29h
prometheus-prometheus-oper-prometheus-operator 29h
</code></pre>
<p>Then choose one to edit, or delete all except the default one: <code>prometheus-prometheus-oper-general.rules</code></p>
<p>I chose to edit the node-exporter rule:</p>
<pre><code>kubectl edit prometheusrule prometheus-prometheus-oper-node-exporter -n monitoring
</code></pre>
<p>Added these lines at the end of the file:</p>
<pre><code>- alert: RestartAlerts
  annotations:
    description: '{{ $labels.container }} restarted (current value: {{ $value}}s)
      times in pod {{ $labels.namespace }}/{{ $labels.pod }}'
    summary: More than 5 restarts in pod {{ $labels.container }}
  expr: kube_pod_container_status_restarts_total{container="coredns"} > 5
  for: 1m
  labels:
    severity: warning
</code></pre>
<p>And soon after, i received email for this alert.</p>
|
<p>To untaint my nodes, my scripts use ...</p>
<pre><code># kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>but I want to deliberately taint my node to test this untaint functionality. I tried this variant and it works fine ...</p>
<pre><code># kubectl taint nodes foo foo=DoNotSchedulePods:NoExecute
# kubectl taint nodes foo foo:NoExecute-
node/foo untainted
</code></pre>
<p><strong>but</strong> I can't see how to set things up so that I can test the specific taint that I need to get rid of. I tried e.g.</p>
<pre><code># kubectl taint nodes foo foo=node-role.kubernetes.io/master
error: unknown taint spec: foo=node-role.kubernetes.io/master
</code></pre>
<p>How do I put the node into a situation where I can test the aforementioned untaint command?</p>
|
<p>To untaint the node:</p>
<pre><code> kubectl taint node <Node Name> node-role.kubernetes.io/master-
</code></pre>
<p>To taint the node:</p>
<pre><code> kubectl taint node <Node Name> node-role.kubernetes.io/master=:NoSchedule
</code></pre>
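<p>To check the taint before and after, a quick verification could be:</p>
<pre><code>kubectl describe node <Node Name> | grep -i taints
</code></pre>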
|
<p>I have a deployment with memory limits</p>
<pre><code>resources:
limits:
memory: 128Mi
</code></pre>
<p>But my app starts to fail when it is near the limit, so is there any way to restart the pod before it reaches a percentage of the memory limit?</p>
<p>For example if the limit is 128Mi, restart the pod when it reach 85% of it.</p>
|
<p>I am going to address this question from the Kubernetes side.</p>
<p>As already mentioned by arjain13, the solution you thought of is not the way to go as it is against the idea of <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Requests and limits</a>:</p>
<blockquote>
<p>If you set a memory limit of 4GiB for that Container, the kubelet (and
container runtime) enforce the limit. The runtime prevents the
container from using more than the configured resource limit. For
example: when a process in the container tries to consume more than
the allowed amount of memory, the system kernel terminates the process
that attempted the allocation, with an out of memory (OOM) error.</p>
</blockquote>
<p>You can also find an example of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit" rel="nofollow noreferrer">Exceeding a Container's memory limit</a>:</p>
<blockquote>
<p>A Container can exceed its memory request if the Node has memory
available. But a Container is not allowed to use more than its memory
limit. If a Container allocates more memory than its limit, the
Container becomes a candidate for termination. If the Container
continues to consume memory beyond its limit, the Container is
terminated. If a terminated Container can be restarted, the kubelet
restarts it, as with any other type of runtime failure.</p>
</blockquote>
<p>There are two things I would like to recommend you to try in your current use case:</p>
<ol>
<li><p>Debug your application in order to eliminate the memory leak which looks like to be the source of this issue.</p>
</li>
<li><p>Use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">livenessProbe</a>:</p>
</li>
</ol>
<blockquote>
<p>Indicates whether the container is running. If the liveness probe
fails, the kubelet kills the container, and the container is subjected
to its restart policy.</p>
</blockquote>
<p>It can be configured using the fields below:</p>
<ul>
<li><p><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</p>
</li>
<li><p><code>periodSeconds</code>: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.</p>
</li>
<li><p><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p>
</li>
<li><p><code>successThreshold</code>: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p>
</li>
<li><p><code>failureThreshold</code>: When a probe fails, Kubernetes will try <code>failureThreshold</code> times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p>
</li>
</ul>
<p>If you set the minimal values for <code>periodSeconds</code>, <code>timeoutSeconds</code>, <code>successThreshold</code> and <code>failureThreshold</code> you can expect more frequent checks and faster restarts.</p>
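<p>A minimal sketch of such a probe (the <code>/healthz</code> path and port 8080 are assumptions about your application, not part of the original question):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 8080          # assumed container port
  initialDelaySeconds: 10
  periodSeconds: 5      # check frequently ...
  timeoutSeconds: 1
  failureThreshold: 2   # ... and restart after two consecutive failures
</code></pre>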
<p>Below you will find some useful sources and guides:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes</a></p>
</li>
<li><p><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Kubernetes best practices: Resource requests and limits</a></p>
</li>
<li><p><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">Kubernetes best practices: Setting up health checks with readiness and liveness probes</a></p>
</li>
</ul>
|
<p>I'm running my service on K8s on-prem. Is there any way to deploy my service (such as an eComm website) across K8s on-prem and a public cloud's K8s (for example Google Compute Engine running K8s, or GKE) as a hybrid setup?</p>
<p>Can K8s (or Istio) support this kind of requirement across two different locations?</p>
|
<p>If I understand you correctly the answer is yes.</p>
<p>According to the <a href="https://cloud.google.com/gke-on-prem/docs/overview" rel="nofollow noreferrer">GKE documentation</a>.</p>
<blockquote>
<p>GKE On-Prem is hybrid cloud software that brings Google Kubernetes
Engine (GKE) to on-premises data centers. With GKE On-Prem, you can
create, manage, and upgrade Kubernetes clusters in your on-prem
environment and connect those clusters to Google Cloud Console.</p>
</blockquote>
<p>You can find the Overview of installation <a href="https://cloud.google.com/gke-on-prem/docs/how-to/installation/install-overview" rel="nofollow noreferrer">here</a>. And rest of the documentation <a href="https://cloud.google.com/gke-on-prem/docs/" rel="nofollow noreferrer">here</a>.</p>
<p>As for <a href="https://cloud.google.com/gke-on-prem/docs/how-to/add-ons/istio" rel="nofollow noreferrer">Istio</a>:</p>
<blockquote>
<p>Istio is an open source framework for connecting, monitoring, and
securing microservices, including services running on <strong>GKE On-Prem</strong>.</p>
</blockquote>
<p>Please let me know if that helps.</p>
|
<p>I'm experimenting with using kompose on <code>k3s</code> to turn the compose file into a K8s file, but when I type <code>kompose up</code>, it asks me to enter a <code>username and password</code>, but I don't know what to write.</p>
<p>The specific output is as follows</p>
<pre><code># kompose up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
Please enter Username: test
Please enter Password: test
FATA Error while deploying application: Get https://127.0.0.1:6443/api: x509: certificate signed by unknown authority
</code></pre>
<p>However, the <code>kompose convert</code> command was successfully executed</p>
<p>I would appreciate it if you could tell me how to solve it?</p>
<blockquote>
<p>The kompose version is <code>1.21.0 (992df58d8)</code>, and install it by 'curl and chmod'</p>
<p>The k3s version is <code>v1.17.3+k3s1 (5b17a175)</code>, and install it by 'install.sh script'</p>
<p>OS is <code>Ubuntu18.04.3 TLS</code></p>
</blockquote>
|
<p>I seem to have found my problem: because I installed k3s with the default install.sh script, it stores the k8s configuration file at <code>/etc/rancher/k3s/k3s.yaml</code> instead of the usual <code>~/.kube/config</code>.</p>
<p>This caused kompose up to fail to obtain the certs.</p>
<p>You can copy <code>/etc/rancher/k3s/k3s.yaml</code> to <code>~/.kube/config</code>:</p>
<pre><code>cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
</code></pre>
<p>Then <code>kompose up</code> executes successfully.</p>
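<p>Alternatively (an assumption, not something tested in the original answer), tools built on the standard Kubernetes client libraries usually honor the <code>KUBECONFIG</code> variable, which avoids copying the file:</p>
<pre><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kompose up
</code></pre>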
|
<p>I'm having trouble installing Helm on one of my GKE clusters through the gcloud shell.</p>
<p>When I run: <code>helm install --name mongo-rs-mongodb-replicaset -f 3-values.yaml stable/mongodb-replicaset --debug</code> This is what I get:</p>
<pre><code>[debug] Created tunnel using local port: '39387'
[debug] SERVER: "127.0.0.1:39387"
[debug] Original chart version: ""
[debug] Fetched stable/mongodb-replicaset to /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
[debug] CHART PATH: /home/idan/.helm/cache/archive/mongodb-replicaset-3.9.6.tgz
Error: the server has asked for the client to provide credentials
</code></pre>
<p>My service account is set properly: </p>
<pre><code>kubectl describe serviceaccount tiller --namespace kube-system
Name: tiller
Namespace: kube-system
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: tiller-token-vbrrn
Tokens: tiller-token-vbrrn
Events: <none>
kubectl describe clusterrolebinding tiller
Name: tiller
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount tiller kube-system
</code></pre>
<p>I'm an owner of my project in IAM, and I'm not sure which credentials I should provide - I have never seen this in the past. I tried to initialize it with <code>helm --upgrade</code> too.</p>
|
<p>After every solution I found failed to work, I tried re-creating my cluster and running the same commands, and it simply worked...</p>
|
<p>I have a Kubernetes cluster running in AWS, and I am working through upgrading various components. Internally, we are using NGINX, and it is currently at <code>v1.1.1</code> of the <code>nginx-ingress</code> chart (as served from <a href="https://charts.helm.sh/stable" rel="nofollow noreferrer">old stable</a>), with the following configuration:</p>
<pre><code>controller:
  publishService:
    enabled: "true"
  replicaCount: 3
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: '*.MY.TOP.LEVEL.DOMAIN'
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: [SNIP]
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    targetPorts:
      http: http
      https: http
</code></pre>
<p>My service's ingress resource looks like...</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    [SNIP]
spec:
  rules:
    - host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
      http:
        paths:
          - backend:
              serviceName: MY-SERVICE
              servicePort: 80
            path: /
status:
  loadBalancer:
    ingress:
      - hostname: [SNIP]
</code></pre>
<p>This configuration works just fine; however, it breaks when I upgrade to <code>v3.11.1</code> of the <code>ingress-nginx</code> chart (as served from <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">the k8s museum</a>).</p>
<p>With an unmodified config, curling to the HTTPS scheme redirects back to itself:</p>
<pre><code>curl -v https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
* Trying W.X.Y.Z...
* TCP_NODELAY set
* Connected to MY-SERVICE.MY.TOP.LEVEL.DOMAIN (W.X.Y.Z) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.MY.TOP.LEVEL.DOMAIN
* start date: Aug 21 00:00:00 2020 GMT
* expire date: Sep 20 12:00:00 2021 GMT
* subjectAltName: host "MY-SERVICE.MY.TOP.LEVEL.DOMAIN" matched cert's "*.MY.TOP.LEVEL.DOMAIN"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET INTERNAL/ROUTE HTTP/1.1
> Host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Content-Type: text/html
< Date: Wed, 28 Apr 2021 19:07:57 GMT
< Location: https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
< Content-Length: 164
< Connection: keep-alive
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host MY-SERVICE.MY.TOP.LEVEL.DOMAIN left intact
* Closing connection 0
</code></pre>
<p>(I wish I had captured more verbose output...)</p>
<p>I tried modifying the NGINX config to append the following:</p>
<pre><code>config:
  use-forwarded-headers: "true"
</code></pre>
<p>and then...</p>
<pre><code>config:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
</code></pre>
<p>These did not seem to make a difference. It was in the middle of the day, so I wasn't able to dive too far in before rolling back.</p>
<p>What should I look at, and how should I debug this?</p>
<p><strong>Update:</strong></p>
<p>I wish that I had posted a complete copy of the updated config, because I would have noticed that I did <em>not</em> correctly apply the change to add <code>config.compute-full-forwarded-for: "true"</code>. It needed to be within the <code>controller</code> block, and I had placed it elsewhere.</p>
<p>Once the <code>compute-full-forwarded-for: "true"</code> config was added, everything started to work immediately.</p>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As confirmed by @object88 the issue was with misplaced <code>config.compute-full-forwarded-for: "true"</code> configuration which was located in the wrong block. Adding it to the <code>controller</code> block solved the issue.</p>
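<p>For reference, when installing the chart via Helm the key belongs under the <code>controller</code> block of the values file. A minimal sketch (only these two keys are shown; everything else stays at your own values):</p>
<pre><code>controller:
  config:
    compute-full-forwarded-for: "true"
    use-forwarded-headers: "true"
</code></pre>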
|
<p>We have an application which uses SSH to copy artifacts from one node to another. While creating the Docker image (based on Linux CentOS 8), I installed the OpenSSH server and client. When I run the image with the Docker command and exec into it, I can successfully run the SSH command, and I also see port 22 enabled and listening (<code>$ lsof -i -P -n | grep LISTEN</code>).</p>
<p>But if I start a POD/Container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start the <code>sshd</code> from inside the k8s container then it gives me below error:</p>
<pre><code>Redirecting to /bin/systemctl start sshd.service Failed to get D-Bus connection: Operation not permitted.
</code></pre>
<p>Is there any way to start the K8s container with SSH enabled?</p>
|
<p>There are three things to consider:</p>
<ol>
<li>Like David said in his comment: </li>
</ol>
<blockquote>
<p>I'd redesign your system to use a communication system that's easier
to set up, like with HTTP calls between pods.</p>
</blockquote>
<ol start="2">
<li><p>If you put a service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.</p></li>
<li><p>In case you have missed that: you need to declare port 22 in your deployment template (see the sketch after this list). </p></li>
</ol>
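<p>To illustrate points 2 and 3: since systemd is normally not available inside a container, <code>sshd</code> would typically have to be started directly in the foreground instead of via <code>systemctl</code>. A minimal sketch of such a pod spec (the image name is an assumption and must already contain <code>openssh-server</code> with generated host keys):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ssh-test
spec:
  containers:
  - name: ssh-test
    image: my-centos8-ssh-image   # assumption: CentOS 8 image with openssh-server installed and host keys generated
    command: ["/usr/sbin/sshd", "-D", "-e"]   # run sshd in the foreground, logging to stderr
    ports:
    - containerPort: 22           # declare port 22 as mentioned in point 3
</code></pre>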
<p>Please let me know if that helped. </p>
|
<p>I bootstrapped a kubernetes cluster using kubeadm. After a few months of inactivity, when I list our running pods, I see that the kube-apiserver is stuck in CreateContainerError!</p>
<pre><code>kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-bcv8m 1/1 Running 435 175d
coredns-576cbf47c7-dwvmv 1/1 Running 435 175d
etcd-master 1/1 Running 23 175d
kube-apiserver-master 0/1 CreateContainerError 23 143m
kube-controller-manager-master 1/1 Running 27 175d
kube-proxy-2s9sx 1/1 Running 23 175d
kube-proxy-rrp7m 1/1 Running 20 127d
kube-scheduler-master 1/1 Running 24 175d
kubernetes-dashboard-65c76f6c97-7cwwp 1/1 Running 34 169d
tiller-deploy-779784fbd6-cwrqn 1/1 Running 0 152m
weave-net-2g8s5 2/2 Running 62 170d
weave-net-9r6cp 2/2 Running 44 127d
</code></pre>
<p>I deleted this pod to restart it, but the same problem persists.</p>
<p>More info :</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 175d v1.12.1
worker Ready worker 175d v1.12.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl describe pod kube-apiserver-master -n kube-system
Name: kube-apiserver-master
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: master/192.168.88.205
Start Time: Wed, 07 Aug 2019 17:58:29 +0430
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.mirror: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.seen: 2019-08-07T17:58:28.178339939+04:30
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 192.168.88.205
Containers:
kube-apiserver:
Container ID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
Image: k8s.gcr.io/kube-apiserver:v1.12.1
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:52b9dae126b5a99675afb56416e9ae69239e012028668f7274e30ae16112bb1f
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--authorization-mode=Node,RBAC
--advertise-address=192.168.88.205
--allow-privileged=true
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Waiting
Reason: CreateContainerError
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 07 Aug 2019 17:58:30 +0430
Finished: Wed, 07 Aug 2019 13:28:11 +0430
Ready: False
Restart Count: 23
Requests:
cpu: 250m
Liveness: http-get https://192.168.88.205:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
$ kubectl get pods kube-apiserver-master -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/config.hash: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.mirror: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.seen: 2019-08-07T17:58:28.178339939+04:30
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: 2019-08-13T08:33:18Z
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver-master
namespace: kube-system
resourceVersion: "19613877"
selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-master
uid: 0032d68b-bda5-11e9-860c-000c292f9c9e
spec:
containers:
- command:
- kube-apiserver
- --authorization-mode=Node,RBAC
- --advertise-address=192.168.88.205
- --allow-privileged=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.12.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.88.205
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: master
priority: 2000000000
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
operator: Exists
volumes:
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2019-08-07T13:28:29Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2019-08-07T08:58:11Z
message: 'containers with unready status: [kube-apiserver]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2019-08-07T08:58:11Z
message: 'containers with unready status: [kube-apiserver]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2019-08-07T13:28:29Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
image: k8s.gcr.io/kube-apiserver:v1.12.1
imageID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:52b9dae126b5a99675afb56416e9ae69239e012028668f7274e30ae16112bb1f
lastState:
terminated:
containerID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
exitCode: 255
finishedAt: 2019-08-07T08:58:11Z
reason: Error
startedAt: 2019-08-07T13:28:30Z
name: kube-apiserver
ready: false
restartCount: 23
state:
waiting:
message: 'Error response from daemon: Conflict. The container name "/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"
is already in use by container 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530.
You have to remove (or rename) that container to be able to reuse that name.'
reason: CreateContainerError
hostIP: 192.168.88.205
phase: Running
podIP: 192.168.88.205
qosClass: Burstable
startTime: 2019-08-07T13:28:29Z
</code></pre>
<p>If any other information is needed let me know.</p>
<p>How can I make it run properly?</p>
|
<p>The issue is explained by this error message from docker daemon:</p>
<blockquote>
<p>message: 'Error response from daemon: Conflict. The container name
"/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"
is already in use by container 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530.
You have to remove (or rename) that container to be able to reuse that name.'
reason: CreateContainerError</p>
</blockquote>
<p>List all containers using:</p>
<p><code>docker ps -a</code></p>
<p>You should be able to find on the list container with following name:</p>
<p><code>/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24</code></p>
<p>or ID:</p>
<p><code>14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>Then you can try to delete it by running:</p>
<p><code>docker rm "/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"</code></p>
<p>or by providing its ID:</p>
<p><code>docker rm 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>If there is still any problem with removing it, add the <code>-f</code> flag to delete it forcefully:</p>
<p><code>docker rm -f 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>Once done that, you can try to delete <code>kube-apiserver-master</code> pod, so it can be recreated.</p>
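<p>For completeness, deleting the pod itself would look like this:</p>
<pre><code>kubectl delete pod kube-apiserver-master -n kube-system
</code></pre>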
|
<p>Not sure if there was already such a question, so pardon me if I couldn't find it.</p>
<p>I have a cluster of 3 nodes; my application consists of a frontend and a backend, each running 2 replicas:</p>
<ul>
<li>front1 - running on <code>node1</code></li>
<li>front2 - running on <code>node2</code></li>
<li>be1 - <code>node1</code></li>
<li>be2 - <code>node2</code></li>
<li>Both <code>FE</code> pods are served behind <code>frontend-service</code></li>
<li>Both <code>BE</code> pods are served behind <code>be-service</code></li>
</ul>
<p>When I shutdown <code>node-2</code>, the application stopped and in my UI I could see application errors.</p>
<p>I've checked the logs and found that my application attempted to reach the backend Service and the call failed, since <code>be2</code> wasn't running and the scheduler had not yet terminated the existing pod.</p>
<p>Only when the node was terminated and removed from the cluster, the pods were rescheduled to the 3rd node and the application was back online.</p>
<p>I know a service mesh can help by removing pods that aren't responding from the traffic, but I don't want to implement one yet. I'm trying to understand what the best solution is to route the traffic to the healthy pods in a fast and easy way; 5 minutes of downtime is a lot of time.</p>
<p>Here's my <code>be</code> deployment spec:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: backend
name: backend
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: backend
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: backend
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-Application
operator: In
values:
- "true"
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- backend
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: SSL_ENABLED
value: "false"
image: quay.io/something:latest
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /liveness
port: 16006
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 10
name: backend
ports:
- containerPort: 16006
protocol: TCP
- containerPort: 8457
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readiness
port: 16006
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: 1500m
memory: 8500Mi
requests:
cpu: 6m
memory: 120Mi
dnsPolicy: ClusterFirst
</code></pre>
<p>Here's my backend service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: identity
name: backend
namespace: default
spec:
clusterIP: 10.233.34.115
ports:
- name: tcp
port: 16006
protocol: TCP
targetPort: 16006
- name: internal-http-rpc
port: 8457
protocol: TCP
targetPort: 8457
selector:
app: backend
sessionAffinity: None
type: ClusterIP
</code></pre>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already mentioned by @TomerLeibovich the main issue here was due to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">Probes Configuration</a>:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#probe-v1-core" rel="nofollow noreferrer">Probes</a> have a number of fields that you can use to more precisely
control the behavior of liveness and readiness checks:</p>
<ul>
<li><p><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to
0 seconds. Minimum value is 0.</p>
</li>
<li><p><code>periodSeconds</code>: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.</p>
</li>
<li><p><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p>
</li>
<li><p><code>successThreshold</code>: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1
for liveness and startup Probes. Minimum value is 1.</p>
</li>
<li><p><code>failureThreshold</code>: When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness
probe means restarting the container. In case of readiness probe the
Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p>
</li>
</ul>
</blockquote>
<p>Plus the proper <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">Pod eviction configuration</a>:</p>
<blockquote>
<p>The <code>kubelet</code> needs to preserve node stability when available compute
resources are low. This is especially important when dealing with
incompressible compute resources, such as memory or disk space. If
such resources are exhausted, nodes become unstable.</p>
</blockquote>
<p>Changing the threshold to 1 instead of 3 and reducing the pod-eviction solved the issue as the Pod is now being evicted sooner.</p>
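<p>As an illustration, a sketch of the tightened readiness probe based on the spec from the question (only the <code>failureThreshold</code> differs; treat the exact numbers as values to be tuned for your workload):</p>
<pre><code>readinessProbe:
  failureThreshold: 1       # mark the pod Unready after a single failed check
  httpGet:
    path: /readiness
    port: 16006
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
</code></pre>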
<p><strong>EDIT:</strong></p>
<p>The other possible solution in this scenario is to label other nodes with the app backend to make sure that each backend/pod was deployed on different nodes. In your current situation one pod deployed on the node was removed from the endpoint and the application became unresponsive.</p>
<p>Also, the workaround for triggering pod eviction from the unhealthy node is to add tolerations to</p>
<pre><code>deployment.spec. template.spec: tolerations: - key: "node.kubernetes.io/unreachable" operator: "Exists" effect: "NoExecute" tolerationSeconds: 60
</code></pre>
<p>instead of using the default value: <code>tolerationSeconds: 300</code>.</p>
<p>You can find more information in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">this documentation</a>.</p>
|
<p>I've created an nginx ingress controller that is linked to two services. The website works fine, but the js and css files are not loaded in the HTML page (404 errors). I've created the nginx pod using helm and included the nginx config in the ingress.yaml. The error is raised when I use nginx; if I run the docker image locally, it works fine. Also, if I change the services' type to LoadBalancer, the applications work fine.</p>
<p><a href="https://i.stack.imgur.com/F256W.png" rel="nofollow noreferrer">![here is the error in the webpage
]<a href="https://i.stack.imgur.com/F256W.png" rel="nofollow noreferrer">1</a></a></p>
<p><a href="https://i.stack.imgur.com/F256W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F256W.png" alt="enter image description here"></a></p>
<p>here is the GKE services:</p>
<p><a href="https://i.stack.imgur.com/HgDvT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HgDvT.png" alt="enter image description here"></a></p>
<p>ingress.yaml:</p>
<pre class="lang-py prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
creationTimestamp: 2019-07-08T08:35:52Z
generation: 1
name: www
namespace: default
resourceVersion: "171166"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/www
uid: 659594d6-a15b-11e9-a671-42010a980160
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
rules:
- host: twitter.domain.com
http:
paths:
- backend:
serviceName: twitter
servicePort: 6020
- host: events.omarjs.com
http:
paths:
- backend:
serviceName: events
servicePort: 6010
- http:
paths:
- backend:
serviceName: twitter
servicePort: 6020
path: /twitter
- backend:
serviceName: events
servicePort: 6010
path: /events
tls:
- secretName: omarjs-ssl
status:
loadBalancer: {}
</code></pre>
<p>twitter.yaml:</p>
<pre class="lang-py prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-07-07T20:43:49Z
labels:
run: twitter
name: twitter
namespace: default
resourceVersion: "27299"
selfLink: /api/v1/namespaces/default/services/twitter
uid: ec8920ca-a0f7-11e9-ac47-42010a98008f
spec:
clusterIP: 10.7.253.177
externalTrafficPolicy: Cluster
ports:
- nodePort: 31066
port: 6020
protocol: TCP
targetPort: 3020
selector:
run: twitter
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<pre class="lang-py prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-07-07T20:43:49Z
labels:
run: twitter
name: twitter
namespace: default
resourceVersion: "27299"
selfLink: /api/v1/namespaces/default/services/twitter
uid: ec8920ca-a0f7-11e9-ac47-42010a98008f
spec:
clusterIP: 10.7.253.177
externalTrafficPolicy: Cluster
ports:
- nodePort: 31066
port: 6020
protocol: TCP
targetPort: 3020
selector:
run: twitter
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
|
<p>You can probably solve your problem by adding additional ingress rules which will route the requests for static files to proper directories. As far as I can see on the attached print screen, your static files are placed in <code>css</code> and <code>js</code> directories respectively, so you need to add 4 additional rules to your <code>ingress.yaml</code>. The final version of this fragment may look like that:</p>
<pre><code> - http:
paths:
- backend:
serviceName: twitter
servicePort: 6020
path: /twitter
- backend:
serviceName: twitter
servicePort: 6020
path: /css
- backend:
serviceName: twitter
servicePort: 6020
path: /js
- backend:
serviceName: events
servicePort: 6010
path: /events
- backend:
serviceName: events
servicePort: 6010
path: /css
- backend:
serviceName: events
servicePort: 6010
path: /js
</code></pre>
|
<p>I have a static website bundle I want to serve on my cluster. The bundle is stored in a google cloud storage bucket, which makes me think I may not actually need a separate "server" to return the files. </p>
<p>I have been able to get Python-Flask to reference the files from the bucket, but I can't seem to figure out how to get Ambassador to do the same. I could do something like add the bundle to an nginx instance, but I don't want to build the JS bundle into any docker image so I can do rapid updates.</p>
<p>I can't figure out how to set up an ambassador route to do the following:</p>
<p>If a user goes to</p>
<p><a href="https://my-website.com/" rel="nofollow noreferrer">https://my-website.com/</a></p>
<p>They get the <code>index.html</code> served from my Google Bucket <code>my-bucket/index.html</code></p>
<p>and when the index.html references a file internally (/static/js/main.js), Ambassador serves up the file found at <code>my-bucket/static/js/main.js</code></p>
<p>I have tried setting up a service like so:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v0
kind: Mapping
name: website_mapping
prefix: /website/
service: https://my-bucket-url/index.html
name: website-service
labels:
app: website-service
spec:
ports:
- port: 80
targetPort: 80
name: http-website
selector:
app: website
</code></pre>
<p>But navigating to <code>my-website.com/website/</code> only gets me a 503 error with the console complaining "the character of the encoding of the plain text document was not declared"</p>
<p>I feel like I'm going about this wrong. Can I serve straight from the bucket like this using Ambassador, or do I really need something like nginx?</p>
|
<p>Ambassador is actually not a web server (as Laszlo Valko points out). It needs to proxy your request to some other web server for this to work -- that can certainly be Flask (in fact, the Ambassador diagnostic service is a Flask application started at boot time inside the Ambassador pod), but it needs to be running somewhere. :)</p>
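<p>As a rough sketch, the Mapping would then point at whatever web server actually serves the bundle (for example the Flask app you already have reading from the bucket). The service name and port below are assumptions:</p>
<pre><code>---
apiVersion: ambassador/v0
kind: Mapping
name: website_mapping
prefix: /website/
service: website-flask:5000   # assumption: a Flask (or nginx) Service that serves the bundle
</code></pre>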
|
<p>I've installed a single-node cluster with kubeadm, but the log symlinks in /var/log/containers are empty.
What do I need to do to configure this?</p>
|
<p>On machines with systemd, the kubelet and container runtime write to <code>journald</code>. Check if your log output goes to <code>journald</code>. By default Docker should write those logs to <code>json.log</code> files, but I don't know the specifics of your setup. Check <code>/etc/sysconfig/</code> for <code>--log-driver=journald</code> and delete it if needed. What we want here is to have the log driver set to <code>json-file</code>. Then you would see the log files in <code>/var/log/containers</code>. </p>
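<p>For example, on a Docker-based node the log driver can be pinned to <code>json-file</code> in <code>/etc/docker/daemon.json</code> (followed by a restart of the Docker daemon); the <code>max-size</code> value is just an illustrative assumption:</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
</code></pre>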
<p>Please let me know if that helped.</p>
|
<p>Here is part of my CronJob spec:</p>
<pre><code>kind: CronJob
spec:
schedule: #{service.schedule}
</code></pre>
<p>For a specific environment a cron job is set up, but I never want it to run. Can I write some value into <code>schedule:</code> that will cause it to never run?</p>
<p>I haven't found any documentation for all supported syntax, but I am hoping for something like:</p>
<p><code>@never</code> or <code>@InABillionYears</code></p>
|
<p><code>@reboot</code> doesn't guarantee that the job will never be run. It will actually be run <strong>every time your system is booted/rebooted</strong>, and <strong>that may well happen</strong>. It will also be run <strong>each time the cron daemon is restarted</strong>, so you would need to rely on <strong>"typically it should not happen"</strong> on your system...</p>
<p>There are far more certain ways to ensure that a <code>CronJob</code> will never be run:</p>
<ol>
<li><em>On Kubernetes level</em> by <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#suspend" rel="noreferrer">suspending</a> a job by setting its <code>.spec.suspend</code> field to <code>true</code></li>
</ol>
<p>You can easily set it using patch:</p>
<pre><code>kubectl patch cronjobs <job-name> -p '{"spec" : {"suspend" : true }}'
</code></pre>
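<p>Applied to the spec from the question, this first option would be a minimal sketch like:</p>
<pre><code>kind: CronJob
spec:
  schedule: #{service.schedule}
  suspend: true    # no new Jobs will be created while this is true
</code></pre>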
<ol start="2">
<li><em>On Cron level.</em> Use a trick based on the fact that <strong>crontab</strong> syntax is not strictly validated and set a date that you can be sure will never happen, like the 31st of February. <strong>Cron</strong> will accept that as it doesn't check the <code>day of the month</code> in relation to the value set in the <code>month</code> field. It just requires that you put valid numbers in both fields (<code>1-31</code> and <code>1-12</code> respectively). You can set it to something like:</li>
</ol>
<p><code>* * 31 2 *</code></p>
<p>which for <strong>Cron</strong> is perfectly valid value but we know that such a date is impossible and it will never happen.</p>
|
<p>I am trying to run the Ambassador API gateway on my local dev environment to simulate what I'll end up with in production - the difference is that in production my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me and I'm getting the error below:</p>
<p><code>upstream connect error or disconnect/reset before headers. reset reason: connection failure</code></p>
<p>I assume that's due to an issue in the mapping file, which is as follows:</p>
<pre><code>apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
</code></pre>
<p>So what I'm basically trying to do is rewrite all requests coming to <code>http://{ambassador_url}/ins</code> to a service running locally in IIS Express (through Visual Studio) on port <code>44332</code>.</p>
<p>What am I missing?</p>
|
<p>I think you may be better off using another one of Ambassador Labs tools called Telepresence.</p>
<p><a href="https://www.telepresence.io/" rel="nofollow noreferrer">https://www.telepresence.io/</a></p>
<p>With Telepresence you can take your local service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and can get real time feedback on how your service operates with other services in the cluster.</p>
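<p>As a very rough sketch (the exact commands and flags depend on the Telepresence version you install, and the workload name below is an assumption), intercepting cluster traffic for a workload and sending it to the service running locally on port 44332 could look roughly like this:</p>
<pre><code># connect your machine to the cluster
telepresence connect

# route traffic destined for the "institutions" workload to localhost:44332
telepresence intercept institutions --port 44332
</code></pre>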
|
<p>I follow <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">the guide</a> to install a test minikube on my virtualbox of ubuntu-18.04.
It's a virtualbox on my windows computer.so I use
sudo minikube start --vm-driver=none
to start minikube.
then execute minikube dashboard ....I can access <a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a> with the generated token.
Everything is well by now.</p>
<p>BUT I need to poweroff my computer on weekends. So I stop minikube and shutdown the ubuntu vm.</p>
<pre><code>sudo minikube stop
sudo shutdown
</code></pre>
<p>When I come back to work on Monday, I can't access the dashboard web UI;
<code>sudo minikube dashboard</code> hangs until I press Ctrl+C.
<a href="https://i.stack.imgur.com/YHSHh.png" rel="nofollow noreferrer">minikube dashboard hangs until I press Ctrl+C</a></p>
<p>How can I restore the web UI? Or is there anything I need to do before shutting down the VM?</p>
|
<p><code>minikube dashboard</code> starts alongside <code>kubectl proxy</code>. The process waits for <code>kubectl proxy</code> to finish, but apparently it never does, therefore the command never exits. This happens because of security precautions: <code>kubectl proxy</code> runs underneath to enforce additional security restrictions in order to prevent DNS rebinding attacks.</p>
<p>What you can do is reset <code>minikube</code> by cleaning up the current config and data and then starting a fresh new instance:</p>
<pre><code>minikube stop
rm -rf ~/.minikube
minikube start
</code></pre>
<p>Please let me know if that helped. </p>
|
<p>I have a <code>CronJob</code> that runs a process in a container in Kubernetes.</p>
<p>This process takes in a time window that is defined by a <code>--since</code> and <code>--until</code> flag. This time window needs to be defined at container start time (when the cron is triggered) and is a function of the current time. An example running this process would be:</p>
<pre class="lang-sh prettyprint-override"><code>$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>So for the example above, I would like the time window to be from 1 hour ago to 1 hour in the future. Is there a way in Kubernetes to pass in a formatted datetime as a command argument to a process?</p>
<p>An example of what I am trying to do would be the following config:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-process
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: my-process
image: my-image
args:
- my-process
- --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")
- --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>When doing this, the literal string <code>"$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")"</code> would be passed in as the <code>--since</code> flag.</p>
<p>Is something like this possible? If so, how would I do it?</p>
|
<p>Note that in your <code>CronJob</code> you don't run <code>bash</code> or any other shell, and <code>command substitution</code> is a shell feature that will not work without one. In your example only one command, <code>my-process</code>, is started in the container and, as it is not a shell, it is unable to perform <code>command substitution</code>.</p>
<p>This one:</p>
<pre><code>$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>will work properly because it is started in a <code>shell</code> so it may take advantage of shell features such as mentioned <code>command substitution</code></p>
<p>One thing: <code>date -v -1H +"%Y-%m-%dT%H:%M:%SZ"</code> doesn't expand properly in <code>bash shell</code> with default <code>GNU/Linux</code> <code>date</code> implementation. Among others <code>-v</code> option is not recognized so I guess you're using it on <code>MacOSX</code> or some kind of <code>BSD</code> system. In my examples below I will use date version that works on <code>Debian</code>.</p>
<p>So for testing it on <code>GNU/Linux</code> it will be something like this:</p>
<p><code>date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"</code></p>
<p>For testing purpose I've tried it with simple <strong>CronJob</strong> from <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job" rel="noreferrer">this</a> example with some modifications:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
env:
- name: FROM
value: $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ")
- name: TILL
value: $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
args:
- /bin/sh
- -c
- date; echo from $(FROM) till $(TILL)
restartPolicy: OnFailure
</code></pre>
<p>It works properly. Below you can see the result of <code>CronJob</code> execution:</p>
<pre><code>$ kubectl logs hello-1569947100-xmglq
Tue Oct 1 16:25:11 UTC 2019
from 2019-10-01T15:25:11Z till 2019-10-01T17:25:11Z
</code></pre>
<p>Apart from the example with use of <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="noreferrer">environment variables</a> I tested it with following code:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
args:
- /bin/sh
- -c
- date; echo from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
restartPolicy: OnFailure
</code></pre>
<p>and as you can see here <code>command substitution</code> also works properly:</p>
<pre><code>$ kubectl logs hello-1569949680-fk782
Tue Oct 1 17:08:09 UTC 2019
from 2019-10-01T16:08:09Z till 2019-10-01T18:08:09Z
</code></pre>
<p>It works properly because in both examples we <strong>first</strong> spawn a <code>bash shell</code> in our container and it <strong>subsequently</strong> runs other commands, such as the simple <code>echo</code>, provided as its argument. You can use your <code>my-process</code> command instead of <code>echo</code>; you'll just need to provide it in one line with all its arguments, like this:</p>
<pre><code>args:
- /bin/sh
- -c
- my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p><strong>This example will not work</strong> as there is no <code>shell</code> involved. <code>echo</code> command not being a shell will not be able to perform command substitution which is a shell feature:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
args:
- /bin/echo
- from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
restartPolicy: OnFailure
</code></pre>
<p>and the results will be a literal string:</p>
<pre><code>$ kubectl logs hello-1569951180-fvghz
from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>which is similar to your case as your command, like <code>echo</code> isn't a <code>shell</code> and it cannot perform <code>command substitution</code>.</p>
<p>To sum up: The solution for that is <strong>wrapping your command as a shell argument</strong>. In first two examples <code>echo</code> command is passed along with other commands as shell argument.</p>
<p>Maybe it is visible more clearly in the following example with a bit different syntax:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
command: ["/bin/sh","-c"]
args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
restartPolicy: OnFailure
</code></pre>
<p><code>man bash</code> says:</p>
<blockquote>
<p>-c If the -c option is present, then commands are read from the first non-option argument command_string.</p>
</blockquote>
<p>so <code>command: ["/bin/sh","-c"]</code> basically means <em>run a shell and execute following commands</em> which then we pass to it using <code>args</code>. In <code>bash</code> commands should be separated with semicolon <code>;</code> so they are run independently (subsequent command is executed no matter what was the result of executing previous command/commands).</p>
<p>In the following fragment:</p>
<pre><code>args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
</code></pre>
<p>we provide to <code>/bin/sh -c</code> three separate commands:</p>
<pre><code>FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ')
</code></pre>
<p>which sets <code>FROM</code> environment variable to result of execution of <code>date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'</code> command,</p>
<pre><code>TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
</code></pre>
<p>which sets <code>TILL</code> environment variable to result of execution of <code>date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ'</code> command</p>
<p>and finally we run </p>
<pre><code>echo from $FROM till $TILL
</code></pre>
<p>which uses both variables.</p>
<p>Exactly the same can be done with any other command.</p>
|
<p>I need to downsize a cluster from 3 to 2 nodes.
I have critical pods running on some nodes (0 and 1). Since I found that the last node (2) in the cluster is the one running the non-critical pods, I have "cordoned" it so it won't get any new ones.
I wonder if I can make sure that this last node (2) is the one that will be removed when I go to the Azure portal and downsize my cluster to 2 nodes (it is the last node and it is cordoned).</p>
<p>I have read that if I manually delete the node, the system will still consider there are 3 nodes running so it's important to use the cluster management to downsize it.</p>
|
<p>You cannot control which node will be removed when scaling down the AKS cluster.</p>
<p>However, there are some workarounds for that:</p>
<ol>
<li><p>Delete the cordoned node manually via the portal and then launch an upgrade. It would try to add the node back but with no success because the subnet has no space left.</p></li>
<li><p>Another option is to (see the sketch after this list):</p>
<ul>
<li><p>Set up cluster autoscaler with two nodes</p></li>
<li><p>Scale up the number of nodes in the UI</p></li>
<li><p>Drain the node you want to delete and wait for autoscaler do it's job</p></li>
</ul></li>
</ol>
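<p>For reference, the drain step from the second option and the scale operation from the <code>az aks scale</code> reference below could look roughly like this (resource group, cluster and node names are assumptions):</p>
<pre><code># drain the node you want to get rid of
kubectl drain aks-nodepool1-12345678-2 --ignore-daemonsets --delete-local-data

# scale the cluster via the Azure CLI
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2
</code></pre>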
<p>Here are some sources and useful info:</p>
<ul>
<li><p><a href="https://learn.microsoft.com/en-us/azure/aks/scale-cluster" rel="nofollow noreferrer">Scale the node count in an Azure Kubernetes Service (AKS) cluster</a></p></li>
<li><p><a href="https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks/suggestions/35702938-support-selection-of-nodes-to-remove-when-scaling" rel="nofollow noreferrer">Support selection of nodes to remove when scaling down</a></p></li>
<li><p><a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-scale" rel="nofollow noreferrer"><code>az aks scale</code></a></p></li>
</ul>
<p>Please let me know if that helped.</p>
|
<p>community:</p>
<p>I used kubeadm to set up a Kubernetes cluster.</p>
<p>I used a YAML file to create serviceaccount, role and rolebindings to the serviceaccount.</p>
<p>Then, when I curl the pods endpoint in the default namespace, Kubernetes always returns "Unauthorized".</p>
<p>I do not know what exactly I got wrong here.</p>
<p>The yaml file is like below:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pzhang-test
namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: default
subjects:
- kind: ServiceAccount
name: pzhang-test
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>the secrets and token like below:</p>
<pre><code>root@robota:~# kubectl get secrets
NAME TYPE DATA AGE
default-token-9kg87 kubernetes.io/service-account-token 3 2d6h
pzhang-test-token-wz9zj kubernetes.io/service-account-token 3 29m
root@robota:~# kubectl get secrets pzhang-test-token-wz9zj -o yaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1ETXlOekF6TkRjd04xb1hEVEk1TURNeU5EQXpORGN3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFpHCkUxajJaQnBrdzhsSmRRa2lDSlI1N1J6N0lXendudmtvNVlUK1BneXhqZm5NUjJXaWV3M3F0QWZEdi9oOWI3OUcKN1dTRFFCWG9qcmFkQjNSTTROVHhTNktCQlg1SlF6K2JvQkNhTG5hZmdQcERueHc3T082VjJLY1crS2k5ZHlOeApDQ1RTNTVBRWc3OTRkL3R1LzBvcDhLUjhhaDlMdS8zeVNRdk0zUzFsRW02c21YSmVqNVAzRGhDbUVDT1RnTHA1CkZQSStuWFNNTS83cWpYd0N4WjUyZFZSR3U0Y0NYZVRTWlBSM1R0UzhuU3luZ2ZiN1NVM1dYbFZvQVIxcXVPdnIKb2xqTmllbEFBR1lIaDNUMTJwT01HMUlOakRUNVVrdEM5TWJYeDZoRFc5ZkxwNTFkTEt4bnN5dUtsdkRXVDVOWQpwSDE5UTVvbDJRaVpnZzl0K2Y4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHb2RnL0ozMUhRQkVOZVNrdEF4dS9WRU43NmQKZ1k2NG5aRTdDQTZJY2pXaEJBNmlpblh3S0E1Q0JuVEVxS05QVEdWTG5KWVd4WDlwVE9jZkhWTkROT3RDV0N4NApDN3JMWjZkRDJrUGs2UUVYSEg3MXJ4cUdKS21WbFJ0UytrOC9NdHAzNmxFVE5LTUc5VXI1RUNuWDg0dVZ5dUViCnRWWlRUcnZPQmNDSzAyQ2RSN3Q0V3pmb0FPSUFkZ1ZTd0xxVGVIdktRR1orM1JXOWFlU2ZwWnpsZDhrQlZZVzkKN1MvYXU1c3NIeHIwVG85ZStrYlhqNDl5azJjU2hKa1Y2M3JOTjN4SEZBeHdzUUVZYTNMZ1ZGWFlHYTJFWHdWdwpLbXRUZmhoQWE0Ujd5dW5SdkIrdndac3ZwbHc2RmhQQldHVTlpQnA3aU9vU0ZWVmNlMUltUks3VkRqbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluQjZhR0Z1WnkxMFpYTjBMWFJ2YTJWdUxYZDZPWHBxSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW5CNmFHRnVaeTEwWlhOMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVlURTNPR1F3T1RrdE5USXdZaTB4TVdVNUxUa3lNMlF0TURBd1l6STVZbVJrTlRBMklpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZjSHBvWVc1bkxYUmxjM1FpZlEubnNlY1lPTjJkRUIwLVVSdXFJNm1tQVJlOHBSRGlES01STXJvRHc5SThIU24wNE9Qd0JvUzdhSDRsNjlSQ19SMDFNNUp0Rm9OcVFsWjlHOGJBNW81MmsxaVplMHZJZnEzNVkzdWNweF95RDlDT2prZ0xCU2k1MXgycUtURkE5eU15QURoaTFzN2ttT2d0VERDRVpmS1l3ME1vSjgtQUZPcXJkVndfZU15a2NGU3ZpYWVEQTRYNjFCNzhXYWpYcUttbXdfTUN1XzZVaG4wdklOa3pqbHBLaGs5anRlb0JvMFdGX0c3b1RzZXJVOTRuSGNCWkYwekRQcEpXTzlEVlc1a1B0Mm1Fem1NeWJoeVBfNTBvS0NKMTB4NGF4UzlIdXlwOTZ4SzV0NmNZZVNrQkx4bmVEb19wNzlyUlNXX1FLWFZCWm1UaWI1RHlZUHZxSGdSRFJiMG5B
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: pzhang-test
kubernetes.io/service-account.uid: a178d099-520b-11e9-923d-000c29bdd506
creationTimestamp: "2019-03-29T10:15:51Z"
name: pzhang-test-token-wz9zj
namespace: default
resourceVersion: "77488"
selfLink: /api/v1/namespaces/default/secrets/pzhang-test-token-wz9zj
uid: a179dae0-520b-11e9-923d-000c29bdd506
type: kubernetes.io/service-account-token
# the TOKEN is:
root@robota:~# echo $TOKEN
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluQjZhR0Z1WnkxMFpYTjBMWFJ2YTJWdUxYZDZPWHBxSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW5CNmFHRnVaeTEwWlhOMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVlURTNPR1F3T1RrdE5USXdZaTB4TVdVNUxUa3lNMlF0TURBd1l6STVZbVJrTlRBMklpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZjSHBvWVc1bkxYUmxjM1FpZlEubnNlY1lPTjJkRUIwLVVSdXFJNm1tQVJlOHBSRGlES01STXJvRHc5SThIU24wNE9Qd0JvUzdhSDRsNjlSQ19SMDFNNUp0Rm9OcVFsWjlHOGJBNW81MmsxaVplMHZJZnEzNVkzdWNweF95RDlDT2prZ0xCU2k1MXgycUtURkE5eU15QURoaTFzN2ttT2d0VERDRVpmS1l3ME1vSjgtQUZPcXJkVndfZU15a2NGU3ZpYWVEQTRYNjFCNzhXYWpYcUttbXdfTUN1XzZVaG4wdklOa3pqbHBLaGs5anRlb0JvMFdGX0c3b1RzZXJVOTRuSGNCWkYwekRQcEpXTzlEVlc1a1B0Mm1Fem1NeWJoeVBfNTBvS0NKMTB4NGF4UzlIdXlwOTZ4SzV0NmNZZVNrQkx4bmVEb19wNzlyUlNXX1FLWFZCWm1UaWI1RHlZUHZxSGdSRFJiMG5B
</code></pre>
<p>I use this command:</p>
<pre><code>root@robota:~# curl --cacert ./ca.crt --header "Authorization: Bearer $TOKEN" https://192.16.208.142:6443/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p>As you can see the curl returns:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p>I expected the output to be a list of the pods in my <code>default</code> namespace</p>
<pre><code>root@robota:~# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
my-nginx-64fc468bd4-8fq6j 1/1 Running 0 2d6h
my-nginx-64fc468bd4-ffkhb 1/1 Running 0 2d6h
</code></pre>
|
<p>The <code>token</code> value stored in the Secret is base64-encoded, so it has to be decoded before it can be used as a Bearer token. Maybe you can try:</p>
<pre><code>TOKEN=$(kubectl get secret pzhang-test-token-wz9zj -o yaml | grep "token:" | awk '{print $2}' | base64 -d)
</code></pre>
<pre><code>kubectl get secret pzhang-test-token-wz9zj -o yaml | grep "ca.crt" | awk '{print $2}' | base64 -d > ca.crt
</code></pre>
<pre><code>curl -H "Authorization: Bearer $TOKEN" --cacert ca.crt https://192.16.208.142:6443/api/v1/namespaces/default/pods
</code></pre>
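<p>An equivalent sketch using <code>jsonpath</code> instead of <code>grep</code>/<code>awk</code> (same secret as above):</p>
<pre><code>TOKEN=$(kubectl get secret pzhang-test-token-wz9zj -o jsonpath='{.data.token}' | base64 -d)
kubectl get secret pzhang-test-token-wz9zj -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
curl -H "Authorization: Bearer $TOKEN" --cacert ca.crt https://192.16.208.142:6443/api/v1/namespaces/default/pods
</code></pre>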
|
<p>I have a kubernetes setup with following</p>
<pre><code>sn. type service namespace
1. statefulset rabbitmq rabbitmq
2. deployment pods default
3. hpa hpa default
</code></pre>
<p>The metrics are exported on <code>/apis/custom.metrics.k8s.io/v1beta1</code> as follows:</p>
<pre><code>{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "services/rabbitmq_queue_messages_ready",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/rabbitmq_queue_messages_ready",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "pods/rabbitmq_queue_messages_ready",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>Configuration of hpa</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: demo-hpa
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test-pod2
minReplicas: 1
maxReplicas: 2
metrics:
- type: Object
object:
metric:
name: "rabbitmq_queue_messages_ready"
describedObject:
apiVersion: custom.metrics.k8s.io/v1/beta1
kind: Service
name: rabbitmq
target:
type: Value
value: 50
</code></pre>
<p>Whenever I try to deploy the hpa in <code>default</code> namespace I am getting the following error.</p>
<pre><code> ScalingActive False FailedGetObjectMetric the HPA was unable to compute the replica count: unable to get metric rabbitmq_queue_messages_ready: Service on default rabbitmq/unable to fetch metrics from custom metrics API: the server could not find the metric rabbitmq_queue_messages_ready for services rabbitmq
</code></pre>
<p>But when the same HPA is set in the namespace <code>rabbitmq</code> with this HPA</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: demo-hpa
namespace: rabbitmq
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test-pod2
minReplicas: 1
maxReplicas: 2
metrics:
- type: Object
object:
metric:
name: "rabbitmq_queue_messages_ready"
describedObject:
apiVersion: custom.metrics.k8s.io/v1/beta1
kind: Service
name: rabbitmq
target:
type: Value
value: 50
</code></pre>
<p>and deployment.yaml as</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-pod2
labels:
app: test-pod2
namespace: rabbimtq
spec:
replicas: 1
selector:
matchLabels:
app: test-pod2
template:
metadata:
labels:
app: test-pod2
spec:
containers:
- name: test-pod2
image: golang:1.16
command: ["sh", "-c", "tail -f /etc/hosts"]
</code></pre>
<p>The deployment works perfect.</p>
<p>Is there a way I can export the metrics for the rabbitmq service from the <code>rabbitmq</code> namespace to <strong>any other namespace</strong>, which I can then use for scaling?</p>
|
<p>HPA is a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespaced</a> resource. It means that it can only scale Deployments which are in the same Namespace as the HPA itself. That's why it is only working when both HPA and Deployment are in the <code>namespace: rabbitmq</code>. You can check it within your cluster by running:</p>
<pre><code> kubectl api-resources --namespaced=true | grep hpa
NAME SHORTNAMES APIGROUP NAMESPACED KIND
horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler
</code></pre>
<p>The easiest way to make it work would be to simply set the <code>Namespace</code> value of the HPA to the same Namespace as the Deployment which you want to scale. For example, if your deployment is set like below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: rabbimtq
</code></pre>
<p>the HPA must also be in the same Namespace:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
namespace: rabbitmq
</code></pre>
<p>In other words: Set the <code>metadata.namespace</code> value of your HPA to the <code>metadata.namespace</code> value of your Deployment as the HPA is not able to scale Deployments outside of its Namespace.</p>
<p>Namespaces are like separate "boxes" in your Cluster. Namespaced resources cannot work outside of the Namespace they are into. They can't see resources that are not in their "box".</p>
<p>By doing it that way there would be no need to reconfigure the custom metrics.</p>
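<p>Once both objects live in the <code>rabbitmq</code> namespace, you can verify that the metric is reachable there before the HPA queries it by hitting the custom metrics API directly; the path below follows the standard <code>custom.metrics.k8s.io</code> layout and its availability depends on your metrics adapter configuration:</p>
<pre><code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/rabbitmq/services/rabbitmq/rabbitmq_queue_messages_ready"
</code></pre>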
|
<p>I need to run a kubemq cluster on my desktop for development. I have installed the kubemq docker container as described here <a href="https://docs.kubemq.io/installation/docker.html" rel="nofollow noreferrer">https://docs.kubemq.io/installation/docker.html</a>. But I can't figure out how to connect to it using the kubemqctl utility. Is it possible in general? It shows only the kubernetes clusters from my kubectl config, and I don't see how to pass connection information. Thanks. </p>
<p>update #1</p>
<p>after following installation instructions, I see only those cluster which is working in kubernetes cluster and listed in my kubectl config. </p>
<pre><code>Getting KubeMQ Cluster List...
Current Kubernetes cluster context connection: gke_xxxxx_us-central1-a_xxxx
NAME DESIRED RUNNING READY IMAGE AGE SERVICES
env-dev/kubemq 3 3 3 kubemq/kubemq:latest 1792h54m57s ClusterIP 10.0.2.211:8080,50000,9090,5228
</code></pre>
<p>when I try to switch context it shows only that cluster again</p>
<pre><code>Current Kubernetes cluster context connection: gke_xxxxx_us-central1-a_xxxx ? Select kubernetes cluster context [Use arrows to move, type to filter, ? for more help]
> gke_xxxxx_us-central1-a_xxxx
</code></pre>
<p>My .kubemqctl.yaml content is:</p>
<pre><code>autointegrated: false
currentnamespace: kubemq
currentstatefulset: kubemq-cluster
host: localhost
grpcport: 50000
restport: 9090
apiport: 8080
issecured: false
certfile: ""
kubeconfigpath: ""
connectiontype: grpc
defaulttoken: XXXXXX-ed0c-4077-9a74-b53805ee3214
</code></pre>
<p>update #2</p>
<p>I am able to connect from my Go code to my locally running kubemq cluster</p>
<pre><code>archi@eagle:~/prj/kubemq-go/examples/rpc/command$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cd60e4373e4 kubemq/kubemq "./kubemq-run" 46 hours ago Up 46 hours 0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:50000->50000/tcp modest_wescoff
</code></pre>
<p>But I cannot figure out how it is possible to connect to it using the kubemqctl utility, because it only looks at my kubectl config, which cannot contain my local kubemq instance since it runs in a docker container, not in a kubernetes cluster. To list my kubemq clusters I use the command: </p>
<pre><code>kubemqctl cluster get
</code></pre>
<p>the output of the command is shown above (in update #1 section of my question) </p>
<p>update #3</p>
<p>As @mario said, I am able to attach to my queries channel and see all messages! </p>
<pre><code>$kubemqctl queries attach userservice_channel
Adding 'userservice_channel' to attach list
[queries] [userservice_channel] kubemq-3cd60e4373e4 Server connected.
[queries] [userservice_channel] { "Kind": "request", "ID": "user_token_validate", "Channel": "userservice_channel", "ReplyChannel": "_INBOX.OYnfIQX2k7V9hTmevxHApp.Iqfr3HUw", "Metadata": "some-metadata", "Body": "\n\ufffd\u0001eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoxLCJyb2xlIjoic3VwZXJ1c2Vy.......
</code></pre>
|
<p>I assume you've already installed <strong>kubemqctl</strong> command following <a href="https://docs.kubemq.io/kubemqctl/kubemqctl.html" rel="nofollow noreferrer">this</a> instruction but you probably haven't configured it yet.</p>
<p>Once you run <code>kubemqctl help</code> command, it will show you all available options including:</p>
<pre><code>Usage:
kubemqctl [command]
Available Commands:
cluster Executes KubeMQ cluster management commands
commands Execute KubeMQ 'commands' RPC commands
config Run Kubemqctl configuration wizard command
</code></pre>
<p>First you need to run <code>kubemqctl config</code> which will start the <strong>configuration wizard</strong>, guiding you step by step through the basic configuration:</p>
<p>In second step you should select (using arrow keys) your <code>Local docker container</code>:</p>
<pre><code>? Set Default KubeMQ Token (press Enter for default):
? Select KubeMQ install location: [Use arrows to move, type to filter, ? for more help]
Kubernetes cluster
MicroK8s
K3s
Minikube
Other Kubernetes distribution
> Local docker container
</code></pre>
<p>Other options you can probably left at their defaults.</p>
<p>I assume you already checked that your <strong>KubeMQ</strong> is up and running and its API is available with:</p>
<pre><code>curl --location --request GET "http://localhost:8080/health" \
--header "Content-Type: application/json"
</code></pre>
<p>If so, you should be able to interact with it using your <code>kubemqctl</code> command line tool.</p>
<p>Please let me know if it helps.</p>
<hr />
<h3>Update:</h3>
<p><code>kubemqctl config</code> is merely a <code>wizard</code> tool which creates your <code>.kubemqctl.yaml</code> file based on which you can run <code>kubemqctl commands</code> against your <code>kubemq</code> instance. My <code>.kubemqctl.yaml</code> looks exactly the same as yours (only with different token).</p>
<p>As I already mentioned in my comment, <strong>single docker container isn't considered a cluster at all</strong>. From <strong>kubernetes</strong> perspective it is merely a container runtime and it is totally normal that <code>kubemqctl</code> similarly to <code>kubectl</code> will not list it as a <code>cluster</code> and won't be able to perform on it any cluster-related operation such as <code>kubemqctl cluster get</code>. You can easily verify it by changing your .kube/config name:</p>
<pre><code>mv .kube/config .kube/config_temporarily_disabled
</code></pre>
<p>If I run <code>kubemqctl cluster get</code> now, it shows me the following output:</p>
<pre><code>Getting KubeMQ Cluster List...
Error: Kubernetes Config File: Stat <path_to_the_kube_config_file>: No Such File Or Directory
</code></pre>
<p>which means that <code>kubemqctl</code> similarly to <code>kubectl</code> always looks for <code>kubeconfig</code> file when performing any <code>cluster-related</code> operation. This group of <code>kubemqctl</code> commands is simply designed only for this purpose and is not intended to be used with your <code>kubemq</code> instance deployed as a single docker container following <a href="https://docs.kubemq.io/installation/docker.html" rel="nofollow noreferrer">this</a> instruction.</p>
<h3><strong>Important:</strong> single docker container <strong>IS NOT</strong> a cluster</h3>
<p>It is not a cluster both in common sense and it is not considered as such by <code>kubemqctl</code> utility.</p>
<p>Note that you didn't even create it with:</p>
<pre><code>kubemqctl cluster create -t <YOUR_KUBEMQ_TOKEN>
</code></pre>
<p>which is used to create a <code>cluster</code> in all <strong>kubernetes-based</strong> solutions described in the <a href="https://docs.kubemq.io/installation/kubernetes.html" rel="nofollow noreferrer">documentation</a>: <strong>Kubernetes, MicroK8s and K3s</strong> but you won't find it in <a href="https://docs.kubemq.io/installation/docker.html" rel="nofollow noreferrer">Docker</a> section. Basically if you didn't create your <code>kubemq</code> instance using <code>kubemqctl cluster create</code> command, you also won't be able to list it using <code>kubemqctl cluster get</code> as this is not a <code>cluster</code>.</p>
<p>However you can still successfully run other <code>kubemqctl</code> commands and their sub-commands against your standalone <code>kubemq</code> instance running on a single <code>docker</code> container. You can actually run most of the commands listed below (excluding <code>cluster</code>):</p>
<pre><code>$ kubemqctl
Usage:
kubemqctl [command]
Available Commands:
cluster Executes KubeMQ cluster management commands
commands Execute KubeMQ 'commands' RPC commands
config Run Kubemqctl configuration wizard command
events Execute KubeMQ 'events' Pub/Sub commands
events_store Execute KubeMQ 'events_store' Pub/Sub commands
help Help about any command
queries Execute KubeMQ 'queries' RPC based commands
queues Execute KubeMQ 'queues' commands
Flags:
-h, --help help for kubemqctl
--version version for kubemqctl
Use "kubemqctl [command] --help" for more information about a command.
</code></pre>
<p>As I already mentioned your <code>.kubemqctl.yaml</code> looks totally fine. It is properly configured to run commands against your <code>kubemq</code> running in <code>docker</code>. When I run <code>docker ps</code> on my machine I can see properly deployed <code>kubemq</code> container:</p>
<pre><code>c9adac88484f kubemq/kubemq "./kubemq-run" 3 hours ago Up 3 hours 0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:50000->50000/tcp sleepy_ganguly
</code></pre>
<p>As you can see in the output above (or in the output added to your question, as it is almost the same), it properly maps the required ports exposed by the <code>container</code> to the machine's localhost address. You can also check it with the <code>netstat</code> command:</p>
<pre><code>$ sudo netstat -ntlp | egrep "8080|9090|50000"
tcp6 0 0 :::8080 :::* LISTEN 21431/docker-proxy
tcp6 0 0 :::50000 :::* LISTEN 21394/docker-proxy
tcp6 0 0 :::9090 :::* LISTEN 21419/docker-proxy
</code></pre>
<p>That's basically enough to be able to use such a <code>kubemq</code> instance. Let's run a few example commands.</p>
<p>Running simple:</p>
<pre><code>kubemqctl queues
</code></pre>
<p>will show us its <code>help</code> page which specifies how it should be used and gives some useful examples. Let's pick <code>kubemqctl queues list</code> as first:</p>
<pre><code>$ kubemqctl queues list
CHANNELS:
NAME CLIENTS MESSAGES BYTES FIRST_SEQUENCE LAST_SEQUENCE
q1 1 12 2621 1 12
TOTAL CHANNELS: 1
CLIENTS:
CLIENT_ID CHANNEL ACTIVE LAST_SENT PENDING STALLED
##################################### q1 false 2 1 true
TOTAL CLIENTS: 1
</code></pre>
<p>Let's pick another one:</p>
<pre><code>kubemqctl queues send
</code></pre>
<p>Again, when run without required flags and parameters, it shows us some helpful usage examples:</p>
<pre><code>Error: Missing Arguments, Must Be 2 Arguments, Channel And A Message
Try:
# Send message to a queue channel channel
kubemqctl queue send q1 some-message
# Send message to a queue channel with metadata
kubemqctl queue send q1 some-message --metadata some-metadata
# Send 5 messages to a queues channel with metadata
kubemqctl queue send q1 some-message --metadata some-metadata -m 5
# Send message to a queue channel with a message expiration of 5 seconds
kubemqctl queue send q1 some-message -e 5
# Send message to a queue channel with a message delay of 5 seconds
kubemqctl queue send q1 some-message -d 5
# Send message to a queue channel with a message policy of max receive 5 times and dead-letter queue 'dead-letter'
kubemqctl queue send q1 some-message -r 5 -q dead-letter
</code></pre>
<p>Let's run one of them (slightly modified):</p>
<pre><code>kubemqctl queue send queue2 some-message --metadata some-metadata
</code></pre>
<p>You should see similar output:</p>
<pre><code>[Channel: queue2] [client id: ##############################] -> {id: ############################, metadata: some-metadata, body: some-message - (0), sent at: #### }
</code></pre>
<p>Now if you list the available queues with the <code>kubemqctl queues list</code> command, you'll see, among others (q1 was previously created by me), our recently created queue named <code>queue2</code>:</p>
<pre><code>CHANNELS:
NAME CLIENTS MESSAGES BYTES FIRST_SEQUENCE LAST_SEQUENCE
q1 1 12 2621 1 12
queue2 0 1 232 1 1
TOTAL CHANNELS: 2
...
</code></pre>
<p>It was just an example to show you that it can be run against a <strong><code>kubemq</code> instance running on a single docker container</strong>, but you can run other <code>kubemqctl</code> commands such as <code>commands</code>, <code>events</code> or <code>queries</code> the same way. <code>kubemqctl</code> has really good <code>context help</code> with many usage examples, so you should easily find what you need.</p>
<p>I hope I was able to clear up all the ambiguities.</p>
|
<p>I am trying to use a constant in skaffold, and to access it in skaffold profile:</p>
<p>example <code>export SOME_IP=199.99.99.99 && skaffold run -p dev</code></p>
<p>skaffold.yaml</p>
<pre><code>...
deploy:
helm:
flags:
global:
- "--debug"
releases:
- name: ***
chartPath: ***
imageStrategy:
helm:
explicitRegistry: true
createNamespace: true
namespace: "***"
setValueTemplates:
SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
</code></pre>
<p>and in dev.yaml profile I need somehow to access it,<br />
something like:<br />
<code>{{ .Template.SKAFFOLD_SOME_IP }}</code> and it should be rendered as <code>199.99.99.99</code></p>
<p>I tried to use the skaffold <strong>envTemplate</strong> and <strong>setValueTemplates</strong> fields, but had no success, and could not find any example on the web.</p>
|
<p>Basically I found a solution which I truly don't like, but it works:</p>
<p>in <strong>dev</strong> profile: <strong>values.dev.yaml</strong> I added a placeholder</p>
<pre><code> _anchors_:
- &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
</code></pre>
<p>The <strong><IPAddr_01_TAG></strong> placeholder will be replaced with the constant <strong>SOME_IP</strong>, which becomes <strong>199.99.99.99</strong> when skaffold runs.</p>
<p>Now to run skaffold I will do:</p>
<pre><code>export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
</code></pre>
<p>So after the above sed, in the <strong>dev</strong> profile <strong>values.dev.yaml</strong> we will see the <strong>SOME_IP</strong> constant instead of the placeholder:</p>
<pre><code> _anchors_:
- &_IPAddr_01 "199.99.99.99"
</code></pre>
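<p>If the in-place <code>sed</code> bothers you (it permanently rewrites <strong>values.dev.yaml</strong>), one way to keep the placeholder intact is to wrap the same steps in a small script that restores the file afterwards. This is just a sketch of the workaround above, nothing skaffold-specific:</p>
<pre><code>#!/usr/bin/env bash
set -e
export SOME_IP=199.99.99.99
# keep a pristine copy with the <IPAddr_01_TAG> placeholder and restore it on exit
cp values/values.dev.yaml values/values.dev.yaml.bak
trap 'mv values/values.dev.yaml.bak values/values.dev.yaml' EXIT
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
</code></pre>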
|
<p>I'm adding an Ingress as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cheddar
spec:
rules:
- host: cheddar.213.215.191.78.nip.io
http:
paths:
- backend:
service:
name: cheddar
port:
number: 80
path: /
pathType: ImplementationSpecific
</code></pre>
<p>but the logs complain:</p>
<pre><code>W0205 15:14:07.482439 1 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
time="2021-02-05T15:14:07Z" level=info msg="Updated ingress status" namespace=default ingress=cheddar
W0205 15:18:19.104225 1 warnings.go:67] networking.k8s.io/v1beta1 IngressClass is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 IngressClassList
</code></pre>
<p>Why? What's the correct yaml to use?
I'm currently on microk8s 1.20</p>
|
<p>I have analyzed your issue and came to the following conclusions:</p>
<ol>
<li>The Ingress will work and these Warnings you see are just to inform you about the available api versioning. You don't have to worry about this. I've seen the same Warnings:</li>
</ol>
<hr />
<pre><code>@microk8s:~$ kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
<ol start="2">
<li>As for the "why" this is happening even when you use <code>apiVersion: networking.k8s.io/v1</code>, I have found the <a href="https://github.com/kubernetes/kubernetes/issues/94761#issuecomment-691982480" rel="noreferrer">following explanation</a>:</li>
</ol>
<blockquote>
<p>This is working as expected. When you create an ingress object, it can
be read via any version (the server handles converting into the
requested version). <code>kubectl get ingress</code> is an ambiguous request,
since it does not indicate what version is desired to be read.</p>
<p>When an ambiguous request is made, kubectl searches the discovery docs
returned by the server to find the first group/version that contains
the specified resource.</p>
<p>For compatibility reasons, <code>extensions/v1beta1</code> has historically been
preferred over all other api versions. Now that ingress is the only
resource remaining in that group, and is deprecated and has a GA
replacement, 1.20 will drop it in priority so that <code>kubectl get ingress</code> would read from <code>networking.k8s.io/v1</code>, but a 1.19 server
will still follow the historical priority.</p>
<p>If you want to read a specific version, you can qualify the get
request (like <code>kubectl get ingresses.v1.networking.k8s.io</code> ...) or can
pass in a manifest file to request the same version specified in the
file (<code>kubectl get -f ing.yaml -o yaml</code>)</p>
</blockquote>
<p>Long story short: despite the fact that you use the proper <code>apiVersion</code>, the deprecated one is still seen as the default one and thus generates the Warning you experience.</p>
<p>I also see that <a href="https://github.com/concourse/concourse-chart/issues/204" rel="noreferrer">changes are still being made</a> recently, so I assume this is still being worked on.</p>
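<p>If you want to double-check that your object is indeed served as <code>networking.k8s.io/v1</code>, you can qualify the request as described above, for example (using the Ingress name from your manifest):</p>
<pre><code>kubectl get ingresses.v1.networking.k8s.io cheddar -o yaml
</code></pre>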
|
<p>Full error message: <code>connect ECONNREFUSED 127.0.0.1:30561 at TCPConnectWrap.afterConnect</code></p>
<p>The axios request is running in a Node.js environment (Next.js), which is where the error occurs; strangely, the axios request works perfectly fine when it is run in the browser. </p>
<p>My component (running in Node.js) that calls axios:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>import axios from 'axios'
import Router from 'next/router'
import React, { Component } from 'react'
import { initializeStore } from '~/reducers'
import { authenticate } from '~/actions/auth'
import { getCookieByName } from '~/helpers/cookie'
const isServer = typeof window === 'undefined'
const __NEXT_REDUX_STORE__ = '__NEXT_REDUX_STORE__'
function getOrCreateStore(initialState) {
// Always make a new store if server, otherwise state is shared between requests
if (isServer) {
return initializeStore(initialState)
}
// Create store if unavailable on the client and set it on the window object
if (!window[__NEXT_REDUX_STORE__]) {
window[__NEXT_REDUX_STORE__] = initializeStore(initialState)
}
return window[__NEXT_REDUX_STORE__]
}
export default App => {
return class AppWithRedux extends Component {
static async getInitialProps(appContext) {
const reduxStore = getOrCreateStore()
appContext.ctx.reduxStore = reduxStore
let appProps = {}
if (typeof App.getInitialProps === 'function') {
appProps = await App.getInitialProps(appContext)
}
const JWT = (isServer ? getCookieByName('JWT', appContext.ctx.req.headers.cookie) : getCookieByName('JWT', document.cookie))
const pathname = appContext.ctx.pathname
//set axios baseURL
axios.defaults.baseURL = (isServer ? `${appContext.ctx.req.headers['x-forwarded-proto']}://${appContext.ctx.req.headers.host}` : window.location.origin)
//if user has a JWT
if(JWT){
axios.defaults.headers.common['Authorization'] = `Bearer ${JWT}`
//get user from API layer
reduxStore.dispatch(authenticate())
}
return {
...appProps,
initialReduxState: reduxStore.getState()
}
}
constructor(props) {
super(props)
this.reduxStore = getOrCreateStore(props.initialReduxState)
}
render() {
return <App {...this.props} reduxStore={this.reduxStore} />
}
}
}</code></pre>
</div>
</div>
</p>
<p>Specifically <code>reduxStore.dispatch(authenticate())</code></p>
<p>And my actual axios call (using redux thunk), looking at the <code>authenticate</code> method: </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>import axios from 'axios'
import { setCookieByName } from '~/helpers/cookie'
const BASE_URL = '/api/auth'
export const TYPE_REGISTER = 'TYPE_REGISTER'
export const TYPE_AUTHENTICATE = 'TYPE_AUTHENTICATE'
export const register = values => (dispatch) => {
return axios.post(`${BASE_URL}/register`, values)
.then(function({data: {token, user}}){
setCookieByName('JWT', token, 365)
dispatch({
type: TYPE_REGISTER,
payload: user
})
})
}
export const authenticate = () => (dispatch) => {
return axios.post(`${BASE_URL}/me`)
.then(function({data: {user}}){
dispatch({
type: TYPE_AUTHENTICATE,
payload: user
})
})
.catch(function(err){
console.log(err)
dispatch({
type: TYPE_AUTHENTICATE,
payload: {}
})
})
}</code></pre>
</div>
</div>
</p>
<p>I'm running my local Kubernetes cluster using Docker for Mac, and my Ingress controller is being accessed on <code>http://kludge.info:30561</code>. My domain is mapped from <code>127.0.0.1 kludge.info</code> locally to allow the Ingress controller to hit the container. My theory is that when I send a request to <code>http://kludge.info:30561/api/auth/me</code> for example, the docker container running the Node.js app thinks it is a request to localhost (inside the container), which results in a connection error. Please note that the Node.js app inside the container is running on <code>http://localhost:8080</code>. Basically I'm running localhost on my machine, and localhost inside the Node instance. How could I send a request outside to <code>http://kludge.info:30561/</code> where the Ingress controller is running?</p>
<p>I've also configured the <code>baseURL</code> in axios, but it does not solve the problem. My ingress controller has a path <code>/api</code> that will point to a PHP instance, so I need my Node.js axios call to hit that inside its container. Any help would be much appreciated.</p>
<p>When I ran my K8s cluster on Minikube I did not have this problem; however, minikube does provide you with the IP of the VM, while Docker for Desktop uses <code>localhost</code> directly on your machine.</p>
|
<p>If I understand you correctly, you open a socket on <code>localhost</code> (127.0.0.1), so it is only accessible locally. If you want it to be accessible from outside you need to bind it to <code>0.0.0.0</code>, meaning "all interfaces".
Listening on <code>0.0.0.0</code> means listening for connections from anywhere with network access to this computer: from this very computer, from the local network or from the Internet. Listening on <code>127.0.0.1</code> means accepting connections from this computer only.</p>
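<p>If you want to verify what address the app is actually bound to inside the pod, a quick check could look like this (the pod name is a placeholder, and it assumes <code>netstat</code> or <code>ss</code> is available in the image):</p>
<pre><code># a 127.0.0.1:8080 entry means the app only accepts local connections,
# 0.0.0.0:8080 (or :::8080) means it listens on all interfaces
kubectl exec -it <your-nextjs-pod> -- sh -c "netstat -ntlp || ss -ntlp"
</code></pre>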
<p>Please let me know if that helped, or if I have misunderstood you.</p>
|
<p>Is it true that I cannot have two LoadBalancer services on a docker-desktop cluster (osx), because they would both use <code>localhost</code> (and all ports are forwarded)?</p>
<p>I created an example and the latter service is never assigned an external IP address but stays in state <code>pending</code>. However, the former is accessible on localhost.</p>
<pre><code>> kubectl get all
NAME READY STATUS RESTARTS AGE
pod/whoami-deployment-9f9c86c4f-l5lkj 1/1 Running 0 28s
pod/whoareyou-deployment-b896ddb9c-lncdm 1/1 Running 0 27s
pod/whoareyou-deployment-b896ddb9c-s72sc 1/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 95s
service/whoami-service LoadBalancer 10.97.171.139 localhost 80:30024/TCP 27s
service/whoareyou-service LoadBalancer 10.97.171.204 <pending> 80:32083/TCP 27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/whoami-deployment 1/1 1 1 28s
deployment.apps/whoareyou-deployment 2/2 2 2 27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/whoami-deployment-9f9c86c4f 1 1 1 28s
replicaset.apps/whoareyou-deployment-b896ddb9c 2 2 2 27s
</code></pre>
<p>Detailed state of whoareyou-service:</p>
<pre><code>kubectl describe service whoareyou-service
Name: whoareyou-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"whoareyou-service","namespace":"default"},"spec":{"ports":[{"name...
Selector: app=whoareyou
Type: LoadBalancer
IP: 10.106.5.8
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30333/TCP
Endpoints: 10.1.0.209:80,10.1.0.210:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
|
<p>I decided to copy my comments, as they partially explain the problem, and make a <code>Community Wiki</code> answer out of them so it is more clearly seen and available for possible further edits by the Community:</p>
<p>It probably works exactly the same way as in <strong>Minikube</strong>. As <strong>docker-desktop</strong> is unable to provision a real <code>LoadBalancer</code>, it "simulates" creating a <code>Service</code> of such type using <code>NodePort</code> (this can easily be seen from the port range it uses). I'm pretty sure you cannot use the same IP address as the <code>ExternalIP</code> of more than one <code>LoadBalancer</code> <code>Service</code>, and if you create one more <code>Service</code> of such type, your <strong>docker-desktop</strong> has no other choice than to use your localhost one more time. As it is already used by one <code>Service</code>, it cannot be used by another one, and that's why the second one remains in a <code>pending</code> state.</p>
<p>Note that if you create a real <code>LoadBalancer</code> in a cloud environment, a new IP is provisioned each time and there is no situation where the next <code>LoadBalancer</code> you create gets the same IP that is already used by an existing one. Apparently here it cannot use any other IP than localhost, and this one is already in use. Anyway, I would recommend you to simply use <code>NodePort</code> if you want to expose your <code>Deployment</code> to the external world.</p>
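<p>A minimal sketch of such a <code>NodePort</code> <code>Service</code>, based on the <code>app=whoareyou</code> selector visible in your output (the explicit <code>nodePort</code> value is optional and just an example from the allowed 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: whoareyou-service
spec:
  type: NodePort
  selector:
    app: whoareyou
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # optional; omit it to let Kubernetes pick a free port
</code></pre>
<p>With <strong>docker-desktop</strong> the service should then be reachable on <code>localhost:30080</code>.</p>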
|
<p>I have setup nginx ingress controller configurations under the data property as shown in the below yaml file.</p>
<ol>
<li><p>I would like to know whether this is the correct way to set nginx configurations, instead of providing an nginx.conf file.</p>
</li>
<li><p>Secondly I would like to find out whether the provided configurations are set. To find whether the new configurations are applied, should I exec into the pod and run <code>nginx -T</code> or is there any other way to find it?</p>
</li>
</ol>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data:
worker-processes: "24"
worker-connections: "100000"
worker-rlimit-nofile: "102400"
worker-cpu-affinity: "auto 111111111111111111111111"
keepalive: "200"
main-template: |
user nginx;
worker_processes {{.WorkerProcesses}};
{{- if .WorkerRlimitNofile}}
worker_rlimit_nofile {{.WorkerRlimitNofile}};{{end}}
{{- if .WorkerCPUAffinity}}
worker_cpu_affinity {{.WorkerCPUAffinity}};{{end}}
{{- if .WorkerShutdownTimeout}}
worker_shutdown_timeout {{.WorkerShutdownTimeout}};{{end}}
daemon off;
error_log /var/log/nginx/error.log {{.ErrorLogLevel}};
pid /var/run/nginx.pid;
{{- if .MainSnippets}}
{{range $value := .MainSnippets}}
{{$value}}{{end}}
{{- end}}
events {
worker_connections {{.WorkerConnections}};
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
...
sendfile on;
access_log off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 315;
keepalive_requests 10000000;
#gzip on;
...
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
|
<p>There are many ways to install the <code>NGINX Ingress Controller</code>, however they depend on the environment you are deploying to.
For example, for minikube:</p>
<strong>minikube</strong>
<p>For standard usage:</p>
<p><code>minikube addons enable ingress</code></p>
<p>To check if the ingress controller pods have started, run the following command:</p>
<p><code>$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch</code></p>
<p>You can use <strong>helm</strong> (but only <code>v3</code>):</p>
<p><code>NGINX Ingress controller</code> can be installed via <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> using the chart from the project repository. To install the chart with the release name <code>ingress-nginx</code>:</p>
<pre><code>$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<p>Then try to detect installed version:</p>
<p><code>POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}'); kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version</code></p>
<p>However, the most common way is to install the NGINX Ingress Controller in your Kubernetes cluster using Kubernetes manifests and then modify the nginx-config.yaml ConfigMap.</p>
<p><strong>Summing up:</strong> you have to modify the nginx.conf file this way. You are providing a clear specification and then you can easily debug it.</p>
<p>Read more: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">nginx-ingress-controller-installation-manifest</a>, <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">nginx-ingress-controller</a>.</p>
<p>Even while troubleshooting you have examples of how to check the <code>nginx.conf</code> file.
To check the Ingress Controller you can, for example:</p>
<ul>
<li><p><strong>check the Ingress Resource Events</strong></p>
<pre><code>$ kubectl get ing -n <namespace-of-ingress-resource>
$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>
</code></pre>
</li>
<li><p><strong>check the Ingress Controller Logs</strong></p>
<pre><code> $ kubectl get pods -n <namespace-of-ingress-controller>
$ kubectl logs -n <namespace> nginx-ingress-controller
</code></pre>
</li>
<li><p><strong>check the Nginx Configuration</strong></p>
<pre><code> $ kubectl get pods -n <namespace-of-ingress-controller>
$ kubectl exec -it -n <namespace-of-ingress-controller> nginx-ingress-controller -- cat /etc/nginx/nginx.conf
</code></pre>
</li>
<li><p><strong>check if used Services Exist</strong></p>
<pre><code> $ kubectl get svc --all-namespaces
</code></pre>
</li>
</ul>
<p>See more: <a href="https://kubernetes.github.io/ingress-nginx/troubleshooting/" rel="nofollow noreferrer">ingress-troubleshooting</a>.</p>
|
<p>Is there a command/method to get the time elapsed from the instance a Kubernetes object <strong>creation command is launched</strong> (e.g., <code>kubectl create -f mydeployment-pod.yaml</code>), until the Kubernetes object (deployment/pod…) is fully created and in <strong>running/ready state</strong>.</p>
|
<p>As mentioned by @Sahadat: there is no native way of calculating that. However, you can use <code>kubectl get events</code> to see the <code>metadata.creationTimestamp</code>, <code>firstTimestamp</code> and <code>lastTimestamp</code>. You can either request the output in yaml/json format by executing <code>kubectl get events -o yaml</code> or use <a href="https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns" rel="nofollow noreferrer">custom columns</a> and <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">field selectors</a> to narrow down the output, for example:</p>
<pre><code>kubectl get events -o custom-columns=FirstSeen:.firstTimestamp,LastSeen:.lastTimestamp,Created:.metadata.creationTimestamp
</code></pre>
<p>That of course can be adjusted according to your needs.</p>
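<p>If you only care about the events of a single object, you could additionally narrow the output down with a field selector, e.g. (the pod name is just a placeholder):</p>
<pre><code>kubectl get events \
  --field-selector involvedObject.name=<your-pod-name> \
  -o custom-columns=REASON:.reason,FIRST:.firstTimestamp,LAST:.lastTimestamp
</code></pre>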
|
<p>Is it possible to have both the NodePort default range and a user-defined range on a Kubernetes cluster, or can there be just one range?</p>
<blockquote>
<blockquote>
<p>Can we configure, say, the default range plus a range of user-defined values? For example, the default range is 30000-32767; can we have an additional range from 40000-41000 as well?</p>
</blockquote>
</blockquote>
<p>I would like to retain the default range for other applications in the cluster, but have one range specific to my application.</p>
<p>I have tested assigning a port outside the range and it clearly fails, so the range is hard-defined. I would like to understand if there is any way to have two ranges, or whether the user has to live with either the default range or a single custom range (i.e. two different ranges in a single cluster are not supported).</p>
<pre><code>ubuntu@cluster-master:~$ kubectl expose deployment nginx --type=NodePort --port=80 --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 40000
selector:
run: nginx
type: NodePort
status:
loadBalancer: {}
</code></pre>
<pre><code>ubuntu@cluster-master:~$ kubectl create -f service.yaml
The Service "nginx" is invalid: spec.ports[0].nodePort: Invalid value: 40000: provided port is not in the valid range. The range of valid ports is 30000-32767
</code></pre>
|
<p>Unfortunately, it is not possible that way.</p>
<p>The default range is indeed 30000-32767, but it can be changed by setting the
<code>--service-node-port-range</code> flag on the API server.
Update the file <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and add the line <code>--service-node-port-range=xxxxx-yyyyy</code>.</p>
<p>Be careful, however, not to generate any configuration issues, as the default range was picked to avoid conflicts with anything else on the host machine network.</p>
<p>I think that the best solution for you would be to set a single but wider range.</p>
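<p>For illustration only, the relevant fragment of a kubeadm-style <code>kube-apiserver.yaml</code> static pod manifest could look like this (the widened 30000-41000 range is just an example covering both of the ranges you mentioned):</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-41000
    # ... the rest of the existing flags stay untouched
</code></pre>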
<p>I hope it helps. </p>
|
<p>Is it possible to specify extended resources in Kubelet configuration or would this need to be achieved using something like a daemon pod?</p>
<p>An extended resource in this context refers to this: <a href="https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/</a>. This page specifies that the way to advertise these resources is to send a patch to the nodes <code>/status</code> endpoint like so:</p>
<p><code>curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
http://localhost:8001/api/v1/nodes/<your-node-name>/status</code></p>
|
<blockquote>
<p>Is it possible to specify extended resources in Kubelet configuration
or would this need to be achieved using something like a daemon pod?</p>
</blockquote>
<p>No, <strong>extended resources</strong> cannot be specified just in <strong>Kubelet</strong> configuration. If you want to configure them permanently, you can use <strong><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/" rel="nofollow noreferrer">Device Plugins</a></strong>. </p>
<blockquote>
<p>You can deploy a device plugin as a <code>DaemonSet</code>, as a package for your
node’s operating system, or manually.</p>
<p>The canonical directory <code>/var/lib/kubelet/device-plugins</code> requires
privileged access, so a device plugin must run in a privileged
security context. If you’re deploying a device plugin as a <code>DaemonSet</code>,
<code>/var/lib/kubelet/device-plugins</code> must be mounted as a Volume in the
plugin’s PodSpec.</p>
<p>If you choose the <code>DaemonSet</code> approach you can rely on Kubernetes to:
place the device plugin’s <code>Pod</code> onto <code>Nodes</code>, to restart the daemon <code>Pod</code>
after failure, and to help automate upgrades.</p>
</blockquote>
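<p>A minimal sketch of such a <code>DaemonSet</code> (the image name is purely hypothetical - the important parts are the privileged security context and the <code>/var/lib/kubelet/device-plugins</code> volume mount mentioned above):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: example-device-plugin
  template:
    metadata:
      labels:
        name: example-device-plugin
    spec:
      containers:
      - name: device-plugin
        image: example.com/dongle-device-plugin:latest   # hypothetical image
        securityContext:
          privileged: true
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
</code></pre>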
|
<p>I setup a k8s cluster, which have one master node and one worker node, <code>coredns</code> pod is schedule to worker node and works fine. When I delete worker node, <code>coredns</code> pod is schedule to master node, but have <code>CrashLoopBackOff</code> state, the log of <code>coredns</code> pod as following:</p>
<pre><code>E0118 10:06:02.408608 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0118 10:06:02.408752 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
</code></pre>
<p><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods/" rel="nofollow noreferrer">This article</a> say DNS components must run on a regular cluster node (rather than the Kubernetes master):</p>
<blockquote>
<p>In addition to Kubernetes core components like api-server, scheduler,
controller-manager running on a master machine there are a number of
add-ons which, for various reasons, must run on a regular cluster node
(rather than the Kubernetes master). Some of these add-ons are
critical to a fully functional cluster, such as Heapster, DNS, and UI.</p>
</blockquote>
<p>Can anyone explain why <code>coredns</code> pod can't run on master node?</p>
|
<p>Starting from scratch Kubernetes idea is to deploy pods on worker nodes. Master nodes are a nodes which controls and manages worker nodes.</p>
<p>When the Kubernetes cluster is first set up, a Taint is set on the master node which automatically prevents any pods from being scheduled on this node. You can see this as well as modify this behavior if required. Best practice is not to deploy application workloads on a master server.</p>
<p>Read useful article: <a href="https://medium.com/@kumargaurav1247/no-pod-on-master-node-why-kubernetes-6afb07b05ce6#:%7E:text=The%20reason,workloads%20on%20a%20master%20server." rel="nofollow noreferrer">master-node-scheduling</a>.</p>
<p>However you can force pods/deployments to be deployed on master nodes by using <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a>. For example, give your master node a label say dedicated=master and set nodeSelector for your pod to look for this label.</p>
<p>See more: <a href="https://stackoverflow.com/questions/41999756/how-to-force-pods-deployments-to-master-nodes">deployment-on-master-nodes</a>.</p>
|
<p>Quick question, in Rancher is it possible to use lets-encrypt to sign the k8s TLS certs (etcd, kub-api, etc). I have a compliance requirement to sign my k8s environment with a valid trusted CA chain? </p>
|
<p>Yes, it is actually one of the <a href="https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/#4-choose-your-ssl-configuration" rel="nofollow noreferrer">recommended options</a> for the source of the certificate used for TLS termination at the Rancher server:</p>
<blockquote>
<p>Let’s Encrypt: The Let’s Encrypt option also uses cert-manager.
However, in this case, cert-manager is combined with a special Issuer
for Let’s Encrypt that performs all actions (including request and
validation) necessary for getting a Let’s Encrypt issued cert.</p>
</blockquote>
<p>In the links below you will find a walkthrough showing how to:</p>
<ul>
<li><a href="https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/#5-install-cert-manager" rel="nofollow noreferrer">Install cert-manager</a></li>
<li><a href="https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/#6-install-rancher-with-helm-and-your-chosen-certificate-option" rel="nofollow noreferrer">Install Rancher with Helm and Your Chosen Certificate Option</a></li>
</ul>
<blockquote>
<p>This option uses cert-manager to automatically request and renew Let’s
Encrypt certificates. This is a free service that provides you with a
valid certificate as Let’s Encrypt is a trusted CA.</p>
</blockquote>
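<p>With that option the install command typically looks roughly like this (hostname and e-mail are placeholders, and it assumes the <code>rancher-latest</code> repo and the <code>cattle-system</code> namespace already exist - please double-check the exact chart values against the linked Rancher docs for your version):</p>
<pre><code>helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=letsEncrypt \
  --set [email protected]
</code></pre>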
<p>Please let me know if that helped.</p>
|
<p>I have the same problem as the following:
<a href="https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod">Dual nginx in one Kubernetes pod</a></p>
<p>In my Kubernetes <code>Deployment</code> template, I have 2 containers that are using the same port 80.
I understand that containers within a <code>Pod</code> are actually under the same network namespace, which enables accessing another container in the <code>Pod</code> with <code>localhost</code> or <code>127.0.0.1</code>.
It means containers can't use the same port.</p>
<p>It's very easy to achieve this with the help of <code>docker run</code> or <code>docker-compose</code>, by using <code>8001:80</code> for the first container and <code>8002:80</code> for the second container.</p>
<p>Is there any similar or better solution to do this in Kubernetes Pod ? Without separating these 2 containers into different Pods.</p>
|
<p>Basically I totally agree with <em>@David's</em> and <em>@Patric's</em> comments but I decided to add to it a few more things expanding it into an answer.</p>
<blockquote>
<p>I have the same problem as the following: <a href="https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod">Dual nginx in one Kubernetes pod</a></p>
</blockquote>
<p>And there is already a pretty good answer for that problem in a mentioned thread. From the technical point of view it provides ready solution to your particular use-case however it doesn't question the idea itself.</p>
<blockquote>
<p>It's very easy to achieve this with the help of docker run or
docker-compose, by using 8001:80 for the first container and 8002:80
for the second container.</p>
</blockquote>
<p>It's also very easy to achieve in <strong>Kubernetes</strong>. Simply put both containers in different <code>Pods</code> and you will not have to manipulate the nginx config to make it listen on a port different than <code>80</code>. Note that those two docker containers that you mentioned don't share a single network namespace and that's why they can both listen on port <code>80</code>, which is mapped to different ports on the host system (<code>8001</code> and <code>8002</code>). This is not the case with <strong>Kubernetes</strong> <em>Pods</em>. Read more about <strong>microservices architecture</strong> and especially how it is implemented on <strong>k8s</strong> and you'll notice that placing a few containers in a single <code>Pod</code> is a really rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single <code>Pod</code>. Usually the second container has some complementary function to the main one.</p>
<p>There are <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">3 design patterns for multi-container Pods, commonly used in <strong>Kubernetes</strong></a>: sidecar, ambassador and adapter. Very often all of them are simply referred to as <strong>sidecar containers</strong>.</p>
<p>Note that 2 or more containers coupled together in a single <code>Pod</code> in all above mentioned use cases <em>have totally different functions</em>. Even if you put more than just one container in a single <code>Pod</code> (which is most common), in practice it is never a container of the same type (like two nginx servers listening on different ports in your case). They should be complementary and there should be a good reason why they are put together, why they should start and shut down at the same time and share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in <a href="https://techbeacon.com/enterprise-it/7-container-design-patterns-you-need-know" rel="noreferrer">this</a> article.</p>
<blockquote>
<p>I don't have a very firm use case, because I'm still
very new to Kubernetes and the concept of a cluster. </p>
</blockquote>
<p>So definitely don't go this way if you don't have particular reason for such architecture.</p>
<blockquote>
<p>My initial planning of the cluster is putting all my containers of the system
into a pod. So that I can replicate this pod as many as I want.</p>
</blockquote>
<p>You don't need a single <code>Pod</code> to replicate it. You can have in your cluster a lot of <code>replicaSets</code> (usually managed by <code>Deployments</code>), each of them taking care of running a declared number of replicas of a <code>Pod</code> of a certain kind.</p>
<blockquote>
<p>But according to all the feedback that I have now, it seems like I going
in the wrong direction.</p>
</blockquote>
<p>Yes, this is definitely the wrong direction, but that was actually already said. I'd only like to highlight why this particular direction is wrong. Such an approach is totally against the idea of <em>microservices architecture</em>, which is what <strong>Kubernetes</strong> is designed for. Putting all your infrastructure in a single huge <code>Pod</code> and binding all your containers tightly together makes no sense. Remember that a <code>Pod</code> <em>is the smallest deployable unit in <strong>Kubernetes</strong></em>: its containers are always scheduled, started and deleted together, and there is no way you can manually restart just one container in a <code>Pod</code>.</p>
<blockquote>
<p>I'll review my structure and try with the
suggests you all provided. Thank you, everyone! =)</p>
</blockquote>
<p>This is a good idea :)</p>
|
<p>I'm kinda new to Kubernetes, and I would like to understand what the purpose of kube-proxy is in an Azure AKS/regular cluster.
From what I understand, kube-proxy is updated by the cluster API from the various deployment configurations, and it then updates the iptables stack in the Linux kernel that is responsible for the traffic routes between pods and services.</p>
<p>Am I missing something important?</p>
<p>Thanks!!</p>
|
<p>Basically the <em>kube-proxy</em> component runs on each node to provide network features. It is run as a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><code>DaemonSet</code></a> and its configuration is stored on a Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer"><code>ConfigMap</code></a>. You can edit the kube-proxy <code>DaemonSet</code> or <code>ConfigMap</code> in the <code>kube-system</code> namespace using the commands:</p>
<pre><code>$ kubectl -n kube-system edit daemonset kube-proxy
</code></pre>
<p>or</p>
<pre><code>$ kubectl -n kube-system edit configmap kube-proxy
</code></pre>
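<p>To see which mode your cluster is currently using, you can usually check the <code>mode</code> field of that ConfigMap or grep the kube-proxy logs (this assumes a kubeadm-style setup where the ConfigMap and the <code>k8s-app=kube-proxy</code> label are present):</p>
<pre><code>$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
$ kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier
</code></pre>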
<blockquote>
<p><code>kube-proxy</code> currently supports three different operation modes:</p>
<ul>
<li><strong>User space:</strong> This mode gets its name because the service routing takes place in <code>kube-proxy</code> in the user process space
instead of in the kernel network stack. It is not commonly used as it is slow and outdated.</li>
<li><strong>IPVS (IP Virtual Server)</strong>: Built on the Netfilter framework, IPVS implements Layer-4 load balancing in the Linux kernel, supporting multiple load-balancing algorithms, including least connections and shortest expected delay. This <code>kube-proxy</code> mode became generally available in Kubernetes 1.11, but it requires the Linux kernel to have the IPVS modules loaded. It is also not as widely supported by various Kubernetes networking projects as the iptables mode.</li>
<li><strong>iptables:</strong> This mode uses Linux kernel-level Netfilter rules to configure all routing for Kubernetes Services. This mode is the default for <code>kube-proxy</code> on most platforms. When load balancing for multiple backend pods, it uses unweighted round-robin scheduling.</li>
</ul>
</blockquote>
<p>Take a look: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a>, <a href="https://www.stackrox.com/post/2020/01/kubernetes-networking-demystified/" rel="nofollow noreferrer">kube-proxy-article</a>, <a href="https://stackoverflow.com/questions/52784153/how-to-set-kube-proxy-settings-using-kubectl-on-aks">aks-kube-proxy</a>.</p>
<p>Read also: <a href="https://kubernetes.io/docs/concepts/cluster-administration/proxies/" rel="nofollow noreferrer">proxies-in-kubernetes</a>.</p>
|
<p>I am creating deployments using Kubernetes API from my server. The deployment pod has two containers - one is the main and the other is a sidecar container that checks the health of the pod and calls the server when it becomes healthy.</p>
<p>I am using <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#read-status-deployment-v1-apps" rel="noreferrer">this</a> endpoint to get the deployment. It has deployment status property with the following structure as mention <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#deploymentstatus-v1-apps" rel="noreferrer">here</a>.</p>
<p>I couldn't understand the fields <code>availableReplicas</code>, <code>readyReplicas</code>, <code>replicas</code>, <code>unavailableReplicas</code> and <code>updatedReplicas</code>.</p>
<p>I checked docs of Kubernetes and these SO questions too - <a href="https://stackoverflow.com/q/51085740/11642727">What is the difference between current and available pod replicas in kubernetes deployment?</a> and <a href="https://stackoverflow.com/q/39606728/11642727">Meaning of "available" and "unavailable" in kubectl describe deployment</a> but could not infer the difference between of a pod being ready, running and available. Could somebody please explain the difference between these terms and states?</p>
|
<p>A different kinds of <code>replicas</code> in the Deployment's Status can be described as follows:</p>
<ul>
<li><p><code>Replicas</code> - describes how many pods this deployment should have. It is copied from the spec. This happens asynchronously, so in a very brief interval, you could read a Deployment where the <code>spec.replicas</code> is not equal to <code>status.replicas</code>.</p>
</li>
<li><p><code>availableReplicas</code> - means how many pods have been ready for at least a minimum time (<code>minReadySeconds</code>). This prevents flapping of state.</p>
</li>
<li><p><code>unavailableReplicas</code> - is the total number of pods that should be there, minus the ones that are actually available; this counts pods that still have to be created as well as pods that are not available yet (e.g. are failing, or have not been ready for <code>minReadySeconds</code>).</p>
</li>
<li><p><code>updatedReplicas</code> - the number of pods targeted by this deployment that have the desired (updated) pod template.</p>
</li>
<li><p><code>readyReplicas</code> - the number of pods targeted by this deployment that are in a Ready state.</p>
</li>
</ul>
<p>Let's use the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">official example</a> of a Deployment that creates a ReplicaSet to bring up three nginx Pods:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>The Deployment creates three replicated Pods, indicated by the <code>.spec.replicas</code> field.</p>
<p>Create the Deployment by running the following command:</p>
<pre><code>kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
</code></pre>
<p>Run <code>kubectl get deployments</code> to check if the Deployment was created.</p>
<p>If the Deployment is still being created, the output is similar to the following:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 0 0 1s
</code></pre>
<p>When you inspect the Deployments in your cluster, the following fields are displayed:</p>
<ul>
<li><p><code>NAME</code> - lists the names of the Deployments in the namespace.</p>
</li>
<li><p><code>READY</code> - displays how many replicas of the application are available to your users. It follows the pattern ready/desired.</p>
</li>
<li><p><code>UP-TO-DATE</code> - displays the number of replicas that have been updated to achieve the desired state.</p>
</li>
<li><p><code>AVAILABLE</code> - displays how many replicas of the application are available to your users.</p>
</li>
<li><p><code>AGE</code> - displays the amount of time that the application has been running.</p>
</li>
</ul>
<p>Run the <code>kubectl get deployments</code> again a few seconds later. The output is similar to this:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 18s
</code></pre>
<p>To see the ReplicaSet (<code>rs</code>) created by the Deployment, run <code>kubectl get rs</code>. The output is similar to this:</p>
<pre><code>NAME DESIRED CURRENT READY AGE
nginx-deployment-75675f5897 3 3 3 18s
</code></pre>
<p>ReplicaSet output shows the following fields:</p>
<ul>
<li><p><code>NAME</code> - lists the names of the ReplicaSets in the namespace.</p>
</li>
<li><p><code>DESIRED</code> - displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.</p>
</li>
<li><p><code>CURRENT</code> - displays how many replicas are currently running.</p>
</li>
<li><p><code>READY</code> displays how many replicas of the application are available to your users.</p>
</li>
<li><p><code>AGE</code> - displays the amount of time that the application has been running.</p>
</li>
</ul>
<p>As you can see, the difference between <code>availableReplicas</code> and <code>readyReplicas</code> is subtle: a ready pod only counts as available once it has been ready for at least <code>minReadySeconds</code>, so with the default <code>minReadySeconds</code> of 0 both fields show how many replicas of the application are available to your users.</p>
<p>And when it comes to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="noreferrer">Pod Lifecycle</a> it is important to see the difference between <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="noreferrer">Pod phase</a>, <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="noreferrer">Container states</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="noreferrer">Pod conditions</a>, which all have different meanings. I strongly recommend going through the linked docs in order to get a solid understanding of them.</p>
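<p>Finally, since you are reading the status through the API, you can also display those raw status fields directly, for example:</p>
<pre><code>kubectl get deployment nginx-deployment -o custom-columns=\
REPLICAS:.status.replicas,READY:.status.readyReplicas,AVAILABLE:.status.availableReplicas,UPDATED:.status.updatedReplicas,UNAVAILABLE:.status.unavailableReplicas
</code></pre>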
|
<p>I am using <a href="https://postgres-operator.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://postgres-operator.readthedocs.io/en/latest/</a> and have deployed:</p>
<pre><code>kind: "postgresql"
apiVersion: "acid.zalan.do/v1"
metadata:
name: "acid-databaker-db"
namespace: "dev"
labels:
team: acid
spec:
teamId: "acid"
postgresql:
version: "12"
numberOfInstances: 2
volume:
size: "2Gi"
users:
admin:
- superuser
- createdb
kcadmin: []
databases:
keycloak: kcadmin
allowedSourceRanges:
# IP ranges to access your cluster go here
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 500Mi
</code></pre>
<p>everything is up and running and I can connect to database, but I do not understand this part:</p>
<pre><code> users:
admin:
- superuser
- createdb
kcadmin: []
databases:
keycloak: kcadmin
</code></pre>
<p>According to the <a href="https://postgres-operator.readthedocs.io/en/latest/user/#prepared-databases-with-roles-and-default-privileges" rel="nofollow noreferrer">doc</a>, <code>admin</code> is a role - right?
What about <code>kcadmin</code>? Is it an <code>user</code> or <code>role</code>? If it is an user, what kind of role does the <code>kcadmin</code> has? </p>
|
<p>This is a community wiki answer based on the correct info from the comments and with more explanation and details. </p>
<p>In your use case:</p>
<pre><code> users:
admin:
- superuser
- createdb
kcadmin: []
databases:
keycloak: kcadmin
</code></pre>
<p>we see two users: <code>admin</code> and <code>kcadmin</code>.</p>
<p>User <code>admin</code> has two manifest roles: <code>superuser</code> and <code>createdb</code>.</p>
<p>User <code>kcadmin</code> has no manifest roles. </p>
<p>Manifest roles are defined as a dictionary, with a role name as a key and a list of role options as a value. For a role without any options it is best to supply the empty list <code>[]</code>, like with your <code>kcadmin</code> user. </p>
<p>The following roles can be used: <code>superuser</code>, <code>inherit</code>, <code>login</code>, <code>nologin</code>, <code>createrole</code>, <code>createdb</code>, <code>replication</code> and <code>bypassrls</code>.</p>
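<p>If you later want to give <code>kcadmin</code> explicit options, you would list them the same way as for <code>admin</code>, for example (just an illustration using the options listed above):</p>
<pre><code>users:
  kcadmin:
  - login
  - createdb
</code></pre>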
<p>I hope it helps.</p>
|
<p>I am trying to setup a user that will have permissions to install using Helm 3 in to a specific namespace. (For my CI/CD system.)</p>
<p>For example, if the user tries to run a <code>helm upgrade --install</code> with <code>--namespace=dev</code> then it works just fine. But if they try <code>--namespace=test</code> it will fail.</p>
<p>But I finding my self overwhelmed by the options. When creating a role you have to pick <code>apiGroups</code>, <code>resources</code> and <code>verbs</code>. I see a resource called <code>deployments</code>, but I have read that secret access is also needed. I have done some googling, but most hits are about configuring Helm 2 (with tiller).</p>
<p><strong>What are the minimum Kubernetes permissions needed to install using Helm 3?</strong></p>
|
<p>In Kubernetes, best practice is to ensure that your application is operating in the scope that you have specified, which is why you have to grant a role to the user or to an application-specific service account. Read more about service account permissions <a href="https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions" rel="nofollow noreferrer">in the official Kubernetes docs</a>.</p>
<p>To restrict a user's access to a particular namespace, you can use either the <code>edit</code> or the <code>admin</code> role. If your charts create or interact with Roles and Rolebindings, you'll want to use the <code>admin</code> ClusterRole.</p>
<p>Additionally, you may also create a RoleBinding with <code>cluster-admin</code> access. Granting a user <code>cluster-admin</code> access at the namespace scope provides full control over every resource in the namespace, including the namespace itself.</p>
<p>For this example, we will create a user with the <code>edit</code> Role. First, create the namespace:</p>
<pre><code>$ kubectl create namespace your-namespace
</code></pre>
<p>Now, create a RoleBinding in that namespace, granting the user the <code>edit</code> role.</p>
<pre><code>$ kubectl create rolebinding steve-edit \
--clusterrole edit \
--user steve \
--namespace your-namespace
</code></pre>
<p>This command will create the rolebinding <code>steve-edit</code>. This rolebinding grants the permissions defined in the clusterrole <code>edit</code> to the user <code>steve</code> for the namespace <code>your-namespace</code>.</p>
<p><code>edit</code> is a default clusterrole which allows read/write access to most objects in a namespace. It does not allow viewing or modifying roles or rolebindings.</p>
<p>Take a look: <a href="https://helm.sh/docs/topics/rbac/#example-grant-a-user-readwrite-access-to-a-particular-namespace" rel="nofollow noreferrer">rbac-namespace-helm</a>.</p>
<p>Read about clusterroles: <a href="https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml" rel="nofollow noreferrer">rbac-clusteroles</a>, <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/" rel="nofollow noreferrer">kubernetes-authorization</a>.</p>
<p>You can also grant specific user read/write access at the cluster scope, so you will be able to install helm in any namespace. You have to grant the user either <code>admin</code> or <code>cluster-admin</code> access.</p>
<p>Read more here: <a href="https://helm.sh/docs/topics/rbac/#example-grant-a-user-readwrite-access-at-the-cluster-scope" rel="nofollow noreferrer">cluster-scope-rbac-helm</a>.</p>
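<p>As a sketch for a CI/CD pipeline (the names are placeholders and the <code>dev</code> namespace is taken from your example), you could create a dedicated service account instead of a user and bind the same <code>edit</code> clusterrole to it in just that namespace:</p>
<pre><code>kubectl create namespace dev
kubectl create serviceaccount ci-deployer --namespace dev
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=dev:ci-deployer \
  --namespace=dev
</code></pre>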
|
<p>We have a few k8s clusters in AWS that were created using Kops.<br />
We are trying to improve security by setting up RBAC using service accounts and users.<br />
Unfortunately, some of our devs were given the master/admin certificates.<br />
Would it be possible to regenerate the master certificates without creating a new cluster?</p>
<p>Other best practices related to security would also be appreciated!
Thanks.</p>
|
<p>This is a community wiki answer based on a solution from a similar question. Feel free to expand it.</p>
<p>As already mentioned in the comments, the answer to your question boils down to the below conclusions:</p>
<ul>
<li><p>Currently there is no easy way to <a href="https://kops.sigs.k8s.io/rotate-secrets/" rel="nofollow noreferrer">roll certificates without disruptions</a></p>
</li>
<li><p>You cannot disable certificates as Kubernetes relies on the PKI to authenticate</p>
</li>
<li><p>Rotating secrets should be graceful in the future as stated in <a href="https://github.com/kubernetes/kops/pull/10516" rel="nofollow noreferrer">this PR</a></p>
</li>
</ul>
|
<p>Experiencing an odd issue with KubernetesPodOperator on Airflow 1.1.14.</p>
<p>Essentially for some jobs Airflow is losing contact with the pod it creates.</p>
<p><code>[2021-02-10 07:30:13,657] {taskinstance.py:1150} ERROR - ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))</code></p>
<p>When I check logs in kubernetes with <code>kubectl logs</code> I can see that the job carried on past the connection broken error.</p>
<p>The connection broken error seems to happen exactly 1 hour after the last logs that Airflow pulls from the pod (we do have a 1 hour config on connections), but the pod keeps running happily in the background.</p>
<p>I've seen this behaviour repeatedly, and it tends to happen with longer running jobs with a gap in the log output, but I have no other leads. Happy to update the question if certain specifics are missing.</p>
|
<p>As I have mentioned in the comments section, you can try setting the operator's <code>get_logs</code> parameter to <code>False</code> (the default value is <code>True</code>).</p>
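<p>For illustration, a minimal sketch of how that parameter is passed on Airflow 1.10.x (the task id, image and namespace are assumptions, not taken from your DAG):</p>
<pre><code>from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

long_running_task = KubernetesPodOperator(
    task_id="my_long_running_job",      # hypothetical task id
    name="my-long-running-job",
    namespace="default",
    image="my-registry/my-job:latest",  # hypothetical image
    get_logs=False,                     # do not stream pod logs through the Airflow worker
    dag=dag,
)
</code></pre>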
<p>Take a look: <a href="https://stackoverflow.com/questions/55176707/airflow-worker-connection-broken-incompleteread0-bytes-read">airflow-connection-broken</a>, <a href="https://issues.apache.org/jira/browse/AIRFLOW-3534" rel="nofollow noreferrer">airflow-connection-issue</a> .</p>
|
<p>Based on <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this guide</a>.</p>
<p>I have created this ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: ing-something
namespace: samples
spec:
rules:
- host: myhost.internal
http:
paths:
- backend:
serviceName: my-app
servicePort: 8080
path: /api(/|$)(.*)
tls:
- hosts:
- myhost.internal
secretName: lets
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>When I take a look at the bottom of the generated nginx.config it contains:</p>
<pre><code>## start server myhost.internal
server {
server_name myhost.internal ;
listen 80;
set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
listen 443 ssl http2;
...
location ~* "^/api(/|$)(.*)" {
set $namespace "samples";
set $ingress_name "ing-something";
set $service_name "my-app";
set $service_port "8080";
set $location_path "/api(/|${literal_dollar})(.*)";
...
rewrite "(?i)/api(/|$)(.*)" /$2 break;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
location ~* "^/" {
set $namespace "";
set $ingress_name "";
set $service_name "";
...
}
</code></pre>
<p>I don't understand where this part <code>(?i)</code> is coming from in:</p>
<pre><code> rewrite "(?i)/api(/|$)(.*)" /$2 break;
</code></pre>
<p>and what it means.</p>
<p>Any ideas?</p>
|
<p>This is a community wiki answer based on the info from the comments and posted for better visibility. Feel free to expand it.</p>
<p>As already pointed out in the comments, the path matching <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#use-regex" rel="nofollow noreferrer">is case-insensitive by default</a>:</p>
<blockquote>
<p>Using the <code>nginx.ingress.kubernetes.io/use-regex</code> annotation will
indicate whether or not the paths defined on an Ingress use regular
expressions. The default value is <code>false</code>.</p>
<p>The following will indicate that regular expression paths are being
used:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>The following will indicate that regular expression paths are not
being used:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "false"
</code></pre>
<p>When this annotation is set to <code>true</code>, the case insensitive regular
expression <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer">location modifier</a> will be enforced on ALL paths for a
given host regardless of what Ingress they are defined on.</p>
<p>Additionally, if the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer"><code>rewrite-target</code> annotation</a> is used on any
Ingress for a given host, then the case insensitive regular expression
<a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer">location modifier</a> will be enforced on ALL paths for a given host
regardless of what Ingress they are defined on.</p>
<p>Please read about <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">ingress path matching</a> before using this
modifier.</p>
</blockquote>
<p>And all the specifics regarding the Mode Modifiers can be found <a href="https://www.regular-expressions.info/refmodifiers.html" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Mode modifier syntax consists of two elements that differ among regex
flavors. Parentheses and a question mark are used to add the modifier
to the regex. Depending on its position in the regex and the regex
flavor it may affect the whole regex or part of it. If a flavor
supports at least one modifier syntax, then it will also support one
or more letters that can be used inside the modifier to toggle
specific modes. If it doesn’t, “n/a” is indicated for all letters for
that flavors.</p>
</blockquote>
<p>Notice the <a href="https://www.regular-expressions.info/modifiers.html" rel="nofollow noreferrer">Specifying Modes Inside The Regular Expression</a> for case-insensitivity:</p>
<pre><code>(?i) Turn on case insensitivity. (?i)a matches a and A.
</code></pre>
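<p>In practice this simply means that requests differing only in case hit the same location block. For example (the host is taken from your Ingress, the path is only illustrative):</p>
<pre><code># both requests match the rewritten location because of (?i)
curl -k https://myhost.internal/api/health
curl -k https://myhost.internal/API/health
</code></pre>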
|
<p>I am trying to use Minikube and Docker to understand the concepts of Kubernetes architecture.</p>
<p>I created a Spring Boot application with a Dockerfile, created a tag and pushed it to Docker Hub.</p>
<p>In order to deploy the image in the K8s cluster, I issued the commands below:</p>
<pre><code># deployed the image
$ kubectl run <deployment-name> --image=<username/imagename>:<version> --port=<port the app runs>
# exposed the port as nodeport
$ kubectl expose deployment <deployment-name> --type=NodePort
</code></pre>
<p>Everything worked and I am able to see 1 pod running with <code>kubectl get pods</code>.</p>
<p>The Docker image I pushed to Docker Hub didn't have any deployment YAML file.</p>
<p>The command below produced a YAML output.</p>
<p><strong>Does the <code>kubectl</code> command create a deployment YAML file out of the box?</strong></p>
<pre><code> $ kubectl get deployments --output yaml
</code></pre>
<pre><code>apiVersion: v1
items:
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2019-12-24T14:59:14Z"
generation: 1
labels:
run: hello-service
name: hello-service
namespace: default
resourceVersion: "76195"
selfLink: /apis/apps/v1/namespaces/default/deployments/hello-service
uid: 90950172-1c0b-4b9f-a339-b47569366f4e
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello-service
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-service
spec:
containers:
- image: thirumurthi/hello-service:0.0.1
imagePullPolicy: IfNotPresent
name: hello-service
ports:
- containerPort: 8800
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2019-12-24T14:59:19Z"
lastUpdateTime: "2019-12-24T14:59:19Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2019-12-24T14:59:14Z"
lastUpdateTime: "2019-12-24T14:59:19Z"
message: ReplicaSet "hello-service-75d67cc857" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
|
<p>I think the easiest way to understand what's going on under the hood when you create kubernetes resources using <strong>imperative commands</strong> (versus the <strong>declarative approach</strong> of writing and applying yaml definition files) is to run a simple example with 2 additional flags:</p>
<pre><code>--dry-run
</code></pre>
<p>and</p>
<pre><code>--output yaml
</code></pre>
<p>Names of these flags are rather self-explanatory so I think there is no further need for comment explaining what they do. You can simply try out the below examples and you'll see the effect:</p>
<pre><code>kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
</code></pre>
<p>As you can see, it produces the appropriate yaml manifest without applying it or creating an actual deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: nginx-example
name: nginx-example
spec:
replicas: 1
selector:
matchLabels:
run: nginx-example
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: nginx-example
spec:
containers:
- image: nginx:latest
name: nginx-example
ports:
- containerPort: 80
resources: {}
status: {}
</code></pre>
<p>Same with <code>expose</code> command:</p>
<pre><code>kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml
</code></pre>
<p>produces the following output:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: nginx-example
name: nginx-example
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: nginx-example
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>And now the coolest part. You can use simple output redirection:</p>
<pre><code>kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml > nginx-example-deployment.yaml
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml > nginx-example-nodeport-service.yaml
</code></pre>
<p>to save generated <code>Deployment</code> and <code>NodePort Service</code> definitions so you can further modify them if needed and apply using either <code>kubectl apply -f filename.yaml</code> or <code>kubectl create -f filename.yaml</code>.</p>
<p>Btw. <code>kubectl run</code> and <code>kubectl expose</code> are generator-based commands and, as you may have noticed when creating your deployment (you probably got the message: <code>kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.</code>), they use the <code>--generator</code> flag. If you don't specify it explicitly, it gets the default value, which for <code>kubectl run</code> is <code>--generator=deployment/apps.v1beta1</code>, so by default it creates a <code>Deployment</code>. But you can modify it by providing <code>--generator=run-pod/v1 nginx-example</code>, and instead of a <code>Deployment</code> it will create a single <code>Pod</code>. When we go back to our previous example it may look like this:</p>
<pre><code>kubectl run --generator=run-pod/v1 nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
</code></pre>
<p>I hope this answered your question and clarified a bit the mechanism of creating kubernetes resources using <strong>imperative commands</strong>.</p>
|
<p>I have an ingress something like below</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: staging-ingress-rules-login
annotations:
kubernetes.io/ingress.class: 'nginx'
nginx.ingress.kubernetes.io/proxy-body-size: '0'
spec:
rules:
- host: staging.mysite.com
http:
paths:
- path: /
backend:
serviceName: login
servicePort: 80
- path: /login/info
backend:
serviceName: login
servicePort: 80
</code></pre>
<p>and the nginx.conf for this is something like this</p>
<pre><code>server {
location / {
---------
---------
}
location /login/info {
---------
-------
}
}
</code></pre>
<p>I would like to add a rate limit for the location /login/info. I tried <code>location-snippet</code>, but it creates a nested location inside /login/info and requests to this API return 404. Is there any way to do this?</p>
|
<p>This is a community wiki answer, feel free to edit and expand it.</p>
<p>As we are lacking some details regarding your configuration, I will explain how you can deal with this in general.</p>
<p>You can use the below annotation in order to add a custom location block:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
limit_req zone=authentication_ratelimit nodelay;
</code></pre>
<p>And then use a <strong>map</strong>, for example:</p>
<pre><code>http-snippets: |
map $uri $with_limit_req {
default 0;
"~*^/authenticate$" 1;
}
map $with_limit_req $auth_limit_req_key {
default '';
'1' $binary_remote_addr; # the limit key
}
limit_req_zone $auth_limit_req_key zone=authentication_ratelimit:10m rate=1r/s;
</code></pre>
<p><a href="http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone" rel="nofollow noreferrer">Notice that</a>:</p>
<blockquote>
<pre><code>Syntax: limit_req_zone key zone=name:size rate=rate [sync];
Default: —
Context: http
</code></pre>
<p>Sets parameters for a shared memory zone that will keep states for
various keys. In particular, the state stores the current number of
excessive requests. The key can contain text, variables, and their
combination. Requests with an empty key value are not accounted.</p>
</blockquote>
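<p>Putting it together, one way to scope the limit to a single path is to move that path into its own Ingress and attach the snippet there. Below is a rough sketch based on your manifest (the zone itself still has to be defined globally via the http-snippets shown above):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-ingress-rules-login-info
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req zone=authentication_ratelimit nodelay;
spec:
  rules:
  - host: staging.mysite.com
    http:
      paths:
      - path: /login/info
        backend:
          serviceName: login
          servicePort: 80
</code></pre>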
|
<p>I would like to transform a <code>ClusterRoleBinding</code> in a <code>RoleBinding</code> using <code>kustomize-v4.0.5</code>, and also set the namespace field for the <code>RoleBinding</code> and in an additional <code>Deployment</code> resource with the same value.</p>
<p>I succeed in doing that using files below:</p>
<pre><code>cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
- op: replace
path: /kind
value: RoleBinding
- op: add
path: /metadata/namespace
value:
<NAMESPACE>
target:
group: rbac.authorization.k8s.io
kind: ClusterRoleBinding
name: manager-rolebinding
version: v1
resources:
- role_binding.yaml
- service_account.yaml
namespace: <NAMESPACE>
EOF
cat <<EOF > role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: manager-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
selector:
matchLabels:
control-plane: controller-manager
replicas: 1
template:
metadata:
labels:
control-plane: controller-manager
spec:
containers:
- command:
- /manager
args:
- --enable-leader-election
image: controller:latest
name: manager
EOF
cat <<EOF > service_account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: controller-manager
namespace: system
EOF
</code></pre>
<p>However, in the above example, I have to hardcode <code><NAMESPACE></code> in several places of my <code>kustomization.yaml</code>. Is there a way to change the namespace value for these fields without using sed, in 'pure' kustomize, and without having to manually change the values in <code>kustomization.yaml</code>?</p>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>I have analyzed your issue and came to the following conclusions.</p>
<p>TL;DR: Unfortunately, the answer is: "not possible the way you want it". The current workaround you are using with <code>sed</code> is the way to go. At the end of the day, even if a bit atypical, it is a practical solution.</p>
<p>First of all, the whole point of Kustomize is to apply different configurations, from files or rather from directories containing files, in order to customize deployments for multiple environments or environment-like targets. As such, if you know which values you would like to apply, then you only have to include them in the corresponding overlay directory and apply whichever one you want, for example as part of the "development" and "production" overlays <a href="https://kubectl.docs.kubernetes.io/guides/introduction/kustomize/#2-create-variants-using-overlays" rel="nofollow noreferrer">included here</a>. That means hardcoding the namespace for each overlay.</p>
<p>But that raises the question: where do you get the namespace value from, and, as a consequence, how dynamic is it? If it is not dynamic at all and is simply one of a fixed set of values, it is just a matter of using the approach I just described.</p>
<p>Let's assume it is fully dynamic:</p>
<p>There is a command for dynamic substitution of values: <code>kustomize edit set</code> but unfortunately it only takes these parameters: <code>image</code>, <code>label</code>, <code>nameprefix</code>, <code>namespace</code>, <code>namesuffix</code>, <code>replicas</code> so we cannot use it here (See the help for that command for more information). This is also an indication that dynamic substitution for arbitrary values has not been implemented yet.</p>
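<p>For completeness, the supported dynamic case looks like this (a sketch, assuming it is run from the directory containing kustomization.yaml):</p>
<pre><code>$ cd path/to/overlay
$ kustomize edit set namespace my-namespace
$ kustomize build .
</code></pre>
<p>Note that this only rewrites the top-level <code>namespace:</code> field; the value embedded in the <code>patchesJson6902</code> patch still has to be changed by other means, which is exactly why the <code>sed</code> workaround remains necessary.</p>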
<p>I have also investigated other approaches and I can think of no "pure" Kustomize solution.</p>
|
<p>I have an EKS cluster which has the following files:</p>
<ol>
<li>MySQL deployment file</li>
<li>PVC claim file</li>
<li>StorageClass file</li>
</ol>
<p>When I apply all three files, an EBS volume is dynamically created. Then I make an entry in a MySQL table and try to delete and recreate the pod.</p>
<p>Now everything in the EBS volume gets deleted and there is no data.</p>
<p>I am trying to figure out how to make the data persistent when the pod or deployment gets deleted and started again.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
application: mysql
replicas: 1
template:
metadata:
labels:
application: mysql
spec:
containers:
- name: mysql
image: vjk94/data-admin2:version2
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-data
volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: mysql-data</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-data
spec:
storageClassName: ebs-sc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: WaitForFirstConsumer</code></pre>
</div>
</div>
</p>
|
<p>PVCs have a lifetime independent of pods. The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PV</a> still exists because its reclaim policy is set to <code>Retain</code>, in which case it won't be deleted even if the PVC is gone. However, when you start your pod again, a new PV and PVC are created; that is why you are seeing an empty volume. With the <code>Retain</code> reclaim policy, when the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. See: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">pv-reclaim-policy-retain</a>.
However:</p>
<blockquote>
<p>An administrator can manually reclaim the volume with the following
steps.</p>
<ol>
<li>Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or
Cinder volume) still exists after the PV is deleted.</li>
<li>Manually clean up the data on the associated storage asset accordingly.</li>
<li>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the
storage asset definition.</li>
</ol>
</blockquote>
<p>Read more: <a href="https://stackoverflow.com/questions/58418093/state-of-pv-pvc-after-pod-is-deleted-in-kubernetes">pv-reclaim-policy-example</a>, <a href="https://stackoverflow.com/questions/60606141/can-we-assign-pvc-to-a-pv-if-it-is-in-released-state">pv-reclaim-policy-retain-example</a>, <a href="https://letsdocloud.com/?p=854" rel="nofollow noreferrer">retain-reclaim-policy-fetch-data</a>.</p>
<p><strong>EDIT:</strong></p>
<p>If you add a <code>subPath: mysql</code> parameter to the volume mount in the deployment, everything will work properly. Data will persist even if you delete the deployment and start it again.</p>
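<p>For illustration, the change amounts to something like this in the deployment's volume mount (a sketch based on your manifest):</p>
<pre><code>        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data
          subPath: mysql
</code></pre>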
|
<p><strong>ENVIRONMENT:</strong></p>
<pre><code>Kubernetes version: v1.16.3
OS: CentOS 7
Kernel: Linux k8s02-master01 3.10.0-1062.4.3.el7.x86_64
</code></pre>
<p><strong>WHAT HAPPENED:</strong></p>
<p>I have a Wordpress Deployment running a container built from a custom Apache/Wordpress image. I tried to upload plugins using the Wordpress admin, but the plugin folders default to 777 permission. Plugin folders ONLY, not their files. I noticed that <code>/var/www/html</code> is set to 777 by default, so I tried to manually <code>chmod 755 /var/www/html</code> in the container context... It works and new plugin folders default to 755, but it's not persistent. I tried to chmod in the Dockerfile, but it does not work: <code>/var/www/html</code> still defaults to 777. The same issue occurs when I use the official Wordpress image instead of my Dockerfile.</p>
<p>Is it possible to default <code>/var/www/html</code> to 755 permission?</p>
<p><strong>DOCKERFILE (wordpress-test:5.2.4-apache):</strong></p>
<pre><code>FROM wordpress:5.2.4-apache
RUN sed -i 's/Listen 80/Listen 8080/g' /etc/apache2/ports.conf;
RUN sed -i 's/:80/:8080/g' /etc/apache2/sites-enabled/000-default.conf;
RUN sed -i 's/#ServerName www.example.com/ServerName localhost/g' /etc/apache2/sites-enabled/000-default.conf;
RUN /bin/bash -c 'ls -la /var/www; chmod 755 /var/www/html; ls -la /var/www'
EXPOSE 8080
CMD ["apache2-foreground"]
</code></pre>
<p><strong>DOCKERFILE BUILD LOGS:</strong></p>
<pre><code>Step 8/10 : RUN /bin/bash -c 'ls -la /var/www; chmod 755 /var/www/html; ls -la /var/www';
---> Running in 7051d46dd9f3
total 12
drwxr-xr-x 1 root root 4096 Oct 17 14:22 .
drwxr-xr-x 1 root root 4096 Oct 17 14:22 ..
drwxrwxrwx 2 www-data www-data 4096 Oct 17 14:28 html
total 12
drwxr-xr-x 1 root root 4096 Oct 17 14:22 .
drwxr-xr-x 1 root root 4096 Oct 17 14:22 ..
drwxr-xr-x 2 www-data www-data 4096 Oct 17 14:28 html
</code></pre>
<p>Checked result in the container context :</p>
<pre><code>$ kubectl exec -it <POD_NAME> -n development -- sh
(inside the container) $ ls -la /var/www
total 12
drwxr-xr-x. 1 root root 4096 Oct 17 14:22 .
drwxr-xr-x 1 root root 4096 Oct 17 14:22 ..
drwxrwxrwx 5 www-data www-data 4096 Dec 17 05:40 html
</code></pre>
<p><code>/var/www/html</code> still defaults to 777.</p>
<p><strong>DEPLOYMENT</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
selector:
matchLabels:
app: blog
tier: wordpress
replicas: 4
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 2
template:
metadata:
labels:
app: blog
tier: wordpress
spec:
volumes:
- name: blog-wordpress
persistentVolumeClaim:
claimName: blog-wordpress
containers:
- name: blog-wordpress
# image: wordpress:5.2.4-apache
image: wordpress-test:5.2.4-apache
securityContext:
runAsUser: 33
runAsGroup: 33
allowPrivilegeEscalation: false
capabilities:
add:
- "NET_ADMIN"
- "NET_BIND_SERVICE"
- "SYS_TIME"
resources:
requests:
cpu: "250m"
memory: "64Mi"
limits:
cpu: "500m"
memory: "128Mi"
ports:
- name: liveness-port
containerPort: 8080
readinessProbe:
initialDelaySeconds: 15
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 15
periodSeconds: 15
failureThreshold: 5
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 10
periodSeconds: 15
failureThreshold: 5
env:
# Database
- name: WORDPRESS_DB_HOST
value: blog-mysql
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: blog-mysql
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: blog-mysql
key: password
- name: WORDPRESS_TABLE_PREFIX
value: wp_
- name: WORDPRESS_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-key
- name: WORDPRESS_SECURE_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-key
- name: WORDPRESS_LOGGED_IN_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-key
- name: WORDPRESS_NONCE_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-key
- name: WORDPRESS_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-salt
- name: WORDPRESS_SECURE_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-salt
- name: WORDPRESS_LOGGED_IN_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-salt
- name: WORDPRESS_NONCE_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-salt
- name: WORDPRESS_CONFIG_EXTRA
value: |
define('WPLANG', 'fr_FR');
define('WP_CACHE', false);
define('WP_MEMORY_LIMIT', '64M');
volumeMounts:
- name: blog-wordpress
mountPath: "/var/www/html/wp-content"
</code></pre>
<p><strong>/etc/apache2/apache2.conf</strong></p>
<pre><code># This is the main Apache server configuration file. It contains the
# configuration directives that give the server its instructions.
# See http://httpd.apache.org/docs/2.4/ for detailed information about
# the directives and /usr/share/doc/apache2/README.Debian about Debian specific
# hints.
#
#
# Summary of how the Apache 2 configuration works in Debian:
# The Apache 2 web server configuration in Debian is quite different to
# upstream's suggested way to configure the web server. This is because Debian's
# default Apache2 installation attempts to make adding and removing modules,
# virtual hosts, and extra configuration directives as flexible as possible, in
# order to make automating the changes and administering the server as easy as
# possible.
# It is split into several files forming the configuration hierarchy outlined
# below, all located in the /etc/apache2/ directory:
#
# /etc/apache2/
# |-- apache2.conf
# | `-- ports.conf
# |-- mods-enabled
# | |-- *.load
# | `-- *.conf
# |-- conf-enabled
# | `-- *.conf
# `-- sites-enabled
# `-- *.conf
#
#
# * apache2.conf is the main configuration file (this file). It puts the pieces
# together by including all remaining configuration files when starting up the
# web server.
#
# * ports.conf is always included from the main configuration file. It is
# supposed to determine listening ports for incoming connections which can be
# customized anytime.
#
# * Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/
# directories contain particular configuration snippets which manage modules,
# global configuration fragments, or virtual host configurations,
# respectively.
#
# They are activated by symlinking available configuration files from their
# respective *-available/ counterparts. These should be managed by using our
# helpers a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf. See
# their respective man pages for detailed information.
#
# * The binary is called apache2. Due to the use of environment variables, in
# the default configuration, apache2 needs to be started/stopped with
# /etc/init.d/apache2 or apache2ctl. Calling /usr/bin/apache2 directly will not
# work with the default configuration.
# Global configuration
#
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE! If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the Mutex documentation (available
# at <URL:http://httpd.apache.org/docs/2.4/mod/core.html#mutex>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
#ServerRoot "/etc/apache2"
#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#
#Mutex file:${APACHE_LOCK_DIR} default
#
# The directory where shm and other runtime files will be stored.
#
DefaultRuntimeDir ${APACHE_RUN_DIR}
#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}
#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On
#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100
#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 5
# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog ${APACHE_LOG_DIR}/error.log
#
# LogLevel: Control the severity of messages logged to the error_log.
# Available values: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the log level for particular modules, e.g.
# "LogLevel info ssl:warn"
#
LogLevel warn
# Include module configuration:
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf
# Include list of ports to listen on
Include ports.conf
# Sets the default security model of the Apache2 HTTPD server. It does
# not allow access to the root filesystem outside of /usr/share and /var/www.
# The former is used by web applications packaged in Debian,
# the latter may be used for local directories served by the web server. If
# your system is serving content from a sub-directory in /srv you must allow
# access here, or in any related virtual host.
<Directory />
Options FollowSymLinks
AllowOverride None
Require all denied
</Directory>
<Directory /usr/share>
AllowOverride None
Require all granted
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
#<Directory /srv/>
# Options Indexes FollowSymLinks
# AllowOverride None
# Require all granted
#</Directory>
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#
AccessFileName .htaccess
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<FilesMatch "^\.ht">
Require all denied
</FilesMatch>
#
# The following directives define some format nicknames for use with
# a CustomLog directive.
#
# These deviate from the Common Log Format definitions in that they use %O
# (the actual bytes sent including headers) instead of %b (the size of the
# requested file), because the latter makes it impossible to detect partial
# requests.
#
# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.
# Use mod_remoteip instead.
#
LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
# Include of directories ignores editors' and dpkg's backup files,
# see README.Debian for details.
# Include generic snippets of statements
IncludeOptional conf-enabled/*.conf
# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf
</code></pre>
<p><strong>/etc/apache2/ports.conf</strong></p>
<pre><code># If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf
Listen 8080
<IfModule ssl_module>
Listen 443
</IfModule>
<IfModule mod_gnutls.c>
Listen 443
</IfModule>
</code></pre>
<p><strong>/etc/apache2/envvars</strong></p>
<pre><code># envvars - default environment variables for apache2ctl
# this won't be correct after changing uid
unset HOME
# for supporting multiple apache2 instances
if [ "${APACHE_CONFDIR##/etc/apache2-}" != "${APACHE_CONFDIR}" ] ; then
SUFFIX="-${APACHE_CONFDIR##/etc/apache2-}"
else
SUFFIX=
fi
# Since there is no sane way to get the parsed apache2 config in scripts, some
# settings are defined via environment variables and then used in apache2ctl,
# /etc/init.d/apache2, /etc/logrotate.d/apache2, etc.
: ${APACHE_RUN_USER:=www-data}
export APACHE_RUN_USER
: ${APACHE_RUN_GROUP:=www-data}
export APACHE_RUN_GROUP
# temporary state file location. This might be changed to /run in Wheezy+1
: ${APACHE_PID_FILE:=/var/run/apache2$SUFFIX/apache2.pid}
export APACHE_PID_FILE
: ${APACHE_RUN_DIR:=/var/run/apache2$SUFFIX}
export APACHE_RUN_DIR
: ${APACHE_LOCK_DIR:=/var/lock/apache2$SUFFIX}
export APACHE_LOCK_DIR
# Only /var/log/apache2 is handled by /etc/logrotate.d/apache2.
: ${APACHE_LOG_DIR:=/var/log/apache2$SUFFIX}
export APACHE_LOG_DIR
## The locale used by some modules like mod_dav
: ${LANG:=C}
export LANG
## Uncomment the following line to use the system default locale instead:
#. /etc/default/locale
export LANG
## The command to get the status for 'apache2ctl status'.
## Some packages providing 'www-browser' need '--dump' instead of '-dump'.
#export APACHE_LYNX='www-browser -dump'
## If you need a higher file descriptor limit, uncomment and adjust the
## following line (default is 8192):
#APACHE_ULIMIT_MAX_FILES='ulimit -n 65536'
## If you would like to pass arguments to the web server, add them below
## to the APACHE_ARGUMENTS environment.
#export APACHE_ARGUMENTS=''
## Enable the debug mode for maintainer scripts.
## This will produce a verbose output on package installations of web server modules and web application
## installations which interact with Apache
#export APACHE2_MAINTSCRIPT_DEBUG=1
</code></pre>
<p><strong>/etc/apache2/conf-enabled/docker-php.conf</strong></p>
<pre><code><FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
DirectoryIndex disabled
DirectoryIndex index.php index.html
<Directory /var/www/>
Options -Indexes
AllowOverride All
</Directory>
</code></pre>
<p><strong>/etc/apache2/sites-enabled/000-default.conf</strong></p>
<pre><code><VirtualHost *:8080>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
ServerName localhost
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>
</code></pre>
|
<p>In <strong>kubernetes</strong> you can change permissions of the mounted volume with help of the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainers</a>. Your deployment may look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
selector:
matchLabels:
app: blog
tier: wordpress
replicas: 4
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 2
template:
metadata:
labels:
app: blog
tier: wordpress
spec:
volumes:
- name: blog-wordpress
persistentVolumeClaim:
claimName: blog-wordpress
initContainers:
- name: permission-fix
image: busybox
command: ["/bin/chmod","-R","755", "/var/www/html"]
volumeMounts:
- name: blog-wordpress
mountPath: /var/www/html/wp-content
containers:
- name: blog-wordpress
# image: wordpress:5.2.4-apache
image: wordpress-test:5.2.4-apache
securityContext:
runAsUser: 33
runAsGroup: 33
allowPrivilegeEscalation: false
capabilities:
add:
- "NET_ADMIN"
- "NET_BIND_SERVICE"
- "SYS_TIME"
resources:
requests:
cpu: "250m"
memory: "64Mi"
limits:
cpu: "500m"
memory: "128Mi"
ports:
- name: liveness-port
containerPort: 8080
readinessProbe:
initialDelaySeconds: 15
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 15
periodSeconds: 15
failureThreshold: 5
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 10
periodSeconds: 15
failureThreshold: 5
env:
# Database
- name: WORDPRESS_DB_HOST
value: blog-mysql
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: blog-mysql
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: blog-mysql
key: password
- name: WORDPRESS_TABLE_PREFIX
value: wp_
- name: WORDPRESS_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-key
- name: WORDPRESS_SECURE_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-key
- name: WORDPRESS_LOGGED_IN_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-key
- name: WORDPRESS_NONCE_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-key
- name: WORDPRESS_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-salt
- name: WORDPRESS_SECURE_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-salt
- name: WORDPRESS_LOGGED_IN_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-salt
- name: WORDPRESS_NONCE_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-salt
- name: WORDPRESS_CONFIG_EXTRA
value: |
define('WPLANG', 'fr_FR');
define('WP_CACHE', false);
define('WP_MEMORY_LIMIT', '64M');
volumeMounts:
- name: blog-wordpress
mountPath: "/var/www/html/wp-content"
</code></pre>
<p><strong>EDIT:</strong>
However, keep in mind that you can only change permissions of the mounted folder, not its parent folder(s). So in the example above you can use:</p>
<pre><code>command: ["/bin/chmod","-R","755", "/var/www/html"]
</code></pre>
<p>but it will only change the permissions of the <code>/var/www/html/wp-content</code> directory.
If you can prepare your volume so that it contains the <code>/var/www/html</code> directory and can be mounted as such, you'll be able to set its permissions.</p>
<p>Let me know if it helped.</p>
|
<p>I have one node pool named "<strong>application pool</strong>" which uses the node VM size <strong>Standard_D2a_v4</strong>. This node pool is set to "<strong>Autoscaling</strong>".
Is there any solution where I can taint the whole node pool in Azure, in order to restrict pods from being scheduled on that node pool?</p>
|
<p>Taints can be setup with the <code>[--node-taints]</code> flag only when you are adding a node pool with <a href="https://learn.microsoft.com/en-us/cli/azure/ext/aks-preview/aks/nodepool?view=azure-cli-latest#ext_aks_preview_az_aks_nodepool_add" rel="nofollow noreferrer">az aks nodepool add</a> command:</p>
<blockquote>
<p>Add a node pool to the managed Kubernetes cluster.</p>
</blockquote>
<pre><code>az aks nodepool add --cluster-name
--name
--resource-group
[--node-taints]
</code></pre>
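<p>For example, a minimal sketch (the resource group, cluster name, pool name and the taint itself are assumptions):</p>
<pre><code>az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name taintnp \
    --node-count 1 \
    --node-taints sku=gpu:NoSchedule
</code></pre>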
<p>However, you <a href="https://learn.microsoft.com/en-us/cli/azure/ext/aks-preview/aks/nodepool?view=azure-cli-latest" rel="nofollow noreferrer">cannot add taints to already existing node pool</a>:</p>
<blockquote>
<p>You can't change the node taints through CLI after the node pool is
created.</p>
</blockquote>
<p>A very similar topic is being discussed in <a href="https://github.com/Azure/AKS/issues/1402" rel="nofollow noreferrer">this open thread</a>.</p>
<p>So currently it is not possible to set taints to an existing node pool on AKS. But you can set them up while adding a new node pool to the managed cluster.</p>
|
<p>I have defined a new service with a ClusterIP.</p>
<pre><code>[ciuffoly@master-node ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h
test-reg ClusterIP 10.102.196.35 <none> 5000/TCP 58m
test-web LoadBalancer 10.108.151.13 192.168.1.125 80:30001/TCP 73m
</code></pre>
<p>The pod is running on worker-node1 and I can connect to this service using the IP address plumbed on worker-node1's Ethernet interface.</p>
<pre><code>[ciuffoly@worker-node1 ~]$ ip addr show|grep "192\.168\.1\."
inet 192.168.1.20/24 brd 192.168.1.255 scope global noprefixroute ens33
[ciuffoly@worker-node1 ~]$ telnet 192.168.1.20 5000
Connected to 192.168.1.20.
Escape character is '^]'.
^]
telnet> q
[ciuffoly@master-node ~]$ telnet 192.168.1.20 5000
Connected to 192.168.1.20.
Escape character is '^]'.
^]
telnet> q
</code></pre>
<p>But I cannot connect to this service with the ClusterIP</p>
<pre><code>[ciuffoly@master-node ~]$ telnet 10.102.196.35 5000
Trying 10.102.196.35...
^C
</code></pre>
<p>Following the answers, I have also tested NodePort, but I still have the same problem.</p>
<pre><code>[ciuffoly@master-node ~]$ kubectl get services|grep reg
test-reg NodePort 10.111.117.116 <none> 5000:30030/TCP 5m41s
[ciuffoly@master-node ~]$ kubectl delete svc test-reg
service "test-reg" deleted
[ciuffoly@master-node ~]$ netstat -an|grep 30030
[ciuffoly@master-node ~]$ kubectl apply -f myreg.yaml
deployment.apps/test-reg unchanged
service/test-reg created
[ciuffoly@master-node ~]$ netstat -an|grep 30030
tcp 0 0 0.0.0.0:30030 0.0.0.0:* LISTEN
</code></pre>
<p>This does not work</p>
<pre><code>[ciuffoly@master-node ~]$ telnet master-node 30030
Trying 192.168.1.10...
^C
</code></pre>
<p>This works</p>
<pre><code>[ciuffoly@master-node ~]$ telnet worker-node1 30030
Trying 192.168.1.20...
Connected to worker-node1.
Escape character is '^]'.
^]
telnet> q
Connection closed.
</code></pre>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already confirmed by andrea ciuffoli, switching from Flannel to Calico solved the issue.</p>
<p>Flannel is a very simple overlay network that satisfies the Kubernetes requirements.</p>
<p>On the other hand, Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with <a href="https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations" rel="nofollow noreferrer">cloud provider CNIs</a> to provide network policy enforcement.</p>
<p>It's hard to say what was the sole reason behind the final solution but you can find some details about <a href="https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/" rel="nofollow noreferrer">Comparing Kubernetes CNI Providers: Flannel, Calico, Canal, and Weave</a>.</p>
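<p>For reference, on a kubeadm-based cluster the switch typically boils down to removing the Flannel manifest and applying the Calico one. A rough sketch (the exact manifest URLs and versions depend on your setup, so treat them as assumptions and verify the pod CIDR first):</p>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>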
|
<p>To redirect any HTTP traffic to HTTPS on tls enabled hosts, I have added the below annotation to my ingress resources</p>
<pre><code>nignx.ingress.kubernetes.io/force-ssl-redirect: true
</code></pre>
<p>With this when I curl the host in question, I get redirected as expected</p>
<p><a href="https://i.stack.imgur.com/bVR9N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bVR9N.png" alt="enter image description here" /></a></p>
<p>But when I use a browser, the request to HTTP times out.</p>
<p>Now, I am not sure if it's something I am doing wrong in the Nginx ingress configuration, since curl works.
Any pointers, please? Thanks!</p>
<p>complete annotaiotns:</p>
<pre><code> annotations:
kubernetes.io/ingress.class: nginx-ingress
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
</code></pre>
<p>rules</p>
<pre><code> rules:
- host: hostX
http:
paths:
- backend:
serviceName: svcX
servicePort: 8080
path: /
- host: hostY
http:
paths:
- backend:
serviceName: svcX
servicePort: 8080
path: /
tls:
- hosts:
- hostX
- hosts:
- hostY
secretName: hostY-secret-tls
</code></pre>
<p>Note:</p>
<ol>
<li>The curl mentioned is to hostY in the rule above.</li>
<li>HTTPS to hostY via a browser works, so the cert is a valid one.</li>
</ol>
|
<p>As @mdaniel has mentioned, your snippet shows <code>nignx.ingress.kubernetes.io/force-ssl-redirect: true</code>, but annotation values should be strings. Notice that in your "complete" config, you have both <code>force-ssl-redirect: "true"</code> <em>(now correctly a string)</em> and <code>ssl-redirect: "false"</code>.</p>
<p>Simply remove the annotation <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> and leave just <code>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"</code>.
Also enable <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="noreferrer"><code>--enable-ssl-passthrough</code></a>. This is required to enable passthrough backends in Ingress objects.</p>
<p>Your annotation should look like:</p>
<pre><code>kubernetes.io/ingress.class: nginx-ingress
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
</code></pre>
<p>If you define hosts under the TLS section, they are going to be accessible only over HTTPS; HTTP requests are redirected to HTTPS. That is why you cannot access the host via HTTP. Also, you have to specify a secret for host <code>hostX</code>, otherwise the default certificate will be used for the Ingress. Alternatively, if you don't want to connect to host <code>hostX</code> via HTTPS, simply create a different Ingress without a TLS section for it.</p>
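<p>For the last point, a rough sketch of such a separate Ingress for hostX without a TLS section (service and host names are taken from your rules, everything else is an assumption):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hostx-http
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
spec:
  rules:
  - host: hostX
    http:
      paths:
      - path: /
        backend:
          serviceName: svcX
          servicePort: 8080
</code></pre>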
<p>Take a look: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="noreferrer">ingress-nginx TLS/HTTPS user guide</a>.</p>
|
<p>I am trying to create multiple schedulers running on Kubernetes following this instruction: <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a>.
The new Kubernetes scheduler's status is <strong>Running</strong>, but its logs show the error below, and the pods that use the new scheduler are stuck in <strong>Pending</strong> status.</p>
<pre><code>E1129 02:43:22.639372 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: the server could not find the requested resource
</code></pre>
<p>and this is my clusterrole of kube-scheduler</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2019-11-28T08:29:43Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-scheduler
resourceVersion: "74398"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Akube-scheduler
uid: 517e8769-911c-4833-a37c-254edf49cbaa
rules:
- apiGroups:
- ""
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- apiGroups:
- ""
resourceNames:
- kube-scheduler
- my-scheduler
resources:
- endpoints
verbs:
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
- bindings
- pods/binding
verbs:
- create
- apiGroups:
- ""
resources:
- pods/status
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups:
- apps
- extensions
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- storage.k8s.io
resources:
- csinodes
verbs:
- watch
- list
- get
</code></pre>
<p>Is there any suggestion for this problem?</p>
<p>Thank you</p>
|
<blockquote>
<p>finally I try to use the old version of kubernetes before 16.3, I am
using 15.6 and it works now – akhmad alimudin</p>
</blockquote>
<p>OK, now I understand the cause of the issue you experienced. You probably tried to run a newer <code>kube-scheduler</code> version on a slightly older k8s cluster (where the key component is <code>kube-apiserver</code>), which cannot be done. As you can read in the <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager" rel="nofollow noreferrer">official kubernetes documentation</a>:</p>
<blockquote>
<p><code>kube-controller-manager</code>, <code>kube-scheduler</code>, and <code>cloud-controller-manager</code>
<strong>must not be newer</strong> than the <code>kube-apiserver</code> instances they communicate
with. They are expected to match the <code>kube-apiserver</code> <strong>minor version</strong>, but
may be up to one minor version older (to allow live upgrades).</p>
<p>Example:</p>
<p><code>kube-apiserver</code> is at <strong>1.13</strong> <code>kube-controller-manager</code>, <code>kube-scheduler</code>, and
<code>cloud-controller-manager</code> are supported at <strong>1.13</strong> and <strong>1.12</strong></p>
</blockquote>
<p>So you can use <code>kube-scheduler</code> which is one minor version older than your currently deployed <code>kube-apiserver</code> but not newer. </p>
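<p>A quick way to verify the skew is to compare the server version with the scheduler images actually running in the cluster, for example (this assumes a kubeadm-style setup where the default scheduler pod carries the <code>component=kube-scheduler</code> label; your custom scheduler may be labelled differently):</p>
<pre><code># kube-apiserver version (the "Server Version" line)
kubectl version --short

# images of the scheduler pods in kube-system
kubectl -n kube-system get pods -l component=kube-scheduler \
    -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}'
</code></pre>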
|
<p>I run three services in three different containers. The logs for these services are sent to the system so if I run these on a Linux server, I can see the logs with journalctl.</p>
<p>Also, if I run the services in Docker containers, I can gather the logs with docker logs <container_name> or from /var/lib/docker/containers directory. But when I move to Kubernetes (Microk8s), I cannot retrieve them with kubectl logs command, and there are also no logs in /var/log/containers or /var/log/pods.</p>
<p>If I log in to the pods, I can see that the processes are running, but without logs I cannot tell whether they are running correctly. Also, I tried to change the runtime of the MicroK8s kubelet from containerd to Docker, but I still can't get any logs.</p>
<pre><code># kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
amf-deployment-7785db9758-h24kz 1/1 Running 0 72s 10.1.243.237 ubuntu <none>
# kubectl describe po amf-deployment-7785db9758-h24kz
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 87s default-scheduler Successfully assigned default/amf-deployment-7785db9758-h24kz to ubuntu
Normal AddedInterface 86s multus Add eth0 [10.1.243.237/32]
Normal Pulled 86s kubelet Container image "amf:latest" already present on machine
Normal Created 86s kubelet Created container amf
Normal Started 86s kubelet Started container amf
# kubectl logs amf-deployment-7785db9758-h24kz
# kubectl logs -f amf-deployment-7785db9758-h24kz
^C
</code></pre>
<p>In the following screenshot you can see the difference between running the same container with Docker and running it with Kubernetes. The behaviour seems very strange, since the logs can be gathered when the application runs as an independent Docker container, but not when it is running in Kubernetes. <a href="https://i.stack.imgur.com/mdRHo.png" rel="nofollow noreferrer">Screenshot: Docker vs. Kubernetes log output</a></p>
|
<p>In traditional server environments, application logs are written to a file such as <code>/var/log/app.log</code>. However, when working with Kubernetes, you need to collect logs for multiple transient pods (applications), across multiple nodes in the cluster, making this log collection method less than optimal. Instead, the default Kubernetes logging framework recommends capturing the standard output (<code>stdout</code>) and standard error output (<code>stderr</code>) from each container on the node to a log file. If you can't see you apps logs when using <code>kubectl logs</code> command it most likely means that your app is not writing logs in the right place. The official <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">Logging Architecture</a> docs explain this topic in more detail. There is also an example of <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">Basic logging in Kubernetes</a>:</p>
<blockquote>
<p>This example uses a Pod specification with a container to write text
to the standard output stream once per second.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
</code></pre>
<p>To run this pod, use the following command:</p>
<pre><code>kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
</code></pre>
<p>The output is:</p>
<pre><code>pod/counter created
</code></pre>
<p>To fetch the logs, use the kubectl logs command, as follows:</p>
<pre><code>kubectl logs counter
</code></pre>
<p>The output is:</p>
<pre><code>0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
</code></pre>
<p>You can use <code>kubectl logs --previous</code> to retrieve logs from a previous
instantiation of a container. If your pod has multiple containers,
specify which container's logs you want to access by appending a
container name to the command. See the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">kubectl logs documentation</a>
for more details.</p>
</blockquote>
<p>You can compare it with your <code>Pod</code>/app configs to see if there are any mistakes.</p>
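<p>If your application can only write its logs to a file inside the container (which seems likely here, given that the same image prints to the console when run directly with Docker but shows nothing via <code>kubectl logs</code>), a common workaround is a streaming sidecar container that tails that file to its own <code>stdout</code>. Below is a minimal sketch assuming your image is <code>amf:latest</code> and it writes to <code>/var/log/amf.log</code>; both are assumptions you would need to adjust to your actual setup:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: amf-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                 # shared between the app and the sidecar
  containers:
  - name: amf
    image: amf:latest            # assumption: your existing image, writing logs to a file
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-streamer
    image: busybox
    # tail the application's log file to stdout so kubectl logs works
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/amf.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
</code></pre>
<p>You could then read the logs with <code>kubectl logs amf-with-log-sidecar -c log-streamer</code>.</p>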
<hr />
<p>Having that knowledge in mind you now have several option to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">Debug Running Pods</a> such as:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods" rel="nofollow noreferrer">Debugging Pods</a> by executing <code>kubectl describe pods ${POD_NAME}</code> and checking the reason behind it's failure.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examining pod logs</a>: with <code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}</code> or <code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}</code></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#container-exec" rel="nofollow noreferrer">Debugging with container exec</a>: by running commands inside a specific container with <code>kubectl exec</code></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">Debugging with an ephemeral debug container</a>: Ephemeral containers are useful for interactive troubleshooting when <code>kubectl exec</code> is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with <a href="https://github.com/GoogleContainerTools/distroless" rel="nofollow noreferrer">distroless images</a>.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#node-shell-session" rel="nofollow noreferrer">Debugging via a shell on the node</a>: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host</p>
</li>
</ul>
<hr />
<p><strong>To sum up:</strong></p>
<ul>
<li><p>Make sure your logging is in place</p>
</li>
<li><p>Debug with the options listed above</p>
</li>
</ul>
|
<p>I have problem with PersistentVolume in local K8s cluster. When I am rebooting PC or close and open Desktop Docker I lose data in my PV.</p>
<p>This is my PV config:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pvolume
labels:
name: pvolume
spec:
storageClassName: manual
volumeMode: Filesystem
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<p>Is there any possibility to keep data and not use external PV provider like GCP volume?</p>
<p><strong>SOLUTION:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pvolume
labels:
name: pvolume
spec:
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: your-local-storage-class-name
local:
path: /c/yourDir
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
</code></pre>
|
<p>As mentioned in the links I have provided in the comment section (read more: <a href="https://github.com/docker/for-win/issues/7023#issuecomment-753536143" rel="nofollow noreferrer">docker-desktop-pv</a>, <a href="https://github.com/docker/for-win/issues/7023#issuecomment-772989137" rel="nofollow noreferrer">docker-desktop-pv-local</a>), the solution/workaround is to use a different volume type: change the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume to a <code>local</code> volume. Your <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PV</a> yaml file should look like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pvolume
labels:
name: pvolume
spec:
storageClassName: manual
volumeMode: Filesystem
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
  local:
    path: /c/yourDirectory
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
</code></pre>
<p>The <code>local.path</code> field points directly at the Windows folder you want to use, so the PersistentVolume can be backed by a local directory on a Windows machine. Note that a <code>local</code> volume also requires <code>nodeAffinity</code> (here matching the <code>docker-desktop</code> node), as shown above.</p>
<blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a>- HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using <code>local</code> volume instead)
...</li>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer"><code>local</code></a> - local storage devices mounted on nodes.</li>
</ul>
</blockquote>
<p>See: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">types-of-pv</a>.</p>
|
<p>I am trying to set TCP idleTimeout via an Envoy Filter, so that outbound connections external domain <code>some.app.com</code> will be terminated if they are idle for 5s:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: listener-timeout-tcp
namespace: istio-system
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
sni: some.app.com
filter:
name: envoy.filters.network.tcp_proxy
patch:
operation: MERGE
value:
name: envoy.filters.network.tcp_proxy
typed_config:
'@type': type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
idle_timeout: 5s
</code></pre>
<p>However, when I try to apply this filter I get the following error:</p>
<pre><code>Error from server: error when creating "filter.yaml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: envoy filter: missing filters
</code></pre>
<p>So, I realised that the EnvoyFilter configuration above is not supported by <code>istio 1.2.5</code>, so I modified the configuration to work with the old version:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: tcp-idle-timeout
spec:
workloadSelector:
labels:
app: mecha-dev
filters:
- listenerMatch:
listenerType: SIDECAR_OUTBOUND
listenerProtocol: TCP
filterName: envoy.tcp_proxy
filterType: NETWORK
filterConfig:
idle_timeout: 5s
</code></pre>
<p>After modifying the EnvoyFilter was created but it does not seem to have any affect on the outbound requests. Also, I couldn't find a way to restrict this filter to only outbound requests going to external service <code>some.app.com</code>.</p>
<p>Is there something missing in my EnvoyFilter configuration? Also, can we restrict this filter to just <code>some.app.com</code>? There's <code>address</code> option under <code>listenerMatch</code> but what if the IP address of the external service keeps on changing?</p>
<p>Istio and EnvoyProxy version used:</p>
<pre><code>ISTIO_VERSION=1.2.5
ENVOY_VERSION=1.11.0-dev
</code></pre>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already discussed in the comments, this <code>EnvoyFilter</code> configuration is not supported in Istio version 1.2, and that version has actually been out of support since Dec 2019.</p>
<p>I strongly recommend upgrading to the latest Istio and Envoy versions. Also, after you upgrade please notice that the filter name you want to use was <a href="https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.14.0#deprecated" rel="nofollow noreferrer">deprecated and replaced</a>. You should now use <code>envoy.filters.network.tcp_proxy</code> instead of <code>envoy.tcp_proxy</code>.</p>
<p>Please remember that things are getting deprecated for a reason and keeping the old versions will sooner or later bring you more trouble. Try to keep things more up-to-date.</p>
<p>More details can be found in the <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">latest docs</a>.</p>
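<p>For reference, a filter very similar to your first attempt should be accepted on a recent Istio. The sketch below assumes Istio 1.9+ with the Envoy v3 API and keeps your original SNI match on <code>some.app.com</code> and the 5s idle timeout; treat it as a starting point and adjust names/namespaces to your environment:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: listener-timeout-tcp
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          sni: some.app.com
          filter:
            name: envoy.filters.network.tcp_proxy
    patch:
      operation: MERGE
      value:
        name: envoy.filters.network.tcp_proxy
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          idle_timeout: 5s
</code></pre>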
|
<p>I have a node in my K8S cluster that I use for monitoring tools.</p>
<p>Pods running here: <code>Grafana</code>, <code>PGAdmin</code>, <code>Prometheus</code>, and <code>kube-state-metrics</code> </p>
<p>My problem is that I have a lot of evicted pods</p>
<p>The pods evicted: <code>kube-state-metrics</code>, <code>grafana-core</code>, <code>pgadmin</code></p>
<p>Then, the pod evicted with reason: <code>The node was low on resource: [DiskPressure].</code> : <code>kube-state-metrics</code> (90% of evicted pods), <code>pgadmin</code> (20% of evicted pods)</p>
<p>When I check any of the pods, I have free space on disk:</p>
<pre><code>bash-5.0$ df -h
Filesystem Size Used Available Use% Mounted on
overlay 7.4G 3.3G 3.7G 47% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 484.2M 0 484.2M 0% /sys/fs/cgroup
/dev/nvme0n1p2 7.4G 3.3G 3.7G 47% /dev/termination-log
shm 64.0M 0 64.0M 0% /dev/shm
/dev/nvme0n1p2 7.4G 3.3G 3.7G 47% /etc/resolv.conf
/dev/nvme0n1p2 7.4G 3.3G 3.7G 47% /etc/hostname
/dev/nvme0n1p2 7.4G 3.3G 3.7G 47% /etc/hosts
/dev/nvme2n1 975.9M 8.8M 951.1M 1% /var/lib/grafana
/dev/nvme0n1p2 7.4G 3.3G 3.7G 47% /etc/grafana/provisioning/datasources
tmpfs 484.2M 12.0K 484.2M 0% /run/secrets/kubernetes.io/serviceaccount
tmpfs 484.2M 0 484.2M 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 484.2M 0 484.2M 0% /sys/firmware
</code></pre>
<p>Only one or two pods show another message:</p>
<pre><code>The node was low on resource: ephemeral-storage. Container addon-resizer was using 48Ki, which exceeds its request of 0. Container kube-state-metrics was using 44Ki, which exceeds its request of 0.
The node was low on resource: ephemeral-storage. Container pgadmin was using 3432Ki, which exceeds its request of 0.
</code></pre>
<p>I also have kubelet saying: </p>
<pre><code>(combined from similar events): failed to garbage collect required amount of images. Wanted to free 753073356 bytes, but freed 0 bytes
</code></pre>
<p>I have those pods running on a AWS <code>t3.micro</code></p>
<p>It appears that it is not affecting my services in production. </p>
<p>Why is it happening, and how should I fix it?</p>
<p>EDIT: Here is the result when I do <code>df -h</code> in my node</p>
<pre><code>admin@ip-172-20-41-112:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 789M 3.0M 786M 1% /run
/dev/nvme0n1p2 7.5G 6.3G 804M 89% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
</code></pre>
<p>I can see <code>/dev/nvme0n1p2</code>, but how can I see its content? When I do <a href="https://dev.yorhel.nl/ncdu" rel="nofollow noreferrer">ncdu</a> in /, I can only see 3GB of data... </p>
|
<p>Apparently you're about to run out of the available disk space <strong>on your node</strong>. However keep in mind that according to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-conditions" rel="noreferrer">documentation</a> <code>DiskPressure</code> condition denotes:</p>
<blockquote>
<p>Available disk space and inodes on either the node’s root filesystem
or image filesystem has satisfied an eviction threshold</p>
</blockquote>
<p>Try to run <code>df -h</code> but on your worker <code>node</code>, not in a <code>Pod</code>. What is the percentage of disk usage? Additionally, you may check the <strong>kubelet</strong> logs for more details:</p>
<pre><code>journalctl -xeu kubelet.service
</code></pre>
<p>Also take a look at <a href="https://medium.com/faun/kubelet-pod-the-node-was-low-on-resource-diskpressure-384f590892f5" rel="noreferrer">this</a> article and <a href="https://github.com/kubernetes/kubernetes/issues/66961#issuecomment-448135500" rel="noreferrer">this</a> comment.</p>
<p>Let me know if it helps.</p>
<p><a href="https://stackoverflow.com/questions/42576661/diskpressure-crashing-the-node">Here</a> you can find an answer which explains very well the same topic.</p>
<h3>update:</h3>
<p>This line clearly shows that the default threshold is close to being exceeded:</p>
<pre><code>/dev/nvme0n1p2 7.5G 6.3G 804M 89% /
</code></pre>
<p>Switch to the root user (<code>su -</code>) and run:</p>
<pre><code>du -hd1 /
</code></pre>
<p>to see what directories take up most of the disk space.</p>
|
<p>I am using kubernetes-dashboard to view all pods, check status, log in, pass commands, etc. It works well, but there are a lot of connectivity issues related to it. I am currently running it on port 8443, and forwarding the connection from 443 to 8443 via Nginx's proxy pass. But I keep getting bad gateway, and the connection keeps dropping. It's not an nginx issue, since I get a kubernetes error. I am using a Let's Encrypt certificate in nginx. What am I doing wrong?</p>
<p>Error log :</p>
<pre><code>E0831 05:31:45.839693 11324 portforward.go:385] error copying from local connection to remote stream: read tcp4 127.0.0.1:8443->127.0.0.1:33380: read: connection reset by peer
E0831 05:33:22.971448 11324 portforward.go:340] error creating error stream for port 8443 -> 8443: Timeout occured
</code></pre>
<p>Theses are the 2 errors I constantly get. I am running this command as a nohup process :</p>
<pre><code>nohup kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443 --address 0.0.0.0 &
</code></pre>
<p>And finally my nginx config :</p>
<p>default :</p>
<pre><code> location / {
proxy_intercept_errors off;
proxy_pass https://localhost:8443/;
}
</code></pre>
<p>Thank you. :-)</p>
|
<p>Unfortunately this is an on-going issue with Kubernetes' port forwarding. You may find it not particularly reliable when used for long-running connections. If possible, try to setup a direct connection instead. A more extended discussion regarding this can be found <a href="https://github.com/kubernetes/kubernetes/issues/78446" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes/kubernetes/issues/74551" rel="nofollow noreferrer">here</a>.</p>
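<p>As a sketch of such a direct connection (instead of proxying Nginx to a long-lived <code>kubectl port-forward</code>), you could expose the dashboard Service on a NodePort and point Nginx at that port. The port number below is only an example:</p>
<pre><code># switch the dashboard Service to NodePort and pin an example node port
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30443}]}}'
</code></pre>
<p>Nginx would then <code>proxy_pass</code> to <code>https://<node-ip>:30443</code> instead of to the port-forward on <code>localhost:8443</code>.</p>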
|
<ol>
<li>How many CRs of a certain CRD can a k8s cluster handle?</li>
<li>How many CRs can a certain controller (or Operator) reconcile?</li>
</ol>
<p>Thanks!</p>
|
<p><strong>1. How many CRs of a certain CRD can a k8s cluster handle?</strong></p>
<p>As many as your API server's <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#storage" rel="nofollow noreferrer">storage space allows</a>:</p>
<blockquote>
<p>Custom resources consume storage space in the same way that ConfigMaps
do. Creating too many custom resources may overload your API server's
storage space.</p>
<p>Aggregated API servers may use the same storage as the main API
server, in which case the same warning applies.</p>
</blockquote>
<p><strong>2. How many CRs can a certain controller (or Operator) reconcile?</strong></p>
<p>A controller in Kubernetes keeps track of <strong>at least one resource type</strong>. There are many built-in controllers in Kubernetes: the replication controller, namespace controller, service account controller, etc. A Custom Resource Definition along with a Custom Controller makes up the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">Operator Pattern</a>. It's hard to tell what the maximum number of CRs a given Operator could handle is, as it depends on different aspects such as the Operator itself. For example, <a href="https://01.org/kubernetes/blogs/2020/using-kubernetes-custom-controller-manage-two-custom-resources-designing-akraino-icn-bpa" rel="nofollow noreferrer">this guide</a> shows that the Binary Provisioning Agent (BPA) Custom Controller can handle two or more custom resources.</p>
<p>You can find some useful sources below:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#storage" rel="nofollow noreferrer">Custom Resources</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">Operator pattern</a></p>
</li>
<li><p><a href="https://01.org/kubernetes/blogs/2020/using-kubernetes-custom-controller-manage-two-custom-resources-designing-akraino-icn-bpa" rel="nofollow noreferrer">Using a Kubernetes custom controller to manage two custom resources</a></p>
</li>
<li><p><a href="https://admiralty.io/blog/2018/06/27/kubernetes-custom-resource-controller-and-operator-development-tools/" rel="nofollow noreferrer">Kubernetes Custom Resource, Controller and Operator Development Tools</a></p>
</li>
</ul>
|
<p>I am digging into nginx, Kubernetes and SSL as I am trying to deploy my web app onto Kubernetes via Google Kubernetes Engine (GKE).</p>
<p>A little about my project:</p>
<ul>
<li>My frontend is a React web application</li>
<li>My backend is a Node Express API service</li>
</ul>
<p>They are on separate repos (ex: frontend repo and backend repo)</p>
<p>I am planning on using nginx to serve my frontend and proxy requests to my backend.</p>
<p>My question is...</p>
<p>Would it be possible to have both my backend and frontend within the same Kubernetes config yaml file, and if possible, is this best practice to do so?</p>
<p>How I see it is...</p>
<p>In my <code>nginx.conf</code> file, I will have a server section that has a <code>proxy_pass</code> to something like <code>localhost:8000</code>, which is the port of the backend. I would assume this would work if both are within the same container and local network.</p>
<p>I would then probably have a load balancer that points to nginx.</p>
<p>Any help or advice is appreciated! I hope this makes sense (still very new to this)!</p>
|
<p>Let's start from the beginning. There is too much to say on this topic to put everything in the comments, so let's move it to an answer. If something below isn't entirely clear, don't hesitate to ask.</p>
<p>First of all you need to start from your <strong>application architecture</strong> and try to determine how it is supposed to work, what part should be able to communicate with another one.</p>
<p><strong>Microservices</strong> approach in design of applications is entire broad topic and would deserve rather separate article than attempt to explain it in a single answer. But in a nutshell, all it is about is decoupling the application into separate parts where each of them have a distinct functionality and can be developed, deployed and updated separately. Those elements can be closely related, they can even totally depend on one another but at the same time they are separate entities.</p>
<p><strong>Kubernetes</strong> by its very nature encourages you to follow the above approach. If you read more about <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pods</a>, you will see that they are perfectly made for this purpose, for being a kind of wrapper for single <strong>microservice</strong>. This is a simplification to some extent, but I believe it reflects the essence of the matter. In real production environment you can have a set of <code>Pods</code> managed by higher abstraction such as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>, but note that even if you have 3 or more replicas of a single <code>Pod</code> it still represents the same single <strong>microservice</strong>, only scaled horizontaly e.g. to be able to handle bigger load or more requests.</p>
<p>Separate sets of <code>Pods</code> representing different <strong>microservices</strong> can communicate with each other and be exposed to external world thanks to <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a>.</p>
<p>As you can read in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes docs</a>, an <code>Ingress</code> is:</p>
<blockquote>
<p>An API object that manages external access to the services in a
cluster, typically HTTP. Ingress may provide load balancing, SSL
termination and name-based virtual hosting.</p>
<p>Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.</p>
<pre><code> internet
|
[ Ingress ]
--|-----|--
[ Services ]
</code></pre>
</blockquote>
<p>You should ask yourself if you really want to expose to the external world both your frontend and backend <code>Pods</code>. Typically you don't want to do it. <strong>Backend</strong> by its very definition should act as a... well... as a <strong>backend</strong> of your app :) It is not supposed to be reachable directly by external users but only via the <strong>frontend</strong> part of the application, so it shouldn't be exposed via <code>Ingress</code>. Only after the <strong>frontend</strong> part of your app receives a request from an external user, it makes its own request to the <strong>backend</strong>, retrieves some data processed by the backend app and then passes it to the end user. <strong>The user doesn't make direct requests to your backend app</strong>.</p>
<p>You can expose via <code>Ingress</code> different parts of your application using different paths (like in the example given in another answer), which can be backed by different <strong>microservices</strong> running in different sets of <code>Pods</code> but still they should be <strong>frontend parts</strong> of your app, <strong>NOT</strong> <strong>backend</strong>.</p>
<p><strong>Backend</strong> <code>Pods</code> is something that you usually expose only within your <strong>kubernetes cluster</strong>, to make it available to other components of your app. For that purpose simple <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">Service</a> of type <code>ClusterIP</code> (which is by the way the default type, automatically chosen when the <code>type:</code> field is not specified) is the way to go.</p>
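<p>As a minimal sketch of that setup (the name <code>backend</code>, the port <code>8000</code> and the label selector are assumptions to adjust to your app), the backend gets a plain <code>ClusterIP</code> Service, and the Nginx config in your frontend Pods should <code>proxy_pass</code> to the Service's DNS name rather than to <code>localhost</code>, since the backend runs in different Pods:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP        # default type; reachable only from inside the cluster
  selector:
    app: backend
  ports:
  - port: 8000
    targetPort: 8000
</code></pre>
<p>and in <code>nginx.conf</code>:</p>
<pre><code>location /api/ {
    proxy_pass http://backend:8000/;   # Service DNS name, not localhost
}
</code></pre>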
<p>I would encourage you to read more about different <code>Service</code> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">types</a> and what they are used for.</p>
<p>You may also want to take a look at the following articles. I believe they will make the whole concept even clearer:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">Connecting Applications with Services</a></p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">Connect a Front End to a Back End Using a Service</a></p>
<p>As to merging different <strong>kubernetes objects</strong> definitions to a single <code>yaml</code> file like in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend" rel="nofollow noreferrer">this</a> example, where you can see <code>Service</code> and <code>Deployment</code> defined in a single <code>yaml</code> file and separated with <code>---</code> is just a convention and I wouldn't pay too much attention to it. If it makes your work more convenient, you can use it.</p>
<p>As to your additional question in the comment:</p>
<blockquote>
<p>I'm curious if having two load balancers would also work as well or
if Ingress is suited better for this scenario?</p>
</blockquote>
<p>I hope after reading the whole answer it's already much clearer. Note that typically an <code>Ingress</code> also uses <strong>loadbalancer</strong> under the hood. If you only want to expose externally your <strong>frontend app</strong> without using different paths which are backed by separate <strong>microservices</strong>, you may not even need an <code>Ingress</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> <code>Service</code> would be totally enough for what you want to accomplish. Remember that you also use it to expose your app to the external world. If you expose something only within your cluster, that shouldn't be reachable from outside, use simple <code>ClusterIP</code> <code>Service</code> instead.</p>
<blockquote>
<p>LoadBalancer: Exposes the Service externally using a cloud provider’s
load balancer. NodePort and ClusterIP Services, to which the external
load balancer routes, are automatically created.</p>
</blockquote>
<p>I hope this answered your question.</p>
|
<p>1.16 deprecation notice:</p>
<pre><code>DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer
be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16. Migrate to the apps/v1 API, available since v1.9. Existing persisted data
can be retrieved through the apps/v1 API. For example, to convert a
Deployment that currently uses apps/v1beta1, enter the following command.
</code></pre>
<p>I have about 10 helm charts that contain the old api versions - datadog, nginx-ingress and more. I don't want to upgrade these different services. Are there any known workarounds?</p>
|
<p>There are some options you should consider:</p>
<ul>
<li><p>don't update anything and just stick to Kubernetes 1.15 (not recommended as it is 4 main versions behind the latest one)</p>
</li>
<li><p><code>git clone</code> your repo and change <code>apiVersion</code> to <code>apps/v1</code> in all your resources (see the sketch after this list)</p>
</li>
<li><p>use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#convert" rel="nofollow noreferrer">kubectl convert</a> in order to change the <code>apiVersion</code>, for example: <code>kubectl convert -f deployment.yaml --output-version apps/v1</code></p>
</li>
</ul>
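<p>For the bulk-edit option above, a rough sketch of updating all chart templates at once is shown below. It assumes the old API group/version strings appear verbatim in your templates, and you should review the resulting diff, since <code>apps/v1</code> also requires a <code>spec.selector</code> on Deployments, which a plain string replacement will not add:</p>
<pre><code># find templates still using the deprecated API versions and rewrite them in place
grep -rl 'apiVersion: extensions/v1beta1\|apiVersion: apps/v1beta1\|apiVersion: apps/v1beta2' templates/ \
  | xargs sed -i 's|apiVersion: extensions/v1beta1|apiVersion: apps/v1|; s|apiVersion: apps/v1beta1|apiVersion: apps/v1|; s|apiVersion: apps/v1beta2|apiVersion: apps/v1|'
</code></pre>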
<p>It is worth to mention that stuff gets deprecated for a reason and it is strongly not recommended to stick to old ways if they are not supported anymore.</p>
|
<p>Getting the below error for the command <code>kubectl apply -n prod -f kustomize/kustomization.yaml</code></p>
<pre><code>error: unable to recognize "kustomize/kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
</code></pre>
<p>Please advise.</p>
|
<p>Firstly I recommend to read official doc: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubernetes-kustomization</a>.</p>
<p>To solve the problem use <code>-k</code> flag instead of <code>-f</code> flag in command:</p>
<pre><code>$ kubectl apply -k <kustomization_directory>
</code></pre>
<p>If your kustomize directory contains only the manifest file (<code>kustomization.yaml</code>), then run <code>$ kubectl apply -k kustomize/</code> from its parent directory. Otherwise, create a new empty directory, put your <code>kustomization.yaml</code> there, and then execute the following command from the parent directory: <code>$ kubectl apply -k new-directory/</code></p>
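<p>For completeness, a minimal <code>kustomization.yaml</code> that <code>kubectl apply -k</code> can consume might look like the sketch below; the referenced file names are just examples:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
resources:
- deployment.yaml
- service.yaml
</code></pre>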
<p>Take a look: <a href="https://github.com/kubernetes-sigs/kustomize/issues/738" rel="nofollow noreferrer">kustomize-no-matches</a>, <a href="https://stackoverflow.com/questions/63081332/kustomize-no-matches-for-kind-kustomization-in-version-kustomize-config-k8s">kustomize-kubernetes-no-matches</a>.</p>
|
<p>I am executing the below-mentioned command to install Prometheus.</p>
<pre><code>helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack
</code></pre>
<p>I am getting the below error message. Please advise.</p>
<pre><code>Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigNamespaceSelector" in com.coreos.monitoring.v1.Alertmanager.spec, ValidationError(Alertmanager.spec): unknown field "alertmanagerConfigSelector" in com.coreos.monitoring.v1.Alertmanager.spec]
</code></pre>
|
<p>Hello @saerma and welcome to Stack Overflow!</p>
<p>@rohatgisanat might be right but without seeing your current configs it's impossible to verify that. Please check if that was the case.</p>
<p>There are also two other things you should look for:</p>
<ol>
<li>If there were any previous installations of other Prometheus-related manifests, then delete the following CRDs (an example delete command is sketched after the list):</li>
</ol>
<ul>
<li><code>crd alertmanagerconfigs.monitoring.coreos.com</code></li>
<li><code>crd alertmanagers.monitoring.coreos.com</code></li>
<li><code>crd podmonitors.monitoring.coreos.com</code></li>
<li><code>crd probes.monitoring.coreos.com</code></li>
<li><code>crd prometheuses.monitoring.coreos.com</code></li>
<li><code>crd prometheusrules.monitoring.coreos.com</code></li>
<li><code>crd servicemonitors.monitoring.coreos.com</code></li>
<li><code>crd thanosrulers.monitoring.coreos.com</code></li>
</ul>
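<p>A sketch of the corresponding cleanup (run it only if you are sure nothing else in the cluster depends on these CRDs, since deleting them also removes all ServiceMonitors, Prometheuses, etc. created from them):</p>
<pre><code>kubectl delete crd alertmanagerconfigs.monitoring.coreos.com \
  alertmanagers.monitoring.coreos.com \
  podmonitors.monitoring.coreos.com \
  probes.monitoring.coreos.com \
  prometheuses.monitoring.coreos.com \
  prometheusrules.monitoring.coreos.com \
  servicemonitors.monitoring.coreos.com \
  thanosrulers.monitoring.coreos.com
</code></pre>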
<p>Also, check if there are any other Prometheus related config files with:</p>
<pre><code>kubectl get configmap --all-namespaces
</code></pre>
<p>and also delete them.</p>
<p>Notice that deleting the CRDs will result in deleting any servicemonitors and so on, which have previously been created by other charts.</p>
<p>After that you can try to install again from scratch.</p>
<ol start="2">
<li>If installing fresh, run:</li>
</ol>
<hr />
<pre><code>kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.45.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
</code></pre>
<hr />
<p>as CRD changed with the newer version and you need to use the updated ones.</p>
<p><a href="https://github.com/prometheus-community/helm-charts/issues/557" rel="nofollow noreferrer">Source</a>.</p>
|
<p>When I use this command: "kubectl get nodes" I get the below errors:</p>
<pre><code>Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
</code></pre>
<p>Can anyone help to solve this issue?</p>
|
<p>As OP @Sam.Yang informed in the comment section, the problem is resolved.</p>
<p>The problem was a wrong configuration of <code>KUBE_ETCD_VERSION</code> in <code>apiserver.sh</code>.</p>
|
<p>Disclaimer: I'm new to Kubernetes and Helm.</p>
<p>I am trying to install a Helm chart using the brand new Helm Hub and for the life of me I can't figure out how this is supposed to work.</p>
<p>A new version of Helm (3.0) was released only a few months ago with significant changes, one of them is that it doesn't come with any repositories configured. Helm released the Helm Hub which is supposed to be a centralized service to find charts.</p>
<p>I am trying to install a CloudBees Jenkins chart. This is what I get when I search the hub:</p>
<pre><code>[me@localhost tmp]$ helm search hub cloudbees -o yaml
- app_version: 2.222.1.1
description: The Continuous Delivery Solution for Enterprises
url: https://hub.helm.sh/charts/cloudbees/cloudbees-core
version: 3.12.0+80c17a044bc4
- app_version: 9.2.0.139827
description: A Helm chart for CloudBees Flow
url: https://hub.helm.sh/charts/cloudbees/cloudbees-flow
version: 1.1.1
- app_version: 9.2.0.139827
description: A Helm chart for CloudBees Flow Agent
url: https://hub.helm.sh/charts/cloudbees/cloudbees-flow-agent
version: 1.1.1
- app_version: 2.204.3.7
description: CloudBees Jenkins Distribution provides development teams with a highly
dependable, secure, Jenkins environment curated from the most recent supported
Jenkins release. The distribution comes with a recommended catalog of tested plugins
available through the CloudBees Assurance Program.
url: https://hub.helm.sh/charts/cloudbees/cloudbees-jenkins-distribution
version: 2.204.307
- app_version: 2.0.2
description: Helm chart for sidecar injector webhook deployment
url: https://hub.helm.sh/charts/cloudbees/cloudbees-sidecar-injector
version: 2.0.2
</code></pre>
<p>So it looks like the chart I am looking for is available: <code>cloudbees-jenkins-distribution</code>.</p>
<p>However, I can't find any way to install from the hub or to add a repository based on the hub output. Some of the things I've tried:</p>
<pre><code>[me@localhost tmp]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[me@localhost tmp]$ helm install myJenkins cloudbees-jenkins-distribution
Error: failed to download "cloudbees-jenkins-distribution" (hint: running `helm repo update` may help)
[me@localhost tmp]$ helm repo add cbRepo https://hub.helm.sh/charts/cloudbees
Error: looks like "https://hub.helm.sh/charts/cloudbees" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
[me@localhost tmp]$ helm repo add cbRepo https://hub.helm.sh/charts/cloudbees/cloudbees-jenkins-distribution
Error: looks like "https://hub.helm.sh/charts/cloudbees/cloudbees-jenkins-distribution" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
</code></pre>
<p>The documentation really doesn't say much about how I'm supposed to go from the Helm Hub to an installed chart. What am I missing here?</p>
|
<p><strong>Helm Hub is NOT a repo that you can add and install Helm charts from.</strong> It doesn't expose valid repo urls either. That's why you're getting error messages like the one below:</p>
<pre><code>Error: looks like "https://hub.helm.sh/charts/cloudbees" is not a valid chart repository ...
</code></pre>
<p>when you're trying to run <code>helm repo add</code> on <code>https://hub.helm.sh</code> based <strong>urls</strong>.</p>
<p>I know it may seem pretty confusing but it just works like that, by its very design. Please refer to <a href="https://github.com/helm/hub/issues/208" rel="noreferrer">this discussion</a> on <strong>Github</strong>. Specifically <a href="https://github.com/helm/hub/issues/208#issuecomment-565704879" rel="noreferrer">this comment</a> explains it a bit more and I hope it also answers your question:</p>
<blockquote>
<p>hub.helm.sh is not the helm repo, so it will not work the you trying,
it is only meant to view and search for charts. check in there for
chart repository and it that way, then you will be able to install the
charts.</p>
</blockquote>
<p>Unfortunately, the <a href="https://helm.sh/docs/intro/using_helm/#helm-search-finding-charts" rel="noreferrer">official helm documentation</a> doesn't explain it well enough. It only mentions:</p>
<blockquote>
<p><code>helm search hub</code> searches the Helm Hub, which comprises helm charts
from dozens of different repositories.</p>
</blockquote>
<p>But it shows <em>"no explanation how to get from <code>helm search repo</code> which shows <code>hub.helm.sh</code> to <code>helm repo add</code> which magically shows the a new url to use."</em> - as one user wrote in <a href="https://github.com/helm/hub/issues/208#issuecomment-573624005" rel="noreferrer">the thread</a> mentioned above.</p>
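<p>In practice the workflow is therefore: open the chart's page on <code>hub.helm.sh</code>, find the actual repository URL listed there, add that repository, and install from it. A hedged sketch (the repository URL below is a placeholder; use the one shown on the CloudBees chart page):</p>
<pre><code># the URL is a placeholder; copy the real one from the chart's hub.helm.sh page
helm repo add cloudbees <repository-url-from-the-chart-page>
helm repo update
helm search repo cloudbees-jenkins-distribution
helm install myJenkins cloudbees/cloudbees-jenkins-distribution
</code></pre>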
|
<p>I succeeded in setup a control-plane,master node in my vm. Then I copied the vm, trying to join the copied vm to the existed kubernete cluster.</p>
<p>Problem is that the original vm(node)'s name is new-master-1, and the copied node has the same name. Even after I <code>vi /etc/hostname</code> and change the copied vm's name to new-master-2, after running <code>kubectl get nodes</code> in the copied vm, the output name is still <code>new-master-1</code>:</p>
<pre><code>root@new-master-2:/home/hzg# kubectl get nodes
NAME STATUS ROLES AGE VERSION
new-master-1 Ready control-plane,master 32h v1.20.2
</code></pre>
<p>I think I can only join the copied vm as another master node to the cluster after I see that the name change to <code>new-master-2</code>, right? How to change the node's name?</p>
|
<p><strong>TL;DR:</strong></p>
<p>Run: <code>kubeadm join</code> - adjust this command based on the output from <code>kubeadm token create</code> and add flags like <code>--control-plane</code> and <code>--node-name</code> if needed. Take a look at the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/" rel="nofollow noreferrer">kubeadm join</a> before proceeding.</p>
<hr />
<p>The <code>kubelet</code> registers a node under a particular name. And usually you can <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/" rel="nofollow noreferrer">Reconfigure a Node's Kubelet in a Live Cluster</a>. But it does not apply when it comes to changing the Node's name. If you try to use <code>kubectl edit node</code> for that, you will get an error:</p>
<pre><code>error: At least one of apiVersion, kind and name was changed
</code></pre>
<p>There is a way however. You need to change the hostname and than remove the Node, reset and rejoin it.</p>
<p>Here are the steps. On the Node that needs its name to be changed:</p>
<ul>
<li><p>Change the hostname.</p>
</li>
<li><p>Run: <code>kubectl delete node <nodename></code> (notice that you still have to use the old Node name)</p>
</li>
<li><p>Run: <code>kubeadm reset</code> (as <code>root</code> if needed)</p>
</li>
</ul>
<p>Now, on the original Master Node:</p>
<ul>
<li><p>Run: <code>export KUBECONFIG=/path/to/admin.conf</code></p>
</li>
<li><p>Run: <code>kubeadm token create --print-join-command</code></p>
</li>
</ul>
<p>Back on the renamed Node:</p>
<ul>
<li>Run: <code>kubeadm join</code> - adjust this command based on the output from <code>kubeadm token create</code> and add flags like <code>--control-plane</code> and <code>--node-name</code> if needed. Take a look at the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/" rel="nofollow noreferrer">kubeadm join</a> before proceeding.</li>
</ul>
<p>You can check out <a href="https://www.youtube.com/watch?v=TqoA9HwFLVU" rel="nofollow noreferrer">this source</a> for a tutorial video.</p>
<hr />
<p><strong>EXAMPLE:</strong></p>
<pre><code>kube-master:~$ kubectl get nodes
NAME STATUS
kube-master Ready
kube-node-1 Ready
kube-node-2 Ready
kube-master:~$ kubectl delete node kube-node-2
node "kube-node-2" deleted
kube-node-2:~$ sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
kube-node-2:~$ sudo kubeadm join --node-name kube-node-22 <rest-of-the-join-command>
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
kube-master:~$ kubectl get nodes
NAME STATUS
kube-master Ready
kube-node-1 Ready
kube-node-22 Ready
</code></pre>
<p>Result: the name of <code>kube-node-2</code> was successfully changed to <code>kube-node-22</code></p>
|
<p>I am setting up an ingress service following some k8s documentation, but I am not able to understand the following annotations:</p>
<p><code>kubernetes.io/ingress.class:</code></p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target:</code></p>
<p>Do you know what these annotations do?</p>
<p>Thanks in advance.</p>
|
<ol>
<li><code>kubernetes.io/ingress.class</code> annotation is officially <a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#deprecating-the-ingress-class-annotation" rel="noreferrer">deprecated</a>:</li>
</ol>
<blockquote>
<p>Before the <code>IngressClass</code> resource was added in Kubernetes 1.18, a
similar concept of Ingress class was often specified with a
<code>kubernetes.io/ingress.class</code> annotation on the Ingress. Although this
annotation was never formally defined, it was widely supported by
Ingress controllers, and should now be considered formally deprecated.</p>
</blockquote>
<p>Instead you should use the <code>ingressClassName</code>:</p>
<blockquote>
<p>The newer <code>ingressClassName</code> field on Ingresses is a replacement for
that annotation, but is not a direct equivalent. While the annotation
was generally used to reference the name of the Ingress controller
that should implement the Ingress, the field is a reference to an
<code>IngressClass</code> resource that contains additional Ingress
configuration, including the name of the Ingress controller.</p>
</blockquote>
<ol start="2">
<li>The <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="noreferrer">rewrite annotation</a> does as follows:</li>
</ol>
<blockquote>
<p>In some scenarios the exposed URL in the backend service differs from
the specified path in the Ingress rule. Without a rewrite any request
will return 404. Set the annotation
<code>nginx.ingress.kubernetes.io/rewrite-target</code> to the path expected by
the service.</p>
<p>If the Application Root is exposed in a different path and needs to be
redirected, set the annotation <code>nginx.ingress.kubernetes.io/app-root</code>
to redirect requests for <code>/</code>.</p>
</blockquote>
<p>For a more detailed example I strongly suggest you check out this <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">source</a>. It shows exactly how rewriting works.</p>
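<p>To make this concrete, below is a hedged example in the style of the ingress-nginx rewrite docs (the host, service name and port are placeholders). Requests to <code>/something/...</code> are rewritten to <code>/...</code> before reaching the backend, and the Ingress class is selected via <code>ingressClassName</code> instead of the deprecated annotation:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.example.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
</code></pre>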
|
<p>I installed Istio with</p>
<pre><code>gateways.istio-egressgateway.enabled = true
</code></pre>
<p>I have a service that consumes external services, so I define the following egress rule.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-service1
spec:
hosts:
- external-service1.com
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS
resolution: DNS
location: MESH_EXTERNAL
</code></pre>
<p>But using Jaeger I can not see the traffic to the external service, and thus be able to detect problems in the network.</p>
<p>I'm forwarding the appropriate headers to the external service (x-request-id, x-b3-traceid, x-b3-spanid, b3-parentspanid, x-b3-sampled, x-b3-flags, x-ot-span-context)</p>
<p>Is this the correct behavior?
what is happening?
Can I only have statistics of internal calls?
How can I have statistics for egress traffic?</p>
|
<p>This assumes that your external services are defined in Istio’s internal service registry. If not, please configure them according to the instructions in <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer"><code>service-defining</code></a>.</p>
<p>With HTTPS, all the HTTP-related information (method, URL path, response code) is encrypted, so Istio <strong>cannot</strong> see or monitor it.
If you need to monitor HTTP-level information when accessing external HTTPS services, you may want to let your applications issue plain HTTP requests and configure Istio to perform TLS origination.</p>
<p>First you have to <strong>redefine</strong> your ServiceEntry and create VirtualService to rewrite the HTTP request port and add a DestinationRule to perform TLS origination.</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-service1
spec:
hosts:
- external-service1.com
ports:
- number: 80
name: http-port
protocol: HTTP
- number: 443
name: http-port-for-tls-origination
protocol: HTTP
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: external-service1
spec:
hosts:
- external-service1.com
http:
- match:
- port: 80
route:
- destination:
host: external-service1.com
port:
number: 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: external-service1
spec:
host: external-service1.com
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: SIMPLE # initiates HTTPS when accessing external-service1.com
EOF
</code></pre>
<p>The VirtualService redirects HTTP requests on port 80 to port 443 where the corresponding DestinationRule then performs the TLS origination. Unlike the previous ServiceEntry, this time the protocol on port 443 is HTTP, instead of HTTPS, because clients will only send HTTP requests and Istio will upgrade the connection to HTTPS.</p>
<p>I hope it helps.</p>
|
<p>Say I want to deploy a pod with skaffold that will <em>not</em> contain a continuously running/blocking program. E.g. take the <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/getting-started" rel="nofollow noreferrer">getting started example</a> and change <code>main.go</code> to:</p>
<pre><code>package main
import (
"fmt"
)
func main() {
fmt.Println("Hello world!")
}
</code></pre>
<p>If I run <code>skaffold dev</code> with the above modified example and just wait without making any changes to the code, the pod will continuously restart, cycling through statuses <code>Completed</code> -> <code>CrashLoopBackOff</code> -> <code>Completed</code>, with each restart running the program in the pod again. How do I get the pod to run the program once while only rerunning/restarting the pod on changes to the code?</p>
<p>This is with skaffold v1.6.0-docs, ubuntu 18, microk8s 1.16/stable, having set <code>skaffold config set default-repo localhost:32000</code>.</p>
|
<p>First of all I would like to emphasize that there is nothing specific to <strong>Skaffold</strong> here. It is rather related to the very nature of a kubernetes <code>Pod</code>, which is not meant to run to completion but rather to keep running (at least with its default settings).</p>
<p>You can easily verify it by running a <code>Pod</code> from <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pod-templates" rel="nofollow noreferrer">this</a> example in a few different variants:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
</code></pre>
<p>Try to reduce sleep time to some shorter value and you'll observe that it is also constantly changing between <code>Completed</code> and <code>CrashLoopBackOff</code> states. It will also happen when you remove the command which keeps the container up and running.</p>
<p>If you run:</p>
<pre><code>kubectl get pod myapp-pod -o yaml
</code></pre>
<p>you may notice that there is <code>restartPolicy</code> defined in <code>Pod</code> specification and if you don't set it explicitely to a different value, by default it is set to <code>Always</code>. Here you have the reason why your <code>Completed</code> <code>Pods</code> are constantly restarted.</p>
<p><strong>Setting it to <code>Never</code> should give you the result you want to achieve.</strong>:</p>
<pre><code>spec:
restartPolicy: Never
containers:
- name: myapp-container
image: busybox
...
</code></pre>
<p>However bear in mind that you typically won't be using bare <code>Pods</code> for running your workload in <strong>kubernetes</strong>. You will rather use controllers such as <code>Deployment</code> that manage them. As long as <code>Deployment</code> is used to ensure that certain set of <code>Pods</code> is up and running, <strong>for running something to completion</strong> you have in <strong>kubernetes</strong> another controller, named <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a>.</p>
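<p>For completeness, a minimal <code>Job</code> that runs this kind of one-shot workload to completion could look like the sketch below (the image and command are just examples):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  backoffLimit: 0              # don't retry on failure
  template:
    spec:
      restartPolicy: Never     # Jobs require Never or OnFailure
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo Hello world!']
</code></pre>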
|
<p>When a container restarts in Kubernetes (due to exiting with a non-zero exit status), it will be restarted unless I specify <code>restartPolicy: Never</code>.</p>
<p>Question: will the new container get the same container id, or will it have a different one?
In my tests on KIND (K8s-in-Docker) it seems to get the same container id -- is this guaranteed?</p>
<p>Thanks, all.</p>
<p>Alex</p>
|
<p>Both pod and container are ephemeral. If a pod gets recreated it gets a new ID. The same goes with container ID. You can check that in a few steps:</p>
<ul>
<li><p>Create a deployment, for example: <code>kubectl create deployment --image=nginx nginx-app</code></p>
</li>
<li><p>list the pods to see if the freshly created one is up and running: <code>kubectl get pods</code></p>
</li>
<li><p>see the container details of that pod with: <code>kubectl describe pod nginx-app-<pod_id></code>, you can find the Container ID in section <code>Containers:</code></p>
</li>
<li><p>delete the pod with <code>kubectl delete pod nginx-app-<pod_id></code> so the deployment can start another one in his place</p>
</li>
<li><p>again see the details of the new nginx-app pod with <code>kubectl describe pod nginx-app-<new_pod_id></code> and notice that it has a different Container ID</p>
</li>
</ul>
|
<p>I think I am misunderstanding Kubernetes CronJobs. On the CKAD exam there was a question to have a CronJob run every minute, but it should start after an arbitrary amount of time. I don't see any properties for CronJobs or Jobs to have them start after a specific time. Should that be part of the cron string or am I completely misunderstanding?</p>
|
<p>You can schedule your CronJob to start at a specific date/time and then run every minute or however you would like to set it. Keep in mind that Kubernetes CronJobs use the standard 5-field cron syntax (minute, hour, day of month, month, day of week), so the 7-field Quartz expressions produced by generators such as <a href="https://www.freeformatter.com/cron-expression-generator-quartz.html" rel="nofollow noreferrer">this powerful online tool</a> (which include seconds and year fields) need to be translated. For example:</p>
<pre><code>*/10 10-23 * * *
</code></pre>
<p>will schedule your CronJob to run every 10 minutes, but only from 10:00 onwards each day. Or:</p>
<pre><code>*/10 * * * 5
</code></pre>
<p>will schedule your CronJob to run every 10 minutes, but only on Fridays.</p>
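<p>Putting it together, a hedged minimal CronJob manifest using this approach (the image and command are placeholders; on clusters older than 1.21 the apiVersion would be <code>batch/v1beta1</code>) might look like:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: delayed-start-job
spec:
  schedule: "*/1 10-23 * * *"   # every minute, but only from 10:00 onwards
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ['sh', '-c', 'date; echo running']
</code></pre>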
<p>The important thing to keep in mind while using this particular approach is to be aware of the timezone that your cluster is running in:</p>
<blockquote>
<p>All CronJob <code>schedule:</code> times are based on the timezone of the
kube-controller-manager.</p>
<p>If your control plane runs the kube-controller-manager in Pods or bare
containers, the timezone set for the kube-controller-manager container
determines the timezone that the cron job controller uses.</p>
</blockquote>
<p>More info/examples about scheduling can be found below:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax" rel="nofollow noreferrer">Cron schedule syntax</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Running Automated Tasks with a CronJob</a></p>
</li>
</ul>
|
<p>I have a kubernetes cluster with one master and two nodes.
For some reason, a node became unreachable to the cluster, so all pods were moved to the other node. The problem is that the broken node stays in the cluster, but I think the master should remove the node automatically and create another one.</p>
<p>Can anyone help me?</p>
|
<p><strong>Option I:</strong></p>
<p>If you work on GKE and have an HA cluster, a node in the <strong>NotReady</strong> state should have been automatically deleted after a couple of minutes if you have autoscaling mode on. After a while a new node will be added.</p>
<p><strong>Option II:</strong>
If you use kubeadm:</p>
<p>Nodes in the <strong>NotReady</strong> state aren't automatically deleted if you don't have autoscaling mode on and an HA cluster. The node will be continuously checked and restarted.</p>
<p>If you have Prometheus, check the metrics to see what happened on the node in the NotReady state, or execute the following command on the unreachable node:</p>
<p><code> $ sudo journalctl -u kubelet</code></p>
<p>If you want node with <strong>NotReady</strong> state to be deleted you should do it manually:</p>
<p>You should first drain the node and make sure that the node is empty before shutting it down.</p>
<p><code> $ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets</code></p>
<p><code> $ kubectl delete node <node name></code></p>
<p>Then, on the node being removed, reset all kubeadm installed state:</p>
<p><code>$ kubeadm reset</code></p>
<p>The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:</p>
<p><code>$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X</code></p>
<p>If you want to reset the IPVS tables, you must run the following command:</p>
<p><code>$ ipvsadm -C</code></p>
<p>You can also simply shut down the desired node:</p>
<p><code>$ shutdown -h now</code></p>
<p>The <strong>-h</strong> means halt, while <strong>now</strong> means that the instruction should be carried out immediately. Different delays can be used; for example, you might use <code>+6</code> instead, which will tell the computer to run the shutdown procedure in six minutes.</p>
<p>In this case new node will <strong>not</strong> be added automatically.</p>
<p>I hope this helps.</p>
|
<p>I have implemented HPA for all the pods based on CPU and it was working as expected. But when we did maintenance on the worker nodes, it seems that the HPAs got messed up as they failed to identify it. Do I need to disable HPA temporarily during maintenance and bring it up once the maintenance is over?</p>
<p>Please suggest</p>
<p>HPA Manifest -</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: pod-name-cpu
spec:
maxReplicas: 6
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: pod-name
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 60
</code></pre>
|
<p>There is a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#implicit-maintenance-mode-deactivation" rel="nofollow noreferrer">maintenance-mode</a> solution which says:</p>
<blockquote>
<p>You can implicitly deactivate the HPA for a target without the need to
change the HPA configuration itself. If the target's desired replica
count is set to 0, and the HPA's minimum replica count is greater than
0, the HPA stops adjusting the target (and sets the <code>ScalingActive</code>
Condition on itself to <code>false</code>) until you reactivate it by manually
adjusting the target's desired replica count or HPA's minimum replica
count.</p>
</blockquote>
<p><strong>EDIT:</strong></p>
<p>To explain more the above, the things you should do are:</p>
<ul>
<li><p>Scale your deployment to <code>0</code> (see the example commands after this list)</p>
</li>
<li><p>Describe your <code>HPA</code></p>
</li>
<li><p>Notice that under the <code>Conditions:</code> section the <code>ScalingActive</code> will turn to <code>False</code> which will disable <code>HPA</code> until you set the replicas back to desired value</p>
</li>
<li><p>See more <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-horizontal-pod-autoscaler-status-conditions" rel="nofollow noreferrer">here</a></p>
</li>
</ul>
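<p>A hedged sketch of those steps with <code>kubectl</code>, using the Deployment and HPA names from your manifest:</p>
<pre><code># implicitly turn the HPA off for the duration of the maintenance
kubectl scale deployment pod-name --replicas=0
kubectl describe hpa pod-name-cpu    # ScalingActive should now show False

# after the maintenance, bring the workload back; the HPA reactivates
kubectl scale deployment pod-name --replicas=2
</code></pre>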
<p>Also, as you did not specify what exactly happened and what is the desired outcome you might also consider moving your workload into a different node.
<a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-to-perform-disruptive-actions-on-your-cluster" rel="nofollow noreferrer">How to perform Disruptive Actions on your Cluster</a> has a few options for you to choose from.</p>
|
<p>In a Kubernetes pod, I have:</p>
<ul>
<li><code>busybox</code> container running in a <code>dind</code> container</li>
<li><code>fluentd</code> container</li>
</ul>
<p>I understand if <code>dind</code> wants to access <code>fluentd</code>, it needs to simply connect to localhost:9880. But what if <code>busybox</code> wants to access <code>fluentd</code> as the depicted diagram below. Which address should I use?</p>
<p><a href="https://i.stack.imgur.com/T85n2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T85n2.png" alt="dind access to another container"></a></p>
|
<p>These tips may help you:</p>
<p><strong>1. First approach</strong></p>
<p>From inside the docker:latest container, where you were trying to access it originally, it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name dind, therefore <code>curl dind:busybox_port</code> would give you the expected response.</p>
<p>And then you could from inside the docker:dind container (busybox) connect to fluentd, it will be available on localhost:9880.</p>
<p><strong>2. Second approach</strong></p>
<p>Another approach is to <strong>EXPOSE [/<protocol>...]</strong>, and in this case we assume that busybox and fluentd are in different networks.
You can also specify this within a docker run command, such as:</p>
<pre><code>$ docker run --expose=1234 busybox
</code></pre>
<p>But <strong>EXPOSE</strong> will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen you need to publish the ports.</p>
<p><strong>Publish</strong> ports and map them to the host</p>
<p>To publish the port when running the container, use the <strong>-p</strong> flag on docker run to publish and map one or more ports, or the <strong>-P</strong> flag to publish all exposed ports and map them to high-order ports.</p>
<pre><code>$ docker run -p 80:80/tcp -p 80:80/udp busybox
</code></pre>
<p>And then connect from busybox to fluentd using <strong>localhost:9880</strong></p>
<p>You can find more information here: <a href="https://applatix.com/case-docker-docker-kubernetes-part-2/" rel="nofollow noreferrer"><code>docker-in-docker</code></a>.</p>
<p>I hope it helps.</p>
|
<p>I've finished setting up my HA k8s cluster using kubeadm.
Everything seems to be working fine, but after checking with the command <strong>kubectl get componentstatus</strong> I get:</p>
<pre><code>NAME STATUS MESSAGE
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 12
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 12
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
</code></pre>
<p>I see that manifests for scheduler and controller have other ports set up for the health check:</p>
<p><strong>kube-scheduler.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- --port=0
image: k8s.gcr.io/kube-scheduler:v1.18.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
type: FileOrCreate
name: kubeconfig
status: {}
</code></pre>
<p><strong>kube-controller-manager.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --port=0
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
image: k8s.gcr.io/kube-controller-manager:v1.18.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
status: {}
</code></pre>
<p>So these are using ports 10259 and 10257 respectively.
Any idea why kubectl is trying to perform the health check using 10251 and 10252?</p>
<p>version:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>PS: I am able to make deployments and expose services, no problem there.</p>
|
<p>This is a known issue which unfortunately is not going to be fixed as the feature is <a href="https://github.com/kubernetes/enhancements/issues/553#issuecomment-529591505" rel="noreferrer">planned to be deprecated</a>. Also, see this <a href="https://github.com/Azure/AKS/issues/173#issuecomment-664430043" rel="noreferrer">source</a>:</p>
<blockquote>
<p>I wouldn't expect a change for this issue. Upstream Kubernetes wants
to deprecate component status and does not plan on enhancing it. If
you need to check for cluster health using other monitoring sources is
recommended.</p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/93171" rel="noreferrer">kubernetes/kubernetes#93171</a> - 'fix component status server address'
which is getting recommendation to close due to deprecation talk.</p>
<p><a href="https://github.com/kubernetes/enhancements/issues/553" rel="noreferrer">kubernetes/enhancements#553</a> - Deprecate ComponentStatus</p>
<p><a href="https://github.com/kubernetes/kubeadm/issues/2222" rel="noreferrer">kubernetes/kubeadm#2222</a> - kubeadm default init and they are looking to
'start printing a warning in kubect get componentstatus that this API
object is no longer supported and there are plans to remove it.'</p>
</blockquote>
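<p>As a side note, if you still want to check control-plane health directly instead of relying on componentstatus, a minimal sketch using the health ports from your own manifests (plus the API server's aggregated endpoint) could look like this:</p>
<pre><code># scheduler and controller-manager health endpoints (HTTPS, run on the control-plane node itself)
curl -k https://127.0.0.1:10259/healthz
curl -k https://127.0.0.1:10257/healthz

# aggregated health report from the API server
kubectl get --raw='/readyz?verbose'
</code></pre>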
|
<p>I use a kubernetes manifest file to deploy my code. My manifest typically has a number of things like Deployment, Service, Ingress, etc.. How can I perform a type of "rollout" or "restart" of everything that was applied with my manifest?</p>
<p>I know I can update my deployment say by running</p>
<pre><code>kubectl rollout restart deployment <deployment name>
</code></pre>
<p>but what if I need to update all resources like ingress/service? Can it all be done together?</p>
|
<p><em>This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.</em></p>
<p>As <a href="https://stackoverflow.com/users/11923999/burak-serdar">Burak Serdar</a> has already suggested in his comment, you can simply use:</p>
<pre><code>kubectl apply -f your-manifest.yaml
</code></pre>
<p>and it will apply all the changes you made in your manifest file to the resources, which are already deployed.</p>
<p>However note that running:</p>
<pre><code>kubectl rollout restart -f your-manifest.yaml
</code></pre>
<p>makes not much sense as this file contains definitions of resources such as <code>Services</code> to which <code>kubectl rollout restart</code> cannot be applied. In consequence you'll see the following error:</p>
<pre><code>$ kubectl rollout restart -f deployment-and-service.yaml
deployment.apps/my-nginx restarted
error: services "my-nginx" restarting is not supported
</code></pre>
<p>So as you can see it is perfectly possible to run <code>kubectl rollout restart</code> against a file that contains definitions of both resources that support this operation and those which do not support it.</p>
<p>Running <code>kubectl apply</code> instead will result in update of all the resources which definition has changed in your manifest:</p>
<pre><code>$ kubectl apply -f deployment-and-service.yaml
deployment.apps/my-nginx configured
service/my-nginx configured
</code></pre>
|
<p>Question on Memory resource on GKE.</p>
<p>i have a node which has 8G memory and workload with the following resources :</p>
<pre><code>resources:
limits:
memory: 2560Mi
requests:
cpu: 1500m
memory: 2Gi
</code></pre>
<p>recently i’ve noticed many cases where i see on the VM log itself (GCE) messages like the following:</p>
<pre><code>[14272.865068] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=3e209c27d4b26f4f63c4f0f1243aeee928f4f2eb4c180e5b986211e3ae1c0b5a,mems_allowed=0,oom_memcg=/kubepods/burstable/podc90baea5-9ea8-49cd-bd38-2adda4250d17,task_memcg=/kubepods/burstable/podc90baea5-9ea8-49cd-bd38-2adda4250d17/3e209c27d4b26f4f63c4f0f1243aeee928f4f2eb4c180e5b986211e3ae1c0b5a,task=chrome,pid=222605,uid=1001\r\n
[14272.899698] Memory cgroup out of memory: Killed process 222605 (chrome) total-vm:7238644kB, anon-rss:2185428kB, file-rss:107056kB, shmem-rss:0kB, UID:1001 pgtables:14604kB oom_score_adj:864\r\n
[14273.125672] oom_reaper: reaped process 222605 (chrome), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB\r\n
[14579.292816] chrome invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=864\r\n
</code></pre>
<p>which basically indicates the node got OOM and killed one of the services on the node, which in my case is chrome, the service being run as per the workload.
At the exact same time I see an error on the workload (page crash in the browser), but there was no restart of the container.</p>
<p>As I know GKE can evict pods while under memory pressure, I'm trying to figure out the difference between an OOM of the service itself and an OOM-kill of the pod.</p>
<p>Looking at the memory usage in this timeframe, the pod reached a peak of 2.4G and the node itself reached 7.6G.</p>
<p>Is the reason the pod wasn't evicted with an oom-kill error that it did not pass the actual limit?
Wasn't the oom-killer supposed to restart the container? Based on the logs, the specific service in the container was just killed and everything 'remains' the same.</p>
<p>Any help will be appreciated.
Thanks,
CL</p>
|
<p>There are few concepts that needs to be explained here. First would be the importance of <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Requests and Limits</a>. See the example:</p>
<blockquote>
<p>when a process in the container tries to consume more than the allowed
amount of memory, the system kernel terminates the process that
attempted the allocation, with an out of memory (OOM) error.</p>
</blockquote>
<p>The behavior you are facing is well described in <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">this article</a> along with <a href="https://www.youtube.com/watch?v=xjpHggHKm78" rel="nofollow noreferrer">this video</a>.</p>
<p>Than there is the idea of <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">Configuring Out of Resource Handling</a>. Especially the <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="nofollow noreferrer">Node OOM Behavior</a>:</p>
<blockquote>
<p>If the node experiences a system OOM (out of memory) event prior to
the <code>kubelet</code> being able to reclaim memory, the node depends on the
<a href="https://lwn.net/Articles/391222/" rel="nofollow noreferrer">oom_killer</a> to respond.</p>
</blockquote>
<p>I highly recommend getting familiar with the linked materials to get a good understanding about the topics you mentioned. Also, there is a good article showing a live example of it: <a href="https://zhimin-wen.medium.com/memory-limit-of-pod-and-oom-killer-891ee1f1cad8" rel="nofollow noreferrer">Memory Limit of POD and OOM Killer</a>.</p>
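<p>Note that the cgroup OOM killer terminates a single process (the one with the highest score) inside the container; if that is not the container's main (PID 1) process, the container itself keeps running and Kubernetes does not restart it. To verify from the Kubernetes side whether the whole container was OOM-killed, you can inspect its last state (the pod name below is a placeholder):</p>
<pre><code>kubectl describe pod <pod-name> | grep -A 5 "Last State"

# or pull the raw field directly
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState}'
</code></pre>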
|
<p>I have some VMs on top of a private cloud (OpenStack). While trying to make a cluster on the master node, it initiates the cluster on its private IP by default. When I tried to initiate a cluster based on public IP of master node, using <code>--apiserver-advertise-address=publicIP</code> flag, it gives error.</p>
<blockquote>
<p>Initiation phase stops as below:</p>
<p>[wait-control-plane] Waiting for the kubelet to boot up the control
plane as static Pods from directory "/etc/kubernetes/manifests". This
can take up to 4m0s [kubelet-check] Initial timeout of 40s passed.</p>
</blockquote>
<p>I've noticed that I do not see the public IP of VM from inside it (running "ip addr"), but VMs are reachable via their public IPs.</p>
<p>Is there a way to setup a Kubernetes cluster on top of "public IPs" of nodes at all?</p>
|
<p>Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet. So it's recommended to set up the cluster only on private addresses.</p>
<p>When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.</p>
<p>A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project.</p>
<p>This error is likely caused by:</p>
<ul>
<li>The kubelet is not running</li>
<li>The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)</li>
</ul>
<p>If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:</p>
<ul>
<li>systemctl status kubelet</li>
<li>journalctl -xeu kubelet</li>
</ul>
<p>Try to add the floating IPs of the machines to the <strong>/etc/hosts</strong> file on the master node from which you want to deploy the cluster and run the installation again.</p>
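<p>As an illustration only (the floating IPs and hostnames below are placeholders):</p>
<pre><code># /etc/hosts on the master node
203.0.113.10  k8s-master
203.0.113.11  k8s-worker-1
</code></pre>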
|
<p>I am trying to setup minikube in a VM with ubuntu desktop 20.04 LTS installed, using docker driver.</p>
<p>I've followed the steps <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">here</a>, and also taken into consideration the limitations of the docker driver (reported <a href="https://github.com/kubernetes/minikube/issues/9607" rel="noreferrer">here</a>) that have to do with runtime security options. When I try to start minikube, the error I get is: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key.</p>
<p>This is what I have done to have my brand new VM with minikube installed.</p>
<ol>
<li>Install docker</li>
<li>Add my user to the docker group, otherwise minikube start would fail because dockerd runs as root (aka Rootless mode in docker terminology).</li>
<li>Install kubectl (that is not necessary, but I opted for this instead of the embedded kubectl in minikube)</li>
<li>Install minikube</li>
</ol>
<p>When I start minikube, this is what I get:</p>
<pre class="lang-none prettyprint-override"><code>ubuntuDesktop:~$ minikube start
😄 minikube v1.16.0 on Ubuntu 20.04
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=4500MB) ...
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🔥 Deleting "minikube" in docker ...
🤦 StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset051825440 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset051825440: no such file or directory
: exit status 1
🔥 Creating docker container (CPUs=2, Memory=4500MB) ...
😿 Failed to start docker container. Running "minikube delete" may fix it: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
<p>I suspect that the error has to do with the security settings issues with the docker driver, but this seems to be like a dog chasing its tail: if I don't use rootless mode in docker and I attempt to start minikube with sudo (so that docker can also start up with a privileged user), then I get this:</p>
<pre class="lang-none prettyprint-override"><code>ubuntuDesktop:~$ sudo minikube start
[sudo] password for alberto:
😄 minikube v1.16.0 on Ubuntu 20.04
✨ Automatically selected the docker driver. Other choices: virtualbox, none
🛑 The "docker" driver should not be used with root privileges.
💡 If you are running minikube within a VM, consider using --driver=none:
📘 https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
</code></pre>
<p>So either I am missing something or minikube doesn't work at all with the docker driver, which I doubt.</p>
<p>Here is my environment info:</p>
<pre class="lang-none prettyprint-override"><code>ubuntuDesktop:~$ docker version
Client:
Version: 19.03.11
API version: 1.40
Go version: go1.13.12
Git commit: dd360c7
Built: Mon Jun 8 20:23:26 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.12
Git commit: 77e06fd
Built: Mon Jun 8 20:24:59 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit:
docker-init:
Version: 0.18.0
GitCommit: fec3683
ubuntuDesktop:~$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1-dirty
</code></pre>
<p>If someone has minikube working on Ubuntu 20.04 and could share their versions and driver, I would appreciate it. With the info on the minikube and docker sites I don't know what else to check to make this work.</p>
|
<h3>Solution:</h3>
<p>As I mentioned in my comment you may just need to run:</p>
<pre><code>docker system prune
</code></pre>
<p>then:</p>
<pre><code>minikube delete
</code></pre>
<p>and finally:</p>
<pre><code>minikube start --driver=docker
</code></pre>
<p>This should help.</p>
<h3>Explanation:</h3>
<p>Although, as I already mentioned in my comment, it's difficult to say what the issue was in your specific case, such a situation may happen as a consequence of a previous unsuccessful attempt to run your <strong>Minikube</strong> instance.</p>
<p>It sometimes also happens when a different driver is used and the instance runs as a <strong>VM</strong>; basically deleting such a <strong>VM</strong> may help. Usually running <code>minikube delete && minikube start</code> is enough.</p>
<p>In this case, when <code>--driver=docker</code> is used, your <strong>Minikube</strong> instance is configured as a container in your <strong>docker runtime</strong>, but apart from the container itself other things like networking and storage are configured as well.</p>
<p><a href="https://docs.docker.com/engine/reference/commandline/system_prune/" rel="noreferrer"><code>docker system prune</code></a> command <em>removes all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.</em> So what we can say for sure it was one of the above.</p>
<p>Judging by the exact error message:</p>
<pre><code>❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
</code></pre>
<p>I guess it could simply be clearing some <code>cached</code> data that helped in your case and removing broken references to non-existing files. The above message explains quite clearly what couldn't be done, namely <strong>docker</strong> couldn't <code>copy</code> the <code>public ssh key</code> to the destination <code>minikube:/home/docker/.ssh/authorized_keys</code>, as the source file <code>/tmp/tmpf-memory-asset544814591</code> it attempted to copy from simply didn't exist. So it's actually very simple to say <em>what</em> happened, but to tell <em>why</em> it happened might require diving a bit deeper into both <strong>Docker</strong> and <strong>Minikube</strong> internals and analyzing step by step how the <strong>Minikube</strong> instance is provisioned when using <code>--driver=docker</code>.</p>
<p>It's a good point that you may try to analyze your <strong>docker</strong> logs, but I seriously doubt that you will find there the exact reason why the non-existing temporary file <code>/tmp/tmpf-memory-asset544814591</code> was referenced or why it didn't exist.</p>
|
<p>I have recently updated podManagementPolicy field in my StatefulSet from default(OrderedReady) to Parallel.</p>
<ul>
<li>It has significantly reduced the scale-up and scale-down time.</li>
<li>I have not seen any downsides of this change as of now, but I am worried if there can be any scenario where it might create problems for me?</li>
</ul>
<p>I wanted to know if there is any case where i can face any issue?</p>
|
<p>I would like to expand on this topic a bit.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management" rel="noreferrer"><code>OrderedReady</code> pod management</a> behaves as follows:</p>
<ul>
<li><p>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</p>
</li>
<li><p>When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.</p>
</li>
<li><p>Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.</p>
</li>
<li><p>Before a Pod is terminated, all of its successors must be completely shutdown.</p>
</li>
</ul>
<p>While the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management" rel="noreferrer"><code>Parallel</code> pod management</a>:</p>
<blockquote>
<p>tells the StatefulSet controller to launch or terminate all Pods in
parallel, and to not wait for Pods to become Running and Ready or
completely terminated prior to launching or terminating another Pod.
This option only affects the behavior for scaling operations. Updates
are not affected.</p>
</blockquote>
<p>Theoretically you will not face any downtime while updating your app, as the <code>parallel</code> strategy only affects the scaling operations. As already said by Jonas, it is hard to foresee the potential consequences without knowing much about your app and architecture. But generally it is safe to say that if your app's instances do not depend on each other (and thus do not have to wait for each pod to be Running and Ready), the <code>parallel</code> strategy should be safe and quicker than the <code>OrderedReady</code> one. However, if you face any issues with your <code>StatefulSet</code> in the future and would like to analyze it from the Kubernetes side, these <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-stateful-set/" rel="noreferrer">official docs</a> might be helpful for you.</p>
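<p>For reference, a minimal sketch of where this field sits in a StatefulSet manifest (all names below are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  podManagementPolicy: Parallel   # default is OrderedReady
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
</code></pre>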
|
<p>With the release of version 2.x of Rancher we started using v3 of the APIs, but to my despair there is no proper documentation for them.
If we visit the Rancher documentation page <a href="https://rancher.com/docs/rancher/v2.x/en/api/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/api/</a>, we just find a brief intro and not the information about how to use the specific endpoints and what inputs they accept.
For example, we have a v3/cluster endpoint to create the cluster, but it requires "n" number of inputs in the form of strings/objects. How can one find out which attributes are needed and also which attributes map to what in the UI?</p>
<p>There is some documentation available for v2 of the api but things have changed miles with the introduction of v3 of Rancherapi.</p>
<p><strong>Use case:</strong> I need to automate the complete process from cluster creation to Helm chart installation.</p>
<p>I took some help from the medium blog : <a href="https://medium.com/@superseb/adding-custom-nodes-to-your-kubernetes-cluster-in-rancher-2-0-tech-preview-2-89cf4f55808a" rel="nofollow noreferrer">https://medium.com/@superseb/adding-custom-nodes-to-your-kubernetes-cluster-in-rancher-2-0-tech-preview-2-89cf4f55808a</a> to understand the APIs</p>
|
<p>You can find quite a good fragment of the Rancher documentation about the v3 API here:
<a href="https://cdn2.hubspot.net/hubfs/468859/Rancher%202.0%20Architecture%20-%20Doc%20v1.0.pdf" rel="nofollow noreferrer">v3-rancher</a> (starting from page 9).</p>
<p>Source code you can find here: <a href="https://github.com/rancher/validation/tree/master/tests/v3_api" rel="nofollow noreferrer"><code>rancher-v3-api</code></a>.</p>
|
<p>I have a <code>ConfigMap</code> where I have defined some environment variables like <code>log_level..</code> and referencing them in the deployment.</p>
<pre><code> envFrom:
- configMapRef:
name: test_config_map
</code></pre>
<p>After deployment, I have changed some of the values in the config map and restarted the pods.</p>
<pre><code>kubectl edit configmap test_config_map
</code></pre>
<p>When I upgrade the helm chart, the modified values are overridden with the default values.</p>
<p>I assumed that helm v3's 3-way merge would take care of the live state and keep the modified values, but that doesn't seem to be the case.</p>
<p>Is there any way I can keep the modified values even after upgrade?</p>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already mentioned in the comments, the best practice is to have your resource definitions, or in the case of using helm charts, your <code>values.yaml</code> files stored in your code repository and not changing things manually on your cluster as this leads to a configuration drift and makes it hard to restore the exact previous version in case of an outage or other emergency.</p>
<p>See the <a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="nofollow noreferrer">Configuration Best Practices</a>.</p>
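<p>A minimal sketch of that approach (the release, chart and value key names below are assumptions): keep the value in an override file, or pass it explicitly at upgrade time, so the live state never drifts from what is stored in your repository:</p>
<pre><code># values-override.yaml contains e.g.:
#   config:
#     log_level: DEBUG
helm upgrade my-release ./my-chart -f values-override.yaml

# or, for a one-off change
helm upgrade my-release ./my-chart --set config.log_level=DEBUG
</code></pre>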
|
<p>Is there a way to configure Kubernetes SetviceAccount tokens to expire? Following the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">documentation these tokens are JWT</a> (as I was able to also check it using a <a href="https://jwt.io/#debugger-io" rel="nofollow noreferrer">JWT debugger</a>). <a href="https://www.rfc-editor.org/rfc/rfc7519#section-4.1.4" rel="nofollow noreferrer">Following the specification JWT specifies expiration</a> but so far I was not able to find out how I can convince Kubernetes create tokens with this header.</p>
<p>Any thoughts?</p>
|
<p>Currently the default service account JWT tokens in Kubernetes are considered “forever” tokens. They don’t expire and are valid for as long as the service account exists. I fear that your goal might not be possible to achieve from the Kubernetes side.</p>
<p>I am posting this answer as a community wiki. Feel free to expand it if you know how to approach it from another side.</p>
<p>I hope this helps.</p>
|
<p>I would like to know how to get the current or last-read metric value of CPU and memory usage of all pods.</p>
<p>I tried to call the Hawkular endpoint. I went into the browser developer mode by hitting F12 and took this endpoint from the list of calls that are made when the metrics page of a pod is loaded.</p>
<pre><code>https://metrics.mydev.abccomp.com/hakular/metrics/gauges/myservice%dfjlajdflk-lkejre-12112kljdfkl%2Fcpu%2Fusage_rate/data?bucketDuration=1mn&start=-1mn
</code></pre>
<p>However, this will give me the CPU usage metrics for the last minute, for that particular pod. I am trying to see if there is a command or another way that will give me only the current snapshot of CPU usage and memory stats of all the pods collectively, like below:</p>
<pre><code>pod memory usage memory max cpu usage cpu max
pod1 0.4 mb 2.0 mb 20 m cores 25 m cores
pod2 1.5 mb 2.0 mb 25 m cores 25 m cores
</code></pre>
|
<p>To see the pods that use the most CPU and memory you can use the <a href="https://www.mankier.com/1/kubectl-top-pod" rel="nofollow noreferrer">kubectl top</a> command, but it doesn't sort yet and is also missing the quota limits and requests per pod. You only see the current usage.</p>
<p>Execute command below:</p>
<pre><code>$ kubectl top pods --all-namespaces --containers=true
</code></pre>
<p>Because of these limitations, but also because you want to gather and store this resource usage information on an ongoing basis, a monitoring tool comes in handy. This allows you to analyze resource usage both in real time and historically, and also lets you alert on capacity bottlenecks.</p>
<p>To work around the problem "Error from server (Forbidden): unknown (get services http:heapster)":</p>
<p>Make sure that the heapster deployment also installs the Service for heapster, otherwise you will have to do it manually.</p>
<p>E.g.:</p>
<pre><code>$ kubectl create -f /dev/stdin <<SVC
apiVersion: v1
kind: Service
metadata:
name: heapster
namespace: kube-system
spec:
selector:
whatever-label: is-on-heapster-pods
ports:
- name: http
port: 80
targetPort: whatever-is-heapster-is-listening-on
SVC
</code></pre>
|
<p>Hi, I am new to Elasticsearch.</p>
<p>I searched a lot but did not find documentation/articles regarding autoscaling <code>Elasticsearch</code> using <code>kubernetes</code> Vertical Pod Autoscaling.</p>
<p>I need to know whether I can do <code>VPA</code> with <code>Elasticsearch</code>?</p>
|
<p>According to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler#limitations_for_vertical_pod_autoscaling" rel="nofollow noreferrer">official google docs</a> regarding the limitations of the Vertical Pod Autoscaling:</p>
<blockquote>
<p>Vertical Pod autoscaling is not yet ready for use with JVM-based
workloads due to limited visibility into actual memory usage of the
workload.</p>
</blockquote>
<p>and to the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/6.8/setup.html#jvm-version" rel="nofollow noreferrer">Elasticsearch documentation</a>:</p>
<blockquote>
<p>Elasticsearch is built using Java, and requires at least Java 8 in
order to run. Only Oracle’s Java and the OpenJDK are supported. The
same JVM version should be used on all Elasticsearch nodes and
clients.</p>
</blockquote>
<p>So I am afraid that combining these two is not a recommended way.</p>
|
<p>I am adding a node to the Kubernetes cluster using flannel. Here are the nodes in my cluster:
<code>kubectl get nodes</code></p>
<pre><code>NAME STATUS ROLES AGE VERSION
jetson-80 NotReady <none> 167m v1.15.0
p4 Ready master 18d v1.15.0
</code></pre>
<p>This machine is reachable through the same network. When joining the cluster, Kubernetes pulls some images, among others k8s.gcr.io/pause:3.1, but for some reason it fails to pull the images:</p>
<pre><code>Warning FailedCreatePodSandBox 15d
kubelet,jetson-81 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: read tcp 192.168.8.81:58820->108.177.126.82:443: read: connection reset by peer
</code></pre>
<p>The machine is connected to the internet but only <code>wget</code> command works, not <code>ping</code></p>
<p>I tried to pull images elsewhere and copy them to the machine.</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.0 d235b23c3570 2 months ago 82.4MB
quay.io/coreos/flannel v0.11.0-arm64 32ffa9fadfd7 6 months ago 53.5MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 20 months ago 742kB
</code></pre>
<p>Here are the list of pods on the master :</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-gmsz7 1/1 Running 0 2d22h
coredns-5c98db65d4-j6gz5 1/1 Running 0 2d22h
etcd-p4 1/1 Running 0 2d22h
kube-apiserver-p4 1/1 Running 0 2d22h
kube-controller-manager-p4 1/1 Running 0 2d22h
kube-flannel-ds-amd64-cq7kz 1/1 Running 9 17d
kube-flannel-ds-arm64-4s7kk 0/1 Init:CrashLoopBackOff 0 2m8s
kube-proxy-l2slz 0/1 CrashLoopBackOff 4 2m8s
kube-proxy-q6db8 1/1 Running 0 2d22h
kube-scheduler-p4 1/1 Running 0 2d22h
tiller-deploy-5d6cc99fc-rwdrl 1/1 Running 1 17d
</code></pre>
<p>but it didn't work either when I check the associated <code>flannel</code> pod <code>kube-flannel-ds-arm64-4s7kk</code>:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 66s default-scheduler Successfully assigned kube-system/kube-flannel-ds-arm64-4s7kk to jetson-80
Warning Failed <invalid> kubelet, jetson-80 Error: failed to start container "install-cni": Error response from daemon: cannot join network of a non running container: 68ffc44cf8cd655234691b0362615f97c59d285bec790af40f890510f27ba298
Warning Failed <invalid> kubelet, jetson-80 Error: failed to start container "install-cni": Error response from daemon: cannot join network of a non running container: a196d8540b68dc7fcd97b0cda1e2f3183d1410598b6151c191b43602ac2faf8e
Warning Failed <invalid> kubelet, jetson-80 Error: failed to start container "install-cni": Error response from daemon: cannot join network of a non running container: 9d05d1fcb54f5388ca7e64d1b6627b05d52aea270114b5a418e8911650893bc6
Warning Failed <invalid> kubelet, jetson-80 Error: failed to start container "install-cni": Error response from daemon: cannot join network of a non running container: 5b730961cddf5cc3fb2af564b1abb46b086073d562bb2023018cd66fc5e96ce7
Normal Created <invalid> (x5 over <invalid>) kubelet, jetson-80 Created container install-cni
Warning Failed <invalid> kubelet, jetson-80 Error: failed to start container "install-cni": Error response from daemon: cannot join network of a non running container: 1767e9eb9198969329eaa14a71a110212d6622a8b9844137ac5b247cb9e90292
Normal SandboxChanged <invalid> (x5 over <invalid>) kubelet, jetson-80 Pod sandbox changed, it will be killed and re-created.
Warning BackOff <invalid> (x4 over <invalid>) kubelet, jetson-80 Back-off restarting failed container
Normal Pulled <invalid> (x6 over <invalid>) kubelet, jetson-80 Container image "quay.io/coreos/flannel:v0.11.0-arm64" already present on machine
</code></pre>
<p>I still can't identify if it's a Kubernetes or Flannel issue and haven't been able to solve it despite multiple attempts. Please let me know if you need me to share more details</p>
<p><strong>EDIT</strong>:</p>
<p>using <code>kubectl describe pod -n kube-system kube-proxy-l2slz</code> :</p>
<pre><code> Normal Pulled <invalid> (x67 over <invalid>) kubelet, ahold-jetson-80 Container image "k8s.gcr.io/kube-proxy:v1.15.0" already present on machine
Normal SandboxChanged <invalid> (x6910 over <invalid>) kubelet, ahold-jetson-80 Pod sandbox changed, it will be killed and re-created.
Warning FailedSync <invalid> (x77 over <invalid>) kubelet, ahold-jetson-80 (combined from similar events): error determining status: rpc error: code = Unknown desc = Error: No such container: 03e7ee861f8f63261ff9289ed2d73ea5fec516068daa0f1fe2e4fd50ca42ad12
Warning BackOff <invalid> (x8437 over <invalid>) kubelet, ahold-jetson-80 Back-off restarting failed container
</code></pre>
|
<p>Your problem may be caused by the multiple sandbox containers on your node. Try to restart the kubelet:</p>
<pre><code>$ systemctl restart kubelet
</code></pre>
<p>Check if you have generated and copied the public key to the right node to establish a connection between them: <a href="https://www.ssh.com/ssh/keygen/" rel="nofollow noreferrer">ssh-keygen</a>.</p>
<p>Please make sure the firewall/security groups allow traffic on UDP port 58820.
Look at the flannel logs and see if there are any errors there but also look for "Subnet added: " messages. Each node should have added the other two subnets.</p>
<p>While running ping, try to use <strong>tcpdump</strong> to see where the packets get dropped. </p>
<p>Try src flannel0 (icmp), src host interface (udp port 58820), dest host interface (udp port 58820), dest flannel0 (icmp), docker0 (icmp).</p>
<p>Here is useful documentation: <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">flannel-documentation</a>.</p>
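<p>A minimal sketch of such a capture (the interface names are the defaults mentioned above and may differ on your node):</p>
<pre><code># watch ICMP on the flannel interface while pinging from the other node
tcpdump -ni flannel0 icmp

# watch the host interface for the UDP traffic on the port suggested above
tcpdump -ni eth0 udp port 58820
</code></pre>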
|
<p>I currently have a CronJob that schedules a job at some period of time and runs in a pattern. I want to export the logs of each pod run to a file at the path <code>temp/logs/FILENAME</code>,
with <code>FILENAME</code> being the timestamp at which the run was created. How can I do that? Hopefully someone can provide a solution. If you need to add a script, please use Python or shell commands. Thank you.</p>
|
<p>According to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">Kubernetes Logging Architecture</a>:</p>
<blockquote>
<p>In a cluster, logs should have a separate storage and lifecycle
independent of nodes, pods, or containers. This concept is called
cluster-level logging.</p>
<p>Cluster-level logging architectures require a separate backend to
store, analyze, and query logs. Kubernetes does not provide a native
storage solution for log data. Instead, there are many logging
solutions that integrate with Kubernetes.</p>
</blockquote>
<p>Which brings us to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures" rel="nofollow noreferrer">Cluster-level logging architectures</a>:</p>
<blockquote>
<p>While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:</p>
<ul>
<li><p>Use a node-level logging agent that runs on every node.</p>
</li>
<li><p>Include a dedicated sidecar container for logging in an application pod.</p>
</li>
<li><p>Push logs directly to a backend from within an application.</p>
</li>
</ul>
</blockquote>
<p>Kubernetes does not provide log aggregation of its own. Therefore, you need a local agent to gather the data and send it to the central log management. See some options below:</p>
<ul>
<li><p><a href="https://github.com/fluent/fluentd#fluentd-open-source-log-collector" rel="nofollow noreferrer">Fluentd</a></p>
</li>
<li><p><a href="https://www.elastic.co/elastic-stack/" rel="nofollow noreferrer">ELK Stack</a></p>
</li>
</ul>
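<p>If a full logging backend is more than you need, a minimal sketch of a shell script that dumps the newest Job pod's logs into a timestamped file could look like this (the label selector and target path are assumptions, not part of any standard setup):</p>
<pre><code>#!/bin/sh
# write the logs of the most recently created Job pod to temp/logs/<timestamp>.log
# note: "-l job-name" matches any pod created by a Job; add a CronJob-specific label to narrow it down
mkdir -p temp/logs
POD=$(kubectl get pods -l job-name --sort-by=.metadata.creationTimestamp -o name | tail -n 1)
kubectl logs "$POD" > "temp/logs/$(date +%Y%m%d-%H%M%S).log"
</code></pre>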
|
<p>I am creating a config map as below</p>
<p><code>kubectl create configmap testconfigmap --from-file=testkey=/var/opt/testfile.txt</code></p>
<p>As I am using helm charts, I would like to create the config map using a YAML file instead of running kubectl.
I went through <a href="https://stackoverflow.com/questions/53429486/kubernetes-how-to-define-configmap-built-using-a-file-in-a-yaml">Kubernetes - How to define ConfigMap built using a file in a yaml?</a> and saw that we can use <code>.Files.Get</code> to access the files.
But then testfile.txt needs to be a part of the helm chart. I would like to have something like</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: testconfigmap
data:
fromfile: |-
{{ .Files.Get "/var/opt/testfile.txt" | indent 4 }}
</code></pre>
<p>It works when "testfile.txt" is under the main helm directory. So, <code>{{ .Files.Get "testfile.txt" | indent 4 }}</code> works but <code>{{ .Files.Get "/var/opt/testfile.txt" | indent 4 }}</code> doesn't. With custom path, the value for the ConfigMap is empty.</p>
<p>Is is possible to place the file at a custom path outside the helm folder, so I can define my path in Values.yaml and read it in my ConfigMap yaml ?</p>
|
<p><em>This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.</em></p>
<p>As <a href="https://stackoverflow.com/users/225016/mdaniel">mdaniel</a> has already stated in his comment:</p>
<blockquote>
<p><em>Is is possible to place the file at a custom path outside the helm
folder</em> no, because <a href="https://helm.sh/docs/chart_template_guide/accessing_files/" rel="noreferrer">helm considers that a security risk</a> – mdaniel 2
days ago</p>
</blockquote>
<p>You can also compare it with <a href="https://github.com/helm/helm/issues/3276" rel="noreferrer">this</a> feature request on <strong>GitHub</strong> where you can find very similar requirement described in short e.g. in <a href="https://github.com/helm/helm/issues/3276#issuecomment-353675624" rel="noreferrer">this comment</a>:</p>
<blockquote>
<p>I have this exact need. My chart publishes a secret read from file at
/keybase. This file is deliberately not in the chart.</p>
<p>I believe files for .Files.Get should not be assumed to be inside the
chart ...</p>
</blockquote>
<p>One interesting <a href="https://github.com/helm/helm/issues/3276#issuecomment-353718344" rel="noreferrer">comment</a>:</p>
<blockquote>
<p>lenalebt commented on Dec 23, 2017 I am quite sure .<code>Files.Get</code> not
being able to access the file system arbitrarily is a security
feature, so I don't think the current behaviour is wrong - it just
does not fulfill all use cases.</p>
</blockquote>
<p>This issue was created quite long time ago (Dec 19, 2017) but has been recently reopened. There are even <a href="https://github.com/helm/helm/issues/3276#issuecomment-607952569" rel="noreferrer">some specific proposals on how it could be handled</a>:</p>
<blockquote>
<p>titou10titou10 commented on Apr 2 @misberner can you confirm that
using--include-dir =will allow us to use
.Files.Glob().AsConfig(), and so create a ConfigMap with one
entry in the CM per file in?</p>
<p>@misberner misberner commented on Apr 2 Yeah that's the idea. An open
question from my point of view is whether an --include-dir with a
specified introduces an overlay, or shadows everything under
/ from previous args and from the bundle itself. I'm not super
opinionated on that one but would prefer the former.</p>
</blockquote>
<p>The most recent comments give some hope that this feature might become available in future releases of helm.</p>
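<p>As a practical workaround that works today, you can inject the file's content from an arbitrary path at install/upgrade time with <code>--set-file</code> and reference it as an ordinary value instead of using <code>.Files.Get</code> (the value key <code>testFileContent</code> is a hypothetical name):</p>
<pre><code>helm install testrelease ./mychart --set-file testFileContent=/var/opt/testfile.txt
</code></pre>
<p>Inside the ConfigMap template you would then use <code>{{ .Values.testFileContent | indent 4 }}</code> in place of the <code>.Files.Get</code> call.</p>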
|
<p>I am trying to implement a Kubernetes network policy in my application based on domain name, to control the egress and ingress calls to the pod. I found <a href="https://github.com/kubernetes/kubernetes/issues/50453" rel="nofollow noreferrer">DNSSelector</a>, but it seems from the <a href="https://github.com/kubernetes/kubernetes/issues/50453#issuecomment-368334028" rel="nofollow noreferrer">last comment</a> there that this feature is not implemented in Kubernetes. <br/>I also explored Calico, but there this feature comes under <a href="https://docs.projectcalico.org/security/calico-enterprise/egress-access-controls" rel="nofollow noreferrer">calico-enterprise</a> and hence is paid. Can someone let me know if there is any other way to achieve this, or whether DNSSelector is being implemented in Kubernetes now?</p>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>Unfortunately, the <a href="https://github.com/kubernetes/kubernetes/issues/50453" rel="nofollow noreferrer">DNSSelector</a> is not yet implemented in Kubernetes (nor being implemented).</p>
<p><a href="https://docs.projectcalico.org/security/calico-enterprise/egress-access-controls" rel="nofollow noreferrer">Calico Enterprise</a> supports such functionality if you wish to pay for it.</p>
|
<p>I have a Kubernetes cluster deployed locally to a node prepped by kubeadm.
I am experimenting with one of the pods. This pod fails to deploy, however I can't locate the cause of it. I have guesses as to what the problem is but I'd like to see something related in the Kubernetes logs</p>
<p>Here's what i have tried:</p>
<pre><code>$kubectl logs nmnode-0-0 -c hadoop -n test
</code></pre>
<pre><code>Error from server (NotFound): pods "nmnode-0-0" not found
</code></pre>
<pre><code>$ kubectl get event -n test | grep nmnode
(empty results here)
</code></pre>
<pre><code>$ journalctl -m |grep nmnode
</code></pre>
<p>and I get a bunch of repeated entries like the following. It talks about killing the pod but it gives no reason whatsoever for it</p>
<pre><code>Aug 08 23:10:15 jeff-u16-3 kubelet[146562]: E0808 23:10:15.901051 146562 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nmnode-0-0.15b92c3ff860aed6", GenerateName:"", Namespace:"test", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nmnode-0-0", UID:"743d2876-69cf-43bc-9227-aca603590147", APIVersion:"v1", ResourceVersion:"38152", FieldPath:"spec.containers{hadoop}"}, Reason:"Killing", Message:"Stopping container hadoop", Source:v1.EventSource{Component:"kubelet", Host:"jeff-u16-3"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf4b616dacae12d6, ext:2812562895486, loc:(*time.Location)(0x781e740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf4b616dacae12d6, ext:2812562895486, loc:(*time.Location)(0x781e740)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "nmnode-0-0.15b92c3ff860aed6" is forbidden: unable to create new content in namespace test because it is being terminated' (will not retry!)
</code></pre>
<p>The shorted version of the above message is this:</p>
<pre><code>Reason:"Killing", Message:"Stopping container hadoop",
</code></pre>
<p>The cluster is still running. Do you know how I can get to the bottom of this?</p>
|
<p>Try to execute command below:</p>
<pre><code>$ kubectl get pods --all-namespaces
</code></pre>
<p>Check whether your pod was created in a different namespace.</p>
<p>The most common reasons for pod failures:</p>
<p><strong>1.</strong> The container was never created because it failed to pull the image.</p>
<p><strong>2.</strong> The container never existed in the runtime, and the error reason is not in the "special error list", so the containerStatus was never set and kept as "no state".</p>
<p><strong>3.</strong> The container was then treated as "Unknown" and the pod was reported as Pending without any reason.
The containerStatus was always "no state" after each syncPod(), so the status manager could never delete the pod even though the DeletionTimestamp was set.</p>
<p>Useful article: <a href="https://kukulinski.com/10-most-common-reasons-kubernetes-deployments-fail-part-1/" rel="nofollow noreferrer">pod-failure</a>.</p>
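<p>Also note that the journal entry you pasted says the <code>test</code> namespace is being terminated ("unable to create new content in namespace test because it is being terminated"), so it is worth checking the namespace state as well; a quick sketch:</p>
<pre><code>kubectl get namespace test -o jsonpath='{.status.phase}'
# "Terminating" here would explain why new pods and events cannot be created in it

kubectl get events -n test --sort-by=.lastTimestamp
</code></pre>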
|
<p>I would like to control the number of revisions in the result set of this command in my k8s cluster:</p>
<pre><code>kubectl rollout history deployment.v1.apps/<<my_deployment>>
</code></pre>
<p>Here it is what I have:</p>
<pre><code> REVISION CHANGE-CAUSE
10 set app version to 1.1.10
11 set app version to 1.1.11
12 set app version to 1.1.12
13 set app version to 1.1.13
14 set app version to 1.1.14
15 set app version to 1.1.15
16 set app version to 1.1.16
17 set app version to 1.1.17
18 set app version to 1.1.18
19 set app version to 1.1.19
20 set app version to 1.1.20
21 set app version to 1.1.21
</code></pre>
<p>I would like to have only:</p>
<pre><code>21 set app version to 1.1.21
</code></pre>
<p>Is there a magical command like:</p>
<pre><code>kubectl rollout history clean deployment.v1.apps/<<my_deployment>>
</code></pre>
|
<p>Yes, as per <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy" rel="nofollow noreferrer">the documentation</a>, it can be done by setting <code>.spec.revisionHistoryLimit</code> in your <code>Deployment</code> to <code>0</code>:</p>
<blockquote>
<h2>Clean up Policy<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy" rel="nofollow noreferrer"></a></h2>
<p>You can set <code>.spec.revisionHistoryLimit</code> field in a Deployment to
specify how many old ReplicaSets for this Deployment you want to
retain. The rest will be garbage-collected in the background. By
default, it is 10.</p>
<blockquote>
<p><strong>Note:</strong> Explicitly setting this field to 0, will result in cleaning up all the history of your Deployment thus that Deployment
will not be able to roll back.</p>
</blockquote>
</blockquote>
<p>The easiest way to do it is by <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">patching</a> your <code>Deployment</code>. It can be done by executing the following command:</p>
<pre><code>kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
</code></pre>
<p>Then you can set it back to the previous value:</p>
<pre><code>kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
</code></pre>
<h3>UPDATE:</h3>
<blockquote>
<p>Thx, I have already try this. The history revision table is still
present. The only way I have found is to delete the deployment
configuration. – Sunitrams 20 mins ago</p>
</blockquote>
<p>Are you sure you did it the same way ? 🤔 Take a quick look and see how it works in my case:</p>
<pre><code>$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
4 <none>
</code></pre>
<p>After running:</p>
<pre><code>$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
deployment.apps/nginx-deployment patched
</code></pre>
<p>the revision history is reduced to the latest one:</p>
<pre><code>$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
</code></pre>
<p>When I set <code>.spec.revisionHistoryLimit</code> back to 10:</p>
<pre><code>$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
deployment.apps/nginx-deployment patched
</code></pre>
<p>there is still only latest revision:</p>
<pre><code>$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
</code></pre>
|