<p>According to the <a href="https://docs.docker.com/network/iptables/" rel="nofollow noreferrer">docker docs</a> (emphasis mine):</p>
<blockquote>
<p>On Linux, Docker manipulates iptables rules to provide network isolation. While this is an implementation detail (...) <em>you should not modify</em> the rules Docker inserts into your iptables policies ...</p>
</blockquote>
<p>and</p>
<blockquote>
<p>It is possible to set the iptables key to false in the Docker engine’s configuration file at /etc/docker/daemon.json, but <em>this option is not appropriate for most users</em>. It is not possible to completely prevent Docker from creating iptables rules, and creating them after-the-fact is <em>extremely involved</em> and beyond the scope of these instructions. Setting iptables to false will more than likely <em>break container networking</em> for the Docker engine.</p>
</blockquote>
<p>The docs make it rather clear that you shouldn't mess with these options, but they don't give any further information. So, the question is: what exactly are the problems with disabling Docker's iptables manipulation?
How will that affect running:</p>
<ul>
<li>standalone containers?</li>
<li>docker-compose?</li>
<li><a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">docker-swarm mode</a>?</li>
<li>kubernetes?</li>
</ul>
<p>I am assuming that container internet connectivity will be handled by setting up NAT rules for <code>docker0</code> manually, as described e.g. <a href="https://blog.daknob.net/debian-firewall-docker/" rel="nofollow noreferrer">here</a>.</p>
| pmav99 | <p>I decided to answer part of the question related to Kubernetes.<br />
Typical Kubernetes network setup relies on third-party tools (<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">CNI plugins</a>) that comply with the <a href="https://github.com/containernetworking/cni/blob/master/SPEC.md#container-network-interface-specification" rel="nofollow noreferrer">Container Network Interface Specification</a>. You can find different Kubernetes networking options
in the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">Kubernetes networking model</a> documentation.</p>
<p>Kubernetes has its own set of <code>iptables</code> rules managed by <code>kube-proxy</code> to do all kinds of filtering and NAT between pods and services. The most important chains are <code>KUBE-SERVICES</code>, <code>KUBE-SVC-*</code> and <code>KUBE-SEP-*</code> (see: <a href="https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/#kubernetes-networking-basics" rel="nofollow noreferrer">Kubernetes networking basics</a> ).</p>
<p>It’s also important to know that Kubernetes doesn't need to use the Docker default bridge (<code>docker0</code>), as it uses CNI for network setup. Each Kubernetes CNI plugin works in a slightly different way, so it's better to check the detailed concepts in its official documentation.</p>
| matt_j |
<p>I am deploying some pods in Azure Kubernetes Service. When I deploy the pods with CPU requests of 100m, I can see that the 5 pods are running. In this state I run some performance tests and benchmark my results.
Then I redeploy the pods with CPU requests of 1 CPU and run the same tests again. I can see that the pods are created successfully and are in a running state in both cases.
Shouldn't I see better performance results? Can someone please explain? Below is the deployment file. The CPU request for the first test is 100m and for the second it is 1. If no performance difference is expected, how can I improve performance?</p>
<pre><code>resources:
limits:
cpu: 3096m
memory: 2Gi
requests:
cpu: 100m
memory: 1Gi
</code></pre>
| ckv | <p><code>CPU requests</code> are mainly important for the <code>kube-scheduler</code> to identify the best node on which to place a pod. If you set <code>CPU requests = 1</code> for every workload, there will soon be no capacity left to schedule new pods.</p>
<p>Furthermore, assigning a higher CPU <code>request</code> to a pod does not automatically mean that the container/application will consume it.</p>
<p><code>CPU limits</code>, on the other hand, can be responsible for CPU throttling in Kubernetes because they limit the time pods can consume the CPU.</p>
<p><a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="nofollow noreferrer">Here</a> is a great article about it.</p>
<p>Basically, there are a lot of articles about not limiting the CPU to avoid kernel throttling, but from my experience the throttling of a pod is less harmful than a pod going wild and consuming the whole CPU of a node. So I would recommend not overcommitting resources and setting requests=limits.</p>
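<p>As a quick illustration (the deployment name and values below are just placeholders), you could align requests and limits with <code>kubectl set resources</code>:</p>
<pre><code># Sketch only - replace <my-deployment> and the values with your own
kubectl set resources deployment <my-deployment> \
  --requests=cpu=1,memory=1Gi \
  --limits=cpu=1,memory=1Gi
</code></pre>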
<p>You can also check the capacity and allocated resources of your nodes:</p>
<p><code>kubectl describe node <node></code>:</p>
<pre><code>Capacity:
cpu: 4
ephemeral-storage: 203070420Ki
memory: 16393308Ki
Allocatable:
cpu: 3860m
ephemeral-storage: 187149698763
memory: 12899420Ki
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource   Requests      Limits
--------   --------      ------
cpu        1080m (27%)   5200m (134%)
memory     1452Mi (11%)  6796Mi (53%)
</code></pre>
| Philip Welz |
<p>I am trying to start a postgres pod on microk8s kubernetes cluster. At the moment the postgres container with all its data is started locally on the host machine.</p>
<p>The question is: Is it possible to map the current volume (from local docker volume ) to the kubernetes pod deployment?</p>
<p>I have used <code>kompose</code> to convert the <code>docker-compose.yml</code> to appropriate <code>.yaml</code> files for kubernetes deployment.</p>
<p>The above-mentioned <code>kompose</code> command creates <code>postgres-deployment.yaml</code>, <code>postgres-service.yaml</code>, and 2 <code>persistentvolumeclaims</code> (from the volumes mapped in the docker-compose file: one for the pg_data and the other one for the init_db script).</p>
<p>Do I need to generate <code>PersistentVolume</code> mappings alongside the <code>PersistentVolumeClaims</code> that were automatically generated by <code>kompose</code>, and how would they look?</p>
<p>EDIT: Using the yaml below I made 2 <code>volumes</code> and 2 <code>volumeclaims</code> for the postgres container, one for the data and one for the init_db script. Running that and then exposing the service endpoints worked.
WARNING: Because the database was running in the Docker container on the host machine and in the Kubernetes pod at the same time, data corruption happened.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/docker/volumes/dummy_pgdata/_data"
</code></pre>
| LexByte | <p>Posted community wiki for better visibility. Feel free to expand it.</p>
<hr />
<p>It is possible to share the same Docker volume with a Kubernetes pod by <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">defining a custom PersistentVolume</a> with <code>storageClassName</code> and <code>hostPath</code> set as below:</p>
<pre><code>storageClassName: manual
hostPath:
path: "/var/lib/docker/volumes/{docker-volume}"
</code></pre>
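<p>A matching <code>PersistentVolumeClaim</code> (the name and size below are only illustrative) can then be created so the pod can bind to that volume, for example:</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim     # illustrative name
spec:
  storageClassName: manual    # must match the PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
</code></pre>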
<p><strong>However, it is not possible to share the same Postgres data files between different Postgres installations.</strong> It may cause data corruption. Check <a href="https://stackoverflow.com/a/43583234/16391991">this answer</a> for more details.</p>
| Mikolaj S. |
<p>I'm new to Azure and k8s and somewhat confused about when to assign rights to which principal.</p>
<p>What's the difference between assigning rights to <code>azurerm_kubernetes_cluster.[name].kubelet_identity[0].object_id</code> vs <code>azurerm_kubernetes_cluster.[name].identity.0.principal_id</code>, and are there any other principals on the cluster that might be relevant in some other situation?</p>
| mibollma | <p><code>azurerm_kubernetes_cluster.[name].kubelet_identity[0].object_id</code> = managed identity of your user node pool (this identity is needed, for example, to access the ACR in order to pull images or to access the AKV via the CSI integration).</p>
<p><code>azurerm_kubernetes_cluster.[name].identity.0.principal_id</code> = managed identity of your AKS (this identity is needed, for example, to add new nodes to the VNet or to use Monitoring/Metrics).</p>
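<p>For example (a sketch using the Azure CLI; both values below are placeholders), granting the kubelet identity pull access to a container registry could look like this:</p>
<pre><code># Sketch only - replace the placeholders with your own values
az role assignment create \
  --assignee <kubelet_identity_object_id> \
  --role AcrPull \
  --scope <acr_resource_id>
</code></pre>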
| Philip Welz |
<p>I am using a test Kubernetes cluster (kubeadm setup with 1 master and 2 nodes). My public IP changes from time to time, and when my public IP changes, I am unable to connect to the cluster and I get the error below:</p>
<pre><code> Kubernetes Unable to connect to the server: dial tcp x.x.x.x:6443: i/o timeout
</code></pre>
<p>I also have a private IP, 10.10.10.10, which stays the same all the time.</p>
<p>I have created kubernetes cluster using below command</p>
<pre><code> kubeadm init --control-plane-endpoint 10.10.10.10
</code></pre>
<p>But it still failed because the certificates are signed for the public IP, and below is the error:</p>
<pre><code> The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
</code></pre>
<p>Can someone help me set up kubeadm so that it allows all IPs, something like 0.0.0.0? I am fine with that from a security point of view since it is a test setup. Or is there any permanent fix?</p>
| Vidya | <p>Since <strong>@Vidya</strong> has already solved this issue by using a static IP address, I decided to provide a Community Wiki answer just for better visibility to other community members.</p>
<p>First of all, it is not recommended to have a frequently changing master/server IP address.<br />
As we can find in the discussion on GitHub <a href="https://github.com/kubernetes/kubernetes/issues/88648" rel="nofollow noreferrer">kubernetes/88648</a> - <code>kubeadm</code> does not provide an easy way to deal with this.</p>
<p>However, there are a few workarounds that can help us, when the IP address on the Kubernetes master node changes.
Based on the discussion <a href="https://github.com/kubernetes/kubeadm/issues/338" rel="nofollow noreferrer">Changing master IP address</a>, I prepared a script that regenerates certificates and re-init master node.</p>
<p>This script might be helpful, but I recommend running one command at a time (it will be safer).<br />
In addition, you may need to customize some steps to your needs:<br />
<strong>NOTE:</strong> In the example below, I'm using Docker as the container runtime.</p>
<pre><code>root@kmaster:~# cat reinit_master.sh
#!/bin/bash
set -e
echo "Stopping kubelet and docker"
systemctl stop kubelet docker
echo "Making backup kubernetes data"
mv /etc/kubernetes /etc/kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup
echo "Restoring certificates"
mkdir /etc/kubernetes
cp -r /etc/kubernetes-backup/pki /etc/kubernetes/
rm /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}
echo "Starting docker"
systemctl start docker
echo "Reinitializing master node"
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
echo "Updating kubeconfig file"
cp /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
<p>Then you need to rejoin the worker nodes to the cluster.</p>
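<p>A fresh join command can be printed on the master node (assuming a default kubeadm setup) and then run on each worker:</p>
<pre><code>kubeadm token create --print-join-command
</code></pre>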
| matt_j |
<p>I'm trying to get the nginx ingress controller load balancer IP in Azure AKS. I figured I would use the kubernetes provider via:</p>
<pre><code>data "kubernetes_service" "nginx_service" {
metadata {
name = "${local.ingress_name}-ingress-nginx-controller"
namespace = local.ingress_ns
}
depends_on = [helm_release.ingress]
}
</code></pre>
<p>However, I'm not seeing the IP address; this is what I get back:</p>
<pre><code>nginx_service = [
+ {
+ cluster_ip = "10.0.165.249"
+ external_ips = []
+ external_name = ""
+ external_traffic_policy = "Local"
+ health_check_node_port = 31089
+ load_balancer_ip = ""
+ load_balancer_source_ranges = []
+ port = [
+ {
+ name = "http"
+ node_port = 30784
+ port = 80
+ protocol = "TCP"
+ target_port = "http"
},
+ {
+ name = "https"
+ node_port = 32337
+ port = 443
+ protocol = "TCP"
+ target_port = "https"
},
]
+ publish_not_ready_addresses = false
+ selector = {
+ "app.kubernetes.io/component" = "controller"
+ "app.kubernetes.io/instance" = "nginx-ingress-internal"
+ "app.kubernetes.io/name" = "ingress-nginx"
}
+ session_affinity = "None"
+ type = "LoadBalancer"
},
]
</code></pre>
<p>However when I pull down the service via <code>kubectl</code> I can get the IP address via:</p>
<pre><code> kubectl get svc nginx-ingress-internal-ingress-nginx-controller -n nginx-ingress -o json | jq -r '.status.loadBalancer.ingress[].ip'
10.141.100.158
</code></pre>
<p>Is this a limitation of the kubernetes provider for AKS? If so, what workaround have other people used? My end goal is to use the IP to configure the application gateway backend.</p>
<p>I guess I can use <code>local-exec</code>, but that seems hacky. However, this might be my only option at the moment.</p>
<p>Thanks,</p>
<p>Jerry</p>
| Gerb | <p>Although I strongly advise against creating resources inside Kubernetes with Terraform, you can do that:</p>
<p>Create a public IP with Terraform -> create the ingress-nginx inside Kubernetes with Terraform and pass <code>annotations</code> and <code>loadBalancerIP</code> with data from your Terraform resources. The final manifest should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
name: ingress-nginx-controller
spec:
loadBalancerIP: <YOUR_STATIC_IP>
type: LoadBalancer
</code></pre>
<p>Terraform could look like this:</p>
<pre><code>resource "kubernetes_service" "ingress_nginx" {
metadata {
name = "tingress-nginx-controller"
annotations {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "${azurerm_resource_group.YOUR_RG.name}"
}
spec {
selector = {
app = <PLACEHOLDER>
}
port {
port = <PLACEHOLDER>
target_port = <PLACEHOLDER>
}
type = "LoadBalancer"
load_balancer_ip = "${azurerm_public_ip.YOUR_IP.ip_address}"
}
}
</code></pre>
| Philip Welz |
<p>I was reading the documentation about <code>kubernetes.io/dockerconfigjson</code>
and I just have a question: is there any security risk in publicly publishing a <code>dockerconfigjson</code>? For example:</p>
<pre><code> data:
.dockerconfigjson: <base64>
</code></pre>
| h4x0r_dz | <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>As suggested by David Maze's comment:</p>
<blockquote>
<p>I'd expect that to usually contain credentials to access your Docker registry...so yes, it'd be a significant security exposure to publish it?</p>
</blockquote>
<p>It's dangerous and not recommended because a Docker <code>config.json</code> imported into Kubernetes is mainly used <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">for keeping the credentials used for pulling images</a> from a private registry.</p>
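<p>Such a secret is typically created from real registry credentials, for example (all values below are placeholders):</p>
<pre><code>kubectl create secret docker-registry myregistrykey \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>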
<p>Even if it's saved in base64 format as in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials" rel="nofollow noreferrer">example from Kubernetes docs</a> (in your example too) it can be easily decoded:</p>
<p><em>my-secret.yaml</em></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
namespace: awesomeapps
data:
.dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Let's decode it:</p>
<pre><code>user@shell:~/ $ cat my-secret.yaml | yq e '.data.".dockerconfigjson"' - | base64 -d
Really really reeeeeeeeeeaaaaaaaaaaaaaaaaaaaaaaaaaaalllllllllllllllllllllllllllllllyyyyyyyyyyyyyyyyyyyy llllllllllllllooooooooooooooooooooooooooonnnnnnnnnnnnnnnnnnnnnnnnggggggggggggggggggg auth keys
</code></pre>
| Mikolaj S. |
<p>Use case:
Get a stream from Kafka and store it in a parquet file using Spark.
Open these parquet files and generate a graph using graphframes.</p>
<p>Infra:
I have a Bitnami Spark setup on Kubernetes connected to Kafka.</p>
<p>The goal is to call spark-submit inside a Kubernetes pod.
That way, all the code runs inside Kubernetes and I don't install Spark outside Kubernetes.</p>
<p>Without Kubernetes, I do the job inside the Spark master container:</p>
<pre><code>docker cp ./Spark/Python_code/edge_stream.py spark_spark_1:/opt/bitnami/spark/edge_stream.py
docker cp ./Spark/Python_code/config.json spark_spark_1:/opt/bitnami/spark/config.json
docker exec spark_spark_1 \
spark-submit \
--master spark://0.0.0.0:7077 \
--deploy-mode client \
--conf spark.cores.max=1 \
--conf spark.executor.memory=1g \
--conf spark.eventLog.enabled=true \
--conf spark.eventLog.dir=/tmp/spark-events \
--conf spark.eventLog.rolling.maxFileSize=256m\
/opt/bitnami/spark/edge_stream.py
</code></pre>
<p>Is it possible to do the same job in kubernetes ?</p>
<p>Best regards</p>
| Sebastien Warichet | <p>Using the exec command of kubectl:</p>
<pre><code>minikube kubectl -- exec my-spark-master-0 -- spark-submit \
--master spark://0.0.0.0:7077 \
--deploy-mode client \
--conf spark.cores.max=1 \
--conf spark.executor.memory=1g \
--conf spark.eventLog.enabled=true \
--conf spark.eventLog.dir=/tmp/spark-events \
--conf spark.eventLog.rolling.maxFileSize=256m\
../Python/edge_stream.py
</code></pre>
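<p>If the script and its config also need to be copied into the pod first (mirroring the <code>docker cp</code> steps; the paths below are assumptions based on the question), <code>kubectl cp</code> can be used:</p>
<pre><code>minikube kubectl -- cp ./Spark/Python_code/edge_stream.py my-spark-master-0:/opt/bitnami/spark/edge_stream.py
minikube kubectl -- cp ./Spark/Python_code/config.json my-spark-master-0:/opt/bitnami/spark/config.json
</code></pre>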
| Sebastien Warichet |
<p>I'm using nginx in a docker container which is serving out static content. It's run in Kubernetes as a sidecar to another service in the same file.</p>
<p>However, the issue is that although the same exact HTML page is being served (I checked using a text comparer) the page looks malformed on the web server (but fine when I render it locally)</p>
<p>Because of this, I think there is an issue with serving some of the CSS, JS, or image files.</p>
<p>Here's part of the Kubernetes deployment</p>
<pre><code>containers:
- image: <OTHER IMAGE>
imagePullPolicy: Always
name: <imagename>
ports:
- containerPort: 8888
- image: <MY NGINX IMAGE>
imagePullPolicy: Always
name: <imagename>
ports:
- containerPort: 80
- containerPort: 443
restartPolicy: Always
</code></pre>
<p><a href="https://paste.mod.gg/isivicazeg.nginx" rel="nofollow noreferrer">The nginx file</a></p>
<p><a href="https://paste.mod.gg/tiloxigamu.cpp" rel="nofollow noreferrer">The Dockerfile</a></p>
<p>Here is the file path of the actual proxy (static content in kiwoon-pages)
<a href="https://i.stack.imgur.com/TICqk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TICqk.png" alt="the file paths" /></a></p>
<p>Here is the static content</p>
<p><a href="https://i.stack.imgur.com/rOW3N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rOW3N.png" alt="file paths of static content" /></a></p>
<p>Is there anything that looks glaringly wrong here? Let me know, thanks!</p>
| tymur999 | <p>Since <strong>@tymur999</strong> has already solved this issue, I decided to provide a Community Wiki answer just for better visibility to other community members.</p>
<p>It's important to know that browsers use the MIME type to choose a suitable display method.
Therefore, the web server must send the correct MIME type in the response's <code>Content-Type</code> header.</p>
<p>In the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types" rel="nofollow noreferrer">MIME types documentation</a>, we can find an important note:</p>
<blockquote>
<p>Important: Browsers use the MIME type, not the file extension, to determine how to process a URL, so it's important that web servers send the correct MIME type in the response's Content-Type header. If this is not correctly configured, browsers are likely to misinterpret the contents of files and sites will not work correctly, and downloaded files may be mishandled.</p>
</blockquote>
<p>In nginx, we can use the <code>types</code> directive to map file name extensions to MIME types of responses (see: <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#types" rel="nofollow noreferrer">NGINX documentation</a>):</p>
<pre><code>Syntax: types { ... }
Default:
types {
text/html html;
image/gif gif;
image/jpeg jpg;
}
</code></pre>
<blockquote>
<p>Context: http, server, location</p>
</blockquote>
<p><strong>NOTE:</strong> A sufficiently full mapping table is distributed with nginx in the <code>mime.types</code> file.</p>
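<p>In a typical installation this file is pulled in from the main configuration, so it is worth verifying that the <code>include</code> line is present (the path may differ on your system):</p>
<pre><code>$ grep "mime.types" /etc/nginx/nginx.conf
    include       /etc/nginx/mime.types;
</code></pre>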
<hr />
<p>As an example, suppose I have a simple website - a single HTML (<code>index.html</code>) and CSS (<code>mystyle.css</code>) file.</p>
<pre><code>$ ls /var/www/html/
index.html mystyle.css
$ cat /var/www/html/index.html
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<link rel="stylesheet" href="mystyle.css"/>
<p>This is a paragraph.</p>
</body>
</html>
$ cat /var/www/html/mystyle.css
body {
background-color: aquamarine;
}
</code></pre>
<p>Without the correct MIME type for CSS, my website doesn't work as expected:<br />
<strong>NOTE:</strong> The <code>text/css</code> MIME type is commented out.</p>
<pre><code>$ grep -Ri -A 3 "types {" /etc/nginx/nginx.conf
types {
text/html html htm shtml;
# text/css css;
}
</code></pre>
<p><a href="https://i.stack.imgur.com/0K12r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0K12r.png" alt="enter image description here" /></a></p>
<p>When the <code>text/css</code> MIME type is properly included, everything works as expected:</p>
<pre><code>grep -Ri -A 3 "types {" /etc/nginx/nginx.conf
types {
text/html html htm shtml;
text/css css;
}
</code></pre>
<p><a href="https://i.stack.imgur.com/EyALT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EyALT.png" alt="enter image description here" /></a></p>
| matt_j |
<p>I have installed a kubernetes cluster on EC2 instances on AWS.</p>
<p>1 master node and 2 worker nodes.</p>
<p>Everything works fine when I connect to the master node and issue commands using <code>kubectl</code>.</p>
<p>But I want to be able to issue <code>kubectl</code> commands from my local machine.
So I copied the contents of <code>.kube/config</code> file from master node to my local machine's <code>.kube/config</code>.</p>
<p>I have only changed the IP address of the <strong>server</strong> because the original file references an internal IP. The file looks like this now:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://35.166.48.257:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
~
</code></pre>
<p>When I try to use a <code>kubectl</code> command from my local machine I get this error :</p>
<p><code>Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257</code></p>
| Tomas.R | <p>This is because the kube-apiserver TLS cert is only valid for <code>10.96.0.1, 172.31.4.108</code> and not for <code>35.166.48.257</code>. There are several options, like telling <code>kubectl</code> to skip TLS verification, but I would not recommend that. The best option would be to regenerate the whole PKI on your cluster.</p>
<p>Both ways are described <a href="https://stackoverflow.com/a/46360852/16776451">here</a></p>
<p>Next time, for a kubeadm cluster, you can use <code>--apiserver-cert-extra-sans=EXTERNAL_IP</code> at cluster init to also add the external IP to the API server TLS cert.</p>
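<p>For example (using the private and public IPs from the question as placeholders):</p>
<pre><code>kubeadm init --control-plane-endpoint 10.10.10.10 --apiserver-cert-extra-sans=35.166.48.257
</code></pre>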
| Philip Welz |
<p>I am wondering if Kubernetes can automatically move pods to another node if that node's resources are critically low, or if that happens only for pods managed by Replica Sets?</p>
<p>In particular:</p>
<ol>
<li>What happens when a <strong>bare pod</strong> (not managed by Replica Sets or similar) is evicted? Is it moved to another node or it is just removed?</li>
<li>In case it is "moved" to a new node, is it really moved or it is recreated? For example, does its <em>age</em> change?</li>
<li>Is it true that only <strong>pods managed by Deployments and Replica Sets</strong> are moved/recreated on a new node while bare pods are simply removed in case of resource shortage?</li>
</ol>
| collimarco | <blockquote>
<ol>
<li>What happens when a <strong>bare pod</strong> (not managed by Replica Sets or similar) is evicted? Is it moved to another node or it is just removed?</li>
</ol>
</blockquote>
<p>Pod is <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">designed as a relatively ephemeral, disposable entity</a>; when it is evicted, it's deleted by a Kubelet agent running on the node. There is no recreating / moving to the other node, it's just removed (for bare pods). The controllers (like Deployment, StatefulSet, DaemonSet) are responsible for <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pods-and-controllers" rel="nofollow noreferrer">placing the replacement pods</a>.</p>
<blockquote>
<ol start="2">
<li>In case it is "moved" to a new node, is it really moved or it is recreated? For example, does its <em>age</em> change?</li>
</ol>
</blockquote>
<p>As I mentioned in the answer to the previous question, in the Kubernetes architecture pods are <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">designed as relatively ephemeral, disposable entities</a>, so there is no "moving". It's recreated as a "fresh" pod on the same or a new node. The <em>age</em> parameter changes; it starts counting from the beginning, as it is a new pod.</p>
<blockquote>
<ol start="3">
<li>Is it true that only <strong>pods managed by Deployments and Replica Sets</strong> are moved/recreated on a new node while bare pods are simply removed in case of resource shortage?</li>
</ol>
</blockquote>
<p>It's true not only for pods managed by Deployments/ReplicaSets but also for some <a href="https://kubernetes.io/docs/concepts/workloads/controllers/" rel="nofollow noreferrer">other controllers</a> (e.g. StatefulSet). When a pod is missing (it got evicted due to resource shortage or there is a rolling update), the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/" rel="nofollow noreferrer">controllers</a> make requests to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">Kubernetes Scheduler</a> to roll out new pods. Bare pods, as answered in the first question, are not recreated.</p>
<p>If you want to read more about this process, check <a href="https://stackoverflow.com/questions/69976108/kubernetes-control-plane-communication/70007320#70007320">my other answer</a>.</p>
| Mikolaj S. |
<p>I'm trying to replicate the <code>kubectl get pods</code> command in Python3 using the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes python library</a>. Except, I'm working with a remote kubernetes cluster, NOT my localhost. The configuration host is a particular web address.</p>
<p>Here's what I tried:</p>
<pre><code> v1 = kubernetes.client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>As recommended in the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">documentation</a>. This however defaults to searching my localhost instead of the specific web address. I know I have access to this web address because the following runs totally 100% as expected:</p>
<pre><code>import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
configuration.api_key_prefix['authorization'] = 'Bearer'
# Defining host is optional and default to http://localhost
configuration.host = "THE WEB HOST I'M USING"
# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
# Create an instance of the API class
api_instance = kubernetes.client.AdmissionregistrationApi(api_client)
try:
api_response = api_instance.get_api_group()
pprint(api_response)
except ApiException as e:
print("Exception when calling AdmissionregistrationApi->get_api_group: %s\n" % e)
</code></pre>
<p>What do you all think? How do I force it to check the pods of that host getting around the <code>localhost</code> default?</p>
| Cam I | <p>I know two solutions that may help in your case.
I will describe both of them and you may choose which one suits you best.</p>
<h3>Using kubeconfig file</h3>
<p>I recommend setting up a <code>kubeconfig</code> file which allows you to connect to a remote cluster.
You can find more information on how to configure it in the documentation: <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Organizing Cluster Access Using kubeconfig Files</a></p>
<p>If you have a <code>kubeconfig</code> file configured, you can use the <a href="https://github.com/kubernetes-client/python-base/blob/master/config/kube_config.py#L782" rel="nofollow noreferrer">load_kube_config()</a> function to load authentication and cluster information from your <code>kubeconfig</code> file.</p>
<p>I've created a simple <code>list_pods_1.py</code> script to illustrate how it may work:</p>
<pre><code>$ cat list_pods_1.py
#!/usr/bin/python3.7
# Script name: list_pods_1.py
import kubernetes.client
from kubernetes import client, config
config.load_kube_config("/root/config") # I'm using file named "config" in the "/root" directory
v1 = kubernetes.client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
$ ./list_pods_1.py
Listing pods with their IPs:
10.32.0.2 kube-system coredns-74ff55c5b-5k28b
10.32.0.3 kube-system coredns-74ff55c5b-pfppk
10.156.15.210 kube-system etcd-kmaster
10.156.15.210 kube-system kube-apiserver-kmaster
10.156.15.210 kube-system kube-controller-manager-kmaster
10.156.15.210 kube-system kube-proxy-gvxhq
10.156.15.211 kube-system kube-proxy-tjxch
10.156.15.210 kube-system kube-scheduler-kmaster
10.156.15.210 kube-system weave-net-6xqlq
10.156.15.211 kube-system weave-net-vjm7j
</code></pre>
<h3>Using Bearer token</h3>
<p>As described in this example - <a href="https://github.com/kubernetes-client/python/blob/6d4587e18064288d031ed9bbf5ab5b8245460b3c/examples/remote_cluster.py" rel="nofollow noreferrer">remote_cluster.py</a>:</p>
<blockquote>
<p>Is it possible to communicate with a remote Kubernetes cluster from a server outside of the cluster without kube client installed on it.The communication is secured with the use of <strong>Bearer token</strong>.</p>
</blockquote>
<p>You can see how to create and use the token in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow noreferrer">Accessing Clusters</a> documentation.</p>
<p>I've created a simple <code>list_pods_2.py</code> script (based on the <a href="https://github.com/kubernetes-client/python/blob/6d4587e18064288d031ed9bbf5ab5b8245460b3c/examples/remote_cluster.py" rel="nofollow noreferrer">remote_cluster.py</a> script) to illustrate how it may work:</p>
<pre><code>$ cat list_pods_2.py
#!/usr/bin/python3.7
import kubernetes.client
from kubernetes import client, config
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# Define the barer token we are going to use to authenticate.
# See here to create the token:
# https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
aToken = "<MY_TOKEN>"
# Create a configuration object
aConfiguration = client.Configuration()
# Specify the endpoint of your Kube cluster
aConfiguration.host = "https://<ENDPOINT_OF_MY_K8S_CLUSTER>"
# Security part.
# In this simple example we are not going to verify the SSL certificate of
# the remote cluster (for simplicity reason)
aConfiguration.verify_ssl = False
# Nevertheless if you want to do it you can with these 2 parameters
# configuration.verify_ssl=True
# ssl_ca_cert is the filepath to the file that contains the certificate.
# configuration.ssl_ca_cert="certificate"
aConfiguration.api_key = {"authorization": "Bearer " + aToken}
# Create a ApiClient with our config
aApiClient = client.ApiClient(aConfiguration)
# Do calls
v1 = client.CoreV1Api(aApiClient)
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" %
(i.status.pod_ip, i.metadata.namespace, i.metadata.name))
$ ./list_pods_2.py
Listing pods with their IPs:
10.32.0.2 kube-system coredns-74ff55c5b-5k28b
10.32.0.3 kube-system coredns-74ff55c5b-pfppk
10.156.15.210 kube-system etcd-kmaster
10.156.15.210 kube-system kube-apiserver-kmaster
10.156.15.210 kube-system kube-controller-manager-kmaster
10.156.15.210 kube-system kube-proxy-gvxhq
10.156.15.211 kube-system kube-proxy-tjxch
10.156.15.210 kube-system kube-scheduler-kmaster
10.156.15.210 kube-system weave-net-6xqlq
10.156.15.211 kube-system weave-net-vjm7j
</code></pre>
<p><strong>NOTE:</strong> As an example, I am using a token for the default service account (you will probably want to use a different <code>ServiceAccount</code>), but for it to work properly this <code>ServiceAccount</code> needs appropriate permissions.<br />
For example, you may add a <code>view</code> role to your <code>ServiceAccount</code> like this:</p>
<pre><code>$ kubectl create clusterrolebinding --serviceaccount=default:default --clusterrole=view default-sa-view-access
clusterrolebinding.rbac.authorization.k8s.io/default-sa-view-access created
</code></pre>
| matt_j |
<p>Afaik, the K8s <code>NetworkPolicy</code> can only allow pods matching a label to do something. I do not want to:</p>
<ul>
<li>Deny all traffic</li>
<li>Allow traffic for all pods except the ones matching my label</li>
</ul>
<p>but instead:</p>
<ul>
<li>Allow all traffic</li>
<li>Deny traffic for pods matching my label</li>
</ul>
<p>How do I do that?</p>
<p>From <code>kubectl explain NetworkPolicy.spec.ingress.from</code>:</p>
<pre><code>DESCRIPTION:
List of sources which should be able to access the pods selected for this
rule. Items in this list are combined using a logical OR operation. If this
field is empty or missing, this rule matches all sources (traffic not
restricted by source). If this field is present and contains at least one
item, this rule allows traffic only if the traffic matches at least one
item in the from list.
</code></pre>
<p>As far as I understand this, we can only allow, not deny.</p>
| User12547645 | <p>As you mentioned in the comments, you are using the Kind tool for running Kubernetes. Instead of <a href="https://github.com/aojea/kindnet" rel="nofollow noreferrer">kindnet CNI plugin</a> (default CNI plugin for Kind) which does not support <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Kubernetes network policies</a>, you can use <a href="https://github.com/projectcalico/cni-plugin" rel="nofollow noreferrer">Calico CNI plugin</a> which support Kubernetes network policies + it has its own, similar solution called <a href="https://docs.projectcalico.org/security/calico-network-policy" rel="nofollow noreferrer">Calico network policies</a>.</p>
<hr />
<p>Example - I will create a cluster with the default kind CNI plugin disabled + NodePort enabled for testing (assuming that you have the <code>kind</code> + <code>kubectl</code> tools already installed):</p>
<p><em>kind-cluster-config.yaml</em> file:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
disableDefaultCNI: true # disable kindnet
podSubnet: 192.168.0.0/16 # set to Calico's default subnet
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 30000
listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
protocol: tcp # Optional, defaults to tcp
</code></pre>
<p>Time for create a cluster using above config:</p>
<pre><code>kind create cluster --config kind-cluster-config.yaml
</code></pre>
<p>When cluster is ready, I will install Calico CNI plugin:</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>I will wait until all calico pods are ready (<code>kubectl get pods -n kube-system</code> command to check). Then, I will create sample <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">nginx deployment</a> + service type NodePort for accessing:</p>
<p><em>nginx-deploy-service.yaml</em></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30000
</code></pre>
<p>Let's apply it: <code>kubectl apply -f nginx-deploy-service.yaml</code></p>
<p>So far so good. Now I will try to access <code>nginx-service</code> using node IP (<code>kubectl get nodes -o wide</code> command to check node IP address):</p>
<pre><code>curl 172.18.0.2:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Okay, it's working.</p>
<p>Now time to <a href="https://docs.projectcalico.org/getting-started/clis/calicoctl/install" rel="nofollow noreferrer">install <code>calicoctl</code></a> and apply some example policy - <a href="https://docs.projectcalico.org/security/tutorials/calico-policy" rel="nofollow noreferrer">based on this tutorial</a> - to block ingress traffic only for pods with label <code>app</code> with value <code>nginx</code>:</p>
<p><em>calico-rule.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
name: default-deny
spec:
selector: app == "nginx"
types:
- Ingress
</code></pre>
<p>Apply it:</p>
<pre><code>calicoctl apply -f calico-rule.yaml
Successfully applied 1 'GlobalNetworkPolicy' resource(s)
</code></pre>
<p>Now I can't reach the address <code>172.18.0.2:30000</code> which was working previously. The policy is working fine!</p>
<p>Read more about calico policies:</p>
<ul>
<li><a href="https://docs.projectcalico.org/security/calico-network-policy" rel="nofollow noreferrer">Get started with Calico network policy</a></li>
<li><a href="https://docs.projectcalico.org/security/tutorials/calico-policy" rel="nofollow noreferrer">Calico policy tutorial</a></li>
</ul>
<p>Also check <a href="https://github.com/kubernetes-sigs/kind/issues/842" rel="nofollow noreferrer">this GitHub topic</a> for more information about NetworkPolicy support in Kind.</p>
<p><strong>EDIT:</strong></p>
<p>It seems the Calico plugin <a href="https://docs.projectcalico.org/security/kubernetes-network-policy" rel="nofollow noreferrer">also supports Kubernetes NetworkPolicy</a>, so you can just install the Calico CNI plugin and apply the following policy:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny
spec:
podSelector:
matchLabels:
app: nginx
policyTypes:
- Ingress
</code></pre>
<p>I tested it and it seems to work fine as well.</p>
| Mikolaj S. |
<p>I have the following Jenkinsfile:</p>
<pre><code>node {
stage('Apply Kubernetes files') {
withKubeConfig([credentialsId: 'jenkins-deployer', serverUrl: 'https://192.168.64.2:8443']) {
sh 'kubectl apply -f '
}
}
}
</code></pre>
<p>While running it, I got "kubectl: not found". I installed the Kubernetes CLI plugin in Jenkins and generated a secret key via <code>kubectl create sa jenkins-deployer</code>. What's wrong here?</p>
| Matty | <p>I know this is a fairly old question, but I decided to describe an easy workaround that might be helpful.<br />
To use the <a href="https://plugins.jenkins.io/kubernetes-cli/" rel="noreferrer">Kubernetes CLI</a> plugin we need to have an executor with <code>kubectl</code> installed.</p>
<p>One possible way to get <code>kubectl</code> is to install it in the Jenkins pipeline like in the snipped below:<br />
<strong>NOTE:</strong> I'm using <code>./kubectl get pods</code> to list all Pods in the default Namespace. Additionally, you may need to change <code>kubectl</code> version (<code>v1.20.5</code>).</p>
<pre><code>node {
stage('List pods') {
withKubeConfig([credentialsId: 'kubernetes-config']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl get pods'
}
}
}
</code></pre>
<p>As a result, in the Console Output, we can see that it works as expected:</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl
...
[Pipeline] sh
+ chmod u+x ./kubectl
[Pipeline] sh
+ ./kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
default-zhxwb   1/1     Running   0          34s
my-jenkins-0    2/2     Running   0          134m
</code></pre>
| matt_j |
<p>I would like to change the default TCP keepalive value in a Kubernetes pod. What's the recommended approach?</p>
| Conundrum | <p>You could do this via sysctls on the pod manifest in AKS/Kubernetes:</p>
<pre><code>spec:
securityContext:
sysctls:
- name: "net.ipv4.tcp_keepalive_time"
value: "45"
</code></pre>
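<p>Note that depending on your Kubernetes version this sysctl may be classified as <em>unsafe</em>, in which case it also has to be allowlisted on the kubelet first (how you pass this flag depends on your cluster or managed offering), for example:</p>
<pre><code>kubelet --allowed-unsafe-sysctls 'net.ipv4.tcp_keepalive_time' ...
</code></pre>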
<p>Here is also further documentation:</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/</a></p>
<p><a href="https://docs.syseleven.de/metakube/de/tutorials/confiugre-unsafe-sysctls" rel="nofollow noreferrer">https://docs.syseleven.de/metakube/de/tutorials/confiugre-unsafe-sysctls</a></p>
| Philip Welz |
<p>I set up an EKS cluster using Terraform. I am trying to set a Route53 record to map my domain name to the load balancer of my cluster.</p>
<p>I set my EKS cluster:</p>
<pre><code>resource "aws_eks_cluster" "main" {
name = "${var.project}-cluster"
role_arn = aws_iam_role.cluster.arn
version = "1.24"
vpc_config {
subnet_ids = flatten([aws_subnet.public[*].id, aws_subnet.private[*].id])
endpoint_private_access = true
endpoint_public_access = true
public_access_cidrs = ["0.0.0.0/0"]
}
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-cluster",
}
)
depends_on = [
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
]
}
</code></pre>
<p>And I have created the following k8s service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
selector:
app: dashboard-backend
type: LoadBalancer
ports:
- protocol: TCP
port: '$PORT'
targetPort: '$PORT'
</code></pre>
<p><strong>As far as I know, once I deploy a k8s service, AWS automatically generates an ALB resource for my service.</strong> So I set up these Route53 resources:</p>
<pre><code>resource "aws_route53_zone" "primary" {
name = var.domain_name
tags = merge(
var.tags,
{
Name = "${var.project}-Route53-zone",
}
)
}
data "kubernetes_service" "backend" {
metadata {
name = "backend-service"
}
}
resource "aws_route53_record" "backend_record" {
zone_id = aws_route53_zone.primary.zone_id
name = "www.api"
type = "A"
ttl = "300"
alias {
name = data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname
zone_id = ??????
evaluate_target_health = true
}
}
</code></pre>
<p>I did get the load balancer host name using <code>data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname</code>, but how can I get its zone ID to use in <code>zone_id</code> key?</p>
| Tal Rofe | <p>You can get the ELB hosted zone ID using the data source <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/elb_hosted_zone_id" rel="nofollow noreferrer"><code>aws_elb_hosted_zone_id</code></a>, as it only depends on the region where you created this ELB. Technically, you can also hardcode this value because these are static values on a regional basis.</p>
<p>Official AWS Documentation on <a href="https://docs.aws.amazon.com/general/latest/gr/elb.html" rel="nofollow noreferrer">Elastic Load Balancing endpoints</a></p>
<pre><code>resource "aws_route53_zone" "primary" {
name = var.domain_name
tags = merge(
var.tags,
{
Name = "${var.project}-Route53-zone",
}
)
}
data "kubernetes_service" "backend" {
metadata {
name = "backend-service"
}
}
## Add data source ##
data "aws_elb_hosted_zone_id" "this" {}
### This will use your aws provider-level region config otherwise mention explicitly.
resource "aws_route53_record" "backend_record" {
zone_id = aws_route53_zone.primary.zone_id
name = "www.api"
type = "A"
ttl = "300"
alias {
name = data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname
zone_id = data.aws_elb_hosted_zone_id.this.id ## Updated ##
evaluate_target_health = true
}
}
</code></pre>
<p>Outside the scope of your question: even though this will hopefully work, I would also suggest you look into <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/integrations/external_dns/" rel="nofollow noreferrer">external-dns</a> for managing DNS with EKS.</p>
| ishuar |
<p>I'm trying to add a rewrite to coredns to point a domain to the cluster loadbalancer (so that the request for that domain gets redirected back into the cluster). I can't seem to find a way to affect k3s' coredns configuration. Is there a way to change it?</p>
<p>(This is to work around <a href="https://github.com/jetstack/cert-manager/issues/1292#issuecomment-757283796" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/1292#issuecomment-757283796</a> where a pod tries to contact another service in the cluster via a DNS name that points to the router's IP, which fails due to how NAT works.)</p>
| tibbe | <p>It is possible to configure <code>CoreDNS</code> to map one domain to another domain by adding a <code>rewrite</code> rule.
Suppose you have the domain <code>example.com</code> and you want that domain to point to <code>google.com</code>.</p>
<p>To do this in <code>CoreDNS</code>, you can use the <code>rewrite</code> plugin.</p>
<p>The <code>CoreDNS</code> configuration is stored in the <code>coredns</code> <code>ConfigMap</code> in the <code>kube-system</code> namespace.
You can edit it using:<br></p>
<pre><code>root@kmaster:~# kubectl edit cm coredns -n kube-system
</code></pre>
<p>Just add one <code>rewrite</code> rule, like in the example below:<br></p>
<pre><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
rewrite name example.com google.com # mapping example.com to google.com
ready
...
</code></pre>
<p>Next, you need to reload <code>CoreDNS</code> to use the new configuration. You may delete the coredns <code>Pod</code> (<code>coredns</code> is deployed as a <code>Deployment</code>, so a new <code>Pod</code> will be created) or you can send it a <code>SIGUSR1</code> to tell it to reload gracefully.</p>
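<p>For example, restarting the <code>coredns</code> Deployment is usually enough:</p>
<pre><code>kubectl -n kube-system rollout restart deployment coredns
</code></pre>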
<p>Finally we can check how it works:</p>
<pre><code>root@kmaster:~# kubectl run -it --rm --image=infoblox/dnstools:latest dnstools
dnstools# host -t A google.com
google.com has address 172.217.21.238
dnstools# host -t A example.com
example.com has address 172.217.21.238
</code></pre>
<p>You can find more information about rewrite plugin in <a href="https://coredns.io/plugins/rewrite/" rel="nofollow noreferrer">Coredns rewrite documentation</a>.</p>
| matt_j |
<p><strong>What happened:<br></strong>
I am trying to create a service endpoint using the <code>externalName</code> spec to allow my microservices running inside the pods to access a local MySQL server on my local host.</p>
<p>This is the relevant section for the yaml file:<br></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: default
spec:
type: ExternalName
externalName: host.minikube.internal
</code></pre>
<p><strong>What you expected to happen:</strong><br>
I expect to be able to connect, but my Spring Boot containers show that the MySQL connection is not working. I have tested the microservices and they work in Docker with the same MySQL database.</p>
<p><strong>How to reproduce it (as minimally and precisely as possible):<br></strong>
Normal installation of minikube and kubernetes, running the <code>dnsutils</code> image from <a href="https://k8s.io/examples/admin/dns/dnsutils.yaml" rel="noreferrer">https://k8s.io/examples/admin/dns/dnsutils.yaml</a> with the mysql service given above.</p>
<p><strong>Anything else we need to know?:<br></strong>
I have tested out the troubleshooting detailed here (<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a>) but it did not resolve the problem. When running:<br></p>
<pre><code>kubectl exec -i -t dnsutils -- nslookup mysql.default
</code></pre>
<p>I get the following message:<br></p>
<pre><code>Server: 10.96.0.10
Address: 10.96.0.10#53
mysql.default.svc.cluster.local canonical name = host.minikube.internal.
** server can't find host.minikube.internal: SERVFAIL
command terminated with exit code 1
</code></pre>
<p>I have verified that <code>CoreDNS</code> is installed and running:<br></p>
<pre><code>NAME                      READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-z58cr   1/1     Running   0          31m
</code></pre>
<p>Endpoints are exposed:<br></p>
<pre><code>NAME       ENDPOINTS                                      AGE
kube-dns   172.17.0.2:53,172.17.0.2:53,172.17.0.2:9153   32m
</code></pre>
<p>My <code>/etc/resolv.conf</code> only has one entry:<br></p>
<pre><code>nameserver 192.168.53.145
</code></pre>
<p><strong>Environment:</strong></p>
<p>Kubernetes version (use kubectl version):</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:09:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Cloud provider or hardware configuration:<br>
Local Windows 10 Pro x64 running Kubernetes with Minikube
OS (e.g: cat /etc/os-release):<br>
<code>NAME=Buildroot VERSION=2020.02.7 ID=buildroot VERSION_ID=2020.02.7 PRETTY_NAME="Buildroot 2020.02.7</code>"</p>
<p>Kernel (e.g. uname -a): Linux minikube <code>4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux</code></p>
<p>Install tools: Installed using the relevant kubectl and minikube .exe files
Network plugin and version (if this is a network-related bug):</p>
<p>Others:</p>
| Juin | <p>This problem seems to be closely related to the <code>Minikube</code> <a href="https://github.com/kubernetes/minikube/issues/8439" rel="nofollow noreferrer">issue</a> described on GitHub.</p>
<p>You can see that in your <code>Pod</code>, in the <code>/etc/hosts</code> file, there isn't any <code>host.minikube.internal</code> entry:<br></p>
<pre><code>$ kubectl exec -it dnsutils -- cat /etc/hosts | grep "host.minikube.internal"
$
</code></pre>
<p>On the <code>Minikube</code> host you are able to reach <code>host.minikube.internal</code> because <code>Minikube</code> (version <strong>v1.10+</strong>) adds this <code>hostname</code> entry to the <code>/etc/hosts</code> file. You can find more information in <a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/" rel="nofollow noreferrer">Host access | minikube</a>.</p>
<p>This is example from my <code>Minikube</code> (I'm using docker driver):<br></p>
<pre><code>user@minikube:~$ kubectl exec -it dnsutils -- cat /etc/hosts | grep "host.minikube.internal"
user@minikube:~$
user@minikube:~$ minikube ssh
docker@minikube:~$ cat /etc/hosts | grep host.minikube.internal
192.168.49.1 host.minikube.internal
docker@minikube:~$ ping host.minikube.internal
PING host.minikube.internal (192.168.49.1) 56(84) bytes of data.
64 bytes from host.minikube.internal (192.168.49.1): icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from host.minikube.internal (192.168.49.1): icmp_seq=2 ttl=64 time=0.067 ms
</code></pre>
<p><code>host.minikube.internal</code> is only an entry in the <code>/etc/hosts</code> file, therefore <code>nslookup</code> can't resolve it correctly (<code>nslookup</code> queries name servers <strong>ONLY</strong>).</p>
<pre><code>docker@minikube:~$ nslookup host.minikube.internal
Server: 192.168.49.1
Address: 192.168.49.1#53
** server can't find host.minikube.internal: NXDOMAIN
</code></pre>
<p>The only workaround I think may help in some cases is adding <code>hostAliases</code> to <code>Deployment</code>/<code>Pod</code> manifest files:</p>
<pre><code>...
spec:
hostAliases:
- ip: "192.168.49.1" # minikube IP
hostnames:
- "host.minikube.internal" # one or more hostnames that should resolve to the above address
containers:
- name: dnsutils
image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
...
</code></pre>
| matt_j |
<p>I am starting my journey with Confluent for Kubernetes, and while following their quick start guide
to install Confluent on AKS, I was able to get the pods up and running.
<a href="https://docs.confluent.io/operator/current/co-quickstart.html" rel="nofollow noreferrer">https://docs.confluent.io/operator/current/co-quickstart.html</a></p>
<pre><code>confluent-operator-99f7f8dcb-87ll8 1/1 Running 0 7m30s
connect-0 1/1 Running 2 6m22s
controlcenter-0 1/1 Running 0 3m
elastic-0 1/1 Running 4 6m5s
kafka-0 1/1 Running 0 4m22s
kafka-1 1/1 Running 0 4m22s
kafka-2 1/1 Running 0 4m22s
ksqldb-0 1/1 Running 0 3m1s
schemaregistry-0 1/1 Running 0 3m
zookeeper-0 1/1 Running 0 6m23s
zookeeper-1 1/1 Running 0 6m23s
zookeeper-2 1/1 Running 0 6m23s
</code></pre>
<p>Running a quick curl on http://localhost:9021 gives me HTML output.</p>
<p>However, after enabling web preview in Cloud Shell and previewing port 9021, I get a blank white page.</p>
<p>Am I doing anything wrong? How do I view http://localhost:9021 on AKS?</p>
| IndianRaptor | <pre><code>kubectl port-forward controlcenter-0 9021:9021
</code></pre>
<p>It is also written in the docs <a href="https://docs.confluent.io/operator/current/co-quickstart.html#step-4-view-control-center" rel="nofollow noreferrer">here</a>.</p>
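<p>If you would rather not keep a local port-forward open, another option on AKS is to expose Control Center through a <code>LoadBalancer</code> Service. A rough sketch (the namespace and selector label are assumptions - verify the actual labels with <code>kubectl get pod controlcenter-0 --show-labels</code> first):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: controlcenter-external
  namespace: confluent       # adjust to the namespace used in your install
spec:
  type: LoadBalancer
  selector:
    app: controlcenter       # assumed label - check your pod's labels
  ports:
  - port: 9021
    targetPort: 9021
</code></pre>
<p>Keep in mind this exposes the UI publicly, so it is only suitable for testing.</p>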
| Philip Welz |
<p>I am using <code>MobaXterm_21.2</code> installed version.
When I run <code>kubectl version</code>, it's working as expected:</p>
<pre><code> kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", G
oVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>But it is not able to read the <code>.kube/config</code> file or able to pickup the config file given through ENV variable <code>KUBECONFIG</code> or <code>--kubeconfig</code>. See the response below:</p>
<pre><code> export KUBECONFIG=/drives/path/to/config/file.config
✔
kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p>Not working either:</p>
<pre><code> kubectl config --kubeconfig=/drives/path/to/config/file.config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p><strong>This is a blocking issue for me. Can anyone guide me on how to make <code>kubectl</code> work in MobaXterm? Any help will be highly appreciated.</strong></p>
<p><strong>Edit</strong> - as @mikolaj-s pointed out: from <code>powershell/cmd/gitbash</code>, I am able to access the k8s cluster without any problem. I have been accessing the cluster using PowerShell for several months now and it reads the <code>.kube/config</code> file or <code>KUBECONFIG</code> env var as expected.<br />
I want to shift to MobaXterm for its multi-tab feature. If there is another tool that provides a multi-tab feature, I might be OK with it too.</p>
<p>In Mobaxterm -</p>
<pre><code> kubectl cluster-info dump
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
kubectl config get-contexts --kubeconfig /path/to/config/file
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
</code></pre>
<p>The kubeconfig files I am using are tested and definitely (100%) have no issues, as they work from PowerShell.</p>
<p><strong>Edit 2 -</strong> Many thanks to @mikolaj-s.<br />
With a lot of hope I tried using PowerShell in MobaXterm as suggested by mikolaj - <a href="https://codetryout.com/mobaxterm-windows-command-prompt/" rel="nofollow noreferrer">mobaxterm-windows-command-prompt</a> and <strong>it worked.</strong></p>
| samshers | <p>The solution is to use PowerShell directly in the MobaXterm - steps how to configure that can be <a href="https://codetryout.com/mobaxterm-windows-command-prompt/" rel="nofollow noreferrer">found here</a> (instead of <code>CMD</code> choose <code>Powershell</code>):</p>
<blockquote>
<p>MobaXterm comes with various client tools such as SSH, telnet, WSL, CMD, and so on. It can well handle a Windows command line as well, here is how,</p>
</blockquote>
<blockquote>
<p>How to open Windows command prompt using MobaXterm?</p>
<ul>
<li>Open your MobaXterm</li>
<li>From the top menu, click on Sessions</li>
<li>From the Session settings window, click on the Shell button</li>
<li>Under the Basic Shell settings tab, select Terminal shell CMD</li>
<li>Also select a startup directory of your choice, which the CMD prompt will start it as you startup folder.</li>
<li>Now, Click the OK button to open a windows command window!</li>
</ul>
</blockquote>
<blockquote>
<p>With this, you should be able to use multiple Windows command lines in a tabbed view, or along with your other sessions.</p>
</blockquote>
| Mikolaj S. |
<p>I'm following the <a href="https://camel.apache.org/camel-k/1.9.x/installation/installation.html#procedure" rel="nofollow noreferrer">documentation procedure</a> and <a href="https://camel.apache.org/camel-k/1.9.x/installation/platform/minikube.html" rel="nofollow noreferrer">enabling the registration add-on in minikube</a>.</p>
<p>So I'm running</p>
<pre class="lang-bash prettyprint-override"><code>minikube start --addons registry
kamel install
</code></pre>
<p>to start the cluster and install Camel K into it.</p>
<p>But when I run <code>kubectl get pod</code> I get <code>CrashLoopBackOff</code> as the <code>camel-k-operator</code> status.</p>
<p><code>kubectl get events</code> gave me the following:</p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
7m9s Normal Scheduled pod/camel-k-operator-848fd8785b-cr9pp Successfully assigned default/camel-k-operator-848fd8785b-cr9pp to minikube
7m5s Normal Pulling pod/camel-k-operator-848fd8785b-cr9pp Pulling image "docker.io/apache/camel-k:1.9.2"
2m23s Normal Pulled pod/camel-k-operator-848fd8785b-cr9pp Successfully pulled image "docker.io/apache/camel-k:1.9.2" in 4m45.3178036s
42s Normal Created pod/camel-k-operator-848fd8785b-cr9pp Created container camel-k-operator
42s Normal Started pod/camel-k-operator-848fd8785b-cr9pp Started container camel-k-operator
43s Normal Pulled pod/camel-k-operator-848fd8785b-cr9pp Container image "docker.io/apache/camel-k:1.9.2" already present on machine
55s Warning BackOff pod/camel-k-operator-848fd8785b-cr9pp Back-off restarting failed container
7m9s Normal SuccessfulCreate replicaset/camel-k-operator-848fd8785b Created pod: camel-k-operator-848fd8785b-cr9pp
7m9s Normal ScalingReplicaSet deployment/camel-k-operator Scaled up replica set camel-k-operator-848fd8785b to 1
</code></pre>
<p>Running <code>kubectl logs [podname] -p</code> I get</p>
<pre class="lang-json prettyprint-override"><code>{
"level": "error",
"ts": 1658235623.4016757,
"logger": "cmd",
"msg": "failed to set GOMAXPROCS from cgroups",
"error": "path \"/docker/ec4a100d598f3529dbcc3a9364c8caceb32abd8c11632456d58c7948bb756d36\" is not a descendant of mount point root \"/docker/ec4a100d598f3529dbcc3a9364c8caceb32abd8c11632456d58c7948bb756d36/kubelet\" and cannot be exposed from \"/sys/fs/cgroup/rdma/kubelet\"",
"stacktrace": "github.com/apache/camel-k/pkg/cmd.(*operatorCmdOptions).run\n\tgithub.com/apache/camel-k/pkg/cmd/operator.go:57\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/[email protected]/command.go:860\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/[email protected]/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/[email protected]/command.go:902\nmain.main\n\tcommand-line-arguments/main.go:47\nruntime.main\n\truntime/proc.go:225"
}
</code></pre>
<p>Formatting the stacktrace we get:</p>
<pre><code>github.com/apache/camel-k/pkg/cmd.(*operatorCmdOptions).run
github.com/apache/camel-k/pkg/cmd/operator.go:57
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:860
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:902
main.main
command-line-arguments/main.go:47
runtime.main
runtime/proc.go:225
</code></pre>
<hr />
<p>Camel K Client 1.9.2</p>
<p>minikube v1.25.2</p>
| Ocimar | <p>It's probably a <a href="https://github.com/apache/camel-k/issues/3348" rel="nofollow noreferrer">bug with the docker driver</a>.</p>
<p>A workaround is to use the hyperv driver instead:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start --addons registry --driver hyperv
</code></pre>
| Ocimar |
<p>I'm trying to configure caching for a specific host, but I'm getting a 404. It also seems my config was not included in the final nginx.conf - this file doesn't contain it.</p>
<p>My ingress.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: images-ingress
labels:
last_updated: "14940999355"
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-body-size: 8m
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/server-snippet: |
proxy_cache static-cache;
proxy_cache_valid 404 10m;
proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
proxy_cache_bypass $http_x_purge;
add_header X-Cache-Status $upstream_cache_status;
spec:
tls:
- hosts:
- static.qwstrs.com
secretName: letsencrypt-prod
rules:
- host: static.qwstrs.com
http:
paths:
- path: /
backend:
serviceName: imaginary
servicePort: 9000
</code></pre>
<p>If I remove this sample</p>
<pre><code> nginx.ingress.kubernetes.io/server-snippet: |
proxy_cache static-cache;
proxy_cache_valid 404 10m;
proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
proxy_cache_bypass $http_x_purge;
add_header X-Cache-Status $upstream_cache_status;
</code></pre>
<p>everything works, but without the cache.</p>
<p>Even if I keep only one line from the snippet above, it produces a 404 error and doesn't work:</p>
<pre><code> nginx.ingress.kubernetes.io/server-snippet: |
proxy_cache static-cache;
</code></pre>
| Сергей Коновалов | <p>To enable caching, you need to configure the <a href="https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/#enabling-the-caching-of-responses" rel="noreferrer">proxy_cache_path</a> for the <code>nginx-ingress-controller</code>.<br />
You can do it by modifying the <code>ConfigMap</code> for <code>nginx-ingress-controller</code>.</p>
<hr />
<p>I've created an example to illustrate you how it works (I assume you have <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">kubernetes/ingress-nginx</a>).</p>
<p>First, create a <code>ConfigMap</code> named <code>ingress-nginx-controller</code> as described in the documentation <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-configuration/#custom-configuration" rel="noreferrer">custom_configuration</a>:<br />
<strong>Note:</strong> You may need to modify the <code>proxy_cache_path</code> settings, but shared memory zone (keys_zone=<strong>static-cache</strong>) should be the same as in your <code>proxy_cache</code> directive.</p>
<pre><code>$ cat configmap.yml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: default
data:
http-snippet: "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:10m max_size=10g inactive=60m use_temp_path=off;"
$ kubectl apply -f configmap.yml
configmap/ingress-nginx-controller configured
</code></pre>
<p>And then create the <code>Ingress</code> resource ( I've modified your ingress resource a bit to demonstrate how <code>X-Cache-Status</code> header works):</p>
<pre><code>$ cat ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: images-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-body-size: 8m
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_cache static-cache;
proxy_cache_valid any 60m;
add_header X-Cache-Status $upstream_cache_status;
spec:
tls:
- hosts:
- static.qwstrs.com
secretName: letsencrypt-prod
rules:
- host: static.qwstrs.com
http:
paths:
- path: /
backend:
serviceName: imaginary
servicePort: 9000
$ kubectl apply -f ingress.yml
ingress.extensions/images-ingress configured
</code></pre>
<p>Finally we can check:</p>
<pre><code>$ curl -k -I https://static.qwstrs.com
HTTP/2 200
...
x-cache-status: MISS
accept-ranges: bytes
$ curl -k -I https://static.qwstrs.com
HTTP/2 200
...
x-cache-status: HIT
accept-ranges: bytes
</code></pre>
<p>More information on <code>proxy_cache_path</code> and <code>proxy_cache</code> can be found <a href="https://www.nginx.com/blog/nginx-caching-guide/#How-to-Set-Up-and-Configure-Basic-Caching" rel="noreferrer">here</a>.</p>
| matt_j |
<p>Say I have 100 running pods with an HPA set to <code>min=100</code>, <code>max=150</code>. Then I change the HPA to <code>min=50</code>, <code>max=105</code> (e.g. max is still above current pod count). Should k8s immediately initialize new pods when I change the HPA? I wouldn't think it does, but I seem to have observed this today.</p>
| L P | <p>First, as mentioned in the comments, in your specific case some pods will be terminated if usage metrics are below utilization target, no new pods will be created.</p>
<p>Second thing it's absolutely normal that is takes some time to scale down replicas - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#default-behavior" rel="nofollow noreferrer">it's because the
<code>stabilizationWindowSeconds</code> parameter is by default set to <code>300</code></a>:</p>
<blockquote>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 100
periodSeconds: 15
</code></pre>
</blockquote>
<p>So, if you have running HPA with configuration (min=100, max=150) for a long time, and you have changed to min=50, max=105, then after 300 seconds (5 minutes) your replicas will be scaled down to the 50 replicas.</p>
<p>Good explanation about how exactly <code>stabilizationWindowSeconds</code> works is in <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md#story-5-stabilization-before-scaling-down" rel="nofollow noreferrer">this document</a>:</p>
<blockquote>
<h4>Story 5: Stabilization before scaling down</h4>
<p>This mode is used when the user expects a lot of flapping or does not want to scale down pods too early expecting some late load spikes.</p>
<p>Create an HPA with the following behavior:</p>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 600
policies:
- type: Pods
value: 5
</code></pre>
<p>i.e., the algorithm will:</p>
<ul>
<li>gather recommendations for 600 seconds <em>(default: 300 seconds)</em></li>
<li>pick the largest one</li>
<li>scale down no more than 5 pods per minute</li>
</ul>
<p>Example for <code>CurReplicas = 10</code> and HPA controller cycle once per a minute:</p>
<ul>
<li>First 9 minutes the algorithm will do nothing except gathering recommendations. Let's imagine that we have the following recommendations</li>
</ul>
<p>recommendations = [10, 9, 8, 9, 9, 8, 9, 8, 9]</p>
<ul>
<li>On the 10th minute, we'll add one more recommendation (let it me <code>8</code>):</li>
</ul>
<p>recommendations = [10, 9, 8, 9, 9, 8, 9, 8, 9, 8]</p>
<p>Now the algorithm picks the largest one <code>10</code>. Hence it will not change number of replicas</p>
<ul>
<li>On the 11th minute, we'll add one more recommendation (let it be <code>7</code>) and removes the first one to keep the same amount of recommendations:</li>
</ul>
<p>recommendations = [9, 8, 9, 9, 8, 9, 8, 9, 8, 7]</p>
<p>The algorithm picks the largest value <code>9</code> and changes the number of replicas <code>10 -> 9</code></p>
</blockquote>
<p>Another thing is that it depends which Kubernetes version, which <code>apiVersion</code> for the autoscaling are you using and which Kuberntes solution are you using. The behaviour could vary - check <a href="https://github.com/kubernetes/kubernetes/issues/78761" rel="nofollow noreferrer">this topic on GitHub</a> with a bug reports.</p>
<p>If you want to have scale down done immediately (not recommended in the production), you can setup following:</p>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 1
</code></pre>
<p>Also check:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> in particular <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-change-downscale-stabilization-window" rel="nofollow noreferrer">Example: change downscale stabilization window</a> and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-limit-scale-down-rate" rel="nofollow noreferrer">Example: limit scale down rate</a></li>
<li><a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md#configurable-scale-updown-velocity-for-hpa" rel="nofollow noreferrer">Configurable scale up/down velocity for HPA</a></li>
</ul>
| Mikolaj S. |
<p>I have an application that records live traffic and replays it.</p>
<p><a href="https://github.com/buger/goreplay" rel="nofollow noreferrer">https://github.com/buger/goreplay</a></p>
<p>It is a simple app to use, but when I tried to use it with Kubernetes I ran into a problem with pods connecting to / communicating with each other.</p>
<p>I created a pod with two containers: one is GoReplay and the other is a simple Python web server.
In this pod, GoReplay will track the traffic coming from outside to the Python server and will forward it to another Python server which is in another pod.</p>
<p>here is the first deployment file :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: goreplay-deployment
labels:
app: goreplay-app
spec:
replicas: 1
selector:
matchLabels:
app: goreplay-app
template:
metadata:
labels:
app: goreplay-app
spec:
containers:
- name: goreplay
image: feiznouri/goreplay:2.0
args:
- --input-raw
- :3000
- --output-http="http://service-server.default:3200"
volumeMounts:
- name: data
mountPath: /var/lib/goreplay
- name: myserver
image: feiznouri/python-server:1.1
args:
- "3000"
ports:
- name: server-port
containerPort: 3000
volumes:
- name: data
persistentVolumeClaim:
claimName: goreplay-claim
</code></pre>
<p>In the args section of the goreplay container I put the parameters; the output is one of them, but I'm not sure what address to put there.</p>
<p>Here is the service for the first deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-goreplay
spec:
selector:
app: goreplay-app
ports:
- port: 31001
nodePort: 31001
targetPort: server-port
protocol: TCP
type: NodePort
</code></pre>
<p>And here is the second deployment, which has only the second server:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
labels:
app: server-app
spec:
replicas: 1
selector:
matchLabels:
app: server-app
template:
metadata:
labels:
app: server-app
spec:
containers:
- name: myserver
image: feiznouri/python-server:1.1
args:
- "3001"
ports:
- name: server-port
containerPort: 3001
</code></pre>
<p>And here is its service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-server
spec:
selector:
app: server-app
ports:
- port: 3200
targetPort: server-port
protocol: TCP
</code></pre>
<p>The problem is that, this way, I'm not getting the traffic to the second server. I am sending requests from outside to the first server and I see the traffic arriving at the first one, but there is nothing on the second server.</p>
<p>What is the correct address to put in the output parameter (pointing to the second server) for this to work?</p>
| feiz | <p>I reproduced your issue and it seems the only thing you need to fix is the <code>args</code> field.</p>
<p>Add <code>"</code> <code>"</code> to every argument; it should look like this:<br>
<strong>Note:</strong> <code>http://service-server.default:3200</code> isn't enclosed in additional double quotation marks.</p>
<pre><code>args:
- "--input-raw"
- ":3000"
- "--output-http=http://service-server.default:3200"
</code></pre>
<p>More information on defining a command and arguments can be found in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">documentation</a>.</p>
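<p>Once the pods are redeployed, you can verify that traffic is actually being replayed by checking the logs of the GoReplay container and of the second server (names taken from your manifests), e.g.:</p>
<pre><code>kubectl logs deployment/goreplay-deployment -c goreplay
kubectl logs deployment/server-deployment -c myserver
</code></pre>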
| matt_j |
<p>I am using Minikube and here is my configuration:</p>
<p><code>kubectl describe deployment mysql</code></p>
<p>the output:</p>
<pre><code>Name: mysql
Namespace: default
CreationTimestamp: Sat, 12 Nov 2022 02:20:54 +0200
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=mysql
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'mysql-pass'> Optional: false
Mounts:
/docker-entrypoint-initdb.d from mysql-init (rw)
Volumes:
mysql-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mysql-init
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-77fd55bbd9 (1/1 replicas created)
</code></pre>
<p>when I try to connect to it using mysql workbench:</p>
<p><a href="https://i.stack.imgur.com/sGqJt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGqJt.png" alt="workbench config" /></a></p>
<p>it shows me:</p>
<p><a href="https://i.stack.imgur.com/w94tA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w94tA.png" alt="failed to connect to mysql" /></a></p>
<p>However, when I execute this line to create a <strong>mysql-client</strong> to try to connect to mysql server:</p>
<p><code>kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p</code>
and then enter the password, <strong>it works well!</strong> But I would still prefer to use Workbench.</p>
<p>any help please?</p>
<p>edit 1:</p>
<p>Here is the yaml file for the deployment and the service:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
volumeMounts:
- name: mysql-init
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-init
configMap:
name: mysql-init
---
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
name: mysql
spec:
ports:
- port: 3306
targetPort: 3306
protocol: TCP
selector:
app: mysql
</code></pre>
| Karim Abdel-Qader | <p>First make sure your service is running, so</p>
<pre><code>kubectl get service
</code></pre>
<p>should return something like :</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.99.140.115 <none> 3306/TCP 2d6h
</code></pre>
<p>From that point onwards, I'd try running a port-forward first :</p>
<pre><code>kubectl port-forward service/mysql 3306:3306
</code></pre>
<p>This should allow you to connect even when using a ClusterIP service.</p>
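<p>Alternatively, if you don't want to keep a port-forward running, a sketch of exposing MySQL through a <code>NodePort</code> Service on Minikube (the nodePort value is just an example) would be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-nodeport
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306   # example port in the default 30000-32767 range
</code></pre>
<p>Then <code>minikube service mysql-nodeport --url</code> prints the address to use in Workbench.</p>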
| jcroyoaun |
<p>Can I disable Log management in Log Analytics Workspace for AKS?</p>
| Syed | <p>yes, you can do that with:</p>
<pre><code>az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG
</code></pre>
<p><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-optout" rel="nofollow noreferrer">Here</a> you can find the docs for this.</p>
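<p>After running it, you can verify the monitoring addon is gone (a quick check, assuming the same resource names) with:</p>
<pre><code>az aks show -g MyExistingManagedClusterRG -n MyExistingManagedCluster --query "addonProfiles.omsagent.enabled"
</code></pre>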
| Philip Welz |
<p>Not sure if this is OS specific, but on my M1 Mac, I'm installing the Nginx controller and resource example located in the official <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">Quick Start guide for the controller.</a> for Docker Desktop for Mac. The instructions are as follows:</p>
<pre><code>// Create the Ingress
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
// Pre-flight checks
kubectl get pods --namespace=ingress-nginx
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
// and finally, deploy and test the resource.
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo-localhost --class=nginx \
--rule=demo.localdev.me/*=demo:80
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
</code></pre>
<p>I noticed that the instructions did not mention having to edit the <code>/etc/hosts</code> file, which I found strange. And, when I tested it by putting <code>demo.localdev.me:8080</code> into the browser, it did work as expected!</p>
<p>But why? How was an application inside a Docker container able to influence behavior on my host machine and intercept its web traffic without me having to edit the <code>/etc/hosts</code> file?</p>
<p>For my next test, I re-executed everything above with the only change being that I switched <code>demo</code> to <code>demo2</code>. That did <strong>not</strong> work. I did have to go into <code>/etc/hosts</code> and add <code>demo2.localdev.me 127.0.0.1</code> as an entry. After that both demo and demo2 work as expected.</p>
<p>Why is this happening? Not having to edit the /etc/hosts file is appealing. Is there a way to configure it so that they all work? How would I turn it "off" from happening automatically if I needed to route traffic back out to the internet rather than my local machine?</p>
| user658182 | <p>I replicated your issue and got a similar behaviour on the Ubuntu 20.04.3 OS.</p>
<p>The problem is that <a href="https://kubernetes.github.io/ingress-nginx/deploy/#local-testing" rel="noreferrer">NGINX Ingress controller Local testing guide</a> did not mention that <a href="https://mxtoolbox.com/SuperTool.aspx?action=a%3ademo.localdev.me&run=toolpage" rel="noreferrer"><code>demo.localdev.me</code> address points to <code>127.0.0.1</code></a> - that's why it works without editing <code>/etc/hosts</code> or <code>/etc/resolve.conf</code> file. Probably it's something like <a href="https://readme.localtest.me/" rel="noreferrer"><code>*.localtest.me</code> addresses</a>:</p>
<blockquote>
<p>Here’s how it works. The entire domain name localtest.me—and all wildcard entries—point to 127.0.0.1. So without any changes to your host file you can immediate start testing with a local URL.</p>
</blockquote>
<p>Also good and detailed explanation in <a href="https://superuser.com/questions/1280827/why-does-the-registered-domain-name-localtest-me-resolve-to-127-0-0-1">this topic</a>.</p>
<p>So Docker Desktop / Kubernetes change nothing on your host.</p>
<p>The <a href="https://mxtoolbox.com/SuperTool.aspx?action=a%3ademo2.localdev.me&run=toolpage" rel="noreferrer">address <code>demo2.localdev.me</code> also points to <code>127.0.0.1</code></a>, so it should work as well for you - and as I tested in my environment the behaviour was exactly the same as for the <code>demo.localdev.me</code>.</p>
<p>You may run <a href="https://www.oreilly.com/library/view/mac-os-x/0596003706/re315.html" rel="noreferrer"><code>nslookup</code> command</a> and check which IP address is pointed to the specific domain name, for example:</p>
<pre><code>user@shell:~$ nslookup demo2.localdev.me
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: demo2.localdev.me
Address: 127.0.0.1
</code></pre>
<p>You may try to do some tests with other hosts name, like some existing ones or no-existing then of course it won't work because the address won't be resolved to the <code>127.0.0.1</code> thus it won't be forwarded to the Ingress NGINX controller. In these cases, you can edit <code>/etc/hosts</code> (as you did) or use <a href="https://riptutorial.com/curl/example/31719/change-the--host---header" rel="noreferrer"><code>curl</code> flag <code>-H</code></a>, for example:</p>
<p>I created the ingress using following command:</p>
<pre><code>kubectl create ingress demo-localhost --class=nginx --rule=facebook.com/*=demo:80
</code></pre>
<p>Then I started port-forwarding and I run:</p>
<pre><code>user@shell:~$ curl -H "Host: facebook.com" localhost:8080
<html><body><h1>It works!</h1></body></html>
</code></pre>
<p>You wrote:</p>
<blockquote>
<p>For my next test, I re-executed everything above with the only change being that I switched <code>demo</code> to <code>demo2</code>. That did <strong>not</strong> work. I did have to go into <code>/etc/hosts</code> and add <code>demo2.localdev.me 127.0.0.1</code> as an entry. After that both demo and demo2 work as expected.</p>
</blockquote>
<p>Well, that sounds strange, could you run <code>nslookup demo2.localdev.me</code> without adding an entry in the <code>/etc/hosts</code> and then check? Are you sure you performed the correct query before, did you not change something on the Kubernetes configuration side? As I tested (and presented above), it should work exactly the same as for <code>demo.localdev.me</code>.</p>
| Mikolaj S. |
<p>As I have seen few related posts but none answered my question, I thought I would ask a new question based on suggestions from other users as well <a href="https://stackoverflow.com/questions/64223630/job-invalid-selector-not-auto-generated/64224974?noredirect=1#comment117481868_64224974">here</a>.</p>
<p>I need to make a selector label for a network policy for a running cronjob that is responsible for connecting to some other services within the cluster. As far as I know, there is no easy, straightforward way to make a selector label for the job's pods, as that would be problematic with duplicate job labels if they ever existed. I'm not sure why the cronjob can't have a selector itself, which could then be applied to the job and the pod.</p>
<p>There might also be a possibility to just put this cronjob in its own namespace and then allow everything from that one namespace to whatever is needed in the network policy, but that does not feel like the right way to overcome the problem.</p>
<p>Using k8s v1.20</p>
| Waheed | <p>First of all, to select pods (spawned by your <code>CronJob</code>) that should be allowed by the <code>NetworkPolicy</code> as ingress sources or egress destinations, you may set specific label for those pods.</p>
<p>You can easily set a label for <code>Jobs</code> spawned by <code>CronJob</code> using labels field (another example with an explanation can be found in the <a href="https://docs.openshift.com/container-platform/4.1/nodes/jobs/nodes-nodes-jobs.html#nodes-nodes-jobs-creating_nodes-nodes-jobs" rel="noreferrer">OpenShift CronJobs documentation</a>):</p>
<pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mysql-test
spec:
...
jobTemplate:
spec:
template:
metadata:
labels:
workload: cronjob # Sets a label for jobs spawned by this CronJob.
type: mysql # Sets another label for jobs spawned by this CronJob.
...
</code></pre>
<p>Pods spawned by this <code>CronJob</code> will have the labels <code>type=mysql</code> and <code>workload=cronjob</code>, using this labels you can create/customize your <code>NetworkPolicy</code>:</p>
<pre><code>$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
mysql-test-1615216560-tkdvk 0/1 Completed 0 2m2s ...,type=mysql,workload=cronjob
mysql-test-1615216620-pqzbk 0/1 Completed 0 62s ...,type=mysql,workload=cronjob
mysql-test-1615216680-8775h 0/1 Completed 0 2s ...,type=mysql,workload=cronjob
$ kubectl describe pod mysql-test-1615216560-tkdvk
Name: mysql-test-1615216560-tkdvk
Namespace: default
...
Labels: controller-uid=af99e9a3-be6b-403d-ab57-38de31ac7a9d
job-name=mysql-test-1615216560
type=mysql
workload=cronjob
...
</code></pre>
<p>For example this <code>mysql-workload</code> <code>NetworkPolicy</code> allows connections to all pods in the <code>mysql</code> namespace from any pod with the labels <code>type=mysql</code> and <code>workload=cronjob</code> (logical conjunction) in a namespace with the label <code>namespace-name=default</code> :<br />
<strong>NOTE:</strong> Be careful to use correct YAML (take a look at this <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="noreferrer">namespaceSelector and podSelector example</a>).</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-workload
namespace: mysql
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
namespace-name: default
podSelector:
matchLabels:
type: mysql
workload: cronjob
</code></pre>
<p>To use network policies, you must be using a networking solution which supports <code>NetworkPolicy</code>:</p>
<blockquote>
<p>Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.</p>
</blockquote>
<p>You can learn more about creating Kubernetes <code>NetworkPolicies</code> in the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noreferrer">Network Policies documentation</a>.</p>
| matt_j |
<p>I am working on a task where I need to design an ML pipeline for model retraining and inference on <code>Kubernetes</code></p>
<p>I read some articles and watched some tutorials, with the help of which I have created 2 apps as described below</p>
<ul>
<li>For Model retraining, I have scheduled a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> (<code>Flask App #1</code>)</li>
<li>For inference, I have created a separate flask app (<code>Flask App #2</code>)</li>
</ul>
<p>I don't know how we can transfer the latest trained model from the <code>CronJob</code> to the inference Flask app.</p>
<p>I am a newbie in Kubernetes; any suggestion would be of great help.</p>
| arush1836 | <p>We can make use of the <strong>Google Persistent Disk</strong>, <strong><a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Kubernetes Volume</a></strong> and <strong><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">Kubernetes Persistent Volume Claim</a></strong> to do so.</p>
<p>I tried replicating a scenario where a Cronjob updates a text file with current time and date each time it creates a Pod. I then created a separate Pod outside the Cronjob to access this text file and was successful. Below are the steps I followed,</p>
<ol>
<li><p>Create a Standard Persistent Disk on GCP using the following gcloud command,</p>
<pre><code>gcloud compute disks create pd-name --size 500G --type pd-standard --zone us-central1-c
</code></pre>
</li>
<li><p>Then create a Kubernetes Persistent Volume using the above PD and a Persistent Volume Claim, so that the pods can request for storage on the Persistent Volume using the following configuration,</p>
</li>
</ol>
<p>config.yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv
spec:
storageClassName: "test"
capacity:
storage: 10G
accessModes:
- ReadWriteOnce
claimRef:
namespace: default
name: pv-claim
gcePersistentDisk:
pdName: pd-name
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim
spec:
storageClassName: "test"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10G
</code></pre>
<ol start="3">
<li>Deploy a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Cronjob</a> with the PVC configuration and which writes the current time and date into a text file stored on the PV using the following configuration,</li>
</ol>
<p>Cronjob.yaml:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cron
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
volumes:
- name: pv-storage
persistentVolumeClaim:
claimName: pv-claim
containers:
- name: container
image: nginx
volumeMounts:
- mountPath: "/usr/data"
name: pv-storage
command:
- /bin/sh
- -c
- date >> /usr/data/msg.txt
restartPolicy: OnFailure
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a> for more information.</p>
<ol start="4">
<li>Deploy a Pod with the same PVC configuration to check whether data added by the Cronjob pods is visible through this pod using the following configuration,</li>
</ol>
<p>Readpod.yaml:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: readpod
spec:
volumes:
- name: pv-storage
persistentVolumeClaim:
claimName: pv-claim
containers:
- name: read-container
image: nginx
volumeMounts:
- mountPath: "/usr/data"
name: pv-storage
</code></pre>
<ol start="5">
<li><p>Then use <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">kubectl exec</a> command to get a shell to a running container on the above Pod, by using the following commands and we should be able to view the text file in which the cronjob was updating time and date.</p>
<pre><code> $ kubectl exec -it readpod -- /bin/bash
$ cd usr/data
$ cat msg.txt
</code></pre>
</li>
</ol>
<p>You can make use of the above concepts and modify the configuration according to your use case.</p>
| Gellaboina Ashish |
<p>Is there any command which points me to the path where the kubeconfig file is present?
I mean, I am working with the Python k8s/OpenShift client. I am looking for a Linux or Python command or library which can print the path where the kubeconfig file is located.</p>
<p>By default the kubeconfig is present in the home directory most of the time, but it may also vary for different deployment types.</p>
<p>looking forward to any suggestions/concerns.</p>
| majid asad | <p>I used an installer to install the K8s cluster.
In my case, it can be found under the oauth folder (I am not sure about the full correct path), but you can take a look at your K8s folder structure.</p>
<p>Or use the <code>find</code> command in Linux to locate the file.</p>
<p>something like: <code>find / -type f -name '*.kubeconfig'</code></p>
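<p>Since you mentioned the Python client, here is a minimal sketch of resolving the path the same way <code>kubectl</code> and the Python <code>kubernetes</code> client do (the <code>KUBECONFIG</code> env var first, otherwise the default <code>~/.kube/config</code>):</p>
<pre><code>import os

# Honour $KUBECONFIG if it is set, otherwise fall back to the default location.
kubeconfig = os.environ.get("KUBECONFIG") or os.path.expanduser("~/.kube/config")
print(kubeconfig)
</code></pre>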
| NeutralStuff |
<p>How do I tweak Kubernetes so that the liveness probe only fails after five health check failures?</p>
| Wisdom Seeker | <p>You just need to add the <code>failureThreshold: 5</code> option to your probe configuration. Kubernetes docs: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes</a></p>
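<p>For example, a minimal liveness probe sketch (the endpoint and port are placeholders for your app):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz      # placeholder health endpoint
    port: 8080          # placeholder port
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 5   # restart only after 5 consecutive failed checks
</code></pre>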
| Makariy |
<p>k8s webhook requires tls verification, <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly" rel="nofollow noreferrer">the official document</a> says that the server certificate requires <strong><svc_name>.<svc_namespace>.svc</strong>.</p>
<p>But when I deploy with <code>helm</code>, I may not know which namespace will be deployed in. The <code>svc_name</code> generally does not change, so is there some way to match any namespace. such as <strong><svc_name>.<any_namespace>.svc</strong>.</p>
<p>Is there a method implementation that works for arbitrary namespaces?</p>
<p>I really appreciate any help with this</p>
<blockquote>
<p>k8s version is 1.18</p>
</blockquote>
<p>Attach a sample of my self-signed certificate</p>
<pre><code>[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
CN = webhook.kube-system.svc
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = webhook.kube-system.svc
</code></pre>
| moluzhui | <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p><strong>EDIT:</strong>
The workaround presented by the original poster (@moluzhui):</p>
<blockquote>
<p>At present, I provide ValidatingWebhookConfiguration in <code>chart/template</code> in advance and write it through <code>.Files.Get</code></p>
</blockquote>
<p>As stated in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p><strong>Note:</strong> When using <code>clientConfig.service</code>, the server cert must be valid for <code><svc_name>.<svc_namespace>.svc</code>.</p>
</blockquote>
<p>The namespace name is required - this is how <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services" rel="nofollow noreferrer">DNS in Kubernetes works</a> - by using service and namespace name.</p>
<p>However, there is a good article which presents best practices of managing TLS certificates for Kubernetes Admission Webhooks - <a href="https://medium.com/trendyol-tech/5-ways-of-managing-tls-certificates-for-your-kubernetes-admission-webhooks-b2ca971c065#be41" rel="nofollow noreferrer">5 Ways of Managing TLS Certificates for your Kubernetes Admission Webhooks</a>. Maybe some of them will be useful to you and will be solution for your issue:</p>
<ul>
<li>for helm - use Certificator project and Helm Hooks - it automatically patches <code>caBundle</code> field</li>
<li>setup <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> to create a certificate and provide CA bundle to the API server</li>
<li>generate certificate with cert-manager CA Injector and inject them to WebhookConfiguration</li>
</ul>
<p>You can also <a href="https://stackoverflow.com/questions/70472703/can-k8s-webhook-self-sign-pan-domain-certificate">set up URL with a location of the webhook</a>, where you don't have to use <code>caBundle</code>:</p>
<blockquote>
<p>Expects the TLS certificate to be verified using system trust roots, so does not specify a caBundle.</p>
</blockquote>
<p>Answering your comment:</p>
<blockquote>
<p>Well, then I can only use multiple DNS(1,2,3...) to preset the name space that may be deployed. Does this affect efficiency?</p>
</blockquote>
<p>Probably depends how many namespaces you want to deploy, but for sure it is not good practice.</p>
<p>Another solution from the comment (thanks to @JWhy user):</p>
<blockquote>
<p>You may create another service at a predictable location (i.e. in a specific namespace) and link that to your actual service in the less predictable namespace. See <a href="https://stackoverflow.com/a/44329470/763875">stackoverflow.com/a/44329470/763875</a></p>
</blockquote>
| Mikolaj S. |
<p>I am trying to make an nginx deployment and, during container creation, I want to create multiple symbolic links. But for some reason it doesn't work and the container crashes.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tcc
component: nginx
name: tcc-nginx-deployment
namespace: dev2
spec:
replicas: 1
selector:
matchLabels:
app: tcc
component: nginx
template:
metadata:
labels:
app: tcc
component: nginx
spec:
containers:
- image: nginx
name: nginx
command:
- /bin/sh
- -c
- |
ln -s /shared/apps/ /var/www
rm -r /etc/nginx/conf.d
ln -s /shared/nginx-config/ /etc/nginx/conf.d
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /shared
name: efs-pvc
volumes:
- name: efs-pvc
persistentVolumeClaim:
claimName: tcc-efs-storage-claim
</code></pre>
| rholdberh | <p>The container is not running because, after the <code>command</code> block is executed, the container exits, which is expected behaviour.</p>
<p>Instead of playing with symbolic links in the <code>command</code> block of the yaml template (which is not a best-practice solution), why not just use a solution built into Kubernetes and skip the <code>command</code> block entirely?</p>
<p>You should use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer"><code>subPath</code> which is designed to share directories from one volume for multiple, different directories on the single pod</a>:</p>
<blockquote>
<p>Sometimes, it is useful to share one volume for multiple uses in a single pod. The <code>volumeMounts.subPath</code> property specifies a sub-path inside the referenced volume instead of its root.</p>
</blockquote>
<p>In your case, the deployment yaml should look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tcc
component: nginx
name: tcc-nginx-deployment
namespace: dev2
spec:
replicas: 1
selector:
matchLabels:
app: tcc
component: nginx
template:
metadata:
labels:
app: tcc
component: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /shared
name: efs-pvc
- mountPath: /etc/nginx/conf.d
name: efs-pvc
subPath: nginx-config
- mountPath: /var/www
name: efs-pvc
subPath: apps
volumes:
- name: efs-pvc
persistentVolumeClaim:
claimName: tcc-efs-storage-claim
</code></pre>
<p>Also if you want to mount only config files for NGINX, you may use ConfigMap instead of volume - check <a href="https://stackoverflow.com/a/42078899/16391991">this answer</a> for more information.</p>
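<p>For illustration, a short sketch of that ConfigMap approach for the NGINX config (the ConfigMap name is an assumption - it should contain your <code>*.conf</code> files):</p>
<pre><code>...
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-conf
      volumes:
      - name: nginx-conf
        configMap:
          name: tcc-nginx-conf   # assumed ConfigMap holding the nginx config files
...
</code></pre>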
| Mikolaj S. |
<p>I have an application running on Kubernetes that needs to access SMB shares that are configured dynamically (host, credentials, etc) within said application. I am struggling to achieve this (cleanly) with Kubernetes.</p>
<p>I am facing several difficulties:</p>
<ul>
<li>I do not want "a" storage, I want explicitly specified SMB shares</li>
<li>These shares are dynamically defined within the application and not known beforehand</li>
<li>I have a variable amount of shares and a single pod needs to be able to access all of them</li>
</ul>
<p>We currently have a solution where, on each kubernetes worker node, all shares are mounted to mountpoints in a common folder. This folder is then given as <code>HostPath</code> volume to the containers that need access to those storages. Finally, each of those containers has a logic to access the subfolder(s) matching the storage(s) he needs.</p>
<p>The downside, and the reason why I'm looking for a cleaner alternative, is:</p>
<ul>
<li><code>HostPath</code> volumes present security risks</li>
<li>For this solution, I need something outside Kubernetes that mounts the SMB shares automatically on each Kubernetes node</li>
</ul>
<p>Is there a better solution that I am missing?</p>
<p>The Kubernetes object that seems to match this approach the most closely is the Projected Volume, since it "maps existing volume sources into the same directory". However, it doesn't support the type of volume source I need and I don't think it is possible to add/remove volume sources dynamically without restarting the pods that use this Projected Volume.</p>
| Odsh | <p>Your current solution using HostPath on the nodes is definitely not flexible and not secure, so it is not a good practice.</p>
<p>I think you should consider using one of the custom drivers for your SMB shares:</p>
<ul>
<li><a href="https://github.com/fstab/cifs#cifs-flexvolume-plugin-for-kubernetes" rel="nofollow noreferrer">CIFS FlexVolume Plugin</a> - older solution, not maintained</li>
<li><a href="https://github.com/kubernetes-csi/csi-driver-smb#smb-csi-driver-for-kubernetes" rel="nofollow noreferrer">SMB CSI Driver</a> - actively developed (recommended)</li>
</ul>
<hr />
<p><strong>CIFS FlexVolume Plugin</strong>:</p>
<p>This solution is older and it is replaced by a CSI Driver. The advantage compared to CSI is that you can specify <a href="https://github.com/fstab/cifs#running" rel="nofollow noreferrer">SMB shares directly from the pod definition (including credentials as Kubernetes secret) as you prefer</a>.</p>
<p><a href="https://github.com/fstab/cifs#installing" rel="nofollow noreferrer">Here</a> you can find instructions on how to install this plugin on your cluster.</p>
<p><strong>SMB CSI Driver</strong>:</p>
<p>This driver will automatically take care of <a href="https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/v1.4.0/csi-smb-node.yaml" rel="nofollow noreferrer">mounting SMB shares on all nodes by using DaemonSet</a>.</p>
<p>You can install SMB CSI Driver either by <a href="https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-master.md#install-smb-csi-driver-master-version-on-a-kubernetes-cluster" rel="nofollow noreferrer">bash script</a> or by using <a href="https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts" rel="nofollow noreferrer">a helm chart</a>.</p>
<p>Assuming you have your SMB server ready, you can use one of the following solution to access it from your pod:</p>
<ul>
<li><a href="https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md#option1-storage-class-usage" rel="nofollow noreferrer">Storage class</a></li>
<li><a href="https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md#option2-pvpvc-usage" rel="nofollow noreferrer">PV/PVC</a></li>
</ul>
<p>In both cases you have to use a previously created secret with the credentials.</p>
<p>In your case, for every SMB share you should create a Storage class / PV and mount it to the pod.</p>
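<p>For illustration, a sketch of a statically provisioned <code>PersistentVolume</code> for a single share with the SMB CSI Driver (server address, share name and secret name are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-share-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-server.example.com/share1   # must be unique per share
    volumeAttributes:
      source: //smb-server.example.com/share1     # placeholder SMB share
    nodeStageSecretRef:
      name: smb-creds        # secret with the username/password for this share
      namespace: default
</code></pre>
<p>Each dynamically defined share would then get its own PV/PVC pair bound to the pod.</p>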
<p>The advantage of CSI Driver is that it is <a href="https://medium.com/flant-com/kubernetes-volume-plugins-from-flexvolume-to-csi-c9a011d2670d" rel="nofollow noreferrer">newer, currently maintained solution and it replaced FlexVolume</a>.</p>
<p>Below is a diagram representing how the CSI plugin operates:
<img src="https://miro.medium.com/max/2400/0*I4HPfmFAGgM6-oGw" alt="" /></p>
<p>Also check:</p>
<ul>
<li><a href="https://medium.com/flant-com/kubernetes-volume-plugins-from-flexvolume-to-csi-c9a011d2670d" rel="nofollow noreferrer">Kubernetes volume plugins evolution from FlexVolume to CSI</a></li>
<li><a href="https://kubernetes.io/blog/2018/01/introducing-container-storage-interface/" rel="nofollow noreferrer">Introducing Container Storage Interface (CSI) Alpha for Kubernetes</a></li>
</ul>
| Mikolaj S. |
<p>I have a service account which has access to one of the app namespaces. I have created a cluster role and a rolebinding and mapped it to the associated service account in that namespace.
Everything works as expected except listing/creating PVs at the cluster level. Can someone please help?</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: dxf-clusterrole
rules:
-
apiGroups:
- ""
- apps
- batch
- extensions
- policy
- rbac.authorization.k8s.io
- roles.rbac.authorization.k8s.io
- authorization.k8s.io
resources:
- secrets
- configmaps
- deployments
- endpoints
- horizontalpodautoscalers
- jobs
- limitranges
- namespaces
- nodes
- pods
- persistentvolumes
- persistentvolumeclaims
- resourcequotas
- replicasets
- replicationcontrollers
- serviceaccounts
- services
- role
- rolebindings
verbs:
- get
- watch
- list
- create
- delete
- nonResourceURLs: ["*"]
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: dxf-clusterrolebinding
namespace: dxf-uat
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: dxf-clusterrole
subjects:
- kind: ServiceAccount
name: dxf-deployer
namespace: dxf-uat
</code></pre>
<p>User "system:serviceaccount:dxf-uat:dxf-deployer" cannot get resource "persistentvolumes" in API group "" at the cluster scope</p>
| cks cks | <p>There are four Kubernetes objects: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Role, ClusterRole</a>, <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">RoleBinding</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">ClusterRoleBinding</a>, that we can use to configure needed <strong>RBAC</strong> rules. <code>Role</code> and <code>RoleBinding</code> are namespaced and <code>ClusterRole</code> and <code>ClusterRoleBinding</code> are cluster scoped resources.</p>
<p>As you can see in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">RoleBinding and ClusterRoleBinding documentation</a>:</p>
<blockquote>
<p>A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding grants that access cluster-wide.</p>
</blockquote>
<hr />
<p>Your problem is with all cluster-scope resources such as <code>PersistentVolumes</code>, <code>Nodes</code>, <code>Namespaces</code> etc:</p>
<pre><code>$ kubectl get nodes --as=system:serviceaccount:dxf-uat:dxf-deployer
Error from server (Forbidden): nodes is forbidden: User "system:serviceaccount:dxf-uat:dxf-deployer" cannot list resource "nodes" in API group "" at the cluster scope
$ kubectl get persistentvolumes -n dxf-uat --as=system:serviceaccount:dxf-uat:dxf-deployer
Error from server (Forbidden): persistentvolumes is forbidden: User "system:serviceaccount:dxf-uat:dxf-deployer" cannot list resource "persistentvolumes" in API group "" at the cluster scope
$ kubectl get namespaces --as=system:serviceaccount:dxf-uat:dxf-deployer
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:dxf-uat:dxf-deployer" cannot list resource "namespaces" in API group "" at the cluster scope
</code></pre>
<p>You need to create a <code>ClusterRole</code> with all cluster scoped resources you want to have access from the <code>dxf-deployer</code> <code>ServiceAccount</code> and then bind this <code>ClusterRole</code> to the <code>dxf-deployer</code> <code>ServiceAccount</code> using <code>ClusterRoleBinding</code>.</p>
<p>In the example below, I've granted permissions for <code>dxf-deployer</code> <code>ServiceAccount</code> to <code>Nodes</code> and <code>PersistentVolumes</code>:</p>
<pre><code>$ cat cluster-scope-permissions.yml
# cluster-scope-permissions.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-scope-role
rules:
- apiGroups:
- ""
resources:
- nodes
- persistentvolumes
verbs:
- get
- list
- watch
- create
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-scope-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-scope-role
subjects:
- kind: ServiceAccount
name: dxf-deployer
namespace: dxf-uat
</code></pre>
<p>Finally, we can check if it works as expected:</p>
<pre><code>$ kubectl apply -f cluster-scope-permissions.yml
clusterrole.rbac.authorization.k8s.io/cluster-scope-role created
clusterrolebinding.rbac.authorization.k8s.io/cluster-scope-rolebinding created
$ kubectl get nodes --as=system:serviceaccount:dxf-uat:dxf-deployer
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 5h11m v1.18.12-gke.1210
node2 Ready <none> 5h11m v1.18.12-gke.1210
$ kubectl get persistentvolumes -n dxf-uat --as=system:serviceaccount:dxf-uat:dxf-deployer
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0ba2fd12-c883-45b8-b52d-a6c826a2775a 8Gi RWO Delete Bound default/my-jenkins standard 131m
pvc-b4b7a4c8-c9ad-4e83-b1ee-663b3e4d938b 10Gi RWO Delete Bound default/debug-pvc standard 5h12m
</code></pre>
| matt_j |
<p>I'm running Apache Flink in standalone Kubernetes (session) mode without Job Manager HA. But I need to deploy Job Manager HA, because only in HA mode can Flink be persistent (i.e. save jobs after a Job Manager restart).
Flink runs in a dedicated Kubernetes namespace, and I have permissions only for that namespace.</p>
<p>HA is enabled using this article:
<a href="https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/" rel="nofollow noreferrer">https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/</a></p>
<p>and I use yaml files from this article:
<a href="https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/#kubernetes-high-availability-services" rel="nofollow noreferrer">https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/#kubernetes-high-availability-services</a></p>
<p>I have, for example, a k8s namespace named flink-namespace. In this namespace I've created:</p>
<ul>
<li>serviceAccount named flink-sa</li>
<li>a role with permissions to create/edit ConfigMaps in this namespace</li>
</ul>
<p>so this serviceAccount has permissions to create/edit ConfigMaps, but only in this namespace.</p>
<p>After deployment, jobManager can't start and throws error:</p>
<p><em>Caused by: io.fabric8.kubernetes.client.KubernetesClientException: configmaps "flink-restserver-leader" is forbidden: User "system:serviceaccount:flink-namespace:flink-sa" cannot watch resource "configmaps" in API group "" in the namespace "default"</em></p>
<p>Which means that the serviceAccount Flink uses to manage ConfigMaps tries to create the ConfigMap in the "default" namespace instead of the "flink-namespace" namespace.</p>
<p>Does anybody know how to configure Flink to manage ConfigMaps in a specified namespace?</p>
| danthelos | <p>Problem solved - I found in the Flink source code that it is possible to tell Flink which Kubernetes namespace it is running in.
So, to solve this problem you should set this in the config:</p>
<p><em>kubernetes.namespace: YOUR_NAMESPACE_NAME</em></p>
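<p>For completeness, a minimal sketch of how this can look in the <code>flink-conf.yaml</code> section of the Flink ConfigMap — the cluster id, HA factory class and storage dir come from the linked HA documentation and are placeholders to adapt to your setup; <code>kubernetes.namespace</code> is the key discussed here:</p>
<pre><code>kubernetes.cluster-id: flink-session-cluster
kubernetes.namespace: flink-namespace
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://flink-ha/recovery
</code></pre>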
| danthelos |
<p>When I'm running the following command:</p>
<pre class="lang-sh prettyprint-override"><code>minikube addons enable ingress
</code></pre>
<p>I'm getting the following error:</p>
<pre class="lang-sh prettyprint-override"><code>▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
❌ Exiting due to MK_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
stderr:
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-controller\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"minReadySeconds\":0,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"}},\"strategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"gcp-auth-skip-secret\":\"true\"}},\"spec\":{\"containers\":[{\"args\":[\"/nginx-ingress-controller\",\"--ingress-class=nginx\",\"--configmap=$(POD_NAMESPACE)/ingress-nginx-controller\",\"--report-node-internal-ip-address\",\"--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\",\"--udp-services-configmap=$(POD_NAMESPACE)/udp-services\",\"--validating-webhook=:8443\",\"--validating-webhook-certificate=/usr/local/certificates/cert\",\"--validating-webhook-key=/usr/local/certificates/key\"],\"env\":[{\"name\":\"POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"LD_PRELOAD\",\"value\":\"/usr/local/lib/libmimalloc.so\"}],\"image\":\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\",\"imagePullPolicy\":\"IfNotPresent\",\"lifecycle\":{\"preStop\":{\"exec\":{\"command\":[\"/wait-shutdown\"]}}},\"livenessProbe\":{\"failureThreshold\":5,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"name\":\"controller\",\"ports\":[{\"containerPort\":80,\"hostPort\":80,\"name\":\"http\",\"protocol\":\"TCP\"},{\"containerPort\":443,\"hostPort\":443,\"name\":\"https\",\"protocol\":\"TCP\"},{\"containerPort\":8443,\"name\":\"webhook\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"90Mi\"}},\"securityContext\":{\"allowPrivilegeEscalation\":true,\"capabilities\":{\"add\":[\"NET_BIND_SERVICE\"],\"drop\":[\"ALL\"]},\"runAsUser\":101},\"volumeMounts\":[{\"mountPath\":\"/usr/local/certificates/\",\"name\":\"webhook-cert\",\"readOnly\":true}]}],\"dnsPolicy\":\"ClusterFirst\",\"serviceAccountName\":\"ingress-nginx\",\"volumes\":[{\"name\":\"webhook-cert\",\"secret\":{\"secretName\":\"ingress-nginx-admission\"}}]}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"minReadySeconds":0,"selector":{"matchLabels":{"addonmanager.kubernetes.io/mode":"Reconcile"}},"strategy":{"$retainKeys":["rollingUpdate","type"],"rollingUpdate":{"maxUnavailable":1}},
"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","gcp-auth-skip-secret":"true"}},"spec":{"$setElementOrder/containers":[{"name":"controller"}],"containers":[{"$setElementOrder/ports":[{"containerPort":80},{"containerPort":443},{"containerPort":8443}],"args":["/nginx-ingress-controller","--ingress-class=nginx","--configmap=$(POD_NAMESPACE)/ingress-nginx-controller","--report-node-internal-ip-address","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--validating-webhook=:8443","--validating-webhook-certificate=/usr/local/certificates/cert","--validating-webhook-key=/usr/local/certificates/key"],"image":"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a","name":"controller","ports":[{"containerPort":80,"hostPort":80},{"containerPort":443,"hostPort":443}]}],"nodeSelector":null,"terminationGracePeriodSeconds":null}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "ingress-nginx-controller", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Deployment.apps "ingress-nginx-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"controller", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc\",\"--namespace=$(POD_NAMESPACE)\",\"--secret-name=ingress-nginx-admission\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-create", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"d33a74a3-101c-4e82-a2b7-45b46068f189", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc", "--namespace=$(POD_NAMESPACE)", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00a79dea0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc003184dc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc010b3d980), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=$(POD_NAMESPACE)\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7","name":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-patch", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"ef303f40-b52d-49c5-ab80-8330379fed36", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=$(POD_NAMESPACE)", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00fd798a0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00573d190), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00d7d9100), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
<p>I had some issues on my PC, so I reinstalled minikube. After that, <code>minikube start</code> worked fine, but when I enable ingress the above error shows up.</p>
<p>And when I run <code>skaffold dev</code> the following error shows up:</p>
<pre class="lang-sh prettyprint-override"><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
exiting dev mode because first deploy failed: kubectl apply: exit status 1
</code></pre>
| Ishan Joshi | <p>As <strong>@Brian de Alwis</strong> pointed out in the comments section, this PR <a href="https://github.com/kubernetes/minikube/pull/11189" rel="nofollow noreferrer">#11189</a> should resolve the above issue.</p>
<p>You can try the <a href="https://github.com/kubernetes/minikube/releases/tag/v1.20.0-beta.0" rel="nofollow noreferrer">v1.20.0-beta.0</a> release with this fix. Additionally, a stable <a href="https://github.com/kubernetes/minikube/releases/tag/v1.20.0" rel="nofollow noreferrer">v1.20.0</a> version is now available.</p>
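<p>If upgrading is not an option right away, recreating the local cluster before enabling the addon usually clears the <code>field is immutable</code> conflict as well (a rough workaround, assuming you can afford to drop the current minikube cluster):</p>
<pre><code>minikube delete
minikube start
minikube addons enable ingress
</code></pre>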
| matt_j |
<p>I installed the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus-0.9.0</a>, and want to deploy a sample application on which to test the Prometheus metrics autoscaling, with the following resource manifest file: (hpa-prome-demo.yaml)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hpa-prom-demo
spec:
selector:
matchLabels:
app: nginx-server
template:
metadata:
labels:
app: nginx-server
spec:
containers:
- name: nginx-demo
image: cnych/nginx-vts:v1.0
resources:
limits:
cpu: 50m
requests:
cpu: 50m
ports:
- containerPort: 80
name: http
---
apiVersion: v1
kind: Service
metadata:
name: hpa-prom-demo
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "80"
prometheus.io/path: "/status/format/prometheus"
spec:
ports:
- port: 80
targetPort: 80
name: http
selector:
app: nginx-server
type: NodePort
</code></pre>
<p>For testing purposes, I used a NodePort Service and luckily I can get the http response after applying the deployment. Then I installed
Prometheus Adapter via the Helm Chart, creating a new <code>hpa-prome-adapter-values.yaml</code> file to override the default values, as follows.</p>
<pre class="lang-yaml prettyprint-override"><code>rules:
default: false
custom:
- seriesQuery: 'nginx_vts_server_requests_total'
resources:
overrides:
kubernetes_namespace:
resource: namespace
kubernetes_pod_name:
resource: pod
name:
matches: "^(.*)_total"
as: "${1}_per_second"
metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
url: http://prometheus-k8s.monitoring.svc
port: 9090
</code></pre>
<p>I added a custom rule and specified the address of Prometheus, then installed Prometheus-Adapter with the following command.</p>
<pre class="lang-sh prettyprint-override"><code>$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
</code></pre>
<p>Finally the adapter was installed successfully, and I can get the http response, as follows.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>But it was supposed to be like this,</p>
<pre class="lang-json prettyprint-override"><code>$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>Why can't I get the metric <code>pods/nginx_vts_server_requests_per_second</code>? As a result, the query below also failed.</p>
<pre class="lang-sh prettyprint-override"><code> kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
</code></pre>
<p>Could anybody please help? Many thanks.</p>
| Marco Mei | <p><strong>ENV</strong>:</p>
<ol>
<li>all Prometheus charts installed with helm from <code>prometheus-community https://prometheus-community.github.io/helm-chart</code></li>
<li>k8s cluster provided by Docker for Mac</li>
</ol>
<p><strong>Solution</strong>:<br />
I met the same problem. From the Prometheus UI, I found the metric had a <code>namespace</code> label but no <code>pod</code> label, as below.</p>
<pre><code>nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
</code></pre>
<p>I thought Prometheus may <strong>NOT</strong> use <code>pod</code> as a label, so I checked the Prometheus config and found:</p>
<pre><code>121 - action: replace
122 source_labels:
123 - __meta_kubernetes_pod_node_name
124 target_label: node
</code></pre>
<p>Then I searched
<a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/configuration/configuration/</a> and did a similar thing as below under every <code>__meta_kubernetes_pod_node_name</code> occurrence I found (i.e. 2 places)</p>
<pre><code>125 - action: replace
126 source_labels:
127 - __meta_kubernetes_pod_name
128 target_label: pod
</code></pre>
<p>After a while, the ConfigMap reloaded, and both the UI and the API could find the <code>pod</code> label:</p>
<pre><code>$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
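<p>Once the <code>pod</code>-scoped metric shows up, a HorizontalPodAutoscaler can consume it. A minimal sketch — the target of 10 requests per second is an arbitrary example, and the <code>autoscaling/v2beta2</code> API version may need adjusting for your cluster version:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-prom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-prom-demo
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_vts_server_requests_per_second
      target:
        type: AverageValue
        averageValue: 10
</code></pre>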
| Jing Wang |
<p>I'm trying to install emissary-ingress using the <a href="https://www.getambassador.io/docs/emissary/latest/topics/install/install-ambassador-oss/" rel="nofollow noreferrer">instructions here</a>.</p>
<p>It started failing with error <code>no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta"</code>. I searched and found an <a href="https://stackoverflow.com/a/69054743/4980703">answer</a> on Stack Overflow which said to update <code>apiextensions.k8s.io/v1beta1</code> to <code>apiextensions.k8s.io/v1</code> which I did.
It also asked to use the <code>admissionregistration.k8s.io/v1</code> which my kubectl already uses.</p>
<p>When I run the <code>kubectl apply -f filename.yml</code> command, the above error is gone, but a new one appears: <code>error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec;</code></p>
<p>What should I do next?</p>
<p>My kubectl version - Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}</p>
<p>minikube version - minikube version: v1.23.2
commit: 0a0ad764652082477c00d51d2475284b5d39ceed</p>
<p>EDIT:</p>
<p>The custom resource definition yml file: <a href="https://www.getambassador.io/yaml/ambassador/ambassador-crds.yaml" rel="nofollow noreferrer">here</a></p>
<p>The rbac yml file: <a href="https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml" rel="nofollow noreferrer">here</a></p>
| Kartikeya Gokhale | <p>The top-level <code>validation</code> field from <code>v1beta1</code> is no longer supported in <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#customresourcedefinition-v122" rel="nofollow noreferrer">apiextensions.k8s.io/v1</a>.
According to the official Kubernetes documentation, you should use a per-version <code>schema</code> as a substitution for <code>validation</code>.
Here is a SAMPLE using <code>schema</code> instead of <code>validation</code>:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: crontabs.stable.example.com
spec:
group: stable.example.com
versions:
- name: v1
served: true
storage: true
---> schema: <---
# openAPIV3Schema is the schema for validating custom objects.
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
cronSpec:
type: string
pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
image:
type: string
replicas:
type: integer
minimum: 1
maximum: 10
</code></pre>
| Vicente Ayala |
<p>I have several images that do different things. Now, I expose them with commands like these:</p>
<pre><code>kubectl create deployment work_deployment_1 --image=username/work_image_1:0.0.1-SNAPSHOT
kubectl expose deployment work_deployment_1 --type=LoadBalancer --port=8000
</code></pre>
<p>and then</p>
<pre><code>kubectl create deployment work_deployment_2 --image=username/work_image_2:0.0.1-SNAPSHOT
kubectl expose deployment work_deployment_2 --type=LoadBalancer --port=9000
</code></pre>
<p>After creating and exposing the deployments, I check them with <code>kubectl get service</code>; the result looks like:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
work_deployment_1 LoadBalancer 10.245.197.226 159.65.210.104 8000:30798/TCP 30m
work_deployment_2 LoadBalancer 10.245.168.156 159.65.129.201 9000:32105/TCP 51s
</code></pre>
<p>Can I make the deployment (or deployments) expose <code>same_external-ip:8000</code> and <code>same_external-ip:9000</code>, instead of the addresses above (<code>159.65.210.104:8000</code> and <code>159.65.129.201:9000</code>)?</p>
| hlx | <p>You will need to have an Ingress controller installed in your cluster to handle incoming traffic and route it to the appropriate service. Examples of Ingress controllers include Nginx Ingress, Traefik, and Istio.</p>
<p>Another approach we use in Azure and Google Cloud is to expose the services via an App Gateway in Azure or an HTTPS Global Load Balancer in GCP.</p>
<p>In the GCP case, the services are exposed on the load balancer's single Anycast IP.</p>
<p>In the GCP case the workflow is:
<em>Create a Kubernetes service > Create a backend service that references each Kubernetes service > Create a URL map that maps the incoming requests to the appropriate backend service based on the requested URL or hostname > Create a target HTTP proxy that references the URL map > Create a Google Cloud HTTPS load balancer and configure it to use the target HTTP proxy</em></p>
<p>Each time, the frontend uses the SAME Anycast IP with different ports.</p>
<p>In your private cloud case I would recommend Traefik; you can follow their documentation on this: <a href="https://doc.traefik.io/traefik/providers/kubernetes-ingress/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/providers/kubernetes-ingress/</a></p>
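<p>For example, a single Ingress can route both services behind one external IP. A minimal sketch, assuming an NGINX Ingress controller is installed and the Services use valid DNS-1123 names (e.g. <code>work-deployment-1</code> instead of <code>work_deployment_1</code>) with type <code>ClusterIP</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: work-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: work-deployment-1
            port:
              number: 8000
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: work-deployment-2
            port:
              number: 9000
</code></pre>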
| Abdul Fahad |
<p>I am trying to create a web API (ASP.NET Core using Azure AD OAuth for authorization) which runs in Kubernetes (Bare-metal, using NGINX-Ingress).
Running the API in IIS Express works without error, but after turning it into a Docker image and deploying it in the cluster, the app randomly throws the following exception on any request:</p>
<pre class="lang-text prettyprint-override"><code>fail: Microsoft.AspNetCore.Server.Kestrel[13]
Connection id "0HMBENKCJR3ER", Request id "0HMBENKCJR3ER:00000003": An unhandled exception was thrown by the application.
System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'System.String'.
---> System.IO.IOException: IDX20804: Unable to retrieve document from: 'System.String'.
---> System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch
at System.Net.Security.SslStream.SendAuthResetSignal(ProtocolToken message, ExceptionDispatchInfo exception)
at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Boolean async, Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Boolean async, Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttpConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.SendAsyncCore(HttpRequestMessage request, HttpCompletionOption completionOption, Boolean async, Boolean emitTelemetryStartStop, CancellationToken cancellationToken)
at Microsoft.IdentityModel.Protocols.HttpDocumentRetriever.GetDocumentAsync(String address, CancellationToken cancel)
--- End of inner exception stack trace ---
at Microsoft.IdentityModel.Protocols.HttpDocumentRetriever.GetDocumentAsync(String address, CancellationToken cancel)
at Microsoft.Identity.Web.InstanceDiscovery.IssuerConfigurationRetriever.GetConfigurationAsync(String address, IDocumentRetriever retriever, CancellationToken cancel)
at Microsoft.IdentityModel.Protocols.ConfigurationManager`1.GetConfigurationAsync(CancellationToken cancel)
--- End of inner exception stack trace ---
at Microsoft.IdentityModel.Protocols.ConfigurationManager`1.GetConfigurationAsync(CancellationToken cancel)
at Microsoft.IdentityModel.Protocols.ConfigurationManager`1.GetConfigurationAsync()
at Microsoft.Identity.Web.Resource.AadIssuerValidator.GetIssuerValidator(String aadAuthority)
at Microsoft.Identity.Web.MicrosoftIdentityWebApiAuthenticationBuilderExtensions.<>c__DisplayClass3_0.<AddMicrosoftIdentityWebApiImplementation>b__0(JwtBearerOptions options, IServiceProvider serviceProvider, IOptionsMonitor`1 microsoftIdentityOptionsMonitor)
at Microsoft.Extensions.Options.ConfigureNamedOptions`3.Configure(String name, TOptions options)
at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass11_0.<Get>b__0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
--- End of stack trace from previous location ---
at System.Lazy`1.CreateValue()
at System.Lazy`1.get_Value()
at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.InitializeAsync(AuthenticationScheme scheme, HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationHandlerProvider.GetHandlerAsync(HttpContext context, String authenticationScheme)
at Microsoft.AspNetCore.Authentication.AuthenticationService.AuthenticateAsync(HttpContext context, String scheme)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
</code></pre>
<p>Sometimes a pod works without issue and sometimes it will persistently fail with this error on each request, but this changes seemingly randomly each time it's deployed.
NGINX-Ingress on the cluster is fully configured with both own certificate and intermediate certificate and can serve a similar API without authorization over HTTPS without error.</p>
<p>Here's the Dockerfile for the image:</p>
<pre class="lang-text prettyprint-override"><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
RUN apt-get update \
&& apt-get install -y --no-install-recommends libgdiplus libc6-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /src
COPY ["AuthTest/AuthTest.csproj", "AuthTest/"]
RUN dotnet restore "AuthTest/AuthTest.csproj"
COPY . .
WORKDIR "/src/AuthTest"
RUN dotnet build "AuthTest.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "AuthTest.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AuthTest.dll"]
</code></pre>
<p>And this is the .yaml file for the deployment and ingress:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: authtest-dep
labels:
app: authtest
spec:
selector:
matchLabels:
app: authtest-app
replicas: 4
template:
metadata:
labels:
app: authtest-app
spec:
containers:
- name: authtest-app
image: authtest:latest
imagePullPolicy: Never
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: authtest-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/x-forwarded-prefix: /api/auth
spec:
tls:
- hosts:
- valid.hostname.com
secretName: secret-tls
rules:
- host: valid.hostname.com
http:
paths:
- path: /api/auth/(.*)
pathType: Prefix
backend:
service:
name: authtest-service
port:
number: 80
</code></pre>
<p>I have tried including our own certificates (which are not self-signed) in the Docker image and also attempted to override the certificate validation to see what certificate fails, with no success.
I couldn't find any answers on StackOverflow, because most of them seemed to revolve around the use of self-signed certs or had solutions involving disabling certificate authentication, which seems to be counterproductive.
My question is what certificate is the error being thrown for and what can be done to fix it?</p>
| Breenono | <p>After a lot of searching and diagnosing I found a solution.</p>
<p>In my case the DNS was misbehaving. When the API pod tries to connect to login.microsoftonline.com it first tries to resolve DNS within the cluster, resulting in the following:</p>
<pre><code>[INFO] 10.244.1.60:53217 - 25833 "AAAA IN login.microsoftonline.com.default.svc.cluster.local. udp 69 false 512" NXDOMAIN qr,aa,rd 162 0.000272099s
[INFO] 10.244.1.60:53217 - 30678 "A IN login.microsoftonline.com.default.svc.cluster.local. udp 69 false 512" NXDOMAIN qr,aa,rd 162 0.000654896s
[INFO] 10.244.1.59:42740 - 22396 "AAAA IN login.microsoftonline.com.svc.cluster.local. udp 61 false 512" NXDOMAIN qr,aa,rd 154 0.000201999s
[INFO] 10.244.1.59:42740 - 25712 "A IN login.microsoftonline.com.svc.cluster.local. udp 61 false 512" NXDOMAIN qr,aa,rd 154 0.000690095s
[INFO] 10.244.1.59:44797 - 49225 "A IN login.microsoftonline.com.cluster.local. udp 57 false 512" NXDOMAIN qr,aa,rd 150 0.000318898s
[INFO] 10.244.1.59:44797 - 60243 "AAAA IN login.microsoftonline.com.cluster.local. udp 57 false 512" NXDOMAIN qr,aa,rd 150 0.000847195s
[INFO] 10.244.1.59:53903 - 63962 "AAAA IN login.microsoftonline.com.mydomain.com. udp 57 false 512" NXDOMAIN qr,aa,rd,ra 152 0.001664889s
[INFO] 10.244.1.59:53903 - 58575 "A IN login.microsoftonline.com.mydomain.com. udp 57 false 512" NOERROR qr,aa,rd,ra 112 0.001311591s
</code></pre>
<p>DNS was incorrectly giving me a NOERROR result for <em>login.microsoftonline.com.mydomain.com</em>, resulting in a connection to an address with my certificate. Using curl in the pod showed:</p>
<pre class="lang-none prettyprint-override"><code>$ curl -v login.microsoftonline.com
* Server certificate:
* subject: CN=*.mydomain.com
* start date: Mar 11 00:00:00 2021 GMT
* expire date: Apr 11 23:59:59 2022 GMT
* subjectAltName does not match login.microsoftonline.com
* SSL: no alternative certificate subject name matches target host name 'login.microsoftonline.com'
</code></pre>
<p><strong>This caused the RemoteCertificateNameMismatch error.</strong></p>
<p>I found two ways to work around this:</p>
<ol>
<li>Use a fully qualified domain name by adding a dot to the end of the URL (example: google.com. instead of google.com). This bypasses the DNS resolution and makes it connect directly to the specified address. Sadly, this didn't work for login.microsoftonline.com, so I used option 2.</li>
<li>Adjust <em>ndots</em> for the DNS config of the pod, by adding the following dnsConfig under spec:</li>
</ol>
<pre><code>spec:
containers:
- name: authtest
image: authtest:latest
imagePullPolicy: Never
dnsConfig:
options:
- name: ndots
value: "2"
</code></pre>
<p>By default, <em>ndots</em> is set to 5. This means that any URL with fewer than five dots is not considered an absolute domain, and DNS will try to resolve it using local search domains first, before finally trying it as an absolute address.</p>
<p>By specifying <em>ndots</em> to be two, login.microsoftonline.com will automatically become an absolute domain and the faulty internal resolution will not happen.</p>
<p>This could be considered a band-aid fix for the problem with the DNS resolving incorrectly, but in my case it solved the issue.</p>
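<p>To confirm the setting is applied, you can check the generated resolver configuration inside the pod; the output should contain <code>options ndots:2</code> (the search domains listed will differ per cluster):</p>
<pre><code>kubectl exec -it &lt;pod-name&gt; -- cat /etc/resolv.conf
</code></pre>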
| Breenono |
<p>I am trying to run an nginx image as unprivileged, and found the following command stanza required to make this happen. I am NOT concerned with running the official nginx-unprivileged image, as that would defeat the purpose of the exercise (don't ask why...please).</p>
<p>These are the commands I intend to convert from Linux terminal style into the init-container section of a Kubernetes YAML Pod manifest...</p>
<pre><code>RUN sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf \
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf \
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
&& chown -R 101:0 /var/cache/nginx \
&& chmod -R g+w /var/cache/nginx \
&& chown -R 101:0 /etc/nginx \
&& chmod -R g+w /etc/nginx
</code></pre>
<p>I have tried the following using block scalars to no avail...</p>
<pre><code>...
command: ["/bin/sh", "-c"]
args:
- >
sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf \
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf \
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
&& chown -R 101:0 /var/cache/nginx \
&& chmod -R g+w /var/cache/nginx \
&& chown -R 101:0 /etc/nginx \
&& chmod -R g+w /etc/nginx
...
...
command: ["/bin/sh", "-c"]
args:
- |
sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf \
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf \
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
&& chown -R 101:0 /var/cache/nginx \
&& chmod -R g+w /var/cache/nginx \
&& chown -R 101:0 /etc/nginx \
&& chmod -R g+w /etc/nginx
...
</code></pre>
<p>Also using a single line...</p>
<pre><code>...
command: ["/bin/sh"]
args: ["-c", "sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf && sed -i '/user nginx;/d' /etc/nginx/nginx.conf && sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf && sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf && chown -R 101:0 /var/cache/nginx && chmod -R g+w /var/cache/nginx && chown -R 101:0 /etc/nginx && chmod -R g+w /etc/nginx"]
...
</code></pre>
<p>None of these worked...the init-container never starts.</p>
<p>Here is another attempt...but the initContainer remains in a crashloopbackoff state...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: securityreview
spec:
securityContext:
runAsUser: 101
runAsNonRoot: True
initContainers:
- name: permission-fix
image: nginx
command:
- /bin/sh
- -c
- sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n
fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n
scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf
&& chown -R 101:0 /var/cache/nginx && chmod -R g+w /var/cache/nginx
&& chown -R 101:0 /etc/nginx && chmod -R g+w /etc/nginx
containers:
- name: webguy
image: nginx
securityContext:
runAsUser: 101
runAsGroup: 101
allowPrivilegeEscalation: false
</code></pre>
| EMP JCR | <p>I like to use the following approach to separating multiple commands in a readable way:</p>
<pre><code>command: ["/bin/sh", "-c"]
args:
- >
command1 &&
command2 &&
...
commandN
</code></pre>
<p>However, your case is more complicated, as running <code>sed</code>, <code>chown</code> and <code>chmod</code> commands without root privileges will result in a <code>Permission denied</code> error.</p>
<p>You can use an init container that shares a Volume with the nginx container.
The init container will run the <code>sed</code>,<code>chown</code> and <code>chmod</code> commands as <code>root</code> and then copy the modified files to the shared Volume that will be mounted and used by the nginx container. In this approach, you need a volume that init and application containers can use.<br />
A similar use case can be found in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container" rel="nofollow noreferrer">Configure Pod Initialization</a> documentation.</p>
<p>I will create an example to illustrate how it works.</p>
<hr />
<p>As you can see in the code snippet below, I created the <code>permission-fix</code> init container that runs required commands and then copies modified files to the shared volume (<code>cp -Rp /etc/nginx/* /mnt/nginx-fix/</code>). The <code>webguy</code> container then mounts these files to <code>/etc/nginx</code>:</p>
<pre><code>$ cat nginx-unpriv.yml
apiVersion: v1
kind: Pod
metadata:
name: securityreview
spec:
initContainers:
- name: permission-fix
image: nginx
command: ["/bin/sh", "-c"]
args:
- >
sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf &&
sed -i '/user nginx;/d' /etc/nginx/nginx.conf &&
sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf &&
sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf &&
chown -R 101:0 /var/cache/nginx &&
chmod -R g+w /var/cache/nginx &&
chown -R 101:0 /etc/nginx &&
chmod -R g+w /etc/nginx &&
cp -Rp /etc/nginx/* /mnt/nginx-fix/
volumeMounts:
- name: nginx-fix
mountPath: "/mnt/nginx-fix"
containers:
- name: webguy
image: nginx
volumeMounts:
- name: nginx-fix
mountPath: "/etc/nginx"
securityContext:
runAsUser: 101
runAsGroup: 101
allowPrivilegeEscalation: false
volumes:
- name: nginx-fix
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p>We can check if it works as expected:</p>
<pre><code>$ kubectl apply -f nginx-unpriv.yml
pod/securityreview created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
securityreview 1/1 Running 0 12s
$ kubectl exec -it securityreview -c webguy -- bash
nginx@securityreview:/$ id
uid=101(nginx) gid=101(nginx) groups=101(nginx)
nginx@securityreview:/$ ls -l /etc/nginx
total 44
drwxrwxr-x 2 nginx root 4096 Jun 10 13:18 conf.d
-rw-rw-r-- 1 nginx root 1007 May 25 12:28 fastcgi_params
drwx------ 2 root root 16384 Jun 10 13:18 lost+found
-rw-rw-r-- 1 nginx root 5290 May 25 12:28 mime.types
lrwxrwxrwx 1 nginx root 22 May 25 13:01 modules -> /usr/lib/nginx/modules
-rw-rw-r-- 1 nginx root 826 Jun 10 13:18 nginx.conf
-rw-rw-r-- 1 nginx root 636 May 25 12:28 scgi_params
-rw-rw-r-- 1 nginx root 664 May 25 12:28 uwsgi_params
</code></pre>
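<p>If you prefer not to pre-create a <code>PersistentVolumeClaim</code>, an <code>emptyDir</code> volume should also work for this one-shot fix, since the init container repopulates it every time the Pod starts. A sketch of the alternative <code>volumes</code> section only (not tested here):</p>
<pre><code>  volumes:
  - name: nginx-fix
    emptyDir: {}
</code></pre>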
<p>If this response does not answer your question, please provide more details on what you want to achieve.</p>
| matt_j |
<p>I am trying to have 1 redis master with 2 redis replicas tied to a 3 Quorum Sentinel on Kubernetes. I am very new to Kubernetes.</p>
<p>My initial plan was to have the master running on a pod tied to 1 Kubernetes SVC and the 2 replicas running on their own pods tied to another Kubernetes SVC. Finally, the 3 Sentinel pods will be tied to their own SVC. The replicas will be tied to the master SVC (because without svc, ip will change). The sentinel will also be configured and tied to master and replica SVCs. But I'm not sure if this is feasible because when master pod crashes, how will one of the replica pods move to the master SVC and become the master? Is that possible?</p>
<p>The second approach I had was to wrap redis pods in a replication controller and the same for sentinel as well. However, I'm not sure how to make one of the pods master and the others replicas with a replication controller.</p>
<p>Would either of the two approaches work? If not, is there a better design that I can adopt? Any leads would be appreciated.</p>
| Kaushik | <p>You can deploy Redis Sentinel using the <a href="https://helm.sh/" rel="noreferrer">Helm</a> package manager and the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="noreferrer">Redis Helm Chart</a>.<br />
If you don't have <code>Helm3</code> installed yet, you can use this <a href="https://helm.sh/docs/intro/install/" rel="noreferrer">documentation</a> to install it.</p>
<p>I will provide a few explanations to illustrate how it works.</p>
<hr />
<p>First we need to get the <code>values.yaml</code> file from the Redis Helm Chart to customize our installation:</p>
<pre><code>$ wget https://raw.githubusercontent.com/bitnami/charts/master/bitnami/redis/values.yaml
</code></pre>
<p>We can configure a lot of parameters in the <code>values.yaml</code> file , but for demonstration purposes I only enabled Sentinel and set the redis password:<br />
<strong>NOTE:</strong> For a list of parameters that can be configured during installation, see the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#parameters" rel="noreferrer">Redis Helm Chart Parameters</a> documentation.</p>
<pre><code># values.yaml
global:
redis:
password: redispassword
...
replica:
replicaCount: 3
...
sentinel:
enabled: true
...
</code></pre>
<p>Then we can deploy Redis using the configuration from the <code>values.yaml</code> file:<br />
<strong>NOTE:</strong> It will deploy a three Pod cluster (one master and two slaves) managed by the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSets</a> with a <code>sentinel</code> container running inside each Pod.</p>
<pre><code>$ helm install redis-sentinel bitnami/redis --values values.yaml
</code></pre>
<p>Be sure to carefully read the <em><strong>NOTES</strong></em> section of the chart installation output. It contains many useful information (e.g. how to connect to your database from outside the cluster)</p>
<p>After installation, check redis <code>StatefulSet</code>, <code>Pods</code> and <code>Services</code> (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">headless service</a> can be used for internal access):</p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
redis-sentinel-node-0 2/2 Running 0 2m13s 10.4.2.21
redis-sentinel-node-1 2/2 Running 0 86s 10.4.0.10
redis-sentinel-node-2 2/2 Running 0 47s 10.4.1.10
$ kubectl get sts
NAME READY AGE
redis-sentinel-node 3/3 2m41s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-sentinel ClusterIP 10.8.15.252 <none> 6379/TCP,26379/TCP 2m
redis-sentinel-headless ClusterIP None <none> 6379/TCP,26379/TCP 2m
</code></pre>
<p>As you can see, each <code>redis-sentinel-node</code> Pod contains the <code>redis</code> and <code>sentinel</code> containers:</p>
<pre><code>$ kubectl get pods redis-sentinel-node-0 -o jsonpath={.spec.containers[*].name}
redis sentinel
</code></pre>
<p>We can check the <code>sentinel</code> container logs to find out which <code>redis-sentinel-node</code> is the master:</p>
<pre><code>$ kubectl logs -f redis-sentinel-node-0 sentinel
...
1:X 09 Jun 2021 09:52:01.017 # Configuration loaded
1:X 09 Jun 2021 09:52:01.019 * monotonic clock: POSIX clock_gettime
1:X 09 Jun 2021 09:52:01.019 * Running mode=sentinel, port=26379.
1:X 09 Jun 2021 09:52:01.026 # Sentinel ID is 1bad9439401e44e749e2bf5868ad9ec7787e914e
1:X 09 Jun 2021 09:52:01.026 # +monitor master mymaster 10.4.2.21 6379 quorum 2
...
1:X 09 Jun 2021 09:53:21.429 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.2.21 6379
1:X 09 Jun 2021 09:53:21.435 * +slave slave 10.4.1.10:6379 10.4.1.10 6379 @ mymaster 10.4.2.21 6379
...
</code></pre>
<p>As you can see from the logs above, the <code>redis-sentinel-node-0</code> Pod is the master and the <code>redis-sentinel-node-1</code> & <code>redis-sentinel-node-2</code> Pods are slaves.</p>
<p>For testing, let's delete the master and check if sentinel will switch the master role to one of the slaves:</p>
<pre><code> $ kubectl delete pod redis-sentinel-node-0
pod "redis-sentinel-node-0" deleted
$ kubectl logs -f redis-sentinel-node-1 sentinel
...
1:X 09 Jun 2021 09:55:20.902 # Executing user requested FAILOVER of 'mymaster'
...
1:X 09 Jun 2021 09:55:22.666 # +switch-master mymaster 10.4.2.21 6379 10.4.1.10 6379
...
1:X 09 Jun 2021 09:55:50.626 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.1.10 6379
1:X 09 Jun 2021 09:55:50.632 * +slave slave 10.4.2.22:6379 10.4.2.22 6379 @ mymaster 10.4.1.10 6379
</code></pre>
<p>A new master (<code>redis-sentinel-node-2</code> <code>10.4.1.10</code>) has been selected, so everything works as expected.</p>
<p>Additionally, we can display more information by connecting to one of the Redis nodes:</p>
<pre><code>$ kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=redispassword --image docker.io/bitnami/redis:6.2.1-debian-10-r47 --command -- sleep infinity
pod/redis-client created
$ kubectl exec --tty -i redis-client --namespace default -- bash
I have no name!@redis-client:/$ redis-cli -h redis-sentinel-node-1.redis-sentinel-headless -p 6379 -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-sentinel-node-1.redis-sentinel-headless:6379> info replication
# Replication
role:slave
master_host:10.4.1.10
master_port:6379
master_link_status:up
...
</code></pre>
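<p>When connecting applications, point the client at the Sentinel port (26379) of the <code>redis-sentinel</code> Service and let the client library resolve the current master. As a quick check, you can also ask Sentinel directly for the current master address, using the master set name <code>mymaster</code> seen in the logs above (depending on the chart configuration, Sentinel may or may not require the password):</p>
<pre><code>redis-cli -h redis-sentinel -p 26379 -a $REDIS_PASSWORD sentinel get-master-addr-by-name mymaster
</code></pre>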
| matt_j |
<p>I have an application that uses an install token that is required for each new install (token can't be used more than once). I have an api I created that I can curl to, that will recreate my <code>secret.yaml</code> file with a new install token when I run it.</p>
<p>Is there a way to trigger the curl command to execute when I run</p>
<pre><code>kubectl scale deployment --replicas=x
</code></pre>
<p>so that the <code>secret.yaml</code> will update before it creates the new pod?</p>
<p>I know I can run a cronjob every minute, but I would like for this to be able to autoscale eventually, so it would be ideal if I could get the secret file updated just before the new pods are created from the deployment yaml.</p>
| pbay12345 | <p>Since @Lei Yang has already proposed a solution to this issue, I decided to provide a Community Wiki answer just for better visibility to other community members.</p>
<p>Using <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init containers</a> is a good way to trigger the <code>curl</code> command every time a Deployment is scaled. As described in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">documentation</a> - init containers can contain utilities or setup scripts not present in an app image.</p>
<hr />
<p>I've created a simple example to illustrate how init containers can work with the <code>curl</code> command.</p>
<p>First, I created an <code>app-1</code> Deployment which contains an <code>init-curl</code> init container:<br />
<strong>NOTE:</strong> I used the <a href="https://hub.docker.com/r/curlimages/curl" rel="nofollow noreferrer">curlimages/curl</a> image, it's the official curl image generated by the curl docker team.</p>
<pre><code>$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app-1
name: app-1
spec:
selector:
matchLabels:
app: app-1
template:
metadata:
labels:
app: app-1
spec:
initContainers:
- name: init-curl
image: curlimages/curl
command: ['sh', '-c', 'curl example.com']
containers:
- image: nginx
name: nginx
</code></pre>
<p>After creating the Deployment we can check if the <code>init-curl</code> works correctly:</p>
<pre><code>$ kubectl apply -f app-1.yaml
deployment.apps/app-1 created
$ kubectl get pods | grep app-1
app-1-f6d7cdd68-hfp78 1/1 Running 0 12s
$ kubectl logs -f app-1-f6d7cdd68-hfp78 init-curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<!doctype html>
<html>
<head>
<title>Example Domain</title>
...
</code></pre>
<p>Now let's try to scale the <code>app-1</code> Deployment:</p>
<pre><code>$ kubectl scale deployment app-1 --replicas=2
deployment.apps/app-1 scaled
$ kubectl get pods | grep app-1
app-1-f6d7cdd68-hfp78 1/1 Running 0 6m1s
app-1-f6d7cdd68-wbchx 1/1 Running 0 5s
$ kubectl logs -f app-1-f6d7cdd68-wbchx init-curl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1256 100 1256 0 0 2043 0 --:--:-- --:--:-- --:--:-- 2042
<!doctype html>
<html>
<head>
<title>Example Domain</title>
...
</code></pre>
<p>As we can see it works as expected, the <code>curl</code> command is executed every time the Deployment is scaled.</p>
| matt_j |
<p>I installed the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus-0.9.0</a>, and want to deploy a sample application on which to test the Prometheus metrics autoscaling, with the following resource manifest file: (hpa-prome-demo.yaml)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hpa-prom-demo
spec:
selector:
matchLabels:
app: nginx-server
template:
metadata:
labels:
app: nginx-server
spec:
containers:
- name: nginx-demo
image: cnych/nginx-vts:v1.0
resources:
limits:
cpu: 50m
requests:
cpu: 50m
ports:
- containerPort: 80
name: http
---
apiVersion: v1
kind: Service
metadata:
name: hpa-prom-demo
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "80"
prometheus.io/path: "/status/format/prometheus"
spec:
ports:
- port: 80
targetPort: 80
name: http
selector:
app: nginx-server
type: NodePort
</code></pre>
<p>For testing purposes, I used a NodePort Service and, luckily, I can get the HTTP response after applying the deployment. Then I installed
Prometheus Adapter via Helm Chart by creating a new <code>hpa-prome-adapter-values.yaml</code> file to override the default values, as follows.</p>
<pre class="lang-yaml prettyprint-override"><code>rules:
default: false
custom:
- seriesQuery: 'nginx_vts_server_requests_total'
resources:
overrides:
kubernetes_namespace:
resource: namespace
kubernetes_pod_name:
resource: pod
name:
matches: "^(.*)_total"
as: "${1}_per_second"
metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
url: http://prometheus-k8s.monitoring.svc
port: 9090
</code></pre>
<p>I added a custom rule and specified the address of Prometheus. Install Prometheus-Adapter with the following command.</p>
<pre class="lang-sh prettyprint-override"><code>$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
</code></pre>
<p>Finally the adapter was installed successfully, and I can get the HTTP response, as follows.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>But it was supposed to be like this,</p>
<pre class="lang-json prettyprint-override"><code>$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>Why can't I get the metric <code>pods/nginx_vts_server_requests_per_second</code>? As a result, the query below also failed.</p>
<pre class="lang-sh prettyprint-override"><code> kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
</code></pre>
<p>Could anybody please help? Many thanks.</p>
| Marco Mei | <p>It is worth knowing that using the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> repository, you can also install components such as <strong>Prometheus Adapter for Kubernetes Metrics APIs</strong>, so there is no need to install it separately with Helm.</p>
<p>I will use your <code>hpa-prome-demo.yaml</code> manifest file to demonstrate how to monitor <code>nginx_vts_server_requests_total</code> metrics.</p>
<hr />
<p>First of all, we need to install Prometheus and Prometheus Adapter with appropriate configuration as described step by step below.</p>
<p>Copy the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> repository and refer to the <a href="https://github.com/prometheus-operator/kube-prometheus#kubernetes-compatibility-matrix" rel="nofollow noreferrer">Kubernetes compatibility matrix</a> in order to choose a compatible branch:</p>
<pre><code>$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
</code></pre>
<p>Install the <code>jb</code>, <code>jsonnet</code> and <code>gojsontoyaml</code> tools:</p>
<pre><code>$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
</code></pre>
<p>Uncomment the <code>(import 'kube-prometheus/addons/custom-metrics.libsonnet') +</code> line from the <code>example.jsonnet</code> file:</p>
<pre><code>$ cat example.jsonnet
local kp =
(import 'kube-prometheus/main.libsonnet') +
// Uncomment the following imports to enable its patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// (import 'kube-prometheus/addons/static-etcd.libsonnet') +
(import 'kube-prometheus/addons/custom-metrics.libsonnet') + <--- This line
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
...
</code></pre>
<p>Add the following rule to the <code>./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet</code> file in the <code>rules+</code> section:</p>
<pre><code> {
seriesQuery: "nginx_vts_server_requests_total",
resources: {
overrides: {
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
</code></pre>
<p>After this update, the <code>./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet</code> file should look like this:<br />
<strong>NOTE:</strong> This is not the entire file, just an important part of it.</p>
<pre><code>$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbirary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
values+:: {
prometheusAdapter+: {
namespace: $.values.common.namespace,
// Rules for custom-metrics
config+:: {
rules+: [
{
seriesQuery: "nginx_vts_server_requests_total",
resources: {
overrides: {
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
...
</code></pre>
<p>Use the jsonnet-bundler update functionality to update the <code>kube-prometheus</code> dependency:</p>
<pre><code>$ jb update
</code></pre>
<p>Compile the manifests:</p>
<pre><code>$ ./build.sh example.jsonnet
</code></pre>
<p>Now simply use <code>kubectl</code> to install Prometheus and other components as per your configuration:</p>
<pre><code>$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
</code></pre>
<p>After configuring Prometheus, we can deploy a sample <code>hpa-prom-demo</code> Deployment:<br />
<strong>NOTE:</strong> I've deleted the annotations because I'm going to use a <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md#related-resources" rel="nofollow noreferrer">ServiceMonitor</a> to describe the set of targets to be monitored by Prometheus.</p>
<pre><code>$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hpa-prom-demo
spec:
selector:
matchLabels:
app: nginx-server
template:
metadata:
labels:
app: nginx-server
spec:
containers:
- name: nginx-demo
image: cnych/nginx-vts:v1.0
resources:
limits:
cpu: 50m
requests:
cpu: 50m
ports:
- containerPort: 80
name: http
---
apiVersion: v1
kind: Service
metadata:
name: hpa-prom-demo
labels:
app: nginx-server
spec:
ports:
- port: 80
targetPort: 80
name: http
selector:
app: nginx-server
type: LoadBalancer
</code></pre>
<p>Next, create a <code>ServiceMonitor</code> that describes how to monitor our NGINX:</p>
<pre><code>$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
name: hpa-prom-demo
labels:
app: nginx-server
spec:
selector:
matchLabels:
app: nginx-server
endpoints:
- interval: 15s
path: "/status/format/prometheus"
port: http
</code></pre>
<p>After waiting some time, let's check the <code>hpa-prom-demo</code> logs to make sure that it is scrapped correctly:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hpa-prom-demo-bbb6c65bb-49jsh 1/1 Running 0 35m
$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...
</code></pre>
<p>Finally, we can check if our metrics work as expected:</p>
<pre><code>$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
--
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
"kind": "MetricValueList",
"apiVersion": "custom.metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
},
"items": [
{
"describedObject": {
"kind": "Pod",
"namespace": "default",
"name": "hpa-prom-demo-bbb6c65bb-49jsh",
"apiVersion": "/v1"
},
"metricName": "nginx_vts_server_requests_per_second",
"timestamp": "2022-02-04T09:32:59Z",
"value": "533m",
"selector": null
}
]
}
</code></pre>
| matt_j |
<p>I am trying to set up Kubernetes on an Ubuntu 22.04 virtual machine. During the installation process I am trying to add a GPG key for intra-cluster communication. After running the command:</p>
<p><code>sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add</code></p>
<p>I got a message that <code>apt-key add</code> was <em>deprecated</em> and that I should read the <code>apt-key(8)</code> man page. Can anyone tell me what exactly I should type on my terminal instead of this?</p>
| Moni 93 | <p>I share this link.</p>
<p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management" rel="nofollow noreferrer">Install using native package management</a></p>
<ol>
<li>Update the apt package index and install packages needed to use the Kubernetes apt repository:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt-get install -y ca-certificates curl
</code></pre>
<ol start="2">
<li>Download the Google Cloud public signing key:</li>
</ol>
<pre><code>sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
</code></pre>
<ol start="3">
<li>Add the Kubernetes apt repository:</li>
</ol>
<pre><code>echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
</code></pre>
<ol start="4">
<li>Update apt package index with the new repository and install kubectl:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt-get install -y kubectl
</code></pre>
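<p>Optionally, verify the installation:</p>
<pre><code>kubectl version --client
</code></pre>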
| Dante_KR |
<p>We are migrating a blog system that was previously deployed on EC2 to an AWS EKS cluster. In the existing EC2 setup it runs in two containers, a web server (nginx) container and an AP server (Django + gunicorn) container, and can be accessed normally from a browser. However, when I deployed it in the same way to a node (EC2) on AWS EKS, I could not access it from the browser and got "502 Bad Gateway". The gunicorn log shows "WORKER TIMEOUT (pid: 18294)". We are still investigating the cause but have not found it yet. If anyone has any idea, I would appreciate your advice.</p>
<p>gunicorn of log・status</p>
<pre><code>root@blogsystem-apserver01:/# systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-05-09 08:57:19 UTC; 5 days ago
Main PID: 18291 (gunicorn)
Tasks: 4 (limit: 4636)
Memory: 95.8M
CGroup: /kubepods/besteffort/podd270872c-cc5b-4a3b-92ed-f463ee5f5d77/1eafc79ffd656ff1c1bc39175ee06c7a5ca8692715c5e2bfe2f979d8718411ba/system.slice/gunicorn.service
├─18291 /home/ubuntu/python3/bin/python /home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication
├─18295 /home/ubuntu/python3/bin/python /home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication
├─18299 /home/ubuntu/python3/bin/python /home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication
└─18300 /home/ubuntu/python3/bin/python /home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication
May 09 08:57:20 blogsystem-apserver01 gunicorn[18291]: [2021-05-09 08:57:20 +0000] [18291] [INFO] Starting gunicorn 20.0.4
May 09 08:57:20 blogsystem-apserver01 gunicorn[18291]: [2021-05-09 08:57:20 +0000] [18291] [INFO] Listening at: unix:/home/ubuntu/socket/myproject.sock (18291)
May 09 08:57:20 blogsystem-apserver01 gunicorn[18291]: [2021-05-09 08:57:20 +0000] [18291] [INFO] Using worker: sync
May 09 08:57:20 blogsystem-apserver01 gunicorn[18293]: [2021-05-09 08:57:20 +0000] [18293] [INFO] Booting worker with pid: 18293
May 09 08:57:20 blogsystem-apserver01 gunicorn[18294]: [2021-05-09 08:57:20 +0000] [18294] [INFO] Booting worker with pid: 18294
May 09 08:57:20 blogsystem-apserver01 gunicorn[18295]: [2021-05-09 08:57:20 +0000] [18295] [INFO] Booting worker with pid: 18295
May 09 08:57:59 blogsystem-apserver01 gunicorn[18291]: [2021-05-09 08:57:59 +0000] [18291] [CRITICAL] WORKER TIMEOUT (pid:18293)
May 09 08:58:00 blogsystem-apserver01 gunicorn[18299]: [2021-05-09 08:58:00 +0000] [18299] [INFO] Booting worker with pid: 18299
May 09 08:58:01 blogsystem-apserver01 gunicorn[18291]: [2021-05-09 08:58:01 +0000] [18291] [CRITICAL] WORKER TIMEOUT (pid:18294)
May 09 08:58:02 blogsystem-apserver01 gunicorn[18300]: [2021-05-09 08:58:02 +0000] [18300] [INFO] Booting worker with pid: 18300
root@blogsystem-apserver01:/#
</code></pre>
<p>Further investigation:
I've researched various things and can't draw a firm conclusion yet, but it seems possible that this can be solved by changing gunicorn's "sync" worker to the "gevent" worker.</p>
<p>reference:
<a href="https://github.com/benoitc/gunicorn/issues/1194" rel="nofollow noreferrer">https://github.com/benoitc/gunicorn/issues/1194</a></p>
<p>I tried to edit the gunicorn config file and change it to a "gevent" worker as below, but when I restart gunicorn and look at the status, it says "RuntimeError: gevent worker requires gevent 1.4 or higher" and I can't start gunicorn. Then I installed a version of gevent of 1.4 or higher with "python3 -m pip install gevent", but again "RuntimeError: gevent worker requires gevent 1.4 or higher" is displayed. I think this matter may also be related to the "WORKER TIMEOUT" of gunicorn mentioned above, so if you have any idea how to solve it, I would appreciate it if you could tell me.</p>
<p>・gunicorn configuration file</p>
<pre><code>(python3) ubuntu@blogsystem-apserver01:/etc/systemd/system$ more gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --worker-class gevent --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:application
[Install]
WantedBy=multi-user.target
(python3) ubuntu@blogsystem-apserver01:/etc/systemd/system$
</code></pre>
<p>・gunicorn status</p>
<pre><code>root@blogsystem-apserver01:/etc/systemd/system# systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-05-15 02:30:08 UTC; 1s ago
Process: 19182 ExecStart=/home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --worker-class gevent --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication (code=exited, status=1/FAILURE)
Main PID: 19182 (code=exited, status=1/FAILURE)
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: File "<frozen importlib._bootstrap_external>", line 783, in exec_module
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: File "/home/ubuntu/python3/lib/python3.8/site-packages/gunicorn/workers/ggevent.py", line 16, in <module>
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: raise RuntimeError("gevent worker requires gevent 1.4 or higher")
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: RuntimeError: gevent worker requires gevent 1.4 or higher
May 15 02:30:08 blogsystem-apserver01 gunicorn[19182]: ]
May 15 02:30:08 blogsystem-apserver01 systemd[1]: gunicorn.service: Main process exited, code=exited, status=1/FAILURE
May 15 02:30:08 blogsystem-apserver01 systemd[1]: gunicorn.service: Failed with result 'exit-code'.
root@blogsystem-apserver01:/etc/systemd/system#
</code></pre>
<p>・gevent worker install</p>
<pre><code>root@blogsystem-apserver01:/etc/systemd/system# python3 -m pip install gevent
Requirement already satisfied: gevent in /usr/local/lib/python3.8/dist-packages (1.4.0)
Requirement already satisfied: greenlet>=0.4.14 in /usr/local/lib/python3.8/dist-packages (from gevent) (1.1.0)
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
root@blogsystem-apserver01:/etc/systemd/system#
</code></pre>
<p>・Gunicorn status after reboot after installing gevent worker</p>
<pre><code>root@blogsystem-apserver01:/etc/systemd/system# systemctl restart gunicorn
root@blogsystem-apserver01:/etc/systemd/system#
root@blogsystem-apserver01:/etc/systemd/system# systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-05-15 03:08:42 UTC; 1s ago
Process: 19196 ExecStart=/home/ubuntu/python3/bin/gunicorn --access-logfile - --workers 3 --worker-class gevent --bind unix:/home/ubuntu/socket/myproject.sock myproject.wsgi:a
pplication (code=exited, status=1/FAILURE)
Main PID: 19196 (code=exited, status=1/FAILURE)
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: File "<frozen importlib._bootstrap_external>", line 783, in exec_module
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: File "/home/ubuntu/python3/lib/python3.8/site-packages/gunicorn/workers/ggevent.py", line 16, in <module>
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: raise RuntimeError("gevent worker requires gevent 1.4 or higher")
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: RuntimeError: gevent worker requires gevent 1.4 or higher
May 15 03:08:42 blogsystem-apserver01 gunicorn[19196]: ]
May 15 03:08:42 blogsystem-apserver01 systemd[1]: gunicorn.service: Main process exited, code=exited, status=1/FAILURE
May 15 03:08:42 blogsystem-apserver01 systemd[1]: gunicorn.service: Failed with result 'exit-code'.
root@blogsystem-apserver01:/etc/systemd/system#
</code></pre>
| kan | <p>I solved it myself. I'm sorry for the noise.</p>
<p>The cause was that the VPC of the RDS instance used in the existing environment (AWS) and the VPC of the newly built AWS EKS cluster were different. As a result, the AP server was unable to connect to RDS and gunicorn was timing out. When I placed EKS and RDS in the same VPC, the site became accessible from a browser and the problem was solved.</p>
| kan |
<p>I am a newbie to Kubernetes and trying to set up a cluster that runs Cassandra on my local machine. I used kind to create a cluster, which was successful. After that, when I try to run <em><code>kubectl cluster-info</code></em>, I get the below error:</p>
<p><em><code>Unable to connect to the server: dial tcp 127.0.0.1:45451: connectex: No connection could be made because the target machine actively refused it.</code></em></p>
<p>On <em><code>docker container ls</code></em>, I could see the control-plane running on the container using the port as below:</p>
<pre class="lang-sh prettyprint-override"><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c4b6101b8ff kindest/node:v1.18.2 "/usr/local/bin/entr" 3 hours ago Up 2 hours 127.0.0.1:45451->6443/tcp kind-cassandra-control-plane
625fee22e0e6 kindest/node:v1.18.2 "/usr/local/bin/entr" 3 hours ago Up 2 hours kind-cassandra-worker
</code></pre>
<p>I am able to view the config file by executing <em><code>kubectl config view</code></em> as below, which confirms that kubectl is reading the correct config file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://127.0.0.1:45451
name: kind-kind-cassandra
contexts:
- context:
cluster: kind-kind-cassandra
user: kind-kind-cassandra
name: kind-kind-cassandra
current-context: kind-kind-cassandra
kind: Config
preferences: {}
users:
- name: kind-kind-cassandra
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>When I run <em><code>netstat</code></em>, I could see below as the active connections on 127.0.0.1</p>
<pre><code> TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:2869
TCP 127.0.0.1:5354
TCP 127.0.0.1:5354
TCP 127.0.0.1:27015
TCP 127.0.0.1:49157
TCP 127.0.0.1:49158
TCP 127.0.0.1:49174
</code></pre>
<p>Any help is really appreciated. TIA</p>
| Prince | <p>Try using this command</p>
<pre><code>minikube start --driver=docker
</code></pre>
| Sunil |
<p>I have a task to list Kubernetes pods by their kind. For example: list Kubernetes pods that are in different namespaces using JSONPath.
I'm using the below command, which is not working.</p>
<pre><code>kubectl get pods -o jsonpath='{.items[?(@.items.kind=="Elasticsearch")]}'
</code></pre>
| P Nisanth Reddy | <p>You can try to use the following command.</p>
<pre><code>kubectl get pods -o jsonpath='{.items[?(@.kind=="Pod")]}' --all-namespaces
</code></pre>
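<p>If the goal is simply to list all pods together with the namespace they run in, a JSONPath <code>range</code> expression may be more practical. A sketch that prints the namespace and name of every pod:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'
</code></pre>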
| Lejdi Prifti |
<p>I'm having issues getting the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> Helm Chart to install via Terraform with Minikube, yet I'm able to install it successfully via the command line. Here is my vanilla Terraform code -</p>
<pre><code>provider "kubernetes" {
host = "https://127.0.0.1:63191"
client_certificate = base64decode(var.client_certificate)
client_key = base64decode(var.client_key)
cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}
provider "helm" {
kubernetes {
}
}
resource "helm_release" "nginx" {
name = "beta-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "default"
}
</code></pre>
<p>I get the following logs when I apply the Terraform code above -</p>
<pre><code>helm_release.nginx: Still creating... [4m31s elapsed]
2022-01-26T14:32:49.623-0600 [TRACE] dag/walk: vertex "root" is waiting for "provider[\"registry.terraform.io/hashicorp/helm\"] (close)"
2022-01-26T14:32:49.624-0600 [TRACE] dag/walk: vertex "meta.count-boundary (EachMode fixup)" is waiting for "helm_release.nginx"
2022-01-26T14:32:49.624-0600 [TRACE] dag/walk: vertex "provider[\"registry.terraform.io/hashicorp/helm\"] (close)" is waiting for "helm_release.nginx"
2022-01-26T14:32:51.299-0600 [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2022/01/26 14:32:51 [DEBUG] Service does not have load balancer ingress IP address: default/beta-nginx-ingress-nginx-controller: timestamp=2022-01-26T14:32:51.299-0600
2022-01-26T14:32:53.302-0600 [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2022/01/26 14:32:53 [DEBUG] Service does not have load balancer ingress IP address: default/beta-nginx-ingress-nginx-controller: timestamp=2022-01-26T14:32:53.302-0600
2022-01-26T14:32:54.626-0600 [TRACE] dag/walk: vertex "provider[\"registry.terraform.io/hashicorp/helm\"] (close)" is waiting for "helm_release.nginx"
Warning: Helm release "beta-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again.
with helm_release.nginx,
on main.tf line 21, in resource "helm_release" "nginx":
21: resource "helm_release" "nginx" {
Error: timed out waiting for the condition
with helm_release.nginx,
on main.tf line 21, in resource "helm_release" "nginx":
21: resource "helm_release" "nginx" {
</code></pre>
<hr />
<p>When I try installing the Helm Chart via the command line <code>helm install beta-nginx ingress-nginx/ingress-nginx</code> it installs the chart no problem.</p>
<p>Here are a few version numbers:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Terraform</td>
<td>1.0.5</td>
</tr>
<tr>
<td>Minikube</td>
<td>1.25.1</td>
</tr>
<tr>
<td>Kubernetes</td>
<td>1.21.7</td>
</tr>
<tr>
<td>Helm</td>
<td>3.7.2</td>
</tr>
</tbody>
</table>
</div> | Ryan Grush | <p>This is because Terraform waits for LoadBalancer to get a public IP address, but this never happens, so the <code>Error: timed out waiting for the condition</code> error occurs:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
beta-nginx-ingress-nginx-controller LoadBalancer <PRIVATE_IP> <pending> 80:30579/TCP,443:30909/TCP 7m32s
</code></pre>
<p>You can install <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> to get a load-balancer implementation or create a NodePort instead of LoadBalancer. I'll briefly demonstrate the second option.</p>
<p>All you have to do is modify the <a href="https://github.com/kubernetes/ingress-nginx/blob/c1be3499eb98756af4d2f5a5d165e6ff11cceeb5/charts/ingress-nginx/values.yaml#L501" rel="nofollow noreferrer"><code>controller.service.type</code></a> value from the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">values.yaml</a> file:</p>
<pre><code>$ cat beta-nginx.tf
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "nginx" {
name = "beta-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "default"
set {
name = "controller.service.type"
value = "NodePort"
}
}
$ terraform apply
...
+ set {
+ name = "controller.service.type"
+ value = "NodePort"
}
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
beta-nginx-ingress-nginx-controller NodePort <PRIVATE_IP> <none> 80:32410/TCP,443:31630/TCP 74s
</code></pre>
<p>As you can see above, the NodePort service has been created instead of the LoadBalancer.</p>
| matt_j |
<p>I am using Prometheus to get the container resource request CPU cores. I am using the following query:</p>
<pre><code>kube_pod_container_resource_requests_cpu_cores
</code></pre>
<p>I am getting all the containers except one.</p>
<p>I used <code>docker ps</code> and I can see that the container has started.</p>
<p>Any idea why I am not getting the container in the Prometheus result?</p>
| karlos | <p>Does your container/pod request CPU? As far as I've noticed, this metric doesn't return a value when the container/pod doesn't have <code>spec.containers[].resources.requests.cpu</code> set in its Deployment (or in whatever other kind of object the container is defined).</p>
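<p>For example, a container spec like the sketch below (the names and image are just placeholders) would show up in <code>kube_pod_container_resource_requests_cpu_cores</code>, while the same container without the <code>resources.requests.cpu</code> field would be missing from the results:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: example-app
    image: example-image:latest
    resources:
      requests:
        cpu: 100m
</code></pre>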
| Hayk Davtyan |
<p>Here is the output I am getting:</p>
<pre><code> [root@ip-10-0-3-103 ec2-user]# kubectl get pod --namespace=migration
NAME READY STATUS RESTARTS AGE
clear-nginx-deployment-cc77649fb-j8mzj 0/1 Pending 0 118m
clear-nginx-deployment-temp-cc77649fb-hxst2 0/1 Pending 0 41s
</code></pre>
<p>Could not understand the message shown in json:</p>
<pre><code>*"status":
{
"conditions": [
{
"message": "0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.",
"reason": "Unschedulable",
"status": "False",
"type": "PodScheduled"
}
],
"phase": "Pending",
"qosClass": "BestEffort"
}*
</code></pre>
<p>I would appreciate help getting through this.
The earlier question on Stack Overflow doesn't answer my query, as my message output is different.</p>
| kkpareek | <p>This is due to the fact that your Pods have been instructed to claim storage, however, in your case there is no PersistentVolume available to satisfy that claim (hence "unbound immediate PersistentVolumeClaims").
Check your Pods with <code>kubectl get pod <pod-name> -o yaml</code> and look at the exact yaml that has been applied to the cluster. In there you should be able to see that the Pod references a PersistentVolumeClaim (PVC), which in turn needs a PersistentVolume (PV) to bind to.</p>
<p>To quickly create a PV backed by a <code>hostPath</code> apply the following yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: stackoverflow-hostpath
namespace: migration
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<p>Kubernetes will retry scheduling the Pod with an exponential backoff; to speed things up, delete one of your pods (<code>kubectl delete pods <pod-name></code>) so it is rescheduled immediately.</p>
| F1ko |
<p>I have created a Docker image and published it to JFrog Artifactory.
Now I am trying to create a Kubernetes Pod, or a Deployment, using that image.</p>
<p>Find the content of pod.yaml file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: <name of pod>
spec:
nodeSelector:
type: "<name of node>"
containers:
- name: <name of container>
image: <name and path of image>
imagePullPolicy: Always
</code></pre>
<p>But I am getting <strong>ErrImagePull</strong> status after pod creation. That means the pod is not starting successfully.
Error: error: code = Unknown desc = failed to pull and unpack image</p>
<p>Can anyone please help me with this?</p>
| vaijayanti | <p>If you work with a private registry, you need to supply <code>imagePullSecrets</code> with the credentials to pull the image.</p>
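<p>A minimal sketch of what that can look like, assuming a registry Secret created from your Artifactory credentials (the Secret name, registry URL and credentials below are placeholders):</p>
<pre><code>kubectl create secret docker-registry artifactory-cred \
  --docker-server=<your-artifactory-registry> \
  --docker-username=<user> \
  --docker-password=<password>
</code></pre>
<p>Then reference it from the Pod spec of the question:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: <name of pod>
spec:
  imagePullSecrets:
  - name: artifactory-cred
  containers:
  - name: <name of container>
    image: <name and path of image>
    imagePullPolicy: Always
</code></pre>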
| Anastasia Grinman |
<p>I am trying to understand the VirtualService and DestinationRule resources in relation to the namespace in which they should be defined, and whether they are really namespaced resources or can also be considered cluster-wide resources.</p>
<p>I have the following scenario:</p>
<ul>
<li>The frontend service (web-frontend) access the backend service (customers).</li>
<li>The frontend service is deployed in the frontend namespace</li>
<li>The backend service (customers) is deployed in the backend namespace</li>
<li>There are 2 versions of the backend service customers (2 deployments), one related to the version v1 and one related to the version v2.</li>
<li>The default behavior for the clusterIP service is to load-balance the request between the 2 deployments (v1 and v2) and my goal is by creating a DestinationRule and a VirtualService to direct the traffic only to the deployment version v1.</li>
<li>What I want to understand is which is the appropriate namespace in which to define such DestinationRule and VirtualService resources. Should I create the necessary DestinationRule and VirtualService resources in the frontend namespace or in the backend namespace?</li>
</ul>
<p>In the frontend namespace I have the web-frontend deployment and and the related service as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: frontend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
replicas: 1
selector:
matchLabels:
app: web-frontend
template:
metadata:
labels:
app: web-frontend
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/web-frontend:1.0.0
imagePullPolicy: Always
name: web
ports:
- containerPort: 8080
env:
- name: CUSTOMER_SERVICE_URL
value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
selector:
app: web-frontend
type: NodePort
ports:
- port: 80
name: http
targetPort: 8080
</code></pre>
<p>I have expose the web-frontend service by defining the following Gateway and VirtualService resources as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway-all-hosts
# namespace: default # Also working
namespace: frontend
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: web-frontend
# namespace: default # Also working
namespace: frontend
spec:
hosts:
- "*"
gateways:
- gateway-all-hosts
http:
- route:
- destination:
host: web-frontend.frontend.svc.cluster.local
port:
number: 80
</code></pre>
<p>In the backend namespace I have the customers v1 and v2 deployments and related service as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: backend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v1
namespace: backend
labels:
app: customers
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v1
template:
metadata:
labels:
app: customers
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/customers:1.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v2
namespace: backend
labels:
app: customers
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v2
template:
metadata:
labels:
app: customers
version: v2
spec:
containers:
- image: gcr.io/tetratelabs/customers:2.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
name: customers
namespace: backend
labels:
app: customers
spec:
selector:
app: customers
type: NodePort
ports:
- port: 80
name: http
targetPort: 3000
</code></pre>
<p>I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
host: customers.backend.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
hosts:
- "customers.backend.svc.cluster.local"
http:
## route - subset: v1
- route:
- destination:
host: customers.backend.svc.cluster.local
port:
number: 80
subset: v1
</code></pre>
<ul>
<li><p>The <strong>question</strong> is which is the appropriate namespace to define the VR and DR resources for the customer service?</p>
</li>
<li><p>From my test I see that I can use either the frontend namespace or the backend namespace. Why can the VirtualService and DestinationRule be created in either the frontend or the backend namespace and work in both cases? Which is the correct one?</p>
</li>
<li><p>Are the DestinationRule and VirtualService resources really namespaced resources, or can they be considered cluster-wide resources?
Are the low-level routing rules propagated to all Envoy proxies regardless of the namespace?</p>
</li>
</ul>
| Gerassimos Mitropoulos | <p>A DestinationRule to actually be applied during a request needs to be on the destination rule lookup path:</p>
<pre><code>-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
</code></pre>
<p>In your example, the "web-frontend" client is in the <strong>frontend</strong> Namespace (<code>web-frontend.frontend.svc.cluster.local</code>), the "customers" service is in the <strong>backend</strong> Namespace (<code>customers.backend.svc.cluster.local</code>), so the <code>customers</code> DestinationRule should be created in one of the following Namespaces: <strong>frontend</strong>, <strong>backend</strong> or <strong>istio-system</strong>. Additionally, please note that the <strong>istio-system</strong> Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.</p>
<p>To make sure that the destination rule will be applied we can use the <code>istioctl proxy-config cluster</code> command for the <code>web-frontend</code> Pod:</p>
<pre><code>$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 - customers.frontend
customers.backend.svc.cluster.local 80 v1 customers.frontend
customers.backend.svc.cluster.local 80 v2 customers.frontend
</code></pre>
<p>When the destination rule is created in the <strong>default</strong> Namespace, it will not be applied during the request:</p>
<pre><code>$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 -
</code></pre>
<p>For more information, see the <a href="https://istio.io/latest/docs/ops/best-practices/traffic-management/#cross-namespace-configuration" rel="nofollow noreferrer">Control configuration sharing in namespaces</a> documentation.</p>
| matt_j |
<p>I have a problem with authenticating a Kubernetes webapp via oauth2-proxy/Keycloak. Does anyone know what's wrong?</p>
<ul>
<li>Webapp (test-app.domain.com)</li>
<li>oauth2-proxy (oauth2-proxy.domain.com)</li>
<li>keycloak (keycloak-test.domain.com)</li>
</ul>
<p>These three apps run separately.</p>
<p><strong>description of the authentication procedure:</strong></p>
<p>After open <strong>test.domain.com</strong> is redirected to <a href="https://keycloak-test.domain.com/auth/realms/local/protocol/openid-connect/auth?approval_prompt=force&client_id=k8s2&redirect_uri=https%3A%2F%2Foauth2-proxy.domain.com%2Foauth2%2Fcallback&response_type=code&scope=openid+profile+email+users&state=7a6504626c89d85dad9337f57072d7e4%3Ahttps%3A%2F%2Ftest-app%2F" rel="noreferrer">https://keycloak-test.domain.com/auth/realms/local/protocol/openid-connect/auth?approval_prompt=force&client_id=k8s2&redirect_uri=https%3A%2F%2Foauth2-proxy.domain.com%2Foauth2%2Fcallback&response_type=code&scope=openid+profile+email+users&state=7a6504626c89d85dad9337f57072d7e4%3Ahttps%3A%2F%2Ftest-app%2F</a></p>
<p>Keycloak login page is displayed correctly but after user login I get: 500 Internal Server Error with URL <a href="https://oauth2-proxy.domain.com/oauth2/callback?state=753caa3a281921a02b97d3efeabe7adf%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=f5d45a13-5383-4a79-aa7a-56bbaa16056f&code=5344ae72-a9ee-448f-95ef-45e413f69f4b.f5d45a13-5383-4a79-aa7a-56bbaa16056f.78732ee5-af17-43fc-9f52-856e06bfce04" rel="noreferrer">https://oauth2-proxy.domain.com/oauth2/callback?state=753caa3a281921a02b97d3efeabe7adf%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=f5d45a13-5383-4a79-aa7a-56bbaa16056f&code=5344ae72-a9ee-448f-95ef-45e413f69f4b.f5d45a13-5383-4a79-aa7a-56bbaa16056f.78732ee5-af17-43fc-9f52-856e06bfce04</a></p>
<p><strong>LOG from oauth2-proxy</strong></p>
<pre><code>[2021/03/16 11:25:35] [stored_session.go:76] Error loading cookied session: cookie "_oauth2_proxy" not present, removing session
10.30.21.14:35382 - - [2021/03/16 11:25:35] oauth2-proxy.domain.com GET - "/oauth2/auth" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 401 13 0.000
10.96.5.198:35502 - - [2021/03/16 11:25:35] oauth2-proxy.domain.com GET - "/oauth2/start?rd=https://test-app.domain.com/" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 302 400 0.000
[2021/03/16 11:25:39] [oauthproxy.go:753] Error redeeming code during OAuth2 callback: email in id_token ([email protected]) isn't verified
10.96.5.198:35502 - - [2021/03/16 11:25:39] oauth2-proxy.domain.com GET - "/oauth2/callback?state=1fe22deb33ce4dc7e316f23927b8d821%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=c69d7a8f-32f2-4a84-a6af-41b7d2391561&code=4759cce8-1c1c-4da3-ba94-9987c2ce3e02.c69d7a8f-32f2-4a84-a6af-41b7d2391561.78732ee5-af17-43fc-9f52-856e06bfce04" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 500 345 0.030
</code></pre>
<p><strong>test-app ingress</strong></p>
<pre><code> apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-url: "oauth2-proxy.domain.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "oauth2-proxy.domain.com/oauth2/start?rd=$scheme://$best_http_host$request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-user, x-auth-request-email, x-auth-request-access-token"
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
name: test-app
namespace: its
spec:
rules:
- host: test-app.domain.com
http:
paths:
- path: /
backend:
serviceName: test-app
servicePort: http
tls:
- hosts:
- test-app.domain.com
secretName: cert-wild.test-proxy.domain.com
</code></pre>
<p><strong>oauth2-proxy config and ingress</strong></p>
<pre><code> containers:
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:latest
ports:
- containerPort: 8091
args:
- --provider=oidc
- --client-id=k8s2
- --client-secret=Sd28cf1-1e14-4db1-8ed1-5ba64e1cd421
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --oidc-issuer-url=https://keycloak-test.domain.com/auth/realms/local
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.domain.com
- --whitelist-domain=.domain.com
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:8091
---
apiVersion: v1
kind: Service
metadata:
name: oauth2-proxy
labels:
name: oauth2-proxy
spec:
ports:
- name: http
port: 8091
targetPort: 8091
selector:
name: oauth2-proxy
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
name: oauth2-proxy
namespace: its
spec:
rules:
- host: oauth2-proxy.domain.com
http:
paths:
- path: /oauth2
backend:
serviceName: oauth2-proxy
servicePort: 8091
tls:
- hosts:
- oauth2-proxy.domain.com
secretName: cert-wild.oauth2-proxy.domain.com
</code></pre>
| Breed | <p>The oauth2-proxy log points to the root cause: <code>Error redeeming code during OAuth2 callback: email in id_token isn't verified</code>, which is why the callback fails with a 500.</p>
<p>You can try setting <code>--insecure-oidc-allow-unverified-email</code> in your oauth2-proxy configuration.
Alternatively, in Keycloak, mark the user's email as verified in the user settings.</p>
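<p>For reference, a sketch of where that flag could be added in the oauth2-proxy container args from the question (all other args stay unchanged):</p>
<pre><code>args:
  - --provider=oidc
  - --client-id=k8s2
  # ... existing args from the question ...
  - --insecure-oidc-allow-unverified-email
</code></pre>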
| user16472241 |
<p>I'm currently creating a Minikube cluster for the developers; they will each have their own Minikube cluster on their local machine for testing. Assuming the developers don't know anything about Kubernetes, is creating a bash script to handle all the installations and the setup of the pod the recommended way? Is it possible to do it through Terraform instead? Or is there another, easier way to do this? Thanks!</p>
| yeikls | <p>Depending on what your requirements are, choosing <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">Minikube</a> may or may not be the best way to go.
Just to give you some other options you might want to take a look at the following tools when it comes to local enviornments for developers (depending on their needs):</p>
<ul>
<li><a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">kind</a></li>
<li><a href="https://github.com/galexrt/k8s-vagrant-multi-node" rel="nofollow noreferrer">k8s-vagrant-multi-node</a></li>
</ul>
<p>Since you do not seem to care about Windows or other users (at least they weren't mentioned), a bash script <em>may</em> be the simplest way to go. However, usually that's where tools like <a href="https://github.com/ansible/ansible" rel="nofollow noreferrer">Ansible</a> come into play. They help you with automating things in a clear fashion <strong>and</strong> allow for proper testing. Some tools (like Ansible) even have support for certain Windows features that may be useful.</p>
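<p>Just to illustrate what the Ansible route can look like, here is a minimal playbook sketch that brings up a local Minikube cluster (it assumes Minikube and Docker are already installed on the developer's machine; the play name and driver are arbitrary):</p>
<pre><code>- name: Local developer cluster
  hosts: localhost
  connection: local
  tasks:
    - name: Start a Minikube cluster
      ansible.builtin.command: minikube start --driver=docker
</code></pre>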
<p><strong>TL;DR</strong></p>
<p>A Bash script is not the recommended way as it has lots of pain points that come with it, however, it may be the fastest approach depending on your skillset.
If you want to do it properly use tools like Ansible, Chef, Puppet, etc.</p>
| F1ko |
<p>Kubernetes provides us two deployment strategies. One is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a></strong> and another one is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a></strong>. I should use Rolling Update when I don't want go off air. But when should I be using Recreate?</p>
| Aditya Bhuyan | <p>There are basically two reasons why one would want/need to use <code>Recreate</code>:</p>
<ul>
<li>Resource issues. Some clusters simply do not have enough resources to be able to schedule additional Pods, which then results in the new Pods being stuck and the update procedure along with them. This happens especially on local development clusters and/or with applications that consume a large amount of resources.</li>
<li>Bad applications. Some applications (especially legacy or monolithic setups) simply cannot handle it when new Pods - that do the exact same thing as they do - spin up. There are too many reasons as to why this may happen to cover all of them here but essentially it means that an application is not suitable for scaling.</li>
</ul>
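<p>For completeness, a minimal sketch of how the strategy is selected in a Deployment manifest (the name and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  strategy:
    type: Recreate   # all old Pods are terminated before the new ones are created
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example-image:latest
</code></pre>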
| F1ko |
<p>I am trying to execute the <code>kubectl</code> command using a Python script but keep getting an error. I have a requirement to execute the <code>kubectl</code> command to create a pod and check the <code>pod</code> log for any failure.</p>
<p>What am I doing wrong here?</p>
<pre><code>import subprocess
command = 'kubectl apply -f deployment.yaml'
check_output= subprocess.check_output(command)
print(check_output)
</code></pre>
<p>error</p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/cma-scripts/kubectl.py", line 6, in <module>
check_output= subprocess.check_output(command)
File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/lib/python3.9/subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/local/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'kubectl apply -f deployment.yaml'
</code></pre>
| user1591156 | <p>You can execute <code>kubectl</code> commands with Python, but you can also use the <a href="https://github.com/kubernetes-client/python" rel="noreferrer">Python client</a> for the Kubernetes API.</p>
<p>Below I will give examples for both options.</p>
<h3>Executing kubectl commands with Python.</h3>
<p>You can use the <a href="https://docs.python.org/3.7/library/subprocess.html" rel="noreferrer">subprocess</a> module:</p>
<pre><code>$ cat script-1.py
#!/usr/bin/python3.7
import subprocess
subprocess.run(["kubectl", "apply", "-f", "deployment.yaml"])
$ ./script-1.py
deployment.apps/web-app created
</code></pre>
<p>You can also use the <a href="https://docs.python.org/3.7/library/os.html" rel="noreferrer">os</a> module:</p>
<pre><code>$ cat script-1.py
#!/usr/bin/python3.7
import os
os.system("kubectl apply -f deployment.yaml")
$ ./script-1.py
deployment.apps/web-app created
</code></pre>
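<p>As a side note, the original <code>FileNotFoundError</code> occurs because the whole command was passed to <code>subprocess</code> as a single string without <code>shell=True</code>, so Python looked for an executable literally named <code>kubectl apply -f deployment.yaml</code>. If you also want to capture the command output inside the script (for example to inspect pod logs for failures, as mentioned in the question), a sketch could look like this (the pod name is a placeholder):</p>
<pre><code>#!/usr/bin/python3.7
import subprocess

# Passing the command as a list avoids the FileNotFoundError from the question.
result = subprocess.run(
    ["kubectl", "logs", "my-pod"],
    capture_output=True, text=True)

print(result.returncode)   # non-zero if kubectl failed
print(result.stdout)       # the pod logs
print(result.stderr)       # any kubectl error message
</code></pre>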
<h3>Using the Python client for the kubernetes API.</h3>
<p>As previously mentioned, you can also use a Python client to create a Deployment.</p>
<p>Based on the <a href="https://github.com/kubernetes-client/python/blob/master/examples/deployment_create.py" rel="noreferrer">deployment_create.py</a> example, I've created a script to deploy <code>deployment.yaml</code> in the <code>default</code> Namespace:</p>
<pre><code>$ cat script-2.py
#!/usr/bin/python3.7
from os import path
import yaml
from kubernetes import client, config
def main():
config.load_kube_config()
    with open(path.join(path.dirname(__file__), "deployment.yaml")) as f:
dep = yaml.safe_load(f)
k8s_apps_v1 = client.AppsV1Api()
resp = k8s_apps_v1.create_namespaced_deployment(
body=dep, namespace="default")
print("Deployment created. status='%s'" % resp.metadata.name)
if __name__ == '__main__':
main()
$ ./script-2.py
Deployment created. status='web-app'
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE
web-app 1/1 1 1
</code></pre>
| matt_j |
<p>I'm using K8s on GCP.</p>
<p>Here is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: simpleapp-direct
labels:
app: simpleapp-direct
role: backend
stage: test
spec:
replicas: 1
selector:
matchLabels:
app: simpleapp-direct
version: v0.0.1
template:
metadata:
labels:
app: simpleapp-direct
version: v0.0.1
spec:
containers:
- name: simpleapp-direct
image: gcr.io/applive/simpleapp-direct:latest
imagePullPolicy: Always
</code></pre>
<p>I first apply the deployment file with kubectl apply command</p>
<pre><code>kubectl apply -f deployment.yaml
</code></pre>
<p>The pods were created properly.</p>
<p>I was expecting that every time I would push a new image with the tag latest, the pods would be automatically killed and restart using the new images.</p>
<p>I tried the rollout command</p>
<pre><code>kubectl rollout restart deploy simpleapp-direct
</code></pre>
<p>The pods restart as I wanted.</p>
<p>However, I don't want to run this command every time there is a new latest build.
How can I handle this situation?</p>
<p>Thanks a lot</p>
| user1739211 | <p>Try to use the image digest (hash) instead of the <code>latest</code> tag in your Pod definition.</p>
<p>Generally, there is no built-in way to automatically restart pods when a new image is pushed. It is also advisable not to use <code>image:latest</code> (or just the image name) in Kubernetes, as it makes rollbacks of your deployment difficult. Make sure the <code>imagePullPolicy</code> flag is set to <code>Always</code>. Normally, when you use CI/CD or GitOps, your deployment is updated automatically by these tools once the new image is built and has passed the tests.</p>
<p>When your Docker image is updated, you need to set up a trigger on this update within your CI/CD pipeline to re-run the deployment. It depends on the base system/image where you build your Docker image, but you can add the Kubernetes credentials there and run the same commands you would run on your local computer.</p>
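<p>For illustration, a rough CI step after the image has been built and pushed (assuming the deployment and container are both named <code>simpleapp-direct</code>, as in your manifest) could look like this:</p>
<pre><code># resolve the immutable digest of the image that was just pushed
DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' gcr.io/applive/simpleapp-direct:latest)
# point the deployment at the digest so every push rolls out a new revision
kubectl set image deployment/simpleapp-direct simpleapp-direct="$DIGEST"
# or, if you want to keep using the latest tag, just force a rolling restart
kubectl rollout restart deployment/simpleapp-direct
kubectl rollout status deployment/simpleapp-direct
</code></pre>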
| yurasov |
<p>I am new to custom controllers and trying to understand this. I have started referring to the sample-controller, but I am unable to find much difference between, or properly understand, the example files:</p>
<ol>
<li><a href="https://github.com/kubernetes/sample-controller/blob/master/artifacts/examples/crd.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller/blob/master/artifacts/examples/crd.yaml</a></li>
<li><a href="https://github.com/kubernetes/sample-controller/blob/master/artifacts/examples/crd-status-subresource.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller/blob/master/artifacts/examples/crd-status-subresource.yaml</a></li>
</ol>
<p>Both the files look similar to me except for the below part in crd-status-subresource.yaml.</p>
<pre><code>subresources:
status: {}
</code></pre>
<p>Can anyone help or give suggestions on this to proceed. ?</p>
| john snowker | <p>Just to be on the same page, here is a quick summary of what a controller in Kubernetes does:</p>
<p>It watches over a certain state - usually, CustomResources (CRs) that can be defined using CustomResourceDefinitions (CRDs) - and performs actions based on rules that you define inside your code.</p>
<blockquote>
<p>I have started referring the sample-controller but unable to find much difference or understand properly in between the example files</p>
</blockquote>
<p>The files you are referring to do not have any difference other than what you already pointed out, so there truly is nothing more to look for.</p>
<p>If you take a detailed look at a native Kubernetes object such as a Pod (<code>kubectl get pod <some-pod> -o yaml</code>) you will see that it has a <code>.status</code> field which simply stores additional information. By enabling the <code>status</code> subresource you are telling Kubernetes to create a CR where you can then go ahead and edit that additional <code>.status</code> field by accessing a new REST API path: <code>/apis/<group>/<version-name>/namespaces/*/<kind>/status</code>. If you don't need it, then don't define it, that's it.</p>
<p>As to why one may want to add the <code>status</code> subresource depends on the use-case of the CR. Sometimes you simply want to have a more verbose field to store information in for a user to look at and sometimes you are telling your controller to fetch data from there as it represents the current status of the object. Just take a look at a Pods <code>.status</code> field and you will see some nice additional data in there such as if the Pod is <code>Ready</code>, information about the containers, and so on.</p>
<p>With that being said, <code>status</code> is not the only subresource. You may also want to take a look at the <code>scale</code> subresource as well (depending on the use-case). For more information regarding subresources you can refer to: <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#subresources" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#subresources</a></p>
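<p>For illustration, a minimal sketch of a CRD version that enables the <code>status</code> (and, optionally, the <code>scale</code>) subresource; the group and kind names are placeholders:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.example.com
spec:
  group: samplecontroller.example.com
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
            status:
              type: object
              properties:
                availableReplicas:
                  type: integer
      subresources:
        # enables GET/PUT on .../foos/<name>/status
        status: {}
        # optional: makes "kubectl scale" work against this resource
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.availableReplicas
</code></pre>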
| F1ko |
<p>I have defined my service app running on port 9000. It is not a web/HTTP server; it is simply a service application running as a Windows service on that port, to which other apps (outside the container) connect.
So I have defined port 9000 in my Service definition and in my ConfigMap definition. We are using NGINX as a proxy for accessing it from outside and everything works.</p>
<p>Nginx Service:</p>
<pre><code> - name: 9000-tcp
nodePort: 30758
port: 9000
protocol: TCP
targetPort: 9000
</code></pre>
<p>Config Map:</p>
<pre><code>apiVersion: v1
data:
"9000": default/frontarena-ads-aks-test:9000
kind: ConfigMap
</code></pre>
<p>Service definition:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontarena-ads-aks-test
spec:
type: ClusterIP
ports:
- protocol: TCP
port: 9000
selector:
app: frontarena-ads-aks-test
</code></pre>
<p>Ingress definition:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ads-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontarena-ads-aks-test
servicePort: 9000
</code></pre>
<p>As mentioned everything works. I know that TCP is used for L4 layer and HTTP for L7 Application Layer.
I need to access my app from another app solely by its hostname and port. Without any HTTP Url.
So basically does it mean that I do NOT need actually my Ingress Controller definition at all?
I do not need to deploy it at all?
I would only need it if I need HTTP access with some URL for example: <code>hostname:port/pathA or hostname:port/pathB</code></p>
<p>Is that correct? For regular TCP connection we do not need at all our Ingress YAML definition? Thank you</p>
| Veljko | <p>Yes, you don't need an Ingress at all in this case. According to the official Kubernetes documentation, an Ingress is:</p>
<blockquote>
<p>An API object that manages external access to the services in a cluster, typically HTTP.</p>
</blockquote>
<p>So, if you don't need any external access via http, you can omit ingress.</p>
<p>Ref: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
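<p>If another application outside the cluster only needs to reach it by hostname and port, a plain Service is enough. A minimal sketch (assuming you want to expose port 9000 directly instead of going through the nginx <code>tcp-services</code> ConfigMap):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test-lb
spec:
  type: LoadBalancer
  selector:
    app: frontarena-ads-aks-test
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
</code></pre>
<p>Inside the cluster, the existing ClusterIP Service already lets other pods connect via <code>frontarena-ads-aks-test:9000</code>.</p>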
| Pulak Kanti Bhowmick |
<p>So currently I am trying to set up this Management Cluster through the Google Cloud shell using <a href="https://www.kubeflow.org/docs/distributions/gke/deploy/management-setup/" rel="nofollow noreferrer">this</a> guide. However, I have been facing along the steps.</p>
<p>First one is the fact that that kpt does not seem to have any <code>kpt cfg</code> functionality anymore. To combat this, I have downloaded the binary for <code>kpt 0.39.3</code>. because the latest one gives me the error:</p>
<pre><code>error: unknown command "cfg" for "kpt"
Did you mean this?
fn
pkg
</code></pre>
<p>So I made the <code>Kptfile</code> locally using <code>0.39.3</code> and then placed it in the directory for Google Cloud Shell to pickup. Now when I call <code>make apply-cluster</code>, I get the error:</p>
<pre><code>I0824 03:23:40.084196 1255 main.go:230] reconcile serviceusage.cnrm.cloud.google.com/Service container.googleapis.com
Unexpected error: error reconciling objects: error reconciling Service:PROJECT/container.googleapis.com: error fetching service "projects/PROJECT/services/container.googleapis.com": googleapi: Error 400: The resource id projects/PROJECT is invalid.
</code></pre>
<p>But I know for a fact that this is a functional</p>
| SDG | <p>I have just tried to follow the guide you mentioned <a href="https://www.kubeflow.org/docs/distributions/gke/deploy/management-setup/" rel="nofollow noreferrer">here</a> and everything worked for me.</p>
<p>In order to avoid your first error, I decided to remove kpt using <code>gcloud components remove kpt</code> and install the old version, which is fully compatible with this Kubeflow guide, using this <a href="https://googlecontainertools.github.io/kpt/installation/binaries/" rel="nofollow noreferrer">link</a>.</p>
<p>Regarding your second error: when you call <code>make apply-cluster</code> and receive <code>The resource id projects/PROJECT is invalid</code>, it is most likely related to not having set the kpt setter values <a href="https://www.kubeflow.org/docs/distributions/gke/deploy/management-setup/#configure-kpt-setter-values" rel="nofollow noreferrer">accordingly</a>.</p>
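<p>For reference, the workflow looks roughly like this (the exact setter names depend on the version of the guide, so treat them as illustrative):</p>
<pre><code># replace the bundled kpt with a 0.39.x binary that is compatible with the guide
gcloud components remove kpt
# download a kpt 0.39.x release binary, put it on your PATH, then verify:
kpt version
# set the kpt setter values that the Makefile/packages expect
kpt cfg set -R . name <MANAGEMENT_CLUSTER_NAME>
kpt cfg set -R . gcloud.core.project <PROJECT_ID>
kpt cfg set -R . location <LOCATION>
make apply-cluster
</code></pre>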
| J.Vander |
<p>I am new to Azure Kubernetes Service. I have created an Azure Kubernetes cluster and tried to deploy some workload in it. The .yaml file as follows</p>
<pre><code>- apiVersion: v1
kind: Namespace
metadata:
name: azure-vote
spec:
finalizers:
- kubernetes
- apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
namespace: azure-vote
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: 'yes'
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
- apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
namespace: azure-vote
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
- apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
namespace: azure-vote
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: azure-vote-back
- apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
namespace: azure-vote
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front
</code></pre>
<p>When I deploy this .yaml via the Azure CLI it gives me a validation error but doesn't indicate where it is. When I run <code>kubectl apply -f ./filename.yaml --validate=false</code> it gives a <strong>"cannot unmarshal array into Go value of type unstructured.detector"</strong> error. However, when I run the same YAML in the Azure portal UI it runs without any error. I would appreciate it if someone could explain the reason for this and how to fix it.</p>
| Rama | <p>I tried to run the code you have provided in the <em><strong>Portal</strong></em> as well as the <em><strong>Azure CLI</strong></em>. It was created successfully in the <em>Portal UI</em> by adding the <code>YAML code</code>, but using the <em>Azure CLI</em> I received the same error as you:</p>
<p><a href="https://i.stack.imgur.com/bUZuG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bUZuG.png" alt="enter image description here" /></a></p>
<p>After modifying your <code>YAML file</code> (mainly replacing the top-level list, where every object starts with <code>- apiVersion</code>, with separate documents divided by <code>---</code>, which is what <code>kubectl</code> expects) and validating it, I ran the same command again and it deployed successfully via the <em><strong>Azure CLI</strong></em>:</p>
<p><strong>YAML File:</strong></p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: azure-vote
spec:
finalizers:
- kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
namespace: azure-vote
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
namespace: azure-vote
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/Br7Cy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Br7Cy.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/0nfuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0nfuk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/dxeod.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dxeod.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/UnS4m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UnS4m.png" alt="enter image description here" /></a></p>
| Ansuman Bal |
<p>I am running FPM and nginx as two containers in one pod. My app is working and I can access it, but the browser does not render the CSS files. There are no errors in the console.
My deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
volumes:
- name: shared-files
emptyDir: {}
- name: nginx-config-volume
configMap:
name: test
containers:
- image: test-php
name: app
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- name: shared-files
mountPath: /var/appfiles
lifecycle:
postStart:
exec:
command: ['sh', '-c', 'cp -r /var/www/* /var/appfiles']
- image: nginx
name: nginx
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- name: shared-files
mountPath: /var/appfiles
- name: nginx-config-volume
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
</code></pre>
<p>Nginx config:</p>
<pre><code>events {
}
http {
server {
listen 80;
root /var/appfiles/;
index index.php index.html index.htm;
# Logs
access_log /var/log/nginx/tcc-webapp-access.log;
error_log /var/log/nginx/tcc-webapp-error.log;
location / {
# try_files $uri $uri/ =404;
# try_files $uri $uri/ /index.php?q=$uri&$args;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
}
</code></pre>
<p>I can open the page in the browser and I can see all components, links, buttons and so on, but the page is not styled and it looks like the CSS is not loaded.</p>
| rholdberh | <p>In order to resolve your issue, reference the ConfigMap by its exact name (here <code>nginx-configmap</code>) in the Deployment's <code>configMap</code> volume, and define the nginx configuration in the <em>ConfigMap</em> as follows:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: "nginx-configmap"
data:
  nginx.conf: |
    events {
    }
    http {
      # without mime.types, stylesheets are served as text/plain and the browser ignores them
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      server {
        listen 80;
        server_name _;
        charset utf-8;
        root /var/appfiles/;
        access_log /var/log/nginx/tcc-webapp-access.log;
        error_log /var/log/nginx/tcc-webapp-error.log;
        location / {
          index index.php;
          try_files $uri $uri/ /index.php?$query_string;
        }
        location ~ \.php$ {
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
        }
      }
    }
</code></pre>
<p>You can find <a href="https://medium.com/flant-com/stateful-app-files-in-kubernetes-d015311e5e6b" rel="nofollow noreferrer">the medium article</a> useful for you.</p>
| Bazhikov |
<p>I need to retrieve a list of pods by selecting their corresponding labels.
When the pods have a simple label <code>app=foo</code>, <code>k8s-app=bar</code>, the selection is quite easy:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get po -l 'app in (foo), k8s-app in (bar)'
</code></pre>
<p>The complexity comes with labels that contain special characters, for example: <code>app.kubernetes.io/name=foo</code>
So when I query only this label, I don't have a problem, but if I try to add this label to the existing query, it will end by returning <code>no resources were found</code>.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get po -l app.kubernetes.io/name=foo,app=bar
kubectl get po -l 'app.kubernetes.io/name in (foo), app in (bar)'
</code></pre>
<p>Any idea how can I join the two labels in a single query?</p>
| Tomer Leibovich | <p>You can use the command below to retrieve a list of pods by selecting their corresponding labels:</p>
<pre><code>kubectl get pods --selector app=foo,k8s-app=bar
</code></pre>
| Nikunj Ranpura |
<p>I have set up a custom docker image registry on Gitlab and AKS for some reason fails to pull the image from there.<br />
Error that is being thrown out is:</p>
<pre><code>Failed to pull image "{registry}/{image}:latest": rpc error: code = FailedPrecondition desc =
failed to pull and unpack image "{registry}/{image}:latest": failed commit on ref "layer-sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546": "layer-sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546" failed size validation: 0 != 27145985: failed precondition
</code></pre>
<p>What is interesting is that the image does not have the layer with id</p>
<pre><code>sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546
</code></pre>
<p>Perhaps something is cached on AKS side? I deleted the pod along with the deployment before redeploying.</p>
<p>I couldn't find much about this kind of errors and I have no idea what may be causing that. Pulling the same image from local docker environment works flawlessly.<br />
Any tip would be much appreciated!</p>
| szachmat | <p>• You can try scaling up the registry to run on all nodes. The Kubernetes controller tries to be smart and routes node requests internally instead of sending traffic to the load balancer IP. The issue, though, is that if there is no registry service on that node, the packets go nowhere. So, scale up or route through a non-AKS load balancer.</p>
<p>• Also, clean the image layer cache folder at <code>${containerd folder}/io.containerd.content.v1.content/ingest</code>. Containerd does not clean this cache automatically when some layer data is broken, so try purging the contents of that path.</p>
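<p>For example, a rough sketch of purging that cache on an affected node (the paths can differ between containerd versions, so double-check them first):</p>
<pre><code># run on the affected node, e.g. via an SSH/node-shell session
sudo systemctl stop containerd
sudo rm -rf /var/lib/containerd/io.containerd.content.v1.content/ingest/*
sudo systemctl start containerd
</code></pre>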
<p>• This might also be a TCP network connection issue between the AKS cluster and the Docker image registry on GitLab: if a proxy or firewall in between closes the connection after 'X' bytes are transferred, the retry of the pull starts over at 0% for the layer, the connection gets closed again after some time, and the layer is never pulled completely, which results in the same size validation error. For that reason it is recommended to use a registry located near the cluster to get higher throughput.</p>
<p>• Also try restarting the communication pipeline between the AKS cluster and the Docker image registry on GitLab; this fixes the issue for the time being until it re-occurs.</p>
<p>Please find the below link for more information: -</p>
<p><a href="https://docs.gitlab.com/ee/user/packages/container_registry/" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/packages/container_registry/</a></p>
| Kartik Bhiwapurkar |
<p>I'm wondering about an approach one has to take for our server setup. We have pods that are short lived. They are started up with 3 pods at a minimum and each server is waiting on a single request that it handles - then the pod is destroyed. I'm not sure of the mechanism that this pod is destroyed, but my question is not about this part anyway.</p>
<p>There is an "active session count" metric that I am envisioning. Each of these pod resources could make a rest call to some "metrics" pod that we would create for our cluster. The metrics pod would expose a <code>sessionStarted</code> and <code>sessionEnded</code> endpoint - which would increment/decrement the kubernetes <code>activeSessions</code> metric. That metric would be what is used for horizontal autoscaling of the number of pods needed.</p>
<p>Since having a pod as "up" counts as zero active sessions, the custom event that increments the session count would update the metric server session count with a rest call and then decrement again on session end (the pod being up does not indicate whether or not it has an active session).</p>
<p>Is it correct to think that I need this metric server (and write it myself)? Or is there something that Prometheus exposes where this type of metric is supported already - rest clients and all (for various languages), that could modify this metric?</p>
<p>Looking for guidance and confirmation that I'm on the right track. Thanks!</p>
| Mike | <p>It's impossible to give only one way to solve this, and your question is more "opinion-based". However, there is a useful <a href="https://stackoverflow.com/questions/68176705/how-to-get-active-connections-count-of-a-kubernetes-pod">similar question on StackOverflow</a>; please check the comments, which can give you some tips. If nothing works, you will probably have to write the metrics service yourself. There is no exact solution from Kubernetes's side.</p>
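<p>If you do end up writing it yourself, a minimal sketch of such a metrics service (assuming Python with the <code>prometheus_client</code> and <code>Flask</code> libraries; the endpoint names simply mirror the ones you described) could look like this:</p>
<pre><code># a minimal sketch; endpoint names and port are illustrative
from flask import Flask
from prometheus_client import Gauge, make_wsgi_app
from werkzeug.middleware.dispatcher import DispatcherMiddleware

app = Flask(__name__)
active_sessions = Gauge("active_sessions", "Number of sessions currently being handled")

@app.route("/sessionStarted", methods=["POST"])
def session_started():
    active_sessions.inc()   # a short-lived pod calls this when it picks up a request
    return "", 204

@app.route("/sessionEnded", methods=["POST"])
def session_ended():
    active_sessions.dec()   # ...and this right before it finishes
    return "", 204

# expose /metrics for Prometheus to scrape alongside the two REST endpoints
app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {"/metrics": make_wsgi_app()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
</code></pre>
<p>The resulting <code>active_sessions</code> gauge can then be fed to the Horizontal Pod Autoscaler as a custom metric through an adapter such as prometheus-adapter.</p>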
<p>Please also take into the consideration of <a href="https://flink.apache.org/flink-architecture.html" rel="nofollow noreferrer">Apache Flink</a>. It has <a href="https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/resource-providers/standalone/kubernetes/#using-standalone-kubernetes-with-reactive-mode" rel="nofollow noreferrer">Reactive Mode</a> in combination of Kubernetes:</p>
<blockquote>
<p><a href="https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/elastic_scaling/#reactive-mode" rel="nofollow noreferrer">Reactive Mode</a> allows to run Flink in a mode, where the Application Cluster is always adjusting the job parallelism to the available resources. In combination with Kubernetes, the replica count of the TaskManager deployment determines the available resources. Increasing the replica count will scale up the job, reducing it will trigger a scale down. This can also be done automatically by using a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>.</p>
</blockquote>
| Bazhikov |
<p>I have an AKS cluster with autoscaling enabled, where rules are based on avg CPU.
<a href="https://i.stack.imgur.com/FxeML.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FxeML.png" alt="enter image description here" /></a></p>
<p>The default number of nodes is := <code>default = 5</code>, <code>min = 4</code> and <code>max = 7</code> and the scaling rules have a cooldown of 5 minutes.</p>
<p>I am trying to understand why the scaling rules cause continuous up- and downscaling
<a href="https://i.stack.imgur.com/YPgCL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YPgCL.png" alt="enter image description here" /></a></p>
<p>while the average CPU usage is low enough for 4 nodes.</p>
<p><a href="https://i.stack.imgur.com/nWHNX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nWHNX.png" alt="enter image description here" /></a></p>
<p>What I found even more surprising is that the Activity Log only highlights down scale events! They are consistent with a cooldown of 5 minutes, so AKS thinks that he's downscaling all the time, magically new nodes appear, and it keeps downscaling?</p>
<p>Who can explain what's going on here and what is causing it?</p>
<p><a href="https://i.stack.imgur.com/A8wfk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A8wfk.png" alt="enter image description here" /></a></p>
| Casper Dijkstra | <p>• Since you have set the default node count in the node pool to 5, but the average CPU utilization does not even reach 40%, the autoscaler keeps scaling the pool down to the configured minimum of 4 nodes; once that count is reached and the scale-down condition is satisfied, it scales back up towards the default node count of 5. This explains the continuous scaling up and down of the nodes in the first picture posted.</p>
<p>• The continuous scale-down events captured in the Activity Log every 5 minutes correspond to the default cooldown delay of 5 minutes for scaling events, according to the Microsoft documentation below. Only scale-down events are captured because, after going from 4 nodes back to the default node count of 5, the CPU initialization and utilization only trigger scale-down events according to the autoscaler rules, so those are the only events recorded in the Activity Log:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cooldown-of-scaling-events" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cooldown-of-scaling-events</a></p>
<p>Also, find the below Kubernetes documentation for reference: -</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay</a></p>
| Kartik Bhiwapurkar |
<p>I have many DNS records in my DNS zone in Azure and I need to use external-dns to automate DNS record creation/deletion, but I need to filter by labels: when external-dns finds a label on an AKS ingress other than the one below, it mustn't touch it:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sample-rule
labels:
ingress: externaldns
annotations:
kubernetes.io/ingress.class: "nginx"
ingress: "externaldns"
</code></pre>
<p>the Helm command :</p>
<pre><code>helm install external-dns-frontend-sint bitnami/external-dns \
--wait \
--namespace externaldns \
--set txtOwnerId=az-frontend-aks\
--set provider=azure \
--set azure.resourceGroup=az-tools \
--set txtOwnerId=az-frontend-ak \
--set azure.tenantId=xxxxxxxxxxxxxxxxxxxxxxx \
--set azure.subscriptionId=xxxxxxxxxxxxxxxxxxxxxxxx \
--set azure.aadClientId=xxxxxxxxxxxxxxxxx \
--set azure.aadClientSecret=xxxxxxxxxxxxxxx \
--set azure.cloud=AzurePublicCloud \
--set policy=sync \
--set labelfilter=”ingre=externaldns” \
--set annotationfilter=”ingress=externaldns” \
--set domainFilters={azdns.test.com}
</code></pre>
<p>I need to know how I can use this argument with the Bitnami external-dns chart to activate the label filter.
Any help please.</p>
<p>Lastly: the filter doesn't work; it created all the records from the ingresses in the same namespace.</p>
| Inforedaster | <p>• You can use the label filter option with the Bitnami external-dns chart as below, so that external-dns only manages the ingresses that carry the expected label and leaves everything else untouched.</p>
<pre><code>$ helm install my-release -f values.yaml bitnami/external-dns
</code></pre>
<p>In the values.yaml file, specify the label filter and annotation filter parameters as below: -</p>
<pre><code>labelFilter: "ingress=externaldns"
annotationFilter: "ingress=externaldns"
</code></pre>
<p>OR</p>
<pre><code>$ helm install my-release \
  --set labelFilter="ingress=externaldns" \
  --set annotationFilter="ingress=externaldns" \
  bitnami/external-dns
</code></pre>
<p>Also, please take into consideration that ‘annotation filter’ filters sources managed by external-dns via annotation using label selector while the ‘label filter’ only selects sources managed by external-dns using the label selector. Thus, filtering based on annotation means that the external-dns controller will receive all resources of that kind and then filter on the client-side. In larger clusters with many resources which change frequently this can cause performance issues. If only some resources need to be managed by an instance of external-dns then label filtering can be used instead of annotation filtering. This means that only those resources which match the selector specified in ‘--label-filter’ will be passed to the controller.</p>
<p>Please find the below links for reference: -</p>
<p><a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md#running-an-internal-and-external-dns-service" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md#running-an-internal-and-external-dns-service</a></p>
<p><a href="https://github.com/bitnami/charts/tree/master/bitnami/external-dns/#external-dns-parameters" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/external-dns/#external-dns-parameters</a></p>
| Kartik Bhiwapurkar |
<p>I am new to Kubernetes, but I have been using Docker and Docker Compose for a long time. I am trying to find more information about how Kubernetes handles shared/read only config files compared to Docker Compose.</p>
<p>In my <code>docker-compose.yaml</code> file I am sharing specific config files to my containers using bind mounts, similar to this:</p>
<pre><code> ...
elasticsearch:
image: elasticsearch:7.3.2
environment:
- discovery.type=single-node
networks:
mycoolapp:
aliases:
- elasticsearch
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticdata:/usr/share/elasticsearch/data
- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
...
</code></pre>
<p>I have been reading up on Persistent Volumes and I believe this is what I need, however, my understanding still isn't 100% clear on a few issues.</p>
<ol>
<li>I'm using <code>azureFile</code> for my volume type, and I have copied my configs into the file share. How do I mount a sub folder of the file share into my container? <code>mountPath</code> only appears in <code>volumeMounts</code>, and I can't find where the corresponding location within the volume are.</li>
<li>How do I share just a single file?</li>
<li>How do I make the single file that I shared above read only?</li>
</ol>
| joe_coolish | <p>The Kubernetes ConfigMap object will come in handy here.</p>
<blockquote>
<p>A ConfigMap is an API object used to store non-confidential data in key-value pairs.</p>
</blockquote>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: elastic-demo
data:
# property-like keys; each key maps to a simple value; available as env var
properties_file_name: "database.properties"
# file-like keys
database.properties: |
data1=value1
data2=value2
</code></pre>
<p>You can mount the above ConfigMap as a volume in read-only mode:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: elasticsearch
volumeMounts:
- name: elastic-volume
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: elastic-volume
configMap:
name: elastic-demo
</code></pre>
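<p>If you only need to project a single file (instead of the whole ConfigMap shadowing a directory), you can combine <code>items</code> with <code>subPath</code>. A minimal sketch, reusing the names above:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: elasticsearch
    volumeMounts:
    - name: elastic-volume
      mountPath: "/usr/share/elasticsearch/config/database.properties"
      subPath: database.properties
      readOnly: true
  volumes:
  - name: elastic-volume
    configMap:
      name: elastic-demo
      items:
      - key: database.properties
        path: database.properties
</code></pre>
<p>The same <code>subPath</code> field on the volume mount is also how you mount only a sub-folder of an <code>azureFile</code> share into the container.</p>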
| Pulak Kanti Bhowmick |
<p>I am working on a microservice app and I use nginx ingress. I set up rules with 3 services; when I mention the host in the rules like below, it always gives me 404 for all the services.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/issuer: "local-selfsigned"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
tls:
- hosts:
- "tradephlo.local"
secretName: tls-ca
rules:
- host: "tradephlo.local"
- http:
paths:
- path: /api/main/?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-main-srv
port:
number: 4000
- path: /api/integration/?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-integration-srv
port:
number: 5000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-client-srv
port:
number: 3000
</code></pre>
<p>However if I put wildcard in the host under the rules it works perfectly</p>
<pre><code>rules:
- host: "*.tradephlo.local"
</code></pre>
<p>I don't want to generate wildcard SSL in the production. Please help me point out what I am doing wrong here.</p>
| Dave | <p>The problem is in dash <code>-</code> in the following line:</p>
<pre><code>rules:
- host: "tradephlo.local"
- http:
</code></pre>
<p>Otherwise, it is 2 different hosts - <code>tradephlo.local</code> and <code>*</code>.</p>
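<p>So the corrected rule keeps <code>host</code> and <code>http</code> in the same list item:</p>
<pre><code>rules:
  - host: "tradephlo.local"
    http:
      paths:
        - path: /api/main/?(.*)
          pathType: Prefix
          backend:
            service:
              name: tradephlo-main-srv
              port:
                number: 4000
        # ...the remaining paths stay unchanged
</code></pre>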
<p>We can check this with the following command:</p>
<p><code>kubectl describe ing ingress-srv</code></p>
<p>And we get this:</p>
<pre><code>$ kubectl describe ing ingress-srv
Name: ingress-srv
Namespace: default
Address: xxxxxxxxxx
Default backend: default-http-backend:80 (10.60.0.9:8080)
TLS:
tls-ca terminates tradephlo.local
Rules:
Host Path Backends
---- ---- --------
*
/api/main/?(.*) nginx:80 (yyyyy:80)
</code></pre>
<p>And we get this after removed <code>-</code>:</p>
<pre><code>$ kubectl describe ing ingress-srv
Name: ingress-srv
Namespace: default
Address: xxxx
Default backend: default-http-backend:80 (10.60.0.9:8080)
TLS:
tls-ca terminates tradephlo.local
Rules:
Host Path Backends
---- ---- --------
tradephlo.local
/api/main/?(.*) nginx:80 (yyyyyy:80)
</code></pre>
<p>So there is no need to use a wildcard; when you do this, the ingress treats <code>*.tradephlo.local</code> as a different host and proceeds to the <code>*</code> rule.</p>
| Bazhikov |
<p>Terraform plan always forces AKS cluster to be recreated if we increase worker node in node pool</p>
<p>I tried creating an AKS cluster with 1 worker node via Terraform; it went well and the cluster is up and running.</p>
<p><a href="https://i.stack.imgur.com/ZAQOc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZAQOc.png" alt="enter image description here" /></a></p>
<p>After that, I tried to add one more worker node to my AKS; Terraform showed Plan: 2 to add, 0 to change, 2 to destroy.</p>
<p><a href="https://i.stack.imgur.com/IEKkD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IEKkD.png" alt="enter image description here" /></a></p>
<p>I am not sure how we can increase the worker node count in the AKS node pool if it deletes the existing node pool.</p>
<pre><code> default_node_pool {
name = var.nodepool_name
vm_size = var.instance_type
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = var.zones
enable_auto_scaling = var.node_autoscalling
node_count = var.instance_count
enable_node_public_ip = var.publicip
vnet_subnet_id = data.azurerm_subnet.subnet.id
node_labels = {
"node_pool_type" = var.tags[0].node_pool_type
"environment" = var.tags[0].environment
"nodepool_os" = var.tags[0].nodepool_os
"application" = var.tags[0].application
"manged_by" = var.tags[0].manged_by
}
}
</code></pre>
<p>Error</p>
<pre><code>Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# azurerm_kubernetes_cluster.aks_cluster must be replaced
-/+ resource "azurerm_kubernetes_cluster" "aks_cluster" {
</code></pre>
<p>Thanks
Satyam</p>
| Satyam Pandey | <p><em><strong>I tested the same in my environment by creating a cluster with a node count of 2 and then changing it to 3 using something like below:</strong></em></p>
<p><a href="https://i.stack.imgur.com/H9RHK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H9RHK.png" alt="enter image description here" /></a></p>
<p>If you are using <code>HTTP_proxy</code> then it will <em><strong>by default force a replacement on that block</strong></em> and that's the reason the whole cluster will get replaced with the new configurations.</p>
<p><a href="https://i.stack.imgur.com/FajBY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FajBY.png" alt="enter image description here" /></a></p>
<p>So, for a solution you can use lifecycle block in your code as I have done below:</p>
<pre><code> lifecycle {
ignore_changes = [http_proxy_config]
}
</code></pre>
<p><em><strong>The code will be :</strong></em></p>
<pre><code>resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${var.global-prefix}-${var.cluster-id}-${var.envid}-azwe-aks-01"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
dns_prefix = "${var.global-prefix}-${var.cluster-id}-${var.envid}-azwe-aks-01"
kubernetes_version = var.cluster-version
private_cluster_enabled = var.private_cluster
default_node_pool {
name = var.nodepool_name
vm_size = var.instance_type
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = var.zones
enable_auto_scaling = var.node_autoscalling
node_count = var.instance_count
enable_node_public_ip = var.publicip
vnet_subnet_id = azurerm_subnet.example.id
}
# RBAC and Azure AD Integration Block
role_based_access_control {
enabled = true
}
http_proxy_config {
http_proxy = "http://xxxx"
https_proxy = "http://xxxx"
no_proxy = ["localhost","xxx","xxxx"]
}
# Identity (System Assigned or Service Principal)
identity {
type = "SystemAssigned"
}
# Add On Profiles
addon_profile {
azure_policy {enabled = true}
}
# Network Profile
network_profile {
network_plugin = "azure"
network_policy = "calico"
}
lifecycle {
ignore_changes = [http_proxy_config]
}
}
</code></pre>
| Ansuman Bal |
<p>I'm trying to set up the use of dotnet-monitor in a windows pod. But if I understand correctly, there are no images for use on Windows nodes <a href="https://hub.docker.com/_/microsoft-dotnet-monitor" rel="nofollow noreferrer">https://hub.docker.com/_/microsoft-dotnet-monitor</a>. Is there any way to install dotnet-monitor utilities in Dockerfile windows pod to start collecting metrics from my windows application?</p>
| Vitalii Fedorenko | <p>The side car approach for setting up Dotnet Monitor in a Windows Container to get the diagnostics logs of a different container is currently not supported as mentioned by <em><strong>Jander-MSFT</strong></em> in this <em><strong><a href="https://github.com/dotnet/dotnet-monitor/issues/1160" rel="nofollow noreferrer"><code>Github Issue</code></a></strong></em> which might get resolved by this <em><strong><a href="https://github.com/dotnet/runtime/issues/63950" rel="nofollow noreferrer"><code>Issue</code></a>.</strong></em></p>
<p><em><strong>As a solution, you will have to install the tool in the same Windows container by running the command below:</strong></em></p>
<pre><code>dotnet tool install --global dotnet-monitor --version 6.0.0
</code></pre>
<p>You can refer to this <em><strong><a href="https://www.hanselman.com/blog/exploring-your-net-applications-with-dotnetmonitor" rel="nofollow noreferrer"><code>blog</code></a></strong></em> by <em><strong>Scott Hanselman</strong></em> for more details on the same.</p>
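<p>As an illustration, a rough Dockerfile sketch for baking the tool into a Windows image (the base image tag, install folder and port are placeholders; adjust them to your setup):</p>
<pre><code># escape=`
FROM mcr.microsoft.com/dotnet/sdk:6.0-windowsservercore-ltsc2022
# install the tool into a fixed folder so we do not depend on the user PATH
RUN dotnet tool install dotnet-monitor --tool-path C:\dotnet-tools --version 6.0.0
# run dotnet-monitor alongside the application process in the same container
ENTRYPOINT ["C:\\dotnet-tools\\dotnet-monitor.exe", "collect", "--urls", "http://+:52323", "--no-auth"]
</code></pre>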
| Ansuman Bal |
<p>Ingress Nginx controller is returning 404 Not Found for the React application. I narrowed it down to the React app because if I try to hit posts.com/posts, it actually returns the JSON list of existing posts, but for the frontend app it keeps showing
GET <a href="http://posts.com/" rel="nofollow noreferrer">http://posts.com/</a> 404 (Not Found)</p>
<p>I looked to some other stackoverflow questions, to no avail unfortunately.</p>
<p><strong>ingress-srv.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "use"
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts/create
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
- path: /posts
pathType: Prefix
backend:
service:
name: query-srv
port:
number: 4002
- path: /posts/?(.*)/comments
pathType: Prefix
backend:
service:
name: comments-srv
port:
number: 4001
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
<p><strong>client-depl.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: brachikaa/client
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>frontend <strong>Dockerfile</strong></p>
<pre><code>FROM node:alpine
ENV CI=true
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
</code></pre>
<p><strong>Logging the pod:</strong></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/client-depl-f7cf996cf-cvh6m to minikube
Normal Pulling 11m kubelet Pulling image "brachikaa/client"
Normal Pulled 11m kubelet Successfully pulled image "brachikaa/client" in 42.832431635s
Normal Created 11m kubelet Created container client
Normal Started 11m kubelet Started container client
</code></pre>
<p>If you need any other logs, I will gladly provide them. Thanks.</p>
| Mehmed Duhovic | <p>In your YAML there is a path <code>/?(.*)</code> meant to catch the root, but requests to <code>/</code> will not reach it because there is no prefix match. So you have to create a <code>/</code> path with type <code>Prefix</code> pointing at the client service to solve the issue. The current <code>/?(.*)</code> path can then be ignored, as everything it matches is already covered by the <code>/</code> prefix.</p>
<p>Please try this:</p>
<pre><code>ingress-srv.yaml
__________________
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: posts.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
- path: /posts/create
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
- path: /posts
pathType: Prefix
backend:
service:
name: query-srv
port:
number: 4002
- path: /posts/?(.*)/comments
pathType: Prefix
backend:
service:
name: comments-srv
port:
number: 4001
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
| Pulak Kanti Bhowmick |
<p>Is there a way we can alter the resource names of the resources provisioned by AKS itself (screenshot below). I know I can change the node resource group name as per the documentation but cannot find any reference (or documentation) if we can change the AKS managed resource names. The resources for which I want to have custom naming specifically are:</p>
<ol>
<li>Load balancer</li>
<li>AKS Virtual Machine Scale Set</li>
</ol>
<p><a href="https://i.stack.imgur.com/H8cSs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H8cSs.png" alt="enter image description here" /></a></p>
| faizan | <p>You cannot change the names of the resources provisioned by AKS itself, because they are managed by AKS only. You can give your own name to the <code>node resource group</code> at creation time using an IaC tool like <code>Terraform</code>, <code>Bicep</code>, etc., but you can't change the names of the AKS-managed node resources once the cluster is created.</p>
<p>From the Portal you cannot assign your own name for the <code>node_resource_group</code>.</p>
<p><strong>Note</strong> : <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#node_resource_group" rel="nofollow noreferrer">node_resource_group</a> - (Optional) The name of the Resource Group where the Kubernetes Nodes should exist. Changing this forces a new resource to be created.</p>
<p>Azure requires that a new, non-existent Resource Group is used, as otherwise the provisioning of the Kubernetes Service will fail.</p>
<p>Please Refer these document : <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#node_resource_group" rel="nofollow noreferrer">azurerm_kubernetes_cluster | Resources | hashicorp/azurerm | Terraform Registry</a></p>
<p><a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.containerservice/managedclusters?tabs=bicep" rel="nofollow noreferrer">Microsoft.ContainerService/managedClusters - Bicep & ARM template reference | Microsoft Docs</a></p>
| RahulKumarShaw |
<p>The k8s scheduling implementation comes in two forms: <a href="https://v1-18.docs.kubernetes.io/docs/reference/scheduling/" rel="nofollow noreferrer">Scheduling Policies and Scheduling Profiles</a>.</p>
<p><strong>What is the relationship between the two?</strong> They seem to overlap but have some differences. For example, there is a <code>NodeUnschedulable</code> in the <code>profiles</code> but not in the <code>policy</code>. <code>CheckNodePIDPressure</code> is in the <code>policy</code>, but not in the <code>profiles</code></p>
<p>In addition, there is a default startup option in the scheduling configuration, but it is not specified in the scheduling policy. How can I know about the default startup policy?</p>
<p>I really appreciate any help with this.</p>
| moluzhui | <p>The difference is simple: 'Scheduling Policies' are the legacy mechanism and are deprecated from v1.19 onwards in favour of 'Scheduling Profiles'. Kubernetes v1.19 supports <a href="https://kubernetes.io/docs/reference/scheduling/config/#multiple-profiles" rel="nofollow noreferrer">configuring multiple scheduling profiles</a> with a single scheduler, and this can be used, for example, to define a bin-packing scheduling profile in a v1.19 cluster.</p>
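<p>A profile like that is defined in the scheduler's <code>KubeSchedulerConfiguration</code>. A rough sketch for a v1.19-era cluster (plugin names changed in later API versions, so treat this as illustrative):</p>
<pre><code>apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: bin-packing-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesLeastAllocated
        enabled:
          - name: NodeResourcesMostAllocated
</code></pre>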
<p>To use that scheduling policy, all that is required is to specify the scheduler name bin-packing-scheduler in the Pod spec. For example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
schedulerName: bin-packing-scheduler
containers:
- name: nginx
image: nginx:1.17.8
resources:
requests:
cpu: 200m
</code></pre>
<p>The pods of this deployment will be scheduled onto the nodes which already have the highest resource utilisation, to optimise for autoscaling or ensuring efficient pod placement when mixing large and small pods in the same cluster.</p>
<p>If a scheduler name is not specified then the default spreading algorithm will be used to distribute pods across all nodes.</p>
| Bazhikov |
<p>This is sort of strange behavior in our K8 cluster.</p>
<p>When we try to deploy a new version of our applications we get:</p>
<pre><code>Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "<container-id>" network for pod "application-6647b7cbdb-4tp2v": networkPlugin cni failed to set up pod "application-6647b7cbdb-4tp2v_default" network: Get "https://[10.233.0.1]:443/api/v1/namespaces/default": dial tcp 10.233.0.1:443: connect: connection refused
</code></pre>
<p>I used <code>kubectl get cs</code> and found <code>controller</code> and <code>scheduler</code> in <code>Unhealthy</code> state.</p>
<p>As describer <a href="https://github.com/kubernetes/kubernetes/issues/93472" rel="nofollow noreferrer">here</a> updated <code>/etc/kubernetes/manifests/kube-scheduler.yaml</code> and
<code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> by commenting <code>--port=0</code></p>
<p>When I checked <code>systemctl status kubelet</code> it was working.</p>
<pre><code>Active: active (running) since Mon 2020-10-26 13:18:46 +0530; 1 years 0 months ago
</code></pre>
<p>I had restarted kubelet service and <code>controller</code> and <code>scheduler</code> were shown healthy.</p>
<p>But <code>systemctl status kubelet</code> shows (soon after restart kubelet it showed running state)</p>
<pre><code>Active: activating (auto-restart) (Result: exit-code) since Thu 2021-11-11 10:50:49 +0530; 3s ago<br>
Docs: https://github.com/GoogleCloudPlatform/kubernetes<br> Process: 21234 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET
</code></pre>
<p>Tried adding <code>Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false" </code> to <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> as described <a href="http://Active:%20activating%20(auto-restart)%20(Result:%20exit-code)%20since%20Thu%202021-11-11%2010:50:49%20+0530;%203s%20ago%20%20%20%20%20%20Docs:%20https://github.com/GoogleCloudPlatform/kubernetes%20%20%20Process:%2021234%20ExecStart=/usr/bin/kubelet%20$KUBELET_KUBECONFIG_ARGS%20$KUBELET_CONFIG_ARGS%20$KUBELET_KUBEADM_ARGS%20$KUBELET" rel="nofollow noreferrer">here</a>, but still its not working properly.</p>
<p>Also removed <code>--port=0</code> comment in above mentioned manifests and tried restarting,still same result.</p>
<p><strong>Edit:</strong> This issue was due to <code>kubelet</code> certificate expired and fixed following <a href="https://github.com/kubernetes/kubeadm/issues/2054#issuecomment-606916146" rel="nofollow noreferrer">these</a> steps. If someone faces this issue, make sure <code>/var/lib/kubelet/pki/kubelet-client-current.pem</code> certificate and key values are base64 encoded when placing on <code>/etc/kubernetes/kubelet.conf</code></p>
<p>Many other suggested <code>kubeadm init</code> again. But this cluster was created using <code>kubespray</code> no manually added nodes.</p>
<p>We have baremetal k8 running on Ubuntu 18.04.
K8: v1.18.8</p>
<p>We would like to know any debugging and fixing suggestions.</p>
<p>PS:<br>
When we try to <code>telnet 10.233.0.1 443</code> from any node, first attempt fails and second attempt success.</p>
<p>Edit: Found this in <code>kubelet</code> service logs</p>
<pre><code>Nov 10 17:35:05 node1 kubelet[1951]: W1110 17:35:05.380982 1951 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "app-7b54557dd4-bzjd9_default": unexpected command output nsenter: cannot open /proc/12311/ns/net: No such file or directory
</code></pre>
| Sachith Muhandiram | <p>Posting comment as the community wiki answer for better visibility</p>
<hr />
<p>This issue was due to <code>kubelet</code> certificate expired and fixed following <a href="https://github.com/kubernetes/kubeadm/issues/2054#issuecomment-606916146" rel="nofollow noreferrer">these steps</a>. If someone faces this issue, make sure <code>/var/lib/kubelet/pki/kubelet-client-current.pem</code> certificate and key values are <code>base64</code> encoded when placing on <code>/etc/kubernetes/kubelet.conf</code></p>
| Bazhikov |
<p>I have bought a wildcard SSL certificate from Azure App Service Certificate. I also have an AKS cluster, and I want to put the certificate in a secret and use it in an ingress. After the purchase, the certificate was stored in Azure Key Vault. I downloaded it and then imported it to create an Azure Key Vault certificate. Then, with akv2k8s, I created a secret in my AKS and used it in the ingress. After that, my application threw an 'err_cert_authority_invalid' error.
Am I doing anything wrong?
There is not much documentation on SSL and ingress; many articles use 'Let's Encrypt' or 'cert-manager'.</p>
<p><a href="https://akv2k8s.io/" rel="nofollow noreferrer">https://akv2k8s.io/</a></p>
<p><a href="https://i.stack.imgur.com/JEQgt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEQgt.png" alt="enter image description here" /></a></p>
| Ruben Aleksanyan | <p>• It can be due to a mix-up between certificates issued by the staging environment and by the production one. To illustrate, consider the <strong>'stable/wordpress'</strong> helm chart with the ingress annotation <strong>'certmanager.k8s.io/cluster-issuer': 'letsencrypt-staging'</strong>: this results in a certificate issued by the fake (staging) issuer. So even if your certificate is referenced in your AKS ingress as a secret, it will be reported as issued by an untrusted issuer, because the certificate chain cannot be validated. Please find below the curl to check which issuer actually served the certificate: -</p>
<pre><code> ‘ # curl -vkI https://blog.my-domain.com/
...
* Server certificate:
* subject: CN=blog.my-domain.com
* start date: May 13 08:51:13 2019 GMT
* expire date: Aug 11 08:51:13 2019 GMT
* issuer: CN=Fake LE Intermediate X1
... ‘
</code></pre>
<p>Then, list the ingresses as follows: -</p>
<pre><code> ‘ # kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
blog-wordpress blog.my-domain.com 35.200.214.186 80, 443 8m48s ’
</code></pre>
<p>and the certificates too: -</p>
<pre><code> ‘ # kubectl get certificates
NAME READY SECRET AGE
wordpress.local-tls True wordpress.local-tls 9m ’
</code></pre>
<p>Then, switch the issuer of the certificate to the one that has issued the certificate originally as below: -</p>
<pre><code> ‘ # kubectl edit ing blog-wordpress ’
</code></pre>
<p>And update the annotation as below: -</p>
<pre><code> ‘ certmanager.k8s.io/cluster-issuer: letsencrypt-prod ’
</code></pre>
<p>Once the ingress manifest is updated, then the certificate manifest will automatically be updated. To verify it, open the manifest for <strong>‘wordpress.local-tls’</strong> certificate resource as below: -</p>
<pre><code> ‘ kubectl edit certificate wordpress.local-tls ’
</code></pre>
<p>The issuer will be seen as updated as below: -</p>
<pre><code>‘ kubectl edit certificate wordpress.local-tls ’
</code></pre>
<p>Thus, in this way, you will be able to import a certificate secret into AKS. For more details, please refer to the link below: -</p>
<p><a href="https://github.com/vmware-archive/kube-prod-runtime/issues/532" rel="nofollow noreferrer">https://github.com/vmware-archive/kube-prod-runtime/issues/532</a></p>
| Kartik Bhiwapurkar |
<pre><code>{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"ResourceDeploymentFailure","message":"The resource provision operation did not complete within the allowed timeout period."}]}.
</code></pre>
<p>I get this error message whenever I try to deploy my AKS Cluster, no matter if I deploy it through Terraform, The azure portal or Azure CLI.</p>
<p>The config I use is :</p>
<pre><code>az aks create --name Aks-moduleTf --max-count 1 --min-count 1 --network-plugin azure --vnet-subnet-id /subscriptions/<SUBID>/resourceGroups/MyResources/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/Mysubnet --node-count 1 --node-vm-size Standard_B2s --dns-service-ip X.X.X.X --resource-group MyResources --generate-ssh-keys --enable-cluster-autoscaler --service-cidr X.X.X.X/X
</code></pre>
<p>Thank you for your help.</p>
| Elies | <p>The error you are getting is because of an issue with the NSGs (ACLs) on the subnet, which are restricting the traffic flow to the Azure management network that the AKS creation needs.</p>
<p>These NSGs are associated with the subnet in the VNet that you are trying to create the AKS in.</p>
<blockquote>
<p>Apparently, when we created a new AKS(resource) with all the default
options by creating a new subnet with no NSGs, It worked.</p>
</blockquote>
<p><strong>Az CLI code</strong></p>
<pre><code>az aks create --resource-group v-rXXXXXtree --name Aks-moduleTf --max-count 1 --min-count 1 --network-plugin azure --vnet-subnet-id /subscriptions/b83cXXXXXXXXXXXXX074c23f/resourceGroups/v-rXXXXXXXXXe/providers/Microsoft.Network/virtualNetworks/Vnet1/subnets/Subnet1 --node-count 1 --node-vm-size Standard_B2s --dns-service-ip 10.2.0.10 --service-cidr 10.2.0.0/24 --generate-ssh-keys --enable-cluster-autoscaler
</code></pre>
<p><strong>Solution</strong>: If you are creating the AKS resource with an existing <code>vnet/subnet</code>, you need to disable (select None) the NSG of the subnet.</p>
<p><a href="https://i.stack.imgur.com/f1nLq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f1nLq.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/zLrp3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zLrp3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/dDrVt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dDrVt.png" alt="enter image description here" /></a></p>
<p><strong>Reference: You can check this <a href="https://social.msdn.microsoft.com/Forums/en-US/df3400a1-b31a-4159-a242-5f03000c04d2/appsrvenv-creation-times-out-quotthe-resource-provision-operation-did-not-complete-within-the?forum=windowsazurewebsitespreview" rel="nofollow noreferrer">link</a>; one of the users faced this issue, went to the Microsoft support team, and found that the issue was with the NSG.</strong></p>
| RahulKumarShaw |
<p>I am new to Kubernetes Operators. I have a general question about how to conceive of cleanup at the point of deletion.</p>
<p>Let's say the Controller is managing a resource which consists of a Deployment among other things. This Deployment writes to some external database. I'd like the items from the Database to be deleted when the resource is deleted (but not when its Pod is simply restarted - thus it can't happen as part of the application's shut down logic).</p>
<p>It seems like the database purging would have to happen in the Controller then? But this makes me a bit uneasy since it seems like this knowledge of how values are stored is the knowledge of the resource being managed, not the Controller. Is the only other good option to have the Controller send a message to the underlying application to perform its own cleanup?</p>
<p>What is the general way to handle this type of thing?</p>
| vmayer | <p>Have you heard about Finalizers and <a href="https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/#owner-references" rel="nofollow noreferrer">Owner References</a> in Kubernetes? Owner references describe how groups of objects are related. They are properties on resources that specify their relationship to one another, so entire trees of resources can be deleted.</p>
<p>To avoid further copy-pasting, I will just leave the link here: <a href="https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/#understanding-finalizers" rel="nofollow noreferrer">Understanding Finalizers</a></p>
| Bazhikov |
<p>I would like to create a kubernetes Cronjob that create jobs (according to its schedule) only if the current date is between a configurable start date and end date.</p>
<p>I can't find a way to do this with the basic cronjob resource. Is there a way to do this ? Ideally without resorting to overkill components (airflow ...) ?</p>
| Tewfik | <p>Actually, there is no way to configure an end date in the basic CronJob resource. You can keep it running on its schedule every day, but you have to stop it manually when the end date comes.</p>
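<p>For example, a manual way to stop it when the end date arrives is to suspend (or delete) the CronJob. A minimal sketch, assuming the CronJob is named <code>mycronjob</code> (a placeholder name):</p>
<pre><code># suspend the CronJob so that no new Jobs are created from its schedule
kubectl patch cronjob mycronjob -p '{"spec":{"suspend":true}}'

# or remove it entirely
kubectl delete cronjob mycronjob
</code></pre>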
| Pulak Kanti Bhowmick |
<p>I'm trying to use an <code>Ingress</code> and <code>ExternalName</code> Service in Kubernetes to route traffic to an external storage service (DigitalOcean Spaces) - but no matter what I try, I get some form of http error.</p>
<p>Things I've tried:</p>
<ul>
<li><a href="https://github.com/kubernetes/ingress-nginx/pull/629#issue-116679227" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/pull/629#issue-116679227</a> (Error: 404 Not Found, nginx)</li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1809</a> (Error: 502 Bad Gateway, nginx)</li>
<li>A fair bit of other tinkering which has been lost to time.</li>
</ul>
<p>How do I configure a K8s Ingress/Service to direct ingress requests from <code>example.com/static</code> to a storage bucket (e.g. <code><zone>.digitaloceanspaces.com/<bucket-name>/<path>/<object></code>)?</p>
| 1f928 | <p>It looks like some of the resources I was able to find were simply outdated. The following solution works as of Kubernetes v1.21.4.</p>
<p><strong>Important Notes</strong>:</p>
<ul>
<li>All <code>Ingress</code> annotations are <em>required</em>:
<ul>
<li><code>kubernetes.io/ingress.class: nginx</code> - necessary to engage Nginx ingress controller.</li>
<li><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS</code> - necessary to maintain HTTPS traffic to service (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">this replaces <code>/secure-backends</code> in older versions</a>).</li>
<li><code>nginx.ingress.kubernetes.io/upstream-vhost</code> - must match service <code>externalName</code>, removes hostname from request path (e.g. if this is missing and being tested through localhost, will likely encounter error: "No such bucket: localhost").</li>
<li><code>nginx.ingress.kubernetes.io/rewrite-target</code> - passes matched asset URL path through to service.</li>
</ul>
</li>
<li>The <code>path.service.port.number</code> in the Ingress definition must match whatever port the <code>ExternalName</code> service expects (443 in the case of our HTTPS traffic).</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: do-bucket-service
spec:
type: ExternalName
externalName: <zone>.digitaloceanspaces.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: do-bucket-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/rewrite-target: /<bucket>/$2
nginx.ingress.kubernetes.io/upstream-vhost: <zone>.digitaloceanspaces.com
spec:
rules:
- http:
paths:
- path: /path/to/static/assets(/|$)(.*)
pathType: Prefix
backend:
service:
name: do-bucket-service
port:
number: 443
</code></pre>
| 1f928 |
<p>I am using Azure cloud.
I want to push container logs to Azure loganalytics.But before doing it to my existing Azure running container, I thought of giving the below yaml a try from <a href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-log-analytics" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-instances/container-instances-log-analytics</a> :</p>
<pre><code>apiVersion: 2019-12-01
location: eastus
name: mycontainergroup001
properties:
containers:
- name: mycontainer001
properties:
environmentVariables: []
image: fluent/fluentd
ports: []
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
osType: Linux
restartPolicy: Always
diagnostics:
logAnalytics:
workspaceId: LOG_ANALYTICS_WORKSPACE_ID
workspaceKey: LOG_ANALYTICS_WORKSPACE_KEY
tags: null
type: Microsoft.ContainerInstance/containerGroups
</code></pre>
<p>I am trying to run the above yaml in AKS cluster by executing kubectl apply -f deploy-aci.yaml.
I get the below error:
error: unable to recognize ".\deploy-aci.yaml": no matches for kind "Microsoft.ContainerInstance/containerGroups" in version "2019-12-01"</p>
<p>API Resources:</p>
<pre><code> kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v1 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
configs config config.gatekeeper.sh/v1alpha1 true Config
k8sazureallowedcapabilities constraints.gatekeeper.sh/v1beta1 false K8sAzureAllowedCapabilities
k8sazureallowedusersgroups                     constraints.gatekeeper.sh/v1beta1      false        K8sAzureAllowedUsersGroups
k8sazureblockautomounttoken                    constraints.gatekeeper.sh/v1beta1      false        K8sAzureBlockAutomountToken
k8sazureblockdefault constraints.gatekeeper.sh/v1beta1 false K8sAzureBlockDefault
k8sazureblockhostnamespace                     constraints.gatekeeper.sh/v1beta1      false        K8sAzureBlockHostNamespace
k8sazurecontainerallowedimages                 constraints.gatekeeper.sh/v1beta1      false        K8sAzureContainerAllowedImages
k8sazurecontainerlimits constraints.gatekeeper.sh/v1beta1 false K8sAzureContainerLimits
k8sazurecontainernoprivilege constraints.gatekeeper.sh/v1beta1 false K8sAzureContainerNoPrivilege
k8sazurecontainernoprivilegeescalation constraints.gatekeeper.sh/v1beta1 false K8sAzureContainerNoPrivilegeEscalation
k8sazuredisallowedcapabilities constraints.gatekeeper.sh/v1beta1 false K8sAzureDisallowedCapabilities
k8sazureenforceapparmor constraints.gatekeeper.sh/v1beta1 false K8sAzureEnforceAppArmor
k8sazurehostfilesystem constraints.gatekeeper.sh/v1beta1 false K8sAzureHostFilesystem
k8sazurehostnetworkingports constraints.gatekeeper.sh/v1beta1 false K8sAzureHostNetworkingPorts
k8sazureingresshttpsonly constraints.gatekeeper.sh/v1beta1 false K8sAzureIngressHttpsOnly
k8sazurereadonlyrootfilesystem constraints.gatekeeper.sh/v1beta1 false K8sAzureReadOnlyRootFilesystem
k8sazureserviceallowedports constraints.gatekeeper.sh/v1beta1 false K8sAzureServiceAllowedPorts
leases coordination.k8s.io/v1 true Lease
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
ingresses ing extensions/v1beta1 true Ingress
flowschemas flowcontrol.apiserver.k8s.io/v1beta1 false FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta1   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
secretproviderclasses secrets-store.csi.x-k8s.io/v1 true SecretProviderClass
secretproviderclasspodstatuses secrets-store.csi.x-k8s.io/v1 true SecretProviderClassPodStatus
volumesnapshotclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots snapshot.storage.k8s.io/v1 true VolumeSnapshot
constraintpodstatuses status.gatekeeper.sh/v1beta1 true ConstraintPodStatus
constrainttemplatepodstatuses status.gatekeeper.sh/v1beta1 true ConstraintTemplatePodStatus
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
constrainttemplates constraints templates.gatekeeper.sh/v1 false ConstraintTemplate
</code></pre>
<p>Out of the above apiresources , can I use any of them which would enables the execution of deploy-aci.yaml.</p>
<p>My kubernetes version is 1.21.7.</p>
| Unixquest945 | <p>This YAML won't run in an AKS cluster by executing <code>kubectl apply -f deploy-aci.yaml</code>. You would have to make changes in your YAML code to make it run on AKS.</p>
<p>For your information, this YAML code is written specifically to create an Azure Container Instance and store the container's logs in a Log Analytics workspace.</p>
<p>Also, for AKS a Log Analytics workspace is created by default while creating the AKS cluster, so all the logs of the pods/containers are stored in that workspace and you don't need to create another workspace to store the logs.</p>
<p><a href="https://i.stack.imgur.com/ZlYqs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZlYqs.png" alt="enter image description here" /></a></p>
<p>So, instead of running the <code>kubectl apply -f deploy-aci.yaml</code> command, please run <code>az container create --resource-group myResourceGroup --name mycontainergroup001 --file deploy-aci.yaml</code> as given in the MS document.</p>
<p>I also ran the same command and it ran successfully and created a container instance.</p>
<p><a href="https://i.stack.imgur.com/UUuAK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUuAK.png" alt="enter image description here" /></a></p>
| RahulKumarShaw |
<p>How to set basic auth to Kubernetes' readinessProbe correctly?</p>
<p>If set this config for Kubernetes' <code>readinessProbe</code> in deployment kind.</p>
<pre><code>readinessProbe:
httpGet:
path: /healthcheck
port: 8080
httpHeaders:
- name: Authorization
value: Basic <real base64 encoded data>
</code></pre>
<p>After deploying it to GKE, GCP's health check can't pass and can't reach the application behind basic authentication.</p>
<p>But from <a href="https://stackoverflow.com/a/43948832/15279606">here</a>, it seems this syntax should work. Why doesn't it pass?</p>
<p>The server side returns a JSON response at the /healthcheck endpoint. Is it also necessary to set <code>Accept</code> or <code>Content-Type</code> in the <code>httpHeaders</code>?</p>
<p>And, is it good to set this health check to livenessProbe or readinessProbe?</p>
| realworld | <p>According to kubernetes doc:</p>
<blockquote>
<p>If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's restartPolicy. If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.</p>
</blockquote>
<p>Ref: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-liveness-probe" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-liveness-probe</a></p>
<blockquote>
<p>If you'd like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding. If your container needs to work on loading large data, configuration files, or migrations during startup, specify a readiness probe.</p>
</blockquote>
<p>Ref: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-readiness-probe" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-readiness-probe</a></p>
<p>So, you can use the health check according to your need. But in the Kubernetes docs, they give an example of a health check as a liveness probe.</p>
<p>Ref: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request</a></p>
<p>And it is best practice to set Content-Type when the request is sent by something other than a browser or another typical client. I believe adding Content-Type: application/json will solve the issue if everything else is right on the server side.</p>
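<p>As a minimal sketch (not an authoritative fix), the probe with that extra header added would look like this; the base64 value is still a placeholder:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthcheck
    port: 8080
    httpHeaders:
    - name: Authorization
      value: Basic <real base64 encoded data>
    # extra header suggested above; harmless for a GET request
    - name: Content-Type
      value: application/json
</code></pre>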
| Pulak Kanti Bhowmick |
<p>I'm running multiple containers in a pod. I have a persistence volume and mounting the same directories to containers.</p>
<p>My requirement is:</p>
<p>mount /opt/app/logs/app.log to container A where application writes data to <strong>app.log</strong></p>
<p>mount /opt/app/logs/app.log to container B to read data back from <strong>app.log</strong></p>
<pre><code>- container-A
image: nginx
volumeMounts:
- mountPath: /opt/app/logs/ => container A is writing data here to **app.log** file
name: data
- container-B
image: busybox
volumeMounts:
- mountPath: /opt/app/logs/ => container B read data from **app.log**
name: data
</code></pre>
<p>The issue I'm facing is - when I mount the same directory <strong>/opt/app/logs/</strong> to container-B, I'm not seeing the <strong>app.log</strong> file.</p>
<p>Can someone help me with this, please? This can be achievable but I'm not sure what I'm missing here.</p>
| Jwary | <p>According to your requirements, you need something like below:</p>
<pre><code>- container-A
image: nginx
volumeMounts:
- mountPath: /opt/app/logs
name: data
- container-B
image: busybox
volumeMounts:
- mountPath: /opt/app/logs
name: data
</code></pre>
<p>Your application running in container-A will create or write files in the given path (/opt/app/logs), say an app.log file. Then from container-B you'll find the app.log file in the same path (/opt/app/logs). You can use any path here.</p>
<p>In your given spec you actually tried to mount a directory into a file (app.log). I think that's what is causing the issue.</p>
<p><b>Update-1:</b>
Here is a full YAML file from a working example. You can try it yourself to see how things work.</p>
<ol>
<li><p>kubectl exec -ti test-pd -c test-container sh</p>
</li>
<li><p>go to /test-path1</p>
</li>
<li><p>create some file using touch command. say "touch a.txt"</p>
</li>
<li><p>exit from test-container</p>
</li>
<li><p>kubectl exec -ti test-pd -c test sh</p>
</li>
<li><p>go to /test-path2</p>
</li>
<li><p>you will find a.txt file here.</p>
</li>
</ol>
<p>pvc.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-pv-claim
spec:
storageClassName:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: nginx
name: test-container
volumeMounts:
- mountPath: /test-path1
name: test-volume
- image: pkbhowmick/go-rest-api:2.0.1 #my-rest-api-server
name: test
volumeMounts:
- mountPath: /test-path2
name: test-volume
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: test-pv-claim
</code></pre>
| Pulak Kanti Bhowmick |
<p>I'm running my deployment on OpenShift, and found that I need to have a GID of 2121 to have write access.</p>
<p>I still don't seem to have write access when I try this:</p>
<pre><code>security:
podSecurityContext:
fsGroup: 2121
</code></pre>
<p>This gives me a <code>2121 is not an allowed group</code> error.</p>
<p>However, this does seem to be working for me:</p>
<pre><code>security:
podSecurityContext:
fsGroup: 100010000 # original fsGroup value
supplementalGroups: [2121]
</code></pre>
<p>I am wondering what the difference of <code>fsGroup</code> and <code>supplementalGroups</code> is.</p>
<p>I've read the documentation <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems" rel="nofollow noreferrer">here</a> and have also looked at <code>kubectl explain deployment.spec.template.spec.securityContext</code>, but I still can't quite understand the difference.</p>
<p>Could I get some clarification on what are the different use cases?</p>
| Kun Hwi Ko | <p><code>FSGroup</code> is used to set the group that owns the pod volumes. This group will be used by Kubernetes to change the permissions of all files in volumes, when volumes are mounted by a pod.</p>
<blockquote>
<ol>
<li><p>The owning GID will be the FSGroup</p>
</li>
<li><p>The setgid bit is set (new files created in the volume will be owned by FSGroup)</p>
</li>
<li><p>The permission bits are OR'd with rw-rw----</p>
<p>If unset, the Kubelet will not modify the ownership and permissions of
any volume.</p>
</li>
</ol>
</blockquote>
<p>Some caveats when using <code>FSGroup</code>:</p>
<ul>
<li><p>Changing the ownership of a volume for slow and/or large file systems
can cause delays in pod startup.</p>
</li>
<li><p>This can harm other processes using the same volume if their
processes do not have permission to access the new GID.</p>
</li>
</ul>
<p><code>SupplementalGroups</code> - controls which supplemental group ID can be assigned to processes in a pod.</p>
<blockquote>
<p>A list of groups applied to the first process run in each container,
in addition to the container's primary GID. If unspecified, no groups
will be added to any container.</p>
</blockquote>
<p>Additionally from the <a href="https://docs.openshift.com/container-platform/4.9/storage/persistent_storage/persistent-storage-nfs.html#storage-persistent-storage-nfs-group-ids_persistent-storage-nfs" rel="noreferrer">OpenShift documentation</a>:</p>
<blockquote>
<p>The recommended way to handle NFS access, assuming it is not an option
to change permissions on the NFS export, is to use supplemental
groups. Supplemental groups in OpenShift Container Platform are used
for shared storage, of which NFS is an example. In contrast, block
storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup
value in the securityContext of the pod.</p>
</blockquote>
| Andrew Skorkin |
<p>I followed the example at <a href="https://github.com/SeldonIO/seldon-core/tree/master/examples/kubeflow" rel="nofollow noreferrer">https://github.com/SeldonIO/seldon-core/tree/master/examples/kubeflow</a>.</p>
<pre><code>1.kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80
2.kubectl create namespace kubeflow-user-example-com
3.kubectl config set-context $(kubectl config current-context) --namespace=kubeflow-user-example-com
4.s2i build . seldonio/seldon-core-s2i-python37:1.2.3 seldon-sentiment:0.1 --env MODEL_NAME=Transformer --env API_TYPE=REST --env SERVICE_TYPE=MODEL --env PERSISTENCE=0
</code></pre>
<p>s2i builds following class:</p>
<pre><code>class Transformer(object):
def __init__(self):
# with open('/mnt/lr.model', 'rb') as model_file:
# self._lr_model = dill.load(model_file)
def predict(self, X, feature_names):
# logging.warning(X)
# prediction = self._lr_model.predict_proba(X)
# logging.warning(prediction)
return X
</code></pre>
<p>The build is successfull:</p>
<pre><code>root@ubuntu-16gb-nbg1-3:/usr/src# s2i build . seldonio/seldon-core-s2i-python37:1.2.3 seldon-sentiment:0.1
---> Installing application source...
---> Installing dependencies ...
Looking in links: /whl
Collecting dill==0.3.2 (from -r requirements.txt (line 1))
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Downloading https://files.pythonhosted.org/packages/e2/96/518a8ea959a734b70d2e95fef98bcbfdc7adad1c1e5f5dd9148c835205a5/dill-0.3.2.zip (177kB)
Requirement already satisfied: click==7.1.2 in /opt/conda/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (7.1.2)
Requirement already satisfied: numpy==1.19.1 in /opt/conda/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (1.19.1)
Collecting scikit-learn==0.23.2 (from -r requirements.txt (line 4))
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Downloading https://files.pythonhosted.org/packages/f4/cb/64623369f348e9bfb29ff898a57ac7c91ed4921f228e9726546614d63ccb/scikit_learn-0.23.2-cp37-cp37m-manylinux1_x86_64.whl (6.8MB)
Collecting joblib>=0.11 (from scikit-learn==0.23.2->-r requirements.txt (line 4))
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Downloading https://files.pythonhosted.org/packages/91/d4/3b4c8e5a30604df4c7518c562d4bf0502f2fa29221459226e140cf846512/joblib-1.2.0-py3-none-any.whl (297kB)
Collecting scipy>=0.19.1 (from scikit-learn==0.23.2->-r requirements.txt (line 4))
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Downloading https://files.pythonhosted.org/packages/58/4f/11f34cfc57ead25752a7992b069c36f5d18421958ebd6466ecd849aeaf86/scipy-1.7.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (38.1MB)
Collecting threadpoolctl>=2.0.0 (from scikit-learn==0.23.2->-r requirements.txt (line 4))
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Downloading https://files.pythonhosted.org/packages/61/cf/6e354304bcb9c6413c4e02a747b600061c21d38ba51e7e544ac7bc66aecc/threadpoolctl-3.1.0-py3-none-any.whl
Building wheels for collected packages: dill
Building wheel for dill (setup.py): started
Building wheel for dill (setup.py): finished with status 'done'
Created wheel for dill: filename=dill-0.3.2-cp37-none-any.whl size=78913 sha256=50910efb2cba1272a015391f4aff7604a5c4a48c855bfe5f597a43a54e44ab6d
Stored in directory: /root/.cache/pip/wheels/27/4b/a2/34ccdcc2f158742cfe9650675560dea85f78c3f4628f7daad0
Successfully built dill
Installing collected packages: dill, joblib, scipy, threadpoolctl, scikit-learn
Successfully installed dill-0.3.2 joblib-1.2.0 scikit-learn-0.23.2 scipy-1.7.3 threadpoolctl-3.1.0
WARNING: Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme.
Collecting pip-licenses
Downloading https://files.pythonhosted.org/packages/61/f5/3038406547e36376c3a17a6774f61c2e9ccb65777eabf0a20708e4dacd3d/pip_licenses-3.5.4-py3-none-any.whl
Collecting PTable (from pip-licenses)
Downloading https://files.pythonhosted.org/packages/ab/b3/b54301811173ca94119eb474634f120a49cd370f257d1aae5a4abaf12729/PTable-0.9.2.tar.gz
Building wheels for collected packages: PTable
Building wheel for PTable (setup.py): started
Building wheel for PTable (setup.py): finished with status 'done'
Created wheel for PTable: filename=PTable-0.9.2-cp37-none-any.whl size=22906 sha256=7310d6e2974596f5fe2f72c3553b5386faab7eac59448d6c3e86b5d0bd3a775f
Stored in directory: /root/.cache/pip/wheels/22/cc/2e/55980bfe86393df3e9896146a01f6802978d09d7ebcba5ea56
Successfully built PTable
Installing collected packages: PTable, pip-licenses
Successfully installed PTable-0.9.2 pip-licenses-3.5.4
created path: ./licenses/license_info.csv
created path: ./licenses/license.txt
Build completed successfully
</code></pre>
<p>After successfull building i proceed with step 5.</p>
<pre><code>5.kubectl create -f seldon-sentiment-test.yaml
</code></pre>
<p>The yaml looks like that</p>
<pre><code> apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
labels:
app: seldon
name: seldon-sentiment-test
namespace: kubeflow-user-example-com
spec:
annotations:
project_name: NLP Pipeline
deployment_version: v1
name: seldon-sentiment-test
predictors:
- componentSpecs:
- spec:
containers:
- image: seldon-sentiment:0.1
imagePullPolicy: IfNotPresent
name: sentiment
resources:
requests:
memory: 1Mi
terminationGracePeriodSeconds: 20
graph:
children: []
endpoint:
type: REST
name: sentiment
type: MODEL
name: sentiment
replicas: 1
annotations:
predictor_version: v1
</code></pre>
<p>Than i checked the status and it stucks in creating with</p>
<pre><code>kubectl get sdep -n kubeflow-user-example-com seldon-sentiment-test -o json | jq .status
</code></pre>
<p>Output:</p>
<pre><code>{
"address": {
"url": "http://seldon-sentiment-test-sentiment.kubeflow-user-example-com.svc.cluster.local:8000/api/v1.0/predictions"
},
"deploymentStatus": {
"seldon-sentiment-test-sentiment-0-sentiment": {
"replicas": 1
}
},
"replicas": 1,
"serviceStatus": {
"seldon-sentiment-test-sentiment-sentiment": {
"grpcEndpoint": "seldon-sentiment-test-sentiment-sentiment.kubeflow-user-example-com:9500",
"httpEndpoint": "seldon-sentiment-test-sentiment-sentiment.kubeflow-user-example-com:9000",
"svcName": "seldon-sentiment-test-sentiment-sentiment"
}
},
"state": "Creating"
}
</code></pre>
<p>The pod stucks also in pending:</p>
<pre><code>kubeflow-user-example-com seldon-sentiment-test-sentiment-0-sentiment-8946df95-qq688 0/3 Pending 0 3m9s
</code></pre>
| GigliOneiric | <p>I pushed the image to a local Docker registry. The pods are starting now.</p>
<pre><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag seldon-sentiment:0.1 localhost:5000/seldon-sentiment:0.1
docker push localhost:5000/seldon-sentiment:0.1
</code></pre>
| GigliOneiric |
<p>I have installed kube-prometheus-stack as a <strong>dependency</strong> in my helm chart on a local docker for Mac Kubernetes cluster v1.19.7. I can view the default prometheus targets provided by the kube-prometheus-stack.</p>
<p>I have a python flask service that provides metrics which I can view successfully in the kubernetes cluster using <code>kubectl port forward</code>.</p>
<p>However, I am unable to get these metrics displayed on the prometheus targets web interface.</p>
<p>The <a href="https://hub.kubeapps.com/charts/prometheus-community/kube-prometheus-stack#!" rel="noreferrer">kube-prometheus-stack</a> documentation states that Prometheus.io/scrape does not support annotation-based discovery of services. Instead the the reader is referred to the concept of <code>ServiceMonitors</code> and <code>PodMonitors</code>.</p>
<p>So, I have configured my service as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: Service
apiVersion: v1
metadata:
name: flask-api-service
labels:
app: flask-api-service
spec:
ports:
- protocol: TCP
port: 4444
targetPort: 4444
name: web
selector:
app: flask-api-service
tier: backend
type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: flask-api-service
spec:
selector:
matchLabels:
app: flask-api-service
endpoints:
- port: web
</code></pre>
<p>Subsequently, I have setup a port forward to view the metrics:</p>
<pre class="lang-sh prettyprint-override"><code>Kubectl port-forward prometheus-flaskapi-kube-prometheus-s-prometheus-0 9090
</code></pre>
<p>Then visited prometheus web page at <code>http://localhost:9090</code></p>
<p>When I select the Status->Targets menu option, my flask-api-service is not displayed.</p>
<p>I know that the service is up and running and I have checked that I can view the metrics for a pod for my flask-api-service using <code>kubectl port-forward <pod name> 4444</code>.</p>
<p>Looking at a similar <a href="https://stackoverflow.com/a/65648944">issue</a> it looks as though there is a configuration value <code>serviceMonitorSelectorNilUsesHelmValues</code> that defaults to true. Setting this to false makes the operator look outside it’s release labels in helm??</p>
<p>I tried adding this to the <code>values.yml</code> of my helm chart in addition to the <code>extraScrapeConfigs</code> configuration value. However, the <em>flask-api-service</em> still does not appear as an additional target on the prometheus web page when clicking the Status->Targets menu option.</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheusSpec:
serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
- job_name: 'flaskapi'
static_configs:
- targets: ['flask-api-service:4444']
</code></pre>
<p>How do I get my <em>flask-api-service</em> recognised on the prometheus targets page at <code>http://localhost:9090</code>?</p>
<p>I am installing Kube-Prometheus-Stack as a dependency via my helm chart with default values as shown below:</p>
<p><strong>Chart.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
- name: kube-prometheus-stack
version: "14.4.0"
repository: "https://prometheus-community.github.io/helm-charts"
- name: ingress-nginx
version: "3.25.0"
repository: "https://kubernetes.github.io/ingress-nginx"
- name: redis
version: "12.9.0"
repository: "https://charts.bitnami.com/bitnami"
</code></pre>
<p><strong>Values.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local
redis_port: "6379"
prometheus:
prometheusSpec:
serviceMonitorSelectorNilUsesHelmValues: false
extraScrapeConfigs: |
- job_name: 'flaskapi'
static_configs:
- targets: ['flask-api-service:4444']
</code></pre>
| anon_dcs3spp | <p>The Prometheus custom resource definition has a field called <code>serviceMonitorSelector</code>. Prometheus only picks up ServiceMonitors matched by that selector. In the case of a Helm deployment, the expected label value is your release name.</p>
<pre><code>release: {{ $.Release.Name | quote }}
</code></pre>
<p>So adding this label to your ServiceMonitor should solve the issue. Then your ServiceMonitor manifest file will be:</p>
<pre><code>
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: flask-api-service
labels:
    release: <your_helm_release_name>
spec:
selector:
matchLabels:
app: flask-api-service
endpoints:
- port: web
</code></pre>
| Pulak Kanti Bhowmick |
<p>We have an ingress resource hostname say <code>xyz.int.com</code> setup on two k8s cluster A and B. The ingress controller used is nginx. On DNS we have setup <code>xyz.int.com</code> to point to the loadbalancer IPs in respective clusters.</p>
<p>For some strange reason, in one cluster I'm getting the below warning and not getting any status code for request if its a success or not:</p>
<pre><code>2022/01/17 17:58:00 [warn] 13239#13239: *94097411 a client request body is buffered to a temporary file /tmp/client-body/0001505726, client: 10.9.8.0, server: xyz.int.com, request: "POST /api/vss0/an/log/83f740daa89b3d3638b37a6a06de49a59f1f5126129a9a6?clientTimeInMs=1642442284833&sdkV=811409&gpid=a15e3b7c2-e1366-4327d-83a93-f7619&devNet=WIFI&locale=en-IN&region=IN HTTP/1.1", host: "xyz.int.com"
</code></pre>
<p>Whereas the same endpoint in another cluster works fine, and there is no explicit difference in both the nginx controller or ingress resource.</p>
<p>What can be the issue? Kindly assist.</p>
| Sanjay M. P. | <p>Summarizing the comments:</p>
<p>This warning message means that the size of the uploaded file was larger than the in-memory buffer reserved for uploads.</p>
<p>Please refer to <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size" rel="nofollow noreferrer">this description</a> explaining how <code>client_body_buffer_size</code> works:</p>
<blockquote>
<p>Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms.</p>
</blockquote>
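<p>If you want to keep smaller request bodies in memory instead of a temporary file, a minimal sketch (assuming the NGINX ingress controller; the size value is just an example) is to raise the buffer either per Ingress with an annotation or globally in the controller ConfigMap:</p>
<pre><code># per-Ingress annotation
metadata:
  annotations:
    nginx.ingress.kubernetes.io/client-body-buffer-size: "1m"
---
# or globally in the ingress-nginx controller ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  client-body-buffer-size: "1m"
</code></pre>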
| Bazhikov |
<p>The kubernetes official documentation for Service objects has some annotations regarding connection-draining, timeout, additional-tags etc. but these are limited to AWS.</p>
<p>I was hoping to find out the same for K8S deployment on Azure cloud.</p>
<p>For example,</p>
<pre><code> annotations:
service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "300"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "600"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "6"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
</code></pre>
<p>From the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#connection-draining-on-aws" rel="nofollow noreferrer">official documentation listed here</a></p>
<p>If not such annotations exist, can someone help me achieve the same on Azure cloud. Thanks in advance!</p>
| Gauraang Khurana | <p>You can find:</p>
<ul>
<li>list of annotations supported for Azure Kubernetes services with type
LoadBalancer:</li>
</ul>
<p><a href="https://kubernetes-sigs.github.io/cloud-provider-azure/topics/loadbalancer/#loadbalancer-annotations" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/cloud-provider-azure/topics/loadbalancer/#loadbalancer-annotations</a></p>
<ul>
<li>list of Azure App Gateway Ingress controller kubernetes annotations:</li>
</ul>
<p><a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/" rel="nofollow noreferrer">https://azure.github.io/application-gateway-kubernetes-ingress/annotations/</a></p>
| kavyaS |
<p>I have a k8s environment, where I am running 3 masters and 7 worker nodes. Daily my pods are in evicted states due to disk pressure.</p>
<p>I am getting the below error on my worker node.</p>
<pre><code>Message: The node was low on resource: ephemeral-storage.
</code></pre>
<pre><code>Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
</code></pre>
<p>But my worker node has enough resources to schedule pods.</p>
| Anvesh Muppeda | <p>Having analysed the comments, it looks like pods go into the Evicted state when they're using more resources than available, depending on the particular pod limits. A solution in that case might be manually deleting the evicted pods, since they're not using resources at that point anyway. To read more about Node-pressure Eviction, one can visit <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">the official documentation</a>.</p>
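<p>For example, a minimal sketch for finding and cleaning up the evicted (Failed) pods in bulk:</p>
<pre><code># evicted pods show up with status.phase=Failed
kubectl get pods --all-namespaces --field-selector=status.phase=Failed

# delete them in a given namespace
kubectl delete pods --field-selector=status.phase=Failed -n <namespace>
</code></pre>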
| Jakub Siemaszko |
<p>I have created a pod on Kubernetes and mounted a local volume but when I try to execute the ls command on locally mounted volume, I get a permission denied error. If I disable SELINUX then everything works fine. I am unable to make out how do I make it work with SELinux enabled.</p>
<h3>Following is the output of permission denied:</h3>
<pre><code>kubectl apply -f testpod.yaml
root@olcne-operator-ol8 opc]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/testpod 1/1 Running 0 5s
# kubectl exec -i -t testpod /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@testpod /]# cd /u01
[root@testpod u01]# ls
ls: cannot open directory '.': Permission denied
[root@testpod u01]#
</code></pre>
<h3>Following is the testpod.yaml</h3>
<pre><code>cat testpod.yaml
kind: Pod
apiVersion: v1
metadata:
name: testpod
labels:
name: testpod
spec:
hostname: testpod
restartPolicy: Never
volumes:
- name: swvol
hostPath:
path: /u01
containers:
- name: testpod
image: oraclelinux:8
imagePullPolicy: Always
securityContext:
privileged: false
command: [/usr/sbin/init]
volumeMounts:
- mountPath: "/u01"
name: swvol
</code></pre>
<h3>Selinux Configuration on worker node:</h3>
<pre><code># sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
---
# semanage fcontext -l | grep kub | grep container_file
/var/lib/kubelet/pods(/.*)? all files system_u:object_r:container_file_t:s0
/var/lib/kubernetes/pods(/.*)? all files system_u:object_r:container_file_t:s0
</code></pre>
<h3>Machine OS Details</h3>
<pre><code> rpm -qa | grep kube
kubectl-1.20.6-2.el8.x86_64
kubernetes-cni-0.8.1-1.el8.x86_64
kubeadm-1.20.6-2.el8.x86_64
kubelet-1.20.6-2.el8.x86_64
kubernetes-cni-plugins-0.9.1-1.el8.x86_64
----
cat /etc/oracle-release
Oracle Linux Server release 8.4
---
uname -r
5.4.17-2102.203.6.el8uek.x86_64
</code></pre>
| drifter | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>SELinux labels can be assigned with <code>seLinuxOptions</code>:</p>
<pre><code>apiVersion: v1
metadata:
name: testpod
labels:
name: testpod
spec:
hostname: testpod
restartPolicy: Never
volumes:
- name: swvol
hostPath:
path: /u01
containers:
- name: testpod
image: oraclelinux:8
imagePullPolicy: Always
command: [/usr/sbin/init]
volumeMounts:
- mountPath: "/u01"
name: swvol
securityContext:
seLinuxOptions:
level: "s0:c123,c456"
</code></pre>
<p>From the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#discussion" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p><code>seLinuxOptions</code>: Volumes that support SELinux labeling are relabeled
to be accessible by the label specified under <code>seLinuxOptions</code>.
Usually you only need to set the <code>level</code> section. This sets the
Multi-Category Security (MCS) label given to all Containers in the Pod
as well as the Volumes.</p>
</blockquote>
<p>Based on the information from the <a href="https://stackoverflow.com/questions/51000791/how-to-mount-hostpath-volume-in-kubernetes-with-selinux">original post on stackoverflow</a>:</p>
<blockquote>
<p><strong>You can only specify the level portion of an SELinux label</strong> when relabeling a path destination pointed to by a <code>hostPath</code> volume. This
is automatically done so by the <code>seLinuxOptions.level</code> attribute
specified in your <code>securityContext</code>.</p>
<p>However attributes such as <code>seLinuxOptions.type</code> currently have no
effect on volume relabeling. As of this writing, this is still an
<a href="https://github.com/projectatomic/adb-atomic-developer-bundle/issues/117" rel="nofollow noreferrer">open issue within
Kubernetes</a></p>
</blockquote>
| Andrew Skorkin |
<p>I'm new to Kubernetes and Helm Charts and was looking to find an answer to my question here.</p>
<p>When I run <code>kubectl get all</code> and look under services, I get something like:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/leader LoadBalancer 10.3.245.137 104.198.205.71 80:30125/TCP, 8888:30927/TCP 54s
</code></pre>
<p>My services are configured in my Helm Chart as:</p>
<pre><code>ports:
name: api
port: 80
targetPort: 8888
name: api2
port: 8888
targetPort: 8888
</code></pre>
<p>When I run <code>kubectl describe svc leader</code>, I get:</p>
<pre><code>Type: LoadBalancer
Port: api 80/TCP
TargetPort: 8888/TCP
NodePort: api 30125/TCP
EndPoints: <some IP>:8888
Port: api 8888/TCP
TargetPort: 8888/TCP
NodePort: api 30927/TCP
EndPoints: <some IP>:8888
</code></pre>
<p>I always thought that <code>NodePort</code> is the port that exposes my cluster externally, and Port would be the port exposed on the service internally which routes to <code>TargetPorts</code> on the Pods. I got this understanding from <a href="https://stackoverflow.com/questions/49981601/difference-between-targetport-and-port-in-kubernetes-service-definition">here</a>.</p>
<p>However, it seems I can open up <code>104.198.205.71:80</code> or <code>104.198.205.71:8888</code>, but I can't for <code>104.198.205.71:30125</code> or <code>104.198.205.71:30927</code>. My expectation is I should be able to access 104.198.205.71 through the <code>NodePorts</code>, and not through the Ports. Is my understanding incorrect?</p>
| Kun Hwi Ko | <p>To read more about accessing your resources from outside of your cluster using Publishing Services (NodePort is also mentioned there), you can refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">the official documentation</a>.</p>
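<p>As a quick check, note that NodePorts are opened on the cluster nodes' own IP addresses rather than necessarily on the load balancer's external IP, so a minimal sketch for testing them (assuming the node IPs are reachable from your machine) would be:</p>
<pre><code># find the nodes' internal/external IPs
kubectl get nodes -o wide

# then try the NodePort against a node IP
curl http://<node-ip>:30125
</code></pre>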
| Jakub Siemaszko |
<p>I am using prometheus operator to monitor my Kubernetes cluster. I want to change the scrape_interval for some targets dynamically (increase and decrease it when needed at runtime).</p>
<p>Any suggestions to do that?</p>
<p>Thanks</p>
| khaoulaZ | <pre><code>kubectl get secret -n monitoring prometheus-k8s -o json | jq -r '.data."prometheus.yaml.gz"' | base64 -d | gzip -d
</code></pre>
<p>and then</p>
<pre><code>kubectl edit secret -n monitoring prometheus-k8s
</code></pre>
| CloudNativeLab |
<p>I am trying to deploy a service using helm. The cluster is Azure AKS & I have one DNS zone associated with a cluster that can be used for ingress.</p>
<p>But the issue is that the DNS zone is in k8s secret & I want to use it in ingress as host. like below</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
tls:
- hosts:
- {{ .Chart.Name }}.{{ .Values.tls.host }}
rules:
- host: {{ .Chart.Name }}.{{ .Values.tls.host }}
http:
paths:
-
pathType: Prefix
backend:
service:
name: {{ .Chart.Name }}
port:
number: 80
path: "/"
</code></pre>
<p>I want <code>.Values.tls.host</code> value from secret. Currently, it is hardcoded in <code>values.yaml</code> file.</p>
| PSKP | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>For the current version of Helm (3.8.0), it seems not possible to use values right from Secret <strong>with standard approach</strong>.
Based on the information from <a href="https://helm.sh/" rel="nofollow noreferrer">Helm website</a>:</p>
<blockquote>
<p>A template directive is enclosed in <code>{{</code> and <code>}}</code> blocks.
The values that are passed into a template can be thought of as <em>namespaced
objects</em>, where a dot (<code>.</code>) separates each namespaced element.</p>
</blockquote>
<p><a href="https://helm.sh/docs/chart_template_guide/builtin_objects/" rel="nofollow noreferrer">Objects are passed into a template from the template engine</a> and can be:</p>
<ul>
<li>Release</li>
<li>Values</li>
<li>Chart</li>
<li>Files</li>
<li>Capabilities</li>
<li>Template</li>
</ul>
<p><a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">Contents for Values objects can come from multiple sources</a>:</p>
<blockquote>
<ul>
<li>The <code>values.yaml</code> file in the chart</li>
<li>If this is a subchart, the <code>values.yaml</code> file of a parent chart</li>
<li>A values file if passed into <code>helm install</code> or <code>helm upgrade</code> with the <code>-f</code> flag (<code>helm install -f myvals.yaml ./mychart</code>)</li>
<li>Individual parameters passed with <code>--set</code> (such as <code>helm install --set foo=bar ./mychart</code>)</li>
</ul>
</blockquote>
| Andrew Skorkin |
<p>I have a Service configured to be accessible via HTTP.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: ClusterIP
</code></pre>
<p>And an Ngynx Ingress configured to make that internal service accessible from a specific secure <code>subdomain.domain</code></p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: myservice-ingress
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myservice-ingress
annotations:
certmanager.k8s.io/issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTP
spec:
tls:
- hosts:
- myservice.mydomain.com
secretName: myservice-ingress-secret-tls
rules:
- host: myservice.mydomain.com
http:
paths:
- path: /
backend:
serviceName: myservice
servicePort: 80
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>So when I reach <code>https://myservice.mydomain.com</code> I can access to my service through HTTPS.
Is it safe enough or should I configure my service and pods to communicate only through <code>HTTPS</code>?</p>
| kaizokun | <p>It's expected behaviour since you've set <code>TLS</code> in your Ingress.</p>
<blockquote>
<p>Note that by default the controller <strong>redirects</strong> (308) to <strong>HTTPS</strong> if TLS <strong>is enabled</strong> for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: "false" in the NGINX <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-redirect" rel="nofollow noreferrer">ConfigMap</a>.</p>
</blockquote>
<p>To configure this feature for specific ingress resources, you can use the <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> annotation in the particular resource.</p>
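<p>For example, a minimal sketch of placing that annotation on your Ingress:</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
</code></pre>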
<p>About your question "Is it safe enough?" - it's an opinion-based question, so I can only say that in my opinion HTTPS is better than HTTP, but it's just my opinion. You can always look up the differences between HTTP and HTTPS yourself.</p>
| Bazhikov |
<p>Is there any way to inject a port value for a service (and other places) from a <code>ConfigMap</code>? Tried this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: service
namespace: namespace
spec:
ports:
- port: 80
targetPort:
valueFrom:
configMapKeyRef:
name: config
key: PORT
protocol: TCP
selector:
app: service
</code></pre>
<p>But got an error</p>
<pre><code>ValidationError(Service.spec.ports[0].targetPort): invalid type for io.k8s.apimachinery.pkg.util.intstr.IntOrString: got "map", expected "string"
</code></pre>
| Terion | <p>OK, so I've checked it more in-depth and it looks like you can't make a reference like this to the ConfigMap in your <em>service.spec</em> definition. This kind of usage of the <code>valueFrom</code> can be used only for container environment variables as described in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer">here</a>.</p>
<p>On the other hand, you can give the container port a name in your deployment.spec (for example <code>mycustomport</code>) and reference that name in <code>service.spec.ports.targetPort</code>, so the actual port number only has to be defined in one place and is shared by name between the Deployment and the Service (see the sketch after the quote below).</p>
<p>A note as per the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#serviceport-v1-core" rel="nofollow noreferrer">Kubernetes API reference docs</a>:</p>
<blockquote>
<p>targetPort - Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service</a></p>
</blockquote>
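<p>A minimal sketch of that named-port approach (the names <code>mycustomport</code> and <code>myapp</code> are just placeholders):</p>
<pre><code># in the Deployment's pod template
containers:
- name: myapp
  image: myapp:latest
  ports:
  - name: mycustomport
    containerPort: 8080
---
# in the Service
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: namespace
spec:
  selector:
    app: service
  ports:
  - port: 80
    targetPort: mycustomport
    protocol: TCP
</code></pre>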
| Jakub Siemaszko |
<p><code>Kubectl version</code> gives the following output.</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I have used <code>kubectl</code> to edit persistent volume from 8Gi to 30Gi as <a href="https://i.stack.imgur.com/xYH2o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYH2o.png" alt="enter image description here" /></a></p>
<p>However, when I exec the pod and run <code>df -h</code> I see the following:</p>
<p><a href="https://i.stack.imgur.com/n02oH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n02oH.png" alt="enter image description here" /></a></p>
<p>I have deleted the pods but it again shows the same thing. if I <code>cd</code> into <code>cd/dev</code> I don't see disk and <code>vda1</code> folder there as well. I think I actually want the <code>bitnami/influxdb</code> to be 30Gi. Please guide and let me know if more info is needed.</p>
| Sami Hassan | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>Based on the comments provided here, there could be several reasons for this behavior.</p>
<ol>
<li>According to the documentation from the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">Kubernetes website</a>, manually changing the PersistentVolume size will not change the volume size:</li>
</ol>
<blockquote>
<p><strong>Warning:</strong> Directly editing the size of a PersistentVolume can prevent
an automatic resize of that volume. If you edit the capacity of a
PersistentVolume, and then edit the .spec of a matching
PersistentVolumeClaim to make the size of the PersistentVolumeClaim
match the PersistentVolume, then no storage resize happens. The
Kubernetes control plane will see that the desired state of both
resources matches, conclude that the backing volume size has been
manually increased and that no resize is necessary.</p>
</blockquote>
<ol>
<li>It also depends on how Kubernetes is running and whether the <code>allowVolumeExpansion</code> feature is supported. From <a href="https://github.com/digitalocean/csi-digitalocean/issues/291#issuecomment-598783816" rel="nofollow noreferrer">DigitalOcean</a>:</li>
</ol>
<blockquote>
<p>are you running one of DigitalOcean's managed clusters, or a DIY
cluster running on DigitalOcean infrastructure? In case of the latter,
which version of our CSI driver do you use? (You need v1.2.0 or later
for volume expansion to be supported.)</p>
</blockquote>
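<p>In other words, the supported way is to grow the PersistentVolumeClaim rather than the PersistentVolume. A minimal sketch, assuming the StorageClass has <code>allowVolumeExpansion: true</code> (the PVC name here is a placeholder):</p>
<pre><code>kubectl patch pvc influxdb-data -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
</code></pre>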
| Andrew Skorkin |
<p>I've installed Kubernetes on windows 10 pro. I ran into a problem where the UI wasn't accepting the access token I had generated for some reason.</p>
<p>So I went into docker and reset the cluster so I could start over:</p>
<p><a href="https://i.stack.imgur.com/2vVld.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2vVld.png" alt="enter image description here" /></a></p>
<p>But now when I try to apply my configuration again I get an error:</p>
<pre><code>kubectl apply -f .\recommended.yaml
Unable to connect to the server: dial tcp 127.0.0.1:61634: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>I have my <code>KUBECONFIG</code> variable set:</p>
<pre><code>$env:KUBECONFIG
C:\Users\bluet\.kube\config
</code></pre>
<p>And I have let kubernetes know about the config with this command:</p>
<pre><code>[Environment]::SetEnvironmentVariable("KUBECONFIG", $HOME + "\.kube\config", [EnvironmentVariableTarget]::Machine)
</code></pre>
<p>Yet, the issue remains! How can I resolve this? Docker seems fine.</p>
| Tim Dunphy | <p>This <a href="https://stackoverflow.com/questions/54012973/kubernetes-error-unable-to-connect-to-the-server-dial-tcp-127-0-0-18080">stack overflow</a> answered my question.</p>
<p>This is what it says:</p>
<blockquote>
<p>If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change
context so that kubectl is pointing to docker-desktop:</p>
</blockquote>
<pre><code>kubectl config get-contexts
kubectl config use-context docker-desktop
</code></pre>
<p>Apparently I had installed <code>minikube</code> which is what messed it up. Switching back to a docker context is what saved the day.</p>
| Tim Dunphy |
<p>You can find mentions of that resource in the following Questions: <a href="https://stackoverflow.com/questions/53230623/forbidden-to-access-kubernetes-api-server">1</a>, <a href="https://stackoverflow.com/questions/49396607/where-can-i-get-a-list-of-kubernetes-api-resources-and-subresources">2</a>. But I am not able to figure out what is the use of this resource.</p>
| yash thakkar | <p>Yes, it's true, the provided (in comments) link to the documentation might be confusing so let me try to clarify you this.</p>
<p>As per <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies" rel="nofollow noreferrer">the official documentation</a> the apiserver proxy:</p>
<blockquote>
<ul>
<li>is a bastion built into the apiserver</li>
<li>connects a user outside of the cluster to cluster IPs which otherwise might not be reachable</li>
<li>runs in the apiserver processes</li>
<li>client to proxy uses HTTPS (or http if apiserver so configured)</li>
<li>proxy to target may use HTTP or HTTPS as chosen by proxy using available information</li>
<li><strong>can be used to reach a Node, Pod, or Service</strong></li>
<li>does load balancing when used to reach a Service</li>
</ul>
</blockquote>
<p>So answering your question - setting the <code>nodes/proxy</code> resource in a <code>ClusterRole</code> allows Kubernetes services to access kubelet endpoints on a specific node and path.</p>
<p>As per <a href="https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#control-plane-to-node" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<p>There are two primary communication paths from the control plane
(apiserver) to the nodes. The first is from the apiserver to the
kubelet process which runs on each node in the cluster. The second is
from the apiserver to any node, pod, or service through the
apiserver's proxy functionality.</p>
</blockquote>
<p>The connections from the apiserver to the kubelet are used for:</p>
<ul>
<li>fetching logs for pods</li>
<li>attaching (through kubectl) to running pods</li>
<li>providing the kubelet's port-forwarding functionality</li>
</ul>
<p>Here are also few running examples of using <code>node/proxy</code> resource in <code>clusterRole</code>:</p>
<ol>
<li><a href="https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/" rel="nofollow noreferrer">How to Setup Prometheus Monitoring On Kubernetes Cluster</a></li>
<li><a href="https://acloudguru.com/blog/engineering/running-prometheus-on-kubernetes" rel="nofollow noreferrer">Running Prometheus on Kubernetes</a></li>
</ol>
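<p>As a minimal sketch, a ClusterRole granting this kind of access (similar to what the Prometheus examples above use) could look like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-metrics-reader
rules:
- apiGroups: [""]
  # nodes/proxy lets the subject reach kubelet endpoints (e.g. /metrics) through the apiserver proxy
  resources:
  - nodes
  - nodes/proxy
  verbs: ["get", "list", "watch"]
</code></pre>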
| Jakub Siemaszko |