Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have deployed GitLab from the official GitLab Helm chart. When I deployed it, I didn't enable LDAP. Note that I didn't edit the values.yaml; rather, I used the <code>helm upgrade --install XXX</code> command to do it.</p>
<p>My question is: how do I extract the Helm values.yaml of my existing Helm deployment (name: <code>prime-gitlab</code>)? I know how to use the <code>helm show values</code> command to download the values.yaml from GitLab / Artifact Hub, but here I would like to extract the values of my existing release so I can edit the LDAP part of the values.yaml file.</p>
<pre class="lang-none prettyprint-override"><code>01:36 AM root on my-k8s-man-01 [~] helm ls -n prime-gitlab
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
prime-gitlab prime-gitlab 1 2022-02-12 01:02:15.901215658 -0800 PST deployed gitlab-5.7.2 14.7.2
</code></pre>
| mht.haque | <p>The answer here is very short. Exactly as @DavidMaze mentioned in the comments section, you're looking for <a href="https://docs.helm.sh/docs/helm/helm_get_values/" rel="noreferrer"><code>helm get values</code></a>.</p>
<p>Several options can be passed to this command.</p>
<blockquote>
<p>This command downloads a values file for a given release.</p>
<pre><code>helm get values RELEASE_NAME [flags]
</code></pre>
</blockquote>
<p><em>Options:</em></p>
<pre><code> -a, --all dump all (computed) values
-h, --help help for values
-o, --output format prints the output in the specified format. Allowed values: table, json, yaml (default table)
--revision int get the named release with revision
</code></pre>
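<p>For example, applied to the release from the question, the commands below dump the values into a file you can edit and feed back to <code>helm upgrade</code> (a short sketch; the output file names are arbitrary):</p>
<pre><code># only the values you supplied at install time
helm get values prime-gitlab -n prime-gitlab -o yaml > values.yaml

# or every value, including chart defaults
helm get values prime-gitlab -n prime-gitlab --all -o yaml > all-values.yaml
</code></pre>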
| kkopczak |
<p>We create around two hundred ingress resources when we deploy the environment to k8s. We see that they are added sequentially in k8s, adding one ingress takes ~5-10 seconds, so adding 200 ingresses took us ~30 minutes.</p>
<p>The code looks like this:
<code>kubectl apply -n namespace-1 -f file-that-contains-200-ingresses.yml --server-side=true --force-conflicts=true</code></p>
<p>Is it possible to speed up that process? Can we do the update of the nginx configuration in one batch?</p>
| gzaripov | <p>I don't think this is possible with <code>kubectl apply...</code> and a single file, since each resource is a separate <code>API</code> call and needs to go through all the checks. You can find a more detailed description <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">here</a> if you would like to know what happens when you send a create request to <code>kube-api</code>.</p>
<p>What I can advise is to split this file with <a href="https://stackoverflow.com/a/72087064/15441928">yq</a> and apply the individual files in parallel in your <code>CI</code>, as sketched below.</p>
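<p>A rough sketch of that approach (assuming mikefarah's <code>yq</code> v4.11+ with the <code>-s</code>/<code>--split-exp</code> flag, which names each output file after the expression result):</p>
<pre><code># split the 200-ingress manifest into one file per resource
mkdir split
cd split
yq -s '.metadata.name' ../file-that-contains-200-ingresses.yml

# apply the pieces in parallel, e.g. 10 at a time
ls *.yml | xargs -n 1 -P 10 kubectl apply -n namespace-1 --server-side=true --force-conflicts=true -f
</code></pre>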
| Michał Lewndowski |
<p>We have an Elasticsearch cluster at <code>${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}</code> and a Filebeat pod in the k8s cluster that exports the other pods' logs.</p>
<p>Here is the <code>filebeat.yml</code>:</p>
<pre><code>filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: develop
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              exclude_lines: ["^\\s+[\\-`('.|_]"]
      hints.enabled: true
      hints.default_config:
        type: container
        multiline.type: pattern
        multiline.pattern: '^[[:space:]]'
        multiline.negate: false
        multiline.match: after
http:
  enabled: true
  host: localhost
  port: 5066
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}'
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  indices:
    - index: "develop"
      when:
        equals:
          kubernetes.namespace: "develop"
    - index: "kubernetes-dev"
      when:
        not:
          and:
            - equals:
                kubernetes.namespace: "develop"
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      - decode_json_fields:
          fields: ["message"]
          add_error_key: true
          process_array: true
          overwrite_keys: false
          max_depth: 10
          target: json_message
</code></pre>
<p>I've checked that Filebeat has access to <code>/var/log/containers/</code> on the Kubernetes nodes, but the Elasticsearch cluster still doesn't get any <code>develop</code> or <code>kubernetes-dev</code> indices. (The cluster has the corresponding index templates for these indices.)</p>
<p><code>http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cluster/health?pretty</code>:</p>
<pre><code>{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 14,
"active_shards" : 28,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
</code></pre>
<p>Filebeat log:</p>
<pre><code>{
"log.level": "info",
"@timestamp": "2022-11-25T08:35:18.084Z",
"log.logger": "monitoring",
"log.origin": {
"file.name": "log/log.go",
"file.line": 184
},
"message": "Non-zero metrics in the last 30s",
"service.name": "filebeat",
"monitoring": {
"metrics": {
"beat": {
"cgroup": {
"cpu": {
"stats": {
"periods": 38
}
},
"cpuacct": {
"total": {
"ns": 1576170001
}
},
"memory": {
"mem": {
"usage": {
"bytes": 4096
}
}
}
},
"cpu": {
"system": {
"ticks": 13570,
"time": {
"ms": 56
}
},
"total": {
"ticks": 23320,
"time": {
"ms": 90
},
"value": 23320
},
"user": {
"ticks": 9750,
"time": {
"ms": 34
}
}
},
"handles": {
"limit": {
"hard": 1048576,
"soft": 1048576
},
"open": 11
},
"info": {
"ephemeral_id": "a88f461e-1fd3-48d3-a1c7-3bfea1124ab9",
"uptime": {
"ms": 7530198
},
"version": "8.1.0"
},
"memstats": {
"gc_next": 17767824,
"memory_alloc": 16385608,
"memory_total": 709250600,
"rss": 104206336
},
"runtime": {
"goroutines": 23
}
},
"filebeat": {
"harvester": {
"open_files": 0,
"running": 0
}
},
"libbeat": {
"config": {
"module": {
"running": 0
}
},
"output": {
"events": {
"active": 0
}
},
"pipeline": {
"clients": 0,
"events": {
"active": 0
}
}
},
"registrar": {
"states": {
"current": 0
}
},
"system": {
"load": {
"1": 3.99,
"15": 3.91,
"5": 4.08,
"norm": {
"1": 0.4988,
"15": 0.4888,
"5": 0.51
}
}
}
},
"ecs.version": "1.6.0"
}
}
</code></pre>
<p>Where could the problem be? I'm absolutely new to ELK, so picking it up partway through is a bit hard.</p>
| Rumotameru | <p>In the end I found out that the Elasticsearch cluster version used by the company is much older than the Filebeat version (Filebeat 8.1 against Elasticsearch 7.9). Temporarily allowing the use of older versions solved the issue for the time being.</p>
<pre><code>output.elasticsearch:
allow_older_versions: true
</code></pre>
<p>BUT @Paulo also had a point: the <code>not + and</code> construction turned out to be excessive in the end.</p>
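<p>One way the simplified output section could look (a sketch, not the author's exact config; it relies on the Beats behaviour that an <code>indices</code> rule without a condition matches everything earlier rules did not):</p>
<pre><code>output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}'
  allow_older_versions: true
  indices:
    - index: "develop"
      when.equals:
        kubernetes.namespace: "develop"
    # no condition here: everything that did not match above lands in this index
    - index: "kubernetes-dev"
</code></pre>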
| Rumotameru |
<p>I just created a new AKS cluster that has to replace an old cluster. The new cluster is now ready to replace the old one, except for one crucial thing: its outbound IP address. The address of the old cluster must be used so that our existing DNS records do not have to change.</p>
<p><strong>How do I change the public IP address of the Azure load balancer (that is used by the nginx ingress controller) of the new cluster to the one used by the old cluster?</strong>
The old cluster is still running; I want to switch it off / delete it once the new cluster is available. Some downtime to switch the IP address is acceptable.</p>
<p>I think that the IP first has to be deleted from the old cluster's load balancer frontend IP configuration and can then be added to the frontend IP configuration of the load balancer used in the new cluster. But I need to know exactly how to do this and what else needs to be done (maybe adding a backend pool?).</p>
<p><strong>Update</strong></p>
<p>During the installation of the new cluster I already added the public IP address of the old cluster's load balancer to the YAML of the new ingress-nginx controller.
The nginx controller load balancer service in the new cluster is in the state <em>Pending</em> and continuously generates events with the message "Ensuring Load Balancer". Could it be as simple as assigning another IP address to the ingress-nginx controller load balancer in the old cluster, so that the IP becomes free for use in the new cluster?</p>
| Patrick Koorevaar | <p>You have to create a static public IP address for the AKS cluster. Once you delete the old cluster, the public IP address and load balancer associated with it will be deleted as well. You can check and try this documentation[1] for a detailed guide.</p>
<p>[1] <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p>
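<p>As a rough sketch of what that looks like in practice (the resource group and IP name below are placeholders): create a static Standard-SKU public IP in the cluster's node resource group, then pin the ingress-nginx controller Service to it via <code>loadBalancerIP</code> (and, if the IP lives outside the node resource group, the <code>service.beta.kubernetes.io/azure-load-balancer-resource-group</code> annotation):</p>
<pre><code>az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --name myNginxStaticIP \
  --sku Standard \
  --allocation-method static \
  --query publicIp.ipAddress -o tsv
</code></pre>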
| Marvin Lucero |
<p>I am attempting to create a kubernetes ConfigMap with helm, which simply consists of the first line within a config file. I put my file in <code>helm/config/file.txt</code>, which has several lines of content, but I only want to extract the first. My first attempt at this was to loop over the lines of the file (naturally), but quit out after the first loop:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: one-line-cm
data:
  first-line:
    {{- range .Files.Lines "config/file.txt" }}
    {{ . }}
    {{ break }} # not a real thing
    {{- end }}
</code></pre>
<p>Unfortunately, <code>break</code> doesn't seem to be a concept/function in helm, even though it is within golang. I discovered this the hard way, as well as reading about a similar question in this other post: <a href="https://stackoverflow.com/questions/60966946/helm-break-loop-range-in-template">Helm: break loop (range) in template</a></p>
<p>I'm not stuck on using a loop, I'm just wondering if there's another solution to perform the simple task of extracting the first line from a file with helm syntax.</p>
| tubensandwich | <p>EDIT:<br />
I've determined the following is the cleanest solution:</p>
<pre><code>.Files.Lines "config/file.txt" | first
</code></pre>
<p>(As a side note, I had to pipe to <code>squote</code> in my actual solution because my file contents contain special characters.)</p>
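<p>For completeness, the whole ConfigMap with that expression (including the <code>squote</code> mentioned above) would look roughly like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: one-line-cm
data:
  first-line: {{ .Files.Lines "config/file.txt" | first | squote }}
</code></pre>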
<hr />
<p>After poking around in the helm <a href="https://helm.sh/docs/chart_template_guide/function_list/" rel="nofollow noreferrer">docs</a> for alternative functions, I came up with a solution that works, it's just not that pretty:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: one-line-cm
data:
  first-line: |
    {{ index (regexSplit "\\n" (.Files.Get "config/file.txt") -1) 0 }}
</code></pre>
<p>This is what's happening above (working inside outward):</p>
<ol>
<li><code>.Files.Get "config/file.txt"</code> is returning a string representation of the file contents.</li>
<li><code>regexSplit "\\n" <step-1> -1</code> is splitting the file contents from step-1 by newline (-1 means return the max number of substring matches possible)</li>
<li><code>index <step-2> 0</code> is grabbing the first item (index 0) from the list returned by step-2.</li>
</ol>
<p>Hope this is able to help others in similar situations, and I am still open to alternative solution suggestions.</p>
| tubensandwich |
<p>I have an API service running as a Docker image and now I want to test it on Kubernetes with Docker Desktop, but I can't get it running.
The Docker image's name is <code>api_service</code>.</p>
<p>this is the yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api-service
spec:
  selector:
    matchLabels:
      app: my-api-service
  replicas: 1
  template:
    metadata:
      labels:
        app: my-api-service
    spec:
      containers:
        - name: api_service
          image: api_service
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  selector:
  ports:
    - protocol: TCP
      port: 5001
      targetPort: 5001
</code></pre>
<p>Checking with <code>kubectl get pods --all-namespaces</code>, the status is <code>ImagePullBackOff</code>.
What am I doing wrong?</p>
<p>update:</p>
<p>calling kubectl describe:</p>
<pre><code>Name: my-api-service-7ffdb9d6b7-x5zs8
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 15 Aug 2022 13:55:47 +0200
Labels: app=my-api-service
pod-template-hash=7ffdb9d6b7
Annotations: <none>
Status: Pending
IP: 10.1.0.15
IPs:
IP: 10.1.0.15
Controlled By: ReplicaSet/my-api-service-7ffdb9d6b7
Containers:
my-api-service:
Container ID:
Image: api_service
Image ID:
Port: 5001/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hgghw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-hgghw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 4m37s (x413 over 99m) kubelet Back-off pulling image "api_service"
</code></pre>
| stanvooz | <p>You need to have the image pushed to a registry rather than only built locally; by default Kubernetes does not share an image repository with your local docker/containerd image store. As I see it, you built and tagged an image locally and then want to use it directly in a k8s Deployment, which generally should not be done. You can, however, link Kubernetes with your locally built Docker images and try it that way. Still, doing this in production may cause serious problems. Please follow this <a href="https://medium.com/swlh/how-to-run-locally-built-docker-images-in-kubernetes-b28fbc32cc1d" rel="nofollow noreferrer">link</a>.</p>
<p>Also, whenever you don't know where the problem is, you can check the <strong>kubelet</strong> logs, since the <strong>kubelet</strong> is what pulls the images. Depending on your k8s version and setup, run the appropriate command to get the kubelet logs.</p>
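<p>Since this is Docker Desktop's bundled Kubernetes, which shares the image store with your local Docker daemon, another thing worth trying (a sketch with an assumed tag, not guaranteed for every setup) is to stop Kubernetes from pulling at all and use an explicit, non-<code>latest</code> tag:</p>
<pre><code>    spec:
      containers:
        - name: api-service          # note: container names may not contain underscores
          image: api_service:1.0     # hypothetical tag given to the locally built image
          imagePullPolicy: Never     # use the local image; never contact a registry
          ports:
            - containerPort: 5001
</code></pre>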
| Pavol Krajkovič |
<p>I have a new install of K8s master and node both on ubuntu-18. The master is using weave for CNI and all pods are running:</p>
<pre><code>$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-29qg5 1/1 Running 0 31m
kube-system coredns-6d4b75cb6d-kxxc8 1/1 Running 0 31m
kube-system etcd-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-apiserver-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-controller-manager-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-proxy-nvqjl 1/1 Running 0 31m
kube-system kube-scheduler-ubuntu-18-extssd 1/1 Running 2 31m
kube-system weave-net-th4kv 2/2 Running 0 31m
</code></pre>
<p>When I execute the <code>kubeadm join</code> command on the node I get the following error:</p>
<pre><code>sudo kubeadm join 192.168.0.12:6443 --token ikk2kd.177ij0f6n211sonl --discovery-token-ca-cert-hash sha256:8717baa3c634321438065f40395751430b4fb55f43668fac69489136335721dc
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0724 16:24:41.009234 8391 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-07-24T16:24:41-06:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>The only problem showing up in <code>journalctl -r -u kubelet</code> is:</p>
<pre><code>kubelet.service: Main process exited, code=exited, status=1/FAILURE
...
Error: failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml
</code></pre>
<p>That is from several minutes before the <code>join</code> failed when kubelet was trying to start. I would expect that config.yaml file to be missing until the node joined a cluster.</p>
<p>The preflight error message says</p>
<pre><code>[ERROR CRI]: container runtime is not running: output: E0724 16:32:41.120653 10509 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
</code></pre>
<p>What is this trying to tell me?</p>
<p>====Edit=====
I am running CrashPlan on the worker node that is failing, but I have <code>fs.inotify.max_user_watches=1048576</code> in /etc/sysctl.conf.</p>
<p>This node worked before both with on-prem master and with GKE with kubernetes 1.20.</p>
| Dean Schulze | <p>The <code>[ERROR CRI]: container runtime is not running ... unknown service runtime.v1alpha2.RuntimeService</code> preflight error means kubeadm cannot talk to containerd's CRI interface, because the CRI plugin is disabled in containerd's default configuration.</p>
<p>Remove the default containerd config and restart containerd; after that you can run your <code>kubeadm join</code> (or <code>kubeadm init</code>) command again.</p>
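<p>A minimal sketch of those commands, run as root on the failing node (regenerating a fresh default config afterwards with <code>containerd config default</code> is optional and not part of the original answer):</p>
<pre><code>rm /etc/containerd/config.toml
systemctl restart containerd
</code></pre>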
| syed adeeb |
<p>In the new <strong>Kubespray</strong> release <strong>containerd</strong> is set as the default, but in the old one it isn't.</p>
<p>I want to switch from Docker to containerd in the old version and install with that version.</p>
<p>When I look at the <code>offline.yml</code>, I don't see any option for <strong>containerd</strong> for <strong>Red Hat</strong>. Below is the code from <code>offline.yml</code>:</p>
<pre><code># CentOS/Redhat/AlmaLinux/Rocky Linux
## Docker / Containerd
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
# Fedora
## Docker
docker_fedora_repo_base_url: "{{ yum_repo }}/docker-ce/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}"
docker_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Containerd
containerd_fedora_repo_base_url: "{{ yum_repo }}/containerd"
containerd_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
# Debian
## Docker
docker_debian_repo_base_url: "{{ debian_repo }}/docker-ce"
docker_debian_repo_gpgkey: "{{ debian_repo }}/docker-ce/gpg"
## Containerd
containerd_debian_repo_base_url: "{{ ubuntu_repo }}/containerd"
containerd_debian_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
containerd_debian_repo_repokey: 'YOURREPOKEY'
# Ubuntu
## Docker
docker_ubuntu_repo_base_url: "{{ ubuntu_repo }}/docker-ce"
docker_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/docker-ce/gpg"
## Containerd
containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
containerd_ubuntu_repo_repokey: 'YOURREPOKEY'
</code></pre>
<p>How should I set containerd in <code>offline.yml</code>, and how do I find which version of containerd is stable with this Kubespray release?</p>
<p>Thanks for answering</p>
| Orkun | <p>Always try digging into the documentation's history. Since you're looking at an outdated version, see <a href="https://github.com/kubernetes-sigs/kubespray/commit/8f2b0772f9ca2d146438638e1fb9f7484cbdbd55#:%7E:text=calicoctl%2Dlinux%2D%7B%7B%20image_arch%20%7D%7D%22-,%23%20CentOS/Redhat,extras_rh_repo_gpgkey%3A%20%22%7B%7B%20yum_repo%20%7D%7D/containerd/gpg%22,-%23%20Fedora" rel="nofollow noreferrer">this fragment</a> of offline.yml:</p>
<pre class="lang-yaml prettyprint-override"><code># CentOS/Redhat
## Docker
## Docker / Containerd
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Containerd
extras_rh_repo_base_url: "{{ yum_repo }}/centos/{{ ansible_distribution_major_version }}/extras/$basearch"
extras_rh_repo_gpgkey: "{{ yum_repo }}/containerd/gpg"
</code></pre>
<p>Reference: <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer"><em>kubespray documentation</em></a>.</p>
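<p>As a practical sketch (the paths below are assumptions based on recent Kubespray layouts, so adjust them to your checkout): check out the exact release tag you deploy with and grep its files for the containerd settings and the pinned containerd version:</p>
<pre><code>cd kubespray
git checkout v2.15.1   # hypothetical: substitute the release tag you actually use
grep -n -i containerd inventory/sample/group_vars/all/offline.yml
grep -rn 'containerd_version' roles/
</code></pre>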
| kkopczak |
<p>I've recently used Google Kubernetes Engine to deploy my Magento project, and I deployed it successfully. My next step is that on each git push my Jenkins pipeline should start building and updating the project in my Kubernetes cluster.
I've been looking for tutorials, but I've found no documentation about how to run kubectl in Jenkins with my GKE credentials.
If anyone is familiar with this kind of task and has any reference, please help me.</p>
| nessHaf | <p>Actually, you're asking this question:</p>
<blockquote>
<p>how to run kubectl in jenkins with my GKE credentials</p>
</blockquote>
<p>While researching how you can manage it, I found this <a href="https://blog.bewgle.com/2020/04/13/setting-up-ci-cd-for-gke-with-jenkins/" rel="nofollow noreferrer">tutorial</a> about setting up CI/CD for GKE with Jenkins, which contains the steps needed to build and update the project. You can also take a look at the whole tutorial; it might help you with your project.<br>
But let's check the part called <strong>Jenkins Job Build</strong>; there you'll find the authentication method and how to get the credentials for the cluster:</p>
<blockquote>
<p>#To activate creds for the first time, can also be done in Jenkins machine directly and get credentials for kubectl</p>
<pre><code>gcloud auth activate-service-account account_name --key-file [KEY_FILE]
</code></pre>
<p>#Get credentials for cluster</p>
<pre><code>gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>
</code></pre>
</blockquote>
<p>And also, deploying it to the project:</p>
<blockquote>
<p>#To create new deployment</p>
<pre><code>kubectl create deployment <deployment-name> --image=gcr.io/<project-name>/nginx:${version}
</code></pre>
<p>#For rolling update</p>
<pre><code>kubectl set image deployment/<app_name> nginx=gcr.io/<project-name>/<appname>/nginx:${version} --record
</code></pre>
</blockquote>
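<p>Put together, a Jenkins build step could run something along these lines (a sketch only; the key file, cluster, zone, project and image names are all placeholders):</p>
<pre><code>#!/usr/bin/env bash
set -euo pipefail

# authenticate the Jenkins machine against GCP and fetch kubectl credentials
gcloud auth activate-service-account --key-file="$GCP_SA_KEY_FILE"
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project

# build, push and roll out the new image
docker build -t "gcr.io/my-project/magento:${BUILD_NUMBER}" .
docker push "gcr.io/my-project/magento:${BUILD_NUMBER}"
kubectl set image deployment/magento magento="gcr.io/my-project/magento:${BUILD_NUMBER}"
</code></pre>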
| Sergio NH |
<p>I'm currently learning Kubernetes basics and I would like to expose a MongoDB outside of my cluster. I've set up my nginx ingress controller and followed this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">doc</a> to expose a plain TCP connection.</p>
<p>This is my ingress controller Service configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ipFamilyPolicy: SingleStack
  externalIPs:
    - 172.30.63.51
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
    - name: proxied-tcp-27017
      port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
</code></pre>
<p>The ConfigMap to proxy TCP connections:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  27017: "global-stack/global-stack-mongo-svc:27017"
</code></pre>
<p>My ingress controller works well on ports 80 and 443 to expose my services, but I cannot access port 27017.</p>
<p>Result of <code>kubectl get svc -n ingress-nginx</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
ingress-nginx-controller NodePort 10.97.149.93 172.30.63.51 80:30159/TCP,443:32585/TCP,27017:30098/TCP
ingress-nginx-controller-admission ClusterIP 10.107.33.165 <none> 443/TCP
</code></pre>
<p>The external IP responds fine to <code>curl 172.30.63.51:80</code>:</p>
<pre><code><html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p>But it does not respond on port 27017:</p>
<pre><code>curl: (7) Failed to connect to 172.30.63.51 port 27017: Connection refused
</code></pre>
<p>My Mongo Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: global-stack-mongo-svc
  namespace: global-stack
  labels:
    app: global-stack-mongo-app
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    app: global-stack-mongo-app
</code></pre>
<p>The service's cluster IP is 10.244.1.57 and responds fine:</p>
<pre><code>>> curl 10.244.1.57:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
</code></pre>
<p>If anyone could help me I would be very grateful.
Thanks</p>
<p>Guieme.</p>
| Guieme | <p>After some research I solved my issue.</p>
<p>It isn't clearly described in the ingress-nginx documentation, but you need to map the TCP ConfigMap to the ingress-controller container by adding these lines (in particular the <code>--tcp-services-configmap</code> flag) to the args in the controller Deployment:</p>
<pre><code>    args:
      - /nginx-ingress-controller
      - --election-id=ingress-controller-leader
      - --controller-class=k8s.io/ingress-nginx
      - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      - --validating-webhook=:8443
      - --validating-webhook-certificate=/usr/local/certificates/cert
      - --validating-webhook-key=/usr/local/certificates/key
      - --tcp-services-configmap=ingress-nginx/tcp-services
</code></pre>
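<p>To verify the change (based on the outputs already shown in the question): wait for the controller rollout, then the raw TCP port should reach MongoDB instead of being refused:</p>
<pre><code>kubectl -n ingress-nginx rollout status deployment/ingress-nginx-controller
curl 172.30.63.51:27017
# expected: "It looks like you are trying to access MongoDB over HTTP on the native driver port."
</code></pre>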
| Guieme |
<p>I am learning kubernetes and created first pod using below command</p>
<pre><code>kubectl run helloworld --image=<image-name> --port=8080
</code></pre>
<p>The Pod creation was successful.
But since it is neither a ReplicationController nor a Deployment, how can I expose it as a Service? Please advise.</p>
| Patty | <p>Please refer to the documentation of the <strong>Kubernetes Service concept</strong>: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a>.
At the end of that page there is also an interactive tutorial in Minikube.</p>
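<p>Concretely, a bare Pod can be exposed directly as long as it carries labels, and <code>kubectl run</code> adds a <code>run=helloworld</code> label for you, so something like the following should work (a sketch; the Service names and type are up to you):</p>
<pre><code># ClusterIP Service in front of the pod
kubectl expose pod helloworld --port=8080 --target-port=8080 --name=helloworld-svc

# or reachable from outside the cluster
kubectl expose pod helloworld --port=8080 --target-port=8080 --type=NodePort --name=helloworld-np
</code></pre>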
| Pavol Krajkovič |
<p>I have installed Minikube, but when I run <code>minikube start</code>, I get this error:</p>
<pre class="lang-none prettyprint-override"><code>minikube v1.17.1 on Ubuntu 20.04
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Updating the running docker "minikube" container ...
Preparing Kubernetes v1.20.2 on Docker 20.10.0 ...
Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
    - Generating certificates and keys ...
    - Booting up control plane ...
π’ initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
[
    - Generating certificates and keys ...
    - Booting up control plane ...
π£ Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
minikube is exiting due to an error. If the above message is not useful, open an issue:
https://github.com/kubernetes/minikube/issues/new/choose
β Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
Related issue: https://github.com/kubernetes/minikube/issues/4172
</code></pre>
<p>I can't understand what the problem is here. It was working, but then I got a similar error. It says:</p>
<blockquote>
<p>Preparing Kubernetes v1.20.0 on Docker 20.10.0 ... | Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory</p>
</blockquote>
<p>I uninstalled it and then reinstalled it, but I still got the error.</p>
<p>How can I fix this?</p>
| feiz | <p>You can use <code>minikube delete</code> to delete the old cluster. After that start Minikube using <code>minikube start</code>.</p>
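<p>For example (a sketch; <code>--purge</code> additionally removes the <code>~/.minikube</code> folder with its cached images and certificates):</p>
<pre><code>minikube delete --all --purge
minikube start --driver=docker
</code></pre>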
| Abetti |
<p>I have deployed my running application in AKS. I want to add a new disk (a 30 GB hard disk), but I don't know how to do it.</p>
<p>I want to attach 3 disks.</p>
<p>Here are the details of the AKS cluster:</p>
<ul>
<li>Node size: <code>Standard_DS2_v2</code></li>
<li>Node pools: <code>1 node pool</code></li>
<li>Storage is:</li>
</ul>
<hr />
<pre><code>default (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true
</code></pre>
<p>Please tell me how to add it.</p>
| Mohd Rashid | <p>Based on <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#:%7E:text=A%20PersistentVolume%20(PV)%20is%20a,node%20is%20a%20cluster%20resource.&text=Pods%20can%20request%20specific%20levels%20of%20resources%20(CPU%20and%20Memory)." rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>A <em>PersistentVolume</em> (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Storage Classes</a>.</p>
<p>It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.</p>
</blockquote>
<p>In the <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-storage#persistent-volumes" rel="nofollow noreferrer">Azure documentation</a> one can find clear guides how to:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Disks</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Files</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Disks</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Files</em></a></li>
</ul>
<p><strong>NOTE</strong>:
Before you begin, you should have an <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">existing AKS cluster</a> and Azure CLI version 2.0.59 or <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="nofollow noreferrer">later installed</a> and configured. To check your version, run:</p>
<pre><code>az --version
</code></pre>
<p>See also <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">this documentation</a>.</p>
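<p>As a minimal sketch (names are placeholders), a dynamically provisioned 30 GB Azure Disk can be requested through the default storage class shown in the question; repeat the claim (or use a StatefulSet volumeClaimTemplate) for each of the three disks and mount it in the pod via <code>persistentVolumeClaim.claimName</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-disk-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default   # the azure-disk class listed above
  resources:
    requests:
      storage: 30Gi
</code></pre>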
| kkopczak |
<p>Having trouble running the <code>manage.py migrate</code> command in our kubernetes cluster. It seems to have lost permission to run anything. None of the <code>manage.py</code> commands work; they all hit the same issue.</p>
<p>I have no ability to change the permissions or ownership on the container. This worked in the past (at least Nov 2021) but using the latest version causes this error. Does anyone have any idea why the commands no longer work?</p>
<pre><code>bash-4.4$ ./manage.py migrate
Traceback (most recent call last):
File "./manage.py", line 12, in <module>
execute_from_command_line(sys.argv)
File "/venv/lib64/python3.8/site-packages/django/core/management/__init__.py", line 425, in execute_from_command_line
utility.execute()
File "/venv/lib64/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/venv/lib64/python3.8/site-packages/django/core/management/base.py", line 373, in run_from_argv
self.execute(*args, **cmd_options)
File "/venv/lib64/python3.8/site-packages/django/core/management/base.py", line 417, in execute
output = self.handle(*args, **options)
File "/venv/lib64/python3.8/site-packages/django/core/management/base.py", line 90, in wrapped
res = handle_func(*args, **kwargs)
File "/venv/lib64/python3.8/site-packages/django/core/management/commands/migrate.py", line 75, in handle
self.check(databases=[database])
File "/venv/lib64/python3.8/site-packages/django/core/management/base.py", line 438, in check
all_issues = checks.run_checks(
File "/venv/lib64/python3.8/site-packages/django/core/checks/registry.py", line 77, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "/venv/lib64/python3.8/site-packages/tcms/core/checks.py", line 15, in check_installation_id
with open(filename, "w", encoding="utf-8") as file_handle:
PermissionError: [Errno 13] Permission denied: '/Kiwi/uploads/installation-id'
</code></pre>
| Jose | <p>I needed to add this to <code>deployment.yaml</code>:</p>
<pre><code>securityContext:
  fsGroup: 1001
</code></pre>
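<p>For context, a sketch of where that lands in the Deployment's pod template (container name and image are hypothetical; <code>fsGroup</code> makes mounted volumes group-owned by GID 1001 and group-writable, which is what <code>/Kiwi/uploads</code> needs here):</p>
<pre><code>spec:
  template:
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: kiwi
          image: kiwitcms/kiwi
</code></pre>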
| Jose |
<p>This should be fairly easy, or I might doing something wrong, but after a while digging into it I couldn't find a solution.</p>
<p>I have a Terraform configuration that contains a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer">Kubernetes Secret</a> resource which data comes from Vault. The resource configuration looks like this:</p>
<pre><code>resource "kubernetes_secret" "external-api-token" {
  metadata {
    name      = "external-api-token"
    namespace = local.platform_namespace
    annotations = {
      "vault.security.banzaicloud.io/vault-addr" = var.vault_addr
      "vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
      "vault.security.banzaicloud.io/vault-role" = "reader"
    }
  }
  data = {
    "EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
  }
}
</code></pre>
<p>Everything is working fine so far, but every time I do <code>terraform plan</code> or <code>terraform apply</code>, it marks that resource as "changed" and updates it, even when I didn't touch the resource or other resources related to it. E.g.:</p>
<pre><code>... (other actions to be applied, unrelated to the offending resource) ...
# kubernetes_secret.external-api-token will be updated in-place
~ resource "kubernetes_secret" "external-api-token" {
~ data = (sensitive value)
id = "platform/external-api-token"
type = "Opaque"
metadata {
annotations = {
"vault.security.banzaicloud.io/vault-addr" = "https://vault.infra.megacorp.io:8200"
"vault.security.banzaicloud.io/vault-path" = "kubernetes/gke-pipe-stg-2"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
generation = 0
labels = {}
name = "external-api-token"
namespace = "platform"
resource_version = "160541784"
self_link = "/api/v1/namespaces/platform/secrets/external-api-token"
uid = "40e93d16-e8ef-47f5-92ac-6d859dfee123"
}
}
Plan: 3 to add, 1 to change, 0 to destroy.
</code></pre>
<p>It is saying that the data for this resource has been changed. However the data in Vault remains the same, nothing has been modified there. This update happens every single time now.</p>
<p>I was thinking on to use the <a href="https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes" rel="nofollow noreferrer"><code>ignore_changes</code></a> lifecycle feature, but I assume this will make any changes done in Vault secret to be ignored by Terraform, which I also don't want. <strong>I would like the resource to be updated only when the secret in Vault was changed.</strong></p>
<p>Is there a way to do this? What am I missing or doing wrong?</p>
| José L. Patiño | <p>You need to add the Terraform <code>lifecycle</code> <code>ignore_changes</code> meta-argument to your code. For <code>data</code> containing API token values, and also for annotations, Terraform for some reason seems to assume that the data changes every time a plan, apply or even destroy is run. I had a similar issue with Azure Key Vault.</p>
<p>Here is the code with the lifecycle <code>ignore_changes</code> meta-argument included:</p>
<pre><code>resource "kubernetes_secret" "external-api-token" {
  metadata {
    name      = "external-api-token"
    namespace = local.platform_namespace
    annotations = {
      "vault.security.banzaicloud.io/vault-addr" = var.vault_addr
      "vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
      "vault.security.banzaicloud.io/vault-role" = "reader"
    }
  }
  data = {
    "EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
  }
  lifecycle {
    ignore_changes = [
      # Ignore changes to data and annotations, e.g. because a management agent
      # updates these based on some ruleset managed elsewhere.
      data, annotations,
    ]
  }
}
</code></pre>
<p>link to meta arguments with lifecycle:</p>
<p><a href="https://www.terraform.io/language/meta-arguments/lifecycle" rel="nofollow noreferrer">https://www.terraform.io/language/meta-arguments/lifecycle</a></p>
| Jason |
<p>I am encountering a weird behavior when I try to attach <code>podAffinity</code> to the <strong>Scheduler deployment from the official Airflow helm chart</strong>, like:</p>
<pre><code>affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - postgresql
        topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>And an example Deployment to which the <code>podAffinity</code> should "hook up":</p>
<pre><code>metadata:
  name: {{ template "postgresql.fullname" . }}
  labels:
    app: postgresql
    chart: {{ template "postgresql.chart" . }}
    release: {{ .Release.Name | quote }}
    heritage: {{ .Release.Service | quote }}
spec:
  serviceName: {{ template "postgresql.fullname" . }}-headless
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
      release: {{ .Release.Name | quote }}
  template:
    metadata:
      name: {{ template "postgresql.fullname" . }}
      labels:
        app: postgresql
        chart: {{ template "postgresql.chart" . }}
</code></pre>
<p>Which results in:</p>
<pre><code>NotTriggerScaleUp: pod didn't trigger scale-up: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod affinity rules
</code></pre>
<p><strong>However, applying the same <code>podAffinity</code> config to the Webserver deployment works just fine. Plus, changing the example Deployment to a vanilla nginx manifested itself in the outcome.</strong></p>
<p>It does not seem to be a resource limitation issue, since I have already tried various configurations, every time with the same result.
I do not use any custom configuration apart from node affinity.</p>
<p>Has anyone encountered the same, or has any idea what I might be doing wrong?</p>
<p><strong>Setup:</strong></p>
<ul>
<li>AKS cluster</li>
<li>Airflow helm chart 1.1.0</li>
<li>Airflow 1.10.15 (but I don't think this matters)</li>
<li>kubectl client (1.22.1) and server (1.20.7)</li>
</ul>
<p><strong>Links to Airflow charts:</strong></p>
<ul>
<li><a href="https://github.com/apache/airflow/blob/main/chart/templates/scheduler/scheduler-deployment.yaml" rel="nofollow noreferrer">Scheduler</a></li>
<li><a href="https://github.com/apache/airflow/blob/main/chart/templates/webserver/webserver-deployment.yaml" rel="nofollow noreferrer">Webserver</a></li>
</ul>
| Bennimi | <p>I've recreated this scenario on my GKE cluster and I've decided to provide a Community Wiki answer to show that the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">podAffinity</a> on the <a href="https://github.com/apache/airflow/blob/main/chart/templates/scheduler/scheduler-deployment.yaml" rel="nofollow noreferrer">Scheduler</a> works as expected.
I will describe step by step how I tested it below.</p>
<hr />
<ol>
<li>In the <code>values.yaml</code> file, I've configured the <code>podAffinity</code> as follows:</li>
</ol>
<pre><code>$ cat values.yaml
...
# Airflow scheduler settings
scheduler:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - postgresql
          topologyKey: "kubernetes.io/hostname"
...
</code></pre>
<ol start="2">
<li>I've installed the <a href="https://airflow.apache.org/docs/helm-chart/stable/index.html#installing-the-chart" rel="nofollow noreferrer">Airflow</a> on a Kubernetes cluster using the Helm package manager with the <code>values.yaml</code> file specified.</li>
</ol>
<pre><code>$ helm install airflow apache-airflow/airflow --values values.yaml
</code></pre>
<p>After a while we can check the status of the <code>scheduler</code>:</p>
<pre><code>$ kubectl get pods -owide | grep "scheduler"
airflow-scheduler-79bfb664cc-7n68f 0/2 Pending 0 8m6s <none> <none> <none> <none>
</code></pre>
<ol start="3">
<li>I've created an example Deployment with the <code>app: postgresql</code> label:</li>
</ol>
<pre><code>$ cat test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgresql
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - image: nginx
          name: nginx
$ kubectl apply -f test.yaml
deployment.apps/test created
$ kubectl get pods --show-labels | grep test
test-7d4c9c654-7lqns 1/1 Running 0 2m app=postgresql,...
</code></pre>
<ol start="4">
<li>Finally, we can check that the <code>scheduler</code> was successfully created:</li>
</ol>
<pre><code>$ kubectl get pods -o wide | grep "scheduler\|test"
airflow-scheduler-79bfb664cc-7n68f 2/2 Running 0 14m 10.X.1.6 nodeA
test-7d4c9c654-7lqns 1/1 Running 0 2m27s 10.X.1.5 nodeA
</code></pre>
<hr />
<p>Additionally, detailed information on <code>pod affinity</code> and <code>pod anti-affinity</code> can be found in the <a href="https://docs.openshift.com/container-platform/4.9/nodes/scheduling/nodes-scheduler-pod-affinity.html#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity" rel="nofollow noreferrer">Understanding pod affinity</a> documentation:</p>
<blockquote>
<p>Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods.</p>
<p>Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod.</p>
<p>Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod.</p>
</blockquote>
| kkopczak |
<p>I am following the conference demo here <a href="https://www.youtube.com/watch?v=3KtEAa7_duA&list=PLsClnAJ27pEXdSwW2tI0uc0YJ2wzxJG6b" rel="nofollow noreferrer">https://www.youtube.com/watch?v=3KtEAa7_duA&list=PLsClnAJ27pEXdSwW2tI0uc0YJ2wzxJG6b</a></p>
<p>My aim is to start all Kubernetes components by hand to understand the architecture better; however, I stumbled upon this problem when I start the API server:</p>
<pre><code>root@BLQ00667LT:/home/user/kubernetes# ./kubernetes/server/bin/kube-apiserver --etcd-servers=http://localhost:2379
W0704 11:13:35.394474 4924 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0704 11:13:35.394558 4924 server.go:391] external host was not specified, using 172.17.89.222
W0704 11:13:35.394569 4924 authentication.go:527] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
E0704 11:13:35.395059 4924 run.go:74] "command failed" err="[service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]"
</code></pre>
<p>I suspect the Kubernetes version used by the tutorial is different from mine, mine being much more recent, as the video is from 2019 and I downloaded the latest Kubernetes v1.28.</p>
<p>Would you know what's wrong, or if there is any other tutorial or learning path I could follow?</p>
| pedr0 | <p>Thanks for asking.</p>
<p>If you want to know more about kubernetes architecture you can try this</p>
<p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a> from Kelsey</p>
<p>It's easier because it has code that you can copy and paste, but you need to adapt it to your environment (subnets, routes and network interfaces), because it naturally targets Google Cloud.</p>
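<p>For reference, the specific error in your output appears because recent kube-apiserver versions (including v1.28) refuse to start without a service-account signing key and issuer. A minimal hand-rolled invocation could look roughly like the sketch below; this is not taken from the linked tutorial, the key file names and the issuer URL are just placeholders, and the remaining flags follow your original command:</p>
<pre><code># Generate a key pair used to sign and verify service account tokens
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub

./kubernetes/server/bin/kube-apiserver \
  --etcd-servers=http://localhost:2379 \
  --service-cluster-ip-range=10.0.0.0/24 \
  --service-account-key-file=sa.pub \
  --service-account-signing-key-file=sa.key \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local
</code></pre>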
<p>Good luck</p>
| Aji Mufti Zakaria |
<p>I am getting this error when I want to install docker.io (<code>sudo apt-get install docker.io</code>)</p>
<p>The following information may help to resolve the situation:</p>
<pre><code>The following packages have unmet dependencies:
 containerd.io : Conflicts: containerd
                 Conflicts: runc
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
</code></pre>
<p>I have tried to reinstall containerd and runc, but that didn't solve the problem.</p>
| Oliver Domokos | <p><strong>try this</strong></p>
<p><code>sudo apt-get remove docker docker-engine docker.io containerd runc</code></p>
<p><code>sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose</code></p>
<p><code>sudo rm -rf /etc/bash_completion.d/docker /usr/local/bin/docker-compose /etc/bash_completion.d/docker-compose</code></p>
<p><code>sudo apt install containerd -y</code></p>
<p><code>sudo apt install -y docker.io docker-compose</code></p>
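<p>Afterwards you can quickly check that the installation worked, for example (hello-world is just Docker's own test image):</p>
<pre><code>sudo systemctl status docker
sudo docker run hello-world
</code></pre>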
| Somesh Mahajan |
<p>i have a question about kubernetes networking.</p>
<p>My working senario:</p>
<ul>
<li>I have a Jenkins container on my localhost and this container is up and running. Inside Jenkins, I have a job. To access Jenkins, I use the "http://localhost:8080" URL. (Jenkins is not running inside Kubernetes.)</li>
<li>My Flask app triggers the Jenkins job with this code:</li>
</ul>
<blockquote>
<pre><code> @app.route("/create",methods=["GET","POST"])
def create():
if request.method =="POST":
dosya_adi=request.form["sendmail"]
server = jenkins.Jenkins('http://localhost:8080/', username='my-user-name', password='my-password')
server.build_job('jenkins_openvpn', {'FILE_NAME': dosya_adi}, token='my-token')
</code></pre>
</blockquote>
<ul>
<li>Then I Dockerized this Flask app. My image name is: "jenkins-app"</li>
<li>If I run this command, everything works perfectly:</li>
</ul>
<blockquote>
<p><code>docker run -it --network="host" --name=jenkins-app jenkins-app</code></p>
</blockquote>
<p>But i want to do samething with kubernetes. For that i wrote this yml file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: jenkins-pod
spec:
hostNetwork: true
containers:
- name: jenkins-app
image: jenkins-app:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
</code></pre>
<ul>
<li>With this yml file, I can access the Flask app using port 5000. But when I try to trigger the Jenkins job, I get an error like this: requests.exceptions.ConnectionError</li>
</ul>
<p>Could you suggest a way to do this with Kubernetes?</p>
| ennur | <p>I created an endpoint.yml file and added the following to it, which solved my problem:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Endpoints
metadata:
name: jenkins-server
subsets:
- addresses:
- ip: my-ps-ip
ports:
- port: 8080
</code></pre>
<p>Then I changed this line in my Flask app like this:</p>
<pre class="lang-py prettyprint-override"><code>server = jenkins.Jenkins('http://my-ps-ip:8080/', username='my-user-name', password='my-password')
</code></pre>
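<p>For completeness, a manually created Endpoints object like this is normally paired with a selector-less Service of the same name; that would also let the app use the in-cluster DNS name <code>jenkins-server</code> instead of the raw IP. A minimal sketch, reusing the name and port from above:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins-server   # must match the Endpoints name
spec:
  # no selector, so Kubernetes will not overwrite the manual Endpoints
  ports:
    - port: 8080
      targetPort: 8080
</code></pre>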
| ennur |
<p>I have a requirement to convert a multi-pod setup to a single pod with multiple containers. I had pod x running the x microservice and pod y running the y microservice, with the REST endpoints below.</p>
<ul>
<li><code>http://x:8080/{context path-x}/endpoint</code></li>
<li><code>http://y:8080/{context path-y}/endpoint</code></li>
</ul>
<p>I want to have pod z with the x and y microservices, with container x exposed on port 8080 and y on port 8081 within the same pod. I am able to achieve this with a multi-container pod.</p>
<p>My problem is that the URLs have now changed:</p>
<ul>
<li><code>http://z:8080/{context path-x}/endpoint</code></li>
<li><code>http://z:8081/{context path-y}/endpoint</code></li>
</ul>
<p>I am looking for a way to hit the endpoints without changing the URLs, or with minimal changes, e.g. the URLs below:</p>
<ul>
<li><code>http://x:8080/{context path-x}/endpoint</code></li>
<li><code>http://y:8081/{context path-y}/endpoint</code></li>
</ul>
<p>My real project has 5 containers on a single pod and 100s of endpoints exposed.</p>
<p>How can I achieve this?</p>
| NAGESH KAMAT | <p>Here's how I addressed my problem:</p>
<p>Application Deployment File (x and y containers on deployment z)</p>
<pre><code>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: z
spec:
replicas: 1
progressDeadlineSeconds: 600
selector:
matchLabels:
component: z
template:
metadata:
annotations:
version: v1.0
labels:
component: z
occloud.oracle.com/open-network-policy: allow
name: z
spec:
containers:
- name: x
        image: x:dev
ports:
- containerPort: 8080
- name: y
image: y:dev
ports:
- containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
name: x
annotations:
version: v1.0
spec:
selector:
component: z
ports:
- name: x
port: 8080
targetPort: 8080
type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
name: y
annotations:
version: v1.0
spec:
selector:
component: z
ports:
- name: y
port: 8080
targetPort: 8081
type: ClusterIP
</code></pre>
<p>http://x:8080/{context path-x}/endpoint
http://y:8080/{context path-y}/endpoint</p>
| NAGESH KAMAT |
<p>I installed Prometheus on my Kubernetes cluster with Helm, using the community chart <a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> - and I get some beautiful dashboards in the bundled Grafana instance. I now wanted the recommender from the Vertical Pod Autoscaler to use Prometheus as a data source for historic metrics, <a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender" rel="nofollow noreferrer">as described here</a>. Meaning, I had to make a change to the Prometheus scraper settings for cAdvisor, and <a href="https://stackoverflow.com/a/65421764/310937">this answer</a> pointed me in the right direction, as after making that change I can now see the correct <code>job</code> tag on metrics from cAdvisor.</p>
<p>Unfortunately, now some of the charts in the Grafana dashboards are broken. It looks like it no longer picks up the CPU metrics - and instead just displays "No data" for the CPU-related charts.</p>
<p>So, I assume I have to tweak the charts to be able to pick up the metrics correctly again, but I don't see any obvious places to do this in Grafana?</p>
<p>Not sure if it is relevant for the question, but I am running my Kubernetes cluster on Azure Kubernetes Service (AKS).</p>
<p>This is the full <code>values.yaml</code> I supply to the Helm chart when installing Prometheus:</p>
<pre class="lang-yaml prettyprint-override"><code>kubeControllerManager:
enabled: false
kubeScheduler:
enabled: false
kubeEtcd:
enabled: false
kubeProxy:
enabled: false
kubelet:
serviceMonitor:
    # Disables the normal cAdvisor scraping, as we add it with the job name "kubernetes-cadvisor" under additionalScrapeConfigs
# The reason for doing this is to enable the VPA to use the metrics for the recommender
# https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender
cAdvisor: false
prometheus:
prometheusSpec:
retention: 15d
storageSpec:
volumeClaimTemplate:
spec:
# the azurefile storage class is created automatically on AKS
storageClassName: azurefile
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 50Gi
additionalScrapeConfigs:
- job_name: 'kubernetes-cadvisor'
scheme: https
metrics_path: /metrics/cadvisor
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
</code></pre>
<p>Kubernetes version: 1.21.2</p>
<p>kube-prometheus-stack version: 18.1.1</p>
<p>helm version: version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}</p>
| Søren Pedersen | <p>Unfortunately, I don't have access to Azure AKS, so I've reproduced this issue on my GKE cluster. Below I'll provide some explanations that may help to resolve your problem.</p>
<p>First you can try to execute this <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> rule to see if it returns any result:</p>
<p><a href="https://i.stack.imgur.com/UZ4uk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UZ4uk.png" alt="enter image description here" /></a></p>
<p>If it doesn't return any records, please read the following paragraphs.</p>
<h2>Creating a scrape configuration for cAdvisor</h2>
<p>Rather than creating a completely new scrape configuration for cadvisor, I would suggest using one that is generated by default when <code>kubelet.serviceMonitor.cAdvisor: true</code>, but with a few modifications such as changing the label to <code>job=kubernetes-cadvisor</code>.</p>
<p>In my example, the 'kubernetes-cadvisor' scrape configuration looks like this:</p>
<p><strong>NOTE:</strong> I added this config under the <code>additionalScrapeConfigs</code> in the <code>values.yaml</code> file (the rest of the <code>values.yaml</code> file may be like yours).</p>
<pre><code>- job_name: 'kubernetes-cadvisor'
honor_labels: true
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics/cadvisor
scheme: https
authorization:
type: Bearer
credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
follow_redirects: true
relabel_configs:
- source_labels: [job]
separator: ;
regex: (.*)
target_label: __tmp_prometheus_job_name
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
separator: ;
regex: kubelet
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_k8s_app]
separator: ;
regex: kubelet
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: https-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: https-metrics
action: replace
- source_labels: [__metrics_path__]
separator: ;
regex: (.*)
target_label: metrics_path
replacement: $1
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
kubeconfig_file: ""
follow_redirects: true
namespaces:
names:
- kube-system
</code></pre>
<h3>Modifying Prometheus Rules</h3>
<p>By default, Prometheus rules fetching data from cAdvisor use <code>job="kubelet"</code> in their PromQL expressions:</p>
<p><a href="https://i.stack.imgur.com/oNFnF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oNFnF.png" alt="enter image description here" /></a></p>
<p>After changing <code>job=kubelet</code> to <code>job=kubernetes-cadvisor</code>, we also need to modify this label in the Prometheus rules:<br />
<strong>NOTE:</strong> We just need to modify the rules that have <code>metrics_path="/metrics/cadvisor"</code> (these are the rules that retrieve data from cAdvisor).</p>
<pre><code>$ kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml
...
- name: k8s.rules
rules:
- expr: |-
sum by (cluster, namespace, pod, container) (
irate(container_cpu_usage_seconds_total{job="kubernetes-cadvisor", metrics_path="/metrics/cadvisor", image!=""}[5m])
) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
)
record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
...
here we have a few more rules to modify...
</code></pre>
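<p>If you prefer not to edit every expression by hand, the label can also be rewritten in bulk with something like the following (a rough sketch, not a hardened command - it assumes the default rules spell the selector exactly as <code>job="kubelet", metrics_path="/metrics/cadvisor"</code>, matching the modified expression shown above):</p>
<pre><code>kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml \
  | sed 's|job="kubelet", metrics_path="/metrics/cadvisor"|job="kubernetes-cadvisor", metrics_path="/metrics/cadvisor"|g' \
  | kubectl apply -f -
</code></pre>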
<p>After modifying Prometheus rules and waiting some time, we can see if it works as expected. We can try to execute <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> as in the beginning.</p>
<p>Additionally, let's check out our Grafana to make sure it has started displaying our dashboards correctly:
<a href="https://i.stack.imgur.com/Z7LRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7LRc.png" alt="enter image description here" /></a></p>
| kkopczak |
<p>I created a service account user and got the token for the user. However, every time I try to access the namespaces I get the following error:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "namespaces is forbidden: User \"system:serviceaccount:default:svcacc\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "namespaces"
},
"code": 403
}
</code></pre>
<p>This is my service account:</p>
<pre><code>Name: svcacc-token-87jd6
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: svcacc
kubernetes.io/service-account.uid: 384aa590-dac4-472c-a9a7-116c5fb0562b
Type: kubernetes.io/service-account-token
</code></pre>
<p>Do I need to give the service account roles or add it to a group? This is running in AWS EKS; I'm not sure if that makes a difference.</p>
<p>I am trying to use ServiceNow discovery to discover my Kubernetes cluster. Regardless of whether I am using ServiceNow or Postman, I get the same message.</p>
<p>EDIT:
Ended up using YAML to configure the service account and roles.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: svcacc
namespace: default
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: svcacc
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: svcacc
namespace: default
</code></pre>
<p>Once this was configured, I updated the <code>kubeconfig</code> and ran this to get the token:</p>
<pre><code>$(kubectl describe secrets "$(kubectl describe serviceaccount svcacc -n default| grep -i Tokens | awk '{print $2}')" -n default | grep token: | awk '{print $2}')
</code></pre>
| Brandon Wilson | <p>To clarify I am posting a Community Wiki answer.</p>
<p>You solved this problem using YAML file to configure the service account and roles.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: svcacc
namespace: default
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: svcacc
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: svcacc
namespace: default
</code></pre>
<p>And after that you updated the <code>kubeconfig</code> and ran this to get the token:</p>
<pre class="lang-yaml prettyprint-override"><code>$(kubectl describe secrets "$(kubectl describe serviceaccount svcacc -n default| grep -i Tokens | awk '{print $2}')" -n default | grep token: | awk '{print $2}')
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Here</a> is documentation about RBAC Authorization with many examples.</p>
<blockquote>
<p>Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.</p>
</blockquote>
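<p>As a side note, binding <code>cluster-admin</code> grants far more than is needed just to list namespaces. If read access is all the discovery user requires, a narrower ClusterRole could be bound instead (a sketch covering only namespaces; a discovery tool will typically need read access to more resources than this):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
</code></pre>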
| kkopczak |
<p>I have a Kubernetes cluster that is running a Jenkins Pod with a service set up for Metallb. Currently when I try to hit the <code>loadBalancerIP</code> for the pod outside of my cluster I am unable to. I also have a <code>kube-verify</code> pod that is running on the cluster with a service that is also using Metallb. When I try to hit that pod outside of my cluster I can hit it with no problem.</p>
<p>When I switch the service for the Jenkins pod to be of type <code>NodePort</code> it works but as soon as I switch it back to be of type <code>LoadBalancer</code> it stops working. Both the Jenkins pod and the working <code>kube-verify</code> pod are running on the same node.</p>
<p>Cluster Details:
The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq setup along with iptable rules that forward the connection from the wireless port to the Ethernet port. Each of the nodes is connected together via a switch via Ethernet. Metallb is setup up in layer2 mode with an address pool that is on the same subnet as the ip address of the wireless port of the master node. The <code>kube-proxy</code> is set to use <code>strictArp</code> and <code>ipvs</code> mode.</p>
<p><strong>Jenkins Manifest:</strong></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins-sa
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
---
apiVersion: v1
kind: Secret
metadata:
name: jenkins-secret
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
type: Opaque
data:
jenkins-admin-password: ***************
jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
data:
jenkins.yaml: |-
jenkins:
authorizationStrategy:
loggedInUsersCanDoAnything:
allowAnonymousRead: false
securityRealm:
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${jenkins-admin-username}"
name: "Jenkins Admin"
password: "${jenkins-admin-password}"
disableRememberMe: false
mode: NORMAL
numExecutors: 0
labelString: ""
projectNamingStrategy: "standard"
markupFormatter:
plainText
clouds:
- kubernetes:
containerCapStr: "10"
defaultsProviderTemplate: "jenkins-base"
connectTimeout: "5"
readTimeout: 15
jenkinsUrl: "jenkins-ui:8080"
jenkinsTunnel: "jenkins-discover:50000"
maxRequestsPerHostStr: "32"
name: "kubernetes"
serverUrl: "https://kubernetes"
podLabels:
- key: "jenkins/jenkins-agent"
value: "true"
templates:
- name: "default"
#id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
containers:
- name: "jnlp"
alwaysPullImage: false
args: "^${computer.jnlpmac} ^${computer.name}"
command: ""
envVars:
- envVar:
key: "JENKINS_URL"
value: "jenkins-ui:8080"
image: "jenkins/inbound-agent:4.11-1"
ttyEnabled: false
workingDir: "/home/jenkins/agent"
idleMinutes: 0
instanceCap: 2147483647
label: "jenkins-agent"
nodeUsageMode: "NORMAL"
podRetention: Never
showRawYaml: true
serviceAccount: "jenkins-sa"
slaveConnectTimeoutStr: "100"
yamlMergeStrategy: override
crumbIssuer:
standard:
excludeClientIPFromCrumb: true
security:
apiToken:
creationOfLegacyTokenEnabled: false
tokenGenerationOnCreationEnabled: false
usageStatisticsEnabled: true
unclassified:
location:
adminAddress:
url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv-volume
labels:
type: local
spec:
storageClassName: local-storage
claimRef:
name: jenkins-pv-claim
namespace: devops-tools
capacity:
storage: 16Gi
accessModes:
- ReadWriteMany
local:
path: /mnt
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pv-claim
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jenkins-cr
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-role-schedule-agents
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "pods/exec", "persistentvolumeclaims"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-casc-reload
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: jenkins-cr
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-schedule-agents
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-watch-configmaps
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
annotations:
metallb.universe.tf/address-pool: default
spec:
type: LoadBalancer
loadBalancerIP: 172.16.1.5
ports:
- name: ui
port: 8080
targetPort: 8080
externalTrafficPolicy: Local
selector:
app: jenkins
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-agent
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
ports:
- name: agents
port: 50000
targetPort: 50000
selector:
app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
version: v1
tier: backend
annotations:
checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
containers:
- name: jenkins
image: "heineza/jenkins-master:2.323-jdk11-1"
imagePullPolicy: Always
args: [ "--httpPort=8080"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
ports:
- containerPort: 8080
name: ui
- containerPort: 50000
name: agents
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 50m
memory: 256Mi
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
readOnly: false
- name: jenkins-config
mountPath: /var/jenkins_home/jenkins.yaml
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-username
subPath: jenkins-admin-user
readOnly: true
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-password
subPath: jenkins-admin-password
readOnly: true
serviceAccountName: "jenkins-sa"
volumes:
- name: jenkins-cache
emptyDir: {}
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pv-claim
- name: jenkins-config
configMap:
name: jenkins
- name: admin-secret
secret:
secretName: jenkins-secret
</code></pre>
<p>This Jenkins manifest is a modified version of what the Jenkins helm-chart generates. I redacted my secret but in the actual manifest there are <code>base64</code> encoded strings. Also, the docker image I created and use in the deployment uses the Jenkins 2.323-jdk11 image as a base image and I just installed some plugins for Configuration as Code, kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using Metallb?</p>
| heineza | <p>MetalLB by default doesn't allow re-using/sharing the same LoadBalancer IP address.</p>
<p>According to <a href="https://metallb.universe.tf/usage/" rel="nofollow noreferrer">MetalLB documentation</a>:</p>
<blockquote>
<p>MetalLB respects the <code>spec.loadBalancerIP</code> parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.</p>
<p>If MetalLB <strong>does not own</strong> the requested address, or if the address is <strong>already in use</strong> by another service, assignment will fail and MetalLB will log a warning event visible in <code>kubectl describe service <service name></code>.<a href="https://metallb.universe.tf/usage/#requesting-specific-ips" rel="nofollow noreferrer">[1]</a></p>
</blockquote>
<p>In case you need to have multiple services on a single IP, you can enable selective IP sharing. To do so, you have to add the <code>metallb.universe.tf/allow-shared-ip</code> annotation to the services.</p>
<blockquote>
<p>The value of the annotation is a "sharing key." Services can share an IP address under the following conditions:</p>
<ul>
<li>They both have the same sharing key.</li>
<li>They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).</li>
<li>They both use the <code>Cluster</code> external traffic policy, or they both point to the <em>exact</em> same set of pods (i.e. the pod selectors are identical). <a href="https://metallb.universe.tf/usage/#ip-address-sharing" rel="nofollow noreferrer">[2]</a></li>
</ul>
</blockquote>
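<p>As an illustration, this is roughly how the annotation would look on the Jenkins Service from the question if a second Service were meant to share <code>172.16.1.5</code> (a sketch; the sharing key <code>jenkins-share</code> is arbitrary, and the second Service would carry the same annotation value but different ports):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops-tools
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: "jenkins-share"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  externalTrafficPolicy: Cluster   # both sharing Services must use Cluster (or select identical pods)
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
  selector:
    app: jenkins
</code></pre>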
<hr />
<p><strong>UPDATE</strong></p>
<p>I tested your setup successfully with one minor difference -
I needed to remove <code>externalTrafficPolicy: Local</code> from the Jenkins Service spec.</p>
<p>Try this solution; if it still doesn't work, then it's a problem with your cluster environment.</p>
| kkopczak |
<p>I have two separate IngressControllers, one internal and one external. I would like to define which controller to use for each Ingress.</p>
<p>I have defined <code>--ingress.class=hapxroxy-ext</code> arg for the external controller and <code>--empty-ingress-class</code> for the internal.</p>
<p>Ingress Services</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
labels:
run: ext-haproxy-ingress
name: ext-haproxy-ingress
namespace: ext-haproxy-controller
spec:
selector:
run: ext-haproxy-ingress
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
annotations:
"service.beta.kubernetes.io/azure-load-balancer-internal": "true"
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: haproxy-controller
spec:
selector:
run: haproxy-ingress
type: LoadBalancer
</code></pre>
<p>I have IngressClasses.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb
spec:
controller: haproxy.org/ingress-controller/hapxroxy-ext
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: internal-lb
annotations:
"ingressclass.kubernetes.io/is-default-class": "true"
spec:
controller: haproxy.org/ingress-controller
</code></pre>
<p>I have one Ingress</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
"kubernetes.io/ingress.class": internal-lb
spec:
ingressClassName: internal-lb
...
</code></pre>
<p>Despite mapping the Ingress to just <code>internal-lb</code>, both <code>internal-lb</code> and <code>external-lb</code> handle requests.</p>
<p>It seems pretty straightforward in the <a href="https://www.haproxy.com/documentation/kubernetes/latest/configuration/controller/#--ingressclass" rel="nofollow noreferrer">docs</a>, but I'm missing something.</p>
| logicaldiagram | <p>This issue is due to a bug in <a href="https://github.com/haproxytech/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/haproxytech/kubernetes-ingress</a> when using IngressClassName in ingress.yaml. If you remove IngressClassName from your ingress.yaml and just use the "kubernetes.io/ingress.class" annotation, the issue goes away; it is more of a workaround than a fix.</p>
<p>This issue has been raised and is still open; see the link below for updates.</p>
<p><a href="https://github.com/haproxytech/kubernetes-ingress/issues/354#issuecomment-904551220" rel="nofollow noreferrer">https://github.com/haproxytech/kubernetes-ingress/issues/354#issuecomment-904551220</a></p>
| sniip-code |
<p>I have a github action that builds a docker image and pushes it to our repo.</p>
<pre><code>docker build -t mySuperCoolTag --build-arg PIP_INDEX_URL=${{ secrets.PIP_INDEX_URL }} .
docker push mySuperCoolTag
</code></pre>
<p>Per our deployment process, we take the SHA of the latest image, add it to our yaml files for K8s to read and use.</p>
<p>Originally, I incorrectly thought that the local SHA of the image was the same being pushed to the repo, and I grabbed it and added it to the file like so:</p>
<pre><code> docker images --no-trunc --quiet mySuperCoolTag
dockerSHA=$(docker images --no-trunc --quiet mySuperCoolTag)
#replace the current SHA in the configuration with the latest SHA
sed -i -E "s/sha256:\w*/$dockerSHA/g" config-file.yaml
</code></pre>
<p>This ended up not being the SHA I was looking for.</p>
<p><code>docker push</code> <em>does</em> output the expected SHA, but I'm not too sure how to programmatically grab that SHA short of having a script read the output and grab it from there, and I'm hoping there is a more succinct way to do it. Any ideas?</p>
| Another Stackoverflow User | <p>Ended up using this command instead:</p>
<pre><code>dockerSHA=$(docker inspect --format='{{index .RepoDigests 0}}' mySuperCoolTag | perl -wnE'say /sha256.*/g')
</code></pre>
<p>And it just works.</p>
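<p>An equivalent without the perl dependency, relying on the fact that each <code>RepoDigests</code> entry has the form <code>repo/name@sha256:...</code> (a sketch using the same tag as above):</p>
<pre><code>dockerSHA=$(docker inspect --format='{{index .RepoDigests 0}}' mySuperCoolTag | cut -d'@' -f2)
</code></pre>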
| Another Stackoverflow User |
<p>I am trying to create a cluster by following <a href="https://gridscale.io/en/community/tutorials/kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">this article</a> in my WSL Ubuntu, but it returns some errors.</p>
<p>Errors:</p>
<pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl daemon-reload
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl restart kubelet
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>I don't understand the reason. When I use <code>sudo systemctl restart kubelet</code>, an error like this occurs:</p>
<pre><code>docker service is not enabled, please run 'systemctl enable docker.service'
</code></pre>
<p>When I use:</p>
<pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ systemctl enable docker.service
Failed to enable unit, unit docker.service does not exist.
</code></pre>
<p>But I still have Docker images running:</p>
<p><a href="https://i.stack.imgur.com/Zs5eQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zs5eQ.png" alt="enter image description here" /></a></p>
<p>What is wrong when creating a Kubernetes cluster in WSL? Is there any good tutorial for creating a cluster in WSL?</p>
| Penguen | <p>The tutorial you're following is designed for cloud virtual machines with a Linux OS on them (this is important since WSL works a bit differently).
E.g. SystemD is not present in WSL; the behaviour you're facing is currently <a href="https://github.com/MicrosoftDocs/WSL/issues/457" rel="nofollow noreferrer">in the development phase</a>.</p>
<p>What you need is to follow a tutorial designated for WSL (WSL2 in this case). Also make sure Docker is set up on the Windows machine and shares its features via WSL integration. Please see the <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/" rel="nofollow noreferrer">Kubernetes on Windows desktop tutorial</a> (this uses KinD or minikube, which is enough for development and testing).</p>
<p>Also, there's a <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/#minikube-enabling-systemd" rel="nofollow noreferrer">part for enabling SystemD</a> which can potentially resolve your issue from the state you are currently in (I didn't test this as I don't have a Windows machine).</p>
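<p>On recent WSL2 builds (Microsoft Store version 0.67.6 or newer - this is an assumption about your setup), systemd can also be enabled directly inside the distro instead of the workaround from that post, roughly like this:</p>
<pre><code># /etc/wsl.conf inside the Ubuntu distro
[boot]
systemd=true
</code></pre>
<p>Then run <code>wsl --shutdown</code> from Windows and start the distro again so the change takes effect.</p>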
| moonkotte |
<p>In short:
I want to remove a docker image, but if I do so it tells me that it cannot be removed because the image is being used by a running container. But as far as I can tell there is no container running at all.</p>
<p>In detail:
I call <code>docker images -a</code> to see all images. This way I determine the Image ID which I want to delete and call <code>docker image rm {ID}</code> where {ID} is the string to be deleted (it worked for other images, so I am pretty confident so far).
I get the response:</p>
<p><em>Error response from daemon: conflict: unable to delete {ID} (cannot be forced) - image is being used by running container 08815cd48523</em>
(The ID at the end seems to change with every call)</p>
<p>This error appears to be pretty easy to understand, but if I call <code>docker ps -a</code>, it shows me that I do not have a single container running and therefore no container running with the specified ID.</p>
<p>This problem occurs with some images. But all seem to be related to Kubernetes.
Does anyone know what the problem is?</p>
<p>As asked for in the comments, here is the output of docker inspect on one of the invisible containers (I replaced all parts where I was not sure whether they contain sensitive data with "removed_for_post"):</p>
<pre><code>[
{
"Id": "Removed_for_post",
"Created": "2021-10-05T07:04:33.2059908Z",
"Path": "/pause",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3570,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-10-05T07:04:33.4266642Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
"ResolvConfPath": "/var/lib/docker/containers/ Removed_for_post /resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ Removed_for_post /hostname",
"HostsPath": "/var/lib/docker/containers/ Removed_for_post /hosts",
"LogPath": "/var/lib/docker/containers/ Removed_for_post / Removed_for_post -json.log",
"Name": "/k8s_POD_etcd-docker-desktop_kube-system_ Removed_for_post ",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": -998,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"no-new-privileges"
],
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 2,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "/kubepods/kubepods/besteffort/removed_for_Post",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ Removed_for_post -init/diff:/var/lib/docker/overlay2/ Removed_for_post /diff",
"MergedDir": "/var/lib/docker/overlay2/Removed_for_post /merged",
"UpperDir": "/var/lib/docker/overlay2/Removed_for_post /diff",
"WorkDir": "/var/lib/docker/overlay2/Removed_for_post /work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "docker-desktop",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "k8s.gcr.io/pause:3.2",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/pause"
],
"OnBuild": null,
"Labels": {
"annotation.kubeadm.kubernetes.io/etcd.advertise-client-urls": "https://192.168.65.4:2379",
"annotation.kubernetes.io/config.hash": " Removed_for_post ",
"annotation.kubernetes.io/config.seen": "2021-10-05T07:04:32.243805800Z",
"annotation.kubernetes.io/config.source": "file",
"component": "etcd",
"io.kubernetes.container.name": "POD",
"io.kubernetes.docker.type": "podsandbox",
"io.kubernetes.pod.name": "etcd-docker-desktop",
"io.kubernetes.pod.namespace": "kube-system",
"io.kubernetes.pod.uid": "removed for Post",
"tier": "control-plane"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": " Removed_for_post",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": " Removed_for_post ",
"EndpointID": " Removed_for_post ",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
]``
</code></pre>
| Manuel | <p>Your container seems to be created by another tool, like docker-compose, as I guess it is local on your computer.</p>
<p>This means you cannot easily stop it from being recreated with a docker rm statement.</p>
<h2>EDIT 1 - Kubernetes</h2>
<ul>
<li>You seem to be using Docker Desktop</li>
<li>Your container seems to be <code>etcd</code>, which is part of the Kubernetes master system (very important):</li>
</ul>
<blockquote>
<p>k8s_POD_etcd-docker-desktop_kube-system_ Removed_for_post</p>
</blockquote>
<p>So, as it is one of the Kubernetes components, you should remove it only if you no longer want to use Kubernetes at all:</p>
<blockquote>
<p>By removing Kubernetes, or at least the etcd pod.</p>
</blockquote>
<p>With Docker Desktop that should not be a problem, since Kubernetes can simply be disabled in its settings.</p>
<p>To summarize, <code>your container is created by a pod which is a resource handled by kubernetes</code></p>
<h2>docker-compose</h2>
<ul>
<li>Find your docker-compose directory (usually container names start with the project name) and go into that directory.</li>
<li>execute <code>docker-compose down</code></li>
</ul>
<h2>docker</h2>
<ul>
<li>Try to restart the Docker daemon,
for instance in a Linux-based environment:
<code>sudo systemctl restart docker</code></li>
<li>A reboot of your machine should work too.</li>
</ul>
<p>Finally you should be able to remove containers as proposed in your question.</p>
| Etienne Dijon |
<p>We are trying to create a namespace tied to a specific node pool. How can we achieve that on Azure Kubernetes Service?</p>
<pre><code>error: Unable to create namespace with specific node pool.
Ex: namespace for user nodepool1
</code></pre>
| satish p | <p>Posting this as a community wiki, feel free to edit and expand it.</p>
<p>As <a href="https://stackoverflow.com/users/5951680/luca-ghersi">Luca Ghersi</a> mentioned in the comments, it's possible to have namespaces assigned to specific nodes. For this there's an admission controller, <code>PodNodeSelector</code> (you can read about it in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector" rel="nofollow noreferrer">official Kubernetes documentation</a>).</p>
<p>In short words:</p>
<blockquote>
<p>This admission controller defaults and limits what node selectors may
be used within a namespace by reading a namespace annotation and a
global configuration.</p>
</blockquote>
<p>Based on <a href="https://learn.microsoft.com/en-us/azure/aks/faq#what-kubernetes-admission-controllers-does-aks-support-can-admission-controllers-be-added-or-removed" rel="nofollow noreferrer">Azure FAQ</a>, Azure AKS has this admission controller enabled by default.</p>
<pre><code>AKS supports the following admission controllers:
- NamespaceLifecycle
- LimitRanger
- ServiceAccount
- DefaultStorageClass
- DefaultTolerationSeconds
- MutatingAdmissionWebhook
- ValidatingAdmissionWebhook
- ResourceQuota
- PodNodeSelector
- PodTolerationRestriction
- ExtendedResourceToleration
Currently, you can't modify the list of admission controllers in AKS.
</code></pre>
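<p>In practice that means you can pin a namespace to a node pool with the <code>scheduler.alpha.kubernetes.io/node-selector</code> annotation. A minimal sketch - the namespace name is arbitrary and it assumes your user node pool is called <code>nodepool1</code> (AKS labels each node with an <code>agentpool</code> label equal to its pool name):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "agentpool=nodepool1"
</code></pre>
<p>Pods created in this namespace then get that node selector merged in and are scheduled only onto nodes of that pool.</p>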
| moonkotte |
<p>Is there an immutable equivalent for labels in Kubernetes that we can attach to nodes? I want to use labels to identify and segregate nodes, but I want to ensure those labels don't get modified.</p>
<p>I was looking at annotations but are they immutable?</p>
| sethu | <p>There is no such thing as immutable labels in Kubernetes. However, labels attached to Kubernetes nodes can only be updated by identities with RBAC permission to modify Node objects, which in practice is usually restricted to cluster admins.</p>
| Manmohan Mittal |
<p>I am trying to connect a folder in windows to a container folder. This is for a .NET app that needs to read files in a folder. In a normal docker container, with docker-compose, the app works without problems, but since this is only one of several different apps that we will have to monitor, we are trying to get kubernetes involved. That is also where we are failing.
As a beginner with Kubernetes, I used kompose.exe to convert the compose files to Kubernetes style. However, no matter if I use hostPath or persistentVolumeClaim as a flag, I do not get things to work "out of the box". With hostPath, the path is very incorrect, and with persistentVolumeClaim I get a warning saying volume mount on the host is not supported.
I therefore tried to do that part myself, but could not get it to work either with a persistent volume or by entering the mount data in the deployment file directly.
The closest I have come is that I can enter the folder, and I can change to subfolders within, but as soon as I try to run any other command, be it 'ls' or 'cat', I get "Operation not permitted".
Here is my docker compose file, which works as expected:</p>
<pre><code>version: "3.8"
services:
test-create-hw-file:
container_name: "testcreatehwfile"
image: test-create-hw-file:trygg
network_mode: "host"
volumes:
- /c/temp/testfiles:/app/files
</code></pre>
<p>Running konvert compose on that file:</p>
<pre><code>PS C:\temp> .\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
DEBU Checking validation of provider: kubernetes
DEBU Checking validation of controller:
DEBU Docker Compose version: 3.8
WARN Service "test-create-hw-file" won't be created because 'ports' is not specified
DEBU Compose file dir: C:\temp
DEBU Target Dir: .
INFO Kubernetes file "test-create-hw-file-deployment.yaml" created
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: test-create-hw-file
name: test-create-hw-file
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: test-create-hw-file
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: test-create-hw-file
spec:
containers:
- image: test-create-hw-file:trygg
name: testcreatehwfile
resources: {}
volumeMounts:
- mountPath: /app/files
name: test-create-hw-file-hostpath0
restartPolicy: Always
volumes:
- hostPath:
path: C:\temp\c\temp\testfiles
name: test-create-hw-file-hostpath0
status: {}
</code></pre>
<p>Running kubectl apply on that file just gives the infamous error "Error: Error response from daemon: invalid mode: /app/files", which means, as far as I can understand, not that the "/app/files" is wrong, but the format of the supposedly connected folder is incorrect. This is the quite weird <code>C:\temp\c\temp\testfiles</code> row. After googling and reading a lot, I have two ways of changing that, to either <code>/c/temp/testfiles</code> or <code>/host_mnt/c/temp/testfiles</code>. Both end up in the same "Operation not permitted". I am checking this via going into the CLI on the container in the docker desktop.</p>
<p>The image from the test is just an app that does nothing right now other than wait for five minutes, so that it does not quit before I can check the folder.
I am logged on to the shell as root, and I have this row for the folder when doing 'ls -lA':</p>
<pre><code>drwxrwxrwx 1 root root 0 Feb 7 12:04 files
</code></pre>
<p>Also, the <code>docker-user</code> has full access to the <code>c:\temp\testfiles</code> folder.</p>
<p><strong>Some version data:</strong></p>
<pre><code>Docker version 20.10.12, build e91ed57
Kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
Kompose version
1.26.1 (a9d05d509)
Host OS: Windows 10, 21H2
</code></pre>
<p>//Trygg</p>
| Tryggen | <p>Glad that my initial comment solved your issue. I would like to expand my thoughts a little in the form of an official answer.</p>
<p>To mount volumes using Kubernetes on Docker Desktop for Windows the path will be:</p>
<pre class="lang-yaml prettyprint-override"><code>/run/desktop/mnt/host/c/PATH/TO/FILE
</code></pre>
<p>Unfortunately there is no official documentation but <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567594291" rel="noreferrer">here</a> is a good comment with explanation that this is related to Docker Daemon:</p>
<blockquote>
<p>/mnt/wsl is actually the mount point for the cross-distro mounts tmpfs<br />
Docker Daemon mounts it in its /run/desktop/mnt/host/wsl directory</p>
</blockquote>
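<p>Applied to the Deployment from the question, only the <code>hostPath</code> needs to change - a sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>      volumes:
        - hostPath:
            path: /run/desktop/mnt/host/c/temp/testfiles
          name: test-create-hw-file-hostpath0
</code></pre>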
| RadekW |
<p>I have used this document for creating kafka <a href="https://kow3ns.github.io/kubernetes-kafka/manifests/" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/</a></p>
<p>I was able to create ZooKeeper, but I am facing an issue with the creation of Kafka: it gets an error connecting to ZooKeeper.</p>
<p>These are the manifests I have used.
For Kafka:</p>
<p><a href="https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml</a>
for Zookeeper</p>
<p><a href="https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml" rel="nofollow noreferrer">https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml</a></p>
<p><strong>The logs of the Kafka pod:</strong></p>
<pre><code> kubectl logs -f pod/kafka-0 -n kaf
[2021-10-19 05:37:14,535] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 0.10.2-IV0
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
listeners = PLAINTEXT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /var/lib/kafka
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.2-IV0
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 1000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = zk-cs.default.svc.cluster.local:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-10-19 05:37:14,569] INFO starting (kafka.server.KafkaServer)
[2021-10-19 05:37:14,570] INFO Connecting to zookeeper on zk-cs.default.svc.cluster.local:2181 (kafka.server.KafkaServer)
[2021-10-19 05:37:14,579] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,583] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:host.name=kafka-0.kafka-hs.kaf.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-file-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-json-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-transforms-0.10.2.1.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.version=5.4.141-67.229.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.home=/home/kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,584] INFO Initiating client connection, connectString=zk-cs.default.svc.cluster.local:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e0826e7 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,591] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,592] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
... 10 more
[2021-10-19 05:37:14,594] INFO shutting down (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] INFO shut down completed (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
... 10 more
</code></pre>
<p><a href="https://i.stack.imgur.com/P1FuF.png" rel="nofollow noreferrer">Crash-Loop-Kafka</a>
<a href="https://i.stack.imgur.com/Ang7w.png" rel="nofollow noreferrer">kafka deployed manifest</a></p>
| Madan | <p>Your Kafka and Zookeeper deployments are running in the <code>kaf</code> namespace according to your screenshots; presumably you set this up manually and applied the configurations while in that namespace. Neither the Kafka nor the Zookeeper YAML files explicitly state a namespace in their metadata, so they are deployed to whatever namespace is active when they are created.</p>
<p>Anyway, the Kafka deployment YAML you have is hardcoded to assume Zookeeper is setup in the <code>default</code> namespace, with the following line:</p>
<pre><code> --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \
</code></pre>
<p>Change this to:</p>
<pre><code> --override zookeeper.connect=zk-cs.kaf.svc.cluster.local:2181 \
</code></pre>
<p>and it should connect. Whether that's by downloading and locally editing the YAML file etc.</p>
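<p>To verify the change before restarting Kafka, you can check that the Zookeeper service name resolves from inside the cluster. A quick sketch (the service and namespace names are taken from your manifests; the busybox image is just a common choice for DNS debugging):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n kaf \
  -- nslookup zk-cs.kaf.svc.cluster.local
</code></pre>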
<p>Alternatively deploy Zookeeper into the <code>default</code> namespace.</p>
<p>I also recommend looking at other options like <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka/#installing-the-chart" rel="nofollow noreferrer">Bitnami Kafka Helm charts</a>, which deploy Zookeeper alongside Kafka as needed, manage most of the connection details and allow for easier customisation. They are also kept far more up to date.</p>
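<p>For reference, a minimal sketch of installing that chart (the release name and namespace below are placeholders, and the chart values will almost certainly need tuning for your setup):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-kafka bitnami/kafka --namespace kaf
</code></pre>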
| clarj |
<p>I am new to DevOps and I am deploying an application on OpenShift where users can upload PDF / jpg files...</p>
<p>However, I am not sure if provisioning a persistent volume is enough, and how it would be possible to display all these files later (graphical interface). I need some solution similar to an S3 bucket in AWS.</p>
| Bilal | <p>MinIO provides a consistent, performant and scalable object store because it is Kubernetes-native by design and S3 compatible from inception.</p>
<p><a href="https://min.io/product/private-cloud-red-hat-openshift" rel="nofollow noreferrer">https://min.io/product/private-cloud-red-hat-openshift</a></p>
<p>We have installed & tested MinIO on OpenShift 3.11 successfully.</p>
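<p>As a rough illustration only (names, storage size and credentials are placeholders, and production setups usually go through the MinIO Operator instead), a standalone MinIO instance backed by a persistent volume claim could look something like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data", "--console-address", ":9001"]
        env:
        - name: MINIO_ROOT_USER
          value: "changeme"          # placeholder credentials
        - name: MINIO_ROOT_PASSWORD
          value: "changeme123"
        ports:
        - containerPort: 9000        # S3-compatible API
        - containerPort: 9001        # web console (graphical interface)
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-data
</code></pre>
<p>The MinIO console then gives you the browsable, S3-like view of the uploaded files.</p>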
| Manmohan Mittal |
<h1>Suspending & resuming my virtual-machine does break the k8s deployment</h1>
<p>When I suspend with <code>minikube stop</code> and then resume the Virtual Machine with <code>minikube start</code>, Minikube re-deploys my app from scratch.</p>
<p>I see this behaviour with newer versions of Minikube higher than <em>v1.18</em> (I run on <em>v1.19</em>).</p>
<hr />
<h1>The setup:</h1>
<ul>
<li>The <em>Kubernetes</em> deployment mounts a volume with the source code from my host machine, via <code>hostPath</code>.</li>
<li>Also I have an <code>initContainers</code> container that sets up the application.</li>
</ul>
<p>Since the new <em>"redeploy behaviour on resume"</em> happens, the init-container <strong>breaks my deploy, <em>if</em> I have work-in-progress code on my host machine</strong>..</p>
<h1>The issue:</h1>
<p>Now, if I have temporary/non-perfectly-running code, I cannot suspend the machine with unfinished work anymore, between working days; because every time I resume it <strong>Minikube will try to deploy again but with broken code</strong> and fail with an <code>Init:CrashLoopBackOff</code>.</p>
<h1>The workaround:</h1>
<p>For now, each time I resume the machine I need to</p>
<ol>
<li>stash/commit my WIP code</li>
<li>checkout the last commit with working deployment</li>
<li>run the deployment & wait for it to complete the initialization (minutes...)</li>
<li>checkout/stash-pop the code saved at point <em>1)</em>.</li>
</ol>
<p>I can survive, but the workflow is terrible.</p>
<h1>How do I restore the old behaviour?</h1>
<ul>
<li><em>How do I make my deploys to stay untouched, as expected when suspending the VM, instead of being re-deployed every time I resume?</em></li>
</ul>
| Kamafeather | <p>In short, there are two ways to achieve what you want:</p>
<ul>
<li>On current versions of <code>minikube</code> and <code>virtualbox</code> you can use the <code>save state</code> option in VirtualBox directly.</li>
<li>Move the initContainer's code to a separate <code>job</code>.</li>
</ul>
<p><strong>More details about minikube + VirtualBox</strong></p>
<p>I have an environment with minikube version 1.20, VirtualBox 6.1.22 (from yesterday) and macOS. The minikube driver is set to <code>virtualbox</code>.</p>
<p>First, <code>minikube</code> + <code>VirtualBox</code>, in different scenarios:</p>
<p><code>minikube stop</code> does the following:</p>
<blockquote>
<p>Stops a local Kubernetes cluster. This command stops the underlying VM
or container, but keeps user data intact.</p>
</blockquote>
<p>What happens is that the virtual machine where minikube is set up stops entirely. <code>minikube start</code> starts the VM and all processes in it. All containers are started as well, so if your pod has an init-container, it will run first anyway.</p>
<p><code>minikube pause</code> pauses all processes and frees up CPU resources while memory stays allocated. <code>minikube unpause</code> brings back CPU resources and continues executing containers from the state they were paused in.</p>
<p>Based on the different scenarios I tried with <code>minikube</code>, what you want is not achievable using minikube commands alone. To avoid any state loss in your <code>minikube</code> environment due to a host restart or the need to stop a VM to free resources, you can use the <code>save state</code> feature in VirtualBox, either in the UI or on the CLI. Here is what it does:</p>
<blockquote>
<p><strong>VBoxManage controlvm savestate</strong>: Saves the current state of the VM to disk and then stops the VM.</p>
</blockquote>
<p>VirtualBox creates something like a snapshot with all memory content in it. When the virtual machine is started again, VirtualBox restores the VM to the state it was in when it was saved.</p>
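<p>A minimal sketch of doing this from the CLI, assuming the VM created by the virtualbox driver is named <code>minikube</code> (check with <code>VBoxManage list vms</code>):</p>
<pre><code># Save the VM state (memory included) to disk and power it off
VBoxManage controlvm "minikube" savestate

# Later, resume the VM from the saved state
VBoxManage startvm "minikube" --type headless
</code></pre>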
<p>One more assumption: if this works the same way as in v1.20, then it is expected behaviour and not a bug (otherwise it would have been fixed already).</p>
<p><strong>Init-container and jobs</strong></p>
<p>You may consider moving your init-container's code to a separate <code>job</code> so you avoid any issues with unintended pod restarts breaking your deployment in the main container. It's also advised to keep the init-container's code idempotent.
Here's a quote from the official documentation:</p>
<blockquote>
<p>Because init containers can be restarted, retried, or re-executed,
init container code should be idempotent. In particular, code that
writes to files on <code>EmptyDirs</code> should be prepared for the possibility
that an output file already exists.</p>
</blockquote>
<p>This can be achieved by using <code>jobs</code> in Kubernetes, which you can run manually whenever you need to.
To keep the workflow intact, you can place a check in the deployment's init container for <code>Job completion</code> or for a specific file on a data volume, indicating that the setup code has finished and the deployment is fine to proceed.</p>
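<p>A rough sketch of that idea, assuming the setup code writes a marker file to a shared <code>hostPath</code> volume once it finishes (names, paths, commands and images below are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: app-setup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: setup
        image: my-setup-image        # placeholder: runs your setup code
        command: ["sh", "-c", "do-setup && touch /work/setup-done"]
        volumeMounts:
        - name: work
          mountPath: /work
      volumes:
      - name: work
        hostPath:
          path: /data/app-setup
---
# Fragment of the Deployment's pod spec: the init container only waits
# for the marker file instead of running the setup itself
#   initContainers:
#   - name: wait-for-setup
#     image: busybox:1.28
#     command: ["sh", "-c", "until [ -f /work/setup-done ]; do echo waiting; sleep 5; done"]
#     volumeMounts:
#     - name: work
#       mountPath: /work
</code></pre>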
<p>Links with more information:</p>
<ul>
<li><p><a href="https://www.virtualbox.org/manual/ch08.html#vboxmanage-controlvm" rel="nofollow noreferrer">VirtualBox <code>save state</code></a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior" rel="nofollow noreferrer">initContainers</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">kubernetes jobs</a></p>
</li>
</ul>
| moonkotte |
<p>I am new to Kubernetes and this is my first time deploying a react-django web app to Kubernetes cluster.</p>
<p>I have created:</p>
<ol>
<li>frontend.yaml # to run npm server</li>
<li>backend.yaml # to run django server</li>
<li>backend-service.yaml # to make django server accessible for react.</li>
</ol>
<p>In my frontend.yaml file I am passing <code>REACT_APP_HOST</code> and <code>REACT_APP_PORT</code> as env variables and have changed the URLs in my react app to:</p>
<pre><code>axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/').then(res => {
setBuckets(res.data);
setReload(false);
}).catch(err => {
console.log(err);
})
</code></pre>
<p>and my URL becomes <code>http://backend-service:8000/todolist/api/bucket/</code></p>
<p>Here <code>backend-service</code> is the name of the backend service, which I am passing in via the env variable <code>REACT_APP_HOST</code>.</p>
<p>I am not getting any errors, but when I used <code>kubectl port-forward <frontend-pod-name> 3000:3000</code> and accessed <code>localhost:3000</code> I saw my react app page but it did not hit any django apis.</p>
<p>On chrome, I am getting error:</p>
<pre><code>net::ERR_NAME_NOT_RESOLVED
</code></pre>
<p>and in Mozilla:</p>
<pre><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend-service:8000/todolist/api/bucket/. (Reason: CORS request did not succeed).
</code></pre>
<p>Please help on this issue, I have spent 3 days but not getting any ideas.</p>
<p><strong>frontend.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: frontend
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: frontend
spec:
containers:
- image: 1234567890/todolist:frontend-v13
name: react-todolist
env:
- name: REACT_APP_HOST
value: "backend-service"
- name: REACT_APP_PORT
value: "8000"
ports:
- containerPort: 3000
volumeMounts:
- mountPath: /var/log/
name: frontend
command: ["/bin/sh", "-c"]
args:
- npm start;
volumes:
- name: frontend
hostPath:
path: /var/log/
</code></pre>
<p><strong>backend.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: backend
name: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
creationTimestamp: null
labels:
app: backend
spec:
serviceAccountName: backend-sva
containers:
- image: 1234567890/todolist:backend-v11
name: todolist
env:
- name: DB_NAME
value: "todolist"
- name: MYSQL_HOST
value: "mysql-service"
- name: MYSQL_USER
value: "root"
- name: MYSQL_PORT
value: "3306"
- name: MYSQL_PASSWORD
value: "mysql123"
ports:
- containerPort: 8000
volumeMounts:
- mountPath: /var/log/
name: backend
command: ["/bin/sh", "-c"]
args:
- apt-get update;
apt-get -y install vim;
python manage.py makemigrations bucket;
python manage.py migrate;
python manage.py runserver 0.0.0.0:8000
volumes:
- name: backend
hostPath:
path: /var/log/
</code></pre>
<p><strong>backend-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: backend
name: backend-service
spec:
ports:
- port: 8000
targetPort: 8000
selector:
app: backend
status:
loadBalancer: {}
</code></pre>
<p><strong>frontend docker file</strong></p>
<pre><code>FROM node:14.16.1-alpine
COPY package.json /app/react-todolist/react-todolist/
WORKDIR /app/react-todolist/react-todolist/
RUN npm install
COPY . /app/react-todolist/react-todolist/
EXPOSE 3000
</code></pre>
<p><strong>backend docker file</strong></p>
<pre><code>FROM python:3.6
COPY requirements.txt ./app/todolist/
WORKDIR /app/todolist/
RUN pip install -r requirements.txt
COPY . /app/todolist/
</code></pre>
<p><strong>django settings</strong></p>
<pre><code>CORS_ORIGIN_ALLOW_ALL=True
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Rest Frame Work
'rest_framework',
# CORS
'corsheaders',
# Apps
'bucket',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-ingress
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
backend:
serviceName: frontend-service
servicePort: 3000
- path: /
backend:
serviceName: backend-service
servicePort: 8000
</code></pre>
<p><strong>react axios api</strong></p>
<pre><code>useEffect(() => {
axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/', {
headers: {"Access-Control-Allow-Origin": "*"}
}).then(res => {
setBuckets(res.data);
setReload(false);
}).catch(err => {
console.log(err);
})
}, [reload])
</code></pre>
<p><strong>web app github link</strong> <a href="https://github.com/vgautam99/ToDoList" rel="nofollow noreferrer">https://github.com/vgautam99/ToDoList</a></p>
| Vikas Gautam | <p>Welcome to the community!</p>
<p>I reproduced your example and made it work fine.
I forked your repository, made some changes to the js files and package.json, and added Dockerfiles (you can see this commit <a href="https://github.com/fivecatscats/ToDoList/commit/74790836659232284832688beb2e1779660d7615" rel="nofollow noreferrer">here</a>).</p>
<p>Since I didn't change database settings in <code>settings.py</code> I attached it as a <code>configMap</code> to backend deployment (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/backend-deploy.yaml#L37-L39" rel="nofollow noreferrer">here</a> how it's done). Config map was created by this command:</p>
<p><code>kubectl create cm django1 --from-file=settings.py</code></p>
<p>The trickiest part here is to use your domain name <code>kubernetes.docker.internal</code> and add your port with <code>/backend</code> path to environment variables you're passing to your frontend application (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/frontend-deploy.yaml#L21-L24" rel="nofollow noreferrer">here</a>)</p>
<p>Once this is done, it's time to set up an ingress controller (this one uses apiVersion <code>extensions/v1beta1</code> as in your example; however it'll be deprecated soon, so it's advised to use <code>networking.k8s.io/v1</code> - an example of the newer apiVersion is <a href="https://github.com/fivecatscats/ToDoList/blob/master/ingress-after-1-22.yaml" rel="nofollow noreferrer">here</a>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-backend-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /backend(/|$)(.*)
backend:
serviceName: backend-service
servicePort: 8000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-frontend-ingress
annotations:
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
backend:
serviceName: frontend-service
servicePort: 3000
</code></pre>
<p>I set it up as two different ingresses because there are some issues with <code>rewrite-target</code> and <code>regex</code> when using the root path <code>/</code>. As you can see, we use <code>rewrite-target</code> here because requests are supposed to hit the <code>/todolist/api/bucket</code> path instead of the <code>/backend/todolist/api/bucket</code> path.
Please see the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">nginx rewrite annotations</a>.</p>
<p>The next step is to find an IP address to test your application from the node where kubernetes is running and from the web.
To find IP addresses and ports, run <code>kubectl get svc</code> and look for <code>ingress-nginx-controller</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend-service ClusterIP 10.100.242.79 <none> 8000/TCP 21h
frontend-service ClusterIP 10.110.102.96 <none> 3000/TCP 21h
ingress-nginx-controller LoadBalancer 10.107.31.20 192.168.1.240 80:31323/TCP,443:32541/TCP 8d
</code></pre>
<p>There are two options: <code>CLUSTER-IP</code>, and <code>EXTERNAL-IP</code> if you have a load balancer set up.
On your kubernetes control plane, you can run a simple check with the <code>curl</code> command using the <code>CLUSTER-IP</code> address. In my case it looks like:</p>
<p><code>curl http://kubernetes.docker.internal/ --resolve kubernetes.docker.internal:80:10.107.31.20</code></p>
<p>And next test is:</p>
<p><code>curl -I http://kubernetes.docker.internal/backend/todolist/api/bucket --resolve kubernetes.docker.internal:80:10.107.31.20</code></p>
<p>Output will be like:</p>
<pre><code>HTTP/1.1 301 Moved Permanently
Date: Fri, 14 May 2021 12:21:59 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /todolist/api/bucket/
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Vary: Origin
</code></pre>
<p>The next step is to access your application via a web browser. You'll need to edit <code>/etc/hosts</code> on your local machine (Linux/macOS; for Windows it's a bit different, but very easy to find) to map the <code>kubernetes.docker.internal</code> domain to the proper IP address.</p>
<p>If you're using a <code>load balancer</code> then <code>EXTERNAL-IP</code> is the right address.
If you don't have a <code>load balancer</code> then it's possible to reach the node directly. You can find its IP address in the cloud console and add it to <code>/etc/hosts</code>. In this case you will need to use a different port; in my case it was <code>31323</code> (you can find it above in the <code>kubectl get svc</code> output).</p>
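<p>For example, the <code>/etc/hosts</code> entry for the setup above could look like this (the IP address is from my environment; yours will differ):</p>
<pre><code># /etc/hosts
192.168.1.240   kubernetes.docker.internal
</code></pre>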
<p>When it's set up, I hit the application in my web-browser by <code>http://kubernetes.docker.internal:31323</code></p>
<p>(Repository is <a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">here</a> feel free to use everything you need from it)</p>
| moonkotte |
<p>Iβm looking for a breakdown of the minimal requirements for a kubelet implementation. Something like sequence diagrams/descriptions and APIs.</p>
<p>Iβm looking to write a minimal kubelet I can run on a reasonably capable microcontroller so that app binaries can be loaded and managed from an existing cluster (the container engine would actually flash to a connected microcontroller and restart). Iβve been looking through the kubelet code and thereβs a lot to follow so any starting points would be helpful.</p>
<p>A related question, does a kubelet need to run gRPC or can it fall back to a RESTful api? (thereβs no existing gRPC I can run on the micro but there is nanopb and existing https APIs)</p>
| Trevor | <p>This probably won't be a full answer, however there are some details that will help you.</p>
<p>First I'll start with the related question about using <code>gRPC</code> and/or a <code>REST API</code>.
Based on the <a href="https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go#L160-L174" rel="nofollow noreferrer">kubelet code</a> there is a server creation part that handles HTTP requests. Taking this into account, we can conclude that the <code>kubelet</code> receives requests on its HTTPS endpoint.
This is also indirectly confirmed by the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#overview" rel="nofollow noreferrer">kubelet authentication/authorization documentation</a>, which only covers the <code>HTTPS endpoint</code>.</p>
<p>Moving to the API part: it's still not documented properly, so the best way to find some information is to look into the code, e.g. <a href="https://github.com/kubernetes/kubernetes/blob/bd239d42e463bff7694c30c994abd54e4db78700/pkg/kubelet/server/server.go#L76-L84" rel="nofollow noreferrer">about endpoints</a>.</p>
<p>The last part is <a href="https://www.deepnetwork.com/blog/2020/01/13/kubelet-api.html" rel="nofollow noreferrer">this useful page</a> where a lot of information about the <code>kubelet API</code> is gathered.</p>
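<p>To get a feel for that HTTPS endpoint, you can poke the kubelet API of an existing node directly. A hedged sketch (the node IP is a placeholder, port 10250 is the default, and the token must belong to a service account that is allowed to access <code>nodes/proxy</code>):</p>
<pre><code># Health check
curl -k https://&lt;node-ip&gt;:10250/healthz \
  -H "Authorization: Bearer $TOKEN"

# List the pods this kubelet is responsible for
curl -k https://&lt;node-ip&gt;:10250/pods \
  -H "Authorization: Bearer $TOKEN"
</code></pre>
<p>These are the kinds of endpoints a minimal kubelet implementation would need to serve, in addition to watching the API server for pods assigned to its node.</p>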
| moonkotte |
<p>It looks like there is no support for deleting a HorizontalPodAutoscaler using fabric8's K8S Java client ver. 6.0.0.</p>
<p>Although it is straightforward to create a HorizontalPodAutoscaler using fabric8's K8S Java client ver. 6.0.0.</p>
<p>E.g.</p>
<pre><code> HorizontalPodAutoscalerStatus hpaStatus = k8sClient.resource(createHPA())
.inNamespace(namespace)
.createOrReplace().getStatus();
</code></pre>
<pre><code>public HorizontalPodAutoscaler createHPA(){
return new HorizontalPodAutoscalerBuilder()
.withNewMetadata()
.withName(applicationName)
.addToLabels("name", applicationName)
.endMetadata()
.withNewSpec()
.withNewScaleTargetRef()
.withApiVersion(hpaApiVersion)
.withKind("Deployment")
.withName(applicationName)
.endScaleTargetRef()
.withMinReplicas(minReplica)
.withMaxReplicas(maxReplica)
.addNewMetric()
.withType("Resource")
.withNewResource()
.withName("cpu")
.withNewTarget()
.withType("Utilization")
.withAverageUtilization(cpuAverageUtilization)
.endTarget()
.endResource()
.endMetric()
.addNewMetric()
.withType("Resource")
.withNewResource()
.withName("memory")
.withNewTarget()
.withType("AverageValue")
.withAverageValue(new Quantity(memoryAverageValue))
.endTarget()
.endResource()
.endMetric()
.withNewBehavior()
.withNewScaleDown()
.addNewPolicy()
.withType("Pods")
.withValue(podScaleDownValue)
.withPeriodSeconds(podScaleDownPeriod)
.endPolicy()
.withStabilizationWindowSeconds(podScaledStabaliztionWindow)
.endScaleDown()
.endBehavior()
.endSpec().build();
}
</code></pre>
<p>Any solution to delete HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0 will be appriciated.</p>
| Vinay | <p>First, you need to identify which API group <code>(v1, v2beta1, v2beta2)</code> was used when the HPA was created. The autoscaling function then needs to be called with that same API group to get the HPA instance, after which you can perform any action on it.</p>
<p>In my case the deployment was created with the v2beta2 API group. The code snippet below helped me delete the HorizontalPodAutoscaler object from the provided namespace.</p>
<pre><code>k8sClient.autoscaling().v2beta2().horizontalPodAutoscalers().inNamespace("test").withName("myhpa").delete()
</code></pre>
| Vinay |
<p>I have a next js app that I am trying to deploy to a kubernetes cluster as a deployment. Parts of the application contain axios http requests that reference an environment variable containing the value of a backend service.</p>
<p>If I am running locally, everything works fine, here is what I have in my <code>.env.local</code> file:</p>
<pre><code>NEXT_PUBLIC_BACKEND_URL=http://localhost:8080
</code></pre>
<p>Anywhere in the app, I can successfully access this variable with <code>process.env.NEXT_PUBLIC_BACKEND_URL</code>.</p>
<p>When I create a kubernetes deployment, I try to inject that same env variable via a configMap and the variable shows as <code>undefined</code>.</p>
<p><code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: my-site-frontend
name: my-site-frontend
spec:
replicas: 1
selector:
matchLabels:
app: my-site-frontend
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: my-site-frontend
spec:
containers:
- image: my-site:0.1
name: my-site
resources: {}
envFrom:
- configMapRef:
name: my-site-frontend
imagePullSecrets:
- name: dockerhub
</code></pre>
<p><code>configMap.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-site-frontend
data:
NEXT_PUBLIC_BACKEND_URL: backend_service
</code></pre>
<p>When I run the deployment and expose the application via a nodePort, I see these environment variables as <code>undefined</code> in my browser console. All api calls to my backend_service (ClusterIP) fail as you can imagine.</p>
<p>I can see the env variable is present when I exec into the running pod.</p>
<pre><code>my-mac:manifests andy$ k get pods
NAME READY STATUS RESTARTS AGE
my-site-frontend-77fb459dbf-d996n 1/1 Running 0 25m
---
my-mac:manifests andy$ k exec -it my-site-frontend-77fb459dbf-d996n -- sh
---
/app $ env | grep NEXT_PUBLIC
NEXT_PUBLIC_BACKEND_URL=backend_service
</code></pre>
<p>Any idea as to why the build process for my app does not account for this variable?</p>
<p>Thanks!</p>
| Andy | <p><strong>Make sure the kubernetes part did its job right</strong></p>
<p>First, we need to check whether the environment variables actually get to the pod. Your approach works; however, there are cases when <code>kubectl exec -it pod_name -- sh / bash</code> creates a different session and all configmaps can be reloaded again.</p>
<p>So let's check that it works right after the pod is created and the environment is present.</p>
<p>I created a deployment with your base, put <code>nginx</code> image and extended <code>spec</code> part with:</p>
<pre><code>command: ["/bin/bash", "-c"]
args: ["env | grep BACKEND_URL ; nginx -g \"daemon off;\""]
</code></pre>
<p>Right after the pod started, I got the logs and confirmed the environment variable is present:</p>
<pre><code>kubectl logs my-site-frontend-yyyyyyyy-xxxxx -n name_space | grep BACKEND
NEXT_PUBLIC_BACKEND_URL=SERVICE_URL:8000
</code></pre>
<p><strong>Why the browser doesn't show environment variables</strong></p>
<p>This part is more tricky. Based on some research on <code>next.js</code>, the variables must be set before the project is built (more details <a href="https://nextjs.org/docs/basic-features/environment-variables#exposing-environment-variables-to-the-browser" rel="nofollow noreferrer">here</a>):</p>
<blockquote>
<p>The value will be inlined into JavaScript sent to the browser because
of the NEXT_PUBLIC_ prefix. This inlining occurs at build time, so
your various NEXT_PUBLIC_ envs need to be set when the project is
built.</p>
</blockquote>
<p>You can also see a <a href="https://github.com/vercel/next.js/tree/canary/examples/environment-variables" rel="nofollow noreferrer">good example of using environment variables</a> from <code>next.js</code> github project. You can try <code>Open in StackBlitz</code> option, very convenient and transparent.</p>
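<p>In practice this means the value has to be present when <code>npm run build</code> runs during the image build, not only when the container starts. A hedged sketch of how the frontend Dockerfile could take it as a build argument (only the variable name comes from your setup; the rest is illustrative):</p>
<pre><code># Dockerfile (frontend) - build-time injection of NEXT_PUBLIC_* variables
FROM node:14-alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

ARG NEXT_PUBLIC_BACKEND_URL
ENV NEXT_PUBLIC_BACKEND_URL=$NEXT_PUBLIC_BACKEND_URL

RUN npm run build
CMD ["npm", "start"]
</code></pre>
<p>and build it with something like <code>docker build --build-arg NEXT_PUBLIC_BACKEND_URL=http://kubernetes.docker.internal/backend -t my-frontend .</code>, so the inlined value already matches the ingress paths set up below.</p>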
<p>At this point you may want to introduce DNS names since IPs can be changed and also different URL paths for front and back ends (depending on the application, below is an example of <code>react</code> app)</p>
<p><strong>Kubernetes ingress</strong></p>
<p>If you decide to use DNS, then you may run into necessity to route the traffic.</p>
<p>Short note what ingress is:</p>
<blockquote>
<p>An API object that manages external access to the services in a
cluster, typically HTTP.</p>
<p>Ingress may provide load balancing, SSL termination and name-based
virtual hosting.</p>
</blockquote>
<p>Why this is needed: once you have a DNS endpoint, the frontend and backend should be separated by path but share the same domain name, to avoid any CORS issues etc. (this can be resolved in other ways of course; this setup is mainly for testing and developing on a local cluster).</p>
<p><a href="https://stackoverflow.com/questions/67470540/react-is-not-hitting-django-apis-on-kubernetes-cluster/67534740#67534740">This is a good case</a> for solving issues with a <code>react</code> application with a <code>python</code> backend. Since <code>next.js</code> is an open-source React front-end development web framework, it should be useful.</p>
<p>In this case, there's a frontend which is located on <code>/</code> with a service on port <code>3000</code>, and a backend which is located on <code>/backend</code> (please see the deployment example).
Below is how to set up <code>/etc/hosts</code>, test it and get the deployed app working.</p>
<p>Useful links:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress and how to start with</a></li>
<li><a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">repository with all necessary yamls</a> where the SO answer is linked to</li>
</ul>
| moonkotte |
<p>Upon submitting a few jobs (say, 50) targeted at a single node, I am getting the pod status "OutOfpods" for a few jobs. I have reduced the maximum number of pods on this worker node to "10", but I still observe the above issue.
The kubelet configuration is default with no changes.</p>
<p>kubernetes version: v1.22.1</p>
<ul>
<li>Worker Node</li>
</ul>
<p>Os: CentOs 7.9
memory: 528 GB
CPU: 40 cores</p>
<p>kubectl describe pod :</p>
<blockquote>
<p>Warning OutOfpods 72s kubelet Node didn't have enough resource: pods, requested: 1, used: 10, capacity: 10</p>
</blockquote>
| krahil | <p>I have realized this is a known issue for kubelet v1.22, as confirmed <a href="https://github.com/kubernetes/kubernetes/issues/104560" rel="nofollow noreferrer">here</a>. The fix will be reflected in an upcoming release.</p>
<p>A simple resolution here is to downgrade kubernetes to v1.21.</p>
| krahil |
<p>I'm trying to override the node selector for a <code>kubectl run</code>.</p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } } } }' -- pwsh
</code></pre>
<p>But I get "Invalid Json Path".</p>
<p>This is my yaml if I do a deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
...
nodeSelector:
kubernetes.io/os: windows
</code></pre>
<p>and if I do <code>get pods -o json</code> I get:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
...
},
"spec": {
...
"nodeSelector": {
"kubernetes.io/os": "windows"
}
</code></pre>
| Carlos Garcia | <p><code>kubectl run</code> is a command to start a <code>Pod</code>. You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">here</a></p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } } } }' -- pwsh
</code></pre>
<p>Using the command above you are trying to run a <code>Pod</code> with the specification <code>"template": { "spec": { </code>, which is used only for a <code>Deployment</code>, and that is why you get the error <code>Invalid Json Path</code>.</p>
<p><code>nodeSelector</code>, as you can see in the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">documentation</a>, can be specified under <code>spec</code> in a <code>Pod</code> config file as below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
disktype: ssd
</code></pre>
<p>When you add <code>--dry-run=client -o yaml</code> to your command to see how the object would be processed, you will see the output below, which doesn't have a <code>nodeSelector</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: powershell
name: powershell
spec:
containers:
- image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215
name: powershell
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
</code></pre>
<p>To solve your issue, you can delete <code>template</code> and <code>spec</code> from your command, which should then look as below:</p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } }' -- pwsh
</code></pre>
<p>Adding <code>--dry-run=client -o yaml</code> to see what will be created, you will see that the <code>nodeSelector</code> exists:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: powershell
name: powershell
spec:
containers:
- image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215
name: powershell
resources: {}
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: windows
restartPolicy: Never
status: {}
</code></pre>
| RadekW |
<p>I have a use case wherein I have a REST API running in a pod inside a kubernetes cluster, and a helm pre-upgrade hook which runs a k8s Job needs to access that REST API. What is the best way to expose this URL so that the helm hook can access it? I do not want to hardcode any IP.</p>
| Jinu Mohan | <p>Posting this as a community wiki, feel free to edit and expand it for better experience.</p>
<p>As David Maze and Lucia pointed out in comments, services are accessible by their IPs and URLs based on service names.</p>
<p>This part is covered and well explained in the official kubernetes documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a>.</p>
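<p>As a rough sketch, a pre-upgrade hook Job can simply call the in-cluster DNS name of the Service fronting your API (the service name, namespace, port and path below are placeholders for your own values):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-pre-upgrade-check"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: call-api
        image: curlimages/curl
        args:
        - "-sf"
        # Service DNS name instead of a hardcoded IP
        - "http://my-rest-api.my-namespace.svc.cluster.local:8080/health"
</code></pre>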
| moonkotte |
<p>I have an (AKS) Kubernetes cluster running a couple of pods. Those pods have dynamic persistent volume claims. An example is:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
namespace: prd
spec:
accessModes:
- ReadWriteOnce
storageClassName: custom-azure-disk-retain
resources:
requests:
storage: 50Gi
</code></pre>
<p>The disks are Azure Managed Disks and are backed up (snapshots) with the Azure Backup center. In the Backup center I can create a disk from a snapshot.</p>
<p>Here is my question: how can I use the new disk in the PVC? Because I don't think I can patch the PV with a new DiskURI.</p>
<p>What I figured out myself is how to use the restored disk directly as a volume. But if I'm not mistaken this does not use a PVC anymore, meaning I cannot benefit from dynamically resizing the disk.</p>
<p>I'm using kustomize; here is how I can link the restored disk directly in the deployment's yaml:</p>
<pre><code>- op: remove
path: "/spec/template/spec/volumes/0/persistentVolumeClaim"
- op: add
path: "/spec/template/spec/volumes/0/azureDisk"
value: {kind: Managed, diskName: mysql-restored-disk, diskURI: <THE_URI>}
</code></pre>
<p>Some people will tell me to use <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> but we're not ready for that yet.</p>
| Charlie | <p>You are using dynamic provisioning and then you want to hardcode DiskURIs? With this you also have to bind pods to nodes. This will be a nightmare when you have a disaster recovery case.</p>
<p>To be honest, use Velero :) Invest the time to get comfortable with it; your MTTR will thank you.</p>
<p>Here is a quick start article with AKS: <a href="https://dzone.com/articles/setup-velero-on-aks" rel="nofollow noreferrer">https://dzone.com/articles/setup-velero-on-aks</a></p>
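<p>For a flavour of the workflow, a hedged sketch of backing up and restoring a namespace with the Velero CLI (names are placeholders and PV snapshotting depends on the Azure plugin being configured):</p>
<pre><code># Back up everything in the namespace, including PV snapshots
velero backup create mysql-backup --include-namespaces my-namespace

# Later, restore the workloads and PVCs/PVs from that backup
velero restore create --from-backup mysql-backup
</code></pre>
<p>That way the PVC/PV objects are recreated for you, so you keep dynamic provisioning and resizing instead of hardcoding DiskURIs.</p>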
| Philip Welz |
<p>I have a Mac with Apple Silicon (M1) and I have minikube installed. The installation was done following <a href="https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669" rel="nofollow noreferrer">https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669</a> by executing:</p>
<pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
</code></pre>
<p>How do I remove minikube?</p>
| Amir Paster | <p>Have you tried following any online material to delete Minikube? Test if this works for you and let me know if you face any issues.</p>
<p>Try using the below command :</p>
<pre><code>minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
</code></pre>
<p>Reference used: <a href="https://gist.github.com/rahulkumar-aws/65e6fbe16cc71012cef997957a1530a3" rel="noreferrer">Delete minikube on Mac</a></p>
| sidharth vijayakumar |
<p>I have a simple app that I need to deploy in K8S (running on AWS EKS) and expose it to the outside world.</p>
<p>I know that I can add a service with the type LoadBalancer and voilà, K8S will create an AWS ALB for me.</p>
<pre><code>spec:
type: LoadBalancer
</code></pre>
<p>However, the issue is that it will <strong>create</strong> a new LB.</p>
<p>The main reason why this is an issue for me is that I am trying to separate out infrastructure creation/upgrades (vs. software deployment/upgrade). All of my infrastructures will be managed by Terraform and all of my software will be defined via K8S YAML files (may be Helm in the future).</p>
<p>And the creation of a load balancer (infrastructure) breaks this model.</p>
<p>Two questions:</p>
<ul>
<li><p>Do I understand correctly that you can't change this behavior (<strong>create</strong> vs. <strong>use existing</strong>)?</p>
</li>
<li><p>I read multiple articles about K8S and all of them lead me into the direction of Ingress + Ingress Controller. Is this the way to solve this problem?</p>
</li>
</ul>
<p>I am hesitant to go in this direction. There are tons of steps to get it working and it will take time for me to figure out how to retrofit it in Terraform and k8s YAML files</p>
| Victor Ronin | <p>Short answer: you can only change it to "<strong>NodePort</strong>" and couple the existing LB manually by registering the EKS nodes with the right exposed port.</p>
<p>like</p>
<pre><code>spec:
type: NodePort
externalTrafficPolicy: Cluster
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
nodePort: **30080**
</code></pre>
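<p>For example, with an instance-type target group on the existing load balancer, the coupling could be done out of band; a sketch with placeholder ARNs/IDs (in Terraform the usual equivalent is an autoscaling attachment for the node group's ASG, so the registration stays infrastructure-managed):</p>
<pre><code># Register the worker nodes with the existing target group,
# pointing at the NodePort exposed above (30080)
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/my-tg/abc123 \
  --targets Id=i-0123456789abcdef0,Port=30080
</code></pre>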
<p>But attaching an existing LB natively is not supported by the AWS k8s controller yet, and may not be a priority to support, because of:</p>
<ul>
<li>Configuration: controllers get their configuration from k8s config maps or special CustomResourceDefinitions (CRDs), which would conflict with any manual
 config on the already existing LB and may lead to wiping existing configs that are not tracked in the config source.</li>
</ul>
<hr />
<p>Q: Direct expose or overlay ingress?</p>
<blockquote>
<p>Note: Use an ingress (Nginx or AWS ALB) if you have more than one service to expose or you need to add controls on the exposed APIs.</p>
</blockquote>
| Tamer Elfeky |
<p>In my helm chart, I have a few files that need credentials to be filled in.
For example:</p>
<pre><code><Resource
name="jdbc/test"
auth="Container"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://{{ .Values.DB.host }}:{{ .Values.DB.port }};selectMethod=direct;DatabaseName={{ .Values.DB.name }};User={{ Values.DB.username }};Password={{ .Values.DB.password }}"
/>
</code></pre>
<p>I created a secret</p>
<pre><code>Name: databaseinfo
Data:
username
password
</code></pre>
<p>I then create environment variables to retrieve those secrets in my deployment.yaml:</p>
<pre><code>env:
- name: DBPassword
valueFrom:
secretKeyRef:
key: password
name: databaseinfo
- name: DBUser
valueFrom:
secretKeyRef:
key: username
name: databaseinfo
</code></pre>
<p>In my values.yaml or this other file, I need to be able to reference this secret/environment variable. I tried the following but it does not work:
values.yaml</p>
<pre><code>DB:
username: $env.DBUser
password: $env.DBPassword
</code></pre>
| Oplop98 | <p>You can't pass variables from a template to <code>values.yaml</code> with helm, only from <code>values.yaml</code> to the templates.</p>
<p>The answer you are seeking was posted by <a href="https://stackoverflow.com/users/3061469/mehowthe">mehowthe</a>:</p>
<p>deployment.yaml =</p>
<pre><code> env:
{{- range .Values.env }}
- name: {{ .name }}
value: {{ .value }}
{{- end }}
</code></pre>
<p>values.yaml =</p>
<pre><code>env:
- name: "DBUser"
value: ""
- name: "DBPassword"
value: ""
</code></pre>
<p>then</p>
<p><code>helm install chart_name --name release_name --set env.DBUser="FOO" --set env.DBPassword="BAR"</code></p>
| Philip Welz |
<p>I wanted to install minikube, and after the start command I got the following error text:</p>
<pre><code>π minikube v1.26.1 on Ubuntu 22.04
β minikube skips various validations when --force is supplied; this may lead to unexpected behavior
β¨ Using the docker driver based on existing profile
π The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
π‘ If you are running minikube within a VM, consider using --driver=none:
π https://minikube.sigs.k8s.io/docs/reference/drivers/none/
π‘ Tip: To remove this root owned cluster, run: sudo minikube delete
π Starting control plane node minikube in cluster minikube
π Pulling base image ...
β Stopping node "minikube" ...
π Powering off "minikube" via SSH ...
π₯ Deleting "minikube" in docker ...
π€¦ StartHost failed, but will try again: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied
πΏ Failed to start docker container. Running "minikube delete" may fix it: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied
β Exiting due to HOST_JUJU_LOCK_PERMISSION: Failed to start host: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied
π‘ Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker'
πΏ Related issue: https://github.com/kubernetes/minikube/issues/6391
</code></pre>
| Toufik Benkhelifa | <p>Can you run the below command if this Minikube is installed in a lower environment?</p>
<pre><code>rm /tmp/juju-*
</code></pre>
<p><a href="https://github.com/kubernetes/minikube/issues/5660" rel="nofollow noreferrer">unable to open /tmp/juju-kubeconfigUpdate: permission denied</a></p>
| sidharth vijayakumar |
<p>I have run a docker container locally and it stores data in a file (currently no volume is mounted). I stored some data using the API. After that I failed the container using <code>process.exit(1)</code> and started the container again. The previously stored data in the container survives (as expected). But when I do the same thing in Kubernetes (minikube) the data is lost.</p>
| karan525 | <p>Posting this as a community wiki for better visibility, feel free to edit and expand it.</p>
<p>As described in comments, kubernetes replaces failed containers with new (identical) ones and this explain why container's filesystem will be clean.</p>
<p>Also as said containers should be stateless. There are different options how to run different applications and take care about its data:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/" rel="nofollow noreferrer">Run a stateless application using a Deployment</a></li>
<li>Run a stateful application either as a <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">single instance</a> or as a <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">replicated set</a></li>
<li><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Run automated tasks with a CronJob</a></li>
</ul>
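<p>For the specific case in the question (a single file written by the application), a minimal sketch is to mount a PersistentVolumeClaim at the path where the file is written, so the data outlives container restarts (names, sizes, image and the mount path below are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest              # placeholder image
        volumeMounts:
        - name: data
          mountPath: /usr/src/app/data    # directory the file is written to
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data
</code></pre>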
<p>Useful links:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">Kubernetes workloads</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod lifecycle</a></li>
</ul>
| moonkotte |
<p>How can egress from a Kubernetes pod be limited to only specific FQDN/DNS with Azure CNI Network Policies?</p>
<p>This is something that can be achieved with:</p>
<p>Istio</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: googleapis
namespace: default
spec:
destination:
service: "*.googleapis.com"
ports:
- port: 443
protocol: https
</code></pre>
<p>Cilium</p>
<pre><code>apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "fqdn"
spec:
endpointSelector:
matchLabels:
app: some-pod
egress:
- toFQDNs:
- matchName: "api.twitter.com"
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: ANY
rules:
dns:
- matchPattern: "*"
</code></pre>
<p>OpenShift</p>
<pre><code>apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
name: default-rules
spec:
egress:
- type: Allow
to:
dnsName: www.example.com
- type: Deny
to:
cidrSelector: 0.0.0.0/0
</code></pre>
<p>How can something similar be done with Azure CNI Network Policies?</p>
| ALeX | <p>ATM network policies with FQDN/DNS rules are not supported on AKS.</p>
<p>If you use Azure CNI & Azure Policy Plugin you get the default Kubernetes Network Policies.</p>
<p>If you use Azure CNI & the Calico Policy Plugin you get advanced possibilities like Global Network Policies, but not the FQDN/DNS one. This is unfortunately a paid feature on Calico Cloud.</p>
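<p>With the Calico plugin you can still restrict egress by IP/CIDR rather than by FQDN. A hedged sketch of such a policy (the selector, CIDR and ports are placeholders, not a definitive setup):</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-egress
spec:
  selector: app == 'some-pod'
  types:
  - Egress
  egress:
  # allow DNS so the pod can still resolve names
  - action: Allow
    protocol: UDP
    destination:
      ports: [53]
  # allow HTTPS only towards a known CIDR
  - action: Allow
    protocol: TCP
    destination:
      nets:
      - 142.250.0.0/15
      ports: [443]
</code></pre>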
| Philip Welz |
<p>I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume i'm going to need a service account attached to a backend application that calls the kubernetes api to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?</p>
| Sheen | <p>You can use the JavaScript Kubernetes client package for Node directly in your Node application to access the kube-apiserver over its REST APIs.</p>
<pre><code>npm install @kubernetes/client-node
</code></pre>
<p>There are several ways to provide authentication information to your kubernetes client.</p>
<p>This is the code which worked for me:</p>
<pre><code>const k8s = require('@kubernetes/client-node');
const cluster = {
name: '<cluster-name>',
server: '<server-address>',
caData: '<certificate-data>'
};
const user = {
name: '<cluster-user-name>',
certData: '<certificate-data>',
keyData: '<certificate-key>'
};
const context = {
name: '<context-name>',
user: user.name,
cluster: cluster.name,
};
const kc = new k8s.KubeConfig();
kc.loadFromOptions({
clusters: [cluster],
users: [user],
contexts: [context],
currentContext: context.name,
});
const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);
k8sApi.listNamespacedIngress('<namespace>').then((res) => {
console.log(res.body);
});
</code></pre>
<p>You need to pick the API client according to the resources you want to list; for Ingress objects I was using <code>NetworkingV1Api</code>.</p>
<p>You can get further options from
<a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">https://github.com/kubernetes-client/javascript</a></p>
| Vishwas Karale |
<p>As part of kubernetes 1.19, <a href="https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/" rel="nofollow noreferrer">structured logging</a> has been implemented.</p>
<p>I've <a href="https://kubernetes.io/docs/concepts/cluster-administration/system-logs/" rel="nofollow noreferrer">read</a> that kubernetes log's engine is <code>klog</code> and structured logs are following this format :</p>
<pre><code><klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
</code></pre>
<p>Cool ! But even better, you apparently can pass a <code>--logging-format=json</code> flag to <code>klog</code> so logs are generated in <code>json</code> directly !</p>
<pre><code>{
"ts": 1580306777.04728,
"v": 4,
"msg": "Pod status updated",
"pod":{
"name": "nginx-1",
"namespace": "default"
},
"status": "ready"
}
</code></pre>
<p>Unfortunately, I haven't been able to find out how and where I should specify that <code>--logging-format=json</code> flag.</p>
<p>Is it a <code>kubectl</code> command? I'm using Azure's aks.</p>
| Will | <p><code>--logging-format=json</code> is a flag that needs to be set on all Kubernetes system components (kubelet, API server, controller manager &amp; scheduler). You can check all the flags <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">here</a>.</p>
<p>Unfortunately, you can't do this with AKS right now, because the control plane is managed by Microsoft.</p>
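<p>For reference, on a self-managed cluster you would add the flag to each component's command line, e.g. in a kubeadm-style kube-apiserver static pod manifest (sketch, all other flags omitted):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --logging-format=json
    # ...existing flags stay as they are...
</code></pre>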
| Philip Welz |
<p>I have an existing deployment in my Kubernetes cluster. I want to read its deployment.yaml from the Kubernetes environment using the fabric8 client - functionality similar to this command: kubectl get deploy deploymentname -o yaml.
Please help me to get its fabric8 Java client equivalent.</p>
<p>Objective : I want to get deployment.yaml for a resource and save it with me , perform some experiments in the Kubernetes environment and after the experiments done,I want to revert back to previous deployment. So I need to have deployment.yaml handy to roll back the operation.
Please help.</p>
<p>Thanks,
Sapna</p>
| Sapna Girdhani | <p>You can get the yaml representation of an object with the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/utils/Serialization.java#L140" rel="nofollow noreferrer">Serialization#asYaml</a> method.</p>
<p>For example:</p>
<pre><code>System.out.println(Serialization.asYaml(client.apps().deployments().inNamespace("abc").withName("ms1").get()));
</code></pre>
| VANAN |
<p>I have kubernetes cluster with two replicas of a PostgreSQL database in it, and I wanted to see the values stored in the database.</p>
<p>When I <code>exec</code> myself into one of the two postgres pod (<code>kubectl exec --stdin --tty [postgres_pod] -- /bin/bash</code>) and check the database from within, I have only a partial part of the DB. The rest of the DB data is on the other Postgres pod, and I don't see any directory created by the persistent volumes with all the database stored.</p>
<p>So in short I create 4 tables; in one <em>postgres pod</em> I have 4 tables but 2 are empty, in the other <em>postgres pod</em> there are 3 tables and the tables that were empty in the first pod, here are filled with data.</p>
<p>Why the pods don't have the same data in it?</p>
<p>How can I access and download the entire database?</p>
<p>PS. I deploy the cluster using HELM in minikube.</p>
<hr />
<p>Here are the YAML files:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: database-pg
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
PGDATA: /data/pgdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
ports:
- name: postgres
port: 5432
nodePort: 30432
type: NodePort
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres-service
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
volumeMounts:
- name: postgres-disk
mountPath: /data
# Config from ConfigMap
envFrom:
- configMapRef:
name: postgres-config
volumeClaimTemplates:
- metadata:
name: postgres-disk
spec:
accessModes: ["ReadWriteOnce"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
---
</code></pre>
| janboro | <p>I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of Postgres, the tables of the DB are still scattered between the pods.</p>
<p>Here's what I did to download the postgres volume:</p>
<p>First of all, minikube only persists volumes that live under a few specific host directories:</p>
<blockquote>
<p>minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.</p>
<pre><code>/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
</code></pre>
</blockquote>
<p>So I've changed the mount path to be under the <code>/data</code> directory. This made the database volume visible.</p>
<p>After this I ssh'ed into minikube and copied the database volume to a new directory (I used <code>/home/docker</code> as the user of <code>minikube</code> is <code>docker</code>).</p>
<pre><code>sudo cp -R /data/pgdata /home/docker
</code></pre>
<p>The volume <code>pgdata</code> was still owned by <code>root</code> (access denied error) so I changed it to be owned by <code>docker</code>. For this I also set a new password which I knew:</p>
<pre><code>sudo passwd docker # change password for docker user
sudo chown -R docker: /home/docker/pgdata # change owner from root to docker
</code></pre>
<p>Then you can exit and copy the directory to your local machine:</p>
<pre><code>exit
scp -r $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path].
</code></pre>
<p><em>NOTE</em></p>
<p>Mario's advice to use <code>pg_dump</code> is probably a better way to copy a database. I still wanted to download the volume directory to check whether it contains the full database when each pod only shows part of the tables; in the end it turned out it doesn't.</p>
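<p>For completeness, a dump-based copy would look roughly like this, using the database name and user from the ConfigMap above (replace <code>postgres-0</code> with one of your actual pod names):</p>
<pre><code>kubectl exec postgres-0 -- pg_dump -U postgres database-pg &gt; database-pg.sql
</code></pre>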
| janboro |
<p>Attempting to deploy autoscaling to my cluster, but the target shows "unknown"; I have tried different metrics servers to no avail. I followed <a href="https://github.com/kubernetes/minikube/issues/4456" rel="nofollow noreferrer">this GitHub issue</a> even though I'm using kubeadm, not minikube, and it did not change the problem.</p>
<p>I also <a href="https://stackoverflow.com/questions/54106725/docker-kubernetes-mac-autoscaler-unable-to-find-metrics">followed this Stack post</a> with no success either.</p>
<p>I'm running Ubuntu 20.0.4 LTS.</p>
<p>Using kubernetes version 1.23.5, for kubeadm ,kubcectl, ect</p>
<p>Following the advice the other stack post, I grabbed the latest version via curl</p>
<p><code>curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</code></p>
<p>I edited the file to be as followed:</p>
<pre><code> spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubectl-insecure-tls
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
</code></pre>
<p>I then ran kubectl apply -f components.yaml</p>
<p>Still did not work:</p>
<pre><code>$ kubectl get hpa
NAME                 REFERENCE                   TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
teastore-webui-hpa   Deployment/teastore-webui   <unknown>/50%   1         20        1          20h
</code></pre>
<p>Another suggestion was specifically declaring limits.</p>
<pre><code>$ kubectl autoscale deployment teastore-webui --max=20 --cpu-percent=50 --min=1
horizontalpodautoscaler.autoscaling/teastore-webui autoscaled
group8@group8:~/Downloads/TeaStore-master/examples/kubernetes$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
teastore-webui Deployment/teastore-webui <unknown>/50% 1 20 0 4s
teastore-webui-hpa Deployment/teastore-webui <unknown>/50% 1 20 1 20h
</code></pre>
<p>That also did not work.</p>
<p>Here is an exert of the deployment and service config that I'm trying to autoscale.</p>
<pre><code> spec:
containers:
- name: teastore-webui
image: descartesresearch/teastore-webui
ports:
- containerPort: 8080
env:
- name: HOST_NAME
value: "teastore-webui"
- name: REGISTRY_HOST
value: "teastore-registry"
resources:
requests:
cpu: "250m"
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: teastore-webui-hpa
labels:
app: teastore
spec:
maxReplicas: 20
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: teastore-webui
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
---
apiVersion: v1
kind: Service
metadata:
name: teastore-webui
labels:
app: teastore
run: teastore-webui
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30080
protocol: TCP
selector:
run: teastore-webui
</code></pre>
<p>Based on other suggestions I have the resource specifically declared as cpu with 50% utilization, and CPU requests are set to 250 milicores.</p>
<pre><code> $kubectl describe hpa
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name: teastore-webui
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sat, 02 Apr 2022 16:07:25 -0400
Reference: Deployment/teastore-webui
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 20
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 29m (x12 over 32m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 2m12s (x121 over 32m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
| sintaxerror | <p>The problem was a typo on line 6 of this yaml: it needs to be <code>- --kubelet-insecure-tls</code> and not <code>- --kubectl-insecure-tls</code>. The corrected args block:</p>
<pre><code>spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
    - --kubelet-insecure-tls   # corrected: kubelet, not kubectl
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
</code></pre>
<p>I noticed this by checking the metrics-server logs with:</p>
<pre><code>kubectl logs -f deployment/metrics-server -n kube-system
</code></pre>
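<p>Once the corrected flag is rolled out, you can confirm that metrics are flowing and that the HPA target no longer shows unknown with:</p>
<pre><code>kubectl top pods
kubectl get hpa
</code></pre>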
<p>Thank you David Maze for the comment.</p>
| sintaxerror |
<p>By using the reference of <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls</a> this document, I'm trying to fetch the TLS secrets from AKV to AKS pods.
Initially I created and configured <strong>CSI driver configuration</strong> with using <strong>User Assigned Managed Identity</strong>.</p>
<p>I have performed the following steps:</p>
<ul>
<li>Create AKS Cluster with 1 nodepool.</li>
<li>Create AKV.</li>
<li>Created user assigned managed identity and assign it to the nodepool i.e. to the VMSS created for AKS.</li>
<li>Installed CSI Driver helm chart in AKS's <strong>"kube-system"</strong> namespace. and completed all the requirement to perform this operations.</li>
<li>Created the TLS certificate and key.</li>
<li>By using TLS certificate and key, created .pfx file.</li>
<li>Uploaded that .pfx file in the AKV certificates named as <strong>"ingresscert"</strong>.</li>
<li>Created new namespace in AKS named as "ingress-test".</li>
<li>Deployed secretProviderClass in that namespace are as follows.:</li>
</ul>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: azure-tls
spec:
provider: azure
secretObjects: # secretObjects defines the desired state of synced K8s secret objects
- secretName: ingress-tls-csi
type: kubernetes.io/tls
data:
- objectName: ingresscert
key: tls.key
- objectName: ingresscert
key: tls.crt
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true"
userAssignedIdentityID: "7*******-****-****-****-***********1"
keyvaultName: "*****-*****-kv" # the name of the AKV instance
objects: |
array:
- |
objectName: ingresscert
objectType: secret
tenantId: "e*******-****-****-****-***********f" # the tenant ID of the AKV instance
</code></pre>
<ul>
<li>Deployed the <strong>nginx-ingress-controller</strong> helm chart in the same namespace, where certificates are binded with application.</li>
<li>Deployed the Busy Box deployment are as follows:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox-one
labels:
app: busybox-one
spec:
replicas: 1
selector:
matchLabels:
app: busybox-one
template:
metadata:
labels:
app: busybox-one
spec:
containers:
- name: busybox
image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
command:
- "/bin/sleep"
- "10000"
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-tls"
---
apiVersion: v1
kind: Service
metadata:
name: busybox-one
spec:
type: ClusterIP
ports:
- port: 80
selector:
app: busybox-one
</code></pre>
<ul>
<li>Check secret is created or not by using command</li>
</ul>
<pre><code>kubectl get secret -n <namespaceName>
</code></pre>
<p><strong>One thing to notice here is, if I attach shell with the busy box pod and go to the mount path which I provided to mount secrets I have seen that secrets are successfully fetched there. But this secrets are not showing in the AKS's secret list.</strong></p>
<p>I have troubleshooted all the AKS,KV and manifest files but not found anything.
IF there is anything I have missed or anyone has solution for this please let me know.</p>
<p>Thanks in advance..!!!</p>
| Kaivalya Dambalkar | <p>I added this as a new answer because the formatting was bad in the comments:</p>
<p>As you are using the Helm chart, you have to activate the secret sync in the <code>values.yaml</code> of the Helm Chart:</p>
<pre><code>secrets-store-csi-driver:
syncSecret:
enabled: true
</code></pre>
<p>I would still recommend using the <code>csi-secrets-store-provider-azure</code> as an AKS add-on instead of the Helm chart.</p>
| Philip Welz |
<p>I am running a code which opens a raw socket inside a docker container with kubernetes as the orchestrator.</p>
<p>Following is my sample code:</p>
<pre><code>#include <stdio.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <errno.h>
#include <netinet/tcp.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <unistd.h>
int main (void)
{
//Create a raw socket
int s = socket (AF_INET, SOCK_RAW, IPPROTO_SCTP);
if(s == -1)
{
perror("Failed to create socket");
exit(1);
}
}
</code></pre>
<p>On running the code as a non-root user in my container/pod, I got this error.</p>
<pre><code>./rawSocTest
Failed to create socket: Operation not permitted
</code></pre>
<p>This is obvious as it requires root level privileges to open a raw socket. This I corrected by setting capability cap_net_raw.</p>
<pre><code>getcap rawSocTest
rawSocTest = cap_net_raw+eip
</code></pre>
<p>Now when I run it again. I am getting a different error.</p>
<pre><code>./rawSocTest
bash: ./rawSocTest: Permission denied
</code></pre>
<p>As per my understanding, setting the capability should have fixed my issue. Am I missing something here? or Is this a known limitation of container?</p>
<p>Thanks in advance.</p>
| Shighil | <p>I ran it as the kubernetes-admin user.</p>
<p>Here is a sample Deployment manifest for the container definition, adding to what was stated earlier:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: httpd-deployment
labels:
app: httpd
spec:
replicas: 1
selector:
matchLabels:
app: httpd
template:
metadata:
labels:
app: httpd
spec:
containers:
- name: my-apache2
image: docker.io/arunsippy12/my-apache2:latest
securityContext:
allowPrivilegeEscalation: true
capabilities:
add: ["NET_RAW"]
ports:
- containerPort: 80
</code></pre>
<p>You can also refer to the Kubernetes documentation:
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
| Arun Sippy |
<p>I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS = by following the steps given in <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p>
<p>After the yaml file is applied and while trying to check the status of the deployment , I get :</p>
<pre><code>$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 0/1 1 0 6m39s
</code></pre>
<p>I tried to debug it and I got this :</p>
<pre><code>$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
{"level":"info","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"error","logger":"setup","msg":"unable to create controller","controller":"Ingress","error":"the server could not find the requested resource"}
</code></pre>
<p>The yaml file is pulled directly from <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml</a> and apart from changing the Kubernetes cluster name, no other modifications are done.</p>
<p>Please let me know if I am missing some step in the configuration.
Any help would be highly appreciated.</p>
| sethu2912 | <p>I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.</p>
<ul>
<li>aws-load-balancer-controller = v2.3.1</li>
<li>Kubernetes/EKS = 1.22</li>
</ul>
<p>Github issue for more information:
<a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495</a></p>
| Gearheads |
<p>What is the cause of 'terraform apply' giving me the error below on my local machine? It seems to run fine on the build server.</p>
<p>I've also checked the related stackoverflow messages:</p>
<ul>
<li>Windows Firewall is disabled, thus 80 is allowed on the private network</li>
<li>config_path in AKS is not used, no kubeconfig seems to be configured anywhere</li>
</ul>
<pre><code>Plan: 3 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
kubernetes_namespace.azurevotefront-namespace: Creating...
kubernetes_service.azurevotefront-metadata: Creating...
kubernetes_deployment.azurevotefront-namespace: Creating...
╷
│ Error: Post "http://localhost/api/v1/namespaces": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_namespace.azurevotefront-namespace,
│ on kubernetes.tf line 1, in resource "kubernetes_namespace" "azurevotefront-namespace":
│ 1: resource "kubernetes_namespace" "azurevotefront-namespace" {
│
╵
╷
│ Error: Failed to create deployment: Post "http://localhost/apis/apps/v1/namespaces/azurevotefront-namespace/deployments": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_deployment.azurevotefront-namespace,
│ on main.tf line 1, in resource "kubernetes_deployment" "azurevotefront-namespace":
│ 1: resource "kubernetes_deployment" "azurevotefront-namespace" {
│
╵
╷
│ Error: Post "http://localhost/api/v1/namespaces/azurevotefront-namespace/services": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_service.azurevotefront-metadata,
│ on main.tf line 47, in resource "kubernetes_service" "azurevotefront-metadata":
│ 47: resource "kubernetes_service" "azurevotefront-metadata" {
</code></pre>
<p>Kubernetes.tf</p>
<pre class="lang-json prettyprint-override"><code>resource "kubernetes_namespace" "azurevotefront-namespace" {
metadata {
annotations = {
name = "azurevotefront-annotation"
}
labels = {
mylabel = "azurevotefront-value"
}
name = "azurevotefront-namespace"
}
}
</code></pre>
<p>Provider.tf</p>
<pre class="lang-json prettyprint-override"><code>terraform {
backend "azurerm" {
key = "terraform.tfstate"
resource_group_name = "MASKED"
storage_account_name = "MASKED"
access_key = "MASKED"
container_name = "MASKED"
}
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~>2.68"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.4"
}
}
}
provider "azurerm" {
tenant_id = "MASKED"
subscription_id = "MASKED"
client_id = "MASKED"
client_secret = "MASKED"
features {}
}
</code></pre>
| Jay | <p>As mentioned in the comments, you are missing the Kubernetes provider config:</p>
<pre><code>provider "kubernetes" {
host = azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)
}
</code></pre>
| Philip Welz |
<p>I use aws-load-balancer-eip-allocations assign static IP to LoadBalancer service using k8s on AWS. The version of EKS is v1.16.13. The doc at <a href="https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211</a>, line 210 and 211 says "static IP addresses for the NLB. Only supported on elbv2 (NLB)". I do not know what the elbv2 is. I use the code below. But, I did not get static IP. Is elbv2 the problem? How do I use elbv2? Please also refer to <a href="https://github.com/kubernetes/kubernetes/pull/69263" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/69263</a> as well.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0187de53333555567"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
</code></pre>
| Melissa Jenner | <p>Keep in mind that you need one EIP per subnet/zone, and by default EKS uses a minimum of 2 zones.</p>
<p>This is a working example you may found useful:</p>
<pre><code>metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxxxxxxxxxxxxx,subnet-yyyyyyyyyyyyyyyyy"
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-wwwwwwwwwwwwwwwww,eipalloc-zzzzzzzzzzzzzzzz"
</code></pre>
<p>I hope this is useful to you</p>
| William AΓ±ez |
<p>In microservices environment deployed to the Kubernetes cluster, why will we use API gateway (for example Spring cloud gateway) if Kubernetes supplies the same service with Ingress?</p>
| Ohad | <p>An Ingress controller creates one Kubernetes service that gets exposed as a LoadBalancer. As a simple mental model, you can think of Ingress as an Nginx server that just forwards traffic to services based on a rule set. Ingress does not have as much functionality as an API gateway: some Ingress controllers don't support authentication, rate limiting, application routing, security, merging of responses and requests, and other add-on/plugin options.</p>
<p>An API gateway can also do simple routing, but it is mostly used when you need more flexibility, security and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes can give organizations the flexibility to achieve their business requirements.</p>
<p>For accessing the database:</p>
<p>If the database and cluster are in the same cloud network, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.</p>
<p>You can also refer to this <a href="https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097" rel="nofollow noreferrer">Kubernetes Access External Services</a> article.</p>
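<p>For example, a common pattern is to front the external database with an <code>ExternalName</code> Service so in-cluster apps can use a stable DNS name (the hostname below is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: mydb.example.com
</code></pre>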
| Sai Chandra Gadde |
<p>In Terraform I wrote a resource that deploys to AKS. I want to apply the terraform changes multiple times, but don't want to have the error below. The system automatically needs to detect whether the resource already exists / is identical. Currently it shows me 'already exists', but I don't want it to fail. Any suggestions how I can fix this issue?</p>
<pre><code>│ Error: services "azure-vote-back" already exists
│
│ with kubernetes_service.example2,
│ on main.tf line 91, in resource "kubernetes_service" "example2":
│ 91: resource "kubernetes_service" "example2" {
</code></pre>
<pre class="lang-json prettyprint-override"><code>provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "kubernetescluster"
resource_group_name = "myResourceGroup"
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_namespace" "azurevote" {
metadata {
annotations = {
name = "azurevote-annotation"
}
labels = {
mylabel = "azurevote-value"
}
name = "azurevote"
}
}
resource "kubernetes_service" "example" {
metadata {
name = "azure-vote-front"
}
spec {
selector = {
app = kubernetes_pod.example.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "example" {
metadata {
name = "azure-vote-front"
labels = {
app = "azure-vote-front"
}
}
spec {
container {
image = "mcr.microsoft.com/azuredocs/azure-vote-front:v1"
name = "front"
env {
name = "REDIS"
value = "azure-vote-back"
}
}
}
}
resource "kubernetes_pod" "example2" {
metadata {
name = "azure-vote-back"
namespace = "azure-vote"
labels = {
app = "azure-vote-back"
}
}
spec {
container {
image = "mcr.microsoft.com/oss/bitnami/redis:6.0.8"
name = "back"
env {
name = "ALLOW_EMPTY_PASSWORD"
value = "yes"
}
}
}
}
resource "kubernetes_service" "example2" {
metadata {
name = "azure-vote-back"
namespace = "azure-vote"
}
spec {
selector = {
app = kubernetes_pod.example2.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 6379
target_port = 6379
}
type = "ClusterIP"
}
}
</code></pre>
| Jay | <p>That's the ugly thing about deploying things inside Kubernetes with Terraform... you will run into errors like this from time to time, which is why it is often not recommended. :/</p>
<p>You could try to just <a href="https://www.terraform.io/cli/commands/state/rm" rel="nofollow noreferrer">remove the record from the state file</a>:</p>
<p><code>terraform state rm 'kubernetes_service.example2'</code></p>
<p>Terraform will no longer track this resource, and the good thing is that <strong>it will not be deleted</strong> on the remote system.</p>
<p>Be aware, though, that on the next run Terraform will simply treat the resource as new, so a plain apply can hit the same "already exists" error again.</p>
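<p>In that case you can bring the existing object under Terraform's management without recreating it by importing it (the resource address and <code>namespace/name</code> ID are taken from the config above; treat this as a sketch and verify the exact ID format against the provider docs):</p>
<pre><code>terraform import kubernetes_service.example2 azure-vote/azure-vote-back
</code></pre>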
| Philip Welz |
<p>I am trying to enable kubernetes for Docker Desktop. Kubernetes is however failing to start.</p>
<p>My log file shows:</p>
<pre><code>cannot get lease for master node: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority
</code></pre>
<p>I have NO_PROXY env var set already, and my hosts file has
<code>127.0.0.1 kubernetes.docker.internal</code> at the end, as was suggested <a href="https://stackoverflow.com/questions/66364587/docker-for-windows-stuck-at-kubernetes-is-starting">here</a></p>
<p>I appreciate any help</p>
| Sienna | <p>The workaround below may help you resolve your issue.</p>
<p>You can try solving this by:</p>
<ul>
<li>Open ~.kube\config in a text editor</li>
<li>Replace <a href="https://kubernetes.docker.internal:6443" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443</a> with https://localhost:6443</li>
<li>Try connecting again</li>
</ul>
<p>From this <a href="https://forums.docker.com/t/waiting-for-kubernetes-to-be-up-and-running/47009" rel="nofollow noreferrer">issue</a></p>
<ul>
<li>Reset Docker to factory settings</li>
<li>Quit Docker</li>
<li>Set the KUBECONFIG environment variable to %USERPROFILE%.kube\config</li>
<li>Restart Docker and enable Kubernetes (still took a few minutes to start)</li>
</ul>
<p>Attaching troubleshooting <a href="https://bobcares.com/blog/docker-x509-certificate-signed-by-unknown-authority/" rel="nofollow noreferrer">blog1</a>, <a href="https://velaninfo.com/rs/techtips/docker-certificate-authority/" rel="nofollow noreferrer">bolg2</a> for your reference.</p>
| Sai Chandra Gadde |
<p>I am using the opentelemetry-ruby otlp exporter for auto instrumentation:
<a href="https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp</a></p>
<p>The otel collector was installed as a daemonset:
<a href="https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector</a></p>
<p>I am trying to get the OpenTelemetry collector to collect traces from the Rails application. Both are running in the same cluster, but in different namespaces.</p>
<p>We have enabled auto-instrumentation in the app, but the rails logs are currently showing these errors:</p>
<p><code>E, [2022-04-05T22:37:47.838197 #6] ERROR -- : OpenTelemetry error: Unable to export 499 spans</code></p>
<p>I set the following env variables within the app:</p>
<pre><code>OTEL_LOG_LEVEL=debug
OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318
</code></pre>
<p>I can't confirm that the application can communicate with the collector pods on this port.
Curling this address from the rails/ruby app returns "Connection Refused". However I am able to curl <code>http://<OTEL_POD_IP>:4318</code> which returns 404 page not found.</p>
<p>From inside a pod:</p>
<pre><code># curl http://localhost:4318/
curl: (7) Failed to connect to localhost port 4318: Connection refused
# curl http://10.1.0.66:4318/
404 page not found
</code></pre>
<p>This helm chart created a daemonset but there is no service running. Is there some setting I need to enable to get this to work?</p>
<p>I confirmed that otel-collector is running on every node in the cluster and the daemonset has HostPort set to 4318.</p>
| alig227 | <p>The correct solution is to use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes Downward API</a> to fetch the node IP address, which will allow you to export the traces directly to the daemonset pod within the same node:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: my-app
image: my-image
env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(HOST_IP):4318
</code></pre>
<p>Note that using the deployment's service as the endpoint (<code><service-name>.<namespace>.svc.cluster.local</code>) is incorrect, as it effectively bypasses the daemonset and sends the traces directly to the deployment, which makes the daemonset useless.</p>
| dudicoco |
<h2>My specific case</h2>
<p>I have machines in a <a href="https://k3s.io" rel="nofollow noreferrer">k3s</a> cluster. I upgraded from an older version (v1.20?) to v1.21.1+k3s1 a few days ago by running <code>curl -sfL https://get.k3s.io | sh -</code> with <code>INSTALL_K3S_CHANNEL</code> set to <code>latest</code>.</p>
<p>My main reason for installing was that I wanted the bundled ingress controller to go from using traefik v1 to v2.</p>
<p>The upgrade worked, but I still have traefik 1.81.0:</p>
<pre><code>$ k -n kube-system describe deployment.apps/traefik
Name: traefik
Namespace: kube-system
CreationTimestamp: Mon, 29 Mar 2021 22:26:11 -0700
Labels: app=traefik
app.kubernetes.io/managed-by=Helm
chart=traefik-1.81.0
heritage=Helm
release=traefik
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: kube-system
Selector: app=traefik,release=traefik
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=traefik
chart=traefik-1.81.0
heritage=Helm
release=traefik
</code></pre>
<pre><code>$ k -n kube-system describe addon traefik
Name: traefik
Namespace: kube-system
Labels: <none>
Annotations: <none>
API Version: k3s.cattle.io/v1
Kind: Addon
Metadata:
Creation Timestamp: 2021-03-25T05:30:34Z
Generation: 1
Managed Fields:
API Version: k3s.cattle.io/v1
Fields Type: FieldsV1
fieldsV1:
f:spec:
.:
f:checksum:
f:source:
f:status:
Manager: k3s
Operation: Update
Time: 2021-03-25T05:30:34Z
Resource Version: 344
UID: ...
Spec:
Checksum: 2925a96b84dfaab024323ccc7bf1c836b77b9b5f547e0a77348974c7f1e67ad2
Source: /var/lib/rancher/k3s/server/manifests/traefik.yaml
Status:
Events: <none>
</code></pre>
<h2>What I understand about k3s addons</h2>
<p>k3s installs with <a href="https://rancher.com/docs/rke/latest/en/config-options/add-ons/" rel="nofollow noreferrer">addons</a> including ingress, DNS, local-storage. These are set up using helm charts and a custom resource definition called <code>addon</code>.</p>
<p>For traefik, there's also a job that appears called <code>helm-install-traefik</code>. It looks like this job ran when I upgraded the cluster:</p>
<pre><code>$ k describe jobs -A
Name: helm-install-traefik
Namespace: kube-system
Selector: controller-uid=b2130dde-45ff-4d27-8e22-ee8f7a621d35
Labels: helmcharts.helm.cattle.io/chart=traefik
objectset.rio.cattle.io/hash=c42f5b5dd9ee50718523a82c68d4392a7dec9fc4
Annotations: objectset.rio.cattle.io/applied:
...
objectset.rio.cattle.io/id: helm-controller
objectset.rio.cattle.io/owner-gvk: helm.cattle.io/v1, Kind=HelmChart
objectset.rio.cattle.io/owner-name: traefik
objectset.rio.cattle.io/owner-namespace: kube-system
Parallelism: 1
Completions: 1
Start Time: Thu, 17 Jun 2021 09:13:07 -0700
Completed At: Thu, 17 Jun 2021 09:13:22 -0700
Duration: 15s
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=b2130dde-45ff-4d27-8e22-ee8f7a621d35
helmcharts.helm.cattle.io/chart=traefik
job-name=helm-install-traefik
Annotations: helmcharts.helm.cattle.io/configHash: ...
Service Account: helm-traefik
Containers:
helm:
Image: rancher/klipper-helm:v0.5.0-build20210505
Port: <none>
Host Port: <none>
Args:
install
Environment:
NAME: traefik
VERSION:
REPO:
HELM_DRIVER: secret
CHART_NAMESPACE: kube-system
CHART: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
HELM_VERSION:
TARGET_NAMESPACE: kube-system
NO_PROXY: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
Mounts:
/chart from content (rw)
/config from values (rw)
Volumes:
values:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: chart-values-traefik
Optional: false
content:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: chart-content-traefik
Optional: false
Events: <none>
</code></pre>
<p>Looks like my addons weren't re-created in the update:</p>
<pre><code>$ k -n kube-system get addons
NAME AGE
aggregated-metrics-reader 90d
auth-delegator 90d
auth-reader 90d
ccm 90d
coredns 90d
local-storage 90d
metrics-apiservice 90d
metrics-server-deployment 90d
metrics-server-service 90d
resource-reader 90d
rolebindings 90d
traefik 90d
</code></pre>
<h2>The question</h2>
<p>The docs give the impression that running the k3s install script should update add-ons. Should it? If so, why hasn't my traefik deployment been upgraded? What can I do to force it to upgrade?</p>
| foobarbecue | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<p>First, from the <code>job</code> output in your question, you can see that traefik still uses the 1.81.0 chart:</p>
<pre><code>CHART: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
</code></pre>
<p>Related to <code>traefik</code> I found some information in k3s documentation:</p>
<blockquote>
<p>If Traefik is not disabled K3s versions 1.20 and earlier will install
Traefik v1, while K3s versions 1.21 and later will install Traefik v2
if v1 is not already present.</p>
<p>To migrate from an older Traefik v1 instance please refer to the
<a href="https://doc.traefik.io/traefik/migration/v1-to-v2/" rel="nofollow noreferrer">Traefik documentation</a> and <a href="https://github.com/traefik/traefik-migration-tool" rel="nofollow noreferrer">migration tool</a>.</p>
</blockquote>
<p><a href="https://rancher.com/docs/k3s/latest/en/networking/#traefik-ingress-controller" rel="nofollow noreferrer">Reference for the above</a></p>
<p>Based on my research, <a href="https://rancher.com/docs/k3s/latest/en/upgrades/basic/" rel="nofollow noreferrer">upgrading using the command line</a> only covers the Kubernetes system components - there is no mention of addons - while for <code>RKE</code> it is clearly stated that addons are updated:</p>
<blockquote>
<p>When a cluster is upgraded with rke up, using the default options, the
following process is used:</p>
<p>1 - The etcd plane gets updated, one node at a time.</p>
<p>2 -Controlplane nodes get updated, one node at a time. This includes the controlplane components and worker plane components of the controlplane nodes.</p>
<p>3 - Worker plane components of etcd nodes get updated, one node at a time.</p>
<p>4 - Worker nodes get updated in batches of a configurable size. The
default configuration for the maximum number of unavailable nodes is
ten percent, rounded down to the nearest node, with a minimum batch
size of one node.</p>
<p>5 - <strong>Addons get upgraded one by one</strong>.</p>
</blockquote>
<p><a href="https://rancher.com/docs/rke/latest/en/upgrades/how-upgrades-work/" rel="nofollow noreferrer">Reference for RKE</a></p>
| moonkotte |
<p>We are deploying azure ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: name1
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
</code></pre>
<hr />
<p>If we do not explicitly mention ingress class(<code>kubernetes.io/ingress.class:</code>) in the manifest, what would be the default ingress controller type?</p>
<p>Nginx documentation says default as N/A.
<a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/</a></p>
| overexchange | <p>If you do not specify any ingress class explicitly, the value is omitted. You can specify a default ingress class for your cluster as you can read <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class" rel="nofollow noreferrer">here</a>.</p>
<p>You should also migrate to the IngressClassName field as the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation" rel="nofollow noreferrer">annotation is deprecated</a>.</p>
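<p>A short sketch of both points - marking an IngressClass as the cluster default and referencing it via <code>spec.ingressClassName</code> instead of the annotation (the controller value assumes ingress-nginx):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name1
spec:
  ingressClassName: nginx
  # rules: ...
</code></pre>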
| Philip Welz |
<p>I'm unable to find any references other than this link that confirms that the failure has to be consecutive. <a href="https://github.com/kubernetes/website/issues/37414" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/37414</a></p>
<p>Background: Our Java application is getting restarted every day because of liveness probe failure. The application's access logs don't show 3 consecutive failures. So wanted to understand the behavior of probes.</p>
| Raghu | <p>A liveness check is created when Kubernetes creates a pod and is recreated each time that pod is restarted. In your configuration you have set initialDelaySeconds: 20, so after creating a pod, Kubernetes will wait 20 seconds and then start calling the liveness probe. The failures have to be consecutive: after failureThreshold consecutive failures (3 by default) Kubernetes restarts the container according to its RestartPolicy, and a single successful probe resets that counter. You will also be able to see this in the pod's events.</p>
<p>When you are using <code>kubectl get events</code> you are getting events only from the last hour.</p>
<pre><code>Kubectl get events
LAST SEEN TYPE REASON OBJECT
47m Normal Starting node/kubeadm
43m Normal Scheduled pod/liveness-http
43m Normal Pulling pod/liveness-http
43m Normal Pulled pod/liveness-http
43m Normal Created pod/liveness-http
43m Normal Started pod/liveness-http
4m41s Warning Unhealthy pod/liveness-http
40m Warning Unhealthy pod/liveness-http
12m20s Warning BackOff pod/liveness-http
</code></pre>
<p>same command after ~1 hour:</p>
<pre><code>LAST SEEN TYPE REASON OBJECT
43s Normal Pulling pod/liveness-http
8m40s Warning Unhealthy pod/liveness-http
20m Warning BackOff pod/liveness-http
</code></pre>
<p>So that might be the reason you are seeing only one failure.</p>
<p>Liveness probe can be configured using the fields below:</p>
<ul>
<li><p>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</p>
</li>
<li><p>periodSeconds: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.</p>
</li>
<li><p>timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p>
</li>
<li><p>successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p>
</li>
<li><p>failureThreshold: When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the container. In case of a readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p>
</li>
</ul>
<p>If you set the minimal values for periodSeconds, timeoutSeconds, successThreshold and failureThreshold you can expect more frequent checks and faster restarts.</p>
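<p>For example, a probe tuned for faster detection could look like this (the path, port and values are placeholders - size them to how long your Java app really needs to start and answer):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5
  timeoutSeconds: 2
  failureThreshold: 3
</code></pre>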
<p>Liveness probe :</p>
<ul>
<li>Kubernetes will restart a container in a pod after <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">failureThreshold</a> times. By default it is 3 times - so after 3 failed probes.</li>
<li>Depending on your configuration of the container, time needed for container termination could be very differential</li>
<li>You can adjust both <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">failureThreshold</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="noreferrer">terminationGracePeriodSeconds</a> period parameters, so the container will be restarted immediately after every failed probe</li>
</ul>
<p>In <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">liveness probe configuration</a> and <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="noreferrer">best practices</a> you can find more information.</p>
| Sai Chandra Gadde |
<p>I've scanned through all resources, still cannot find a way to change <code>extraPortMappings</code> in Kind cluster without deleting and creating again.</p>
<p>Is it possible and how?</p>
| Dmitry Dyachkov | <p>It's not said explicitly in the official docs, but I found some references that confirm: your thoughts are correct and changing <code>extraPortMappings</code> (as well as other cluster settings) is only possible with recreation of the kind cluster.</p>
<blockquote>
<p>if you use extraPortMappings in your config, they are βfixedβ and
cannot be modified, unless you recreate the cluster.</p>
</blockquote>
<p><a href="https://blog.devgenius.io/updating-kind-kubernetes-api-certificate-after-reboot-1521b43f7574" rel="nofollow noreferrer">Source - Issues I Came Across</a></p>
<blockquote>
<p>Note that the cluster configuration cannot be changed. The only
workaround is to delete the cluster (see below) and create another one
with the new configuration.</p>
</blockquote>
<p><a href="https://10clouds.com/blog/kubernetes-environment-devs/" rel="nofollow noreferrer">Source - Kind Installation</a></p>
<blockquote>
<p>However, there are obvious disadvantages in the configuration and
update of the cluster, and the cluster can only be configured and
updated by recreating the cluster. So you need to consider
configuration options when you initialize the cluster.</p>
</blockquote>
<p><a href="https://www.programmersought.com/article/12347371148/" rel="nofollow noreferrer">Source - Restrictions</a></p>
| moonkotte |
<p>I have created one Docker image and published that image to JFrog Artifactory.
Now I am trying to create a Kubernetes Pod or Deployment using that image.</p>
<p>Find the content of pod.yaml file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: <name of pod>
spec:
nodeSelector:
type: "<name of node>"
containers:
- name: <name of container>
image: <name and path of image>
imagePullPolicy: Always
</code></pre>
<p>But I am getting <strong>ErrImagePull</strong> status after pod creation. That means the pod is not getting created successfully.
Error: error: code = Unknown desc = failed to pull and unpack image</p>
<p>Can anyone please help me with this?</p>
| vaijayanti | <p>Please make sure that you create the <code>kubernetes.io/dockerconfigjson</code> secret in the same namespace as the pod.</p>
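<p>For a private JFrog Artifactory registry that usually means creating an image pull secret in the pod's namespace and referencing it from the pod spec. A sketch (registry URL, credentials and names are placeholders):</p>
<pre><code># create the secret in the same namespace as the pod
kubectl create secret docker-registry jfrog-pull-secret \
  --docker-server=myartifactory.jfrog.io \
  --docker-username=myuser \
  --docker-password=mypassword \
  -n mynamespace
</code></pre>
<p>and then reference it from the pod spec:</p>
<pre><code>spec:
  imagePullSecrets:
  - name: jfrog-pull-secret
  containers:
  - name: mycontainer
    image: myartifactory.jfrog.io/myrepo/myimage:latest
</code></pre>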
| Anastasia Grinman |
<p>It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snipets but had no success.</p>
<p>This is my current ingress configuration:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: nginx-ingress
namespace: default
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
uid: adcd75ab-b44b-420c-874e-abcfd1059592
resourceVersion: '259992616'
generation: 7
creationTimestamp: '2020-06-10T12:15:18Z'
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
ingress.kubernetes.io/enable-cors: 'true'
ingress.kubernetes.io/force-ssl-redirect: 'true'
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: 'true'
nginx.ingress.kubernetes.io/cors-allow-credentials: 'true'
nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type'
nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS'
nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com'
nginx.ingress.kubernetes.io/enable-cors: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: 'true'
</code></pre>
<p>I also would like to extend the cors-allow-origin like:</p>
<pre><code>nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com'
</code></pre>
<p>Is it possible to allow multiple domains in other ways?</p>
| Vural | <p>Ingress-nginx doesn't support CORS with multiple origins.</p>
<p>However, you can try to use annotation: <strong>nginx.ingress.kubernetes.io/configuration-snippet</strong></p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
if ($http_origin ~* "^https?://((?:exactmatch\.com)|(?:regexmatch\.com))$") {
add_header "Access-Control-Allow-Origin" "$http_origin" always;
add_header "Access-Control-Allow-Methods" "GET, PUT, POST, OPTIONS" always;
add_header "Access-Control-Allow-Headers" "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization" always;
add_header "Access-Control-Expose-Headers" "Content-Length,Content-Range" always;
}
</code></pre>
<p>You can find more information in the <a href="https://github.com/kubernetes/ingress-nginx/issues/5496" rel="nofollow noreferrer">ingress-nginx issues</a>.</p>
| HKBN-ITDS |
<p>I am trying to achieve <strong>Client IP based routing</strong> using Istio features.</p>
<p>I have two versions of application <strong>V1(Stable)</strong> and <strong>V2(Canary)</strong>. I want to route the traffic to the canary version(V2) of the application if the Client IP is from a particular CIDR block (Mostly the CIDR my org) and all other traffic should be routed to the stable version(V1) which is the live traffic.</p>
<p>Is there any way to achieve this feature using Istio?</p>
| AkshayBadri | <p>Yes, this is possible.</p>
<hr />
<p>Since you have a <code>load balancer</code> in front of the Kubernetes cluster, the first thing to address is <strong>preserving</strong> the <code>client IP</code>: due to NAT, the <code>load balancer</code> opens another session towards the cluster, so the original <code>source IP</code> is lost and has to be preserved explicitly. This can be done in different ways depending on the <code>load balancer</code> type used. Please see:</p>
<p><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#source-ip-address-of-the-original-client" rel="nofollow noreferrer">Source IP address of the original client</a></p>
<hr />
<p>The next part is to configure a <code>virtual service</code> to route the traffic based on the <code>client IP</code>. This works for HTTP traffic; below is a working example:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app-vservice
namespace: test
spec:
hosts:
- "app-service"
http:
- match:
- headers:
x-forwarded-for:
exact: 123.123.123.123
route:
- destination:
host: app-service
subset: v2
- route:
- destination:
host: app-service
subset: v1
</code></pre>
<p><a href="https://github.com/istio/istio/issues/24852#issuecomment-647831912" rel="nofollow noreferrer">Source - Github issue comment</a></p>
<p>There is also a <a href="https://github.com/istio/istio/issues/24852#issuecomment-647704245" rel="nofollow noreferrer">comment</a> about TCP and usage <code>addresses</code> in <code>ServiceEntry</code>.</p>
| moonkotte |
<p>For each namespace in K8s (existing ones), I would like to create an object which contains a text, for example the Jenkins URL of the job.</p>
<p>Which K8s object should be used for this?</p>
<p>Thanks,</p>
| Tom S | <p>As @jordanm said, you can use ConfigMaps as volumes; you can create one from literal values and mount it into the pod by following the method below.</p>
<p>Create ConfigMap From Literal Values, using the --from-literal option.</p>
<p>To do so, follow the basic syntax:</p>
<pre><code>kubectl create configmap [configmap_name] --from-literal [key1]=[value1]
</code></pre>
<p>To see details from a Kubernetes ConfigMap and the values for keys, use the command:</p>
<pre><code>kubectl get configmaps [configmap_name] -o yaml
</code></pre>
<p>The output should display information in the yaml format:</p>
<pre><code>β¦
apiVersion: v1
data:
key1: value1
β¦
</code></pre>
<p>Once you have created a ConfigMap, you can mount the configuration into the pod by using volumes.
Add a volume section to the yaml file of your pod:</p>
<pre><code>volumes:
  - name: config
    configMap:
      name: [configmap_name]
      items:
        - key: [key/file_name]
          path: [inside_the_pod]
</code></pre>
<p>For more info refer to this <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_create_configmap/" rel="nofollow noreferrer">document</a>.</p>
| Sai Chandra Gadde |
<p>When I am developing an operator, I need to build a pod object. When I write pod.Spec.Volumes, what I need to mount is a configmap type, so I operate according to the structure in the core\v1 package to ConfigMapVolumeSource When I created the structure, I found that the name field of the configmap was not specified. There were only four other fields. The directory of my file was:</p>
<p><a href="https://i.stack.imgur.com/TreRW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TreRW.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/z8sZr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z8sZr.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/xYcTd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYcTd.png" alt="enter image description here" /></a>
So when I build the pod, it will report an error. This field is required</p>
<p><a href="https://i.stack.imgur.com/ENReK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENReK.png" alt="enter image description here" /></a></p>
<p>Am I using the wrong version?
thank you very much for your help!</p>
| gophergfer | <p>The <code>name</code> field is inherited from the embedded <code>LocalObjectReference</code> struct, so you set the ConfigMap name through that embedded field rather than directly on <code>ConfigMapVolumeSource</code>.</p>
| ice coffee |
<p>Say that I have a job history limit > 1, is there a way to use kubectl to find which jobs that have been spawned by a CronJob?</p>
| Anders StrΓΆmberg | <p>Use label.</p>
<pre><code>$kubectl get jobs -n namespace -l created-by=cronjob
</code></pre>
<p>The label <strong>created-by=cronjob</strong> is the one defined in your CronJob's <code>jobTemplate</code>, for example:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "* * * * *"
jobTemplate:
metadata:
labels:
created-by: cronjob
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
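<p>As an alternative that needs no extra label: Jobs spawned by a CronJob carry an <code>ownerReference</code> pointing back to it, so you can also filter on that, for example with <code>jq</code> (the CronJob name <code>hello</code> matches the example above):</p>
<pre><code>kubectl get jobs -n namespace -o json \
  | jq -r '.items[] | select(.metadata.ownerReferences[]?.name == "hello") | .metadata.name'
</code></pre>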
| HKBN-ITDS |
<p>I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel means actually, and where the port numbers come from.</p>
<h2>Background</h2>
<p>I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.</p>
<p>Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out "Hello world" at the <code>/</code> route.</p>
<p>DockerFile:</p>
<pre><code>FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>I have the following pod spec:</p>
<p>web-pod.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: web-pod
spec:
containers:
- name: web
image: kahunacohen/hello-kube:latest
ports:
- containerPort: 3000
</code></pre>
<p>The following service:</p>
<p>web-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
type: NodePort
selector:
app: web-pod
ports:
- port: 8080
targetPort: 3000
protocol: TCP
name: http
</code></pre>
<p>And the following deployment:</p>
<p>web-deployment.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 2
selector:
matchLabels:
app: web-pod
service: web-service
template:
metadata:
labels:
app: web-pod
service: web-service
spec:
containers:
- name: web
image: kahunacohen/hello-kube:latest
ports:
- containerPort: 3000
protocol: TCP
</code></pre>
<p>All the objects are up and running and look good after I create them with kubectl.</p>
<p>I do this:</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
</code></pre>
<ol start="4">
<li>Then, as per a book I'm reading if I do:</li>
</ol>
<pre><code>$ curl $(minikube ip):8080 # or :32177, # or :3000
</code></pre>
<p>I get no response.</p>
<p>I found when I do this, however I can access the app by going to <code>http://127.0.0.1:52650/</code>:</p>
<pre><code>$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
π Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
</code></pre>
<h2>Questions</h2>
<ol>
<li>what this "tunnel" is and why we need it?</li>
<li>what the targetPort is for (8080)?</li>
<li>What this line means when I do <code>kubectl get services</code>:</li>
</ol>
<pre><code>web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
</code></pre>
<p>Specifically, what is that port mapping means and where <code>32177</code> comes from?</p>
<ol start="4">
<li>Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?</li>
</ol>
| Aaron | <p>Let me answer all your questions.</p>
<p>0 - There's no need to create pods separately (unless it's just for testing); this should be done by creating a deployment (or a statefulset, depending on the app and its needs), which creates a <code>replicaset</code> responsible for keeping the right number of pods operational (you can get familiar with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments in kubernetes</a>).</p>
<hr />
<p>1 - The <a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">tunnel</a> is used to expose a service from inside the VM where minikube is running to the host machine's network. It works with the <code>LoadBalancer</code> service type. Please refer to <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">access applications in minikube</a>.</p>
<p>1.1 - The reason the application is not accessible on <code>localhost:NodePort</code> is that the NodePort is exposed on the VM where <code>minikube</code> is running, not on your local machine.</p>
<p>You can find the minikube VM's IP by running <code>minikube ip</code> and then <code>curl $(minikube ip):NODE_PORT</code>. You should get a response from your app.</p>
<hr />
<p>2 - <code>targetPort</code> is the port on the pod (container) to which the service forwards traffic. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define the service</a>.</p>
<p>In <code>minikube</code>'s output it may be confusing, since the "TARGET PORT" column points to the <code>service port</code>, not to the <code>targetPort</code> defined within the service. I think the idea was to indicate on which port the <code>service</code> is accessible within the cluster.</p>
<hr />
<p>3 - As for this question, there are headers presented, you can treat them literally. For instance:</p>
<pre><code>$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
</code></pre>
<p>The <code>NodePort</code> (e.g. <code>32177</code>) is allocated automatically from the cluster's node-port range (by default 30000-32767) because <code>type: NodePort</code> is explicitly specified in your <code>web-service.yaml</code>. If you don't specify the <code>type</code> of the service, it will be created as <code>ClusterIP</code> type and will be accessible only within the kubernetes cluster. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Publishing Services (ServiceTypes)</a>.</p>
<p>When service is created with <code>ClusterIP</code> type, there won't be a <code>NodePort</code> in output. E.g.</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
</code></pre>
<p><code>External-IP</code> will pop up when the <code>LoadBalancer</code> service type is used. Additionally, for <code>minikube</code> the address will appear once you run <code>minikube tunnel</code> in a different shell. After that, your service will be accessible on your host machine via <code>External-IP</code> + <code>service port</code>.</p>
<hr />
<p>4 - There are no issues with such a mapping. Moreover, this is the default behaviour for kubernetes:</p>
<blockquote>
<p>Note: A Service can map any incoming port to a targetPort. By default
and for convenience, the targetPort is set to the same value as the
port field.</p>
</blockquote>
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define a service</a></p>
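<p>So for your question 4, a <code>3000:3000</code> mapping is perfectly fine; a sketch of your service with matching ports (only the port numbers changed):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - name: http
    protocol: TCP
    port: 3000        # port the service listens on inside the cluster
    targetPort: 3000  # container port; may be omitted since it defaults to "port"
</code></pre>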
<hr />
<p>Edit:</p>
<p>Depending on the <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">driver</a> of <code>minikube</code> (usually this is <code>virtualbox</code> or <code>docker</code>; on a linux VM it can be checked in <code>.minikube/profiles/minikube/config.json</code>), <code>minikube</code> can have different port forwarding. E.g. I have a <code>minikube</code> based on the <code>docker</code> driver and I can see some mappings:</p>
<pre><code>$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entrβ¦" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
</code></pre>
<p>For instance, 22 is forwarded so you can ssh into the <code>minikube</code> VM. This may also explain why you got a response from <code>http://127.0.0.1:52650/</code>.</p>
| moonkotte |
<p>Ingress is not forwarding traffic to pods.
Application is deployed on Azure Internal network.
I can access app successfully using pod Ip and port but when trying Ingress IP/ Host I am getting 404 not found. I do not see any error in Ingress logs.
Bellow are my config files.
Please help me if I am missing anything or a how I can troubleshoot to find issue.</p>
<p>Deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: aks-helloworld-one
spec:
replicas: 1
selector:
matchLabels:
app: aks-helloworld-one
template:
metadata:
labels:
app: aks-helloworld-one
spec:
containers:
- name: aks-helloworld-one
image: <image>
ports:
- containerPort: 8290
protocol: "TCP"
env:
- name: env1
valueFrom:
secretKeyRef:
name: configs
key: env1
volumeMounts:
- mountPath: "mnt/secrets-store"
name: secrets-mount
volumes:
- name: secrets-mount
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-keyvault"
imagePullSecrets:
- name: acr-secret
---
apiVersion: v1
kind: Service
metadata:
name: aks-helloworld-one
spec:
type: ClusterIP
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8290
selector:
app: aks-helloworld-one
</code></pre>
<p>Ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-world-ingress
namespace: ingress-basic
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aks-helloworld
port:
number: 80
</code></pre>
| megha | <p>Correct your service name and service port in ingress.yaml.</p>
<pre><code>spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
# wrong: name: aks-helloworld
name: aks-helloworld-one
port:
# wrong: number: 80
number: 8080
</code></pre>
<p>You can use the command below to confirm whether the ingress has any endpoints:</p>
<pre><code>kubectl describe ingress hello-world-ingress -n ingress-basic
</code></pre>
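<p>You can also confirm that the Service itself has endpoints backing it (an empty list would mean the selector does not match the pod labels):</p>
<pre><code>kubectl get endpoints aks-helloworld-one
</code></pre>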
| HKBN-ITDS |
<p>I am trying to debug my pod throwing CrashLoopBackOff error. When I run decribe command, I found that <code>Back-off restarting failed container</code> is the error. I excuted the logs for the failing pod and I got the below data.</p>
<pre><code>vagrant@master:~> kubectl logs pod_name
standard_init_linux.go:228: exec user process caused: exec format error
vagrant@master:/vagrant> kubectl logs -p pod_name
unable to retrieve container logs for containerd://db0f2dbd549676d8bf1026e5757ff45847c62152049b36037263f81915e948eavagrant
</code></pre>
<p>Why I am not able to execute the logs command?</p>
<p>More details:</p>
<p><a href="https://i.stack.imgur.com/wIejs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wIejs.png" alt="enter image description here" /></a></p>
<p>yaml file is as follows</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
service: udaconnect-app
name: udaconnect-app
spec:
ports:
- name: "3000"
port: 3000
targetPort: 3000
nodePort: 30000
selector:
service: udaconnect-app
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: udaconnect-app
name: udaconnect-app
spec:
replicas: 1
selector:
matchLabels:
service: udaconnect-app
template:
metadata:
labels:
service: udaconnect-app
spec:
containers:
- image: udacity/nd064-udaconnect-app:latest
name: udaconnect-app
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "64m"
limits:
memory: "256Mi"
cpu: "256m"
restartPolicy: Always
</code></pre>
<p>My vagrant file</p>
<pre><code>default_box = "opensuse/Leap-15.2.x86_64"
Vagrant.configure("2") do |config|
config.vm.define "master" do |master|
master.vm.box = default_box
master.vm.hostname = "master"
master.vm.network 'private_network', ip: "192.168.0.200", virtualbox__intnet: true
master.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: true
master.vm.network "forwarded_port", guest: 22, host: 2000 # Master Node SSH
master.vm.network "forwarded_port", guest: 6443, host: 6443 # API Access
for p in 30000..30100 # expose NodePort IP's
master.vm.network "forwarded_port", guest: p, host: p, protocol: "tcp"
end
master.vm.provider "virtualbox" do |v|
v.memory = "3072"
v.name = "master"
end
master.vm.provision "shell", inline: <<-SHELL
sudo zypper refresh
sudo zypper --non-interactive install bzip2
sudo zypper --non-interactive install etcd
sudo zypper --non-interactive install apparmor-parser
curl -sfL https://get.k3s.io | sh -
SHELL
end
config.vm.provider "virtualbox" do |vb|
vb.memory = "4096"
vb.cpus = 4
end
</code></pre>
<p>Any help is appreciated.</p>
| codeX | <p>Summarizing the comments: a <code>CrashLoopBackOff</code> with <code>exec format error</code> occurs when the container image is built for a different CPU architecture than the node it runs on (e.g. an AMD64 image on an ARM64 node). Your docker image <code>udacity/nd064-udaconnect-app</code> is published for <a href="https://hub.docker.com/r/udacity/nd064-udaconnect-app/tags" rel="nofollow noreferrer">AMD64</a>, so if the host running the <code>opensuse/Leap-15.2.x86_64</code> box is actually an <a href="https://en.opensuse.org/openSUSE:AArch64" rel="nofollow noreferrer">ARM64</a> machine, the binary cannot be executed there.</p>
<p>Hence, you have to use an image built for your node's architecture, or run the box on a host that matches the image's architecture, in order to resolve this issue.</p>
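<p>If you want to verify the mismatch yourself, compare the architecture reported by the nodes with the architecture of the host; both commands below are standard:</p>
<pre><code># architecture reported by each Kubernetes node
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# architecture of the VM itself (run inside the vagrant box)
uname -m
</code></pre>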
| Bazhikov |
<p>I am following <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">setting up the Azure File share to the pod</a>.</p>
<ul>
<li>created the namespace</li>
<li>created the secrets as specified</li>
<li>pod configuration</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test-storage-pod
namespace: storage-test
spec:
containers:
- image: nginx:latest
name: test-storage-pod
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: azure
mountPath: /mnt/azure-filestore
volumes:
- name: azure
azureFile:
secretName: azure-storage-secret
shareName: appdata/data
readOnly: false
</code></pre>
<ul>
<li><code>kubectl describe -n storage-test pod/<pod-name></code> or <code>kubectl get -n storage-test event</code></li>
</ul>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
2m13s Normal Scheduled pod/test-storage-pod Successfully assigned storage-test/test-storage-pod to aks-default-1231523-vmss00001a
6s Warning FailedMount pod/test-storage-pod MountVolume.SetUp failed for volume "azure" : Couldn't get secret default/azure-storage-secret
11s Warning FailedMount pod/test-storage-pod Unable to attach or mount volumes: unmounted volumes=[azure], unattached volumes=[default-token-gzxk8 azure]: timed out waiting for the condition
</code></pre>
<p>Question:</p>
<ul>
<li>the secret is created under the namespace storage-test as well, is that Kubelet first checks the storage under default namespace?</li>
</ul>
| Tim | <p>You are probably working in the default namespace; that's why the kubelet checks the default namespace first for the secret. Please try switching to your created namespace with the command:</p>
<blockquote>
<p>kubens storage-test</p>
</blockquote>
<p>Then try to run your pod under the storage-test namespace once again.</p>
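<p>If that alone does not help, double-check that the secret really exists in the pod's namespace; a sketch of creating it there (replace the account name and key with your own values):</p>
<pre><code>kubectl create secret generic azure-storage-secret -n storage-test \
  --from-literal=azurestorageaccountname=STORAGE_ACCOUNT_NAME \
  --from-literal=azurestorageaccountkey=STORAGE_ACCOUNT_KEY

kubectl get secret azure-storage-secret -n storage-test
</code></pre>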
| Bazhikov |
<p>I have created a container which is basically a directory containing configuration files. These configuration files will be used by another container which contains the application executables.
Both the containers are deployed in the same pod.</p>
<p>I have created the pv and pvc for the 1st container like -</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: app-pv
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/data
type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: app-pvc
labels:
type: local
spec:
storageClassName: standard
volumeName: app-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<p>Next I am creating the deployment of both the containers like -</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
spec:
selector:
matchLabels:
app: app
strategy:
type: Recreate
template:
metadata:
labels:
app: app
spec:
containers:
- name: lshome
image: <local_registry>/container1:latest
imagePullPolicy: Always
volumeMounts:
- name: container1-persistant-storage
mountPath: /home/app/config
- name: wildfly
image: <local_registry>/container2:latest
imagePullPolicy: Always
volumeMounts:
- name: container1-config
mountPath: /home/app/config
volumes:
- name: container1-persistant-storage
persistentVolumeClaim:
claimName: app-pvc
- name: container1-config
persistentVolumeClaim:
claimName: app-pvc
</code></pre>
<p>What I want is that the data under /home/app/config directory in container 1 be available to container 2 at the same directory structure. I have created the same directory structure in container 2 as well.</p>
<p>When I am trying to create the deployment, its giving the message - Unable to attach or mount volumes and then not able to create the deployment.</p>
| Toulick | <p>Don't define two volumes for the same PVC; delete <strong>container1-config</strong> and mount the single volume in both containers, like below.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
spec:
selector:
matchLabels:
app: app
strategy:
type: Recreate
template:
metadata:
labels:
app: app
spec:
containers:
- name: lshome
image: <local_registry>/container1:latest
imagePullPolicy: Always
volumeMounts:
- name: container1-persistant-storage
mountPath: /home/app/config
- name: wildfly
image: <local_registry>/container2:latest
imagePullPolicy: Always
volumeMounts:
- name: container1-persistant-storage
mountPath: /home/app/config
volumes:
- name: container1-persistant-storage
persistentVolumeClaim:
claimName: app-pvc
</code></pre>
| HKBN-ITDS |
<p>I'm using a HPA based on a custom metric on GKE.</p>
<p>The HPA is not working and it's showing me this error log:</p>
<blockquote>
<p>unable to fetch metrics from custom metrics API: the server is currently unable to handle the request</p>
</blockquote>
<p>When I run <code>kubectl get apiservices | grep custom</code> I get</p>
<blockquote>
<p>v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d</p>
</blockquote>
<p>this is the HPA spec config :</p>
<pre><code>spec:
scaleTargetRef:
kind: Deployment
name: api-name
apiVersion: apps/v1
minReplicas: 3
maxReplicas: 50
metrics:
- type: Object
object:
target:
kind: Service
name: api-name
apiVersion: v1
metricName: messages_ready_per_consumer
targetValue: '1'
</code></pre>
<p>and this is the service's spec config :</p>
<pre><code>spec:
ports:
- name: worker-metrics
protocol: TCP
port: 8080
targetPort: worker-metrics
selector:
app.kubernetes.io/instance: api
app.kubernetes.io/name: api-name
clusterIP: 10.8.7.9
clusterIPs:
- 10.8.7.9
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
</code></pre>
<p>What should I do to make it work ?</p>
| mohamed wael thabet | <p>First of all, confirm that the Metrics Server POD is running in your <code>kube-system</code> namespace. Also, you can use the following manifest:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
</code></pre>
<p>If so, take a look at the logs and look for any <em><strong>stackdriver adapter's</strong></em> lines. This issue is commonly caused by a problem with the <code>custom-metrics-stackdriver-adapter</code>. It usually crashes in the <code>metrics-server</code> namespace. To solve that, use the resource from this <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml" rel="nofollow noreferrer">URL</a>, and for the deployment, use this image:</p>
<pre><code>gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.1
</code></pre>
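<p>It is also worth checking why the APIService is marked <code>FailedDiscoveryCheck</code> and whether the adapter pod behind it is healthy; the adapter's namespace and deployment name depend on how it was installed, so adjust them accordingly:</p>
<pre><code>kubectl describe apiservice v1beta1.custom.metrics.k8s.io

# locate and inspect the pod that backs the APIService
kubectl get pods -A | grep -E 'prometheus-adapter|stackdriver-adapter'
kubectl logs -n ADAPTER_NAMESPACE deploy/ADAPTER_DEPLOYMENT
</code></pre>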
<p>Another common root cause of this is an <strong>OOM</strong> issue. In this case, adding more memory solves the problem. To assign more memory, you can specify the new memory amount in the configuration file, as the following example shows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: memory-demo
namespace: mem-example
spec:
containers:
- name: memory-demo-ctr
image: polinux/stress
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
</code></pre>
<p>In the above example, the Container has a memory request of 100 MiB and a memory limit of 200 MiB. In the manifest, the "--vm-bytes", "150M" argument tells the Container to attempt to allocate 150 MiB of memory. You can visit this Kubernetes Official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Documentation</a> to have more references about the Memory settings.</p>
<p>You can use the following threads for more reference <a href="https://stackoverflow.com/questions/61098043/gke-hpa-using-custom-metrics-unable-to-fetch-metrics">GKE - HPA using custom metrics - unable to fetch metrics</a>, <a href="https://stackoverflow.com/questions/60541105/stackdriver-metadata-agent-cluster-level-gets-oomkilled/60549732#60549732">Stackdriver-metadata-agent-cluster-level gets OOMKilled</a>, and <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/303" rel="nofollow noreferrer">Custom-metrics-stackdriver-adapter pod keeps crashing</a>.</p>
| Nestor Daniel Ortega Perez |
<p>I am toying with the spark operator in kubernetes, and I am trying to create a Spark Application resource with the following manifest.</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
namespace: spark-jobs
spec:
batchScheduler: volcano
batchSchedulerOptions:
priorityClassName: routine
type: Python
pythonVersion: "3"
mode: cluster
image: "<image_name>"
imagePullPolicy: Always
mainApplicationFile: local:///spark-files/csv_data.py
arguments:
- "10"
sparkVersion: "3.0.0"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
timeToLiveSeconds: 86400
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.0.0
serviceAccount: driver-sa
volumeMounts:
- name: sparky-data
mountPath: /spark-data
executor:
cores: 1
instances: 2
memory: "512m"
labels:
version: 3.0.0
volumeMounts:
- name: sparky-data
mountPath: /spark-data
volumes:
- name: sparky-data
hostPath:
path: /spark-data
</code></pre>
<p>I am running this in kind, where I have defined a volume mount to my local system where the data to be processed is present. I can see the volume being mounted in the kind nodes. But when I create the above resource, the driver pod crashes by giving the error 'no such path'. I printed the contents of the root directory of the driver pod and I could not see the mounted volume. What is the problem here and how do I fix this?</p>
| imawful | <p>The issue is likely related to permissions. When mounting a volume to a pod, you need to make sure that the permissions are set correctly; specifically, that the user or group running the application in the pod has the correct permissions to access the data. You should also make sure that the path to the volume is valid and that the volume is properly mounted. To check whether the path exists inside the pod, you can use the exec command:</p>
<pre><code>kubectl exec <pod_name> -- ls /spark-data
</code></pre>
<p>Try adding a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a>, which defines privilege and access control settings for a Pod.</p>
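<p>For illustration, a generic pod-level <code>securityContext</code> looks like the sketch below; the UID/GID values are placeholders, and the Spark operator exposes equivalent settings for the driver and executor pods, so check its CRD reference for the exact field names:</p>
<pre><code>spec:
  securityContext:
    runAsUser: 1000   # user the container processes run as
    runAsGroup: 1000
    fsGroup: 1000     # group ownership applied to mounted volumes
</code></pre>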
<p>For more information follow this <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">document</a>.</p>
| Sai Chandra Gadde |
<p>I'm trying to understand how the Kubernetes <strong>HorizontalPodAutoscaler</strong> works.
Until now, I have used the following configuration:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: my-deployment
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deployment
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 50
</code></pre>
<p>This uses the <code>targetCPUUtilizationPercentage</code> parameter but I would like to use a metric for the memory percentage used, but I was not able to find any example.
Any hint?</p>
<p>I found also that there is this type of configuration to support multiple metrics, but the <strong>apiVersion</strong> is <code>autoscaling/v2alpha1</code>. Can this be used in a production environment?</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2alpha1
metadata:
name: WebFrontend
spec:
scaleTargetRef:
kind: ReplicationController
name: WebFrontend
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 80
- type: Object
object:
target:
kind: Service
name: Frontend
metricName: hits-per-second
targetValue: 1k
</code></pre>
| Salvatore Calla' | <p>Here is a manifest example for what you need, that includes <strong>Memory Metrics</strong>:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: web-servers
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: web-servers
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 20
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 30Mi
</code></pre>
<p>An important thing to notice is that, as you can see, it uses the <strong>autoscaling/v2beta2 API version</strong>, so you need to follow all the previous instructions listed <a href="https://loft.sh/blog/kubernetes-horizontal-pod-autoscaling/" rel="nofollow noreferrer">here</a>.</p>
<p>Regarding the possibility of using <strong>autoscaling/v2alpha1</strong>: technically you can, as it includes support for scaling on memory and custom metrics as this <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">URL</a> specifies, but keep in mind that alpha versions are released for testing and are not recommended for production environments.</p>
<p>For more <strong>autoscaling/v2beta2 YAMLβs</strong> examples and a deeper look into memory metrics, you can take a look at this <a href="https://stackoverflow.com/questions/69184304/how-memory-metric-is-evaluated-by-kubernetes-hpa">thread</a>.</p>
| Nestor Daniel Ortega Perez |
<p>I have a statefulset deployed with 1 replica for jenkins. few days back the node on which jenkins pod was running went into NotReady State . Once Node went in NotReady state, Jenkins pod went in Terminating state and stuck there for long time until Node went back in Ready State.</p>
<p>Ideally, my Jenkins pod should have been re-scheduled to a healthy node in case my current node is not healthy. due to this my jenkins application had downtime for the time node was in Not Ready State.</p>
<p>Can anything be done in this case in order to prevent such downtime in statefulset pods</p>
<p>Kubectl version:</p>
<p>Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:11Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}</p>
| user3426197 | <p>Your issue could be caused by the cluster not having enough resources, or by the pod scheduler being unable to make a decision because the nodes don't carry the labels it needs. Look into the Kubernetes scheduler's logs to find out exactly what happened in this case. If the cluster's resources were insufficient, consider expanding the cluster or adding more nodes to better distribute the workload. In addition, make sure that all of the cluster's nodes have accurate labels so that the pod scheduler knows which node is best for running your Jenkins pod.</p>
<p>A <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">node affinity</a> rule in your Jenkins statefulset's manifest can help reduce downtime caused by a node entering the Not Ready state: the pod will only be scheduled onto nodes that match particular labels. Note that node readiness is a condition, not a label, so you (or your tooling) would have to maintain health-related labels on the nodes yourself and use those in the affinity rule; that way the pod can be rescheduled to a node you consider healthy. Last but not least, consider using a pod disruption budget for your statefulset, which limits how many pods can be disrupted at once.</p>
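<p>A sketch of such an affinity rule; note that the <code>node-health</code> label here is hypothetical, since node readiness is a condition rather than a label, so you or your tooling would need to maintain a label like this on healthy nodes:</p>
<pre><code>spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-health     # hypothetical label maintained by your own tooling
            operator: In
            values:
            - ok
</code></pre>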
<p>Attaching supporting <a href="https://komodor.com/learn/how-to-fix-kubernetes-node-not-ready-error/" rel="nofollow noreferrer">blog-1</a> and <a href="https://www.datadoghq.com/blog/debug-kubernetes-pending-pods/" rel="nofollow noreferrer">blog-2</a> for reference.</p>
| Sai Chandra Gadde |
<p>Can someone please help to spot the issue with <code>ingress-2</code> ingress rule? why <code>ingress-1</code> is working vs <code>ingress-2</code> is not working.</p>
<p><strong>Description of my setup, I have two deployments:</strong></p>
<p>1st deployment is of <code>nginx</code><br />
2nd deployment is of <code>httpd</code></p>
<p>Both of the deployments are exposed via <code>ClusterIP</code> services named <code>nginx-svc</code> and <code>httpd-svc</code> respectively. All the <code>endpoints</code> are proper for the services. However, while
setting up the ingress for these services, I am not able to setup the ingress using <code>host</code> (as described in <code>ingress-2</code>). however, when I am using <code>ingress-1</code>, things work fine.</p>
<p>// my host file for name resolution</p>
<pre><code>grep myapp.com /etc/hosts
127.0.0.1 myapp.com
</code></pre>
<p>// deployment details</p>
<pre><code>kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 29m
httpd 3/3 3 3 29m
</code></pre>
<p>// service details</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7h48m
nginx-svc ClusterIP 10.152.183.233 <none> 80/TCP 28m
httpd-svc ClusterIP 10.152.183.58 <none> 80/TCP 27m
</code></pre>
<p>// endpoints details</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:16443 7h51m
nginx-svc 10.1.198.86:80,10.1.198.87:80,10.1.198.88:80 31m
httpd-svc 10.1.198.89:80,10.1.198.90:80,10.1.198.91:80 31m
</code></pre>
<p>Attempt-1: <code>ingress-1</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-1
spec:
rules:
- http:
paths:
- path: /nginx
pathType: Prefix
backend:
service:
name: nginx-svc
port:
number: 80
- path: /httpd
pathType: Prefix
backend:
service:
name: httpd-svc
port:
number: 80
</code></pre>
<p>// Example showing that ingress routing is working fine when <code>ingress-1</code> is used:</p>
<pre><code> curl myapp.com/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
curl myapp.com/httpd
<html><body><h1>It works!</h1></body></html>
</code></pre>
<p>// following ingress rule is not working as I was expecting</p>
<p>Attempt-2: <code>ingress-2</code></p>
<pre><code>kind: Ingress
metadata:
name: ingress-2
spec:
rules:
- host: "myapp.com"
http:
paths:
- pathType: Prefix
path: "/nginx"
backend:
service:
name: nginx-svc
port:
number: 80
- pathType: Prefix
path: "/httpd"
backend:
service:
name: httpd-svc
port:
number: 80
</code></pre>
<p>// I could not spot any issue in the ing describe</p>
<pre><code>kubectl describe ingress ingress-2
Name: ingress-2
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
myapp.com
/nginx nginx-svc:80 (10.1.198.86:80,10.1.198.87:80,10.1.198.88:80)
/httpd httpd-svc:80 (10.1.198.89:80,10.1.198.90:80,10.1.198.91:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m15s (x2 over 10m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>// example showing ingress routing is not working with this ingress resource</p>
<pre><code>curl myapp.com/nginx
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
curl myapp.com/httpd
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL was not found on this server.</p>
</body></html>
</code></pre>
| monk | <h2 id="difference-between-ingresses">Difference between ingresses</h2>
<p>I created a one node <code>microk8s</code> cluster <a href="https://microk8s.io/" rel="noreferrer">following official documentation</a> and I wasn't able to reproduce behaviour you described which is correct behaviour. Added two pods with <code>mendhak/http-https-echo</code> image (highly recommend: very convenient for troubleshooting ingress or understanding how ingress works) and two services for each of pods.</p>
<p>The difference between two ingress rules is first ingress rule listens on all domains (HOSTS):</p>
<pre><code>$ mkctl get ing -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-1 public * 127.0.0.1 80 2m53s
$ curl -I --header "Host: myapp.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: example.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: myapp.com" http://127.0.0.1/missing_url
HTTP/1.1 404 Not Found
</code></pre>
<p>While the second ingress rule will serve only <code>myapp.com</code> domain (HOST):</p>
<pre><code>$ mkctl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2 public myapp.com 127.0.0.1 80 60s
$ curl -I --header "Host: myapp.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: example.com" http://127.0.0.1/httpd
HTTP/1.1 404 Not Found
</code></pre>
<h2 id="what-exactly-happens">What exactly happens</h2>
<p>The last results in your question actually show that ingress is working as expected. You're getting responses not from the <code>kubernetes ingress</code> but from pods within the cluster. The first response is a <code>404</code> from <code>nginx 1.21.1</code> and the second is a <code>404</code> from <code>apache</code>.</p>
<p>This happens because ingress sends requests to pods with the same <code>path</code> from URL without any transformations. For instance (this output I got using image mentioned above):</p>
<pre><code>$ curl myapp.com/httpd
{
"path": "/httpd"
...
</code></pre>
<p>While both <code>nginx</code> and <code>apache</code> are serving on <code>/</code>.</p>
<h2 id="how-to-resolve-it">How to resolve it</h2>
<p>Nginx ingress has a lot of features and one of them is <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">rewriting</a> which helps to transform <code>paths</code> from what ingress gets to what goes to pods.</p>
<p>For example, if request goes to <code>http://myapp.com/nginx</code> then it will be directed to <code>nginx</code> service with <code>/nginx</code> path which will cause <code>nginx</code> to throw <code>404</code> since there's nothing on this <code>path</code>.</p>
<p>Ingress rule below fixes this by adding <code>rewrite-target</code> to <code>/</code> which we need to pass to <code>nginx</code> and <code>apache</code> services:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-2
annotations:
# kubernetes.io/ingress.class: nginx # this should be uncommented if ingress used in "regular" cluster
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myapp.com
http:
paths:
- path: /nginx
pathType: Prefix
backend:
service:
name: service-a
port:
number: 80
- path: /httpd
pathType: Prefix
backend:
service:
name: service-b
port:
number: 80
</code></pre>
<p>Quick test how it works:</p>
<pre><code>$ curl myapp.com/nginx
{
"path": "/",
...
</code></pre>
<p>And</p>
<pre><code>$ curl myapp.com/httpd
{
"path": "/",
...
</code></pre>
<p>As you can see now <code>path</code> is <code>/</code>.</p>
<p>Switching image to <code>nginx</code> and:</p>
<pre><code>$ curl myapp.com/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
</code></pre>
<h2 id="useful-links">Useful links</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Kubernetes ingress</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">Nginx ingress - rewrite</a></li>
</ul>
| moonkotte |
<p>We can check the service accounts in Kubernetes Cluster. Likewise, Is it possible to check the existing users and groups of my Kubernetes cluster with Cluster Admin privileges. If yes then how ? If no then why ?</p>
<p>NOTE: I am using EKS</p>
| Aman | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>This won't answer everything, however there are some concepts and ideas.</p>
<p>In short, there's no easy way: it's not possible to do this using kubernetes itself. The reason for this is:</p>
<blockquote>
<p>All Kubernetes clusters have two categories of users: service accounts
managed by Kubernetes, and normal users.</p>
<p>It is assumed that a cluster-independent service manages normal users
in the following ways:</p>
<ul>
<li>an administrator distributing private keys</li>
<li>a user store like Keystone or Google Accounts</li>
<li>a file with a list of usernames and passwords</li>
</ul>
<p>In this regard, Kubernetes does not have objects which represent normal
user accounts. Normal users cannot be added to a cluster through an
API call.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">Source</a></p>
<p><a href="https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind">More details and examples from another answer on SO</a></p>
<hr />
<p>As for the EKS part that is mentioned, this is handled through AWS IAM in combination with kubernetes RBAC. The articles below describe setting up IAM roles in a kubernetes cluster; in the same way it is possible to find which role has <code>cluster admin</code> permissions (see also the commands after the links):</p>
<ul>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">Managing users or IAM roles for your cluster</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/" rel="nofollow noreferrer">provide access to other IAM users and roles</a></li>
</ul>
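<p>In practice, for EKS the IAM-to-RBAC mapping lives in the <code>aws-auth</code> ConfigMap, and cluster-admin bindings can be listed with plain RBAC queries, e.g.:</p>
<pre><code># IAM users/roles mapped into the cluster
kubectl get configmap aws-auth -n kube-system -o yaml

# who is bound to the built-in cluster-admin ClusterRole
kubectl get clusterrolebindings -o wide | grep cluster-admin
</code></pre>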
<p>If another tool is used for identity management (e.g. LDAP), users and groups should be checked there.</p>
| moonkotte |
<p>I'm trying to learn Kubernetes as I go and I'm currently trying to deploy a test application that I made.</p>
<p>I have 3 containers and each container is running on its own pod</p>
<ul>
<li>Front end App (Uses Nuxtjs)</li>
<li>Backend API (Nodejs)</li>
<li>MongoDB</li>
</ul>
<p>For the Front End container I have configured an External Service (LoadBalancer) which is working fine. I can access the app from my browser with no issues.</p>
<p>For the backend API and MongoDB I configured an Internal Service for each. The communication between Backend API and MongoDB is working. The problem that I'm having is communicating the Frontend with the Backend API.</p>
<p>I'm using the Axios component in Nuxtjs and in the nuxtjs.config.js file I have set the Axios Base URL to be http://service-name:portnumber/. But that does not work, I'm guessing its because the url is being call from the client (browser) side and not from the server. If I change the Service type of the Backend API to LoadBalancer and configure an IP Address and Port Number, and use that as my axios URL then it works. However I was kind of hoping to keep the BackEnd-API service internal. Is it possible to call the Axios base URL from the server side and not from client-side.</p>
<p>Any help/guidance will be greatly appreciated.</p>
<p>Here is my Front-End YML File</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mhov-ipp
name: mhov-ipp
namespace: mhov
spec:
replicas: 1
selector:
matchLabels:
app: mhov-ipp
template:
metadata:
labels:
app: mhov-ipp
spec:
containers:
- image: mhov-ipp:1.1
name: mhov-ipp
ports:
- containerPort: 8080
env:
- name: NODE_ENV
value: "development"
- name: PORT
value: "8080"
- name: TITLE
value: "MHOV - IPP - K8s"
- name: API_URL
value: "http://mhov-api-service:4000/"
---
apiVersion: v1
kind: Service
metadata:
name: mhov-ipp-service
spec:
selector:
app: mhov-ipp
type: LoadBalancer
ports:
- protocol: TCP
port: 8082
targetPort: 8080
nodePort: 30600
</code></pre>
<p>Here is the backend YML File</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mhov-api-depl
labels:
app: mhov-api
spec:
replicas: 1
selector:
matchLabels:
app: mhov-api
template:
metadata:
labels:
app: mhov-api
spec:
containers:
- name: mhov-api
image: mhov-api:1.0
ports:
- containerPort: 4000
env:
- name: mongoURI
valueFrom:
configMapKeyRef:
name: mhov-configmap
key: database_url
---
apiVersion: v1
kind: Service
metadata:
name: mhov-api-service
spec:
selector:
app: mhov-api
ports:
- protocol: TCP
port: 4000
targetPort: 4000
</code></pre>
| Carlos Sosa | <h2 id="what-is-ingress-and-how-to-install-it">What is ingress and how to install it</h2>
<p>Your guess is correct. Frontend is running in browser and browser "doesn't know" where backend is and how to reach out to it. You have two options here:</p>
<ul>
<li>as you did with exposing backend outside your cluster</li>
<li>use advanced solution such as <code>ingress</code></li>
</ul>
<p>This will move you forward, but it requires changing some configuration of your application, such as the URL, since the application will be exposed to "the internet" (not really, but it can be if you run this in the cloud).</p>
<p>What is <code>ingress</code>:</p>
<blockquote>
<p>Ingress is api object which exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.</p>
</blockquote>
<p>Most common option is <code>nginx ingress</code> - their page <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p>
<p>Installation depends on the cluster type, however I suggest using <code>helm</code> (if you're not familiar with <code>helm</code>, it's a template engine which uses charts to install and set up applications; there are quite a lot of already created charts, e.g. <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">ingress-nginx</a>).</p>
<p>If you're using <code>minikube</code> for example, it already has built-in <code>nginx-ingress</code> and can be enabled as addon.</p>
<h2 id="how-to-expose-services-using-ingress">How to expose services using ingress</h2>
<p>Once you have a working ingress, it's time to create rules for it.</p>
<p>What you need is an ingress which is able to reach both the frontend and the backend services.</p>
<p>Example taken from official kubernetes documentation:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: service1
port:
number: 4200
- path: /bar
pathType: Prefix
backend:
service:
name: service2
port:
number: 8080
</code></pre>
<p>In this example, there are two different services available on different <code>paths</code> within the <code>foo.bar.com</code> hostname and both services are within the cluster. No need to expose them out of the cluster since traffic will be directed through <code>ingress</code>.</p>
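<p>Adapted to the services from your manifests it could look roughly like the sketch below; the host name and paths are just an example, and depending on how your backend routes requests you may also need a rewrite annotation as described in the ingress docs:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mhov-ingress
spec:
  rules:
  - host: mhov.example.com      # example host
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: mhov-api-service
            port:
              number: 4000
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mhov-ipp-service
            port:
              number: 8082
</code></pre>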
<h2 id="actual-solution-how-to-approach">Actual solution (how to approach)</h2>
<p><a href="https://stackoverflow.com/q/67470540/15537201">This is very similar configuration</a> which was fixed and started working as expected. This is my answer and safe to share :)</p>
<p>As you can see OP faced the same issue when frontend was accessible, while backend wasn't.</p>
<p>Feel free to use anything out of that answer/repository.</p>
<h2 id="useful-links">Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress</a></li>
</ul>
| moonkotte |
<p>I've been reading the Google Cloud documentation about <strong>hybrid GKE cluster</strong> with <a href="https://cloud.google.com/anthos/multicluster-management/connect/registering-a-cluster" rel="nofollow noreferrer">Connect</a> or completely on prem with GKE on-prem and VMWare.</p>
<p>However, I see that GKE with <strong>Connect</strong> you can manage the on-prem Kubernetes cluster from Google Cloud dashboard.</p>
<p>But, what I am trying to find, is, to mantain a hybrid cluster with GKE mixing <strong>on-prem and cloud nodes</strong>. Graphical example:</p>
<p><a href="https://i.stack.imgur.com/r9ZJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r9ZJ0.png" alt="enter image description here" /></a></p>
<p>For the above solution, the master node is managed by GCloud, but the ideal solution is to manage <strong>multiple node masters</strong> (High availability) <strong>on cloud</strong> and nodes on prem. Graphical example:</p>
<p><a href="https://i.stack.imgur.com/B78pa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B78pa.png" alt="enter image description here" /></a></p>
<p>Is it possible to apply some or both of the proposed solutions on Google Cloud with GKE?</p>
| user1911 | <p>If you want to maintain hybrid clusters, mixing on prem and cloud nodes, you need to use Anthos.</p>
<p>Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.</p>
<p>The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.</p>
<p>If you want to know more about Anthos in GCP please <a href="https://cloud.google.com/anthos/docs/concepts/overview" rel="nofollow noreferrer">follow this link.</a></p>
| Ismael Clemente Aguirre |
<p>I created a cronjob with the following spec in GKE:</p>
<pre><code># cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: collect-data-cj-111
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: Allow
startingDeadlineSeconds: 100
suspend: false
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: collect-data-cj-111
image: collect_data:1.3
restartPolicy: OnFailure
</code></pre>
<p>I create the cronjob with the following command:</p>
<pre><code>kubectl apply -f collect_data.yaml
</code></pre>
<p>When I later watch if it is running or not (as I scheduled it to run every 5th minute for for the sake of testing), here is what I see:</p>
<pre><code>$ kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 0s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ContainerCreating 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 3s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 17s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 30s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 44s
</code></pre>
<p>It does not seem to be able to pull the image from Artifact Registry. I have both GKE and Artifact Registry created under the same project.</p>
<p>What can be the reason? After spending several hours in docs, I still could not make progress and I am quite new in the world of GKE.</p>
<p>If you happen to recommend me to check anything, I really appreciate if you also describe where in GCP I should check/control your recommendation.</p>
<hr />
<p>ADDENDUM:</p>
<p>When I run the following command:</p>
<pre><code>kubectl describe pods
</code></pre>
<p>The output is quite large but I guess the following message should indicate the problem.</p>
<pre><code> Failed to pull image "collect_data:1.3": rpc error: code = Unknown
desc = failed to pull and unpack image "docker.io/library/collect_data:1.3":
failed to resolve reference "docker.io/library/collect_data:1.3": pull
access denied, repository does not exist or may require authorization:
server message: insufficient_scope: authorization failed
</code></pre>
<p>How do I solve this problem step by step?</p>
| edn | <p>From the error shared, I can tell that the image is not being pulled from Artifact Registry; by default, GKE pulls images directly from Docker Hub unless specified otherwise, and since there is no collect_data image there, you get the error.</p>
<p>The correct way to specify an image stored in Artifact Registry is as follows:</p>
<pre><code>image: <location>-docker.pkg.dev/<project>/<repo-name>/<image-name:tag>
</code></pre>
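<p>For example, if the repository were in <code>us-central1</code> under a repo called <code>my-repo</code> in project <code>my-gcp-project</code> (all placeholders), the CronJob would reference:</p>
<pre><code>image: us-central1-docker.pkg.dev/my-gcp-project/my-repo/collect_data:1.3
</code></pre>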
<p>Be aware that the registry format has to be set to "docker" if you are using a docker-containerized image.</p>
<p>Take a look at the <a href="https://cloud.google.com/artifact-registry/docs/docker/quickstart#gcloud" rel="nofollow noreferrer">Quickstart for Docker</a> guide, where it is specified how to pull and push docker images to Artifact Registry along with the permissions required.</p>
| Gabriel Robledo Ahumada |
<p>I am trying to use the kubernetes extension in vscode.
However, when I try to click on any item in the menu list (see image), I receive the error popup <code>Unable to connect to the server: Forbidden</code>.</p>
<p><a href="https://i.stack.imgur.com/1JaDO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1JaDO.png" alt="image" /></a></p>
<p>The kubernetes debug logs are however completely empty, and the kubectl CLI also seems to work fine. For example the command <code>kubectl config get-contexts</code> returns:</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
....
* ftxt-gpus-dev.oa ftxt-gpus-dev.oa username my-namespace
</code></pre>
<p>When I run <code>kubectl auth can-i --list</code> I get the following:</p>
<pre><code>Resources Non-Resource URLs Resource Names Verbs
pods/exec [] [] [*]
pods/portforward [] [] [*]
pods/status [] [] [*]
pods [] [] [*]
secrets [] [] [*]
cronjobs.batch [] [] [*]
jobs.batch [] [] [*]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
events [] [] [get list watch]
namespaces/status [] [] [get list watch]
namespaces [] [] [get list watch]
nodes/status [] [] [get list watch]
nodes [] [] [get list watch]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
</code></pre>
| Ford O. | <p>This error means that the correct Role-Based Access Control (RBAC) permissions or the correct authorization policy are not set. To fix this error, you should first check the RBAC permissions for the user account you are attempting to use. You can do this by running the command <code>kubectl get clusterrolebinding</code> to view the current RBAC bindings. If you don't have a suitable role binding, try to create one using <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC</a>.</p>
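<p>A minimal sketch of such a binding, granting the built-in read-only <code>view</code> ClusterRole to a user (the user name is a placeholder and must match what your authentication layer reports):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: username-view
subjects:
- kind: User
  name: username               # placeholder: the authenticated user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
</code></pre>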
<p>You should also check whether any additional authorization layer is denying the request. For example, if a service mesh such as Istio is installed, its policies can be listed with <code>kubectl get authorizationpolicies</code> (this resource only exists when such CRDs are installed). If a policy is set to deny access, update it to allow the user to access the cluster.</p>
| Sai Chandra Gadde |
<p>Our system runs on GKE in a VPC-native network.
We've recently upgraded from v1.9 to v1.21, and when we transferred the configuration, I've noticed the spec.template.spec.affinity.nodeAffinity in out kube-dns deployment is deleted and ignored.
I tried manually adding this with "kubectl apply -f kube-dns-deployment.yaml"</p>
<p>I get "deployment.apps/kube-dns configured", but after a few seconds the kube-dns reverts to a configuration without this affinity.</p>
<p>This is the relevant code in the yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
name: kube-dns
namespace: kube-system
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
components.gke.io/component-name: kubedns
prometheus.io/port: "10054"
prometheus.io/scrape: "true"
scheduler.alpha.kubernetes.io/critical-pod: ""
seccomp.security.alpha.kubernetes.io/pod: runtime/default
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: cloud.google.com/gke-nodepool
operator: In
values:
- pool-1
weight: 20
- preference:
matchExpressions:
- key: cloud.google.com/gke-nodepool
operator: In
values:
- pool-3
- training-pool
weight: 1
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cloud.google.com/gke-nodepool
operator: In
values:
- pool-1
- pool-3
- training-pool
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- kube-dns
topologyKey: kubernetes.io/hostname
weight: 100
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- kube-dns
topologyKey: cloud.google.com/hostname
containers:
....
dnsPolicy: Default
nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>This is what I get when I run <em>$ kubectl get deployment kube-dns -n kube-system -o yaml</em>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
....
labels:
addonmanager.kubernetes.io/mode: Reconcile
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
name: kube-dns
namespace: kube-system
resourceVersion: "16650828"
uid: ....
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
components.gke.io/component-name: kubedns
prometheus.io/port: "10054"
prometheus.io/scrape: "true"
scheduler.alpha.kubernetes.io/critical-pod: ""
seccomp.security.alpha.kubernetes.io/pod: runtime/default
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- kube-dns
topologyKey: kubernetes.io/hostname
weight: 100
containers:
...
dnsPolicy: Default
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65534
supplementalGroups:
- 65534
serviceAccount: kube-dns
serviceAccountName: kube-dns
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: components.gke.io/gke-managed-components
operator: Exists
volumes:
- configMap:
defaultMode: 420
name: kube-dns
optional: true
name: kube-dns-config
status:
...
</code></pre>
<p>As you can see, GKE just REMOVES the NodeAffinity part, as well as one part of the podAffinity.</p>
| Yonatan Huber | <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns" rel="nofollow noreferrer">kube-dns</a> is a service discovery mechanism within GKE, and the default DNS provider used by the clusters. It is managed by Google and that is why the changes are not holding, and most probably that part of the code was removed in the new version.</p>
<p>If you need to apply a custom configuration, you can do that following the guide <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns" rel="nofollow noreferrer">Setting up a custom kube-dns Deployment</a>.</p>
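<p>As a rough sketch of the first step from that guide (the resource name below is the one GKE creates in <code>kube-system</code> and may change between versions), you scale down the GKE-managed kube-dns autoscaler before deploying a custom kube-dns Deployment that you fully control, which is where your <code>nodeAffinity</code> can live:</p>
<pre><code># stop the managed autoscaler from adjusting kube-dns before deploying a custom one
kubectl scale deployment kube-dns-autoscaler --replicas=0 -n kube-system
</code></pre>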
| Gabriel Robledo Ahumada |
<p>I need to create a Kubernetes clientset using a token extracted from JSON service account key file.</p>
<p>I explicitly provide this token inside the config, however it still looks for Google Application-Default credentials, and crashes because it cannot find them.</p>
<p>Below is my code:</p>
<pre><code>package main
import (
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
gke "google.golang.org/api/container/v1"
"google.golang.org/api/option"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/clientcmd/api"
)
const (
projectID = "my_project_id"
clusterName = "my_cluster_name"
scope = "https://www.googleapis.com/auth/cloud-platform"
)
func main() {
ctx := context.Background()
// Read JSON key and extract the token
data, err := ioutil.ReadFile("sa_key.json")
if err != nil {
panic(err)
}
creds, err := google.CredentialsFromJSON(ctx, data, scope)
if err != nil {
panic(err)
}
token, err := creds.TokenSource.Token()
if err != nil {
panic(err)
}
fmt.Println("token", token.AccessToken)
// Create GKE client
tokenSource := oauth2.StaticTokenSource(token)
gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource))
if err != nil {
panic(err)
}
// Create a dynamic kube config
inMemKubeConfig, err := createInMemKubeConfig(ctx, gkeClient, token, projectID)
if err != nil {
panic(err)
}
// Use it to create a rest.Config
config, err := clientcmd.NewNonInteractiveClientConfig(*inMemKubeConfig, clusterName, &clientcmd.ConfigOverrides{CurrentContext: clusterName}, nil).ClientConfig()
if err != nil {
panic(err)
}
// Create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err) // this where the code crashes because it can't find the Google ADCs
}
fmt.Printf("clientset %+v\n", clientset)
}
func createInMemKubeConfig(ctx context.Context, client *gke.Service, token *oauth2.Token, projectID string) (*api.Config, error) {
k8sConf := api.Config{
APIVersion: "v1",
Kind: "Config",
Clusters: map[string]*api.Cluster{},
AuthInfos: map[string]*api.AuthInfo{},
Contexts: map[string]*api.Context{},
}
// List all clusters in project with id projectID across all zones ("-")
resp, err := client.Projects.Zones.Clusters.List(projectID, "-").Context(ctx).Do()
if err != nil {
return nil, err
}
for _, f := range resp.Clusters {
name := fmt.Sprintf("gke_%s_%s_%s", projectID, f.Zone, f.Name) // My custom naming convention
cert, err := base64.StdEncoding.DecodeString(f.MasterAuth.ClusterCaCertificate)
if err != nil {
return nil, err
}
k8sConf.Clusters[name] = &api.Cluster{
CertificateAuthorityData: cert,
Server: "https://" + f.Endpoint,
}
k8sConf.Contexts[name] = &api.Context{
Cluster: name,
AuthInfo: name,
}
k8sConf.AuthInfos[name] = &api.AuthInfo{
Token: token.AccessToken,
AuthProvider: &api.AuthProviderConfig{
Name: "gcp",
Config: map[string]string{
"scopes": scope,
},
},
}
}
return &k8sConf, nil
}
</code></pre>
<p>and here is the error message:</p>
<pre><code>panic: cannot construct google default token source: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
</code></pre>
| Emre Chomko | <p>Here's what worked for me.</p>
<p>It is based on this <a href="https://gist.github.com/ahmetb/548059cdbf12fb571e4e2f1e29c48997" rel="nofollow noreferrer">gist</a>
and it's exactly what I was looking for. It uses an <code>oauth2.TokenSource</code> object which can be fed with a variety of token types so it's quite flexible.</p>
<p>It took me a long time to find this solution so I hope this helps somebody!</p>
<pre><code>package main
import (
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"log"
"net/http"
gke "google.golang.org/api/container/v1"
"google.golang.org/api/option"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)
const (
googleAuthPlugin = "gcp"
projectID = "my_project"
clusterName = "my_cluster"
zone = "my_cluster_zone"
scope = "https://www.googleapis.com/auth/cloud-platform"
)
type googleAuthProvider struct {
tokenSource oauth2.TokenSource
}
// These functions are needed even if we don't utilize them
// so that googleAuthProvider satisfies the rest.AuthProvider interface
func (g *googleAuthProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper {
return &oauth2.Transport{
Base: rt,
Source: g.tokenSource,
}
}
func (g *googleAuthProvider) Login() error { return nil }
func main() {
ctx := context.Background()
// Extract a token from the JSON SA key
data, err := ioutil.ReadFile("sa_key.json")
if err != nil {
panic(err)
}
creds, err := google.CredentialsFromJSON(ctx, data, scope)
if err != nil {
panic(err)
}
token, err := creds.TokenSource.Token()
if err != nil {
panic(err)
}
tokenSource := oauth2.StaticTokenSource(token)
// Authenticate with the token
// If it's nil use Google ADC
if err := rest.RegisterAuthProviderPlugin(googleAuthPlugin,
func(clusterAddress string, config map[string]string, persister rest.AuthProviderConfigPersister) (rest.AuthProvider, error) {
var err error
if tokenSource == nil {
tokenSource, err = google.DefaultTokenSource(ctx, scope)
if err != nil {
return nil, fmt.Errorf("failed to create google token source: %+v", err)
}
}
return &googleAuthProvider{tokenSource: tokenSource}, nil
}); err != nil {
log.Fatalf("Failed to register %s auth plugin: %v", googleAuthPlugin, err)
}
gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource))
if err != nil {
panic(err)
}
    clientset, err := getClientSet(ctx, gkeClient, projectID, clusterName)
if err != nil {
panic(err)
}
// Demo to make sure it works
pods, err := clientset.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
if err != nil {
panic(err)
}
log.Printf("There are %d pods in the cluster", len(pods.Items))
for _, pod := range pods.Items {
fmt.Println(pod.Name)
}
}
func getClientSet(ctx context.Context, client *gke.Service, projectID, name string) (*kubernetes.Clientset, error) {
// Get cluster info
cluster, err := client.Projects.Zones.Clusters.Get(projectID, zone, name).Context(ctx).Do()
if err != nil {
panic(err)
}
// Decode cluster CA certificate
cert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate)
if err != nil {
return nil, err
}
// Build a config using the cluster info
config := &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
CAData: cert,
},
Host: "https://" + cluster.Endpoint,
AuthProvider: &clientcmdapi.AuthProviderConfig{Name: googleAuthPlugin},
}
return kubernetes.NewForConfig(config)
}
</code></pre>
| Emre Chomko |
<p>We have created two machine deployments.</p>
<pre><code>kubectl get machinedeployment -A
NAMESPACE NAME REPLICAS AVAILABLE-REPLICAS PROVIDER OS KUBELET AGE
kube-system abc 3 3 hetzner ubuntu 1.24.9 116m
kube-system vnr4jdxd6s-worker-tgl65w 1 1 hetzner ubuntu 1.24.9 13d
</code></pre>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
abc-b6647d7cb-bcprj Ready <none> 62m v1.24.9
abc-b6647d7cb-llsq8 Ready <none> 65m v1.24.9
abc-b6647d7cb-mtlsl Ready <none> 58m v1.24.9
vnr4jdxd6s-worker-tgl65w-59ff7fc46c-d9tm6 Ready <none> 13d v1.24.9
</code></pre>
<p>We know that we can add a label to a specific node</p>
<pre><code>kubectl label nodes abc-b6647d7cb-bcprj key=value
</code></pre>
<p>But our nodes are autoscaled.
We would like to install, for example, MariaDB Galera on a specific machinedeployment's nodes.
Is it somehow possible to annotate all nodes belonging to a particular machinedeployment?</p>
| portableunit | <p>You can use the <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/" rel="nofollow noreferrer">kubectl annotate</a> command to annotate nodes with a specific key-value pair. kubectl itself does not know about machinedeployments, so select the nodes through a label that the machinedeployment's nodes carry (or annotate every node with <code>--all</code>). For example:</p>
<pre><code># annotate only the nodes matching a label (check `kubectl get nodes --show-labels` for the real label)
kubectl annotate nodes -l <node-label>=<value> key=value

# or annotate every node in the cluster
kubectl annotate nodes --all key=value
</code></pre>
<p>This will annotate the selected nodes with the specified key-value pair.</p>
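<p>If the end goal is to run something like MariaDB Galera only on one machinedeployment's nodes, the usual pattern is a node label plus a <code>nodeSelector</code>. The label key/value below is purely illustrative; check what your nodes actually carry with <code>kubectl get nodes --show-labels</code>:</p>
<pre><code># pod spec fragment (illustrative label)
spec:
  nodeSelector:
    workload: galera
</code></pre>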
<p>For more information follow this <a href="https://www.kubermatic.com/blog/annotating-machine-deployment-for-autoscaling/" rel="nofollow noreferrer">blog by Seyi Ewegbemi</a>.</p>
| Sai Chandra Gadde |
<p>I created a Node.js TLS server, dockerized it, and created a K8S Deployment and ClusterIP service for it. I created a DNS record for the LoadBalancer service external IP of istio-ingressgateway, and I'm using this DNS name to try to access the TLS server through Istio, but for some reason this error appears:</p>
<pre><code>[2022-02-10T04:28:38.302Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 3087 - "-" "-" "-" "-" "-" "-" - - 10.120.22.33:7070 10.101.31.172:44748 - -
</code></pre>
<p>The node server.js file:</p>
<pre><code>const tls = require("tls");
const fs = require("fs");
const options = {
key: fs.readFileSync("server-key.pem"),
cert: fs.readFileSync("server-cert.pem"),
rejectUnauthorized: false,
};
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = 0;
const server = tls.createServer(options, (socket) => {
console.log(
"server connected",
socket.authorized ? "authorized" : "unauthorized"
);
socket.write("welcome!\n");
socket.setEncoding("utf8");
socket.pipe(socket);
});
server.listen(7070, () => {
console.log("server bound");
});
</code></pre>
<p>The client.js file I use to connect to the server:</p>
<pre><code>const tls = require("tls");
const fs = require("fs");
const options = {
ca: [fs.readFileSync("server-cert.pem", { encoding: "utf-8" })],
};
var socket = tls.connect(
7070,
"HOSTNAME",
options,
() => {
console.log(
"client connected",
socket.authorized ? "authorized" : "unauthorized"
);
process.stdin.pipe(socket);
process.stdin.resume();
}
);
socket.setEncoding("utf8");
socket.on("data", (data) => {
console.log(data);
});
socket.on("end", () => {
console.log("Ended");
});
</code></pre>
<p>The cluster service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nodejs-service
namespace: nodejs-tcp
spec:
ports:
- name: web
port: 7070
protocol: TCP
targetPort: 7070
selector:
app: nodejs
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>The istio-gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: nodejs-gw
namespace: nodejs-tcp
spec:
selector:
istio: istio-ingressgateway
servers:
- hosts:
- 'HOSTNAME'
port:
name: tls
number: 7070
protocol: TLS
tls:
credentialName: tls-secret
mode: PASSTHROUGH
</code></pre>
<p>In credentialName, I created a generic secret that holds the values of the private key and the certificate of the server</p>
<p>The istio-virtual-service.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: nodejs-vs
namespace: nodejs-tcp
spec:
gateways:
- nodejs-gw
hosts:
- 'HOSTNAME'
tls:
- match:
- port: 7070
sniHosts:
- HOSTNAME
route:
- destination:
host: nodejs-service
port:
number: 7070
</code></pre>
<p>The Istio version I'm using:</p>
<pre><code>client version: 1.12.2
control plane version: 1.12.2
data plane version: 1.12.2 (159 proxies)
</code></pre>
<p>Your help is so much appreciated. Thanks in advance.</p>
| Kareem Yasser | <p>One thing I noticed right away is that you are using the incorrect selector in your <code>istio-gateway</code>, it should be:</p>
<pre><code>spec:
selector:
istio: ingressgateway
</code></pre>
<p>A good troubleshooting starting point would be to get the routes for your <code>ingressgateway</code> and validate that you see the expected ones.</p>
<ol>
<li>First you need to know your pod's name:</li>
</ol>
<pre><code>kubectl get pods -n <namespace_of_your_app>
NAME READY STATUS RESTARTS AGE
pod/my-nginx-xxxxxxxxx-xxxxx 2/2 Running 0 50m
</code></pre>
<p>In my deployment, it is an nginx pod.</p>
<ol start="2">
<li>Once you have the name, you can get the routes specific to your hostname:</li>
</ol>
<pre><code>istioctl pc routes <your_pod_name>.<namespace>
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
my-nginx.default.svc.cluster.local:443 my-nginx /*
</code></pre>
<p>This is an output example for a hostname "my-nginx". If the output returns no route, it usually means that it does not match SNI and/or cannot find a specific route.</p>
| Gabriel Robledo Ahumada |
<p>I want to access a value whose key contains dots with Helm, to use it in a ConfigMap.
The value is something like this:</p>
<pre><code>valuenum:
joji.json: zok
</code></pre>
<p>I want to use it in a ConfigMap with Helm like this:</p>
<pre><code>{{ toYaml .Values.valuenum.joji.json }}
</code></pre>
<p>It returns a syntax error, and I could not find a fix for it.</p>
| MrZorkiPongi | <p>I found the answer myself: when using <code>index</code>, we can look up nested keys that contain dots by quoting them.</p>
<pre><code>{{ index .Values.valuenum "joji.json" }}
</code></pre>
<p><a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">link for helm doc about index and more</a></p>
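<p>For the original use case, a minimal ConfigMap template sketch using that lookup (the resource and data key names are illustrative) could look like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  joji: {{ index .Values.valuenum "joji.json" | quote }}
</code></pre>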
| MrZorkiPongi |
<p>I am having trouble with my kubernetes cluster and setting up a load balancer with Digital Ocean. The config has worked before, but I'm not sure if something is an outdated version or needs some other change to make this work. Is there a way to ensure the SyncLoadBalancer succeeds? I have waited for more than an hour and the load balancer has long been listed as online in the DigitalOcean dashboard.</p>
<pre><code>Name: my-cluster
Namespace: default
Labels: app.kubernetes.io/instance=prod
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=my-company
app.kubernetes.io/part-of=my-company
app.kubernetes.io/version=1.1.67
helm.sh/chart=my-company-0.1.51
Annotations: kubernetes.digitalocean.com/load-balancer-id: e7bbf8b7-29e0-407c-adce-XXXXXXXXX
meta.helm.sh/release-name: prod
meta.helm.sh/release-namespace: default
service.beta.kubernetes.io/do-loadbalancer-certificate-id: 8be22723-b242-4bea-9963-XXXXXXXX
service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: false
service.beta.kubernetes.io/do-loadbalancer-name: prod-load-balancer
service.beta.kubernetes.io/do-loadbalancer-protocol: https
service.beta.kubernetes.io/do-loadbalancer-size-unit: 1
Selector: app.kubernetes.io/instance=prod,app.kubernetes.io/name=my-company,app.kubernetes.io/part-of=my-company
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.245.16.78
IPs: 10.245.16.78
LoadBalancer Ingress: 24.199.70.237
Port: https 443/TCP
TargetPort: http/TCP
NodePort: https 32325/TCP
Endpoints: 10.244.0.163:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: load-balancer is not yet active (current status: new)
Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request "b06545a5-c701-46d1-be84-3740196c21c7") Load Balancer can't be updated while it processes previous actions
Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request "27b58084-7ff0-46a3-830b-6210a12278ab") Load Balancer can't be updated while it processes previous actions
Warning SyncLoadBalancerFailed 17m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request "22ff352c-8486-4a69-8ffc-a4bba64147dc") Load Balancer can't be updated while it processes previous actions
Warning SyncLoadBalancerFailed 17m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request "ec7f0138-99ba-4932-b1ff-1cfe46ed24c5") Load Balancer can't be updated while it processes previous actions
Normal EnsuringLoadBalancer 15m (x10 over 10h) service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 15m (x5 over 10h) service-controller Ensured load balancer
</code></pre>
| Brettins | <p>Below are troubleshooting steps that might help resolve your issue:</p>
<ol>
<li>If you specify 2 ports in the YAML file, the load balancer will take the whole range between the 2 ports, blocking them from being reused for another service.</li>
<li>If you are already using a port (e.g. 8086) for the forwarding rule, it cannot be reused for another service.</li>
<li>If you have health checks enabled on your load balancer, check that those health checks are all passing.</li>
<li>Verify that the load balancer is reachable from the public internet.</li>
<li>Finally, restart the cluster and try to deploy again (see the example commands after this list for re-checking the Service).</li>
</ol>
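<p>For reference, the Service can be re-checked after each change with the commands below (the Service name and namespace are taken from the describe output in the question):</p>
<pre><code># inspect the Service's annotations and recent events
kubectl describe service my-cluster -n default

# watch the Service until the load balancer events settle
kubectl get service my-cluster -n default -w
</code></pre>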
<p>For more information follow troubleshooting <a href="https://docs.digitalocean.com/support/check-your-load-balancers-connectivity/#:%7E:text=Check%20the%20status%20of%20load,pointed%20at%20the%20load%20balancer." rel="nofollow noreferrer">documentation</a>. Adding <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/issues/110" rel="nofollow noreferrer">issue</a> with a similar error.</p>
| Sai Chandra Gadde |
<p>I have a GKE cluster running with several persistent disks for storage.
To set up a staging environment, I created a second cluster inside the same project.
Now I want to use the data from the persistent disks of the production cluster in the staging cluster.</p>
<p>I already created persistent disks for the staging cluster. What is the best approach to move over the production data to the disks of the staging cluster?</p>
| jjmurre | <p>You can use the open source tool <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> which is designed to migrate Kubernetes cluster resources.</p>
<p>Follow these steps to migrate a persistent disk within GKE clusters:</p>
<ol>
<li>Create a GCS bucket:</li>
</ol>
<pre><code>BUCKET=<your_bucket_name>
gsutil mb gs://$BUCKET/
</code></pre>
<ol start="2">
<li>Create a <a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">Google Service Account</a> and store the associated email in a variable for later use:</li>
</ol>
<pre><code>GSA_NAME=<your_service_account_name>
gcloud iam service-accounts create $GSA_NAME \
--display-name "Velero service account"
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:Velero service account" \
--format 'value(email)')
</code></pre>
<ol start="3">
<li>Create a custom role for the Service Account:</li>
</ol>
<pre><code>PROJECT_ID=<your_project_id>
ROLE_PERMISSIONS=(
compute.disks.get
compute.disks.create
compute.disks.createSnapshot
compute.snapshots.get
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list
)
gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
</code></pre>
<ol start="4">
<li>Grant access to Velero:</li>
</ol>
<pre><code>gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
</code></pre>
<ol start="5">
<li>Download and install Velero on the source cluster:</li>
</ol>
<pre><code>wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.1/velero-v1.8.1-linux-amd64.tar.gz
tar -xvzf velero-v1.8.1-linux-amd64.tar.gz
sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin/velero
velero install \
--provider gcp \
--plugins velero/velero-plugin-for-gcp:v1.4.0 \
--bucket $BUCKET \
--secret-file ./credentials-velero
</code></pre>
<p>Note: The download and installation was performed on a Linux system, which is the OS used by Cloud Shell. If you are managing your GCP resources via Cloud SDK, the release and installation process could vary.</p>
<ol start="6">
<li>Confirm that the velero pod is running:</li>
</ol>
<pre><code>$ kubectl get pods -n velero
NAME READY STATUS RESTARTS AGE
velero-xxxxxxxxxxx-xxxx 1/1 Running 0 11s
</code></pre>
<ol start="7">
<li>Create a backup for the PV,PVCs:</li>
</ol>
<pre><code>velero backup create <your_backup_name> --include-resources pvc,pv --selector app.kubernetes.io/<your_label_name>=<your_label_value>
</code></pre>
<ol start="8">
<li>Verify that your backup was successful with no errors or warnings:</li>
</ol>
<pre><code>$ velero backup describe <your_backup_name> --details
Name: your_backup_name
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.21.6-gke.1503
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=21
Phase: Completed
Errors: 0
Warnings: 0
</code></pre>
<hr />
<p>Now that the Persistent Volumes are backed up, you can proceed with the migration to the destination cluster following these steps:</p>
<ol>
<li>Authenticate in the destination cluster</li>
</ol>
<pre><code>gcloud container clusters get-credentials <your_destination_cluster> --zone <your_zone> --project <your_project>
</code></pre>
<ol start="2">
<li>Install Velero using the same parameters as step 5 on the first part:</li>
</ol>
<pre><code>velero install \
--provider gcp \
--plugins velero/velero-plugin-for-gcp:v1.4.0 \
--bucket $BUCKET \
--secret-file ./credentials-velero
</code></pre>
<ol start="3">
<li>Confirm that the velero pod is running:</li>
</ol>
<pre><code>kubectl get pods -n velero
NAME READY STATUS RESTARTS AGE
velero-xxxxxxxxxx-xxxxx 1/1 Running 0 19s
</code></pre>
<ol start="4">
<li>To avoid the backup data being overwritten, change the bucket to read-only mode:</li>
</ol>
<pre><code>kubectl patch backupstoragelocation default -n velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'
</code></pre>
<ol start="5">
<li>Confirm Velero is able to access the backup from bucket:</li>
</ol>
<pre><code>velero backup describe <your_backup_name> --details
</code></pre>
<ol start="6">
<li>Restore the backed up Volumes:</li>
</ol>
<pre><code>velero restore create --from-backup <your_backup_name>
</code></pre>
<ol start="7">
<li>Confirm that the persistent volumes have been restored on the destination cluster:</li>
</ol>
<pre><code>kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-my-release-redis-master-0 Bound pvc-ae11172a-13fa-4ac4-95c5-d0a51349d914 8Gi RWO standard 79s
redis-data-my-release-redis-replicas-0 Bound pvc-f2cc7e07-b234-415d-afb0-47dd7b9993e7 8Gi RWO standard 79s
redis-data-my-release-redis-replicas-1 Bound pvc-ef9d116d-2b12-4168-be7f-e30b8d5ccc69 8Gi RWO standard 79s
redis-data-my-release-redis-replicas-2 Bound pvc-65d7471a-7885-46b6-a377-0703e7b01484 8Gi RWO standard 79s
</code></pre>
<p>Check out this <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">tutorial</a> as a reference.</p>
| Gabriel Robledo Ahumada |
<p>Why is the Kubernetes networking driver <a href="https://www.weave.works/docs/net/latest/overview/" rel="nofollow noreferrer">weave</a> container generating so many logs?</p>
<p>The log file size is 700MB after two days.</p>
<p>How can I solve that?</p>
| karlos | <h2 id="logs-in-kubernetes">Logs in kubernetes</h2>
<p>As it was said in comment, kubernetes is not responsible for log rotation. This is from kubernetes documentation:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log
rotation, so that logs don't consume all available storage on the
node. Kubernetes is not responsible for rotating logs, but rather a
deployment tool should set up a solution to address that. For example,
in Kubernetes clusters, deployed by the kube-up.sh script, there is a
logrotate tool configured to run each hour. You can also set up a
container runtime to rotate an application's logs automatically.</p>
</blockquote>
<p>As a proposed option, this can be managed at the container runtime level.</p>
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level" rel="nofollow noreferrer">Logging at the node level</a>.</p>
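<p>If the cluster uses a CRI runtime (containerd/CRI-O), the kubelet itself can rotate container logs. A minimal sketch of the relevant <code>KubeletConfiguration</code> fields (the values are illustrative):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi
containerLogMaxFiles: 3
</code></pre>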
<h2 id="reducing-logs-for-weave-cni">Reducing logs for Weave CNI</h2>
<p>There are two containers in each pod: Weave itself and weave-npc (which is a network policy controller).</p>
<p>By default Weave's log level is set to INFO. This can be changed to WARNING to see only exceptions, which can be achieved by adding the <code>--log-level</code> flag through the <code>EXTRA_ARGS</code> environment variable for the weave container:</p>
<pre><code>$ kubectl edit daemonset weave-net -n kube-system
</code></pre>
<p>So the <code>weave</code> container part should look like this:</p>
<pre><code>spec:
containers:
- command:
- /home/weave/launch.sh
env:
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: EXTRA_ARGS # this was added with value below!
value: --log-level=warning
- name: INIT_CONTAINER
value: "true"
image: docker.io/weaveworks/weave-kube:2.8.1
imagePullPolicy: IfNotPresent
name: weave
</code></pre>
<p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-reading-the-logs" rel="nofollow noreferrer">Weave - logs level</a>.</p>
<p>A lot of logs come from <code>Weave NPC</code>; there is an option that allows <code>disabling</code> NPC. However, based on their documentation this is a paid option - <a href="https://www.weave.works/product/cloud/" rel="nofollow noreferrer">cloud.weave.works</a></p>
<p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-changing-configuration-options" rel="nofollow noreferrer">Weave - Changing configuration options</a></p>
| moonkotte |
<p>Good afternoon,</p>
<p>I have recently deployed cnvrg CORE application on-premise with Minikube.</p>
<p>In cnvrg CORE we can create a "machine resource" to give to the application some computer resources like CPU and GPU from a different machine through SSH.</p>
<p>I have found a problem when creating a new resource of any type (in the attached image you can see an example). It says that I can't create the machine because "I have reached the limit" but the only machine I have is the default one (Kubernetes in my case).</p>
<p>I haven't found any information about this on the internet, can you please tell me, is it a problem with the version of cnvrg CORE (v3.9.27)? Or is it something I have to configure during the installation?</p>
<p>Thank you very much!</p>
<p><a href="https://i.stack.imgur.com/zmNRL.png" rel="nofollow noreferrer">cnvrg message "You've reached the maximum machines"</a></p>
| Cristina | <p>I wrote to the support of cnvrg and adding machines is not allowed as CORE is a free community version the option to add new resources is not supported. Other paid versions of cnvrg allow you to add new resources and other functionalities, so cnvrg CORE is not a real open-source version of cnvrg.</p>
| Cristina |
<p>I want to deploy a redis pod which loads a list. Then I will have a kubernetes job which will execute a bash script with a variable taken from that list in redis.</p>
<p>How can I make this redis pod be auto-deleted when all items from the list are used?</p>
| a11eksandar | <p>By default, Kubernetes keeps the completed jobs and associated objects for debugging purposes, and you will lose all the generated logs by them when deleted.</p>
<p>That being said, a job can be automatically deleted by using the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">TTL mechanism for finished Jobs</a>.</p>
<p>Here you can find an example of a job's manifest with the TTL enabled and set to delete the job and associated objects (pods, services, etc.) 100 sec after its completion:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
</code></pre>
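<p>Assuming the manifest above is saved as <code>pi-with-ttl.yaml</code>, you can apply it and watch the Job and its pod disappear roughly 100 seconds after completion:</p>
<pre><code>kubectl apply -f pi-with-ttl.yaml
kubectl get jobs,pods -w
</code></pre>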
| Gabriel Robledo Ahumada |
<p>When one pod has the tolerations below:</p>
<pre><code> tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
</code></pre>
<p>Why can it be scheduled to the node with the taint below?
<strong><code>Runtime=true:NoSchedule</code></strong></p>
<p>I also searched the Kubernetes documentation. Generally a toleration entry includes a 'key', so how does the entry below work?</p>
<pre><code> - effect: NoSchedule
operator: Exists
</code></pre>
| liam xu | <p>I have reproduced the issue.</p>
<p>I have tainted the node <code>gke-cluster-4-default-pool-8ad24f8f-2ixm</code> with <code>Runtime=true:NoSchedule</code></p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-4-default-pool-8ad24f8f-2ixm Ready <none> 10d v1.26.6-gke.1700
gke-cluster-4-default-pool-8ad24f8f-ncy0 Ready <none> 10d v1.26.6-gke.1700
gke-cluster-4-default-pool-8ad24f8f-o537 Ready <none> 10d v1.26.6-gke.1700
$ kubectl taint nodes gke-cluster-4-default-pool-8ad24f8f-2ixm Runtime=true:NoSchedule
node/gke-cluster-4-default-pool-8ad24f8f-2ixm tainted
</code></pre>
<p>Then I created a deployment without any tolerations, so the pods are not scheduled on the tainted node:</p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-7f456874f4-95x66 1/1 Running 0 34s 10.24.2.6 gke-cluster-4-default-pool-8ad24f8f-o537 <none> <none>
nginx-deployment-7f456874f4-9sj68 1/1 Running 0 34s 10.24.0.12 gke-cluster-4-default-pool-8ad24f8f-ncy0 <none> <none>
nginx-deployment-7f456874f4-f4s98 1/1 Running 0 34s 10.24.0.13 gke-cluster-4-default-pool-8ad24f8f-ncy0 <none> <none>
nginx-deployment-7f456874f4-zbgp9 1/1 Running 0 34s 10.24.2.7 gke-cluster-4-default-pool-8ad24f8f-o537 <none> <none>
nginx-deployment-7f456874f4-zs4js 1/1 Running 0 34s 10.24.0.11 gke-cluster-4-default-pool-8ad24f8f-ncy0 <none> <none>
</code></pre>
<p>Later I added the tolerations given by you, and 2 pods were scheduled on the node with the applied taint (I deleted the existing deployment and redeployed after adding the tolerations):</p>
<pre><code>kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6d998db8f-58wr2 1/1 Running 0 6s 10.24.1.7 gke-cluster-4-default-pool-8ad24f8f-2ixm <none> <none>
nginx-deployment-6d998db8f-62dcm 1/1 Running 0 6s 10.24.2.8 gke-cluster-4-default-pool-8ad24f8f-o537 <none> <none>
nginx-deployment-6d998db8f-srcmg 1/1 Running 0 6s 10.24.0.14 gke-cluster-4-default-pool-8ad24f8f-ncy0 <none> <none>
nginx-deployment-6d998db8f-wv48m 1/1 Running 0 6s 10.24.1.6 gke-cluster-4-default-pool-8ad24f8f-2ixm <none> <none>
nginx-deployment-6d998db8f-zbck2 1/1 Running 0 6s 10.24.0.15 gke-cluster-4-default-pool-8ad24f8f-ncy0 <none> <none>
</code></pre>
<p>So the taints and tolerations are working fine; the issue is with the resource itself, and it can be due to the reasons below:</p>
<p>1. Insufficient resources on the nodes, such as CPU or memory; check those with the kubectl describe command.</p>
<p>2. Double-check the taints and tolerations to see if there are any other taints or tolerations that are preventing the pod from being scheduled.</p>
<p>3. Check if you have any nodeSelectors or affinity rules that prevent pods from being scheduled on the node.</p>
<p>For further debugging, add the output of the describe command for the pod and the node. Attaching a <a href="https://foxutech.com/kubernetes-taints-and-tolerations-explained/" rel="nofollow noreferrer">blog</a> written by motoskia for your reference.</p>
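<p>For those describe checks, a couple of quick commands (the pod and node names are placeholders):</p>
<pre><code>kubectl describe node <node-name> | grep -i taint
kubectl describe pod <pod-name> | grep -i -A3 toleration
</code></pre>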
| Sai Chandra Gadde |
<p>So I'm dealing with a structure like this:</p>
<pre><code>.
βββ 1
βΒ Β βββ env-vars
βΒ Β βββ kustomization.yaml
βββ 2
βΒ Β βββ env-vars
βΒ Β βββ kustomization.yaml
βββ env-vars
βββ kustomization.yaml
βββ shared
βββ env-vars
βββ kustomization.yaml
</code></pre>
<p>while env-vars within each level has some env vars and</p>
<pre><code>$cat kustomization.yaml
bases:
- 1/
- 2/
namePrefix: toplevel-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat 1/kustomization.yaml
bases:
- ./../shared
namePrefix: first-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat 2/kustomization.yaml
bases:
- ./../shared
namePrefix: second-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat shared/kustomization.yaml
configMapGenerator:
- name: env-cm
behavior: create
envs:
- env-vars
</code></pre>
<p>I'm essentially trying to create 2 configmaps with some shared values (which are injected from different resources: from <code>shared</code> directory and the top-level directory)</p>
<hr />
<p><code>kustomize build .</code> fails with some conflict errors for finding multiple objects:</p>
<pre><code>Error: merging from generator <blah>: found multiple objects <blah> that could accept merge of ~G_v1_ConfigMap|~X|env-cm
</code></pre>
<p>Unfortunately I need to use <code>merge</code> on the top-level <code>configMapGenerator</code>, since there are some labels injected into the <code>1</code> and <code>2</code> configmaps (so <code>create</code>-ing a top-level configmap, although it addresses the env-vars, excludes the labels).</p>
<p>Any suggestion on how to address this issue is appreciated</p>
| Mahyar | <p>I believe this should solve your issue.</p>
<p><code>kustomization.yaml</code> which is located in <code>base</code> or <code>/</code>:</p>
<pre><code>$ cat kustomization.yaml
resources:
- ./1
- ./2
namePrefix: toplevel-
configMapGenerator:
- name: first-env-cm
behavior: merge
envs:
- env-vars
- name: second-env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<p>With the help of search I found <a href="https://github.com/kubernetes-sigs/kustomize/issues/1442" rel="nofollow noreferrer">this github issue</a>, which I'd say is the same issue, and then a pull-request with <a href="https://github.com/kubernetes-sigs/kustomize/pull/1520/files#diff-c1e6b6a8ce9692d830228e40df4a604cf063ef54ca54e157f70981557e72f08bL606-R609" rel="nofollow noreferrer">changes in code</a>. We can see that during the <code>kustomize</code> render the merge behaviour was changed to look for <code>currentId</code> instead of <code>originalId</code>. Knowing that, we can address the exact "pre-rendered" configmaps separately.</p>
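<p>To confirm that both generated ConfigMaps now pick up the merged values, render the overlay and inspect the output:</p>
<pre><code>kustomize build .
# or, with recent kubectl versions:
kubectl kustomize .
</code></pre>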
| moonkotte |
<p>I want to implement custom logic to determine readiness for my pod, and I went over this: <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state" rel="noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state</a> and they mention an example property:
<code>management.endpoint.health.group.readiness.include=readinessState,customCheck</code></p>
<p>Question is - how do I override <code>customCheck</code>?
In my case I want to use HTTP probes, so the yaml looks like:</p>
<pre><code>readinessProbe:
initialDelaySeconds: 10
periodSeconds: 10
httpGet:
path: /actuator/health
port: 12345
</code></pre>
<p>So then again - where and how should I apply logic that would determine when the app is ready (just like the link above, i'd like to rely on an external service in order for it to be ready)</p>
| Hummus | <p>customCheck is a key for your custom HealthIndicator. The key for a given HealthIndicator is the name of the bean without the HealthIndicator suffix</p>
<p>You can read:
<a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators" rel="nofollow noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators</a></p>
<p>You are defining a readinessProbe, so hitting /actuator/health/readiness is probably a better choice.</p>
<pre><code>public class CustomCheckHealthIndicator extends AvailabilityStateHealthIndicator {
private final YourService yourService;
public CustomCheckHealthIndicator(ApplicationAvailability availability, YourService yourService) {
super(availability, ReadinessState.class, (statusMappings) -> {
statusMappings.add(ReadinessState.ACCEPTING_TRAFFIC, Status.UP);
statusMappings.add(ReadinessState.REFUSING_TRAFFIC, Status.OUT_OF_SERVICE);
});
this.yourService = yourService;
}
@Override
protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
if (yourService.isInitCompleted()) {
return ReadinessState.ACCEPTING_TRAFFIC;
} else {
return ReadinessState.REFUSING_TRAFFIC;
}
}
}
</code></pre>
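<p>To tie this back to the question: with the bean above, the health group key is <code>customCheck</code>, so the property from the question stays as-is and the probe only needs to target the readiness group. A hedged sketch (the port and timings are copied from the question):</p>
<pre><code># application.properties
management.endpoint.health.group.readiness.include=readinessState,customCheck
</code></pre>
<pre><code>readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health/readiness
    port: 12345
</code></pre>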
| KrzysztofS |
<p>I am trying to prepare my application so that I can deploy it via kubernetes in the cloud. Therefore I installed minikube to get accustomed to setting up an ingress, and I followed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">ingress documentation by kubernetes</a>. (NOTE: I did not expose my frontend service like they do in the beginning of the tutorial, as I understood it's not needed for an ingress.)</p>
<p>But after hours of desperate debugging and no useful help by ChatGPT I am still not able to resolve my bug. Whenever I try to access my application via my custom host (example.com), I get <code>InvalidHostHeader</code> as a response.</p>
<p>For simplicity's sake right now my application simply has one deployment with one pod that runs a vuejs frontend. My <code>frontend-deployment.yaml</code> looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: XXXXXXXXXXX
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
</code></pre>
<p>My <code>frontend-service.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
ports:
- name: http
port: 8080
targetPort: http
selector:
app: frontend
type: ClusterIP
</code></pre>
<p>I use the default NGINX ingress controller of minikube. And my <code>ingress.yaml</code> looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
name: http
</code></pre>
<p>Obviously, I configure my <code>/etc/hosts</code> file to map my minikube ip address to <code>example.com</code></p>
<p>Any help and also suggestions on best practices/improvements for the structure of the yaml files are welcome!</p>
| raweber | <p>An <strong>invalid host header error</strong> occurs may be of different reasons as mentioned below:</p>
<ol>
<li>Incorrect hostname in your configuration. Check and confirm if the
host name is correct using the kubectl describe command.</li>
<li>Check theimage you are using is existing and have access.</li>
<li>Verify theendpoints and services are configured correctly and running.</li>
<li>Checkfirewall rules if it is blocking the traffic.</li>
<li>Check the DNS is pointing to the correct host and ip. Check the logs for any errors to debug further.</li>
</ol>
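<p>Example commands to verify the pieces mentioned above (the resource names and hostname are taken from the question):</p>
<pre><code>kubectl describe ingress my-ingress
kubectl get svc frontend
kubectl get endpoints frontend

# send a request with the expected Host header straight at minikube
curl -H "Host: example.com" http://$(minikube ip)/
</code></pre>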
<p>You can also refer to the <a href="https://medium.com/@ManagedKube/kubernetes-troubleshooting-ingress-and-services-traffic-flows-547ea867b120" rel="nofollow noreferrer">blog</a> by managedkube and another <a href="https://medium.com/@AvinashBlaze/what-is-this-invalid-host-header-error-9cd760ae6d16" rel="nofollow noreferrer">blog</a> written by Avinash Thakur for further information.</p>
| Sai Chandra Gadde |