<p>On a Kubernetes cluster, when using HAProxy as an ingress controller, how will HAProxy add a new pod when the old pod has died?</p>
<p>Can it make sure that the pod is ready to receive traffic?</p>
<p>Right now I am using a readiness probe and a liveness probe. I know that the order in Kubernetes for bringing up a new pod is: liveness probe --> readiness probe --> 6/6 --> pod is ready.</p>
<p>So will the HAProxy Ingress Controller use the same Kubernetes mechanism?</p>
|
<p>Short answer is: <strong>Yes, it is!</strong></p>
<p><strong>From <a href="https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/" rel="nofollow noreferrer">documentation</a>:</strong></p>
<blockquote>
<p>The most demanding part is syncing the status of pods, since the environment is highly dynamic and pods can be created or destroyed at any time. The controller feeds those changes directly to HAProxy via the HAProxy Data Plane API, which reloads HAProxy as needed.</p>
</blockquote>
<p>HAProxy ingress doesn't take care of pod health; it is responsible for receiving the external traffic and forwarding it to the correct Kubernetes services.</p>
<p><strong>Kubelet</strong> uses liveness and readiness probes to know when to restart a container, which means that you must define <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness and readiness probes</a> in the pod definition.<br>
See more about container probes in pod <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">lifecycle</a> documentation.</p>
<blockquote>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
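<p>As an illustration, a minimal pod spec with both probes might look like this (the image, port and paths are placeholders, not taken from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
      readinessProbe:        # the pod receives Service traffic only after this passes
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
      livenessProbe:         # the container is restarted if this keeps failing
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
</code></pre>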
|
<p>I am making an application with Docker and the Kubernetes orchestrator.
The frontend is made with create-react-app; the problem comes when I use axios and get two IPs in a single request.</p>
<p>The code below is the Redux action that captures the data:</p>
<pre><code>import { LOGIN, LOGIN_EXITO, LOGIN_ERROR } from "../types";
// import clienteAxios from "../../axios";
import axios from "axios";
import Swal from "sweetalert2";

const HOST = window._env_.REACT_APP_HOST_BACKEND;
const PORT = window._env_.REACT_APP_PORT_BACKEND;

export function loginAction(user, password) {
  return async (dispatch) => {
    dispatch(login());
    try {
      // let datos = await clienteAxios.post("/login", { user, password });
      console.log(`${HOST}:${PORT}/api`);
      let datos = await axios.post(`${HOST}:${PORT}/api`, { user, password });
      console.log(datos);
      dispatch(loginExito(datos.data));
      if (datos.data[0].r_mensaje === "ok") {
        Swal.fire("Acceso...", "Bienvenido", "success");
      } else {
        Swal.fire({
          icon: "error",
          title: "Error...",
          text: datos.data[0].r_mensaje,
        });
      }
    } catch (error) {
      console.log(error);
      dispatch(loginError(true));
      Swal.fire({
        icon: "error",
        title: "Error...",
        text: "Intente de nuevo...",
      });
    }
  };
}
</code></pre>
<p>When making the request, this error is thrown; the two IPs get joined together:
<a href="https://i.stack.imgur.com/2Bb3i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Bb3i.png" alt="enter image description here"></a></p>
<p>this is my nginx.conf that I use in the dockerfile.</p>
<pre><code>server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
</code></pre>
|
<p>Axios is interpreting your argument as a relative path and is automatically appending the scheme, host and port to form the request URL.</p>
<p>For example, <code>axios.post('/user', ...)</code> would translate to <code>http://172.17.0.3:30599/user</code>.</p>
<p>You can fix this by passing the full URL:</p>
<pre><code>axios.post(`http://${HOST}:${PORT}/api`, ...)
</code></pre>
<p>Alternatively, create an axios instance, set <code>baseURL</code> in the config, and use <code>instance.post('/api')</code>.</p>
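<p>A minimal sketch of that alternative, assuming the same <code>HOST</code> and <code>PORT</code> variables from the question:</p>
<pre><code>import axios from "axios";

// Build the base URL once; the scheme is required so axios
// does not treat the value as a relative path.
const api = axios.create({
  baseURL: `http://${HOST}:${PORT}`,
});

// Inside an async function, later calls only need the path:
const datos = await api.post("/api", { user, password });
</code></pre>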
|
<p>I'm currently facing an issue with my Kubernetes clusters and need some assistance. Everything was running smoothly until today. However, after performing an update on my Ubuntu system, I'm unable to establish a connection from my working environment to the kubernetes clusters.</p>
<p>When executing the command <code>kubectl get pods</code>, I'm encountering the following error message:</p>
<p><code>E0805 09:59:45.750534 234576 memcache.go:265] couldn’t get current server API group list: Get "http://localhost:3334/api?timeout=32s": EOF</code></p>
<p>Here are the details of my cluster setup: Kubernetes 1.27, bare-metal, Host System is Ubuntu 20.04</p>
<p>I would greatly appreciate any guidance or insights on resolving this issue.</p>
|
<p>Try</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get nodes -v=10
</code></pre>
<p>and look for the errors.</p>
|
<p>I'm testing this project:</p>
<blockquote>
<p><a href="https://github.com/sayems/kubernetes.resources/" rel="nofollow noreferrer">https://github.com/sayems/kubernetes.resources/</a></p>
</blockquote>
<p>And just for my understanding I commented out the Ansible provisioning in the Vagrant part:</p>
<blockquote>
<p><a href="https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/Vagrantfile#L34" rel="nofollow noreferrer">https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/Vagrantfile#L34</a></p>
</blockquote>
<p>to </p>
<blockquote>
<p><a href="https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/Vagrantfile#L41" rel="nofollow noreferrer">https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/Vagrantfile#L41</a></p>
</blockquote>
<p>Then ran:</p>
<pre><code>vagrant up
</code></pre>
<p>which starts an unprovisioned cluster, as wanted.</p>
<p>Then:</p>
<pre><code>ansible-playbook playbook.yml -i inventory.ini
</code></pre>
<p>But I'm getting this error:</p>
<blockquote>
<p>TASK [token : Copy join command to local file]
******************************************************************************************************************************************************************************************************************************* fatal: [k8s-master]: FAILED! => {"msg": "Failed to get information on
remote file (./join-command): sudo: il est nécessaire de saisir un mot
de passe\n"}</p>
</blockquote>
<p>If I understand correctly (the French error says a sudo password is required), it needs to be root to execute this command:</p>
<blockquote>
<p><a href="https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/roles/join/tasks/main.yml" rel="nofollow noreferrer">https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/roles/join/tasks/main.yml</a></p>
</blockquote>
<p>But we are supposed to be root:</p>
<blockquote>
<p><a href="https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/playbook.yml" rel="nofollow noreferrer">https://github.com/sayems/kubernetes.resources/blob/master/k8s-vagrant/playbook.yml</a></p>
</blockquote>
<p>Can anyone help or have an idea?</p>
<p>Thanks </p>
|
<p>I think you need to run it with <code>sudo</code>. Modules like copy expect privilege escalation to be available for the user running Ansible.</p>
<p><a href="https://github.com/ansible/ansible/issues/41056" rel="nofollow noreferrer">Reference</a></p>
|
<p>I'm using Secrets as an environmental variable and I was wondering how you would call the secret in the client side of my application? I'm running a Node.js application and want to use the Secrets environmental variable. I would normally call my environment variables by doing <code>process.env.VARIABLE_NAME</code> locally since I have an env file, but I know that it's not the same for a secret as environmental variable when deployed onto Kubernetes.</p>
<p>Could anybody help me with this? Thanks!</p>
|
<p>The environment variable created from a Secret is read like any other environment variable passed to the pod.</p>
<p>So if you create a secret, for example:</p>
<pre><code>kubectl create secret generic user --from-literal=username=xyz
</code></pre>
<p>and pass it to the pod:</p>
<pre><code>env:
  - name: USERNAME
    valueFrom:
      secretKeyRef:
        name: user
        key: username
</code></pre>
<p>It will be passed to the pod as an environment variable. You can check it by executing <code>printenv USERNAME</code> in the pod, and the output will be similar to this:</p>
<pre><code>kubectl exec -it secret-env -- printenv USERNAME
xyz
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-with-data-from-multiple-secrets" rel="noreferrer">Multiple secrets</a> can be passed to the pod as environment variables.</p>
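<p>In a Node.js application this then works exactly like a variable from a local <code>.env</code> file; a minimal sketch, assuming the <code>USERNAME</code> variable from the example above:</p>
<pre><code>// The Secret-backed variable is just a regular environment variable.
console.log(process.env.USERNAME); // prints "xyz"
</code></pre>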
|
<p>Why can't we create a PV or PVC in an imperative way?</p>
<p>I tried using the create command, but it doesn't show either of them.</p>
<p><code>kubectl create --help</code></p>
<pre><code>Available Commands:
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
cronjob Create a cronjob with the specified name.
deployment Create a deployment with the specified name.
ingress Create an ingress with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name
</code></pre>
|
<p>As described in the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="noreferrer">documentation</a>, <code>kubectl</code> uses imperative commands <strong>built into the kubectl command-line tool</strong> in order to help you create objects quickly.</p>
<p>After some checks it seems like this is not available because it has not been implemented yet. You can see the full list of the create options at <strong>kubectl/pkg/cmd/<a href="https://github.com/kubernetes/kubectl/tree/master/pkg/cmd/create" rel="noreferrer">create</a></strong>.
For example, <a href="https://github.com/kubernetes/kubernetes/pull/78153" rel="noreferrer">#78153</a> was responsible for <code>kubectl create ingress</code> functionality.</p>
<p>You would probably get more information and perhaps reasons why this is not implemented by asking the developers and opening a <a href="https://github.com/kubernetes/kubectl/issues/new/choose" rel="noreferrer">new issue</a>.</p>
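<p>Until such a subcommand exists, the closest workaround is the declarative route. A hedged sketch of a minimal PVC applied from stdin (the name and size are placeholders):</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
</code></pre>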
|
<p>I am new to Kubernetes and tried to run a small app with it. I created a Docker image and used minikube to run it. The application is very simple, it just prints hello-world.</p>
<pre><code>@RestController
@RequestMapping(value = "helloworld")
public class MyController {

    @GetMapping
    public HelloWord helloWord() {
        return new HelloWord("Hello Word");
    }
}
</code></pre>
<p>My dockerfile:</p>
<pre><code>FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
VOLUME /tmp
ARG JAR_FILE=target/myapp-1.0.0.jar
COPY ${JAR_FILE} myapp-1.0.0.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar","/myapp-1.0.0.jar"]
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myhelloworldservice
spec:
  selector:
    app: my-hello-world-app
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 80
      nodePort: 30003
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-hello-world
spec:
  selector:
    matchLabels:
      app: my-hello-world-app
  replicas: 5
  template:
    metadata:
      labels:
        app: my-hello-world-app
    spec:
      containers:
        - name: hello-world
          image: myname/myhelloimage
          ports:
            - containerPort: 80
</code></pre>
<p>I ran the command:</p>
<blockquote>
<p>kubectl create -f deployment.yaml</p>
</blockquote>
<p>and the output is:</p>
<blockquote>
<p>service/myhelloworldservice created</p>
<p>deployment.apps/my-hello-world created</p>
</blockquote>
<p>I ran the <code>minikube ip</code> command to get the IP and then used that IP address to access my app on port 30003, but I am not able to access it.</p>
<p>I used :</p>
<p><a href="http://xxx.xx.xx.xxx:30003/helloworld" rel="nofollow noreferrer">http://xxx.xx.xx.xxx:30003/helloworld</a></p>
<p>What is the problem, why can I not access my app? I am getting a "This site can't be reached. Refused to connect" error.</p>
|
<p>You can port-forward from your minikube cluster to your localhost using <code>kubectl port-forward <pod-name> 80</code>.
Now you should be able to access your app through a browser at <code>localhost:80/helloworld</code>.</p>
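<p>For example (a sketch; it assumes the Spring Boot app inside the pod actually listens on 8080, which the manifest in the question does not confirm):</p>
<pre><code># forward local port 8080 to port 8080 of the deployment's pod
kubectl port-forward deployment/my-hello-world 8080:8080

# then, in another terminal or a browser:
curl http://localhost:8080/helloworld
</code></pre>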
|
<p>Looks like journald pod logging ('kubectl logs' command) doesn't work with kubernetes v1.18.x. Is there a way to make this work with v1.18.x? I've set up a multinode cluster with docker logging driver as 'journald' (using /etc/docker/daemon.json) and using systemd-journal-gatewayd with pull approach to aggregate historical logs. However, I'm very much interested in being able to tail current logs using 'kubectl logs' or, 'kubectl logs -l 'app=label' --prefix -f' (for cluster-wide logs).</p>
<pre><code># kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
# docker version
Client:
Version: 18.03.1-ce
API version: 1.37
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:20:16 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.1-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:23:58 2018
OS/Arch: linux/amd64
Experimental: false
# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
</code></pre>
<p>Running 'kubectl logs' gives the following error:</p>
<pre><code># kubectl logs <pod-name>
failed to try resolving symlinks in path "/var/log/pods/default_<pod-name>_xxxx/<container-name>/0.log": lstat /var/log/pods/default_<pod-name>_xxxx/<container-name>/0.log: no such file or directory
</code></pre>
|
<p>After you changed the logging driver, the location of the logs changed to the host's journal. The logs can still be retrieved by Docker, but you have to let kubelet know about the change.</p>
<p>You can do that by passing <code>--log-dir=/var/log</code> to kubelet.
After adding the flag, run <code>systemctl daemon-reload</code> and restart kubelet. This has to be done on all of the nodes.</p>
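<p>As a sketch of how a kubelet flag is typically added on a kubeadm-provisioned node (the exact file may differ per distribution, e.g. <code>/etc/sysconfig/kubelet</code> on CentOS):</p>
<pre><code># append the flag to kubelet's extra args
echo 'KUBELET_EXTRA_ARGS="--log-dir=/var/log"' | sudo tee /etc/default/kubelet

# reload units and restart kubelet, on every node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>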
|
<p>I'm running a MySQL image in my one-node cluster for local testing purposes only.</p>
<p>I would like to be able to delete the database when needed to have the image build a new database from scratch, but I can't seem to find where or how I can do that easily.</p>
<p>I am on Windows, using Docker Desktop to manage my Docker images and Kubernetes cluster with WSL2. The pod uses a persistent volume/claim which can be seen below.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/MySQLTemp"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre>
<p>The volume part of my deployment looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>        volumeMounts:
          - name: mysql-persistent
            mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent
          persistentVolumeClaim:
            claimName: mysql-pv-claim
</code></pre>
<p>Is there a command I can use to either see where this database is stored on my Windows or WSL2 machine so I can delete it manually, or delete it from the command line through <code>docker</code> or <code>kubectl</code>?</p>
|
<p>For anyone looking in the future for this solution, and doesn't want to dredge through deep github discussions, my solution was this:</p>
<p>Change <code>hostPath:</code> to <code>local:</code> when defining the path. hostPath is apparently meant for when your Kubernetes node provider has external persistent disks, like GCE or AWS.</p>
<p>Second, the path pointing to the symlink to your local machine from Docker Desktop can apparently be found at <code>/run/desktop/mnt/host/c</code> for your C drive. I set my path to <code>/run/desktop/mnt/host/c/MySQLTemp</code>, created a <code>MySQLTemp</code> directory in the root of my C drive, and it works.</p>
<p>Third, a <code>local:</code> path definition requires a nodeAffinity. You can tell it to use Docker Desktop like this:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
</code></pre>
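<p>Putting the three points together, a hedged sketch of the full <code>local</code> PV, reusing the names from the question, could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  local:
    # Windows C: drive as exposed through Docker Desktop's symlink
    path: /run/desktop/mnt/host/c/MySQLTemp
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
</code></pre>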
|
<p>I have a daily k8s CronJob.
I need a unique ID for every created job, one that stays the same on job restart in case of job failure.
Example:</p>
<pre><code>2021-04-10, restarts:0, id = 1234 -> failed
2021-04-10, restarts:1, id = 1234 -> failed
2021-04-10, restarts:2, id = 1234 -> failed
2021-04-11, restarts:0, id 1235 -> failed
2021-04-11, restarts:1, id 1235 -> failed
2021-04-12, restarts:0, id 1236 -> success
2021-04-13, restarts:0, id 1237 -> success
</code></pre>
<p>Is there a way to generate such a variable?</p>
|
<p><strong>Is there a way to generate such a variable?</strong></p>
<p>Yes, by using the UID. Every object created over the whole lifetime of a Kubernetes cluster has a distinct <code>UID</code>. It is intended to distinguish between historical occurrences of similar entities.</p>
<p>This UID can then be exposed as an <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">environment variable</a> and injected into your pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              imagePullPolicy: IfNotPresent
              env:
                - name: MY_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
              command:
                - /bin/sh
                - -c
                - date; echo $MY_UID
          restartPolicy: OnFailure
</code></pre>
<p>And here you have printed out the unique UID:</p>
<pre><code>➜ ~ k logs hello-1618817040-4jkpx
Mon Apr 19 07:24:02 UTC 2021
f9060e34-a4e8-40d2-b459-6029b07e4fe7
</code></pre>
<p>You can use it directly or you can hash it and make it into any datatype you want.</p>
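<p>For instance, a hedged one-liner to turn the UID into a short hexadecimal ID inside the container:</p>
<pre><code># derive a deterministic hash from the injected UID
echo -n "$MY_UID" | md5sum | cut -d' ' -f1
</code></pre>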
|
<p>I performed an installation of <code>jenkins</code> on GKE, using the official Helm <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">chart</a>.</p>
<p>I initially pass a list of plugins to the corresponding <a href="https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml#L215" rel="nofollow noreferrer">key</a> </p>
<pre><code>- plugin1
- plugin2
- plugin3
- plugin4
</code></pre>
<p>and perform <code>helm upgrade --recreate-pods --force --tls --install</code></p>
<p>I then <strong>take out</strong> some of the plugins from the above list and run the same <code>helm</code> command again, e.g. with</p>
<pre><code>- plugin1
- plugin2
</code></pre>
<p>However <code>jenkins</code> keeps all the plugins from the initial list.</p>
<p>Is this the expected behavior?</p>
|
<p>Yes, it is the expected behavior.</p>
<p>To change this behavior you should set the parameter <code>master.overwritePlugins</code> to <code>true</code>.</p>
<p>Example:</p>
<pre><code>helm upgrade --set master.overwritePlugins=true --recreate-pods --force --install
</code></pre>
<p>From Helm chart <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>| <code>master.overwritePlugins</code> | Overwrite installed plugins on start. | <code>false</code> |</p>
</blockquote>
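<p>Equivalently, a sketch of the same setting kept in a values file instead of on the command line:</p>
<pre><code>master:
  overwritePlugins: true
</code></pre>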
|
<p>We've got checked-in YML files which contain our k8s "deployment descriptors" (is there a better name for these things?)</p>
<p>I'm looking at a Service descriptor like...</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: regalia-service
  namespace: sem
spec:
  selector:
    app: "proxy"
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
</code></pre>
<p>I look in a different repo that is doing basically the same thing and I notice the spec.selector.app value is missing the quotes. Like...</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: scimitar-service
  namespace: sem
spec:
  selector:
    app: proxy
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
</code></pre>
<p>I <em>think</em> these 2 Service descriptors are doing the same thing but how do I <em>know</em>?</p>
<p>Are the quotes significant in k8s descriptors?</p>
<p>Is this a YML thing or a k8s thing?</p>
|
<p>As you have probably already found out, in YAML syntax string values rarely need quotes. If a value is quoted, it is always a string; if unquoted, it is inspected to see whether it is something else, but defaults to being a string.</p>
<p>For most strings you can leave them unquoted and, as you already found out, you will get the same result. But there are some cases where quotes are required, such as a string that starts with a special character (<code>%#@#$</code>), contains <code>whitespace</code>, or looks like a number or boolean but should in fact be a string (like <code>45</code>, <code>true</code> or <code>false</code>).</p>
<p>For more reading please have a look this blog post about <a href="http://blogs.perl.org/users/tinita/2018/03/strings-in-yaml---to-quote-or-not-to-quote.html" rel="noreferrer">quoting in yaml</a>.</p>
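<p>A short illustration of the cases where quoting changes the type (not taken from your manifests):</p>
<pre><code>unquoted_string: proxy      # string
quoted_string: "proxy"      # the same string
number: 45                  # integer
number_as_string: "45"      # string
flag: true                  # boolean
flag_as_string: "true"      # string
</code></pre>
<p>For a label selector such as <code>app: proxy</code> both forms parse to the same string, so your two Service descriptors behave identically.</p>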
|
<p>This question is about logging/monitoring.</p>
<p>I'm running a 3 node cluster on AKS, with 3 orgs, Dev, Test and Prod. The chart worked fine in Dev, but the same chart keeps getting killed by Kubernetes in Test, and it keeps getting recreated, and re-killed. Is there a way to extract details on why this is happening? All I see when I describe the pod is Reason: Killed</p>
<p>Please share more details on this or give some suggestions. Thanks!</p>
|
<p>There might be various reasons for it to be killed, e.g. insufficient resources or a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">failed liveness probe</a>.</p>
<p>For SonarQube there is a <a href="https://github.com/helm/charts/blob/7e45e678e39b88590fe877f159516f85f3fd3f38/stable/sonarqube/templates/deployment.yaml#L205" rel="nofollow noreferrer">liveness and readiness probe</a> configured, so one of them might be failing. Also, as described in the <a href="https://github.com/helm/charts/blob/7e45e678e39b88590fe877f159516f85f3fd3f38/stable/sonarqube/values.yaml#L79" rel="nofollow noreferrer">helm chart's values</a>:</p>
<blockquote>
<p>If an ingress <em>path</em> other than the root (/) is defined, it should be reflected here<br />
A trailing "/" must be included</p>
</blockquote>
<p>You can also check if there are sufficient resources on node:</p>
<ul>
<li>check which node the pods are running on: <code>kubectl get pods -o wide</code> and
then run <code>kubectl describe node <node-name></code> to check that there is no
disk/memory pressure.</li>
</ul>
<p>You can also run <code>kubectl logs <pod-name></code> and <code>kubectl describe pod <pod-name></code> that might give you some insight of kill reason.</p>
|
<p>I have some pods created by CronJobs that are in <code>Error</code> state and it seems that the CPU / Memory requested by these pods are not released since kubelet is not killing them. It prevents other pods from being scheduled.</p>
<p>Is that the expected behavior? Should I clean them up by hand to get the resources back?</p>
<p>Thanks.</p>
|
<p>Pods in the <code>Error</code> state should be deleted to release the resources assigned to them.</p>
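<p>For example, a hedged way to clean such pods up in bulk within a namespace:</p>
<pre><code># pods displayed as Error are in the Failed phase
kubectl delete pods --field-selector=status.phase=Failed
</code></pre>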
<p>However, pods in <code>Completed</code> or <code>Failed</code> status do not need to be cleaned up to release the resources allocated to them. This can be checked by running a simple <code>Job</code> and looking at the memory resources allocated on the node.</p>
<p>Memory allocation before job:</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 811m (86%) 1143m (121%)
memory 555Mi (19%) 1115Mi (39%)
</code></pre>
<p>Job example:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: test-job
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
        - command:
            - date
          image: busybox
          name: test-job
          resources:
            requests:
              memory: 200Mi
      restartPolicy: Never
</code></pre>
<p>Memory after the job deployment:</p>
<pre><code> Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 811m (86%) 1143m (121%)
memory 555Mi (19%) 1115Mi (39%)
</code></pre>
|
<p>I created a deployment with liveness and readiness probes and an initial delay, which works fine. If I want to replace the initial delay with a startup probe, the <code>startupProbe</code> key and its nested elements are never included in the deployment descriptor when created with <code>kubectl apply</code>, and they get deleted from the deployment YAML in the GKE deployment editor after saving.</p>
<p>An example:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: "test"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-sleep
  namespace: test
spec:
  selector:
    matchLabels:
      app: postgres-sleep
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: postgres-sleep
    spec:
      containers:
        - name: postgres-sleep
          image: krichter/microk8s-startup-probe-ignored:latest
          ports:
            - name: postgres
              containerPort: 5432
          readinessProbe:
            tcpSocket:
              port: 5432
            periodSeconds: 3
          livenessProbe:
            tcpSocket:
              port: 5432
            periodSeconds: 3
          startupProbe:
            tcpSocket:
              port: 5432
            failureThreshold: 60
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-sleep
  namespace: test
spec:
  selector:
    app: httpd
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
</code></pre>
<p>with <code>krichter/microk8s-startup-probe-ignored:latest</code> being</p>
<pre><code>FROM postgres:11
CMD sleep 30 && postgres
</code></pre>
<p>I'm reusing this example from the same issue with microk8s where I could solve it by changing the <code>kubelet</code> and <code>kubeapi-server</code> configuration files (see <a href="https://github.com/ubuntu/microk8s/issues/770" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/issues/770</a> in case you're interested). I assume this is not possible with GKE clusters as they don't expose these files, probably for good reasons.</p>
<p>I assume that the feature needs to be enabled since it's behind a feature gate. How can I enable it on Google Kubernetes Engine (GKE) clusters with version >= 1.16? Currently I'm using the default from the regular channel 1.16.8-gke.15.</p>
|
<p>As I mentioned in my comments, I was able to reproduce the same behavior in my test environment, and after some research I found the reason.</p>
<p>In GKE, feature gates are only permitted if you are using an Alpha cluster. You can see a complete list of feature gates <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features" rel="noreferrer">here</a>.</p>
<p>I created an Alpha cluster and applied the same YAML; it works for me, the <code>startupProbe</code> is there in place.</p>
<p>So, you will only be able to use <code>startupProbe</code> in a GKE Alpha clusters, follow this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-alpha-cluster" rel="noreferrer">documentation</a> to create a new one.</p>
<p><strong>Be aware of the limitations in alpha clusters:</strong></p>
<blockquote>
<ul>
<li>Alpha clusters have the following limitations:</li>
<li>Not covered by the <a href="https://cloud.google.com/kubernetes-engine/sla" rel="noreferrer">GKE SLA</a></li>
<li>Cannot be upgraded</li>
<li>Node auto-upgrade and auto-repair are disabled on alpha clusters</li>
<li><strong>Automatically deleted after 30 days</strong></li>
<li>Do not receive security updates</li>
</ul>
</blockquote>
<p>Also, Google doesn't recommend using them for production workloads:</p>
<blockquote>
<p><strong>Warning:</strong> Do not use Alpha clusters or alpha features for production workloads. Alpha clusters expire after thirty days and do not receive security updates. You must migrate your data from alpha clusters before they expire. GKE does not automatically save data stored on alpha clusters.</p>
</blockquote>
|
<p>I have created a sample Spring Boot app and did the following:</p>
<p>1.created a docker image</p>
<p>2.created an Azure container registry and did a docker push to this</p>
<p>3.Created a cluster in Azure Kubernetes service and deployed it successfully.I have chosen external endpoint option for this.</p>
<p><a href="https://i.stack.imgur.com/G9dtB.png" rel="nofollow noreferrer">Kubernetes external end point</a></p>
<p>Say for a service-to-service call I don't want to use an IP like <a href="http://20.37.134.68:80" rel="nofollow noreferrer">http://20.37.134.68:80</a> but another custom name; how can I do that?
Also, if I chose internal, is there any way to replace the name?
I tried editing the YAML with an endpoint name property but failed. Any ideas?</p>
|
<p>I think you are mixing some concepts, so I'll try to explain and help you reach what you want.</p>
<ol>
<li>When you deploy a container image in a Kubernetes cluster, in the most cases you will use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer"><code>pod</code></a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>deployment</code></a> spec, that basically is a yaml file with all your deployment/pod configuration, name, image name etc. Here is an example of a simple echo-server app:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - name: http
              containerPort: 80
</code></pre>
<p>Observe the fields <code>name</code> in the file. Here you can configure the name for your deployment and for your containers.</p>
<ol start="2">
<li>In order to <strong>expose</strong> your application, you will need to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer"><code>service</code></a>. Services can be <code>internal</code> and <code>external</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Here</a> you can find all service types.</li>
</ol>
<p>For an internal service, you need to use the service type <code>ClusterIP</code> (default), which means only workloads inside your cluster can reach the pods. To reach your service from other pods, you can use the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records" rel="nofollow noreferrer">service name</a> composed as <code>my-svc.my-namespace.svc.cluster-domain.example</code>.</p>
<p>Here is an example of a service for the deployment above:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre>
<ol start="3">
<li>To expose your service externally, you have the option to use a service type <code>NodePort</code>, <code>LoadBalancer</code> or use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>ingress</code></a>.</li>
</ol>
<p>You can configure your DNS name in the ingress rules and make path rules if you want, or even configure a HTTPS for your application. There are few options to ingresses in kubernetes, and one of the most popular is <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer"><code>nginx-ingress</code></a>.</p>
<p>Here is an example of how to configure a simple ingress for our example service:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
    - host: myapp.mydomain.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: echo-svc
              servicePort: 80
</code></pre>
<p>In the example, I'm using the DNS name <code>myapp.mydomain.com</code>, which means you will only be able to reach your application by this name.</p>
<p>After creating the ingress, you can see the external IP with the command <code>kubectl get ing</code>, and you can create an A record in your DNS server.</p>
|
<p>I just want to list pods with their <code>.status.podIP</code> as an extra column.
It seems that as soon as I specify <code>-o=custom-columns=</code> the default columns <code>NAME, READY, STATUS, RESTARTS, AGE</code> will disappear.</p>
<p>The closest I was able to get is</p>
<pre><code>kubectl get pod -o wide -o=custom-columns="NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount,PODIP:.status.podIP"
</code></pre>
<p>but that is not really equivalent to the the default columns in the following way:</p>
<ul>
<li>READY: I don't know how to get the default output (which looks like <code>2/2</code> or <code>0/1</code>) by using custom columns</li>
<li>STATUS: In the default behaviour STATUS, can be Running, Failed, <strong>Evicted</strong>, but <code>.status.phase</code> will never be <code>Evicted</code>. It seems that the default STATUS is a combination of <code>.status.phase</code> and <code>.status.reason</code>. <strong>Is there a way to say show <code>.status.phase</code> if it's Running but if not show <code>.status.reason</code>?</strong></li>
<li>RESTARTS: This only shows the restarts of the first container in the pod (I guess the sum of all containers would be the correct one)</li>
<li>AGE: Again I don't know how to get the age of the pod using custom-columns</li>
</ul>
<p>Does anybody know the definitions of the default columns in custom-columns syntax?</p>
|
<p>I checked the differences in API requests between <code>kubectl get pods</code> and <code>kubectl get pods -o custom-columns</code>:</p>
<ul>
<li>With aggregation:</li>
</ul>
<pre class="lang-json prettyprint-override"><code>curl -k -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a" 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
</code></pre>
<ul>
<li>Without aggregation:</li>
</ul>
<pre class="lang-json prettyprint-override"><code>curl -k -v -XGET -H "Accept: application/json" -H "User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a" 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
</code></pre>
<p>So you will notice that when <code>-o custom-columns</code> is used, kubectl gets a <code>PodList</code> instead of a <code>Table</code> in the response body. The PodList does not have that aggregated data, so to my understanding it is not possible to get the same output from <code>kubectl get pods</code> using <code>custom-columns</code>.</p>
<p>Here's a <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/printers/internalversion/printers.go#L828" rel="nofollow noreferrer">code</a> snippet responsible for the output that you desire. Possible solution would be to fork the client and customize it to your own needs since as you already might notice this output requires some custom logic. Another possible solution would be to use one of the Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">api client libraries</a>. Lastly you may want to try <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/" rel="nofollow noreferrer">extend kubectl</a> functionalities with <a href="https://github.com/ishantanu/awesome-kubectl-plugins" rel="nofollow noreferrer">kubectl plugins</a>.</p>
|
<p>I have a RabbitMQ pod and I configured it to use persistent storage, in case of pod restart/deletion, by mounting a volume.</p>
<p>I configured everything but I am not able to get past this error:</p>
<pre><code>/usr/lib/rabbitmq/bin/rabbitmq-server: 42:
/usr/lib/rabbitmq/bin/rabbitmq-server:
cannot create /var/lib/rabbitmq/mnesia/[email protected]:
Permission denied
</code></pre>
<p>Here are my config files and Kubernetes deployment:</p>
<ol>
<li><code>Dockerfile</code></li>
</ol>
<pre><code>FROM ubuntu:16.04
# hadolint ignore=DL3009
RUN apt-get update
# hadolint ignore=DL3008
RUN apt-get -y install --no-install-recommends rabbitmq-server
RUN apt-get -y autoremove && apt-get -y clean
# hadolint ignore=DL3001
RUN service rabbitmq-server start
COPY start.sh /start.sh
RUN chmod 755 ./start.sh
EXPOSE 5672
EXPOSE 15672
CMD ["/start.sh", "test", "1234"]
</code></pre>
<ol start="2">
<li><code>start.sh</code></li>
</ol>
<pre><code>#!/bin/sh
cat > /etc/rabbitmq/rabbitmq.conf <<EOF
listeners.tcp.default = 5672
default_user = <<"$1">>
default_pass = <<"$2">>
EOF
rabbitmq-server
</code></pre>
<ol start="3">
<li><code>rabbitmq.yaml</code></li>
</ol>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: message-broker
  namespace: {{ .Release.Namespace }}
spec:
  ports:
    - port: 5672
      targetPort: 5672
      name: "tcp"
      protocol: TCP
    - port: 15672
      targetPort: 15672
      name: "management"
      protocol: TCP
  selector:
    app: message-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: message-broker
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message-broker
  template:
    metadata:
      labels:
        app: message-broker
    spec:
      containers:
        - name: message-broker
          image: {{ .Values.message_broker.image }}
          imagePullPolicy: {{ .Values.components.message_broker.imagePullPolicy }}
          ports:
            - containerPort: 5672
              name: tcp
            - containerPort: 15672
              name: management
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq/mnesia
          env:
            - name: RABBITMQ_DEFAULT_PASS
              valueFrom:
                secretKeyRef:
                  name: rabbitmq-secrets
                  key: password # password = root
            - name: RABBITMQ_DEFAULT_USER
              valueFrom:
                secretKeyRef:
                  name: rabbitmq-secrets
                  key: user # user = root
          ...
      nodeSelector:
        ....
      volumes:
        - name: data
          hostPath:
            path: /var/test/rabbitmq
</code></pre>
<p>Let me know what I might be missing. :)</p>
|
<p>The volume you mounted in <code>/var/lib/rabbitmq/mnesia</code> is owned by root.</p>
<p>The rabbitmq process is running as <code>rabbitmq</code> user and doesn't have write access to this directory.</p>
<p>In your <code>start.sh</code> add:</p>
<pre><code>chown rabbitmq:rabbitmq /var/lib/rabbitmq/mnesia
</code></pre>
<p>before starting the rabbitmq-server process.</p>
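<p>A minimal sketch of the adjusted <code>start.sh</code> with that line added:</p>
<pre><code>#!/bin/sh
cat > /etc/rabbitmq/rabbitmq.conf <<EOF
listeners.tcp.default = 5672
default_user = <<"$1">>
default_pass = <<"$2">>
EOF

# give the rabbitmq user ownership of the mounted data directory
chown rabbitmq:rabbitmq /var/lib/rabbitmq/mnesia

rabbitmq-server
</code></pre>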
|
<p>I need help to create a reverse zone for the external IP of a Kubernetes Ingress website, or something that performs the same function as a reverse zone.</p>
<p>Basically I need that, when I enter the IP of the ingress in the browser, it redirects me to the domain name.</p>
<p>Thanks for the help.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prueba-web-ingress
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: http://example.com
    networking.gke.io/managed-certificates: certificateexample
    kubernetes.io/ingress.global-static-ip-name: test
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: prueba-web
    servicePort: 80
</code></pre>
|
<p>Try the following, just change the <code>example.com</code> to your domain:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: reverse-redirect
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: http://example.com
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "somerandomname" # needs some name, doesn't need to exist
                port:
                  number: 80
</code></pre>
<p>When sending a request to the nginx ingress without a <code>Host</code> header, it will default to the ingress without a specified <code>host</code> field (just like the example above). Such a request will receive the following response:</p>
<pre><code>$ curl 123.123.123.123 -I
HTTP/1.1 301 Moved Permanently
Date: Wed, 28 Apr 2021 11:36:05 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: http://example.com
</code></pre>
<p>A browser receiving this redirect will go to the domain specified in the <code>Location</code> header.</p>
<p>Docs:</p>
<ul>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#permanent-redirect" rel="nofollow noreferrer">permanent redirect annotation</a></li>
</ul>
<hr />
<h3>EDIT:</h3>
<p>Since you are not using nginx ingress you can try to use a redirect app.</p>
<p>Here is a deployment with some random image I found on Docker Hub that responds to every request with a redirect. I don't want to lecture you on security, but I feel that I should at least mention that you should never use random container images from the internet, and if you do, you are doing it at your own risk. Preferably build one from source and push it to your own repo.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redir
  name: redir
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redir
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redir
    spec:
      containers:
        - image: themill/docker-nginx-redirect-301
          name: docker-nginx-redirect-301
          env:
            - name: REDIRECT_CODE
              value: "302"
            - name: REDIRECT_URL
              value: https://example.com
</code></pre>
<hr />
<p>Service for the above deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: redir
  name: redir
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: redir
status:
  loadBalancer: {}
</code></pre>
<hr />
<p>Now the ingress part:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prueba-web-ingress
  annotations:
    networking.gke.io/managed-certificates: certificateexample
    kubernetes.io/ingress.global-static-ip-name: test
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: redir
              servicePort: 80
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: prueba-web
              servicePort: 80
</code></pre>
<p>Notice the same applies here: no <code>host</code> field is set for the redir service, while the prueba-web service has its <code>host</code> field set.</p>
|
<p>Is there a way to set a condition in the Chart.yaml file based on whether a variable exists? I want to install the dependency only when the variable <code>SERVICE_A_URL</code> is not set. I tried these, but Helm always tries to install the dependency.</p>
<pre><code>condition: "not SERVICE_A_URL"
condition: "not defined SERVICE_A_URL"
</code></pre>
<p>Thank you!</p>
|
<p>As written <a href="https://helm.sh/docs/topics/charts/#tags-and-condition-fields-in-dependencies" rel="noreferrer">in documentation:</a></p>
<blockquote>
<p>All charts are loaded by default. If tags or condition fields are
present, they will be evaluated and used to control loading for the
chart(s) they are applied to.</p>
</blockquote>
<blockquote>
<p>Condition - The condition field holds one or more YAML paths
(delimited by commas). If this path exists in the top parent's values
and resolves to a boolean value, the chart will be enabled or disabled
based on that boolean value. Only the first valid path found in the
list is evaluated and if no paths exist then the condition has no
effect.</p>
</blockquote>
<p>If there is no path specified or there is nothing associated with the path the condition has no effect. You can disable installing the dependency in the <code>values.yaml</code> file.</p>
<p>For example, if you have following <code>Chart.yaml</code> file:</p>
<pre><code>dependencies:
  - name: subchart1
    condition: subchart1.enabled
    tags:
      - front-end
      - subchart1
</code></pre>
<p>and you want to disable charts tagged <code>front-end</code>, in <code>values.yaml</code> you have to assign a <code>false</code> value to that tag:</p>
<pre><code>subchart1:
  enabled: true

tags:
  front-end: false
  back-end: true
</code></pre>
|
<p>I have 3 Kubernetes clusters (prod, test, monitoring). I am new to Prometheus, so I have tested it by installing it in my test environment with the Helm chart:</p>
<pre><code># https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
</code></pre>
<p>But if I want to have metrics from the prod and test clusters, I have to repeat the same Helm installation, and each "kube-prometheus-stack" would be standalone in its own cluster. That is not ideal at all. I am trying to find a way to have a single Prometheus/Grafana which would federate/aggregate the metrics from each cluster's Prometheus server.<br/></p>
<p>I found this link, saying about prometheus federation:</p>
<pre><code>https://prometheus.io/docs/prometheus/latest/federation/
</code></pre>
<p>If I install the Helm chart "kube-prometheus-stack" and get rid of Grafana on the two other clusters, how can I make the third "kube-prometheus-stack", on the third cluster, scrape metrics from the two other ones?<br/>
Thanks</p>
|
<p>You have to modify the configuration for Prometheus federation so it can scrape metrics from the other clusters, as described <a href="https://prometheus.io/docs/prometheus/latest/federation/#configuring-federation" rel="noreferrer">in the documentation</a>:</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s

    honor_labels: true
    metrics_path: '/federate'

    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'

    static_configs:
      - targets:
          - 'source-prometheus-1:9090'
          - 'source-prometheus-2:9090'
          - 'source-prometheus-3:9090'
</code></pre>
<p>The <code>params</code> field specifies which <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L248" rel="noreferrer">jobs to scrape</a> metrics from. In this particular example:</p>
<blockquote>
<p>It will scrape any series with the label job="prometheus" or a metric name starting
with job: from the Prometheus servers at
source-prometheus-{1,2,3}:9090</p>
</blockquote>
<p>You can check the following articles for more insight into Prometheus federation:</p>
<ol>
<li><p><a href="https://www.linkedin.com/pulse/monitoring-kubernetes-prometheus-outside-cluster-steven-acreman" rel="noreferrer">Monitoring Kubernetes with Prometheus - outside the cluster!</a></p>
</li>
<li><p><a href="https://medium.com/@jotak/prometheus-federation-in-kubernetes-4ce46bda834e" rel="noreferrer">Prometheus federation in Kubernetes</a></p>
</li>
<li><p><a href="https://banzaicloud.com/blog/prometheus-federation/" rel="noreferrer">Monitoring multiple federated clusters with Prometheus - the secure way</a></p>
</li>
<li><p><a href="https://developers.mattermost.com/blog/cloud-monitoring/" rel="noreferrer">Monitoring a Multi-Cluster Environment Using Prometheus Federation and Grafana</a></p>
</li>
</ol>
|
<p>Trying to provision k8s cluster on 3 Debian 10 VMs with kubeadm.</p>
<p>All vms have 2 network interfaces, eth0 as public interface with static ip, eth1 as local interface with static ips in 192.168.0.0/16:</p>
<ul>
<li>Master: 192.168.1.1</li>
<li>Node1: 192.168.2.1</li>
<li>Node2: 192.168.2.2</li>
</ul>
<p>All nodes have interconnect between them.</p>
<p><code>ip a</code> from master host:</p>
<pre><code># ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:52:70:53:d5:12 brd ff:ff:ff:ff:ff:ff
inet XXX.XXX.244.240/24 brd XXX.XXX.244.255 scope global dynamic eth0
valid_lft 257951sec preferred_lft 257951sec
inet6 2a01:367:c1f2::112/48 scope global
valid_lft forever preferred_lft forever
inet6 fe80::252:70ff:fe53:d512/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:95:af:b0:8c:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/16 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::295:afff:feb0:8cc4/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>Master node is initialized fine with:</p>
<pre><code>kubeadm init --upload-certs --apiserver-advertise-address=192.168.1.1 --apiserver-cert-extra-sans=192.168.1.1,XXX.XXX.244.240 --pod-network-cidr=10.40.0.0/16 -v=5
</code></pre>
<p><a href="https://pastebin.com/raw/FfENpPnL" rel="nofollow noreferrer">Output</a></p>
<p>But when I join worker nodes kube-api is not reachable:</p>
<pre><code>kubeadm join 192.168.1.1:6443 --token 7bl0in.s6o5kyqg27utklcl --discovery-token-ca-cert-hash sha256:7829b6c7580c0c0f66aa378c9f7e12433eb2d3b67858dd3900f7174ec99cda0e -v=5
</code></pre>
<p><a href="https://pastebin.com/raw/ZE2rDNTY" rel="nofollow noreferrer">Output</a></p>
<p>Netstat from master:</p>
<pre><code># netstat -tupn | grep :6443
tcp 0 0 192.168.1.1:43332 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41774 192.168.1.1:6443 ESTABLISHED 5362/kube-proxy
tcp 0 0 192.168.1.1:41744 192.168.1.1:6443 ESTABLISHED 5236/kubelet
tcp 0 0 192.168.1.1:43376 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43398 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41652 192.168.1.1:6443 ESTABLISHED 4914/kube-scheduler
tcp 0 0 192.168.1.1:43448 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43328 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43452 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43386 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43350 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41758 192.168.1.1:6443 ESTABLISHED 5182/kube-controlle
tcp 0 0 192.168.1.1:43306 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43354 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43296 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43408 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41730 192.168.1.1:6443 ESTABLISHED 5182/kube-controlle
tcp 0 0 192.168.1.1:41738 192.168.1.1:6443 ESTABLISHED 4914/kube-scheduler
tcp 0 0 192.168.1.1:43444 192.168.1.1:6443 TIME_WAIT -
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41730 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41744 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41738 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41652 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 ::1:6443 ::1:42862 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41758 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 ::1:42862 ::1:6443 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41774 ESTABLISHED 5094/kube-apiserver
</code></pre>
<p>Pods from master:</p>
<pre><code># kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-558bd4d5db-8qhhl 0/1 Pending 0 12m <none> <none> <none> <none>
coredns-558bd4d5db-9hj7z 0/1 Pending 0 12m <none> <none> <none> <none>
etcd-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-apiserver-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-controller-manager-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-proxy-dzd42 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-scheduler-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
</code></pre>
<p>All VMs have these kernel parameters set:</p>
<ul>
<li><code>{ name: 'vm.swappiness', value: '0' }</code></li>
<li><code>{ name: 'net.bridge.bridge-nf-call-iptables', value: '1' }</code></li>
<li><code>{ name: 'net.bridge.bridge-nf-call-ip6tables', value: '1'}</code></li>
<li><code>{ name: 'net.ipv4.ip_forward', value: 1 }</code></li>
<li><code>{ name: 'net.ipv6.conf.all.forwarding', value: 1}</code></li>
</ul>
<p>br_netfilter kernel module active and iptables set to legacy mode (via alternatives)</p>
<p>Am I missing something?</p>
|
<p>The reason for your issues is that the TLS connection between the components has to be secured. From the <code>kubelet</code> point of view, this is only safe if the API server certificate contains, among its subject alternative names, the IP of the server it is asked to connect to. You can notice yourself that you only add one IP address to the <code>SANs</code>.</p>
<p>How can you fix this? There are two ways:</p>
<ol>
<li><p>Use the <code>--discovery-token-unsafe-skip-ca-verification</code> flag with your kubeadm join command from your node.</p>
</li>
<li><p>Add the IP address from the second <code>NIC</code> to <code>SANs</code> api certificate at the cluster initialization phase (kubeadm init)</p>
</li>
</ol>
<p>For more reading, you can check the directly related PR <a href="https://github.com/kubernetes/kubernetes/pull/93264" rel="nofollow noreferrer">#93264</a>, which was introduced in Kubernetes 1.19.</p>
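<p>As a sketch, option 1 applied to the join command from the question would look like this (the token is the placeholder value from your output):</p>
<pre><code>kubeadm join 192.168.1.1:6443 \
  --token 7bl0in.s6o5kyqg27utklcl \
  --discovery-token-unsafe-skip-ca-verification -v=5
</code></pre>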
|
<p>In order to deploy my k8s cluster I <code>kubectl apply -f folder-of-yamls/</code></p>
<p>And it seems that the order of execution is important.
One approach I've seen is to prefix <code>001-namespace.yaml</code> <code>002-secrets.yaml</code> etc. to create an ordering.</p>
<p>To tear down, if I <code>kubectl delete -f folder-of-yamls/</code>, can I simply reverse the order or must I manually create a sequence?</p>
|
<p>The order of deletion shouldn't matter too much. As mentioned by David Maze in the comments, <code>kubectl delete -f folder/</code> will clean up everything correctly. However, there might be some issues when you delete objects that depend on each other. For example, a <code>PVC</code> should be deleted before its <code>PV</code> (otherwise the PV will take forever to be deleted), but Kubernetes should take care of that if everything is present in the directory you are deleting the YAMLs from.</p>
|
<p>I ran the below command to run the spark job on kubernetes.</p>
<pre><code>./bin/spark-submit \
--master k8s://https://192.168.0.91:6443 \
--deploy-mode cluster \
--name spark-steve-test \
--class org.apache.spark.examples.Spark \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.namespace=spark \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=sclee01/spark:v2.3.0 \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar
</code></pre>
<p>However, I got the messages below; it seems the pod failed for some reason.</p>
<pre><code>20/10/22 12:00:36 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Pending)
20/10/22 12:00:37 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-steve-test-01-734603754e4038ed-driver
namespace: spark
labels: spark-app-selector -> spark-6a79e5b39a84403bb83dbf69ca20a02c, spark-role -> driver
pod uid: 5afec1f7-b1cd-4bac-a6c2-be239e0efc30
creation time: 2020-10-22T03:00:34Z
service account name: spark
volumes: spark-local-dir-1, spark-conf-volume, spark-token-bdwh9
node name: bistelresearchdev-sm
start time: 2020-10-22T03:00:34Z
phase: Running
container status:
container name: spark-kubernetes-driver
container image: sclee01/spark:v2.3.0
container state: running
container started at: 2020-10-22T03:00:37Z
20/10/22 12:00:37 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Running)
20/10/22 12:00:38 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Running)
20/10/22 12:00:39 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Running)
20/10/22 12:00:40 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Running)
20/10/22 12:00:41 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: spark-steve-test-01-734603754e4038ed-driver
namespace: spark
labels: spark-app-selector -> spark-6a79e5b39a84403bb83dbf69ca20a02c, spark-role -> driver
pod uid: 5afec1f7-b1cd-4bac-a6c2-be239e0efc30
creation time: 2020-10-22T03:00:34Z
service account name: spark
volumes: spark-local-dir-1, spark-conf-volume, spark-token-bdwh9
node name: bistelresearchdev-sm
start time: 2020-10-22T03:00:34Z
phase: Failed
container status:
container name: spark-kubernetes-driver
container image: sclee01/spark:v2.3.0
container state: terminated
container started at: 2020-10-22T03:00:37Z
container finished at: 2020-10-22T03:00:40Z
exit code: 1
termination reason: Error
20/10/22 12:00:41 INFO LoggingPodStatusWatcherImpl: Application status for spark-6a79e5b39a84403bb83dbf69ca20a02c (phase: Failed)
20/10/22 12:00:41 INFO LoggingPodStatusWatcherImpl: Container final statuses:
container name: spark-kubernetes-driver
container image: sclee01/spark:v2.3.0
container state: terminated
container started at: 2020-10-22T03:00:37Z
container finished at: 2020-10-22T03:00:40Z
exit code: 1
termination reason: Error
20/10/22 12:00:41 INFO LoggingPodStatusWatcherImpl: Application spark-steve-test-01 with submission ID spark:spark-steve-test-01-734603754e4038ed-driver finished
20/10/22 12:00:41 INFO ShutdownHookManager: Shutdown hook called
20/10/22 12:00:41 INFO ShutdownHookManager: Deleting directory /tmp/spark-2e4e5f9a-c54d-4790-b4cb-f9b6cd1e2105
</code></pre>
<p>The only thing I can see is the <code>error</code> itself, and I could not see the detailed reason for it.
I ran the below command but it doesn't give me any further information.</p>
<pre><code>bistel@BISTelResearchDev-NN:~/user/sclee/project/spark/spark-3.0.1-bin-hadoop2.7$ kubectl logs -p spark-steve-test-01-734603754e4038ed-driver
Error from server (NotFound): pods "spark-steve-test-01-734603754e4038ed-driver" not found
bistel@BISTelResearchDev-NN:~/user/sclee/project/spark/spark-3.0.1-bin-hadoop2.7$
</code></pre>
<p>Any help will be appreciated.</p>
<p>Thanks.</p>
|
<p>For more <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods" rel="nofollow noreferrer">detailed information</a> you can use <code>kubectl describe pod <pod-name></code>.
It will print a detailed description of the selected resources, including related resources such as events or controllers.</p>
<p>You can also use <code>kubectl get event | grep pod/<pod-name></code> - it will show events only for selected pod.</p>
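<p>If the driver pod has already been garbage collected, the namespace events may still be around for a while. A sketch, using the namespace and pod name from your output:</p>
<pre><code># all events in the spark namespace, oldest first
kubectl get events -n spark --sort-by=.metadata.creationTimestamp
# only events for the driver pod
kubectl get events -n spark --field-selector involvedObject.name=spark-steve-test-01-734603754e4038ed-driver
</code></pre>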
|
<p>I have a K8 cluster running with a deployment which has an update policy of RollingUpdate. How do I get Kubernetes to wait an extra amount of seconds or until some condition is met before marking a container as ready after starting?</p>
<p>An example would be having an API server with no downtime when deploying an update. But after the container starts it still needs X amount of seconds before it is ready to start serving HTTP requests. If it marks it as ready immediately once the container starts and the API server isn't actually ready there will be some HTTP requests that will fail for a brief time window.</p>
|
<p>Posting @David Maze comment as community wiki for better visibility:</p>
<p>You need a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">readiness probe</a>; the pod won't show as "ready" (and the deployment won't proceed) until the probe passes.</p>
<p>Example:</p>
<pre><code>readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<ul>
<li><p><code>initialDelaySeconds</code>: Number of seconds after the container has
started before liveness or readiness probes are initiated. Defaults
to 0 seconds. Minimum value is 0.</p>
</li>
<li><p><code>periodSeconds</code>: How often (in seconds) to perform the probe. Default
to 10 seconds. Minimum value is 1.</p>
</li>
</ul>
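<p>For an API server the probe is usually an HTTP check against a health endpoint rather than an exec command. A minimal sketch, assuming your server exposes <code>/healthz</code> on port 8080 (adjust the path and port to your application):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3
</code></pre>
<p>Until this probe succeeds, the pod stays out of the Service endpoints and the rolling update does not proceed, so no requests are routed to an instance that is not ready yet.</p>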
|
<p>Good afternoon</p>
<p>I'm just starting with Kubernetes, and I'm working with HPA (HorizontalPodAutoscaler):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: find-complementary-account-info-1
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: find-complementary-account-info-1
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageUtilization: 50
</code></pre>
<p>But when I try to see the metrics on the console, it doesn't show me:</p>
<pre><code> [dockermd@tmp108 APP-MM-ConsultaCuentaPagadoraPospago]$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
find-complementary-account-info-1 Deployment/find-complementary-account-info-1 <unknown>/50% 2 5 2 143m
</code></pre>
<p>My manifest is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: find-complementary-account-info-1
labels:
app: find-complementary-account-info-1
spec:
replicas: 2
selector:
matchLabels:
app: find-complementary-account-info-1
template:
metadata:
labels:
app: find-complementary-account-info-1
spec:
containers:
- name: find-complementary-account-info-1
image: find-complementary-account-info-1:latest
imagePullPolicy: IfNotPresent
resources:
limits:
memory: "350Mi"
requests:
memory: "300Mi"
ports:
- containerPort: 8083
env:
- name: URL_CONNECTION_BD
value: jdbc:oracle:thin:@10.161.6.15:1531/DEFAULTSRV.WORLD
- name: USERNAME_CONNECTION_BD
valueFrom:
secretKeyRef:
name: credentials-bd-pers
key: user_pers
- name: PASSWORD_CONNECTION_BD
valueFrom:
secretKeyRef:
name: credentials-bd-pers
key: password_pers
---
apiVersion: v1
kind: Service
metadata:
name: svc-find-complementary-account-info-1
labels:
app: find-complementary-account-info-1
namespace: default
spec:
selector:
app: find-complementary-account-info-1
type: LoadBalancer
ports:
-
protocol: TCP
port: 8083
targetPort: 8083
nodePort: 30025
externalIPs:
- 10.161.174.68
</code></pre>
<p>Show me the tag</p>
<p>Also, if I perform a kubectl describe hpa, it throws me the following:</p>
<pre><code>[dockermd@tmp108 APP-MM-ConsultaCuentaPagadoraPospago]$ kubectl describe hpa find-complementary-account-info-1
Name: find-complementary-account-info-1
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 29 Oct 2020 13:57:58 -0400
Reference: Deployment/find-complementary-account-info-1
Metrics: ( current / target )
resource memory on pods (as a percentage of request): <unknown> / 50%
Min replicas: 2
Max replicas: 5
Deployment pods: 2 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource memory: no metrics returned from resource metrics API
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 4m49s (x551 over 144m) horizontal-pod-autoscaler unable to get metrics for resource memory: no metrics returned from resource metrics API
</code></pre>
<p>I am not working in the cloud, I am working in an on-premise environment configured with Bare-Metal</p>
<p>Also install metrics-server:</p>
<pre><code>[dockermd@tmp108 APP-MM-ConsultaCuentaPagadoraPospago]$ kubectl get deployment metrics-server -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 177m
</code></pre>
<p>What am I missing? Could someone give me a hand?</p>
<pre><code>[dockermd@tmp108 certs]$ kubectl describe pod metrics-server -n kube-system
Name: metrics-server-5f4b6b9889-6pbv8
Namespace: kube-system
Priority: 0
Node: tmp224/10.164.21.169
Start Time: Thu, 29 Oct 2020 13:27:19 -0400
Labels: k8s-app=metrics-server
pod-template-hash=5f4b6b9889
Annotations: cni.projectcalico.org/podIP: 10.244.119.140/32
cni.projectcalico.org/podIPs: 10.244.119.140/32
Status: Running
IP: 10.244.119.140
IPs:
IP: 10.244.119.140
Controlled By: ReplicaSet/metrics-server-5f4b6b9889
Containers:
metrics-server:
Container ID: docker://f71d26dc2c8e787ae9551faad66f9588a950bf0a6d0d5cb90ff11ceb219e9b37
Image: k8s.gcr.io/metrics-server-amd64:v0.3.6
Image ID: docker-pullable://k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b
Port: 4443/TCP
Host Port: 0/TCP
Args:
--cert-dir=/tmp
--secure-port=4443
State: Running
Started: Thu, 29 Oct 2020 13:27:51 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-4mn92 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
metrics-server-token-4mn92:
Type: Secret (a volume populated by a Secret)
SecretName: metrics-server-token-4mn92
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
|
<p>The error <code>unable to get metrics for resource memory: no metrics returned from resource metrics API</code> is not caused by a faulty HPA but by the metrics-server not being able to scrape any metrics.</p>
<p>Most of the time this error is caused by <a href="https://stackoverflow.com/a/63139861/12237732">missing arguments</a> in the <a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/manifests/base/deployment.yaml" rel="noreferrer">metrics-server deployment</a>. It can be fixed by adding the following arguments to the deployment.</p>
<pre><code> --kubelet-insecure-tls
--kubelet-preferred-address-types=InternalIP,ExternalIP
</code></pre>
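<p>As a sketch, those flags go into the container args of the metrics-server deployment (for example via <code>kubectl edit deployment metrics-server -n kube-system</code>), keeping the flags your pod already has:</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP
</code></pre>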
|
<p>When I try to copy some files in an existing directory with a wildcard, I receive the error:</p>
<pre><code>kubectl cp localdir/* my-namespace/my-pod:/remote-dir/
error: one of src or dest must be a remote file specification
</code></pre>
<p>It looks like wildcard support has been removed, but I have many files to copy and my remote dir is not empty, so I can't use a recursive copy.</p>
<p>How can I run a similar operation?</p>
|
<p>As a workaround you can use:</p>
<pre><code>find localdir/* | xargs -I{} kubectl cp {} my-namespace/my-pod:/remote-dir/
</code></pre>
<p>In find you can use a wildcard to specify the files you are looking for, and each matching file will be copied to the pod.</p>
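<p>Another workaround, assuming <code>tar</code> is available in the container (kubectl cp relies on tar under the hood anyway), is to stream the files in one go:</p>
<pre><code># copy the contents of localdir into the existing /remote-dir without overwriting unrelated files
tar cf - -C localdir . | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /remote-dir
</code></pre>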
|
<p>I have deployed istio on kubernetes, and I installed prometheus from istio addons. My goal is to only monitor some pods of one application(such as all pods of bookinfo application). The job definition for monitoring pods is as below:</p>
<pre><code> - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-nodes-cadvisor
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
</code></pre>
<p>My problem is that I don't know how to monitor only one namespace's pods. For example, I deploy the bookinfo application in a namespace named Book. I only want the metrics of pods from namespace Book. However, prometheus will collect the metrics of all pods on the nodes. Instead of changing annotations of the application like <a href="https://stackoverflow.com/questions/59070150/monitor-only-one-namespace-metrics-prometheus-with-kubernetes">Monitor only one namespace metrics - Prometheus with Kubernetes</a>, I want to know if there is a method to select only one namespace by changing the job definition above. Or is there some way to choose the monitored pods by their labels?</p>
|
<p>The following will match all target pods with label: <code>some_label</code> with any value.</p>
<pre><code>relabel_configs:
- action: keep
source_labels: [__meta_kubernetes_pod_label_some_label]
regex: (.*)
</code></pre>
<p>If you want to keep targets with label: <code>monitor</code> and value: <code>true</code> you would do:</p>
<pre><code>relabel_configs:
- action: keep
source_labels: [__meta_kubernetes_pod_label_monitor]
regex: true
</code></pre>
<p>All pods that don't match it will be dropped from scraping.</p>
<p>The same you should be able to do for namespaces:</p>
<pre><code>relabel_configs:
- action: keep
source_labels: [__meta_kubernetes_namespace]
regex: Book
</code></pre>
<hr />
<p>EDIT ></p>
<blockquote>
<p>is there a way to change the [container_label_io_kubernetes_container_name] labels into "container_name"?</p>
</blockquote>
<p>Try this:</p>
<pre><code>relabel_configs:
- action: replace
source_labels: [container_label_io_kubernetes_container_name]
target_label: container_name
</code></pre>
<p>It's all explained in <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">prometheus docs about configuration</a></p>
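<p>If your scrape job uses <code>role: pod</code> service discovery, you can also restrict discovery to a single namespace directly in <code>kubernetes_sd_configs</code> instead of relabeling; a sketch (the namespace name is an example):</p>
<pre><code>kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
        - book
</code></pre>
<p>For the <code>role: node</code> cadvisor job shown in the question, the namespace appears as a label on the scraped metrics themselves, so a similar <code>keep</code> rule can be applied under <code>metric_relabel_configs</code> on the <code>namespace</code> label.</p>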
|
<p>I am trying to use <code>NFS</code> volume in the same cluster I have deployed other k8s services. But one of the services using the <code>NFS</code> fails with
<code>Output: mount.nfs: mounting nfs.default.svc.cluster.local:/opt/shared-shibboleth-idp failed, reason given by server: No such file or directory</code></p>
<p>The <code>nfs PV</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
nfs:
server: nfs.default.svc.cluster.local # nfs is from svc {{ include "nfs.name" .}}
path: "/opt/shared-shibboleth-idp"
</code></pre>
<p>Description of <code>nfs service</code></p>
<pre><code>➜ helm git:(ft-helm) ✗ kubectl describe svc nfs
Name: nfs
Namespace: default
Labels: app=nfs
chart=nfs-1.0.0
heritage=Tiller
Annotations: <none>
Selector: role=nfs
Type: ClusterIP
IP: 10.19.251.72
Port: mountd 20048/TCP
TargetPort: 20048/TCP
Endpoints: 10.16.1.5:20048
Port: nfs 2049/TCP
TargetPort: 2049/TCP
Endpoints: 10.16.1.5:2049
Port: rpcbind 111/TCP
TargetPort: 111/TCP
Endpoints: 10.16.1.5:111
</code></pre>
<p>And the <code>nfs deployment</code> </p>
<pre><code>➜ helm git:(ft-helm) ✗ kubectl describe replicationcontrollers telling-quoll-nfs
Name: telling-quoll-nfs
Namespace: default
Selector: role=nfs
Labels: app=nfs
chart=nfs-1.0.0
heritage=Tiller
Annotations: <none>
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: role=nfs
Containers:
nfs:
Image: k8s.gcr.io/volume-nfs:0.8
Ports: 20048/TCP, 2049/TCP, 111/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Environment: <none>
Mounts:
/exports from nfs (rw)
Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
Events: <none>
</code></pre>
<p>And where it is being used</p>
<pre><code>volumeMounts:
# names must match the volume names below
- name: RELEASE-NAME-shared-shib
mountPath: "/opt/shared-shibboleth-idp"
;
;
volumes:
- name: RELEASE-NAME-shared-shib
persistentVolumeClaim:
claimName: nfs
;
;
</code></pre>
<p>k8s <code>version</code></p>
<pre><code>➜ helm git:(ft-helm) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.8", GitCommit:"7d3d6f113e933ed1b44b78dff4baf649258415e5", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:16Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
|
<p>As mentioned in the comments made by <code>Patrick W</code> and <code>damitj07</code>:</p>
<p>You have to create the folder or directory manually before trying to mount it; otherwise Kubernetes will raise an error because the destination directory does not exist.</p>
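<p>A quick sketch of how that could be done against the NFS server pod from your deployment (the pod name is a placeholder; check <code>/etc/exports</code> inside the container first to see which path is actually exported, here assumed to be <code>/exports</code>):</p>
<pre><code># see what the server exports
kubectl exec <nfs-server-pod> -- cat /etc/exports
# create the missing directory under the exported path
kubectl exec <nfs-server-pod> -- mkdir -p /exports/opt/shared-shibboleth-idp
</code></pre>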
|
<p>I have an application written in Go which reads environment variables from a config.toml file.
The config.toml file contains the key values as</p>
<pre><code>Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
</code></pre>
<p>In my application I am reading all the variables from the .toml file as</p>
<pre><code>// Represents database and server credentials
type Config struct {
Server string
Database string
NRFAddrPort string
}
var NRFAddrPort string
// Read and parse the configuration file
func (c *Config) Read() {
if _, err := toml.DecodeFile("config.toml", &c); err != nil {
log.Print("Cannot parse .toml configuration file ")
}
NRFAddrPort = c.NRFAddrPort
}
</code></pre>
<p>I would like to deploy my application in my Kubernetes cluster (3 VMs, a master and 2 worker nodes). After building a docker image and pushing it to docker hub, when I deploy my application using configMaps to provide the variables, my application runs for a few seconds and then gives an Error.
It seems the application cannot read the env variables from the configMap. Below are my configMap and the deployment.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nrf-config
namespace: default
data:
config-toml: |
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nrf-instance
spec:
selector:
matchLabels:
app: nrf-instance
replicas: 1
template:
metadata:
labels:
app: nrf-instance
version: "1.0"
spec:
nodeName: k8s-worker-node2
containers:
- name: nrf-instance
image: grego/appapi:1.0.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9090
volumeMounts:
- name: config-volume
mountPath: /home/ubuntu/appapi
volumes:
- name: config-volume
configMap:
name: nrf-config
</code></pre>
<p>Also, one thing I do not understand is the mountPath in volumeMounts. Do I need to copy the config.toml to this mountPath?
When I hard-code these variables in my application and deploy the docker image in kubernetes, it runs without error.
My problem now is how to pass these environment variables to my application using a kubernetes configMap, or any other method, so it can run in my Kubernetes cluster instead of hard-coding them in my application. Any help is appreciated.</p>
<p>Also attached is my Dockerfile content</p>
<pre><code># Dockerfile References: https://docs.docker.com/engine/reference/builder/
# Start from the latest golang base image
FROM golang:latest as builder
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the Working Directory inside the container
COPY . .
# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
######## Start a new stage from scratch #######
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .
# Expose port 9090 to the outside world
EXPOSE 9090
# Command to run the executable
CMD ["./main"]
</code></pre>
<p>Is there any problem with its content?</p>
<p>I also tried passing the values as env variables like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nrf-instance
spec:
selector:
matchLabels:
app: nrf-instance
replicas: 1
template:
metadata:
labels:
app: nrf-instance
version: "1.0"
spec:
nodeName: k8s-worker-node2
containers:
- name: nrf-instance
image: grego/appapi:1.0.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9090
env:
- name: Server
valueFrom:
configMapKeyRef:
name: nrf-config
key: config-toml
- name: Database
valueFrom:
configMapKeyRef:
name: nrf-config
key: config-toml
- name: NRFAddrPort
valueFrom:
configMapKeyRef:
name: nrf-config
key: config-toml
</code></pre>
|
<p>You cannot pass those values as separate environment variables as-is, because they are read as one text blob instead of separate <code>key:values</code>. Current configmap looks like this:</p>
<pre><code>Data
====
config.toml:
----
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
</code></pre>
<p>To pass them as environment variables you have to modify the <code>configmap</code> so those values are read as <code>key: value</code> pairs:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap
data:
Server: mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
Database: nrfdb
NRFAddrPort: :9090
</code></pre>
<p>This way those values will be separated and can be passed as env variables:</p>
<pre><code>Data
====
Database:
----
nrfdb
NRFAddrPort:
----
:9090
Server:
----
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
</code></pre>
<p>When you pass it to pod:</p>
<pre><code>[...]
spec:
containers:
- name: nrf-instance
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9090
envFrom:
- configMapRef:
name: example-configmap
</code></pre>
<p>You can see that it was passed correctly, for example by executing <code>env</code> command inside the pod:</p>
<pre><code>kubectl exec -it env-6fb4b557d7-zw84w -- env
NRFAddrPort=:9090
Server=mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
Database=nrfdb
</code></pre>
<p>The values are read as separate env variables, for example <code>Server</code> value:</p>
<pre><code>kubectl exec -it env-6fb4b557d7-zw84w -- printenv Server
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
</code></pre>
|
<p>I have an ongoing requirement to patch my nginx-ingress daemonset each time I wish to expose new TCP ports. I have reviewed the documentation and I cannot understand the correct kubectl patch syntax to perform the patch. An excerpt from the yaml follows:</p>
<pre><code>spec:
revisionHistoryLimit: 10
selector:
matchLabels:
name: nginx-ingress-microk8s
template:
metadata:
creationTimestamp: null
labels:
name: nginx-ingress-microk8s
spec:
containers:
- args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
- --default-backend-service=ingress/custom-default-backend
- --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
- --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
- --ingress-class=public
- ' '
- --publish-status-address=127.0.0.1
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: k8s.gcr.io/ingress-nginx/controller:v0.44.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: nginx-ingress-microk8s
ports:
- containerPort: 80
hostPort: 80
name: http
protocol: TCP
- containerPort: 443
hostPort: 443
name: https
protocol: TCP
- containerPort: 10254
hostPort: 10254
name: health
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: {}
</code></pre>
<p>I want to use kubectl patch to append another port definition under ports i.e.</p>
<pre><code> - containerPort: 1234
hostPort: 1234
name: my-port-1234
protocol: TCP
</code></pre>
<p>Patching a config map was simple using:</p>
<pre><code>kubectl patch configmap nginx-ingress-tcp-microk8s-conf -n ingress --type merge -p '{"data":{"1234":"namespace1/api-connect:1234"}}'
</code></pre>
<p>but I cannot understand how to amend the command to cope with the more complex update required for the Daemonset.</p>
<p>Any assistance gratefully received. Thanks</p>
|
<p>As already mentioned by David in the comment it is better to keep every change under version control.</p>
<p>But if you really need to do this, here is the command:</p>
<pre><code>kubectl patch ds -n ingress nginx-ingress-microk8s-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value":{"containerPort":1234,"name":"my-port-1234","hostPort":1234,"protocol":"TCP"}}]'
</code></pre>
<p>The patch command is explained in the k8s docs: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">update-api-object-kubectl-patch</a>,
and the JSON patch type details are explained in <a href="https://www.rfc-editor.org/rfc/rfc6902" rel="nofollow noreferrer">rfc6902</a>.</p>
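<p>After patching, you can verify that the port was appended, for example:</p>
<pre><code>kubectl get ds -n ingress nginx-ingress-microk8s-controller -o jsonpath='{.spec.template.spec.containers[0].ports[*].containerPort}'
</code></pre>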
|
<p>I have one <code>kubernetes master</code> and three <code>kubernetes nodes</code>. I made one pod which is running on a specific node. I want to run that pod on 2 nodes. How can I achieve this? Does the <code>replica</code> concept help me? If yes, how?</p>
|
<p><strong>Yes</strong>, you can assign pods to one or more nodes of your cluster, and here are some options to achieve this:</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer"><strong>nodeSelector</strong></a></p>
<blockquote>
<p>nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer"><strong>affinity and anti-affinity</strong></a></p>
<blockquote>
<p>Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.</p>
<p>nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature, greatly expands the types of constraints you can express. The key enhancements are</p>
<ol>
<li>The affinity/anti-affinity language is more expressive. The language offers more matching rules besides exact matches created with a logical AND operation;</li>
<li>you can indicate that the rule is "soft"/"preference" rather than a hard requirement, so if the scheduler can't satisfy it, the pod will still be scheduled;</li>
<li>you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located</li>
</ol>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><strong>DaemonSet</strong></a></p>
<blockquote>
<p>A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.</p>
<p>Some typical uses of a DaemonSet are:</p>
<ul>
<li>running a cluster storage daemon on every node</li>
<li>running a logs collection daemon on every node</li>
<li>running a node monitoring daemon on every node</li>
</ul>
</blockquote>
<p>Please check <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">this</a> link to read more about how to assign pods to nodes.</p>
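<p>To make the "2 nodes" case concrete, here is a minimal sketch (the name and image are placeholders) of a Deployment with two replicas plus pod anti-affinity, which forces the two pods onto different nodes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # do not schedule two pods with the app=my-app label on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: nginx
</code></pre>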
|
<p>I am adding external authentication using the auth-url annotation. How do I set conditional request headers for the auth-url API depending on the incoming call? Can I set the request headers in the nginx controller according to the incoming call?</p>
<p>Edited:</p>
<p>This is about adding a custom header (Id) which is expected by the auth-url endpoint. I am setting the Id header, which is required by the authorize API behind auth-url, but it is not being received in the API. Is this the right way to set it? My next question is: if it is set, how can I set it conditionally depending on which host the request is coming through?</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-kubernetes-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-url: http://ca6dd3adc439.ngrok.io/authorize
nginx.ingress.kubernetes.io/auth-method: POST
nginx.ingress.kubernetes.io/auth-snippet: |
proxy_set_header Id "queryApps";
spec:
rules:
- host: "hw1.yourdomain"
http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: hello-netcore-k8s
servicePort: 80
- host: "hw2.yourdomain"
http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: hello-kubernetes-second
servicePort: 80
</code></pre>
|
<p><strong>My next question is: if it is set, how can I set it conditionally depending on which host the request is coming through?</strong></p>
<p>The best way would be to create two ingress objects, one with external auth enabled for host <code>hw1.yourdomain</code>. For some reason, while testing this, the <code>auth-snippet</code> was not passing the header, but it works fine with <code>configuration-snippet</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-kubernetes-ingress-auth-on
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-url: http://ca6dd3adc439.ngrok.io/authorize
nginx.ingress.kubernetes.io/auth-method: POST
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Id "queryApps";
spec:
rules:
- host: "hw1.yourdomain"
http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: hello-netcore-k8s
servicePort: 80
</code></pre>
<p>As you can see here it passes the desired header:</p>
<pre class="lang-sh prettyprint-override"><code> "path": "/",
"headers": {
"host": "hw1.yourdomain",
"x-request-id": "5e91333bed960802a67958d71e787b75",
"x-real-ip": "192.168.49.1",
"x-forwarded-for": "192.168.49.1",
"x-forwarded-host": "hw1.yourdomain",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-scheme": "http",
"id": "queryApps",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
</code></pre>
<p>Moving on, the second ingress object has to be configured the auth disabled for host <code>hw2.yourdomain</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-kubernetes-ingress-auth-off
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: "hw2.yourdomain"
http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: hello-kubernetes-second
servicePort: 80
</code></pre>
<p>You can then have a look at the <code>nginx.conf</code> to check how those two ingress objects are configured at the controller level. This is the first ingress:</p>
<pre class="lang-sh prettyprint-override"><code> ## start server hw1.yourdomain
server {
server_name hw1.yourdomain ;
listen 80 ;
listen 443 ssl http2 ;
set $proxy_upstream_name "-";
location = /_external-auth-Lw {
internal;
set $proxy_upstream_name "default-hello-netcore-k8s-80";
hello-netcore-k8s.default.svc.cluster.local;
proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
--------
--------
# Pass the extracted client certificate to the auth provider
set $target http://hello-netcore-k8s.default.svc.cluster.local;
proxy_pass $target;
location / {
set $namespace "default";
set $ingress_name "hello-kubernetes-ingress-auth-on";
set $service_name "hello-netcore-k8s";
set $service_port "80";
set $location_path "/";
set $balancer_ewma_score -1;
set $proxy_upstream_name "default-hello-netcore-k8s-80";
# this location requires authentication
auth_request /_external-auth-Lw;
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
--------
proxy_set_header Id "queryApps";
----
</code></pre>
<p>And this is the second one:</p>
<pre class="lang-sh prettyprint-override"><code> ## start server hw2.yourdomain
server {
server_name hw2.yourdomain ;
listen 80 ;
listen 443 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "default";
set $ingress_name "hello-kubernetes-ingress-auth-off";
set $service_name "hello-kubernetes-second";
set $service_port "80";
set $location_path "/";
</code></pre>
|
<p>I am new to Kubernetes. I am trying to mimic a behavior a bit like what I do with <code>docker-compose</code> when I serve a Couchbase database in a docker container.</p>
<pre class="lang-yaml prettyprint-override"><code> couchbase:
image: couchbase
volumes:
- ./couchbase:/opt/couchbase/var
ports:
- 8091-8096:8091-8096
- 11210-11211:11210-11211
</code></pre>
<p>I managed to create a cluster in my localhost using a tool called "<a href="https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster" rel="nofollow noreferrer">kind</a>"</p>
<pre class="lang-sh prettyprint-override"><code>kind create cluster --name my-cluster
kubectl config use-context my-cluster
</code></pre>
<p>Then I am trying to use that cluster to deploy a Couchbase service</p>
<p>I created a file named <code>couchbase.yaml</code> with the following content (I am trying to mimic what I do with my docker-compose file).</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: couchbase
namespace: my-project
labels:
platform: couchbase
spec:
replicas: 1
selector:
matchLabels:
platform: couchbase
template:
metadata:
labels:
platform: couchbase
spec:
volumes:
- name: couchbase-data
hostPath:
# directory location on host
path: /home/me/my-project/couchbase
# this field is optional
type: Directory
containers:
- name: couchbase
image: couchbase
volumeMounts:
- mountPath: /opt/couchbase/var
name: couchbase-data
</code></pre>
<p>Then I start the deployment like this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
</code></pre>
<p>However my deployment never actually starts:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployments -n my-project couchbase
NAME READY UP-TO-DATE AVAILABLE AGE
couchbase 0/1 1 0 6m14s
</code></pre>
<p>And when I look for the logs I see this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
</code></pre>
|
<p>As OP mentioned in a comment, issue was solved using extra mount as explained in documentation: <a href="https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts</a></p>
<p>Here is OP's comment but formatted so it's more readable:</p>
<hr />
<p>the error shows up when I run this command:</p>
<pre><code>kubectl describe pods -n my-project couchbase
</code></pre>
<p>I could fix it by creating a new kind cluster:</p>
<pre><code>kind create cluster --config cluster.yaml
</code></pre>
<p>Passing this content in <code>cluster.yaml</code>:</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
- role: control-plane
extraMounts:
- hostPath: /home/me/my-project/couchbase
containerPath: /couchbase
</code></pre>
<p>In <code>couchbase.yaml</code> the path becomes path: <code>/couchbase</code> of course.</p>
|
<p>I followed the <a href="https://cert-manager.io/docs/tutorials/acme/ingress/" rel="nofollow noreferrer">cert-manager tutorial</a> to enable tls in my <strong>k3s cluster</strong>. So I modified the letsencrypt-staging issuer file to look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: traefik
</code></pre>
<p>but when I deploy it, I get the error <code> Failed to verify ACME account: Get "https://acme-staging-v02.api.letsencrypt.org/directory": read tcp 10.42.0.96:45732->172.65.46.172:443: read: connection reset by peer</code>. But that's only with the staging ClusterIssuer. The production example from the tutorial works flawlessly. I researched this error and it seems to be something with the kubernetes DNS, but I don't know how to test the DNS or any other way to figure this error out.</p>
<hr />
<p>Tested the kubernetes DNS and it is up and running, so it must be an error with cert-manager, especially because the prod certificate's status says <code>Ready=True</code></p>
|
<p>So it seems like I ran into a Let's Encrypt rate limit. After waiting for a day, the certificate now works.</p>
|
<p>I have a Redis instance running in a container.
Inside the container, the cgroup rss shows about 1283MB of memory in use.</p>
<p>The kmem memory usage is 30.75MB.</p>
<p>The sum of the memory usage of all processes in the docker container is 883MB.</p>
<p>How can I figure out the "disappeared memory" (1296-883-30=383MB)? The "disappeared memory" keeps growing as time passes. Finally the container gets OOM killed.</p>
<p>Environment info:</p>
<pre><code>redis version:4.0.1
docker version:18.09.9
k8s version:1.13
</code></pre>
<p><strong>the memory usage is 1283MB</strong></p>
<pre><code>root@redis-m-s-rbac-0:/opt#cat /sys/fs/cgroup/memory/memory.usage_in_bytes
1346289664 >>>> 1283.921875 MB
</code></pre>
<p>the kmem memory usage is 30.75MB</p>
<pre><code>root@redis-m-s-rbac-0:/opt#cat /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes
32194560 >>> 30.703125 MB
</code></pre>
<pre><code>root@redis-m-s-rbac-0:/opt#cat /sys/fs/cgroup/memory/memory.stat
cache 3358720
rss 1359073280 >>> 1296.11328125 MB
rss_huge 515899392
shmem 0
mapped_file 405504
dirty 0
writeback 0
swap 0
pgpgin 11355630
pgpgout 11148885
pgfault 25710366
pgmajfault 0
inactive_anon 0
active_anon 1359245312
inactive_file 2351104
active_file 1966080
unevictable 0
hierarchical_memory_limit 4294967296
hierarchical_memsw_limit 4294967296
total_cache 3358720
total_rss 1359073280
total_rss_huge 515899392
total_shmem 0
total_mapped_file 405504
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 11355630
total_pgpgout 11148885
total_pgfault 25710366
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 1359245312
total_inactive_file 2351104
total_active_file 1966080
total_unevictable 0
</code></pre>
<p><strong>the sum of the memory usage of all processes in the docker container is 883MB</strong></p>
<pre><code>root@redis-m-s-rbac-0:/opt#ps aux | awk '{sum+=$6} END {print sum / 1024}'
883.609
</code></pre>
|
<p>This is happening because <code>usage_in_bytes</code> does not show the exact value of memory and swap usage. <code>memory.usage_in_bytes</code> shows the current memory (RSS+Cache) usage.</p>
<blockquote>
<p><em>5.5 usage_in_bytes: For efficiency, as other kernel components, memory cgroup uses some optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is affected by the method and doesn't show 'exact' value of memory (and swap) usage, it's a fuzz value for efficient access. (Of course, when necessary, it's synchronized.) If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP) value in memory.stat (see 5.2).</em></p>
</blockquote>
<p>Reference:
<a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="nofollow noreferrer">https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt</a></p>
|
<pre><code>docker run -it -p 80:80 con-1
docker run -it -p hostport:containerport
</code></pre>
<p>Lets say i have this yaml definition, does below where it says 80 <code>ports -> containerPort: 80</code> sufficent? In other words how do i account for -p 80:80 the hostport and container port in kubernetes yaml definition?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
|
<p>Exposing ports of applications with k8s is different than exposing it with docker.</p>
<p>For pods, the <code>spec.containers.ports</code> field isn't used to expose ports. It is mostly used for documentation purposes and also to name ports so that you can reference them later in a service object's target-port by name instead of by number (<a href="https://stackoverflow.com/a/65270688/12201084">https://stackoverflow.com/a/65270688/12201084</a>).</p>
<p>So how do we expose pods to the outside?</p>
<p>It's done with <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service objects</a>. There are 4 types of service: ClusterIP, NodePort, LoadBalancer and ExternalName.</p>
<p>They are all well explained in the k8s documentation so I am not going to explain them here. Check out <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">K8s docs on types of services</a></p>
<p>Assuming you know what type you want to use you can now use kubectl to create this service:</p>
<pre><code>kubectl expose pod <pod-name> --port <port> --target-port <targetport> --type <type>
kubectl expose deployment <pod-name> --port <port> --target-port <targetport> --type <type>
</code></pre>
<p>Where:</p>
<ul>
<li>--port - is used to specify the port on which you want to expose the application</li>
<li>--target-port - is used to specify the port on which the application is running</li>
<li>--type - is used to specify the type of service</li>
</ul>
<p>With docker you would use <code>-p <port>:<target-port></code></p>
<p>OK, but maybe you don't want to use kubectl to create a service and you would like to keep the service in git or wherever as a yaml file. You can check out <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">the examples in k8s docs</a>, copy them and write your own yaml, or do the following:</p>
<pre><code>$ kubectl expose pod my-svc --port 80 --dry-run=client -oyaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: my-svc
name: my-svc
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: my-svc
status:
loadBalancer: {}
</code></pre>
<p>Note: notice that if you don't pass a value for <code>--target-port</code> it defaults to the same value as <code>--port</code>.
Also notice the selector field, which has the same values as the labels on a pod. It will forward the traffic to every pod with this label (within the namespace).</p>
<p>Now, if you don't pass a value for <code>--type</code>, it defaults to ClusterIP, which means the service will be accessible only from within the cluster.
If you want to access the pod/application from the outside, you need to use either NodePort or LoadBalancer.</p>
<p>NodePort opens some random port on every node, and connecting to this port will forward the packets to the pod. The problem is that you can't just pick any port to open, and often you don't even get to pick the port at all (it's randomly assigned).</p>
<p>In case of type LoadBalancer you can use whatever port you'd like, but you need to run in the cloud and use a cloud provisioner to create and configure an external load balancer for you and point it to your pod. If you are running on bare-metal you can use projects like <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> to make use of the LoadBalancer type.</p>
<hr />
<p>To summarize, exposing containers with docker is totally different than exposing them with kubernetes. Don't assume k8s will work the same way the docker works just with different notation, because it won't.</p>
<p>Read the docs and blogs about k8s services and learn how they work.</p>
|
<p>I am installing kubectl on Ubuntu 20.04 following the guide <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="nofollow noreferrer">here</a>, but it didn't create the <code>/etc/kubernetes</code> folder for some reason. Then I tried this <a href="https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/" rel="nofollow noreferrer">guide</a>; it now created that folder, but only with <code>manifests</code> inside. There is no <code>.conf</code> file created. It returns this error if I run <code>kubectl cluster-info</code>.</p>
<pre><code>W0629 19:50:08.122990 83790 loader.go:221] Config not found: /etc/kubernetes/admin.conf
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>Anyone knows the solution? Thanks</p>
|
<p>First things first. Kubernetes and kubectl are different things. Installing kubectl is not supposed to create <code>/etc/kubernetes</code> folder or files there. The <code>kubectl</code> cli tool is only a client for a kubernetes cluster. If you already have a Kubernetes cluster then you can provide a <code>kubeconfig</code> to this cli tool. This will enable you to interact with the cluster.</p>
<p>Your second link is installing kubernetes cluster with <code>kubeadm</code> cli tool. This will create a cluster from ground up. If cluster creation is completed successfully then <code>kubeadm</code> tool will automatically create an <code>admin.conf</code> file for you. This file is also a <code>kubeconfig</code> file.</p>
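<p>If you did run <code>kubeadm init</code> successfully and only need kubectl to pick up the generated kubeconfig, the usual post-install steps are:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>After that, <code>kubectl cluster-info</code> should stop complaining about a missing config.</p>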
|
<p>The CoreDNS pod is not running. Please find the status below.</p>
<pre><code>kubectl get po --all-namespaces -o wide | grep -i coredns
kube-system coredns-6955765f44-8qhkr 1/1 Running 0 24m 10.244.0.59 k8s-master <none> <none>
kube-system coredns-6955765f44-lpmjk 0/1 Running 0 24m 10.244.1.43 k8s-worker-node-1 <none> <none>
</code></pre>
<p>Please find the pod logs below.</p>
<pre><code>kubectl logs coredns-6955765f44-lpmjk -n kube-system
E0420 03:43:03.855622 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0420 03:43:03.855622 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0420 03:43:03.855622 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0420 03:43:03.855622 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0420 03:43:05.859525 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0420 03:43:05.859525 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
</code></pre>
|
<p>To solve <code>no route to host</code> issue with CoreDNS pods you have to flush iptables by running:</p>
<pre><code>systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
</code></pre>
<p>Also mind that flannel has been removed from the list of CNIs in the <a href="https://github.com/kubernetes/website/commit/f73647531dcdade2327412253a5f839781d57897/" rel="nofollow noreferrer">kubeadm documentation</a>:</p>
<blockquote>
<p>The reason for that is that Cluster Lifecycle have been
getting a number of issues related to flannel (either in kubeadm or
kops tickets) and we don't have good answers for the users as the
project is not actively maintained.
- Add note that issues for CNI should be logged in the respective issue
trackers and that Calico is the only CNI we e2e test kubeadm against.</p>
</blockquote>
<p>So the recommended approach would also be to move to the Calico CNI.</p>
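<p>As a sketch, installing Calico is typically a single manifest apply (check the Calico docs for the manifest matching your cluster version):</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>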
|
<p>I have an app where two pods need to have access to the same volume. I want to be able to delete the cluster and then, after applying again, be able to access the data that is on the volume.</p>
<p>So for example:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: media
spec:
#storageClassName: retaining
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-php
labels:
app: myapp-php
k8s-app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp-php
template:
metadata:
labels:
app: myapp-php
k8s-app: myapp
spec:
containers:
- image: nginx:1.17
imagePullPolicy: IfNotPresent
name: myapp-php
ports:
- containerPort: 9000
protocol: TCP
resources:
limits:
cpu: 750m
memory: 3Gi
requests:
cpu: 750m
memory: 3Gi
volumeMounts:
- name: media
mountPath: /var/www/html/media
volumes:
- name: media
persistentVolumeClaim:
claimName: media
nodeSelector:
mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-web
labels:
app: myapp-web
k8s-app: myapp
spec:
selector:
matchLabels:
app: myapp-web
template:
metadata:
labels:
app: myapp-web
k8s-app: myapp
spec:
containers:
- image: nginx:1.17
imagePullPolicy: IfNotPresent
name: myapp-web
ports:
- containerPort: 9000
protocol: TCP
resources:
limits:
cpu: 10m
memory: 128Mi
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: media
mountPath: /var/www/html/media
volumes:
- name: media
persistentVolumeClaim:
claimName: media
nodeSelector:
mytype: main
</code></pre>
<p>If I do these:</p>
<pre><code>k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply-f pv-issue.yaml
</code></pre>
<p>I want to connect the same volume.</p>
<p>What I have tried:</p>
<ol>
<li>If I keep the file as is, the volume will be deleted so the data will be lost.</li>
<li>I can remove the pvc declaration from the file. Then it works. My issue is that on the real app I am using kustomize and I don't see a way to exclude resources when doing <code>kustomize build app | kubectl delete -f -</code></li>
<li>Tried using retain in the pvc. It retains the volume on delete, but on the apply a new volume is created.</li>
<li>Statefulset; however, I don't see a way that two different statefulsets can share the same volume.</li>
</ol>
<p>Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?</p>
|
<p><strong>Is there a way to achieve this? Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?</strong></p>
<p>Cluster deletion will cause all your local volumes to be deleted. You can achieve what you want by storing the data outside the cluster. <code>Kubernetes</code> has a wide variety of storage providers to help you deploy data on a variety of storage types.</p>
<p>You might also consider keeping the data locally on nodes using <code>hostPath</code>, but that is not a good solution either, since it requires you to pin the pod to a specific node to avoid data loss. And if you delete your cluster in a way that all of your <code>VM</code>s are gone, then this data will be gone as well.</p>
<p>Having some network-attached storage would be the right way to go here. A very good example of this is <a href="https://cloud.google.com/compute/docs/disks#pdspecs" rel="nofollow noreferrer">Persistent Disks</a>, which are durable network storage devices that your instances can access. They're located independently from your virtual machines and they <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deleting-a-cluster" rel="nofollow noreferrer">are not deleted</a> when you delete the cluster.</p>
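<p>As an illustration of that idea (a hedged sketch, not specific to the Hetzner CSI driver; the NFS backend and names are placeholders): a statically defined PersistentVolume with <code>Retain</code> can be re-bound after recreating the cluster by pointing the PVC at it explicitly, so no new volume gets provisioned:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:                        # any network storage that outlives the cluster
    server: nfs.example.com
    path: /exports/media
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""        # disable dynamic provisioning
  volumeName: media-pv        # bind to the pre-existing PV
</code></pre>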
|
<p>I have installed kube-prometheus-stack as a <strong>dependency</strong> in my helm chart on a local Docker for Mac Kubernetes cluster v1.19.7.</p>
<p>The <strong>myrelease-name-prometheus-node-exporter</strong> service is failing with errors received from the node-exporter daemonset after the helm chart for <a href="https://hub.kubeapps.com/charts/prometheus-community/kube-prometheus-stack#!" rel="nofollow noreferrer">kube-prometheus-stack</a> is installed. This is installed in a Docker Desktop for Mac Kubernetes Cluster environment.</p>
<p><strong>release-name-prometheus-node-exporter daemonset error log</strong></p>
<pre class="lang-sh prettyprint-override"><code>MountVolume.SetUp failed for volume "flaskapi-prometheus-node-exporter-token-zft28" : failed to sync secret cache: timed out waiting for the condition
Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount
Back-off restarting failed container
</code></pre>
<p>The scrape targets for <code>kube-scheduler:http://192.168.65.4:10251/metrics</code>, <code>kube-proxy:http://192.168.65.4:10249/metrics</code>, <code>kube-etcd:http://192.168.65.4:2379/metrics</code>, <code>kube-controller-manager:http://192.168.65.4:10252/metrics</code> and <code>node-exporter:http://192.168.65.4:9100/metrics</code> are marked as unhealthy. All show as <code>connection refused</code>, except for <code>kube-etcd</code> which displays <code>connection reset by peer</code>.</p>
<p><strong>Chart.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
- name: kube-prometheus-stack
version: "14.4.0"
repository: "https://prometheus-community.github.io/helm-charts"
- name: ingress-nginx
version: "3.25.0"
repository: "https://kubernetes.github.io/ingress-nginx"
- name: redis
version: "12.9.0"
repository: "https://charts.bitnami.com/bitnami"
</code></pre>
<p><strong>Values.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local
redis_port: "6379"
</code></pre>
<p><strong>Environment</strong>
Mac OS Catalina 10.15.7
Docker Desktop For Mac 3.2.2(61853) with docker engine v20.10.5
Local Kubernetes 1.19.7 Cluster provided by Docker Desktop For Mac</p>
<ul>
<li><p>Prometheus Operator version:</p>
<p><a href="https://hub.kubeapps.com/charts/prometheus-community/kube-prometheus-stack#!" rel="nofollow noreferrer">kube-prometheus-stack</a> 14.4.0</p>
</li>
<li><p>Kubernetes version information:</p>
<p><code>kubectl version</code></p>
</li>
</ul>
<pre class="lang-sh prettyprint-override"><code> Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>kubectl get all</strong></p>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
pod/alertmanager-flaskapi-kube-prometheus-s-alertmanager-0 2/2 Running 0 16m
pod/flask-deployment-775fcf8ff-2hp9s 1/1 Running 0 16m
pod/flask-deployment-775fcf8ff-4qdjn 1/1 Running 0 16m
pod/flask-deployment-775fcf8ff-6bvmv 1/1 Running 0 16m
pod/flaskapi-grafana-6cb58f6656-77rqk 2/2 Running 0 16m
pod/flaskapi-ingress-nginx-controller-ccfc7b6df-qvl7d 1/1 Running 0 16m
pod/flaskapi-kube-prometheus-s-operator-69f4bcf865-tq4q2 1/1 Running 0 16m
pod/flaskapi-kube-state-metrics-67c7f5f854-hbr27 1/1 Running 0 16m
pod/flaskapi-prometheus-node-exporter-7hgnm 0/1 CrashLoopBackOff 8 16m
pod/flaskapi-redis-master-0 1/1 Running 0 16m
pod/flaskapi-redis-slave-0 1/1 Running 0 16m
pod/flaskapi-redis-slave-1 1/1 Running 0 15m
pod/prometheus-flaskapi-kube-prometheus-s-prometheus-0 2/2 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16m
service/flask-api-service ClusterIP 10.108.242.86 <none> 4444/TCP 16m
service/flaskapi-grafana ClusterIP 10.98.186.112 <none> 80/TCP 16m
service/flaskapi-ingress-nginx-controller LoadBalancer 10.102.217.51 localhost 80:30347/TCP,443:31422/TCP 16m
service/flaskapi-ingress-nginx-controller-admission ClusterIP 10.99.21.136 <none> 443/TCP 16m
service/flaskapi-kube-prometheus-s-alertmanager ClusterIP 10.107.215.73 <none> 9093/TCP 16m
service/flaskapi-kube-prometheus-s-operator ClusterIP 10.107.162.227 <none> 443/TCP 16m
service/flaskapi-kube-prometheus-s-prometheus ClusterIP 10.96.168.75 <none> 9090/TCP 16m
service/flaskapi-kube-state-metrics ClusterIP 10.100.118.21 <none> 8080/TCP 16m
service/flaskapi-prometheus-node-exporter ClusterIP 10.97.61.162 <none> 9100/TCP 16m
service/flaskapi-redis-headless ClusterIP None <none> 6379/TCP 16m
service/flaskapi-redis-master ClusterIP 10.96.192.160 <none> 6379/TCP 16m
service/flaskapi-redis-slave ClusterIP 10.107.119.108 <none> 6379/TCP 16m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d1h
service/prometheus-operated ClusterIP None <none> 9090/TCP 16m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/flaskapi-prometheus-node-exporter 1 1 0 1 0 <none> 16m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flask-deployment 3/3 3 3 16m
deployment.apps/flaskapi-grafana 1/1 1 1 16m
deployment.apps/flaskapi-ingress-nginx-controller 1/1 1 1 16m
deployment.apps/flaskapi-kube-prometheus-s-operator 1/1 1 1 16m
deployment.apps/flaskapi-kube-state-metrics 1/1 1 1 16m
NAME DESIRED CURRENT READY AGE
replicaset.apps/flask-deployment-775fcf8ff 3 3 3 16m
replicaset.apps/flaskapi-grafana-6cb58f6656 1 1 1 16m
replicaset.apps/flaskapi-ingress-nginx-controller-ccfc7b6df 1 1 1 16m
replicaset.apps/flaskapi-kube-prometheus-s-operator-69f4bcf865 1 1 1 16m
replicaset.apps/flaskapi-kube-state-metrics-67c7f5f854 1 1 1 16m
NAME READY AGE
statefulset.apps/alertmanager-flaskapi-kube-prometheus-s-alertmanager 1/1 16m
statefulset.apps/flaskapi-redis-master 1/1 16m
statefulset.apps/flaskapi-redis-slave 2/2 16m
statefulset.apps/prometheus-flaskapi-kube-prometheus-s-prometheus 1/1 16m
</code></pre>
<p><strong>kubectl get svc -n kube-system</strong></p>
<pre class="lang-sh prettyprint-override"><code>flaskapi-kube-prometheus-s-coredns ClusterIP None <none> 9153/TCP 29s
flaskapi-kube-prometheus-s-kube-controller-manager ClusterIP None <none> 10252/TCP 29s
flaskapi-kube-prometheus-s-kube-etcd ClusterIP None <none> 2379/TCP 29s
flaskapi-kube-prometheus-s-kube-proxy ClusterIP None <none> 10249/TCP 29s
flaskapi-kube-prometheus-s-kube-scheduler ClusterIP None <none> 10251/TCP 29s
flaskapi-kube-prometheus-s-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 2d18h
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d18h
</code></pre>
<p>Tried updating values.yaml to include this:</p>
<p><strong>Updated values.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>prometheus-node-exporter:
hostRootFsMount: false
</code></pre>
<p>and this:</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheus-node-exporter:
hostRootFsMount: false
</code></pre>
<p>However, the issue described above remains, except that the log for the node-exporter daemonset now gives:</p>
<pre class="lang-sh prettyprint-override"><code>failed to try resolving symlinks in path "/var/log/pods/default_flaskapi-prometheus-node-exporter-p5cc8_54c20fc6-c914-4cc6-b441-07b68cda140e/node-exporter/3.log": lstat /var/log/pods/default_flaskapi-prometheus-node-exporter-p5cc8_54c20fc6-c914-4cc6-b441-07b68cda140e/node-exporter/3.log: no such file or directory
</code></pre>
<p><strong>Updated Information From Comment Suggestions</strong></p>
<p><code>kubectl get pod flaskapi-prometheus-node-exporter-p5cc8</code>
No args are available since the node-exporter is crashing...</p>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
flaskapi-prometheus-node-exporter-p5cc8 0/1 CrashLoopBackOff 7 14m
</code></pre>
<p>The Args from the yaml output of <code>kubectl describe pod flaskapi-prometheus-node-exporter-p5cc8</code> gives:</p>
<pre class="lang-yaml prettyprint-override"><code> Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--path.rootfs=/host/root
--web.listen-address=$(HOST_IP):9100
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
</code></pre>
<p>Updating the values.yaml to nest the setting under the root <code>kube-prometheus-stack</code> key, as suggested in the comments of the answer, allows the prometheus-node-exporter daemonset to start successfully. However, the scrape targets mentioned above are still unavailable...</p>
<pre class="lang-yaml prettyprint-override"><code>kube-prometheus-stack:
prometheus-node-exporter:
hostRootFsMount: false
</code></pre>
<p>How do I get the node-exporter working and make the associated scrape targets healthy?</p>
<p>Is the node-exporter of the <a href="https://hub.kubeapps.com/charts/prometheus-community/kube-prometheus-stack#!" rel="nofollow noreferrer">kube-prometheus-stack</a> helm chart incompatible with Docker Desktop for Mac Kubernetes clusters?</p>
<p>I have raised this as an <a href="https://github.com/prometheus-operator/kube-prometheus/issues/1073" rel="nofollow noreferrer">issue</a> at kube-prometheus-stack with log output included for scrape targets for <code>docker-desktop</code> and <code>minikube</code> clusters.</p>
<p><strong>Conclusion</strong>
It looks as though the unavailable scrape targets are a problem/bug with kube-prometheus-stack. I searched and found similar issues on their GitHub page: <a href="https://github.com/prometheus-operator/kube-prometheus/issues/713" rel="nofollow noreferrer">713</a> and <a href="https://github.com/prometheus-operator/kube-prometheus/issues/718" rel="nofollow noreferrer">718</a>. I also tried on a minikube cluster with the hyperkit vm-driver; there the node-exporter works out of the box, but the scrape-target issue still occurs. I'm not sure what a safe solution is.</p>
<p>I may investigate an alternative helm chart dependency for prometheus and grafana...</p>
|
<p>This issue was solved recently. Here is more information: <a href="https://github.com/prometheus-community/helm-charts/issues/467" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/issues/467</a> and here: <a href="https://github.com/prometheus-community/helm-charts/pull/757" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/pull/757</a></p>
<p>Here is the solution (<a href="https://github.com/prometheus-community/helm-charts/issues/467#issuecomment-802642666" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/issues/467#issuecomment-802642666</a>):</p>
<blockquote>
<p>[you need to] opt out of the rootfs host mount (preventing the crash). In order to do that you need to specify the following value in the values.yaml file:</p>
<pre><code>prometheus-node-exporter:
hostRootFsMount: false
</code></pre>
</blockquote>
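<p>If you prefer not to edit values.yaml, the same value can also be passed on the command line. A sketch, assuming the release name and chart path from the question's setup (the nesting follows the dependency key shown in the question's Chart.yaml):</p>
<pre><code>helm upgrade --install flaskapi ./flaskapi \
  --set kube-prometheus-stack.prometheus-node-exporter.hostRootFsMount=false
</code></pre>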
|
<p>I deployed an Angular SPA application in Azure Kubernetes. I was trying to access the application through the Ingress Nginx controller. The Ingress is pointing to xxx.xxx.com and the app is deployed in the path "/". So when I accessed the application the application is loading fine. But when I try to navigate to any other page other than index.html by entering it directly in the browser (ex: eee.com/homepage), I get 404 Not Found.</p>
<p>Following is the dockerfile content</p>
<pre><code># base image
FROM nginx:1.16.0-alpine
# copy artifact build from the 'build environment'
COPY ./app/ /usr/share/nginx/html/
RUN rm -f /etc/nginx/conf.d/nginx.config
COPY ./nginx.config /etc/nginx/conf.d/nginx.config
# expose port 80
EXPOSE 80
# run nginx
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>Ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-frontend-ingress
namespace: app
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/add-base-url: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- xxx.com
secretName: tls-dev-app-ingress
rules:
- host: xxx.com
http:
paths:
- backend:
serviceName: poc-service
servicePort: 80
path: /
</code></pre>
<p>nginx.conf</p>
<pre><code>server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
</code></pre>
|
<p>You can add a configuration-snippet that will take care of rewriting any path to <code>/</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-frontend-ingress
namespace: app
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite /([^.]+)$ / break;
[...]
</code></pre>
<p>For example, if you go to <code>host.com/xyz</code>, the request will be rewritten to <code>host.com/</code> and served by your SPA's index page.</p>
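<p>A quick way to check the behaviour (host name taken from the example above) is to request a deep link directly and confirm that the index page comes back instead of a 404:</p>
<pre><code>curl -ik https://xxx.com/homepage
</code></pre>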
|
<p>I have multiple services and their probes are configured in the same way. I would like to extract common values like initialDelaySeconds, periodSeconds etc for livenessProbe into configMap. Is it possible?</p>
<p>When I create a configMap like this:</p>
<pre><code>data:
liveness-endpoint: /actuator/health/liveness
liveness-initialDelaySeconds: 60
liveness-periodSeconds: 5
</code></pre>
<p>and try to reference it in probe like this:</p>
<pre><code> livenessProbe:
httpGet:
path: liveness-endpoint
port: http-api
initialDelaySeconds: liveness-initialDelaySeconds
periodSeconds: liveness-periodSeconds
</code></pre>
<p>kubernetes complain, that configMap must have only Strings, so I'm changing it to</p>
<pre><code> liveness-initialDelaySeconds: "60"
</code></pre>
<p>and then it complains that probe must use Integer, not String.</p>
<p>As you see, I can reference port for probe, so probably there is a way to define those int values, but how?</p>
|
<p>Kubernetes doesn't allow you to reference configMap values inside arbitrary fields of a manifest. The probe settings have to be known before the configMap is even loaded; configMaps can only be consumed as volumes or as environment variables.</p>
<p>Ports can be strings because you can name a port in the Pod or Service definition and then reference that name in liveness/readiness probes, but <code>periodSeconds</code> accepts only a plain integer.</p>
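<p>The named-port case mentioned above looks roughly like this (names are illustrative):</p>
<pre><code>    ports:
    - name: http-api
      containerPort: 8080
    livenessProbe:
      httpGet:
        path: /actuator/health/liveness
        port: http-api         # a port *name* is allowed here
      initialDelaySeconds: 60  # must stay a literal integer
      periodSeconds: 5
</code></pre>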
|
<p>I have written a Go-based K8s client application to connect with the K8s cluster. To handle the realtime notification from the K8s cluster (add, delete, update) of Pod, Namespace, and Node, I have programmed an informer. The code snippet is below.</p>
<p>I want to bring specific attention to the "runtime.HandleCrash()" function, which (I guess) helps to redirect the runtime panic/errors to the panic file. </p>
<pre><code>// Read the ES config.
panicFile, _ := os.OpenFile("/var/log/panicfile", os.O_WRONLY|os.O_CREATE|os.O_SYNC, 0644)
syscall.Dup2(int(panicFile.Fd()), int(os.Stderr.Fd()))
</code></pre>
<p>See some errors below which are reported/collected in the panic file. </p>
<p>My question is: how can I program the informer so that it reports/notifies these specific errors to my application rather than writing them to a panic file? That way, my application would be able to handle this expected event more gracefully.</p>
<p>Is there any way I can register a callback function (similar to <code>Informer.AddEventHandler()</code>)?</p>
<pre><code>func (kcv *K8sWorker) armK8sPodListeners() error {
// Kubernetes serves an utility to handle API crashes
defer runtime.HandleCrash()
var sharedInformer = informers.NewSharedInformerFactory(kcv.kubeClient.K8sClient, 0)
// Add watcher for the Pod.
kcv.podInformer = sharedInformer.Core().V1().Pods().Informer()
kcv.podInformerChan = make(chan struct{})
// Pod informer state change handler
kcv.podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs {
// When a new pod gets created
AddFunc: func(obj interface{}) {
kcv.handleAddPod(obj)
},
// When a pod gets updated
UpdateFunc: func(oldObj interface{}, newObj interface{}) {
kcv.handleUpdatePod(oldObj, newObj)
},
// When a pod gets deleted
DeleteFunc: func(obj interface{}) {
kcv.handleDeletePod(obj)
},
})
kcv.nsInformer = sharedInformer.Core().V1().Namespaces().Informer()
kcv.nsInformerChan = make(chan struct{})
// Namespace informer state change handler
kcv.nsInformer.AddEventHandler(cache.ResourceEventHandlerFuncs {
// When a new namespace gets created
AddFunc: func(obj interface{}) {
kcv.handleAddNamespace(obj)
},
// When a namespace gets updated
//UpdateFunc: func(oldObj interface{}, newObj interface{}) {
// kcv.handleUpdateNamespace(oldObj, newObj)
//},
// When a namespace gets deleted
DeleteFunc: func(obj interface{}) {
kcv.handleDeleteNamespace(obj)
},
})
// Add watcher for the Node.
kcv.nodeInformer = sharedInformer.Core().V1().Nodes().Informer()
kcv.nodeInformerChan = make(chan struct{})
// Node informer state change handler
kcv.nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs {
// When a new node gets created
AddFunc: func(obj interface{}) {
kcv.handleAddNode(obj)
},
// When a node gets updated
UpdateFunc: func(oldObj interface{}, newObj interface{}) {
kcv.handleUpdateNode(oldObj, newObj)
},
// When a node gets deleted
DeleteFunc: func(obj interface{}) {
kcv.handleDeleteNode(obj)
},
})
// Start the shared informer.
kcv.sharedInformerChan = make(chan struct{})
sharedInformer.Start(kcv.sharedInformerChan)
log.Debug("Shared informer started")
return nil
}
</code></pre>
<p>In a specific use case, I shut down the K8s cluster, resulting in the informer writing error messages to the panic file as shown below.</p>
<p>The moment I boot up the K8s cluster nodes, it stops reporting these errors. </p>
<pre><code>==== output from "/var/log/panicfile" ======
E0611 16:13:03.558214 10 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Pod: Get https://10.30.8.75:6443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.30.8.75:6443: connect: no route to host
E0611 16:13:03.558224 10 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Namespace: Get https://10.30.8.75:6443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.30.8.75:6443: connect: no route to host
E0611 16:13:03.558246 10 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://10.30.8.75:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.30.8.75:6443: connect: no route to host
</code></pre>
|
<p>Your question is:</p>
<blockquote>
<p>Is there any way I can register a callback function (similar to Informer.AddEventHandler()).</p>
</blockquote>
<p>I believe what you are looking for is <code>SetWatchErrorHandler()</code></p>
<p>From the <a href="https://github.com/kubernetes/client-go/blob/v0.20.5/tools/cache/shared_informer.go#L169-L182" rel="nofollow noreferrer">source code</a>:</p>
<pre><code>type SharedInformer interface {
...
// The WatchErrorHandler is called whenever ListAndWatch drops the
// connection with an error. After calling this handler, the informer
// will backoff and retry.
//
// The default implementation looks at the error type and tries to log
// the error message at an appropriate level.
//
// There's only one handler, so if you call this multiple times, last one
// wins; calling after the informer has been started returns an error.
//
// The handler is intended for visibility, not to e.g. pause the consumers.
// The handler should return quickly - any expensive processing should be
// offloaded.
SetWatchErrorHandler(handler WatchErrorHandler) error
}
</code></pre>
<p>You call this function on the informer (note that from outside the <code>cache</code> package the parameter type is <code>*cache.Reflector</code>):</p>
<pre><code>kcv.podInformer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
    // your code goes here, e.g. notify your application about the failed list/watch
})
</code></pre>
<p>Here is the <a href="https://github.com/kubernetes/client-go/blob/f6ce18ae578c8cca64d14ab9687824d9e1305a67/tools/cache/reflector.go#L125-L140" rel="nofollow noreferrer">DefaultWatchErrorHandler</a>.</p>
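<p>Two details follow from the interface comment quoted above: the handler must be registered before the informer is started, and the call returns an error you can check. A minimal sketch, assuming the <code>log</code> package used in your snippet exposes <code>Errorf</code>:</p>
<pre><code>// register before sharedInformer.Start(...); calling it afterwards returns an error
if err := kcv.podInformer.SetWatchErrorHandler(func(r *cache.Reflector, listWatchErr error) {
    // forward the list/watch failure to the application instead of the panic file
    log.Errorf("pod informer list/watch failed: %v", listWatchErr)
}); err != nil {
    log.Errorf("could not register watch error handler: %v", err)
}
</code></pre>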
|
<p>I am trying to understand how I should deploy containers with Kubernetes. I am new at this topic so at this moment I am testing all these ideas in a virtual machine.</p>
<p>I'm using Git, Jenkins, Docker, Docker Hub and Kubernetes.</p>
<p>Also, I have a Master node and only one Slave node.</p>
<p>I created a YAML deploy file to start the pod and create a new container.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f deployment.yaml
</code></pre>
<p>Then I expose the deploy.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment my-app --type=LoadBalancer --name=my-app
</code></pre>
<p>YAML File for deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: app
spec:
selector:
matchLabels:
app: app
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: app
role: master
tier: backend
spec:
containers:
- name: appcontainer
image: repository:1.0
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 8085
imagePullSecrets:
- name: regcred
</code></pre>
<p>Now that I have everything working, suppose I push an update to the image and need to roll the new version, for example 1.0 to 1.1, out to the existing deployment. I need to know the proper way to do this.</p>
<p>I think I'm doing it wrong: I'm trying to overwrite the image of the existing container with the new image. I don't know whether the proper way is to deploy with a new YAML file, so that Kubernetes creates a new container with that image and consequently kills the old one, and even if that is the right thing to do, I don't know how to do it.</p>
|
<p>The easiest way to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">update a deployment</a> with new image will be to run:</p>
<pre><code>kubectl set image deployment/my-app appcontainer=repository:1.1 --record
</code></pre>
<p>This way it will first create new pod(s) with the newer version of the image and, once they are successfully deployed, it will terminate the old pod or pods, depending on the number of replicas you have specified in the <code>replicas</code> field.</p>
<p>You can check the status of the update by running</p>
<pre><code>kubectl rollout status deployment.v1.apps/my-app
</code></pre>
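<p>If you prefer the declarative route the question mentions (editing the YAML rather than using <code>kubectl set image</code>), updating the image tag in deployment.yaml and re-applying it triggers the same rolling update. A sketch:</p>
<pre><code># in deployment.yaml change: image: repository:1.0  ->  image: repository:1.1
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
</code></pre>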
|
<p>I am trying to install Gitlab from their official Helm chart <code>gitlab/gitlab</code>. One of the sub-charts is the <code>bitnami/postgresql</code> chart. I have access to the source code of both charts.</p>
<pre><code>$ helm install gitlab gitlab/gitlab \
--set global.hosts.domain=mando \
--set global.hosts.externalIP=192.168.1.2 \
--set [email protected]
--set global.edition=ce
</code></pre>
<p>When I try to install the Gitlab chart, several containers are created, and the PostgreSQL one fails to start due to an unbound PVC. I have tried creating several different PVs that might accommodate its requirement but none of them seem to work.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x14 over 8m) default-scheduler error while running "VolumeBinding" filter plugin for pod "gitlab-postgresql-0": pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>I can describe the PVC and get <em>some</em> information about it, but it's not clear from the output what is missing from my PVs or what I can do to make the claim successful.</p>
<pre><code>[mando infra]$ kubectl describe pvc data-gitlab-postgresql-0
Name: data-gitlab-postgresql-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=postgresql
release=gitlab
role=master
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: gitlab-postgresql-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 4m48s (x6324 over 26h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>So how can I find the PersistentVolumeClaim requirements when PV binding fails?</p>
|
<p>As described <a href="https://docs.gitlab.com/charts/installation/storage.html" rel="nofollow noreferrer">in the GitLab documentation</a>, you have to manage storage on your own: you have to create the StorageClass, PVs and PVCs yourself.</p>
<p>It is recommended to use dynamic storage provisioning.
Example <code>StorageClass</code> <a href="https://gitlab.com/gitlab-org/charts/gitlab/blob/master/examples/storage/gke_storage_class.yml" rel="nofollow noreferrer">object for GCP</a>:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: CUSTOM_STORAGE_CLASS_NAME
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
type: pd-standard
</code></pre>
<p>After creating the <code>StorageClass</code>, you have to upgrade your chart by modifying the <a href="https://gitlab.com/gitlab-org/charts/gitlab/blob/master/examples/storage/helm_options.yml" rel="nofollow noreferrer">following file</a> with the created <code>storageClass</code>:</p>
<pre><code>gitlab:
gitaly:
persistence:
storageClass: CUSTOM_STORAGE_CLASS_NAME
size: 50Gi
postgresql:
persistence:
storageClass: CUSTOM_STORAGE_CLASS_NAME
size: 8Gi
minio:
persistence:
storageClass: CUSTOM_STORAGE_CLASS_NAME
size: 10Gi
redis:
master:
persistence:
storageClass: CUSTOM_STORAGE_CLASS_NAME
size: 5Gi
</code></pre>
<p>And then upgrade your chart:</p>
<pre><code>helm upgrade --install gitlab gitlab/gitlab -f HELM_OPTIONS_YAML_FILE
</code></pre>
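<p>As for inspecting what the claim actually asks for when binding fails: the full requirements (requested size, access modes and storage class) are stored on the PVC object itself, so the following shows them even when <code>describe</code> prints empty columns:</p>
<pre><code>kubectl get pvc data-gitlab-postgresql-0 -o yaml
# look at spec.accessModes, spec.resources.requests.storage and spec.storageClassName
</code></pre>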
|
<p>In K8s audit logs we get the responseStatus with different "code" numbers.
Where can I find all the options that can be returned by K8s?</p>
<pre><code> "responseStatus": {
"metadata": {},
"status": "Failure",
"reason": "Invalid",
"code": 422
},
</code></pre>
|
<p>They are standard HTTP status codes. In this case it is <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422" rel="nofollow noreferrer">HTTP 422</a> (Unprocessable Entity).</p>
|
<p>I'm running a cluster on AWS EKS. The container (a StatefulSet pod) that is currently running has a Docker installation inside it.</p>
<p>I ran this image as Kubernetes StatefulSet in my cluster. Here is my yaml file,</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: jenkins
labels:
run: jenkins
spec:
serviceName: jenkins
replicas: 1
selector:
matchLabels:
run: jenkins
template:
metadata:
labels:
run: jenkins
spec:
securityContext:
fsGroup: 1000
containers:
- name: jenkins
image: 99*****.dkr.ecr.<region>.amazonaws.com/<my_jenkins_image>:0.0.3
imagePullPolicy: Always
ports:
- containerPort: 8080
name: jenkins-port
</code></pre>
<p>Inside this POD, I can not run any docker command which gives a ERROR:</p>
<blockquote>
<p>/etc/init.d/docker: 96: ulimit: error setting limit (Operation not permitted)</p>
</blockquote>
<p>In my research, I went through some articles which did not fix my issue.
Below are the solutions that I tried but that did not work in my case.</p>
<p><strong>First solution: (I ran inside the container)</strong>
<a href="https://stackoverflow.com/questions/24318543/how-to-set-ulimit-file-descriptor-on-docker-container-the-image-tag-is-phusion">article link</a></p>
<pre><code>$ sudo service docker stop
$ sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
$ sudo service docker start
</code></pre>
<p><strong>Second solution: (I ran inside the container)</strong></p>
<pre><code>ulimit -n 65536 in /etc/init.d/docker
</code></pre>
<p><strong>Third solution:</strong> <a href="https://unix.stackexchange.com/questions/450799/getting-operation-not-permitted-error-when-setting-ulimit-for-memlock-in-a-doc">article link</a>
This seems a far better answer, but I could not add it to my configuration file.
It says to run the pod as privileged, but I did not find a way to add that option in a <strong>Kubernetes StatefulSet</strong>.
So I tried adding a SecurityContext <strong>(securityContext: fsGroup: 1000)</strong> like this inside the configuration file:</p>
<pre><code>spec:
serviceName: jenkins
replicas: 1
selector:
matchLabels:
run: jenkins
template:
metadata:
labels:
run: jenkins
spec:
securityContext:
fsGroup: 1000
</code></pre>
<p>It still does not work.</p>
<p><strong>Note: the same image worked on Docker Swarm.</strong></p>
<p>Any help would be appreciated!</p>
|
<p>I had this issue with Elasticsearch and adding an <code>initContainer</code> worked. In this case it could be the solution:</p>
<pre><code>spec:
.
.
.
initContainers:
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
</code></pre>
<p>If it doesn't work, there is a <a href="https://github.com/kubernetes/kubernetes/issues/3595#issuecomment-487919341" rel="nofollow noreferrer">second</a> way to solve this problem, which involves creating a new Dockerfile or changing the existing one:</p>
<pre><code>FROM 99*****.dkr.ecr.<region>.amazonaws.com/<my_jenkins_image>:0.0.3
RUN ulimit -n 65536
USER 1000
</code></pre>
<p>and change securityContext to:</p>
<pre><code> securityContext:
runAsNonRoot: true
runAsUser: 1000
capabilities:
add: ["IPC_LOCK"]
</code></pre>
|
<p>I created a single master and 2 worker cluster using Kops on AWS. Now for my experiment, I need to kill the pod on a worker and check the kubelet logs to know:</p>
<p>When the pod was removed from service endpoint list?</p>
<p>When a new pod container got recreated?</p>
<p>When the new pod container was assigned the new IP Address?</p>
<p>While when I created an on-prem cluster using kubeadm, I could see all the information (like the one mentioned above) in the kubelet logs of the worker node (whose pod was killed).</p>
<p>I do not see detailed kubelet logs like this, especially logs related to the assignment of IP addresses, in the kops-created K8s cluster.</p>
<p>How to get the information mentioned above in the cluster created using kops?</p>
|
<p>On machines with systemd, both the kubelet and the container runtime write to <code>journald</code>. If systemd is not present, they write to <code>.log</code> files in the <code>/var/log</code> directory.</p>
<p>You can access the systemd logs with <code>journalctl</code> command:</p>
<pre class="lang-sh prettyprint-override"><code>journalctl -u kubelet
</code></pre>
<p>This information, of course, has to be collected after logging into the desired node.</p>
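<p>Once logged in, <code>journalctl</code> filters make it easier to correlate the log with the experiment, for example:</p>
<pre class="lang-sh prettyprint-override"><code># follow the kubelet log live while you kill the pod
journalctl -u kubelet -f
# or narrow it to the relevant time window and pod
journalctl -u kubelet --since "10 min ago" | grep -i <pod-name>
</code></pre>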
|
<p>I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. Unfortunately, one container starts while the other gives the error <strong>address already in use</strong>.</p>
<p>The Deployment file is given below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: multi-container-dep
labels:
app: aspnet-core-multi-container-app
spec:
replicas: 1
selector:
matchLabels:
component: multi-container
template:
metadata:
labels:
component: multi-container
spec:
containers:
- name: cmultiapp
image: multiapp
imagePullPolicy: Never
ports:
- containerPort: 80
- name: cmultiapi
image: multiapi
imagePullPolicy: Never
ports:
- containerPort: 81
</code></pre>
<p>The full logs of the container which is failing is:</p>
<pre><code>Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
</code></pre>
<p>Note that I already tried assigning another port to that container in the YAML file:</p>
<pre><code>ports:
- containerPort: 81
</code></pre>
<p>But that does not seem to work. How can I fix it?</p>
|
<p>To quote this answer: <a href="https://stackoverflow.com/a/62057548/12201084">https://stackoverflow.com/a/62057548/12201084</a></p>
<blockquote>
<p>containerPort as part of the pod definition is only informational purposes.</p>
</blockquote>
<p>This means that setting containerPort does not have any influence on what port application opens. You can even skip it and don't set it at all.</p>
<p>If you want your application to open a specific port you need to tell it to the applciation. It's usually done with flags, envs or configfiles. Setting a port in pod/container yaml definition won't change a thing.</p>
<p>You have to remember that <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">k8s network model</a> is different than docker and docker compose's model.</p>
<hr />
<p>So why does the containerPort field exist if it doesn't do a thing? - you may ask</p>
<p>Well, actually that is not completely true. Its main purpose is indeed informational/documentation, but it may also be used with services: you can name a port in the pod definition and then use this name to reference the port in the service definition yaml (this only applies to the <code>targetPort</code> field).</p>
|
<p>I am running a local deployment and trying to redirect HTTPS traffic to my backend pods.
I don't want SSL termination at the Ingress level, which is why I didn't use any tls secrets.</p>
<p>I am creating a self signed cert within the container, and Tomcat starts up by picking that and exposing on 8443.</p>
<p>Here is my Ingress Spec</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-name
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
#nginx.ingress.kubernetes.io/service-upstream: "false"
kubernetes.io/ingress.class: {{ .Values.global.ingressClass }}
nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
rules:
- http:
paths:
- path: /myserver
backend:
serviceName: myserver
servicePort: 8443
</code></pre>
<p>I used the above annotation in different combinations but I still can't reach my pod.</p>
<p>My service routes</p>
<pre><code># service information for myserver
service:
type: ClusterIP
port: 8443
targetPort: 8443
protocol: TCP
</code></pre>
<p>I did see a few answers regarding this suggesting annotations, but that didn't seem to work for me. Thanks in advance!</p>
<p>edit: The only thing that remotely worked was when I overwrote the ingress values as</p>
<pre><code>nginx-ingress:
controller:
publishService:
enabled: true
service:
type: NodePort
nodePorts:
https: "40000"
</code></pre>
<p>This does enable https, but it picks up kubernetes' fake certs, rather than my cert from the container</p>
<p>Edit 2:
For some reason, the ssl-passthrough is not working. I enforced it as</p>
<pre><code>nginx-ingress:
controller:
extraArgs:
enable-ssl-passthrough: ""
</code></pre>
<p>when I describe the deployment, I can see it in the args but when I check with <code>kubectl ingress-nginx backends</code> as described in <a href="https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends</a>, it says "sslPassThrough:false"</p>
|
<p>There are several thingns you need to setup if you want to use ssl-passthrough.</p>
<ol>
<li><p>First is to set proper host name:</p>
<pre><code>spec:
rules:
- host: example.com <- HERE
http:
...
</code></pre>
<p>It's mentioned <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough" rel="nofollow noreferrer">in documentation</a>:</p>
<blockquote>
<p>SSL Passthrough leverages SNI [Server Name Indication] and reads the virtual domain from the
TLS negotiation, which requires compatible clients. After a connection
has been accepted by the TLS listener, it is handled by the controller
itself and piped back and forth between the backend and the client.</p>
<p>If there is no hostname matching the requested host name, the request
is handed over to NGINX on the configured passthrough proxy port
(default: 442), which proxies the request to the default backend.</p>
</blockquote>
</li>
</ol>
<hr />
<ol start="2">
<li><p>Second thing is setting <code>--enable-ssl-passthrough</code> flag as already mentioned in separate answer by @thomas.</p>
<p>Just edit the nginx ingress deployment and add this line to args list:</p>
<pre><code> - --enable-ssl-passthrough
</code></pre>
</li>
</ol>
<hr />
<ol start="3">
<li><p>Third thing that has to be done is use to use the following annotations in your ingress object definition:</p>
<pre><code> nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>
</li>
</ol>
<p>IMPORTANT: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">In docs</a> you can read:</p>
<blockquote>
<p>Attention</p>
<p>Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.</p>
</blockquote>
<p>This means that all other annotations are useless from now on. This applies to annotations like <code>force-ssl-redirect</code>, <code>affinity</code> and also to the paths you defined (e.g. <code>path: /myserver</code>). Since traffic is end-to-end encrypted, all the ingress sees is opaque TLS bytes, and all it can do is pass this data to the backend based on the DNS name (SNI).</p>
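<p>To verify that passthrough is actually in effect (host name taken from the example above), you can check which certificate the endpoint presents; with passthrough working it should be your Tomcat self-signed certificate rather than the ingress controller's "Kubernetes Ingress Controller Fake Certificate":</p>
<pre><code>openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
</code></pre>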
|
<p>While trying to deploy an application, I got the error below:</p>
<pre><code>Error: UPGRADE FAILED: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
</code></pre>
<p>The output of <strong><code>kubectl api-resources</code></strong> lists some resources, along with the same error at the end.</p>
<p><strong>Environment</strong>: Azure Cloud, AKS Service</p>
|
<p>Solution:</p>
<p>The steps I followed are:</p>
<ol>
<li><p><code>kubectl get apiservices</code>: if the metrics-server APIService is down with the error <strong>CrashLoopBackOff</strong>, follow step 2; otherwise just try to restart the metrics-server service using <strong>kubectl delete apiservice/"service_name"</strong>. For me it was <strong>v1beta1.metrics.k8s.io</strong>.</p></li>
<li><p><code>kubectl get pods -n kube-system</code>: here I found out that pods like metrics-server and kubernetes-dashboard were down because the main CoreDNS pod was down.</p></li>
</ol>
<hr>
<p>For me it was: </p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
</code></pre>
<ol start="3">
<li>Use <code>kubectl describe pod/"pod_name"</code> to check the error in the CoreDNS pod. If it is down because of <strong>/etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy</strong>, then we need to use <strong>forward</strong> instead of <strong>proxy</strong> in the yaml file where the CoreDNS config lives (see the snippet after this list), because CoreDNS version 1.5.x used by the image does not support the <strong>proxy</strong> keyword anymore.</li>
</ol>
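<p>For reference, the edit in the CoreDNS ConfigMap boils down to replacing the old <code>proxy</code> line with <code>forward</code> (a typical default Corefile is assumed here):</p>
<pre><code># before (CoreDNS syntax older than 1.5)
proxy . /etc/resolv.conf
# after
forward . /etc/resolv.conf
</code></pre>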
|
<p>Whitelisting ips is possible with this annotation: <code>ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"</code></p>
<p>Is it possible to do <strong>the same with blacklisting?</strong> It would be nice to block some suspicious requests.</p>
|
<p>Unfortunately, blocking IP addresses is not supported natively by Traefik, and pull requests adding it were <a href="https://github.com/traefik/traefik/pull/4454" rel="nofollow noreferrer">declined</a> with a comment:</p>
<blockquote>
<p>We want to keep the IP filtering section as simple as possible and we
think that your use case could be addressed differently.</p>
<p>We think that a blacklisting task can be better achieved using a
firewall.</p>
<p>So, for now, and I insist on the "for now", we will decline your pull
request.</p>
</blockquote>
<p>For the same reason <a href="https://github.com/traefik/traefik/pull/7926" rel="nofollow noreferrer">#7926</a> was declined.</p>
<p>You may be interested though in the those two plugins:</p>
<ul>
<li><a href="https://pilot.traefik.io/plugins/276812076107694611/deny-ip-plugin" rel="nofollow noreferrer">https://pilot.traefik.io/plugins/276812076107694611/deny-ip-plugin</a></li>
<li><a href="https://pilot.traefik.io/plugins/280093067746214409/fail2-ban" rel="nofollow noreferrer">https://pilot.traefik.io/plugins/280093067746214409/fail2-ban</a></li>
</ul>
|
<p>In our docker-compose.yaml we have:</p>
<pre><code>version: "3.5"
services:
consul-server:
image: consul:latest
command: "agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=./usr/src/app/consul.d/"
volumes:
- ./consul.d/:/usr/src/app/consul.d
</code></pre>
<p>In the <code>consul.d</code> folder we have statically defined our services. It works fine with docker-compose.</p>
<p>But when trying to run it on Kubernetes with this configmap:</p>
<pre><code>ahmad@ahmad-pc:~$ kubectl describe configmap consul-config -n staging
Name: consul-config
Namespace: staging
Labels: <none>
Annotations: <none>
Data
====
trip.json:
----
... omitted for clarity ...
</code></pre>
<p>and consul.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: consul-server
name: consul-server
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: consul-server
template:
metadata:
labels:
io.kompose.service: consul-server
spec:
containers:
- image: quay.io/bitnami/consul:latest
name: consul-server
ports:
- containerPort: 8500
#env:
#- name: CONSUL_CONF_DIR # Consul seems not respecting this env variable
# value: /consul/conf/
volumeMounts:
- name: config-volume
mountPath: /consul/conf/
command: ["agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/"]
volumes:
- name: config-volume
configMap:
name: consul-config
</code></pre>
<p>I got the following error:</p>
<pre><code>ahmad@ahmad-pc:~$ kubectl describe pod consul-server-7489787fc7-8qzhh -n staging
...
Error: failed to start container "consul-server": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/\":
stat agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -config-dir=/consul/conf/:
no such file or directory": unknown
</code></pre>
<p>But when I run the container without <code>command: agent...</code> and bash into it, I can list files mounted in the right place.
Why does consul give me a "not found" error even though that folder exists?</p>
|
<p>To <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">execute a command</a> in the pod you have to define the command in the <code>command</code> field and its arguments in the <code>args</code> field. The <code>command</code> field is the same as <code>ENTRYPOINT</code> in Docker and the <code>args</code> field is the same as <code>CMD</code>.<br>
In this case you define <code>/bin/sh</code> as the ENTRYPOINT and <code>"-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/"</code> as the arguments, so it can execute <code>consul agent ...</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: consul-server
name: consul-server
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: consul-server
template:
metadata:
labels:
io.kompose.service: consul-server
spec:
containers:
- image: quay.io/bitnami/consul:latest
name: consul-server
ports:
- containerPort: 8500
env:
- name: CONSUL_CONF_DIR # Consul seems not respecting this env variable
value: /consul/conf/
volumeMounts:
- name: config-volume
mountPath: /consul/conf/
command: ["bin/sh"]
args: ["-c", "consul agent -server -bootstrap -ui -enable-script-checks=true -client=0.0.0.0 -data-dir=/bitnami/consul/data/ -config-dir=/consul/conf/"]
volumes:
- name: config-volume
configMap:
name: consul-config
</code></pre>
|
<p>I have created mongo-statefulset in kubernetes and all the 3 pods are running, </p>
<pre><code>NAME READY STATUS RESTARTS AGE
mongo-0 1/1 Running 0 40m
mongo-1 1/1 Running 0 40m
mongo-2 1/1 Running 0 40m
mysql-5456cbb767-t8g2g 1/1 Running 0 3h45m
nfs-client-provisioner-5d77dc5bd-mcz8p 1/1 Running 0 42m
</code></pre>
<p>However, after initiating the replica set and reconfiguring the first member with the correct DNS name, I could not add the remaining members.</p>
<pre><code>>rs.initiate()
>var cfg = rs.conf()
>cfg.members[0].host="mongo‑0.mongo:27017"
>rs.reconfig(cfg)
</code></pre>
<p>Add the second member give me this error</p>
<pre><code>rs0:PRIMARY> rs.add("mongo‑1.mongo:27017")
{
"operationTime" : Timestamp(1590453966, 1),
"ok" : 0,
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo-0:27017; the following nodes did not respond affirmatively: mongo‑1.mongo:27017 failed with Error connecting to mongo‑1.mongo:27017 :: caused by :: Could not find address for mongo‑1.mongo:27017: SocketException: Host not found (authoritative)",
"code" : 74,
"codeName" : "NodeNotFound",
"$clusterTime" : {
"clusterTime" : Timestamp(1590453966, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:PRIMARY>
</code></pre>
<p>Also using hostname -f on all the nodes returns the correct hostname of the nodes.
eg. </p>
<pre><code>ubuntu@k8s-master:~$ hostname -f
k8s-master
</code></pre>
<p>Any idea how to solve this problem?</p>
<p>Services deployed</p>
<pre><code>ubuntu@k8s-master:~$ kubectl get svc --all-namespaces=true -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h <none>
default mongo ClusterIP None <none> 27017/TCP 7h59m app=mongo
default mysql ClusterIP None <none> 3306/TCP 11h app=mysql
default nrf-instance-service NodePort 10.96.18.198 <none> 9090:30005/TCP 10h app=nrf-instance
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 20h k8s-app=kube-dns
kube-system metrics-server ClusterIP 10.111.247.62 <none> 443/TCP 11h k8s-app=metrics-server
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.97.205.147 <none> 8000/TCP 20h k8s-app=dashboard-metrics-scraper
kubernetes-dashboard kubernetes-dashboard NodePort 10.100.212.222 <none> 443:32648/TCP 20h k8s-app=kubernetes-dashboard
</code></pre>
<p>Also is the dns checked on each mongo pod</p>
<pre><code>ubuntu@k8s-master:~$ kubectl exec -ti mongo-0 -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
ubuntu@k8s-master:~$ kubectl exec -ti mongo-1 -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
ubuntu@k8s-master:~$ kubectl exec -ti mongo-2 -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<pre><code>ubuntu@k8s-master:~$ kubectl exec -ti mongo-0 -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
ubuntu@k8s-master:~$ kubectl exec -ti mongo-1 -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
ubuntu@k8s-master:~$ kubectl exec -ti mongo-2 -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
</code></pre>
<p>statefulset.yaml content</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
selector:
matchLabels:
app: mongo
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--bind_ip_all"
- "--replSet"
- rs0
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-volume
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongo-volume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
</code></pre>
<p>kubectl get endpoints mongo -oyaml </p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2020-05-26T08:27:18Z"
creationTimestamp: "2020-05-26T00:42:16Z"
labels:
app: mongo
service.kubernetes.io/headless: ""
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:endpoints.kubernetes.io/last-change-trigger-time: {}
f:labels:
.: {}
f:app: {}
f:service.kubernetes.io/headless: {}
f:subsets: {}
manager: kube-controller-manager
operation: Update
time: "2020-05-26T08:27:18Z"
name: mongo
namespace: default
resourceVersion: "114014"
selfLink: /api/v1/namespaces/default/endpoints/mongo
uid: 0a7faaa4-9de7-4a14-b101-04fc5884d23c
subsets:
- addresses:
- hostname: mongo-1
ip: 10.244.1.16
nodeName: k8s-worker-node2
targetRef:
kind: Pod
name: mongo-1
namespace: default
resourceVersion: "114012"
uid: 249f3cf6-aa2b-4d4e-a736-671dd1942fc7
- hostname: mongo-2
ip: 10.244.2.28
nodeName: k8s-worker-node1
targetRef:
kind: Pod
name: mongo-2
namespace: default
resourceVersion: "113775"
uid: 796a80d9-889e-4fdd-88f2-0baf3ed080c6
- hostname: mongo-0
ip: 10.244.2.31
nodeName: k8s-worker-node1
targetRef:
kind: Pod
name: mongo-0
namespace: default
resourceVersion: "113825"
uid: 8948c536-43fd-46c4-9132-d5afe145ede7
ports:
- name: mongo
port: 27017
protocol: TCP
</code></pre>
<p>Adding the member with the fully qualified service DNS name fails in the same way:</p>
<pre><code>rs0:PRIMARY> rs.add("mongo‑1.mongo.default.svc.cluster.local:27017")
{
"operationTime" : Timestamp(1590526166, 1),
"ok" : 0,
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo-0:27017; the following nodes did not respond affirmatively: mongo‑1.mongo.default.svc.cluster.local:27017 failed with Error connecting to mongo‑1.mongo.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo‑1.mongo.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)",
"code" : 74,
"codeName" : "NodeNotFound",
"$clusterTime" : {
"clusterTime" : Timestamp(1590526166, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs0:PRIMARY>
</code></pre>
|
<p>I replicated your setup and have found the issue.</p>
<p>It turned out that you were using the wrong hyphen character.</p>
<pre><code>‑ -
</code></pre>
<p>The first one (used by you) has the hex representation 0x2011 (a non-breaking hyphen). The second is a regular <code>-</code> with hex 0x2d. They look almost the same, but they aren't the same from a DNS point of view.</p>
<p>To resolve your issue you need to use a regular hyphen.
Here:</p>
<pre><code>rs.add("mongo-1.mongo:27017")
</code></pre>
<p>Try copy-pasting the command above and see if it works.</p>
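<p>Since the pods in the question already contain <code>nslookup</code>, you can also confirm the headless-service name resolves with the plain hyphen before re-running the command:</p>
<pre><code>kubectl exec -ti mongo-0 -- nslookup mongo-1.mongo
</code></pre>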
|
<p>Does <code>kubectl drain</code> first make sure that pods with <code>replicas=1</code> are healthy on some other node?<br>
Assuming the pod is controlled by a deployment, and the pods can indeed be moved to other nodes.
Currently, as I see it, drain only evicts (deletes) pods from the node, without scheduling replacements first.</p>
|
<p>In addition to <a href="https://stackoverflow.com/users/8803619/suresh-vishnoi">Suresh Vishnoi</a> answer:</p>
<p>If <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="noreferrer">PodDisruptionBudget</a> is not specified and you have a deployment with one replica, the pod will be terminated and <strong>then</strong> new pod will be scheduled on a new node.</p>
<p>To make sure your application will be available during node draining process you have to specify PodDisruptionBudget and create more replicas. If you have 1 pod with <code>minAvailable: 30%</code> it will refuse to drain with following error: </p>
<pre><code>error when evicting pod "pod01" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
</code></pre>
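<p>For illustration, a minimal PodDisruptionBudget that produces the refusal above might look like this (the label selector is assumed to match your deployment's pods):</p>
<pre><code>apiVersion: policy/v1   # policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 30%
  selector:
    matchLabels:
      app: my-app
</code></pre>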
<p>Briefly, this is how the draining process works:</p>
<p>As explained in the documentation, the <code>kubectl drain</code> command "safely evicts all of your pods from a node before you perform maintenance on the node", allows the pod's containers to gracefully terminate, and will respect the <code>PodDisruptionBudgets</code> you have specified.</p>
<p>Drain does two things:</p>
<ol>
<li><p>cordons the node: the node is marked as unschedulable, so new pods cannot be scheduled on it. This makes sense: if we know the node will be under maintenance, there is no point scheduling a pod there only to reschedule it on another node because of that maintenance. From the Kubernetes perspective it adds a taint to the node: <code>node.kubernetes.io/unschedulable:NoSchedule</code></p></li>
<li><p>evicts/deletes the pods: after the node is marked as unschedulable, it tries to evict the pods that are running on the node. It uses the <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api" rel="noreferrer">Eviction API</a>, which takes <code>PodDisruptionBudgets</code> into account (if that API is not supported it simply deletes the pods). It calls the DELETE method on the K8s API but honours <code>GracePeriodSeconds</code>, so it lets a pod finish its processes.</p></li>
</ol>
|
<p>From the pods below, how can we get a list of pods which have been restarted more than 2 times? How can we get it in a single-line query?</p>
<pre><code>xx-5f6df977d7-4gtxj 3/3 Running 0 6d21h
xx-5f6df977d7-4rvtg 3/3 Running 0 6d21h
pkz-ms-profile-df9fdc4f-2nqvw 1/1 Running 0 76d
push-green-95455c5c-fmkr7 3/3 Running 3 15d
spice-blue-77b7869847-6md6w 2/2 Running 0 19d
bang-blue-55845b9c68-ht5s5 1/3 Running 2 8m50s
mum-blue-6f544cd567-m6lws 2/2 Running 3 76d
</code></pre>
|
<p>Use:</p>
<pre><code>kubectl get pods | awk '{if($4>2)print$1}'
</code></pre>
<p>Use <code>-n "NameSpace"</code> if required to fetch pods on the basis of a namespace.
For example:</p>
<p><code>kubectl get pods -n kube-system | awk '{if($4>2)print$1}'</code></p>
<p>where $1 and $4 are, respectively, the column containing the pod name and the column the filter is applied to.</p>
<p><strong>Note</strong>: <code>awk</code> will work in Linux.</p>
|
<p>I have a cluster on EKS with the cluster autoscaler enabled. Let's assume there are 3 nodes: node-1, node-2, node-3, and each node can hold a maximum of 10 pods. When the 31st pod comes into the picture, the CA will launch a new node and schedule the pod on it. Now say the 4 pods from node-2 are no longer required and go down. At this point, according to the requirement, if a new pod is launched the scheduler places it on the 4th node (launched by the CA) and not on node-2. I also want that, scaling down further, if pods are removed from the nodes, new pods should land on the already existing nodes and not on a new node brought up by the CA. I tried updating the EKS default scheduler config file using a scheduler plugin but am unable to do so.</p>
<p>I think we can create a second scheduler but I am not aware of the process properly. Any workaround or suggestions will help a lot.</p>
<p>This is the command:
<strong>"kube-scheduler --config custom.config"</strong>
and this is the error
<strong>"attempting to acquire leader lease kube-system/kube-scheduler..."</strong></p>
<p>This is my custom.config file</p>
<pre><code>apiVersion: kubescheduler.config.k8s.io/v1beta1
clientConnection:
kubeconfig: /etc/kubernetes/scheduler.conf
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 100
profiles:
- schedulerName: kube-scheduler-new
plugins:
score:
disabled:
- name: '*'
enabled:
- name: NodeResourcesMostAllocated
</code></pre>
|
<p><strong>How to manage pods scheduling?</strong></p>
<p>A custom scheduler is, of course, one way to go if you have a specific use case, but if you just want pods to be scheduled onto particular nodes, Kubernetes already provides options to do so.</p>
<p>Scheduling algorithm selection can be broken into two parts:</p>
<ul>
<li>Filtering the list of all nodes to obtain a list of acceptable nodes the pod can be scheduled to.</li>
<li>Prioritizing the acceptable nodes and choosing the best one. If multiple nodes have the highest score, round-robin is used to ensure pods are deployed across all of them evenly</li>
</ul>
<p>Kubernetes works great if you let the scheduler decide which node a pod should go to, and it comes with tools that give the scheduler hints:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a> can be used to repel certain pods from a node. This is very useful if you want to partition your cluster and allow only certain people to schedule onto specific nodes. It can also be used when you have special hardware nodes and only some pods require them (like in your question, where you want a pod to be scheduled on node-2). Taints come with 3 effects:</li>
</ul>
<ol>
<li><code>NoSchedule</code> which means there will be no scheduling</li>
<li><code>PreferNoSchedule</code> which means scheduler will try to avoid scheduling</li>
<li><code>NoExecute</code> also affects scheduling, and additionally affects pods already running on the node. If you add this taint to a node, pods that are running on it and don't tolerate the taint will be evicted.</li>
</ol>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">Node affinity</a>, on the other hand, can be used to attract certain pods to specific nodes. Similar to taints, node affinity gives you options for fine-tuning your scheduling preferences:</p>
<ol>
<li><code>requiredDuringSchedulingIgnoredDuringExecution</code>, which is a hard requirement and tells the scheduler that the rules must be met for the pod to be scheduled onto the node.</li>
<li><code>preferredDuringSchedulingIgnoredDuringExecution</code>, which is a soft requirement and tells the scheduler to try to enforce it, without guaranteeing it.</li>
</ol>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">PodAffinity</a> can be used if you, for example, want your front-end pods to run on the same node as your database pod. It can likewise be expressed as a hard or a soft requirement.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">podAntiAffinity</a> can be used if you do not want certain pods to run on the same node as each other.</p>
</li>
</ul>
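<p>A minimal sketch combining a taint, a matching toleration and node affinity (the node name, label and image are assumptions, not taken from your setup):</p>
<pre><code># taint and label the node you want to reserve (hypothetical node name "node2")
kubectl taint nodes node2 dedicated=custom:NoSchedule
kubectl label nodes node2 dedicated=custom
</code></pre>
<p>A pod that tolerates the taint and is attracted to that node could then look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "custom"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - custom
  containers:
  - name: app
    image: nginx   # placeholder image
</code></pre>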
|
<p>I'm trying to create an Nginx server using Kubernetes and the official docker image. Unfortunately, when I try to mount a custom config file, nothing happens. My container works just fine, but with its default configuration file. Moreover, the other mount on /etc (the letsencrypt folder) doesn't work either. Nevertheless, the certbot mount works just fine...
(If I check inside the container, /etc/nginx/nginx.conf is not the file I'm trying to mount, and /etc/letsencrypt doesn't exist.)
I link my deployment file just below...
If someone has an idea and wants to help, it would be delightful!</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.19.6
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
volumeMounts:
- name: letsencrypt
mountPath: /etc/letsencrypt
readOnly: true
volumeMounts:
- name: certbot
mountPath: /var/www/certbot
volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/nginx.conf
- name: letsencrypt
nfs:
server: 192.168.2.9
path: /volume1/nginx/letsencrypt
- name: certbot
nfs:
server: 192.168.2.9
path: /volume1/nginx/certbot
</code></pre>
<p><strong>Edit :</strong>
To solve this problem I had to put all my volume mounts inside a single volumeMounts section and remove the reference to the file in the volumes section, like this:</p>
<pre><code> volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
- name: letsencrypt
mountPath: /etc/letsencrypt
readOnly: true
- name: certbot
mountPath: /var/www/certbot
volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/
- name: letsencrypt
nfs:
server: 192.168.2.9
path: /volume1/nginx/letsencrypt
- name: certbot
nfs:
server: 192.168.2.9
path: /volume1/nginx/certbot
</code></pre>
|
<p>I'm writing this answer as a community wiki for better visibility of the OP's solution, which was placed as an edit to the question instead of an answer.</p>
<p>The solution to the problem was a combination of the OP's fix to put all the volume mounts inside a single <code>volumeMounts</code> section and my answer to remove the reference to a file in the volume section:</p>
<pre><code> volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
- name: letsencrypt
mountPath: /etc/letsencrypt
readOnly: true
- name: certbot
mountPath: /var/www/certbot
volumes:
- name: nginx-config
nfs:
server: 192.168.2.9
path: /volume1/nginx/
- name: letsencrypt
nfs:
server: 192.168.2.9
path: /volume1/nginx/letsencrypt
- name: certbot
nfs:
server: 192.168.2.9
path: /volume1/nginx/certbot
</code></pre>
|
<p>How can I reach a specific Pod in a DaemonSet without hostNetwork? The reason is that my Pods in the DaemonSet are stateful, and I prefer to have at most one worker on each Node (that's why I used a DaemonSet).</p>
<p>My original implementation was to use hostNetwork so the worker Pods can be found by Node IP by outside clients. But in many production environments hostNetwork is disabled, so we have to create one NodePort service for each Pod of the DaemonSet. This is not flexible and obviously cannot work in the long run.</p>
<p><strong>Some more background on how my application is stateful</strong></p>
<p>The application works in an HDFS-taste, where Workers(datanodes) register with Masters(namenodes) with their hostname. The masters and outside clients need to go to a specific worker for what it's hosting.</p>
|
<p><code>hostNetwork</code> is an optional setting and is not necessary. You can connect to your pods without specifying it.
To communicate with pods in a DaemonSet, you can specify <code>hostPort</code> in the DaemonSet's pod spec to expose it on the node. You can then reach a specific pod directly by using the IP of the node it is running on (see the sketch below).</p>
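<p>A minimal sketch of a DaemonSet exposing its pods via <code>hostPort</code> (the image name and port are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: worker
spec:
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: my-worker:latest   # hypothetical image
        ports:
        - containerPort: 9000
          hostPort: 9000          # pod becomes reachable on &lt;node-ip&gt;:9000
</code></pre>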
<p>Another approach to connect to stateful application is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>. It allows you to specify network identifiers. However it requires <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> for network identity of the Pods and you are responsible for creating such services.</p>
|
<p>For the <code>main.go</code> code at the end of this question, I ran the following commands to run it on a <code>kubernetes</code> install (on a <code>PC</code>):</p>
<ol>
<li><p><code>docker image build -t myID/go-demo:1.2 .</code></p></li>
<li><p><code>docker image push myID/go-demo:1.2 # Pushed up to DockerHub</code></p></li>
<li><p><code>kubectl run demo2 --image=myID/go-demo:1.2 --port=19999 --labels app=demo2</code></p></li>
<li><p><code>kubectl port-forward deploy/demo2 19999:8888</code></p>
<p><code>Forwarding from 127.0.0.1:19999 -> 8888
Forwarding from [::1]:19999 -> 8888</code></p></li>
</ol>
<p>Then, in another <code>tmux(1)</code> terminal, I confirmed the service was <code>LISTEN</code>ing:</p>
<pre><code>user@vps10$ sudo netstat -ntlp | egrep "Local|19999"
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:19999 0.0.0.0:* LISTEN 736786/kubectl
tcp6 0 0 ::1:19999 :::* LISTEN 736786/kubectl
</code></pre>
<p>But here's my problem, noticing success with <code>localhost</code> and failure with <code>hostname -- vps10</code>:</p>
<pre><code>user@vps10$ curl localhost:19999 # Works.
Hello, 世界
user@vps10$ curl vps10:19999 # Fails.
curl: (7) Failed to connect to vps10 port 19999: Connection refused
</code></pre>
<p>From the above, the issue appears to be that the service is listening only via the <code>loopback</code> interface, and if that's indeed the issue, what do I do to get it to listen on <code>all interfaces</code> (or on a <code>specific interface</code> that I specify). I'm not a <code>kubernetes</code> or <code>go</code> expert (this example is from a book actually =:)), so please supply commands if necessary. Thank you in advance!</p>
<p><strong>HTTP server code</strong>:</p>
<pre><code>package main
import (
"fmt"
"log"
"net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Hello, 世界")
}
func main() {
http.HandleFunc("/", handler)
log.Fatal(http.ListenAndServe("0.0.0.0:8888", nil))
}
</code></pre>
|
<p>According to kubectl help:</p>
<pre><code> # Listen on port 8888 on all addresses, forwarding to 5000 in the pod
kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
</code></pre>
<p>The default is forwarding for localhost only.</p>
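<p>Applied to your setup, the forward would look something like this (a sketch using your deployment name and ports):</p>
<pre><code># listen on all interfaces of the host running kubectl, not just loopback
kubectl port-forward --address 0.0.0.0 deploy/demo2 19999:8888

# then, from another machine
curl vps10:19999
</code></pre>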
|
<p>I have 3 nodes in a k8s cluster, all of which are masters, i.e. I have removed the <code>node-role.kubernetes.io/master</code> taint.</p>
<p>I physically removed the network cable on <code>foo2</code>, so I have </p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
foo1 Ready master 3d22h v1.13.5
foo2 NotReady master 3d22h v1.13.5
foo3 Ready master 3d22h v1.13.5
</code></pre>
<p>After several hours some of the pods are still in <code>STATUS = Terminating</code> though I think they should be in <code>Terminated</code> ?</p>
<p>I read at <a href="https://www.bluematador.com/docs/troubleshooting/kubernetes-pod" rel="nofollow noreferrer">https://www.bluematador.com/docs/troubleshooting/kubernetes-pod</a></p>
<blockquote>
<p>In rare cases, it is possible for a pod to get stuck in the terminating state. This is detected by finding any pods where every container has been terminated, but the pod is still running. Usually, this is caused when a node in the cluster gets taken out of service abruptly, and the cluster scheduler and controller-manager do not clean up all of the pods on that node. </p>
<p>Solving this issue is as simple as manually deleting the pod using kubectl delete pod .</p>
</blockquote>
<p>The pod describe says if unreachable for 5 minutes will be tolerated ...</p>
<pre><code>Conditions:
Type Status
Initialized True
Ready False
ContainersReady True
PodScheduled True
Volumes:
etcd-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>I have tried <code>kubectl delete pod etcd-lns4g5xkcw</code> which just hung, though forcing it <strong>does</strong> work as per this <a href="https://stackoverflow.com/questions/55935173/kubernetes-pods-stuck-with-in-terminating-state">answer</a> ...</p>
<pre><code>kubectl delete pod etcd-lns4g5xkcw --force=true --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "etcd-lns4g5xkcw" force deleted
</code></pre>
<p>(1) Why is this happening? Shouldn't it change to Terminated?</p>
<p>(2) Where does <code>STATUS = Terminating</code> even come from? At <a href="https://v1-13.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">https://v1-13.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/</a> I see only Waiting/Running/Terminated as the options.</p>
|
<p>Pod volume and network cleanup can take more time while the pod is in <code>Terminating</code> status.
The proper way to do this is to drain the node first, so that pods terminate successfully within their grace period.
Because you unplugged the network cable, the node changed its status to <code>NotReady</code> while pods were still running on it.
Because of this, the pods could not finish terminating.</p>
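<p>For a planned removal this would look roughly like the following (a sketch using your node name):</p>
<pre><code># evict the pods gracefully before taking the node away
kubectl drain foo2 --ignore-daemonsets

# ... do the maintenance / remove the node ...

kubectl uncordon foo2   # only if the node comes back
</code></pre>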
<p>You may find this information from k8s documentation about <code>terminating</code> status useful:</p>
<blockquote>
<p>Kubernetes (versions 1.5 or newer) will not delete Pods just because a
Node is unreachable. The Pods running on an unreachable Node enter the
‘Terminating’ or ‘Unknown’ state after a timeout. Pods may also enter
these states when the user attempts graceful deletion of a Pod on an
unreachable Node:</p>
<p>There are 3 suggested ways to remove it from apiserver:</p>
<p>The Node object is deleted (either by you, or by the Node Controller).
The kubelet on the unresponsive Node starts responding, kills the Pod
and removes the entry from the apiserver. Force deletion of the Pod by
the user.</p>
</blockquote>
<p>Here you can find more information about background deletion from k8s <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">offical documentation</a></p>
|
<p>I'm trying to run an application in Kubernetes which needs to access a Sybase DB from within the Pod. I have the below egress network policy which should allow all. The Sybase DB connection gets created but is closed soon after (Connection Closed Error). The Sybase docs say:</p>
<p><em>Firewall software may filter network packets according to network port. Also, it is common to disallow UDP packets from crossing the firewall</em>.</p>
<p>My question is: do I need to explicitly specify something for UDP, or shouldn't the egress allow-all (<code>{}</code>) take care of this?</p>
<blockquote>
<p>NetWork Policy</p>
</blockquote>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: spring-app-network-policy
spec:
podSelector:
matchLabels:
role: spring-app
ingress:
- {}
egress:
- {}
policyTypes:
- Ingress
- Egress
</code></pre>
|
<p>The issue was that Spring Cloud internally deployed new pods with different names/labels, so the policy was not applied to them. It started working after adding a network policy for the newly deployed applications.</p>
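<p>A minimal sketch of such an additional policy (the label is hypothetical and must match whatever labels the Spring Cloud-deployed pods actually carry):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: spring-cloud-deployed-app-policy
spec:
  podSelector:
    matchLabels:
      role: spring-cloud-deployed-app   # hypothetical label on the new pods
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
</code></pre>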
|
<p>The default value of said annotation is 60 sec; I am looking to change its value to 120 sec. I added this as an annotation in the ingress resource file, but it doesn't seem to be working.</p>
<p>Since my request body is quite big, I get a 408 from the ingress HTTP server right at the 60-second mark.</p>
<p>Where else I can define this annotation if it is not allowed in ingress file itself?</p>
<p>The following page doesn't mention this annotation; Is it because it is not meant to be added as an annotation?</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations</a></p>
<p>Ingress resource snippet:</p>
<pre><code>kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /my-app
nginx.ingress.kubernetes.io/client-header-buffer-size: "1M"
nginx.ingress.kubernetes.io/client-header-timeout: "60"
nginx.ingress.kubernetes.io/client-body-buffer-size: "1M"
nginx.ingress.kubernetes.io/client-body-timeout: "120"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header custom-header $1;
spec:
rules:
- http:
paths:
- path: /(UK)/my-app/(.*)$
backend:
serviceName: test
servicePort: 80
</code></pre>
|
<p>To summarize our conversation in comments:</p>
<p>There are two Nginx ingress controllers:
one is maintained by the Kubernetes community and the other one by NGINX (the company behind the nginx product). Here is the GitHub repo for the <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx ingress controller</a> and here for the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">kubernetes nginx controller</a>.</p>
<hr>
<p>The Nginx controller provided by Kubernetes doesn't allow setting <code>client-body-timeout</code> with an annotation. Here is a <a href="https://github.com/kubernetes/ingress-nginx/tree/master/internal/ingress/annotations" rel="nofollow noreferrer">link to the GitHub repo with the annotations code</a>. This means that what you are left with is either:</p>
<ul>
<li>setting this parameter globally, or </li>
<li>opening feature request on github and waiting for someone to implement it.</li>
</ul>
<p>The <code>client-body-timeout</code> parameter can only be set through the global config (as specified in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="nofollow noreferrer">documentation</a>). A sketch of what that looks like is shown below.</p>
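<p>A sketch of setting it globally through the controller's ConfigMap (the ConfigMap name and namespace depend on how ingress-nginx was installed; the values below are common defaults and are assumptions here):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration     # assumed ConfigMap name used by your ingress-nginx install
  namespace: ingress-nginx
data:
  client-body-timeout: "120"    # seconds
</code></pre>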
|
<p>I have containerized microservice built with Java. This application uses the default <code>/config-volume</code> directory when it searches for property files.<br />
Previously I manually deployed via <code>Dockerfile</code>, and now I'm looking to automate this process with <em>Kubernetes</em>.</p>
<p>The container image starts the microservice immediately so I need to add properties to the config-volume folder immediately. I accomplished this in Docker with this simple <code>Dockerfile</code>:</p>
<pre><code>FROM ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
RUN mkdir /config-volume
COPY path/to/my.properties /config-volume
</code></pre>
<p>I'm trying to replicate this type of behavior in a kubernetes <code>deployment.yaml</code> but I have found no way to do it.</p>
<p>I've tried performing a <code>kubectl cp</code> command immediately after applying the deployment and it sometimes works, but it can result in a race condition which causes the microservice to fail at startup.</p>
<p>(I've redacted unnecessary parts)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-service
spec:
replicas: 1
template:
spec:
containers:
- env:
image: ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
name: my-service
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /config-volume
name: config-volume
volumes:
- name: config-volume
emptyDir: {}
status: {}
</code></pre>
<p>Is there a way to copy files into a volume inside the <code>deployment.yaml</code>?</p>
|
<p>You are trying to emulate a <code>ConfigMap</code> using volumes. Instead, put your configuration into a <code>ConfigMap</code>, and mount that to your deployments. The documentation is there:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a></p>
<p>Once you have your configuration as a <code>ConfigMap</code>, mount it using something like this:</p>
<pre><code>...
containers:
- name: mycontainer
volumeMounts:
- name: config-volume
mountPath: /config-volume
volumes:
- name: config-volume
configMap:
name: nameOfConfigMap
</code></pre>
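<p>For completeness, the <code>ConfigMap</code> itself can be created from your existing properties file with a one-liner (the ConfigMap name here is just an example and should match the one referenced in the volume):</p>
<pre><code>kubectl create configmap nameOfConfigMap --from-file=path/to/my.properties
</code></pre>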
|
<p>I deployed k8s cluster on bare metal using terraform following <a href="https://github.com/packet-labs/kubernetes-bgp/blob/master/nodes.tf" rel="nofollow noreferrer">this repository on github</a></p>
<p>Now I have three nodes: </p>
<p><strong><em>ewr1-controller, ewr1-worker-0, ewr1-worker-1</em></strong></p>
<p>Next, I would like to run terraform apply and increment the worker nodes (<strong>*ewr1-worker-3, ewr1-worker-4 ... *</strong>) while keeping the existing controller and worker nodes.
I tried incrementing the count.index to start from 3, however it still overwrites the existing workers.</p>
<pre><code>resource "packet_device" "k8s_workers" {
project_id = "${packet_project.kubenet.id}"
facilities = "${var.facilities}"
count = "${var.worker_count}"
plan = "${var.worker_plan}"
operating_system = "ubuntu_16_04"
hostname = "${format("%s-%s-%d", "${var.facilities[0]}", "worker", count.index+3)}"
billing_cycle = "hourly"
tags = ["kubernetes", "k8s", "worker"]
}
</code></pre>
<p>I havent tried this but if I do </p>
<pre><code>terraform state rm 'packet_device.k8s_workers'
</code></pre>
<p>I am assuming these worker nodes will not be managed by the kubernetes master. I don't want to create all the nodes at beginning because the worker nodes that I am adding will have different specs(instance types).</p>
<p>The entire script I used is available here on <a href="https://github.com/packet-labs/kubernetes-bgp" rel="nofollow noreferrer">this github repository</a>.
I appreciate it if someone could tell what I am missing here and how to achieve this. </p>
<p>Thanks!</p>
|
<p>Node resizing is best addressed using an autoscaler. Using Terraform to scale a node pool might not be the optimal approach, as the tool is meant to declare the state of the system rather than dynamically change it. The best approach for this is to use a cloud autoscaler.
On bare metal, you can implement a <code>CloudProvider</code> interface (like the ones provided by clouds such as AWS, GCP, Azure) as described <a href="https://github.com/kubernetes/autoscaler/issues/953" rel="nofollow noreferrer">here</a>.</p>
<p>After implementing that, you need to determine if your K8s implementation can be <a href="https://stackoverflow.com/a/45803656/10892354">operated as a provider by Terraform</a>, and if that's the case, find the nodepool <code>autoscaler</code> resource that allows the <code>autoscaling</code>.</p>
<p>Wrapping up, Terraform is not meant to be used as an <code>autoscaler</code>, given its nature as a declarative language that describes the infrastructure.</p>
<p>The <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#introduction" rel="nofollow noreferrer">autoscaling features</a> in K8s are meant to tackle this kind of requirements.</p>
|
<p>I have an application in a container which reads certain data from a configMap which goes like this</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
password: hello123
</code></pre>
<p>Now I created a secret for the password and mounted it as an env variable when starting the container.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: appdbpassword
type: Opaque
stringData:
password: hello123
</code></pre>
<p>My pod looks like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.pod.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.image }}
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;"]
env:
- name: password
valueFrom:
secretKeyRef:
name: appdbpassword
key: password
volumeMounts:
- name: config-volume
mountPath: /app/app-config/application.yaml
subPath: application.yaml
volumes:
- name: config-volume
configMap:
name: app-config
</code></pre>
<p>I tried using this env variable inside the configMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
**password: ${password}**
</code></pre>
<p>But my application is unable to read this password. Am I missing something here? </p>
<p>EDIT:</p>
<p>I cannot change the application.yaml to any other form as my server looks for application.yaml in source path. Do we have any way to use that environment variable in values.yaml(helm) file and use it in the configmap?</p>
|
<p>You cannot use a secret in a <code>ConfigMap</code>, as ConfigMaps are intended for non-sensitive data (<a href="https://github.com/kubernetes/kubernetes/issues/79224" rel="nofollow noreferrer">see here</a>).</p>
<p>Also, you should not pass <code>Secrets</code> using <code>env</code> variables, as it creates a potential risk (read more <a href="https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/" rel="nofollow noreferrer">here</a> about why <code>env</code> shouldn't be used).
Applications usually dump <code>env</code> variables in error reports or even write them to the app logs at startup, which could lead to exposing <code>Secrets</code>.</p>
<p>The best way would be to mount the <code>Secret</code> as a file.
Here's a simple example of how to mount it as a file:</p>
<pre><code>spec:
template:
spec:
containers:
- image: "my-image:latest"
name: my-app
...
volumeMounts:
- mountPath: "/var/my-app"
name: ssh-key
readOnly: true
volumes:
- name: ssh-key
secret:
secretName: ssh-key
</code></pre>
<p>Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">documentation</a> explains well how to use and mount secrets.</p>
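<p>For reference, the <code>Secret</code> from your question could also be created from the command line, and the application then reads the password from the mounted file (the mount path below is only an example):</p>
<pre><code># create (or recreate) the secret
kubectl create secret generic appdbpassword --from-literal=password=hello123

# inside the pod, with the secret mounted at /etc/app-secrets:
cat /etc/app-secrets/password   # prints: hello123
</code></pre>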
|
<p>We have deployed Traefik 2.2 on our Kubernetes cluster with following ingress-route created to access our application. This configuration is working fine for us and is currently the same for our Production system as well.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: application-xyz
spec:
tls: {}
entryPoints:
- web
- websecure
routes:
- match: "HostRegexp(`application-xyz.com`) && PathPrefix(`/`)"
kind: Rule
priority: 1
services:
- name: application-xyz-service
port: 80
- match: "PathPrefix(`/application-xyz/`)"
kind: Rule
services:
- name: application-xyz-service
port: 80
middlewares:
- name: application-xyz-stripprefix
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: application-xyz-stripprefix
namespace: frontend
spec:
stripPrefix:
prefixes:
- /application-xyz/
forceSlash: false
</code></pre>
<p>Question 1:
We are now planning to migrate from Traefik to the Nginx Ingress Controller. Is there any way we can replicate the same setup on Nginx, similar to the Traefik configuration? I'm unsure if I'm comparing this the right way or not. We would be grateful for any pointers.</p>
<p>Question 2:
We wish to achieve stripprefix functionality in Nginx but didn't find any helpful documentation. Any leads in this regard are highly appreciated.</p>
|
<p>StripPrefix functionality in the nginx ingress can be achieved using the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer"><code>rewrite-target</code> annotation</a>.
When <code>rewrite-target</code> is used, regexp path matching is enabled, and it allows you to match any part of the path into groups and rewrite the path based on them.</p>
<p>In your case it would look like following:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
name: rewrite
namespace: default
spec:
rules:
- host: application-xyz.com
http:
paths:
- backend:
serviceName: application-xyz-service
servicePort: 80
path: /(.*)
- http:
paths:
- backend:
serviceName: application-xyz-service
servicePort: 80
path: /application-xyz/(.*)
</code></pre>
<p>Feel free to ask questions if you feel like my answer needs a more detailed explanation.</p>
|
<p>I have deployed a Kubernetes cluster composed of a master and two workers using <code>kubeadm</code> and the Flannel network driver (So I passed the <code>--pod-network-cidr=10.244.0.0/16</code> flag to <code>kubeadm init</code>).</p>
<p>Those nodes are communicating together using a VPN so that:</p>
<ul>
<li>Master node IP address is 10.0.0.170</li>
<li>Worker 1 IP address is 10.0.0.247</li>
<li>Worker 2 IP address is 10.0.0.35</li>
</ul>
<p>When I create a new pod and I try to ping google I have the following error:</p>
<pre><code>/ # ping google.com
ping: bad address 'google.com'
</code></pre>
<p>I followed the instructions from the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">Kubernetes DNS debugging resolution</a> documentation page:</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
<h3>Check the local DNS configuration first</h3>
<pre><code>$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
</code></pre>
<h3>Check if the DNS pod is running</h3>
<pre><code>$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-cqzb7 1/1 Running 0 7d18h
coredns-5c98db65d4-xc5d7 1/1 Running 0 7d18h
</code></pre>
<h3>Check for Errors in the DNS pod</h3>
<pre><code>$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
2019-10-28T13:40:41.834Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:41.834Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:41.834Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
.:53
2019-10-28T13:40:42.870Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:42.870Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:42.870Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
</code></pre>
<h3>Is DNS service up?</h3>
<pre><code>$ kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 7d18h
</code></pre>
<h3>Are DNS endpoints exposed?</h3>
<pre><code>$ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.0.3:53,10.244.0.4:53,10.244.0.3:53 + 3 more... 7d18h
</code></pre>
<h3>Are DNS queries being received/processed?</h3>
<p>I made the update to the coredns ConfigMap, ran again the <code>nslookup kubernetes.default</code> command, and here is the result:</p>
<pre><code>$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
2019-10-28T13:40:41.834Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:41.834Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:41.834Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[INFO] Reloading
2019-11-05T08:12:12.511Z [INFO] plugin/reload: Running configuration MD5 = 906291470f7b1db8bef629bdd0056cad
[INFO] Reloading complete
2019-11-05T08:12:12.608Z [INFO] 127.0.0.1:55754 - 7434 "HINFO IN 4808438627636259158.5471394156194192600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.095189791s
.:53
2019-10-28T13:40:42.870Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:42.870Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:42.870Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[INFO] Reloading
2019-11-05T08:12:47.988Z [INFO] plugin/reload: Running configuration MD5 = 906291470f7b1db8bef629bdd0056cad
[INFO] Reloading complete
2019-11-05T08:12:48.004Z [INFO] 127.0.0.1:51911 - 60104 "HINFO IN 4077052818408395245.3902243105088660270. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016522153s
</code></pre>
<p>So it seems that DNS pods are receiving the requests.</p>
<h2>But I had this error already!</h2>
<p>That error happened to me the first time I deployed the cluster.</p>
<p>At that time, I noticed that <code>kubectl get nodes -o wide</code> was showing the workers node public IP address as "INTERNAL-IP" instead of the private one.</p>
<p>Looking further I found out that on the worker nodes, kubelet was missing the <code>--node-ip</code> flag, so I've added it and restarted Kubelet and the issue was gone. I then concluded that missing flag was the reason, but it seems to not be the case as the <code>kubectl get nodes -o wide</code> command shows the internal IP addresses as "INTERNAL-IP" for the workers.</p>
<h2>And now</h2>
<p>The DNS server IP address 10.96.0.10 looks wrong to me, and I can't ping it from the pod. The DNS pods have the IP addresses 10.244.0.3 and 10.244.0.4, which I can't ping either.</p>
<p>I just tried to delete the coredns pods, so that they are scheduled again, and now their IP addresses changed, I can ping them from the pod and the <code>kubectl exec -ti busybox -- nslookup kubernetes.default</code> works:</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<p>But the <code>resolv.conf</code> file still has the "invalid" inside:</p>
<pre><code>$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
</code></pre>
<ul>
<li>Can anyone explain to me what happened, please?</li>
<li>And how can I remove this "invalid" entry from the <code>resolv.conf</code> file?</li>
</ul>
|
<p>As configured in <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns" rel="noreferrer">CoreDNS ConfigMap</a> default upstream nameservers are inherited from node, that is everything outside the cluster domain (.cluster.local)</p>
<p>So "invalid" is an entry copied from Node's <code>/etc/resolv.conf</code> file during Pod creation.</p>
<p>If you manually modify <code>/etc/resolv.conf</code> on your Node, every Pod with <code>dnsPolicy: ClusterFirst</code> will inherit <code>/etc/resolv.conf</code> with this modification.</p>
<p>So, after adding <code>--node-ip</code> flag to kubelet and restarting CoreDNS Pods you should re-deploy your busybox Pod so it can inherit <code>/etc/resolv.conf</code> from the Node.</p>
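<p>In practice that means something along these lines (a sketch): fix the <code>search</code> line in the node's <code>/etc/resolv.conf</code>, restart the CoreDNS pods, then recreate the busybox pod so it picks up the corrected file:</p>
<pre><code># on the node: remove the "invalid" entry from the search line in /etc/resolv.conf, then:
kubectl -n kube-system delete pod -l k8s-app=kube-dns   # CoreDNS pods get recreated
kubectl delete pod busybox
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec busybox -- cat /etc/resolv.conf            # "invalid" should be gone
</code></pre>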
|
<p>I'm trying to set up Spring Cloud Gateway on OpenShift and want to discover the services available within the cluster. I'm able to discover the services by adding the @DiscoveryClient and dependencies as below.</p>
<p>Boot dependencies are like:</p>
<pre><code> spring-cloud.version : Greenwich.SR2
spring-boot-starter-parent:2.1.7.RELEASE
</code></pre>
<pre><code><dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
</code></pre>
<p>I can see that services are being discovered and registered. Routing is also happening, but a CN name validation error occurs while routing. I tried setting <code>use-insecure-trust-manager: true</code> as well, but I still get the same error.</p>
<pre><code>2021-12-31 12:30:33.867 TRACE 1 --- [or-http-epoll-8] o.s.c.g.h.p.RoutePredicateFactory : Pattern "[/customer-service/**]" does not match against value "/userprofile/addUser"
2021-12-31 12:30:33.868 TRACE 1 --- [or-http-epoll-8] o.s.c.g.h.p.RoutePredicateFactory : Pattern "/userprofile/**" matches against value "/userprofile/addUser"
2021-12-31 12:30:33.868 DEBUG 1 --- [or-http-epoll-8] o.s.c.g.h.RoutePredicateHandlerMapping : Route matched: CompositeDiscoveryClient_userprofile
2021-12-31 12:30:33.868 DEBUG 1 --- [or-http-epoll-8] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: GET https://my-gatewat.net/userprofile/addUser ] to Route{id='CompositeDiscoveryClient_userprofile', uri=lb://userprofile, order=0, predicate=org.springframework.cloud.gateway.support.ServerWebExchangeUtils$$Lambda$712/0x000000010072a440@1046479, gatewayFilters=[OrderedGatewayFilter{delegate=org.springframework.cloud.gateway.filter.factory.RewritePathGatewayFilterFactory$$Lambda$713/0x000000010072a840@3c8d9cd1, order=1}]}
2021-12-31 12:30:33.888 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start
2021-12-31 12:30:33.888 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url before: lb://userprofile/addUser
2021-12-31 12:30:33.889 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url chosen: https://10.130.83.26:8443/addUser
2021-12-31 12:30:33.891 DEBUG 1 --- [ctor-http-nio-7] r.n.resources.PooledConnectionProvider : [id: 0x326a2e7b] Created new pooled channel, now 0 active connections and 1 inactive connections
2021-12-31 12:30:33.891 DEBUG 1 --- [ctor-http-nio-7] reactor.netty.tcp.SslProvider : [id: 0x326a2e7b] SSL enabled using engine SSLEngineImpl and SNI /10.130.83.26:8443
2021-12-31 12:30:33.931 ERROR 1 --- [ctor-http-nio-7] a.w.r.e.AbstractErrorWebExceptionHandler : [8768bf6c] 500 Server Error for HTTP GET "/userprofile/addUser"
javax.net.ssl.SSLHandshakeException: No subject alternative names matching IP address 10.130.83.26 found
at java.base/sun.security.ssl.Alert.createSSLException(Unknown Source) ~[na:na]
at java.base/sun.security.ssl.TransportContext.fatal(Unknown Source) ~[na:na]
</code></pre>
<p>Application.yml:</p>
<pre><code>
spring:
application:
name: my-api-gateway
cloud:
gateway:
discovery:
locator:
enabled: true
httpclient:
ssl:
use-insecure-trust-manager: true
</code></pre>
<p>I tried adding SNI matchers in the SSL context to skip the hostname check, but it's still not working:</p>
<pre><code>SNIMatcher matcher = new SNIMatcher(0) {
@Override
public boolean matches(SNIServerName serverName) {
log.info("Server Name validation:{}", serverName);
return true;
}
};
</code></pre>
|
<p>I'm able to resolve this error by using k8s discovery with url-expression as below:</p>
<pre><code>spring:
cloud:
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
url-expression: "'https://'+serviceId+':'+getPort()"
</code></pre>
<p>Routes will be registered as https://servicename:port. The same URL is used by the SSLProvider, which then creates the SSL handler with the hostname in the SNI information rather than the IP address that was causing this failure.</p>
<p>Log line showing the SSL provider adding the handler with the hostname and port in the SNI:</p>
<p>2022-01-04 14:58:15.360 DEBUG 1 --- [or-http-epoll-4] reactor.netty.tcp.SslProvider : [63cc8609, L:/127.0.0.1:8091 - R:/127.0.0.1:60004] SSL enabled using engine io.netty.handler.ssl.JdkAlpnSslEngine@31e2342b and SNI my-service:8088</p>
|
<p>I am trying to get the list of pods that are servicing a particular service</p>
<p>There are 3 pods associated with my service.</p>
<p>I tried to execute the below command</p>
<p><code>oc describe svc my-svc-1</code></p>
<p>I am expecting to see the pods associated with this service, but they do not show up. What command gets me just the list of pods associated with the service?</p>
|
<p>A service chooses the pods using the selector. Look at the selector for the service, and get the pods using that selector. For kubectl, the command looks like:</p>
<pre><code>kubectl get pods --selector <selector>
</code></pre>
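<p>Since you're using OpenShift, the same works with <code>oc</code>. A sketch using your service name (the label shown is hypothetical; use whatever the first command prints):</p>
<pre><code>oc get svc my-svc-1 -o jsonpath='{.spec.selector}'   # e.g. prints the selector labels, such as app:my-app
oc get pods --selector app=my-app
</code></pre>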
|
<p>I am adding security to my cluster. One of the requirements is that the communication between pods is secure.</p>
<p>The most viable option I found is to implement a "service mesh". I have seen that Calico, Istio, Linkerd are good options. But I don't know which one is the lightest as any of them have a lot of components that I won't really need.</p>
<p>If you have another recommendation, it is welcome.</p>
<p>I read:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/62881298/what-is-pod-to-pod-encryption-in-kubernetes-and-how-to-implement-pod-to-pod-enc">What is pod to pod encryption in kubernetes? And How to implement pod to pod encryption by using mTLS in kubernetes?</a></li>
<li><a href="https://stackoverflow.com/questions/45453187/how-to-configure-kubernetes-to-encrypt-the-traffic-between-nodes-and-pods">How to configure Kubernetes to encrypt the traffic between nodes, and pods?</a></li>
<li><a href="https://medium.com/@santhoz/nginx-sidecar-reverse-proxy-for-performance-http-to-https-redirection-in-kubernetes-dd9dbe2fd0c7" rel="nofollow noreferrer">https://medium.com/@santhoz/nginx-sidecar-reverse-proxy-for-performance-http-to-https-redirection-in-kubernetes-dd9dbe2fd0c7</a></li>
</ul>
|
<p>Calico is an overlay network and CNI implementation. It won't automatically encrypt the communication between pods on its own, as far as I know.</p>
<p>Linkerd and Istio are service meshes which implement CNI to encrypt traffic with a CNI provider like calico, <em>but</em> a CNI provider is not required.</p>
<p>Linkerd will <a href="https://linkerd.io/2/features/automatic-mtls/" rel="nofollow noreferrer">automatically</a> encrypt traffic with mTLS out of the box.</p>
<p>I think Istio added that feature recently.</p>
<p>Linkerd is much easier to install and use, and its proxy is <a href="https://kinvolk.io/blog/2019/05/performance-benchmark-analysis-of-istio-and-linkerd/" rel="nofollow noreferrer">faster and uses fewer resources</a>.</p>
|
<p>I have a new service that emits so many events that etcd runs out of space. These are deleted after 1h, but I'd like them to be deleted sooner. Is there any way to set the TTL on each event?</p>
|
<p>There is a flag for <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">api-server</a>:</p>
<blockquote>
<p>--event-ttl duration Default: 1h0m0s
Amount of time to retain events.</p>
</blockquote>
|
<p>I'm new to Kubernetes. I have set up a 3-node cluster with two workers according to the guide <a href="https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04" rel="nofollow noreferrer">here</a>.<br/>
My configuration:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<pre><code> kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Deployed simple python service listen to 8000 port http and reply "Hello world"</p>
<p>my deployment config</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app
labels:
app: frontend-app
spec:
replicas: 2
selector:
matchLabels:
app: frontend-app
template:
metadata:
labels:
app: frontend-app
spec:
containers:
- name: pyfrontend
image: rushantha/pyfront:1.0
ports:
- containerPort: 8000
</code></pre>
<p>Exposed this as a service
<code>kubectl expose deploy frontend-app --port 8000</code></p>
<p>I can see it deployed and running.</p>
<pre><code>kubectl describe svc frontend-app
Name: frontend-app
Namespace: default
Labels: app=frontend-app
Annotations: <none>
Selector: app=frontend-app
Type: ClusterIP
IP: 10.96.113.192
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 172.16.1.10:8000,172.16.2.9:8000
Session Affinity: None
Events: <none>
</code></pre>
<p>When I log in to each worker machine and curl the pods directly, they respond,
i.e. <code>curl 172.16.1.10:8000 or curl 172.16.2.9:8000</code></p>
<p>However, when I try to access the pods via the ClusterIP, only one pod ever responds, so curl sometimes hangs; most probably the other pod does not respond. I confirmed this by tailing the access logs for both pods: one pod never received any requests.</p>
<pre><code>curl 10.96.113.192:8000/ ---> Hangs sometimes.
</code></pre>
<p>Any ideas how to troubleshoot and fix this?</p>
|
<p>After comparing the tutorial document with the configuration in your output,
I discovered that the <code>--pod-network-cidr</code> declared in the document is different from the endpoint addresses in your cluster; aligning them solved the problem.</p>
<p>The network in the flannel configuration should match the <code>pod network CIDR</code>, otherwise pods won't be able to communicate with each other.</p>
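<p>Concretely, the CIDR passed to kubeadm has to match the <code>Network</code> field in flannel's configuration (the values below are flannel's defaults; adjust them to whatever your cluster actually uses):</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16

# net-conf.json inside the kube-flannel-cfg ConfigMap must use the same range:
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
</code></pre>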
<p>Some additional information that is worth checking:</p>
<ol>
<li><p>Under the <code>CIDR Notation</code> <a href="https://www.digitalocean.com/community/tutorials/understanding-ip-addresses-subnets-and-cidr-notation-for-networking" rel="nofollow noreferrer">section</a> there is a good explanation how this system works. </p></li>
<li><p>I find <a href="https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727" rel="nofollow noreferrer">this document</a> about networking in kuberenetes very helpful. </p></li>
</ol>
|
<p>In the past I've tried <code>NodePort</code> service and if I add a firewall rule to the corresponding Node, it works like a charm:</p>
<pre><code> type: NodePort
ports:
- nodePort: 30000
port: 80
targetPort: 5000
</code></pre>
<p>I can access my service from outside as long as the node has an external IP (which it does by default in GKE).
However, the service can only be assigned ports in the 30000+ range, which is not very convenient.
By the way, the Service looks as follows:</p>
<pre><code> kubectl get service -o=wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-engine-service NodePort 10.43.244.110 <none> 80:30000/TCP 11m app=web-engine-pod
</code></pre>
<p>Recently, I've come across a different configuration option that is documented <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">here</a>.
I've tried it as it seems quite promising and should allow me to expose my service on any port I want.<br />
The configuration is as follows:</p>
<pre><code> ports:
- name: web-port
port: 80
targetPort: 5000
externalIPs:
- 35.198.163.215
</code></pre>
<p>After the service updated, I can see that External IP is indeed assigned to it:</p>
<pre><code>$ kubectl get service -o=wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-engine-service ClusterIP 10.43.244.110 35.198.163.215 80/TCP 19m app=web-engine-pod
</code></pre>
<p>(where <code>35.198.163.215</code> - Node's external IP in GKE)</p>
<p>And yet, my app is not available on the Node's IP, unlike in the first scenario (I did add firewall rules for all ports I'm working with, including <code>80</code>, <code>5000</code>, <code>30000</code>).</p>
<p>What's the point of <code>externalIPs</code> configuration then? What does it actually do?</p>
<p><strong>Note:</strong> I'm creating a demo project, so please don't tell me about <code>LoabBalancer</code> type, I'm well aware of that and will get to that a bit later.</p>
|
<p>I wanted to give you more insight on:</p>
<ul>
<li>How you can manage to make it work.</li>
<li>Why it's not working in your example.</li>
<li>More information about exposing traffic on <code>GKE</code>.</li>
</ul>
<hr />
<h3>How you can manage to make it work?</h3>
<p>You will need to enter the <strong>internal IP</strong> of your node/nodes in the service definition where <code>externalIPs</code> resides.</p>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: hello-external
spec:
selector:
app: hello
version: 2.0.0
ports:
- name: http
protocol: TCP
port: 80 # port to send the traffic to
targetPort: 50001 # port that pod responds to
externalIPs:
- 10.156.0.47
- 10.156.0.48
- 10.156.0.49
</code></pre>
<hr />
<h3>Why it's not working in your example?</h3>
<p>I've prepared an example to show you why it doesn't work.</p>
<p>Assuming that you have:</p>
<ul>
<li>VM in GCP with:
<ul>
<li>any operating system that allows to run <code>tcpdump</code></li>
<li>internal IP of: <code>10.156.0.51</code></li>
<li>external IP of: <code>35.246.207.189</code></li>
<li>allowed the traffic to enter on port: <code>1111</code> to this VM</li>
</ul>
</li>
</ul>
<p>You can run below command <strong>(on VM)</strong> to capture the traffic coming to the port: <code>1111</code>:</p>
<ul>
<li><code>$ tcpdump port 1111 -nnvvS</code></li>
</ul>
<blockquote>
<ul>
<li>-nnvvS - don't resolve DNS or Port names, be more verbose when printing info, print the absolute sequence numbers</li>
</ul>
</blockquote>
<p>You will need to send a request to external IP: <code>35.246.207.189</code> of your VM with a port of: <code>1111</code></p>
<ul>
<li><code>$ curl 35.246.207.189:1111</code></li>
</ul>
<p>You will get a connection refused message but the packet will be captured. You will get an output similar to this:</p>
<pre class="lang-sh prettyprint-override"><code>tcpdump: listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
12:04:25.704299 IP OMMITED
YOUR_IP > 10.156.0.51.1111: Flags [S], cksum 0xd9a8 (correct), seq 585328262, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1282380791 ecr 0,sackOK,eol], length 0
12:04:25.704337 IP OMMITED
10.156.0.51.1111 > YOUR_IP: Flags [R.], cksum 0x32e3 (correct), seq 0, ack 585328263, win 0, length 0
</code></pre>
<p>In that example you can see the destination IP address of your packet coming to the VM. As shown above, it's the <strong>internal IP</strong> of your VM and not the external one. That's why putting the external IP in your <code>YAML</code> definition is not working.</p>
<blockquote>
<p>This example also works on <code>GKE</code>. For simplicity purposes you can create a <code>GKE</code> cluster with Ubuntu as base image and do the same as shown above.</p>
</blockquote>
<p>You can read more about IP addresses by following link below:</p>
<ul>
<li><em><a href="https://cloud.google.com/vpc/docs/ip-addresses" rel="nofollow noreferrer">Cloud.google.com: VPC: Docs: IP addresses</a></em></li>
</ul>
<hr />
<h3>More about exposing traffic on <code>GKE</code></h3>
<blockquote>
<p>What's the point of <code>externalIPs</code> configuration then? What does it actually do?</p>
</blockquote>
<p>In simple terms, it allows traffic to enter your cluster. A request sent to your cluster needs to have a destination IP equal to the <code>externalIPs</code> parameter in your service definition in order to be routed to the corresponding service.</p>
<p>This method requires you to track the IP addresses of your nodes and can be prone to issues when the IP address of a node is no longer available (node autoscaling, for example).</p>
<p>I recommend you to expose your services/applications by following official <code>GKE</code> documentation:</p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps</a></li>
</ul>
<p>As mentioned before, the <code>LoadBalancer</code> type of service will automatically take into account changes made to the cluster, such as autoscaling which increases/decreases the number of your nodes. With the service shown above (with <code>externalIPs</code>) this would require manual changes.</p>
<p>Please let me know if you have any questions about that.</p>
|
<p>When using <a href="https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner" rel="nofollow noreferrer">nfs-server-provisioner</a> is it possible to set a specific persistent volume for the NFS provisioner?</p>
<p>At present, I'm setting the Storage Class to use via helm:</p>
<pre><code>helm install stable/nfs-server-provisioner \
--namespace <chart-name>-helm \
--name <chart-name>-nfs \
--set persistence.enabled=true \
--set persistence.storageClass=slow \
--set persistence.size=25Gi \
--set storageClass.name=<chart-name>-nfs \
--set storageClass.reclaimPolicy=Retain
</code></pre>
<p>and the Storage Class is built via:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
type: pd-standard
replication-type: none
</code></pre>
<p>This then generates the PV dynamically when requested by a PVC.</p>
<p>I'm using the PV to store files for a Stateful CMS, using NFS allows for multiple pods to connect to the same file store.</p>
<p>What I'd like to do now is move all those images to a new set of pods on a new cluster. Rather than backing them up and going through the process of dynamically generating a PV and restoring the files to it, is it possible to retain the current PV and then connect the new PVC to it?</p>
|
<blockquote>
<p>When using nfs-server-provisioner is it possible to set a specific persistent volume for the NFS provisioner?</p>
</blockquote>
<p>If the question is whether it's possible to retain the data from the existing old PV that was the storage for the old NFS server and then use it with the new NFS server, the answer is yes.</p>
<p>I've managed to find a way to do it. Please remember that this is only a <strong>workaround</strong>.</p>
<p>Steps:</p>
<ul>
<li>Create a snapshot out of existing old nfs server storage.</li>
<li>Create a new disk where the source is previously created snapshot</li>
<li>Create a PV and PVC for newly created nfs-server</li>
<li>Pull the nfs-server-provisioner helm chart and edit it</li>
<li>Spawn edited nfs-server-provisioner helm chart</li>
<li>Create new PV's and PVC's with nfs-server-provisioner storageClass</li>
<li>Attach newly created PVC's to workload</li>
</ul>
<p>Please remember that this solution is showing the way to create PV's and PVC's for a new workload manually.</p>
<p>I've included the whole process below.</p>
<hr />
<h3>Create a snapshot out of existing old nfs server storage.</h3>
<p>Assuming that you created your old <code>nfs-server</code> with the <code>gce-pd</code>, you can access this disk via GCP Cloud Console to make a snapshot of it.</p>
<p>I've included here a safer approach which consists of creating a copy of the <code>gce-pd</code> holding the data of the old nfs-server. This copy will be used in the new <code>nfs-server</code>.</p>
<p>It is also possible to change the <code>persistentVolumeReclaimPolicy</code> on the existing old PV so it is not deleted when the <code>PVC</code> of the old nfs-server is deleted. In this way, you could reuse the existing disk in the new nfs-server.</p>
<p>Please refer to official documentation how to create a snapshot out of persistent disks in GCP:</p>
<ul>
<li><em><a href="https://cloud.google.com/compute/docs/disks/create-snapshots" rel="nofollow noreferrer">Cloud.google.com: Compute: Disks: Create snapshot</a></em></li>
</ul>
<hr />
<h3>Create a new disk where the source is previously created snapshot</h3>
<p>You will need to create a new <code>gce-pd</code> disk for your new <code>nfs-server</code>. The earlier created snapshot will be the source for your new disk.</p>
<p>Please refer to official documentation on how to create a new disk from existing snapshot:</p>
<ul>
<li><em><a href="https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#console" rel="nofollow noreferrer">Cloud.google.com: Compute: Disks: Restore and delete snapshots</a></em></li>
</ul>
<hr />
<h3>Create a PV and PVC for newly created nfs-server</h3>
<p>To ensure that the <code>GCP</code>'s disk will be bound to the newly created nfs-server you will need to create a PV and a PVC. You can use example below but please change it accordingly to your use case:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: data-disk
spec:
storageClassName: standard
capacity:
storage: 25G
accessModes:
- ReadWriteOnce
claimRef:
namespace: default
name: data-disk-pvc # reference to the PVC below
gcePersistentDisk:
pdName: old-data-disk # name of the disk created from snapshot in GCP
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-disk-pvc # reference to the PV above
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 25G
</code></pre>
<p>The way that this nfs-server works is that it creates a disk in the <code>GCP</code> infrastructure to store all the data saved on the nfs-server. Further creation of <code>PV</code>'s and <code>PVC</code>'s with the nfs <code>storageClass</code> will result in the creation of a folder in the <code>/export</code> directory inside the nfs-server pod.</p>
<hr />
<h3>Pull the nfs-server-provisioner helm chart and edit it</h3>
<p>You will need to pull the Helm chart of the <code>nfs-server-provisioner</code> as it requires a reconfiguration. You can do it by invoking below command:</p>
<ul>
<li><code>$ helm pull --untar stable/nfs-server-provisioner</code></li>
</ul>
<p>The changes are following in the <code>templates/statefulset.yaml</code> file:</p>
<ul>
<li><strong>Delete</strong> the parts responsible for handling persistence <code>.Values.persistence.enabled</code> (at the bottom). These parts are responsible for creating storage which you already have.</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> {{- if not .Values.persistence.enabled }}
volumes:
- name: data
emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
{{- end }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- end }}
</code></pre>
<ul>
<li>Add the volume definition to use previously created disk with the nfs data to the <code>spec.template.spec</code> part like here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pod" rel="nofollow noreferrer">Kubernetes.io: Configure persistent volume storage: Create a pod</a></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> volumes:
- name: data
persistentVolumeClaim:
claimName: data-disk-pvc # name of the pvc created from the disk
</code></pre>
<hr />
<h3>Spawn edited nfs-server-provisioner helm chart</h3>
<p>You will need to run this Helm chart from local storage instead of running it from the web. The command to run it will be following:</p>
<ul>
<li><code>$ helm install kruk-nfs . --set storageClass.name=kruk-nfs --set storageClass.reclaimPolicy=Retain</code></li>
</ul>
<blockquote>
<p>This syntax is specific to Helm3</p>
</blockquote>
<p>Above parameters are necessary to specify the name of the <code>storageClass</code> as well it's <code>reclaimPolicy</code>.</p>
<hr />
<h3>Create new PV's and PVC's with nfs-server-provisioner storageClass</h3>
<p>Example to create a PVC linked to the existing folder in nfs-server.</p>
<p>Assuming that the <code>/export</code> directory looks like this:</p>
<pre class="lang-sh prettyprint-override"><code>bash-5.0# ls
ganesha.log
lost+found
nfs-provisioner.identity
pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # folder we will create a PV and PVC for
v4old
v4recov
vfs.conf
</code></pre>
<blockquote>
<p>A tip!
When you create a <code>PVC</code> with a <code>storageClass</code> of this nfs-server it will create a folder with the name of this <code>PVC</code>.</p>
</blockquote>
<p>You will need to create a <code>PV</code> for your share:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-example
spec:
storageClassName: kruk-nfs
capacity:
storage: 100Mi
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
nfs:
path: /export/pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # directory to mount pv
server: 10.73.4.71 # clusterIP of nfs-server-pod service
</code></pre>
<p>And <code>PVC</code> for your <code>PV</code>:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
name: pv-example-claim
spec:
storageClassName: kruk-nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
volumeName: pv-example # name of the PV created for a folder
</code></pre>
<h3>Attach newly created PVC's to workload</h3>
<p>Above manifests will create a <code>PVC</code> with the name of pv-example-claim that will have the contents of the <code>pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf</code> directory available for usage. You can mount this <code>PVC</code> to a pod by following this example:</p>
<pre class="lang-yaml prettyprint-override"><code>piVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: storage-mounting
persistentVolumeClaim:
claimName: pv-example-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/storage"
name: storage-mounting
</code></pre>
<p>After that you should be able to check if you have the data in the folder specified in the above manifest:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it task-pv-pod -- cat /storage/hello
hello there
</code></pre>
|
<p>Given a kubernetes cluster with:</p>
<ol>
<li><a href="https://github.com/prometheus/prometheus" rel="nofollow noreferrer">prometheus</a></li>
<li><a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">node-exporter</a></li>
<li><a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a></li>
</ol>
<p>I like to use the metric <code>container_memory_usage_bytes</code> but select by <code>deployment_name</code> instead of <code>pod</code>.</p>
<p>Selectors like <code>container_memory_usage_bytes{pod_name=~"foo-.+"}</code> if the <code>deployment_name=foo</code> are great as long there is not a deployment with <code>deployment_name=foo-bar</code>.</p>
<p>The same I'd like to achieve with the metric <code>kube_pod_container_resource_limits_memory_bytes</code>.</p>
<p>Is there a way to achieve this?</p>
|
<p><strong>TL;DR</strong></p>
<p><strong>There is no straightforward way to query Prometheus by a <code>deployment-name</code>.</strong></p>
<p>You can query memory usage of a specific deployment by using deployment's labels.</p>
<p>Used query:</p>
<pre><code>sum(
kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}
)
by (label_app)
</code></pre>
<p>There is an awesome article which explains the concepts behind this query. I encourage you to read it:</p>
<ul>
<li><em><a href="https://medium.com/@amimahloof/kubernetes-promql-prometheus-cpu-aggregation-walkthrough-2c6fd2f941eb" rel="noreferrer">Medium.com: Amimahloof: Kubernetes promql prometheus cpu aggregation walkthrough</a></em></li>
</ul>
<p>I've included an explanation with example below.</p>
<hr />
<p>The selector mentioned in the question:
<code>container_memory_usage_bytes{pod_name=~"foo-.+"}</code></p>
<blockquote>
<p><code>.+</code> - match any string but not an empty string</p>
</blockquote>
<p>with pods like:</p>
<ul>
<li><code>foo-12345678-abcde</code> - <strong>will match</strong> (deployment <code>foo</code>)</li>
<li><code>foo-deployment-98765432-zyxzy</code> - <strong>will match</strong> (deployment <code>foo-deployment</code>)</li>
</ul>
<p>As shown above it will match for both pods and for both deployments.</p>
<p>For more reference:</p>
<ul>
<li><em><a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noreferrer">Prometheus.io: Docs: Prometheus: Querying: Basics</a></em></li>
</ul>
<hr />
<p>As mentioned earlier, you can use labels from your deployment to pinpoint the resource used by your specific deployment.</p>
<p>Assuming that:</p>
<ul>
<li>There are 2 deployments in <code>memory-testing</code> namespace:
<ul>
<li><code>ubuntu</code> with 3 replicas</li>
<li><code>ubuntu-additional</code> with 3 replicas</li>
</ul>
</li>
<li>The above deployments have labels matching their names (the labels could be different):
<ul>
<li><code>app: ubuntu</code></li>
<li><code>app: ubuntu-additional</code></li>
</ul>
</li>
<li>Kubernetes cluster version <code>1.18.X</code></li>
</ul>
<blockquote>
<p><em>Why do I specify Kubernetes version?</em></p>
<blockquote>
<p>Kubernetes 1.16 will remove the duplicate <code>pod_name</code> and <code>container_name</code> metric labels from cAdvisor metrics. For the 1.14 and 1.15 release all <code>pod</code>, <code>pod_name</code>, <code>container</code> and <code>container_name</code> were available as a grace period.</p>
</blockquote>
<ul>
<li><em><a href="https://github.com/kubernetes/enhancements/issues/1206" rel="noreferrer">Github.com: Kubernetes: Metrics Overhaul</a></em></li>
</ul>
</blockquote>
<p>This means that on Kubernetes <code>1.16</code> and newer you will need to substitute the parameters:</p>
<ul>
<li><code>pod_name</code> with <code>pod</code></li>
<li><code>container_name</code> with <code>container</code></li>
</ul>
<p>To deploy Prometheus and other tools to monitor the cluster I used: <em><a href="https://github.com/coreos/kube-prometheus" rel="noreferrer">Github.com: Coreos: Kube-prometheus</a></em></p>
<p>The pods in the <code>ubuntu</code> deployment are configured to generate artificial load (<code>stress-ng</code>). This is done to show how to avoid a situation where the reported resource usage is doubled.</p>
<p>The resources used by pods in <code>memory-testing</code> namespace:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl top pod --namespace=memory-testing
NAME CPU(cores) MEMORY(bytes)
ubuntu-5b5d6c56f6-cfr9g 816m 280Mi
ubuntu-5b5d6c56f6-g6vh9 834m 278Mi
ubuntu-5b5d6c56f6-sxldj 918m 288Mi
ubuntu-additional-84bdf9b7fb-b9pxm 0m 0Mi
ubuntu-additional-84bdf9b7fb-dzt72 0m 0Mi
ubuntu-additional-84bdf9b7fb-k5z6w 0m 0Mi
</code></pre>
<p>If you were to query Prometheus with below query:</p>
<pre><code>container_memory_usage_bytes{namespace="memory-testing", pod=~"ubuntu.*"}
</code></pre>
<p>You would get output similar to one below (it's cut to show only one pod for example purposes, by default it would show all pods with <code>ubuntu</code> in name and in <code>memory-testing</code> namespace):</p>
<pre><code>container_memory_usage_bytes{endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 308559872
container_memory_usage_bytes{container="POD",endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74/312980f90e6104d021c12c376e83fe2bfc524faa4d4cee6553182d0fa2e007a1",image="k8s.gcr.io/pause:3.2",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_POD_ubuntu-5b5d6c56f6-cfr9g_memory-testing_b96dea39-b388-471e-a789-8c74b1670c74_0",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 782336
container_memory_usage_bytes{container="ubuntu",endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74/1b93889a3e7415ad3fa040daf89f3f6bc77e569d85069de518267666ede8e21c",image="ubuntu@sha256:55cd38b70425947db71112eb5dddfa3aa3e3ce307754a3df2269069d2278ce47",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_ubuntu_ubuntu-5b5d6c56f6-cfr9g_memory-testing_b96dea39-b388-471e-a789-8c74b1670c74_0",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 307777536
</code></pre>
<p>At this point you will need to choose which metric you will be using. In this example I used the first one. For more of a deep dive please take a look at these articles:</p>
<ul>
<li><em><a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66" rel="noreferrer">Blog.freshtracks.io: A deep dive into kubernetes metrics part 3 container resource metrics</a></em></li>
<li><em><a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">Ianlewis.org: Almighty pause container</a></em></li>
</ul>
<p>If we were to aggregate these metrics with <code>sum (QUERY) by (pod)</code> we would in fact double the reported resource usage.</p>
<p><strong>Dissecting the main query:</strong></p>
<pre><code>container_memory_usage_bytes{namespace="memory-testing", container=""}
</code></pre>
<p>The above query outputs the used-memory metric for each pod. The <code>container=""</code> parameter is used to get only one record per pod (mentioned before), the one which does not have the <code>container</code> parameter set.</p>
<pre><code>kube_pod_labels{label_app=~"ubuntu.*"}
</code></pre>
<p>The above query outputs records with pods and their labels matching the regexp <code>ubuntu.*</code></p>
<pre><code>kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}
</code></pre>
<p>The above query matches the <code>pod</code> from <code>kube_pod_labels</code> with the <code>pod</code> of <code>container_memory_usage_bytes</code> and adds the <code>label_app</code> to each of the records.</p>
<pre><code>sum (kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}) by (label_app)
</code></pre>
<p>The above query sums the records by <code>label_app</code>.</p>
<p>With that you have a query that sums the used memory by a label (and in effect by a Deployment).</p>
<p><a href="https://i.stack.imgur.com/cvW4W.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cvW4W.png" alt="Grafana Prometheus query" /></a></p>
<hr />
<p>As for:</p>
<blockquote>
<p>The same I'd like to achieve with the metric
<code>kube_pod_container_resource_limits_memory_bytes</code>.</p>
</blockquote>
<p>You can use the query below to get the memory limit for a deployment tagged with labels as in the previous example, assuming that each pod in the deployment has the same limits:</p>
<pre><code>kube_pod_labels{label_app="ubuntu-with-limits"} * on (pod) group_right(label_app) kube_pod_container_resource_limits_memory_bytes{namespace="memory-testing", pod=~".*"}
</code></pre>
<p>You can apply functions like <code>avg()</code>, <code>mean()</code> or <code>max()</code> to this query to get a single number that will be your memory limit:</p>
<pre><code>avg(kube_pod_labels{label_app="ubuntu-with-limits"} * on (pod) group_right(label_app) kube_pod_container_resource_limits_memory_bytes{namespace="memory-testing", pod=~".*"}) by (label_app)
</code></pre>
<p>Your memory limits can vary if you use <code>VPA</code>. In that situation you could show all of them simultaneously or use <code>avg()</code> to get the average across the whole "deployment".</p>
<p><a href="https://i.stack.imgur.com/RR9Xy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RR9Xy.png" alt="Combined resources and limits" /></a></p>
<hr />
<p>As a <strong>workaround</strong> to the above solutions you could try to work with a regexp like the one below:</p>
<pre><code>container_memory_usage_bytes{pod=~"^ubuntu-.{6,10}-.{5}"}
</code></pre>
|
<p>Im currently trying to run an autoscaling demo using prometheus and the prometheus adapter, and i was wondering if there is a way to autoscale one of my deployments based on metrics that prometheus scrapes from another deployment.</p>
<p>What i have right now are 2 different deployments, kafka-consumer-application (which i want to scale) and kafka-exporter (which exposes the kafka metrics that I'll be using for scaling). I know that if I have both of them as containers in the same deployment the autoscaling works, but the issue is that the kafka-exporter also gets autoscaled and its not ideal, so i want to separate them. I tried with the following HPA but i could not get it to work:</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
name: consumer-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: kafka-consumer-application
minReplicas: 1
maxReplicas: 10
metrics:
- type: object
object:
target: kafka-exporter
metricName: "kafka_consumergroup_lag"
targetValue: 5
</code></pre>
<p>Im not sure if im doing something wrong or if this is just not something i can do, so any advice is appreciated.</p>
<p>Thanks!</p>
<p>Note: im running the adapter with this config:</p>
<pre><code>rules:
default: false
resource: {}
custom:
- seriesQuery: 'kafka_consumergroup_lag'
resources:
overrides:
kubernetes_namespace: {resource: "namespace"}
kubernetes_pod_name: {resource: "pod"}
name:
matches: "kafka_consumergroup_lag"
as: "kafka_consumergroup_lag"
metricsQuery: 'avg_over_time(kafka_consumergroup_lag{topic="my-topic",consumergroup="we-consume"}[1m])'
</code></pre>
|
<p>In <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">kubernetes documentation</a> you can read:</p>
<blockquote>
<p>Autoscaling on metrics not related to Kubernetes objects
Applications running on Kubernetes may need to autoscale based on metrics that don’t have an obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with <strong>external metrics</strong></p>
</blockquote>
<p>So using external metrics, your HPA yaml could look like following:</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
name: consumer-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: kafka-consumer-application
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metric:
name: kafka_consumergroup_lag
#selector:
# matchLabels:
# topic: "my-topic"
target:
type: AverageValue
averageValue: 5
</code></pre>
<p>If you have more than one kafka-exporter you can use <code>selector</code> to filter it (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#metricidentifier-v2beta2-autoscaling" rel="nofollow noreferrer">source</a>):</p>
<blockquote>
<p>selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics</p>
</blockquote>
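<p>For the <code>External</code> metric type to work, the Prometheus adapter also has to expose the metric through the external metrics API, not only the custom one. Below is a hedged sketch of how the rule from your adapter config could be moved under the external rules section (the key layout assumes the Helm chart values format used in your question):</p>
<pre><code>rules:
  default: false
  external:
  - seriesQuery: 'kafka_consumergroup_lag'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
    name:
      matches: "kafka_consumergroup_lag"
      as: "kafka_consumergroup_lag"
    metricsQuery: 'avg_over_time(kafka_consumergroup_lag{topic="my-topic",consumergroup="we-consume"}[1m])'
</code></pre>
<p>You can then check whether the metric is visible to the external metrics API with:</p>
<pre><code>kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/kafka_consumergroup_lag"
</code></pre>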
<p>Also have a look at <a href="https://stackoverflow.com/questions/60990200/horizontal-pod-autoscaling-without-custom-metrics">this Stack question</a>.</p>
|
<p>I have a Hyperledger Fabric 1.4 blockchain running under Kubernetes</p>
<pre><code>➜ ~ kubectl get svc -n blockchain
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blockchain-orderer NodePort 100.68.98.142 <none> 31010:31010/TCP 7d4h
blockchain-orderer2 NodePort 100.64.169.137 <none> 32010:32010/TCP 7d4h
blockchain-orderer3 NodePort 100.68.185.88 <none> 31012:31012/TCP 7d4h
blockchain-org1-ca NodePort 100.66.249.91 <none> 30054:30054/TCP 7d4h
blockchain-org1peer1 NodePort 100.70.28.4 <none> 30110:30110/TCP,30111:30111/TCP 7d4h
blockchain-org2-ca NodePort 100.68.243.9 <none> 30055:30055/TCP 7d4h
blockchain-org2peer1 NodePort 100.71.114.216 <none> 30210:30210/TCP,30211:30211/TCP 7d4h
blockchain-org3-ca NodePort 100.64.199.106 <none> 30056:30056/TCP 7d4h
blockchain-org3peer1 NodePort 100.66.55.254 <none> 30310:30310/TCP,30311:30311/TCP 7d4h
blockchain-org4-ca NodePort 100.70.219.197 <none> 30057:30057/TCP 7d4h
blockchain-org4peer1 NodePort 100.69.141.45 <none> 30410:30410/TCP,30411:30411/TCP 7d4h
docker ClusterIP 100.67.69.23 <none> 2375/TCP 7d4h
</code></pre>
<p>What I want is to connect Blockchain remotly from a Go app, and send data to write.</p>
<p>Nevertheless, I don't know how to test connectivity, from what I understand, with a NodePort service, I would be able to connect blockchain statically, via ip:port, as states the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">docs</a></p>
<blockquote>
<p>Exposes the Service on each Node’s IP at a static port (the NodePort). A <code>ClusterIP</code> Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code><NodeIP>:<NodePort></code></p>
</blockquote>
<p>My nodes are hosted on AWS. </p>
<p>Pods I must connect are: </p>
<pre><code>blockchain-org1peer1
blockchain-org1peer2
blockchain-org1peer3
</code></pre>
<p>How should I do to be able to connect</p>
|
<p>There are multiple solutions I can think of, and others probably have more ideas.</p>
<p>1) Use a load balancer. You can set up a load balancer, or configure an existing one, to forward a port to the NodePort on all the worker nodes in your k8s cluster. Then you can call the backend through that load balancer.</p>
<p>2) Use an ingress. The ingress probably already has a load balancer in front of it, and this might be easier to implement. With an ingress, you don't need a NodePort. You can connect directly to the pods from the ingress, or add a service in front of each and connect to that service.</p>
<p>You have multiple pods in the backend, and they seem to be peer nodes in the blockchain, not replicas of the same pod. If you need to create three separate endpoints so you can control which peer you connect to, an ingress with three proxies connected to these pods would work.</p>
<p>If you do not care which peer you connect to, then you can add a service in front of the three pods as a load balancer. You can add a common label to all three peers and have the service select its destination using that label, so when you hit the service, it acts as a load balancer between the peers. Then you can expose that service via the ingress, as sketched below.</p>
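<p>A hedged sketch of that common service (the label <code>app: org-peer</code> and the port are assumptions, adjust them to your peer deployments):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: org1-peers
  namespace: blockchain
spec:
  selector:
    app: org-peer    # common label added to all three peer pods
  ports:
  - name: grpc
    port: 7051       # assumed peer gRPC port, use the one your peers listen on
    targetPort: 7051
</code></pre>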
|
<p>I have been trying to create a POD with HELM UPGRADE:</p>
<pre><code>helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
</code></pre>
<p>but running into below error:</p>
<pre><code>2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
</code></pre>
<p>YML part</p>
<pre><code> volumeMounts:
- name: secretvol
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: jks
secret:
secretName: {{ .Values.secret.jks }}
- name: secretvol
secret:
secretName: {{ .Values.secret.secretvol }}
</code></pre>
<p>Maybe, the first deploy need another command the first time? how can I specify these value to test it?</p>
|
<p><strong>TL;DR</strong></p>
<p>The issue you've encountered:</p>
<pre class="lang-sh prettyprint-override"><code>2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
</code></pre>
<p>is connected with the fact that the variable: <code>{{ .Values.secret.secretvol }}</code> is <strong>missing</strong>.</p>
<p>To fix it you will need to set this value in one of the following places (an example follows the list):</p>
<ul>
<li>the Helm command that you are using, or</li>
<li>the file that stores your values in the Helm chart.</li>
</ul>
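<p>For example (the secret name <code>my-tls-secret</code> is only a placeholder for the Kubernetes secret you want to mount):</p>
<pre class="lang-sh prettyprint-override"><code># option 1: pass the value on the command line
$ helm upgrade --install NAME . --set secret.secretvol=my-tls-secret

# option 2: add it to the values file passed with --values
# secret:
#   jks: <existing-jks-secret>
#   secretvol: my-tls-secret
</code></pre>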
<blockquote>
<h2>A tip!</h2>
<p>You can run your Helm command with <code>--debug --dry-run</code> to output generated <code>YAML</code>'s. This should show you where the errors could be located.</p>
</blockquote>
<p>There is an official documentation about values in Helm. Please take a look here:</p>
<ul>
<li><em><a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">Helm.sh: Docs: Chart template guid: Values files</a></em></li>
</ul>
<hr />
<p>Basing on</p>
<blockquote>
<p>I have been trying to create a POD with HELM UPGRADE:</p>
</blockquote>
<p>I've made an example based on your issue and how you can fix it.</p>
<p>Steps:</p>
<ul>
<li>Create a helm chart with correct values</li>
<li>Edit the values to reproduce the error</li>
</ul>
<h2>Create a helm chart</h2>
<p>For simplicity I created a basic Helm chart.</p>
<p>Below is the structure of files and directories:</p>
<pre class="lang-sh prettyprint-override"><code>❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│ └── pod.yaml
└── values.yaml
1 directory, 3 files
</code></pre>
<h3>Create <code>Chart.yaml</code> file</h3>
<p>Below is the <code>Chart.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
</code></pre>
<h3>Create a <code>values.yaml</code> file</h3>
<p>Below is the simple <code>values.yaml</code> file which will be used by <strong>default</strong> in the <code>$ helm install</code> command</p>
<pre class="lang-yaml prettyprint-override"><code>usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
</code></pre>
<h3>Create a template for a <code>pod</code></h3>
<p>This template is stored in <code>templates</code> directory with a name <code>pod.yaml</code></p>
<p>Below <code>YAML</code> definition will be a template for spawned pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.usedImage }} # value from "values.yaml"
labels:
app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: {{ .Values.usedImage }} # value from "values.yaml"
image: {{ .Values.usedImage }} # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: {{ .Values.confidentialName }} # value from "values.yaml"
</code></pre>
<p>With above example you should be able to run <code>$ helm install --name test-pod .</code></p>
<p>You should get output similar to this:</p>
<pre class="lang-sh prettyprint-override"><code>NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 0s
</code></pre>
<blockquote>
<p>Disclaimer!
The ubuntu pod is in the <code>ContainerCreating</code> state as there is no <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> named <code>secret-password</code> in the cluster.</p>
</blockquote>
<p>You can get more information about your pods by running:</p>
<ul>
<li><code>$ kubectl describe pod POD_NAME</code></li>
</ul>
<h2>Edit the values to reproduce the error</h2>
<p>The error you got as described earlier is most probably connected with the fact that the value: <code>{{ .Values.secret.secretvol }}</code> was <strong>missing</strong>.</p>
<p>If you were to edit the <code>values.yaml</code> file to:</p>
<pre class="lang-yaml prettyprint-override"><code>usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
</code></pre>
<p><strong>Notice the added <code>#</code></strong>.</p>
<p>You should get below error when trying to deploy this chart:</p>
<pre class="lang-sh prettyprint-override"><code>Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
</code></pre>
<p>I previously mentioned the <code>--debug --dry-run</code> parameters for Helm.</p>
<p>If you run:</p>
<ul>
<li><code>$ helm install --name test-pod --debug --dry-run .</code></li>
</ul>
<p>You should get the output similar to this (this is only the part):</p>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ubuntu # value from "values.yaml"
labels:
app: ubuntu # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: ubuntu # value from "values.yaml"
image: ubuntu # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: # value from "values.yaml"
</code></pre>
<p>As you can see, the value of <code>secretName</code> was missing. That's the reason the above error was showing up.</p>
<pre class="lang-yaml prettyprint-override"><code> secretName: # value from "values.yaml"
</code></pre>
|
<p>I'm trying to deploy a Prometheus nodeexporter Daemonset in my AWS EKS K8s cluster.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
app: prometheus
chart: prometheus-11.12.1
component: node-exporter
heritage: Helm
release: prometheus
name: prometheus-node-exporter
namespace: operations-tools-test
spec:
selector:
matchLabels:
app: prometheus
component: node-exporter
release: prometheus
template:
metadata:
labels:
app: prometheus
chart: prometheus-11.12.1
component: node-exporter
heritage: Helm
release: prometheus
spec:
containers:
- args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --web.listen-address=:9100
image: prom/node-exporter:v1.0.1
imagePullPolicy: IfNotPresent
name: prometheus-node-exporter
ports:
- containerPort: 9100
hostPort: 9100
name: metrics
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /host/proc
name: proc
readOnly: true
- mountPath: /host/sys
name: sys
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
hostPID: true
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: prometheus-node-exporter
serviceAccountName: prometheus-node-exporter
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /proc
type: ""
name: proc
- hostPath:
path: /sys
type: ""
name: sys
</code></pre>
<p>After deploying however, its not getting deployed on one node.</p>
<p>pod.yml file for that file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
generateName: prometheus-node-exporter-
labels:
app: prometheus
chart: prometheus-11.12.1
component: node-exporter
heritage: Helm
pod-template-generation: "1"
release: prometheus
name: prometheus-node-exporter-xxxxx
namespace: operations-tools-test
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: prometheus-node-exporter
resourceVersion: "51496903"
selfLink: /api/v1/namespaces/namespace-x/pods/prometheus-node-exporter-xxxxx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- ip-xxx-xx-xxx-xxx.ec2.internal
containers:
- args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --web.listen-address=:9100
image: prom/node-exporter:v1.0.1
imagePullPolicy: IfNotPresent
name: prometheus-node-exporter
ports:
- containerPort: 9100
hostPort: 9100
name: metrics
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /host/proc
name: proc
readOnly: true
- mountPath: /host/sys
name: sys
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-node-exporter-token-xxxx
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostNetwork: true
hostPID: true
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: prometheus-node-exporter
serviceAccountName: prometheus-node-exporter
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/network-unavailable
operator: Exists
volumes:
- hostPath:
path: /proc
type: ""
name: proc
- hostPath:
path: /sys
type: ""
name: sys
- name: prometheus-node-exporter-token-xxxxx
secret:
defaultMode: 420
secretName: prometheus-node-exporter-token-xxxxx
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-11-06T23:56:47Z"
message: '0/4 nodes are available: 2 node(s) didn''t have free ports for the requested
pod ports, 3 Insufficient pods, 3 node(s) didn''t match node selector.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: BestEffort
</code></pre>
<p>As seen above POD nodeAffinity looks up metadata.name which matches exactly what I have as a label in my node.</p>
<p>But when I run the below command,</p>
<pre class="lang-sh prettyprint-override"><code> kubectl describe po prometheus-node-exporter-xxxxx
</code></pre>
<p>I get in the events:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 60m default-scheduler 0/4 nodes are available: 1 Insufficient pods, 3 node(s) didn't match node selector.
Warning FailedScheduling 4m46s (x37 over 58m) default-scheduler 0/4 nodes are available: 2 node(s) didn't have free ports for the requested pod ports, 3 Insufficient pods, 3 node(s) didn't match node selector.
</code></pre>
<p>I have also checked Cloud-watch logs for Scheduler and I don't see any logs for my failed pod.</p>
<p>The Node has ample resources left</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 520m (26%) 210m (10%)
memory 386Mi (4%) 486Mi (6%)
</code></pre>
<p>I don't see a reason why it should not schedule a pod.
Can anyone help me with this?</p>
<p>TIA</p>
|
<p>As posted in the comments:</p>
<blockquote>
<p>Please add to the question the steps that you followed (editing any values in the Helm chart etc). Also please <strong>check if the nodes are not over the limit of pods that can be scheduled on it</strong>. Here you can find the link for more reference: <a href="https://stackoverflow.com/questions/52898067/aks-reporting-insufficient-pods">LINK</a>.</p>
</blockquote>
<blockquote>
<p>no processes occupying 9100 on the given node. <strong>@DawidKruk The POD limit was reached. Thanks!</strong> I expected them to give me some error regarding that rather than vague node selector property not matching</p>
</blockquote>
<hr />
<p>Not really sure why the following messages were displayed:</p>
<ul>
<li>node(s) didn't have free ports for the requested pod ports</li>
<li>node(s) didn't match node selector</li>
</ul>
<p><strong>The issue that <code>Pods</code> couldn't be scheduled on the nodes (<code>Pending</code> state) was connected with the <code>Insufficient pods</code> message in the <code>$ kubectl get events</code> command.</strong></p>
<p>The above message is displayed when the nodes have reached their maximum capacity of pods (for example: <code>node1</code> can schedule a maximum of <code>30</code> pods).</p>
<hr />
<p>More on the <code>Insufficient Pods</code> can be found in this github issue comment:</p>
<blockquote>
<p>That's true. That's because the CNI implementation on EKS. Max pods number is limited by the network interfaces attached to instance multiplied by the number of ips per ENI - which varies depending on the size of instance. It's apparent for small instances, this number can be quite a low number.</p>
<p><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="nofollow noreferrer">Docs.aws.amazon.com: AWSEC2: User Guide: Using ENI: Available IP per ENI</a></p>
<p>-- <em><a href="https://github.com/kubernetes/autoscaler/issues/1576#issuecomment-454100551" rel="nofollow noreferrer">Github.com: Kubernetes: Autoscaler: Issue 1576: Comment 454100551</a></em></p>
</blockquote>
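<p>You can compare the pod capacity of each node with the number of pods already running on it, for example:</p>
<pre class="lang-sh prettyprint-override"><code># maximum number of pods each node can run
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods

# pods currently scheduled on a given node (all namespaces)
$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name> | wc -l
</code></pre>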
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/57970896/pod-limit-on-node-aws-eks">Stackoverflow.com: Questions: Pod limit on node AWS EKS</a></em></li>
</ul>
|
<p>Trying to figure out , how to apply new label "app=green" to the pods that currently marked with "color=green" ?</p>
<p>Looks like I could not use "--field-selector" and I don not want to specify name of each pod in "kubectl label" command. </p>
<p>Thank you !</p>
|
<p>This should work:</p>
<pre><code>kubectl label pods --selector=color=green app=green
</code></pre>
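<p>You can verify the result afterwards with:</p>
<pre><code>kubectl get pods --selector=color=green --show-labels
</code></pre>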
|
<p>Let's say I have the below yaml:</p>
<pre><code>spec:
securityContext:
fsGroup: 5678
serviceAccountName: some-account
volumes:
- name: secrets
secret:
secretName: data-secrets
- name: secrets-sftp-passwd-key
secret:
secretName: sftp-passwd-key
containers:
- name: sftp
securityContext:
runAsUser: 1234
runAsGroup: 5678
image: "some/some"
imagePullPolicy: always
ports:
- name: tcp-sftp
containerPort: 22
volumeMounts:
- name: secrets-sftp-passwd-key
mountPath: /etc/sftp/secret/
- name: data
mountPath: "/var/data"
env:
- name: "SFTP_USER"
value: "some_user"
- name: "SFTP_PASSWD"
value: "password-1"
</code></pre>
<p>I'm trying to run the container <code>sftp</code> with <code>runAsUser</code> and <code>runAsGroup</code>. The image's entrypoint script is supposed to take the SFTP_USER, SFTP_PASSWD and create certain password and group files using this input. These files are supposed to be created in /var/data folder. After this, a proftpd process is supposed to be started using these password and group files as the 1234 user. The container starts as 1234 user. But the files have a permission of root:5678. And I get the below error:</p>
<pre><code>unable to set UID to 0, current UID: 1234
</code></pre>
<p>The entry point script of the image is as below:</p>
<pre><code>echo "Starting to create password file"
PASSWORD=123456
echo $PASSWORD | /usr/bin/ftpasswd --passwd --file=/var/data/ftpd.passwd --name=virtual --uid=1234 --gid=5678 --home=/var/data/ --shell=/bin/bash --stdin
echo "Password file created"
/usr/bin/ftpasswd --group --name=--group --name=virtual --file=/var/data/ftpd.group --gid=5678 --member=1234
proftpd -n -4 -c /var/data/proftpd.conf --> This line is throwing the above error.
</code></pre>
<p>What is going wrong here? Why is it trying to set UID to 0?? I was under the impression that giving <code>runAsUser</code>, <code>runAsGroup</code> and <code>fsGroup</code> will make sure that the /var/data folder has the correct ownership of 1234. </p>
|
<p>The error you encountered is printed by the following <a href="https://github.com/proftpd/proftpd/blob/1bbb7b3e667c9f03050c692a9cecbd214bffff85/src/main.c#L2621-L2626" rel="nofollow noreferrer">code</a>:</p>
<pre><code>if (geteuid() != daemon_uid) {
pr_log_pri(PR_LOG_ERR, "unable to set UID to %s, current UID: %s",
pr_uid2str(permanent_pool, daemon_uid),
pr_uid2str(permanent_pool, geteuid()));
exit(1);
}
</code></pre>
<p>You're seeing this because your container userID is not equal to the <code>daemon_uid</code>.
If you keep looking for <code>daemon_uid</code> you will find the following line:</p>
<pre><code>uid_t *uid = (uid_t *) get_param_ptr(main_server->conf, "UserID", FALSE);
daemon_uid = (uid != NULL ? *uid : PR_ROOT_UID);
</code></pre>
<p>This means that if the <code>UserID</code> is not provided in the config, the code assigns the <code>PR_ROOT_UID</code> value to <code>daemon_uid</code>, which is <code>0</code> (the root id).
This is why the previously mentioned if statement generates this error message.</p>
<p>Take a look at the <a href="https://github.com/proftpd/proftpd/blob/1bbb7b3e667c9f03050c692a9cecbd214bffff85/sample-configurations/basic.conf#L28-L30" rel="nofollow noreferrer">example config</a> to see how the <code>UserID</code> is provided.</p>
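<p>As a hedged sketch only: the relevant fragment of the configuration sets the user and group the daemon should run as. The directive names come from proftpd, while the values below are placeholders that must resolve to the UID/GID your container already runs as (1234/5678 in your pod spec):</p>
<pre><code># fragment of proftpd.conf
User  somename    # must resolve to UID 1234
Group somegroup   # must resolve to GID 5678
</code></pre>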
|
<p>For CI/CD purposes, the project is maintaining 2 kustomization.yaml files</p>
<ol>
<li>Regular deployments - kustomization_deploy.yaml</li>
<li>Rollback deployment - kustomization_rollback.yaml</li>
</ol>
<p>To run kustomize build, a file with the name "kustomization.yaml" is required in the current directory.
If the project wants to use kustomization_rollback.yaml and NOT kustomization.yaml, how is this possible? Does kustomize accept file name as an argument? Docs do not specify anything on this.</p>
|
<p>Currently there is no way to make <code>kustomize</code> (using the precompiled binaries) accept file names other than:</p>
<ul>
<li><code>kustomization.yaml</code></li>
<li><code>kustomization.yml</code></li>
<li><code>Kustomization</code></li>
</ul>
<p>All of the below cases will produce the same error output:</p>
<ul>
<li><code>kubectl kustomize dir/</code></li>
<li><code>kubectl apply -k dir/</code></li>
<li><code>kustomize build dir/</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory 'FULL_PATH/dir'
</code></pre>
<p>Depending on the CI/CD platform/solution/tool you will have to work around this, for example:</p>
<ul>
<li>split the configuration into 2 directories, <code>kustomization_deploy</code> and <code>kustomization_rollback</code>, each with its own <code>kustomization.yaml</code> (see the sketch after this list)</li>
</ul>
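<p>A hedged sketch of such a layout (the directory names are assumptions):</p>
<pre class="lang-sh prettyprint-override"><code>.
├── kustomization_deploy
│   └── kustomization.yaml      # content of the former kustomization_deploy.yaml
└── kustomization_rollback
    └── kustomization.yaml      # content of the former kustomization_rollback.yaml

# the pipeline then builds whichever variant it needs:
$ kustomize build kustomization_deploy/
$ kustomize build kustomization_rollback/
</code></pre>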
<hr />
<blockquote>
<p>As a side note!</p>
<p>File names that kustomize uses are placed in the:</p>
<ul>
<li><code>/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/constants/constants.go</code></li>
</ul>
<pre class="lang-golang prettyprint-override"><code>// Package constants holds global constants for the kustomize tool.
package constants
// KustomizationFileNames is a list of filenames that can be recognized and consumbed
// by Kustomize.
// In each directory, Kustomize searches for file with the name in this list.
// Only one match is allowed.
var KustomizationFileNames = []string{
"kustomization.yaml",
"kustomization.yml",
"Kustomization",
}
</code></pre>
<p>The logic behind choosing the <code>Kustomization</code> file is placed in:</p>
<ul>
<li><code>/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/target/kusttarget.go</code></li>
</ul>
</blockquote>
<hr />
<p>Additional reference:</p>
<ul>
<li><em><a href="https://github.com/kubernetes-sigs/kustomize" rel="noreferrer">Github.com: Kubernetes-sigs: Kustomize</a></em></li>
<li><em><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="noreferrer">Kubernetes.io: Docs: Tasks: Manage kubernetes objects: Kustomization</a></em></li>
</ul>
|
<p>Let's say I have a <code>StatefulSet</code> definition</p>
<pre><code>apiVersion: v1
kind: StatefulSet
metadata:
name: web
spec:
...
volumeClaimTemplates:
— metadata:
name: www
spec:
resources:
requests:
storage: 1Gi
</code></pre>
<p>This will create me a <code>PersistentVolumeClaim</code> (PVC) with a <code>PersistentVolume</code> (PV) of 1 GiB for each pod.</p>
<p>How can I write something like this</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: www
spec:
...
resources:
requests:
storage: 1Gi
...
</code></pre>
<p>and connect it with the <code>StatefulSet</code> in a way that it still creates a PVC and PV for each pod?</p>
|
<p>I am guessing that in your question you are using the statefulset example from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="nofollow noreferrer">this website</a>, so I will follow its naming convention.</p>
<p>The solution I am about to present you was tested by myself and it seems to work.</p>
<p>In <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#statefulsetspec-v1-apps" rel="nofollow noreferrer">k8s api reference</a> you can find the folllowing definition:</p>
<blockquote>
<p>volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name.</p>
</blockquote>
<p>So this means that as long as you have a volume claim with a specific name, the statefulset will use it without creating a new one. In other words, you can create some PVs/PVCs manually and the statefulset will use them.</p>
<p>All you need to do is to name your PVCs correctly. What is this name supposed to look like?
Here is the first part:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: www <-here is the first part
</code></pre>
<p>and the second part is a pod name.</p>
<p>(Have a look at this Stack question on <a href="https://stackoverflow.com/questions/46442238/can-i-rely-on-volumeclaimtemplates-naming-convention">can-i-rely-on-volumeclaimtemplates-naming-convention</a>.)</p>
<p>These two parts, combined with a dash, create the name of the PVC, e.g.</p>
<pre><code>www-web-0 <- this is how you are supposed to name one of your pvcs
│ └ second part (pod name)
└ first part
</code></pre>
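<p>A minimal sketch of a manually created <code>PVC</code> that the StatefulSet from the example would adopt for its first pod (the storage class and size are assumptions and have to match your <code>volumeClaimTemplates</code>):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0   # <volumeClaimTemplate name>-<pod name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>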
<p>If you already have (automatically provisioned) PVCs, use</p>
<pre><code>kubectl get pvc <pvcname> -oyaml > pvcname.yaml
kubectl get pv <pvname> -oyaml > pvname.yaml
</code></pre>
<p>to save its specification to disk. Then you can run:</p>
<pre><code>kubectl apply -f pvcname.yaml
kubectl apply -f pvname.yaml
</code></pre>
<p>to apply pvc/pv configuration. Remember that some yaml files may require slight modifications before running <code>kubectl apply</code>.</p>
|
<p>I would like to make the result of a text classification model (finBERT pytorch model) available through an endpoint that is deployed on Kubernetes.</p>
<p>The whole pipeline is working but it's super slow to process (30 seconds for one sentence) when deployed. If I time the same endpoint in local, I'm getting results in 1 or 2 seconds. Running the docker image in local, the endpoint also takes 2 seconds to return a result.</p>
<p>When I'm checking the CPU usage of my kubernetes instance while the request is running, it doesn't go above 35% so I'm not sure it's related to a lack of computation power?</p>
<p>Did anyone witness such performances issues when making a forward pass to a pytorch model? Any clues on what I should investigate?</p>
<p>Any help is greatly appreciated, thank you!</p>
<p>I am currently using</p>
<pre><code>limits:
  cpu: "2"
requests:
  cpu: "1"
</code></pre>
<p>Python : 3.7
Pytorch : 1.8.1</p>
|
<p>I had the same issue. Locally my pytorch model would return a prediction in 25 ms and then on Kubernetes it would take 5 seconds. The problem had to do with how many threads torch had available to use. I'm not 100% sure why this works, but reducing the number of threads sped up performance significantly.</p>
<p>Set the following environment variable on your kubernetes pod.
<code>OMP_NUM_THREADS=1</code></p>
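<p>A hedged sketch of how this can be set in the deployment manifest (container name and image are placeholders):</p>
<pre><code>spec:
  containers:
  - name: model-api
    image: your-registry/your-model-image:latest
    env:
    - name: OMP_NUM_THREADS
      value: "1"
</code></pre>
<p>Calling <code>torch.set_num_threads(1)</code> at startup should have a similar effect if you prefer to control it from the application code.</p>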
<p>After doing that it performed on kubernetes like it did running it locally ~30ms per call.</p>
<p>These are my pod limits:</p>
<ul>
<li>cpu limits <code>1</code></li>
<li>mem limits: <code>1500m</code></li>
</ul>
<p>I was led to discover this from this blog post: <a href="https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html" rel="nofollow noreferrer">https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html</a></p>
|
<p>I am attempting to upload a file to a Google cloud storage bucket from a service running on a Kubernetes pod. This requires authentication for the Google storage and so I have created a json authentication file from the console. </p>
<p>This json file is saved as a secret on my kubernetes environment and is referenced in the <code>deploy.yaml</code> through the environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code>.</p>
<p>The format appears like this:</p>
<pre><code>{
"type": "service_account",
"project_id": "xxx",
"private_key_id": "xxx",
"private_key": "-----BEGIN PRIVATE KEY-----\nxxx\n-----END PRIVATE KEY-----\n",
"client_email": "xx",
"client_id": "",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "xxx"
}
</code></pre>
<p>And the following code of how I am handling this authentication:</p>
<pre><code>import google.auth
credentials, project_id = google.auth.default()
bucket_name = Config.STORAGE_BUCKET_NAME
client = storage.Client(project=project_id)
bucket = client.get_bucket(bucket_name)
</code></pre>
<p>Given that locally I have logged into gcloud, I am able to upload files when testing locally. However, when deployed to Kubernetes I get the following error:</p>
<pre><code> File "/storage.py", line 8, in <module>
credentials, project_id = google.auth.default()
File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 308, in default
credentials, project_id = checker()
File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 166, in _get_explicit_environ_credentials
os.environ[environment_vars.CREDENTIALS]
File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 92, in _load_credentials_from_file
"File {} was not found.".format(filename)
google.auth.exceptions.DefaultCredentialsError: File {
"type": "service_account",
"project_id": "xxx",
"private_key_id": "xxx",
"private_key": "-----BEGIN PRIVATE KEY-----\nxxx\n-----END PRIVATE KEY-----\n",
"client_email": "xx",
"client_id": "",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "xxx"
}
was not found.
</code></pre>
<p>I'm confused as it's referencing the file itself in the error so I am assuming it located it, but it does not recognize it as a valid service authentication file. </p>
<p>All help and pointers appreciated. :)</p>
|
<p>I think you should try this code locally and then on kubernetes, to check if there is a problem with your service_account.json file:</p>
<pre><code> client = storage.Client.from_service_account_json(
'service_account.json')
</code></pre>
<p>or <a href="https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-python" rel="nofollow noreferrer">link</a>:</p>
<blockquote>
<p>If your application runs on Compute Engine, Kubernetes Engine, the App
Engine flexible environment, or Cloud Functions, you don't need to
create your own service account. Compute Engine includes a default
service account that is automatically created for you, and you can
assign a different service account, per-instance, if needed.</p>
</blockquote>
<pre><code>credentials = compute_engine.Credentials()
# Create the client using the credentials and specifying a project ID.
storage_client = storage.Client(credentials=credentials, project=project)
</code></pre>
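<p>If you want to keep using <code>GOOGLE_APPLICATION_CREDENTIALS</code> instead, note that it has to point to a file path, not contain the JSON content itself. A hedged sketch of mounting the secret as a file (secret name, key and paths are assumptions):</p>
<pre><code>spec:
  containers:
  - name: app
    image: your-image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: gcp-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: gcp-key
    secret:
      secretName: gcp-service-account   # secret whose data key is "key.json"
</code></pre>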
|
<p>I need to get the list of pods running in a worker node by executing a command from master node. I can achieve if i moved into the worker node and execute <code>kubectl get pods -n ns</code>. But i need to execute this from the master node and get pods in worker.</p>
|
<p>You can get pods running on a specific node by using this command:</p>
<pre><code>kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
</code></pre>
<p>This will list all pods from all namespaces, but you can narrow it down to a specific namespace, as shown below.</p>
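<p>For a single namespace it would look like this:</p>
<pre><code>kubectl get pods -n <namespace> -o wide --field-selector spec.nodeName=<node>
</code></pre>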
|
<p>I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?</p>
|
<blockquote>
<p>Is there a different process because the cluster is through Docker Desktop?</p>
</blockquote>
<p>You can get access to the <code>kube-apiserver.yaml</code> file with Kubernetes running on Docker Desktop, but in a "hacky" way. I've included the explanation below.</p>
<p>For setups that require such reconfiguration, I encourage you to use a different solution, for example <code>minikube</code>.</p>
<p><code>Minikube</code> has a feature that allows you to pass additional options to the Kubernetes components. You can read more about <code>--extra-config ExtraOption</code> in the following documentation (an example follows the link):</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/commands/start/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Commands: Start</a></em></li>
</ul>
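<p>A hedged example of the flag syntax (the apiserver option used below is only illustrative):</p>
<pre class="lang-bash prettyprint-override"><code>minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
</code></pre>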
<hr />
<p>As for the reconfiguration of <code>kube-apiserver.yaml</code> with Docker Desktop</p>
<p>You need to run the following command:</p>
<pre class="lang-bash prettyprint-override"><code>docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
</code></pre>
<p>Above command will allow you to run:</p>
<pre class="lang-bash prettyprint-override"><code>vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
<p>This lets you edit the API server configuration. The <code>Pod</code> running <code>kube-apiserver</code> will be restarted with the new parameters.</p>
<p>You can check below StackOverflow answers for more reference:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/63720511/12257134">Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?</a></em></li>
<li><em><a href="https://stackoverflow.com/a/57588995/12257134">Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?</a></em>
<blockquote>
<p>I've used this answer without the <code>$ screen</code> command and I was able to reconfigure <code>kube-apiserver</code> on Docker Desktop on Windows</p>
</blockquote>
</li>
</ul>
|
<p>In Kubernetes you can use the <code>auth can-i</code> command to check if you have permissions to some resource.<br>
For example, I can use this command like that on the worker: </p>
<pre><code>kubectl --kubeconfig /etc/kubernetes/kubelet.conf auth can-i get pods -v 9
</code></pre>
<p>It will check if you have permissions to view pods, and when you add the <code>-v</code> flag it show the verbose output: </p>
<pre><code>...
curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubectl/v1.18.0 (linux/amd64) kubernetes/9e99141" 'https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews'
</code></pre>
<p>I wanted to use this REST API with <code>curl</code> and it doesn't work: </p>
<pre><code>curl --cacert /etc/kubernetes/pki/ca.crt \
--cert /var/lib/kubelet/pki/kubelet-client-current.pem \
--key /var/lib/kubelet/pki/kubelet-client-current.pem \
-d @- \
-H "Content-Type: application/json" \
-H "Accept: application/json, */*" \
-XPOST https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectrulesreviews <<'EOF'
{
"kind":"SelfSubjectAccessReview",
"apiVersion":"authorization.k8s.io/v1",
"metadata":{
"creationTimestamp":null
},
"spec":{
"namespace":"default"
},
"status":{
"allowed":true
}
}
EOF
</code></pre>
<p>If failed with the error: </p>
<pre><code> "status": "Failure",
"message": "SelfSubjectAccessReview in version \"v1\" cannot be handled as a SelfSubjectRulesReview: converting (v1.SelfSubjectAccessReview).v1.SelfSubjectAccessReviewSpec to (authorization.SelfSubjectRulesReview).authorization.SelfSubjectRulesReviewSpec: Namespace not present in src",
"reason": "BadRequest",
"code": 400
</code></pre>
<p>How can I use SelfSubjectRulesReview API with curl to view resource permissions?</p>
<hr>
<p>Thanks to @HelloWorld I found the problem, the issue was with the different between selfsubjectaccessreviews vs selfsubjectrulesreviews. I will put 2 working <code>curl</code> examples. </p>
<p>1) <strong>selfsubjectaccessreviews</strong> example to see if the account has permissions for</p>
<pre><code>curl --cacert /etc/kubernetes/pki/ca.crt \
--cert /var/lib/kubelet/pki/kubelet-client-current.pem \
--key /var/lib/kubelet/pki/kubelet-client-current.pem \
-d @- \
-H "Content-Type: application/json" \
-H 'Accept: application/json, */*' \
-XPOST https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews <<'EOF'
{
"kind":"SelfSubjectAccessReview",
"apiVersion":"authorization.k8s.io/v1",
"metadata":{
"creationTimestamp":null
},
"spec":{
"resourceAttributes":{
"namespace":"default",
"verb":"get",
"resource":"pods"
}
},
"status":{
}
}
EOF
</code></pre>
<p>2) <strong>selfsubjectrulesreviews</strong> example to see all the permissions of the account on the default namespace: </p>
<pre><code>curl --cacert /etc/kubernetes/pki/ca.crt \
--cert /var/lib/kubelet/pki/kubelet-client-current.pem \
--key /var/lib/kubelet/pki/kubelet-client-current.pem \
-d @- \
-H "Content-Type: application/json" \
-H 'Accept: application/json, */*' \
-XPOST https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectrulesreviews <<'EOF'
{
"kind":"SelfSubjectRulesReview",
"apiVersion":"authorization.k8s.io/v1",
"metadata":{
"creationTimestamp":null
},
"spec":{
"namespace":"default"
},
"status":{
}
}
EOF
</code></pre>
|
<p>Notice that kubectl's verbose output shows this URL:</p>
<pre><code>https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews
</code></pre>
<p>and you are curling:</p>
<pre><code>https://<master_ip>:6443/apis/authorization.k8s.io/v1/selfsubjectrulesreviews
</code></pre>
<p>Can you notice the difference? selfsubject<strong>accessreviews</strong> vs selfsubject<strong>rulesreviews</strong>.</p>
<p>Change the URL to the correct one and it will work.</p>
|
<p>I have a typhoon kubernetes cluster on aws with an nginx ingress controller installed.</p>
<p>If I add an test ingress object it looks like this:</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default foo <none> * 10.0.8.180 80 143m
</code></pre>
<p><strong>Question: Where does my ingress controller get that address(10.0.8.180) from?</strong></p>
<p>There is no (loadbalancer) service on the system with that address.
(Because it is a private address external-dns does not work correctly.)</p>
|
<p>In order to answer your first question:</p>
<blockquote>
<p>Where does a kubernetes ingress gets its IP address from?</p>
</blockquote>
<p>we have to dig a bit into the code and its behavior.</p>
<p>It all starts <a href="https://github.com/kubernetes/ingress-nginx/blob/47b5e20a88e9b2f6c0356090ac15fa192b3825ba/cmd/nginx/flags.go#L68" rel="nofollow noreferrer">here</a> with the <code>publish-service</code> flag:</p>
<pre><code>publishSvc = flags.String("publish-service", "",
`Service fronting the Ingress controller
Takes the form "namespace/name". When used together with update-status, the
controller mirrors the address of this service's endpoints to the load-balancer
status of all Ingress objects it satisfies.`)
</code></pre>
<p>The flag variable (publishSvc) is later <a href="https://github.com/kubernetes/ingress-nginx/blob/47b5e20a88e9b2f6c0356090ac15fa192b3825ba/cmd/nginx/flags.go#L282" rel="nofollow noreferrer">assigned</a> to another variable (PublishService):</p>
<pre><code>PublishService: *publishSvc,
</code></pre>
<p>Later in the code you can find that if this flag is set, <a href="https://github.com/kubernetes/ingress-nginx/blob/3eafaa35a171e2d91d75690461ef45e721411aec/internal/ingress/status/status.go#L179-L181" rel="nofollow noreferrer">this code</a> is being run:</p>
<pre><code>if s.PublishService != "" {
return statusAddressFromService(s.PublishService, s.Client)
}
</code></pre>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/3eafaa35a171e2d91d75690461ef45e721411aec/internal/ingress/status/status.go#L329-L365" rel="nofollow noreferrer"><code>statusAddressFromService</code> function</a> as an argument takes value of <code>publish-service</code> flag and queries kubernetes for service with this name, and returns IP address of related service.</p>
<p>If this <a href="https://github.com/kubernetes/ingress-nginx/blob/3eafaa35a171e2d91d75690461ef45e721411aec/internal/ingress/status/status.go#L183-L204" rel="nofollow noreferrer">flag is not set</a>, it will query k8s for the IP address of the node where the nginx ingress pod is running. This is the behaviour you are experiencing, which makes me think that you didn't set this flag. This also answers your second question:</p>
<blockquote>
<p>Why does it take the address of the node instead of the NLB?</p>
</blockquote>
<p>Meanwhile you can find all yaml for all sort of platforms in <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">k8s nginx ingress documentation</a>.</p>
<p>Let's have a look at the <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml" rel="nofollow noreferrer">AWS ingress yaml</a>.
Notice here that <code>publish-service</code> is set to <code>ingress-nginx/ingress-nginx-controller</code> (<namespace>/<service>),
and this is what you want to do as well.</p>
<hr />
<p><strong>TLDR:</strong> All you have to do is to create a LoadBalancer Service and set the <code>publish-service</code> to <namespace_name>/<service_name></p>
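<p>For illustration, a hedged sketch of wiring that flag into an already-running controller with a JSON patch (the namespace <code>ingress-nginx</code> and the Deployment/Service name <code>ingress-nginx-controller</code> are the defaults from the AWS manifest above and may differ in your typhoon setup):</p>
<pre class="lang-sh prettyprint-override"><code># append --publish-service to the controller's args
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--publish-service=ingress-nginx/ingress-nginx-controller"}]'

# after the rollout, the ingress should report the LoadBalancer address instead of a node IP
kubectl get ingress foo
</code></pre>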
|
<p>I have a very simple grpc-server. When I expose it directly via NodePort everything works just fine. I do not have any issues with this.</p>
<pre><code>grpc-client --> NodePort --> grpc-server
</code></pre>
<p>I changed the <code>NodePort</code> service to a <code>ClusterIP</code> service and tried to use an <code>ingress controller</code> to route the traffic to the grpc-server. The setup is like this: the ingress and grpc-server are part of the k8s cluster, and the grpc-client is outside, on my local machine.</p>
<pre><code>grpc-client --> ingress --> clusterip --> grpc-server
</code></pre>
<p>My ingress is like this.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web-ingress
namespace: test
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
rules:
- http:
paths:
- backend:
serviceName: grpc-server
servicePort: 6565
</code></pre>
<p>Whenever I send a request, I get an exception like this on the client side.</p>
<pre><code>io.grpc.StatusRuntimeException: INTERNAL: HTTP/2 error code: FRAME_SIZE_ERROR
Received Rst Stream
</code></pre>
<p>Ingress log is like this</p>
<pre><code>127.0.0.1 - - [28/Jul/2020:03:52:38 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - c9d46af55aa8a2b450f9f72e9d550023
127.0.0.1 - - [28/Jul/2020:03:52:38 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.000 [] [] - - - - a9e0f02851473467fc553f2a4645377d
127.0.0.1 - - [28/Jul/2020:03:52:40 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - fcc0668f300bd5e7e12325ae7c26fa9d
127.0.0.1 - - [28/Jul/2020:03:52:41 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - dea2b8eb8f6884ac512654af3c2e6b47
127.0.0.1 - - [28/Jul/2020:03:52:43 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - 0357acafce928ee05aaef1334eed1b69
127.0.0.1 - - [28/Jul/2020:03:52:44 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - df9b75d79d9177e39a901dfa020912f2
127.0.0.1 - - [28/Jul/2020:03:52:44 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.000 [] [] - - - - bb27acd5b86fd057af1ca8fe93f7d857
127.0.0.1 - - [28/Jul/2020:03:52:44 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - 2ced9207652fc564bb94e0d4a14ae6fc
127.0.0.1 - - [28/Jul/2020:03:52:45 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - b99fab13df2f64a6a0a901237dadfa27
127.0.0.1 - - [28/Jul/2020:03:52:45 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.002 [] [] - - - - 40cb75f4e4b3df28be66044a0825f1c6
127.0.0.1 - - [28/Jul/2020:03:52:46 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - 89cac6709278b4e54b0349075500bf74
127.0.0.1 - - [28/Jul/2020:03:52:46 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.000 [] [] - - - - 0b032199a053069df185c620de497c90
127.0.0.1 - - [28/Jul/2020:03:52:46 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.001 [] [] - - - - 5a07b1e24ea3e8e2066a7ed8950f7040
127.0.0.1 - - [28/Jul/2020:03:53:46 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.000 [] [] - - - - 0f63eec0d0ccb9cc87c2fbb1fdfae48f
</code></pre>
<p>Looks like the grpc-server never received any request as I do not see any log there.</p>
<p>What is going on?</p>
|
<p>That error you encountered comes from the default configuration of the nginx ingress controller (this is taken from the <a href="https://github.com/kubernetes/ingress-nginx/blob/ca74f9ad7d9a9072071dc94bfb8b5eb64f1e1fb0/internal/ingress/controller/config/config.go#L406" rel="nofollow noreferrer">ingress github config</a>):</p>
<pre><code>// Enables or disables the HTTP/2 support in secure connections
// http://nginx.org/en/docs/http/ngx_http_v2_module.html
// Default: true
UseHTTP2 bool `json:"use-http2,omitempty"`
</code></pre>
<p>This can also be verified by checking your ingress controller settings:</p>
<pre><code>kubectl exec -it -n <name_space> <nginx_ingress_controller_pod> -- cat /etc/nginx/nginx.conf
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=511 ;
listen 443 default_server reuseport backlog=511 ssl http2 ;
</code></pre>
<p>You're trying to connect through port 80, for which there is no <code>http2</code> support in the default configuration. gRPC works only over <code>http2</code>, so you should try to make the request using port 443.</p>
<p>Let me know if it works.</p>
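<p>If you want a quick check from outside the cluster before touching the client code, one option (sketched here with <code>grpcurl</code>; the host name and service/method are placeholders for your own setup) is to hit the TLS port directly:</p>
<pre class="lang-sh prettyprint-override"><code># talk to the ingress over 443; -insecure skips certificate validation,
# which is acceptable for the controller's default self-signed certificate
grpcurl -insecure my-ingress.example.com:443 list   # requires server reflection

# call a concrete method once your service shows up
grpcurl -insecure -d '{}' my-ingress.example.com:443 my.package.MyService/MyMethod
</code></pre>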
|
<p>I have a yml file that I took from a github repository.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: scripts-cm
data:
locustfile.py: |
from locust import HttpLocust, TaskSet, task
class UserTasks(TaskSet):
@task
def index(self):
self.client.get("/")
@task
def stats(self):
self.client.get("/stats/requests")
class WebsiteUser(HttpLocust):
task_set = UserTasks
</code></pre>
<p>But I don't want my configuration management to inline the script like this, because I have a separate locustfile.py and it is quite big. Is there a way to copy the file into the data attribute instead, or do I have to use the cat command?</p>
|
<blockquote>
<p>Is there a way to copy the file in the data attribute instead?</p>
</blockquote>
<p>Yes there is. You can create a <code>configMap</code> from a file and mount this configMap as a Volume to a <code>Pod</code>. I've prepared an example to show you the whole process.</p>
<p>Let's assume that you have your <code>locustfile.py</code> with only the Python script:</p>
<pre class="lang-py prettyprint-override"><code>from locust import HttpLocust, TaskSet, task
class UserTasks(TaskSet):
@task
def index(self):
self.client.get("/")
@task
def stats(self):
self.client.get("/stats/requests")
class WebsiteUser(HttpLocust):
task_set = UserTasks
</code></pre>
<p>You will need to create a <code>configMap</code> from this file with following command:</p>
<ul>
<li><code>$ kubectl create configmap scripts-cm --from-file=locustfile.py</code></li>
</ul>
<p>After that you can mount this <code>configMap</code> as a <code>Volume</code> like in the example below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: ubuntu
spec:
containers:
- name: ubuntu
image: ubuntu
command:
- sleep
- "infinity"
volumeMounts:
- name: config-volume
mountPath: /app/ # <-- where you would like to mount your files from a configMap
volumes:
- name: config-volume
configMap:
name: scripts-cm # <-- name of your configMap
restartPolicy: Never
</code></pre>
<p>You can then check if your files are placed correctly in the <code>/app</code> directory:</p>
<ul>
<li><code>$ kubectl exec -it ubuntu -- ls /app/</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>locustfile.py
</code></pre>
<blockquote>
<p>A tip!</p>
<p>You can add multiple files to a <code>configMap</code> by</p>
<ul>
<li><code>--from-file=one.py --from-file=two.py ...</code></li>
</ul>
<p>You can also run <code>--from-file=.</code> (use all the files from directory).</p>
</blockquote>
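<p>If you prefer to keep everything declarative (for example in git) instead of running the imperative create command, one possible sketch is to let kubectl render the manifest for you:</p>
<pre class="lang-sh prettyprint-override"><code># render the ConfigMap manifest from the file without touching the cluster
kubectl create configmap scripts-cm --from-file=locustfile.py \
  --dry-run=client -o yaml > scripts-cm.yaml

# review/commit it, then apply it like any other manifest
kubectl apply -f scripts-cm.yaml
</code></pre>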
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Kubernetes.io: Configure pod container: Configure pod configmap</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">Kubernetes.io: Configmap</a></em></li>
</ul>
|
<p>What I am aiming for is a folder (named after the pod) per pod, created inside a volume, using volumeClaimTemplates in a StatefulSet.</p>
<p>An example would be:</p>
<ul>
<li>PersistentVolume = "/data"</li>
<li>Pods:
<ul>
<li>pod-0 = "/data/pod-0"</li>
<li>pod-1 = "/data/pod-1"</li>
</ul>
</li>
</ul>
<p>I am struggling with getting the replicas to create new folders for themselves. Any help with how to do this would be appreciated.</p>
|
<blockquote>
<p><code>volumeClaimTemplates</code> is a list of claims that pods are allowed to
reference. The StatefulSet controller is responsible for mapping
network identities to claims in a way that maintains the identity of a
pod. Every claim in this list must have at least one matching (by
name) volumeMount in one container in the template. A claim in this
list takes precedence over any volumes in the template, with the same
name.</p>
</blockquote>
<p>This means that with <code>volumeClaimTemplates</code> you can request the PVC from the storage class dynamically.</p>
<p>If we use this <code>yaml</code> as an example:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: "standard"
resources:
requests:
storage: 1Gi
</code></pre>
<p>Once you deploy your pods you will notice that they are being created and a <code>PVC</code> is requested for each of them during creation. Each <code>PVC</code> is named using the following convention:</p>
<p><code>volumeClaimTemplate</code> name + <code>Pod-name</code> + <code>Ordinal-number</code></p>
<p>So if you take the above yaml as an example you will receive three PVCs (assuming 3 replicas):</p>
<pre><code>NAME STATUS VOLUME
www-web-0 Bound pvc-12d77135...
www-web-1 Bound pvc-08724947...
www-web-2 Bound pvc-50ac9f96
</code></pre>
<p>It's worth mentioning that <code>Persistent Volume Claims</code> represent the exclusive usage of a Persistent Volume by a particular Pod.
This means that if we look into the volumes individually we find that each is assigned to a particular pod:</p>
<pre><code>➜ ~ pwd
/tmp/hostpath-provisioner/pvc-08724947...
➜ ~ ls
web-1
➜ ~ pwd
/tmp/hostpath-provisioner/pvc-50ac9f96...
➜ ~ ls
web-2
</code></pre>
<hr />
<p>While testing this I did achieve your goal but I had to create <code>persistentvolumes</code> manually and they had to point towards the same local path:</p>
<pre><code> local:
path: /home/docker/data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
</code></pre>
<p>This combined with <code>subPathExpr</code> mounted the directories named after the pods into the specified path.</p>
<pre><code> volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
subPathExpr: $(NAME)
env:
- name: NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>And the result of this (<code>web</code> was the name of the StatefulSet):</p>
<pre><code>➜ ~ pwd
/home/docker/data
➜  ~ ls
web-0 web-1 web-2
</code></pre>
<p>Here's more information on how <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-with-expanded-environment-variables" rel="nofollow noreferrer">subpath with expanded env variables</a> works.</p>
<blockquote>
<p>Use the <code>subPathExpr</code> field to construct <code>subPath</code> directory names from Downward API environment variables. This feature requires the <code>VolumeSubpathEnvExpansion</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">feature gate</a> to be enabled. It is enabled by default starting with Kubernetes 1.15. The <code>subPath</code> and <code>subPathExpr</code> properties are mutually exclusive.</p>
</blockquote>
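<p>To convince yourself that each replica really writes into its own folder, a quick hedged check (assuming the StatefulSet is called <code>web</code> and mounts <code>/usr/share/nginx/html</code> as above) could look like this:</p>
<pre class="lang-sh prettyprint-override"><code># write a marker file from each replica...
for i in 0 1 2; do
  kubectl exec "web-$i" -- sh -c 'echo "hello from $HOSTNAME" > /usr/share/nginx/html/index.html'
done

# ...then confirm on the node that the files landed in separate per-pod directories
ls /home/docker/data                     # expect: web-0  web-1  web-2
cat /home/docker/data/web-0/index.html   # expect: hello from web-0
</code></pre>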
<p>Let me know if you have any questions.</p>
|
<p>I am trying to create a NodePort Service for one of the pods I have deployed.</p>
<p><strong>Below is my service definition</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: voting-service
labels:
name: voting-service
app: demo-voting-app
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30004
selector:
name: voting-app-pod
app: demo-voting-app
</code></pre>
<p>I am deploying this service with command below</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create -f voting-app-service.yaml
</code></pre>
<p><strong>Here is the Error</strong></p>
<pre><code>The Service "voting-service" is invalid: spec.ports[0].nodePort: Invalid value: 30004: provided port is already allocated
</code></pre>
<p>So I tried to find services that are using port 30004 with the netstat and lsof commands, but couldn't find any service using that port.</p>
<pre class="lang-sh prettyprint-override"><code>➜ Voting-app kubectl create -f voting-app-service.yaml
The Service "voting-service" is invalid: spec.ports[0].nodePort: Invalid value: 30004: provided port is already allocated
➜ Voting-app sudo netstat -lntp | grep 30004
➜ Voting-app lsof -i :30004
➜ Voting-app
</code></pre>
<p>minikube version: v1.22.0
kubectl version: 1.21</p>
|
<p>As @HarshManvar mentioned, you can change the port in the service file to one that isn't already allocated.</p>
<p>You later found that port <code>30004</code> was indeed already allocated, because another Service was using it:</p>
<blockquote>
<p>kubectl get svc | grep 30004</p>
</blockquote>
<p>Note that NodePort allocations are tracked by the Kubernetes API, and with minikube the node is a separate VM/container, so <code>netstat</code>/<code>lsof</code> on your host machine will not necessarily show anything for that port.</p>
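<p>If you just want to see every NodePort currently in use across the cluster (rather than grepping for one value), here is a small sketch using a Go template:</p>
<pre class="lang-sh prettyprint-override"><code># print "namespace/service nodePort" for every port that has a NodePort allocated
kubectl get svc --all-namespaces -o go-template='{{range .items}}{{$svc := .}}{{range .spec.ports}}{{if .nodePort}}{{$svc.metadata.namespace}}/{{$svc.metadata.name}} {{.nodePort}}{{"\n"}}{{end}}{{end}}{{end}}'
</code></pre>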
|
<p>I am trying to troubleshoot a DNS issue in our K8 cluster v1.19. There are 3 nodes (1 controller, 2 workers) all running vanilla Ubuntu 20.04 using Calico network with <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">Metallb</a> for inbound load balancing. This is all hosted on premise and has full access to the internet. There is also a proxy server (Traefik) in front of it that is handling the SSL to the K8 cluster and other services in the infrastructure.</p>
<p>This issue happened when I upgraded the helm chart for the pod that was/is connecting to the redis pod, but otherwise had been happy to run for the past 36 days.</p>
<p>In the log of one of the pods it is showing an error that it cannot determine where the redis pod(s) is/are:</p>
<pre><code>2020-11-09 00:00:00 [1] [verbose]: [Cache] Attempting connection to redis.
2020-11-09 00:00:00 [1] [verbose]: [Cache] Successfully connected to redis.
2020-11-09 00:00:00 [1] [verbose]: [PubSub] Attempting connection to redis.
2020-11-09 00:00:00 [1] [verbose]: [PubSub] Successfully connected to redis.
2020-11-09 00:00:00 [1] [warn]: Secret key is weak. Please consider lengthening it for better security.
2020-11-09 00:00:00 [1] [verbose]: [Database] Connecting to database...
2020-11-09 00:00:00 [1] [info]: [Database] Successfully connected .
2020-11-09 00:00:00 [1] [verbose]: [Database] Ran 0 migration(s).
2020-11-09 00:00:00 [1] [verbose]: Sending request for public key.
Error: getaddrinfo EAI_AGAIN oct-2020-redis-master
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3001,
code: 'EAI_AGAIN',
syscall: 'getaddrinfo',
hostname: 'oct-2020-redis-master'
}
[ioredis] Unhandled error event: Error: getaddrinfo EAI_AGAIN oct-2020-redis-master
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
Error: connect ETIMEDOUT
at Socket.<anonymous> (/app/node_modules/ioredis/built/redis/index.js:307:37)
at Object.onceWrapper (events.js:421:28)
at Socket.emit (events.js:315:20)
at Socket.EventEmitter.emit (domain.js:486:12)
at Socket._onTimeout (net.js:483:8)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect'
}
</code></pre>
<p>I have gone through the steps outlined in <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<pre class="lang-sh prettyprint-override"><code>ubuntu@k8-01:~$ kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
ubuntu@k8-01:~$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lfm5t 1/1 Running 17 37d
coredns-f9fd979d6-sw2qp 1/1 Running 18 37d
ubuntu@k8-01:~$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 3d3f6363f05ccd60e0f885f0eca6c5ff
[INFO] Reloading complete
[INFO] 10.244.210.238:34288 - 28733 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.001300712s
[INFO] 10.244.210.238:44532 - 12032 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.001279312s
[INFO] 10.244.210.235:44595 - 65094 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.000163001s
[INFO] 10.244.210.235:55945 - 20758 "A IN oct-2020-redis-master.default.svc.cluster.local. udp 75 false 512" NOERROR qr,aa,rd 148 0.000141202s
ubuntu@k8-01:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default oct-2020-api ClusterIP 10.107.89.213 <none> 80/TCP 37d
default oct-2020-nginx-ingress-controller LoadBalancer 10.110.235.175 192.168.2.150 80:30194/TCP,443:31514/TCP 37d
default oct-2020-nginx-ingress-default-backend ClusterIP 10.98.147.246 <none> 80/TCP 37d
default oct-2020-redis-headless ClusterIP None <none> 6379/TCP 37d
default oct-2020-redis-master ClusterIP 10.109.58.236 <none> 6379/TCP 37d
default oct-2020-webclient ClusterIP 10.111.204.251 <none> 80/TCP 37d
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37d
kube-system coredns NodePort 10.101.104.114 <none> 53:31245/UDP 15h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 37d
</code></pre>
<p>When I enter the pod:</p>
<pre><code>/app # grep "nameserver" /etc/resolv.conf
nameserver 10.96.0.10
/app # nslookup
BusyBox v1.31.1 () multi-call binary.
Usage: nslookup [-type=QUERY_TYPE] [-debug] HOST [DNS_SERVER]
Query DNS about HOST
QUERY_TYPE: soa,ns,a,aaaa,cname,mx,txt,ptr,any
/app # ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 56 data bytes
^C
--- 10.96.0.10 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/app # nslookup oct-20-redis-master
;; connection timed out; no servers could be reached
</code></pre>
<p>Any ideas on troubleshooting would be greatly appreciated.</p>
|
<p>To answer my own question, I deleted the DNS pods and then it worked again. The command was the following:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete pod coredns-f9fd979d6-sw2qp --namespace=kube-system
</code></pre>
<p>This doesn't get to the underlying problem of why this is happening, or why K8s isn't detecting that something is wrong with those pods and recreating them. I am going to keep digging into this and put some more instrumentation on the DNS pods to see what is actually causing this problem.</p>
<p>If anyone has any ideas on what instrumentation to hook up or what to look at specifically, that would be appreciated.</p>
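<p>If it happens again, two low-effort things worth trying (sketched below; the Corefile edit assumes the stock kubeadm CoreDNS ConfigMap layout) are restarting the whole CoreDNS Deployment instead of a single pod, and turning on query logging to see whether lookups reach CoreDNS at all:</p>
<pre class="lang-sh prettyprint-override"><code># restart both CoreDNS replicas in one go
kubectl -n kube-system rollout restart deployment coredns

# enable the CoreDNS "log" plugin: add the single word "log"
# inside the ".:53 { ... }" block of the Corefile
kubectl -n kube-system edit configmap coredns

# then follow the logs while reproducing the failure
kubectl -n kube-system logs -l k8s-app=kube-dns -f
</code></pre>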
|
<p><code>kubectl get events</code> lists the events for K8s objects.
Where exactly are the events for PV/PVC triggered from?
There is a list of volume events at
<a href="https://docs.openshift.com/container-platform/4.5/nodes/clusters/nodes-containers-events.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.5/nodes/clusters/nodes-containers-events.html</a>
but it does not identify which events belong to which resource.</p>
|
<p>Let's start with what exactly a Kubernetes event is. Events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from a node. These API objects are persisted in etcd.</p>
<p>You can read more about them <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/#:%7E:text=Kubernetes%20events%20are%20objects%20that,were%20evicted%20from%20the%20node" rel="nofollow noreferrer">here</a>.
There is also an excellent tutorial about Kubernetes events which you may find <a href="https://www.bluematador.com/blog/kubernetes-events-explained" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p>There are couple of ways to view/fetch more detailed events from Kubernetes:</p>
<p>Use <code>kubectl get events -o wide</code>. This will give you information about the <code>object</code>, <code>subobject</code> and <code>source</code> of the event. Here's an example:</p>
<pre><code>LAST SEEN TYPE REASON OBJECT SUBOBJECT SOURCE MESSAGE
<unknown> Warning FailedScheduling pod/web-1 default-scheduler running "VolumeBinding" filter plugin for pod "web-1": pod has unbound immediate PersistentVolumeClaims
6m2s Normal ProvisioningSucceeded persistentvolumeclaim/www-web-1 k8s.io/minikube-hostpath 2481b4d6-0d2c-11eb-899d-02423db39261 Successfully provisioned volume pvc-a56b3f35-e7ac-4370-8fda-27342894908d
6m2s Normal ProvisioningSucceeded persistentvolumeclaim/www-web-1 k8s.io/minikube-hostpath 2481b4d6-0d2c-11eb-899d-02423db39261 Successfully provisioned volume pvc-a56b3f35-e7ac-4370-8fda-27342894908d
</code></pre>
<br>
<p>Using <code>kubectl get events --output json</code> will give you a list of the events in <code>json</code> format, containing other details such as <code>selflink</code>.</p>
<pre><code> ---
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-10-13T12:07:17Z",
"involvedObject": {
---
"kind": "Event",
"lastTimestamp": "2020-10-13T12:07:17Z",
"message": "Created container nginx",
"metadata": {
---
</code></pre>
<p><code>Selflink</code> can be used to determine the API location from where the data is being fetched.</p>
<p>We can take <code>/api/v1/namespaces/default/events/</code> as an example and fetch the data from the API server using <code>kubectl proxy</code>:</p>
<pre><code>kubectl proxy --port=8080 & curl http://localhost:8080/api/v1/namespaces/default/events/
</code></pre>
<p>Using all that information you can narrow down to specific details of the underlying object using a <code>field-selector</code>:</p>
<pre><code> kubectl get events --field-selector type=!Normal
or
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim
LAST SEEN TYPE REASON OBJECT MESSAGE
44m Normal ExternalProvisioning persistentvolumeclaim/www-web-0 waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
44m Normal Provisioning persistentvolumeclaim/www-web-0 External provisioner is provisioning volume for claim "default/www-web-0"
44m Normal ProvisioningSucceeded persistentvolumeclaim/www-web-0 Successfully provisioned volume pvc-815beb0a-b5f9-4b27-94ce-d21f2be728d5
</code></pre>
<p>Please also remember that all the information provided by <code>kubectl get events</code> is the same as what you get from <code>kubectl describe <object></code>.</p>
<p>Lastly, if you look carefully into the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/events/event.go#L20" rel="nofollow noreferrer">event.go</a> code you can see all the event references for volumes. If you compare those with <code>Table 13. Volumes</code> you can see that they are almost the same (except for <code>WaitForPodScheduled</code> and <code>ExternalExpanding</code>).</p>
<p>This means that OpenShift provides an aggregated view of the possible events that different Kubernetes components may emit in the cluster.</p>
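<p>As a concrete follow-up to the field-selector examples above, a quick sketch: you can also narrow down to a single claim and watch its events live (the claim name <code>www-web-0</code> is just the one from the earlier output):</p>
<pre class="lang-sh prettyprint-override"><code># stream events for one specific PersistentVolumeClaim as they happen
kubectl get events --watch \
  --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=www-web-0
</code></pre>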
|
<p>I am trying to understand the options for Google Kubernetes Engine (GKE) cluster backup. I came across this document, but it seems to be about Anthos on-premises GKE clusters.</p>
<p><a href="https://cloud.google.com/anthos/gke/docs/on-prem/archive/1.1/how-to/administration/backing-up" rel="nofollow noreferrer">https://cloud.google.com/anthos/gke/docs/on-prem/archive/1.1/how-to/administration/backing-up</a></p>
<p>I saw a few blogs talking about a GKE feature to create a clone of an existing GKE cluster, but I cannot find any option in the GCP Console to create a new cluster by cloning an existing GKE cluster.</p>
<p><a href="https://blog.doit-intl.com/google-kubernetes-engine-cluster-migration-with-velero-4a140b018f32" rel="nofollow noreferrer">https://blog.doit-intl.com/google-kubernetes-engine-cluster-migration-with-velero-4a140b018f32</a></p>
<p>Can somebody please confirm if this cloning feature is still available in GKE or it is deprecated ?</p>
<p>Apart from cloning a GKE cluster, we need to take a backup of cluster resources and PersistentVolumes.
It seems Velero is a useful tool for this and it is GKE-aware.</p>
<p><a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p>
<p>I am looking for further suggestions regarding GKE cluster backup that take care of both cluster resources and persistent volumes. Are there any recommendations/best practices from Google on GKE backup?</p>
|
<blockquote>
<p>I saw few blogs talking of a GKE feature to create a clone of existing GKE Cluster but I cannot find any option in GCP Console to create new cluster by cloning an existing GKE Cluster.</p>
</blockquote>
<blockquote>
<p>Can somebody please confirm if this cloning feature is still available in GKE or it is deprecated ?</p>
</blockquote>
<p>You can in fact create a <strong>duplicate!</strong> By going into:</p>
<ul>
<li><code>Cloud Console</code> (Web UI) -> <code>Kubernetes Engine</code> -> <code>CLUSTER-NAME</code> -> <code>Duplicate</code>.</li>
</ul>
<p><a href="https://i.stack.imgur.com/EHBjx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EHBjx.png" alt="DUPLICATE" /></a></p>
<blockquote>
<p><strong>Disclaimer!</strong></p>
<p>This option will create a duplicate of your cluster but it <strong>will not</strong> copy the workload (<code>Pods</code>, <code>Deployments</code>, <code>Services</code>, <code>Persistent Volumes</code>, etc.)</p>
</blockquote>
<hr />
<p>Transferring your workload will heavily depend on the resources that you are using. You will need to carefully consider all of the resources and choose the solution most appropriate to your use case.</p>
<p>Solution mentioned in the question:</p>
<ul>
<li><em><a href="https://velero.io/" rel="nofollow noreferrer">Velero.io</a></em></li>
</ul>
<p>Storage specific:</p>
<ul>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: Volume snapshots</a></em> - available from <code>GKE</code> version <code>1.17</code></li>
<li>Make snapshots/images from existing <code>PV's</code> and create new <code>PV's</code> from these snapshots/images (see the gcloud sketch right after this list):
<ul>
<li><em><a href="https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#console" rel="nofollow noreferrer">Cloud.google.com: Compute: Disks: Restore and delete snapshots</a></em></li>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: Preexisting PD</a></em></li>
</ul>
</li>
</ul>
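<p>To make the snapshot path above more concrete, here is a rough <code>gcloud</code> sketch of moving one disk via a snapshot (all names and zones are placeholders); the resulting disk can then be referenced by a pre-existing-PD <code>PersistentVolume</code> as described in the linked guide:</p>
<pre class="lang-sh prettyprint-override"><code># snapshot the disk that backs the old PersistentVolume...
gcloud compute disks snapshot SOURCE_DISK \
  --zone=europe-west3-a --snapshot-names=pv-backup-1

# ...and create a fresh disk from that snapshot for the new cluster to use
gcloud compute disks create restored-disk-1 \
  --zone=europe-west3-a --source-snapshot=pv-backup-1
</code></pre>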
<hr />
<p>It could be beneficial to add that you could also take a different approach and use tools designed to provision resources repeatably. Once created, these "scripts" can be used multiple times (on multiple clusters, when migrating, etc.). Examples of such tools:</p>
<ul>
<li><em><a href="https://cloud.google.com/deployment-manager" rel="nofollow noreferrer">Cloud.google.com: Deployment Manager</a></em></li>
<li><em><a href="https://www.terraform.io/" rel="nofollow noreferrer">Terraform.io</a></em></li>
<li><em><a href="https://www.ansible.com/" rel="nofollow noreferrer">Ansible.com</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://www.youtube.com/watch?v=JyzgS-KKuoo" rel="nofollow noreferrer">Youtube.com: How to Backup and Restore Your Kubernetes Cluster - Annette Clewett & Dylan Murray, Red Hat
</a></em></li>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Storage overview</a></em></li>
</ul>
|
<p>I'm making a Kubernetes HTTP request:
GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}</p>
<p>This gives the response in JSON, but I want the response in YAML.</p>
<p>Does Kubernetes give the response in YAML? If yes, please let me know how to do it.</p>
|
<p>When running curl requests you can tell the api-server to send you yaml-formatted output by setting the <code>Accept: application/yaml</code> header. Take a look at the example below:</p>
<pre><code>curl --header "Accept: application/yaml" "/apis/[...]"
</code></pre>
<p>Or you can also use some external tool to convert from json to yaml, e.g. <code>yq</code>:</p>
<pre><code>curl "/apis/[...]" | yq -y .
</code></pre>
<p>where <code>-y</code> stands for yaml output.</p>
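<p>If you are going through kubectl anyway, you don't even need the Accept header; just a sketch of two common shortcuts (the deployment and namespace names are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code># ask kubectl for YAML directly
kubectl get deployment my-deployment -n my-namespace -o yaml

# or hit the raw API path (returns JSON) and convert it yourself
kubectl get --raw "/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment" | yq -y .
</code></pre>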
|
<p>I have installed K8S using Kubeadm and see the below pods across the control plane nodes and the worker nodes.</p>
<p><a href="https://i.stack.imgur.com/mHLtw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mHLtw.png" alt="enter image description here"></a></p>
<p>Kubelet is started as a systemd service; it looks at the /etc/kubernetes/manifests folder and starts the objects defined in the etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml files. If I drop a yaml file into the same folder, kubelet notices it and starts the corresponding K8S object.</p>
<p>I am not sure how the coredns Deployment, kube-proxy and the flannel DaemonSets shown in the above screenshot get automatically started once I start/reboot the control plane nodes and the worker nodes. Can someone help me with the K8S startup process for the same?</p>
|
<p>Kubeadm installs CoreDNS and kube-proxy in the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon" rel="nofollow noreferrer">init addon</a> phase. They are installed after basic cluster functionality has already started (apiserver, controller manager, etc.). Kubeadm installs them the same way you would with <code>kubectl apply</code>.</p>
<p>Here is the <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/phases/init/addons.go#L43-L71" rel="nofollow noreferrer">code of kubeadm init addon phase</a> if you want to see how it actually works under the hood.</p>
<p>The apiserver, controller manager and scheduler have to be started from the manifest folder (also known as <code>static pods</code>) because there is no other way (technically this is not 100% correct, because there are other methods, e.g. you can run your control plane via systemd). How do you start running stuff on kubernetes when there is no kubernetes yet? There is no api server to handle your requests, there is no scheduler to schedule your pods, there is no controller to manage resources.
After the essential components are started, you can run pods in the regular way using the api.</p>
<p>Also, to answer your question about what happens when you restart your control plane or nodes: the state of the whole k8s cluster is held in etcd. So when you reboot the control plane, kubernetes reads the state from etcd, compares it to reality and, if there are differences, adjusts reality to match the state in etcd. For example, if there is no coredns running but there is a coredns object in etcd, kubernetes is going to start it.</p>
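<p>If those addons ever go missing (for example after a botched upgrade), kubeadm lets you re-run just that phase. A hedged sketch, run on a control plane node (you may need to pass the same <code>--config</code>/<code>--kubeconfig</code> you used at install time):</p>
<pre class="lang-sh prettyprint-override"><code># re-install CoreDNS and kube-proxy the same way kubeadm init did
sudo kubeadm init phase addon all

# confirm they are back
kubectl -n kube-system get deployment coredns
kubectl -n kube-system get daemonset kube-proxy
</code></pre>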
|