<p>I have two deployments </p>
<p>deployment 1</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: first-service
spec:
selector:
key: app1
ports:
- port: 81
targetPort: 5050
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: first-deployment
spec:
replicas: 1
selector:
matchLabels:
run: app1
template:
metadata:
labels:
run: app1
spec:
containers:
- name: ocr
image: ocr_app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5050
</code></pre>
<p>deployment 2</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: second-service
spec:
selector:
key: app2
ports:
- port: 82
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: second-deployment
spec:
replicas: 1
selector:
matchLabels:
run: app2
template:
metadata:
labels:
run: app2
spec:
containers:
- name: ner
image: ner_app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
</code></pre>
<p>After enabling ingress on minikube I applied this ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
rules:
- host: demo.local
http:
paths:
- path: /ocr
backend:
serviceName: first-service
servicePort: 81
- path: /ner
backend:
serviceName: second-service
servicePort: 82
</code></pre>
<p>In my hosts file I have </p>
<pre><code>192.168.177.71 demo.local
</code></pre>
<p>Where <code>192.168.177.71</code> is my current minikube ip</p>
<p>I then ran this command</p>
<pre><code>kubectl port-forward nginx-ingress-controller-6fc5bcc8c9-p6mvj 3000:80 --namespace kube-system
</code></pre>
<p>And the console outputs</p>
<pre><code>Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
</code></pre>
<p>But when I make a request to <code>demo.local:3000/ocr</code> using postman there is no response</p>
<blockquote>
<p>Could not get any response There was an error connecting to
demo.local:3000.</p>
</blockquote>
<p>EDIT: using <code>minikube service first-service</code> gives this output</p>
<pre><code>PS D:\docker> minikube service first-service
|-----------|---------------|-------------|--------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|--------------|
| default | first-service | | No node port |
|-----------|---------------|-------------|--------------|
* service default/first-service has no node port
</code></pre>
|
<p>@erotavlas, since Mafor provided the answer that helped you resolve your issue, please accept his answer.</p>
<p>I am posting an extended answer which might help someone else.</p>
<p>The root cause of this issue was the <code>selector</code>/labels mismatch.</p>
<p>In <code>first-service</code>, <code>spec.selector</code> was set to <code>key: app1</code>, however in the deployment <code>spec.selector.matchLabels</code> was set to <code>run: app1</code>.</p>
<p>For this to work properly you need to have the same selectors. So you would need to change the service's <code>spec.selector</code> to <code>run: app1</code>, or change the deployment's <code>spec.selector.matchLabels</code> to <code>key: app1</code>. The same applies to <code>second-service</code> and <code>second-deployment</code>. More details can be found <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="noreferrer">here</a>.</p>
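<p>For example, keeping the deployment's existing <code>run: app1</code> label, the corrected <code>first-service</code> from the question would look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: first-service
spec:
  selector:
    run: app1        # now matches the Deployment's template labels
  ports:
  - port: 81
    targetPort: 5050
</code></pre>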
<p>I tried using Ingress on Minikube based on the <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">official docs</a> and your YAMLs.</p>
<p>In addition, to use <code>Ingress</code> on <code>Minikube</code>, the <code>Ingress addon</code> must be enabled.</p>
<pre><code>$ minikube addons list | grep ingress
- ingress: disabled
</code></pre>
<p>If it's disabled, you have to enable it.</p>
<pre><code>$ minikube addons enable ingress
✅ ingress was successfully enabled
</code></pre>
<p><code>targetPort:</code> is the port the container accepts traffic on, i.e. the port where the application runs inside the pod.<br>
<code>port:</code> is the abstracted <code>Service</code> port, which can be any port other pods use to access the Service.</p>
<p>OP used their own images where the applications were running on ports <code>5050</code> and <code>5000</code>; for this example I will use the GCP hello world app on port <code>8080</code>. Labels/matchLabels were changed to have the same value in the deployment and the service.</p>
<p><strong>First service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: first-service
spec:
selector:
key: app1
ports:
- port: 81
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: first-deployment
spec:
replicas: 1
selector:
matchLabels:
key: app1
template:
metadata:
labels:
key: app1
spec:
containers:
- name: hello1
image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
service/first-service created
deployment.apps/first-deployment created
</code></pre>
<p><strong>Second Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: second-service
spec:
selector:
key: app2
ports:
- port: 82
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: second-deployment
spec:
replicas: 1
selector:
matchLabels:
key: app2
template:
metadata:
labels:
key: app2
spec:
containers:
- name: hello2
image: gcr.io/google-samples/hello-app:2.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
service/second-service created
deployment.apps/second-deployment created
</code></pre>
<p>This created the services as <code>ClusterIP</code> type. If needed you can use <code>NodePort</code>, but it's not necessary.</p>
<p><strong>Apply Ingress</strong></p>
<p>The Ingress provided in the question is enough for testing.</p>
<p>As mentioned in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource" rel="noreferrer">official docs</a>, you should add the minikube IP to your hosts file.</p>
<blockquote>
<p>Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.</p>
</blockquote>
<p>On Ubuntu it's <code>/etc/hosts</code> (you need sudo to edit it). On Windows, please check <a href="https://www.liquidweb.com/kb/edit-host-file-windows-10/" rel="noreferrer">this article</a>.</p>
<p>For my cluster (Using GCE):</p>
<pre><code>$ minikube ip
10.132.15.208
</code></pre>
<p>Added this value to the <code>hosts</code> file:</p>
<blockquote>
<p>10.132.15.208 demo.local</p>
</blockquote>
<p>Responses below:</p>
<pre><code>$ curl demo.local/ocr
Hello, world!
Version: 1.0.0
Hostname: first-deployment-85b75bf4f9-qlzrp
$ curl demo.local/ner
Hello, world!
Version: 2.0.0
Hostname: second-deployment-5b5bbb7f4-9sbqr
</code></pre>
<p>However, the version with <code>rewrite</code> provided by Mafor is more versatile.</p>
<p>In addition, you could also consider using a <code>LoadBalancer</code> on <code>Minikube</code>.
More information can be found in the <a href="https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/" rel="noreferrer">Minikube docs</a>.</p>
|
<p>I have a namespace called <code>test</code> containing 3 pods: <code>frontend</code>, <code>backend</code> and <code>database</code>.</p>
<p>This is the manifest of the pods:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: frontend
namespace: test
labels:
app: todo
tier: frontend
spec:
containers:
- name: frontend
image: nginx
---
kind: Pod
apiVersion: v1
metadata:
name: backend
namespace: test
labels:
app: todo
tier: backend
spec:
containers:
- name: backend
image: nginx
---
kind: Pod
apiVersion: v1
metadata:
name: database
namespace: test
labels:
app: todo
tier: database
spec:
containers:
- name: database
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: example
</code></pre>
<p>I would like to implement a network policy that allows incoming traffic to the database only from the backend and disallows incoming traffic from the frontend.</p>
<p>This is my network policy:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: app-allow
namespace: test
spec:
podSelector:
matchLabels:
app: todo
tier: database
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: todo
tier: backend
ports:
- protocol: TCP
port: 3306
- protocol: UDP
port: 3306
</code></pre>
<p>This is the output of <code>kubectl get pods -n test -o wide</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 28m 172.17.0.5 minikube <none> <none>
database 1/1 Running 0 28m 172.17.0.4 minikube <none> <none>
frontend 1/1 Running 0 28m 172.17.0.3 minikube <none> <none>
</code></pre>
<p>This is the output of <code>kubectl get networkpolicy -n test -o wide</code></p>
<pre><code>NAME POD-SELECTOR AGE
app-allow app=todo,tier=database 21m
</code></pre>
<p>When I execute <code>telnet ip-of-mysql-pod 3306</code> from the <code>frontend</code> pod, the connection gets established, so the network policy is not working:</p>
<pre><code>kubectl exec -it pod/frontend bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@frontend:/# telnet 172.17.0.4 3306
Trying 172.17.0.4...
Connected to 172.17.0.4.
Escape character is '^]'.
J
8.0.25 k{%J\�#(t%~qI%7caching_sha2_password
</code></pre>
<p>Is there something I am missing?</p>
<p>Thanks</p>
|
<p>It seems that you forgot to add a "default deny" policy (apply it in the <code>test</code> namespace, alongside your existing policy):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
spec:
podSelector: {}
policyTypes:
- Ingress
</code></pre>
<p>By default, pods are non-isolated and accept connections from any source; a <code>NetworkPolicy</code> only restricts traffic once a policy selects the pod, and then anything not explicitly allowed is denied.</p>
<p>More details here: <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic</a></p>
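<p>As a quick, hedged way to verify (the file name below is illustrative, and note that <code>NetworkPolicy</code> is only enforced if the cluster's CNI plugin supports it, e.g. Calico or Cilium), repeat your telnet test after applying the deny policy in the <code>test</code> namespace:</p>
<pre><code># apply the default-deny policy in the namespace where the pods live
kubectl apply -n test -f default-deny-ingress.yaml

# from the frontend pod the connection should now time out
kubectl exec -n test frontend -- telnet 172.17.0.4 3306

# from the backend pod it should still connect on port 3306
kubectl exec -n test backend -- telnet 172.17.0.4 3306
</code></pre>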
|
<p><strong>Information:</strong></p>
<ul>
<li>Kubernetes version: 1.14.1 </li>
<li>Cloud: Azure (not AKS) </li>
<li>DNS: CoreDNS</li>
<li>Deployer: Kubespray </li>
<li>Container: containerd</li>
<li>3 worker nodes</li>
</ul>
<p><strong>Description</strong></p>
<p>I have this statefulset:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: busy
spec:
serviceName: busy
selector:
matchLabels:
app: busy
replicas: 3
template:
metadata:
name: busy
labels:
app: busy
spec:
containers:
- name: busy
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- sleep
- "3600"
restartPolicy: Always
</code></pre>
<p>And this headless service:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: busy-headless
spec:
clusterIP: None
publishNotReadyAddresses: true
selector:
app: busy
</code></pre>
<p>The statefulset creates 3 pods (busy-{0,1,2}). According to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">documentation</a>, each pod should have a DNS record like <code>busy-{0,1,2}.busy-headless.default.svc.cluster.local</code>.</p>
<p><strong>Issue</strong></p>
<p>When I try to resolve the DNS entries associated to the record <code>busy-headless.default.svc.cluster.local</code> from <code>busy-0</code> I get </p>
<pre><code>17:00 $ kubectl exec -ti busy-0 -- nslookup busy-headless.default.svc.cluster.local
Server: 10.233.0.3
Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local
Name: busy-headless.default.svc.cluster.local
Address 1: 10.233.67.10 10-233-67-10.busy-headless.default.svc.cluster.local
Address 2: 10.233.68.27 10-233-68-27.busy-headless.default.svc.cluster.local
Address 3: 10.233.68.26 10-233-68-26.busy-headless.default.svc.cluster.local
Address 4: 10.233.69.11 busy-0.busy.default.svc.cluster.local
</code></pre>
<p>From <code>busy-1</code> the command returns <code>busy-1.busy.default.svc.cluster.local</code> for <code>busy-1</code> and <code>10-233-69-11.busy-headless.default.svc.cluster.local</code> for <code>busy-0</code>.</p>
<p>An nslookup on <code>busy-{0,1,2}.busy-headless.default.svc.cluster.local</code> returns an error.</p>
<p>What could possibly be wrong?</p>
<p>Thank you!</p>
|
<p>In your StatefulSet manifest, <code>serviceName</code> must match the name of the governing headless service (here <code>busy-headless</code>); otherwise the per-pod DNS records are not created under that service. Try specifying:</p>
<pre><code>serviceName: busy-headless
</code></pre>
|
<p>In the <code>Pod</code> specification, there is an option to specify the user ID that all containers should run as:</p>
<pre><code>podSecurityContext:
runAsUser: <a numeric Id>
</code></pre>
<p>Is there a way we can set the user name as well, the way we can for Windows pods and containers, like below?</p>
<pre><code> securityContext:
windowsOptions:
runAsUserName: "ContainerUser"
</code></pre>
|
<p>Unfortunately, there is no such way. <code>WindowsSecurityContextOptions</code> contain <strong>Windows-specific</strong> options and credentials. <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context" rel="nofollow noreferrer">PodSecurityContext</a> allows you to use:</p>
<blockquote>
<ul>
<li><code>securityContext.runAsUser</code> (int64)</li>
</ul>
<p>The UID to run the entrypoint of the container process. Defaults to
user specified in image metadata if unspecified. May also be set in
SecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes
precedence for that container.</p>
<ul>
<li><code>securityContext.runAsNonRoot</code> (boolean)</li>
</ul>
<p>Indicates that the container must run as a non-root user. If true, the
Kubelet will validate the image at runtime to ensure that it does not
run as UID 0 (root) and fail to start the container if it does. If
unset or false, no such validation will be performed. May also be set
in SecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes
precedence.</p>
<ul>
<li><code>securityContext.runAsGroup</code> (int64)</li>
</ul>
<p>The GID to run the entrypoint of the container process. Uses runtime
default if unset. May also be set in SecurityContext. If set in both
SecurityContext and PodSecurityContext, the value specified in
SecurityContext takes precedence for that container.</p>
</blockquote>
<p>Trying to use String instead of Integer for <code>runAsUser</code> will result in error:</p>
<pre><code>invalid type for io.k8s.api.core.v1.SecurityContext.runAsUser: got "string", expected "integer"
</code></pre>
|
<p>How can I change the default Docker image registry in OpenShift?
I already modified <code>/etc/containers/registries.conf</code> on the worker and master nodes and put in something like this, but it didn't work.</p>
<pre><code>[[registry]]
prefix = "my_private_registry.com"
location = "my_private_registry.com"
insecure = false
</code></pre>
<p>How can I change the default repo?
Thank you</p>
|
<p>This is explained in <a href="https://docs.openshift.com/container-platform/4.7/openshift_images/image-configuration.html" rel="nofollow noreferrer">OpenShift documentation</a>:</p>
<blockquote>
<p>containerRuntimeSearchRegistries: Registries for which image pull and
push actions are allowed using image short names. All other registries
are blocked.</p>
</blockquote>
<p>So you can configure something like:</p>
<pre><code>apiVersion: config.openshift.io/v1
kind: Image
metadata:
annotations:
release.openshift.io/create-only: "true"
name: cluster
spec:
registrySources:
containerRuntimeSearchRegistries:
- my_private_registry.com
- quay.io
- registry.redhat.io
</code></pre>
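<p>A minimal sketch of how you could apply this, assuming you have cluster-admin rights, is to edit the cluster-wide image configuration directly and add the block above under <code>spec</code>:</p>
<pre><code>oc edit image.config.openshift.io/cluster
</code></pre>
<p>After saving, the Machine Config Operator should roll the updated registry configuration out to the nodes.</p>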
|
<p>As the title suggests, I can't find any difference between Prometheus Adapter and Prometheus Operator for monitoring in Kubernetes.</p>
<p>Can anyone tell me the difference? Or if there are particular use cases in which to use one or the other?</p>
<p>Thanks in advance.</p>
|
<p>Those are completely different things. Prometheus Operator is a tool created by CoreOS to simplify deployment and management of Prometheus instances in K8s. Using Prometheus Operator you can very easily deploy Prometheus, Alertmanager, Prometheus alert rules and Service Monitors.</p>
<p>Prometheus Adapter is required for using Custom Metrics API in K8s. It is used primarily for Horizontal Pod Autoscaler to scale based on metrics retrieved from Prometheus. For example, you can create metrics inside of your application and collect them using Prometheus and then you can scale based on those metrics which is really good, because by default K8s is able to scale based only on raw metrics of CPU and memory usage which is not suitable in many cases.</p>
<p>So actually those two things can nicely complement each other.</p>
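<p>As an illustration of the second point, a hedged sketch of an HPA that scales on a custom metric exposed through Prometheus Adapter might look like this (the deployment name and the metric name are assumptions):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # custom metric served by Prometheus Adapter
      target:
        type: AverageValue
        averageValue: "100"
</code></pre>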
|
<p>I have 3 microservice applications which shall be deployed in K8s.
Do I need to create 3 deployment files and 3 service files for this, or should I concatenate all 3 deployments into a single file (and likewise for the services)?</p>
|
<p>You did not provide much information about those microservices.</p>
<p>You can do it in one file, however the best practice is to have each <code>Application</code> / <code>Microservice</code> in a separate <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>.</p>
<p>Currently it might be only 3 <code>microservices</code>, but in the future you may add new features and new <code>microservices</code> will be needed.</p>
<p>If you have each <code>microservice</code> in a different <code>deployment</code>, you will be able to make configuration changes quickly, without scrolling through many rows of YAML (and you are less likely to make a syntax mistake inside the file).
It will also be easier to troubleshoot a specific <code>microservice</code> and manage traffic between them - you can use <a href="https://istio.io/docs/concepts/what-is-istio/" rel="nofollow noreferrer">Istio</a> to do that.</p>
<p>As each microservice will be in a different <code>deployment</code> you will have more versatility, as you will be able to create <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainers</a> for some of them or use a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> where needed.</p>
<p>In addition, you don't need to limit yourself to <code>Deployments</code>. You can also use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> for specific applications (i.e. apps which require databases).</p>
<p>You can also create your own <a href="https://helm.sh/docs/" rel="nofollow noreferrer">Helm</a> chart, which allows you to deploy the application with one command while keeping all deployments organized in separate files.</p>
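<p>For example, you might keep one manifest per microservice (the file names below are illustrative) and still apply everything in one go:</p>
<pre><code># one file per microservice
manifests/
├── app1-deployment.yaml
├── app1-service.yaml
├── app2-deployment.yaml
├── app2-service.yaml
├── app3-deployment.yaml
└── app3-service.yaml

# apply the whole directory at once
kubectl apply -f manifests/
</code></pre>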
|
<p>I have a K8s Deployment that runs a Linux Docker image with Java and, on pod startup, executes a sh script that runs a Java process; it fails shortly after starting, triggering a pod crash and re-creation. It's a Java issue and the logs are not helpful, so I want to stop it before it fails and explore the pod file system and environment.</p>
<p>If I just try to <code>kill</code> the java PID the pod crashes instantly. Is there any way I can stop the Java process (without altering the program or sh script code) from inside the pod shell before it gets to the part of crashing, and not trigger a pod crash?</p>
<p>Thanks!</p>
|
<p>Starting from Kubernetes 1.19 you can debug running pods using <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">Ephemeral Containers</a> and <code>kubectl debug</code> command.</p>
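<p>A minimal sketch (the pod name, container name and debug image are placeholders):</p>
<pre><code># attach an ephemeral debug container to the running pod; --target lets it
# share the target container's process namespace when the runtime supports it
kubectl debug -it <pod-name> --image=busybox:1.28 --target=<java-container-name>
</code></pre>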
|
<p>I have n instances of my micro-service running as kubernetes pods but as there's some scheduling logic in the application code, I would like only one of these pods to execute the code.</p>
<p>In Spring applications, a common approach is to activate the <strong>scheduled</strong> profile <code>-Dspring.profiles.active=scheduled</code> for only one instance and leave it deactivated for the remaining instances. I'd like to know how one can accomplish this in Kubernetes.</p>
<hr>
<p><strong>Note:</strong> I am familiar with the approach where a kubernetes cron job can invoke an end point so that only one instances picked by load balancer executes the scheduled code. However, I would like to know if it's possible to configure kubernetes specification in such a way that only one pod has an environment variable set.</p>
|
<p>You can create a deployment with 1 replica that has the required environment variable and another deployment with as many replicas as you want without that variable. You may also set the same labels on both deployments so that a Service can load balance traffic between pods from both deployments if you need it.</p>
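<p>A hedged sketch of that layout (the names, image and the Spring profile variable are assumptions based on the question):</p>
<pre><code># single replica that runs the scheduling logic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
      role: scheduler
  template:
    metadata:
      labels:
        app: my-service        # shared label, used by the Service selector
        role: scheduler
    spec:
      containers:
      - name: my-service
        image: my-service:latest
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: scheduled
---
# remaining replicas without the scheduled profile
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-service
      role: worker
  template:
    metadata:
      labels:
        app: my-service
        role: worker
    spec:
      containers:
      - name: my-service
        image: my-service:latest
</code></pre>
<p>A Service selecting only <code>app: my-service</code> would then load balance across pods from both deployments.</p>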
|
<p>To use storage inside Kubernetes PODs I can use <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">volumes</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">persistent volumes</a>. While the volumes like <code>emptyDir</code> are ephemeral, I could use <code>hostPath</code> and many other cloud based volume plugins which would provide a persistent solution in volumes itself.</p>
<p>In that case why should I be using Persistent Volume then?</p>
|
<p>It is very important to understand the main differences between <code>Volumes</code> and <code>PersistentVolumes</code>. Both <code>Volumes</code> and <code>PersistentVolumes</code> are Kubernetes resources which provides an abstraction of a data storage facility.</p>
<ul>
<li><p><code>Volumes</code>: let your pod write to a filesystem that exists as long as the pod exists. They also let you share data between containers in the same pod but data in that volume will be destroyed when the pod is restarted. <code>Volume</code> decouples the storage from the Container. Its lifecycle is coupled to a pod.</p>
</li>
<li><p><code>PersistentVolumes</code>: serve as long-term storage in your Kubernetes cluster. They exist beyond containers, pods, and nodes. A pod uses a persistent volume claim to get read and write access to the persistent volume (see the sketch after this list). <code>PersistentVolume</code> decouples the storage from the Pod. Its lifecycle is independent. It enables safe pod restarts and sharing data between pods.</p>
</li>
</ul>
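<p>A minimal sketch of that claim-based flow (names and sizes are illustrative; the cluster needs a default <code>StorageClass</code> or a matching pre-provisioned <code>PersistentVolume</code>):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
</code></pre>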
<p>When it comes to <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a>:</p>
<blockquote>
<p>A <code>hostPath</code> volume mounts a file or directory from the host node's
filesystem into your Pod.</p>
</blockquote>
<p><code>hostPath</code> has its usage scenarios, but in general it might not be recommended, for several reasons:</p>
<ul>
<li><p>Pods with identical configuration (such as created from a <code>PodTemplate</code>) may behave differently on different nodes due to different files on the nodes</p>
</li>
<li><p>The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</p>
</li>
<li><p>You don't always directly control which node your pods will run on, so you're not guaranteed that the pod will actually be scheduled on the node that has the data volume.</p>
</li>
<li><p>If a node goes down you need the pod to be scheduled on other node where your locally provisioned volume will not be available.</p>
</li>
</ul>
<p>The <code>hostPath</code> would be good if for example you would like to use it for log collector running in a <code>DaemonSet</code>.</p>
<p>I recommend the <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html" rel="noreferrer">Kubernetes Volumes Guide</a> as a nice supplement to this topic.</p>
|
<p>I am trying to create a Helm chart (x) with 5 deployments within chart (x) in a specific order:</p>
<ol>
<li>Deployment 1 ( zk)</li>
<li>Deployment 2 (security)</li>
<li>Deployment 3 (localmaster)</li>
<li>Deployment 4 (nginx)</li>
<li>Deployment 5 (semoss)</li>
</ol>
<p>Helm/Tiller version: "v2.12.3"
Kubectl version: Major:"1", Minor:"17"
Minikube version: v1.6.2</p>
<p>What I currently have:
RESOURCES:
==> v1/Deployment</p>
<p>NAME</p>
<ol>
<li>Localmaster</li>
<li>Nginx</li>
<li>Security</li>
<li>Semoss</li>
<li>Zk</li>
</ol>
<p>I can easily deploy chart (x) but once I run helm ls, my (x) chart is in a random order as you can see above. I only have one chart name (x) and within (x) I have:</p>
<p>Chart.yaml <strong>charts</strong> <strong>templates</strong> values.yaml</p>
<p><strong>Templates</strong> and <strong>charts</strong> are directories and the rest are files.
Is there a specific way or trick to have my x (chart) deploy in the order I want? I’ve done some research and I am not sure if helm spray is the right call, as I am trying to deploy one chart with different deployments, as opposed to an umbrella chart with many sub-charts.
Let me know if you need more info.</p>
|
<p><a href="https://helm.sh/docs/" rel="noreferrer">Helm</a> is a package manager; it allows you to define applications as a set of components on your cluster and provides mechanisms to manage those sets from start to end.</p>
<p>Helm itself does not create pods; it sends requests to the Kubernetes API and then Kubernetes creates everything.</p>
<p>I have one idea how it can be achieved using Helm.</p>
<p>Helm's order of deploying <code>Kinds</code> is hardcoded <a href="https://github.com/helm/helm/blob/9ad53aac42165a5fadc6c87be0dea6b115f93090/pkg/tiller/kind_sorter.go#L29" rel="noreferrer">here</a>. However, if you want to set the deployment order of resources of the same kind, it can be done using <code>annotations</code>.</p>
<p>You could set a <a href="https://helm.sh/docs/topics/charts_hooks/#the-available-hooks" rel="noreferrer">pre-install</a> hook annotation with a <code>hook-weight</code> like in <a href="https://helm.sh/docs/topics/charts_hooks/#writing-a-hook" rel="noreferrer">this</a> example (a lower <code>hook-weight</code> value has higher priority). A similar case can be found on <a href="https://github.com/helm/helm/issues/1228#issuecomment-249707316" rel="noreferrer">Github</a>.</p>
<p>It would look like example below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
helm.sh/hook: pre-install
helm.sh/hook-weight: "10"
labels:
app.kubernetes.io/instance: test
...
</code></pre>
<p>You can check which deployment was created first using <code>kubectl get events</code>. However, creation of pods is still scheduled by Kubernetes.</p>
<p>To obtain exactly what you need you can use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">initContainers</a> and <code>hardcode</code> a "sleep" command (see the sketch below).<br> First deployment with sleep 1s, second deployment with 5s, third with 10s, depending on how long each deployment needs to create all its pods.</p>
<p>You can check <a href="https://www.alibabacloud.com/blog/helm-charts-and-templates-hooks-and-tests-part-3_595650" rel="noreferrer">this</a> article, but keep in mind <code>spec.containers</code> and <code>spec.initContainers</code> are two different things.</p>
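<p>A hedged sketch of that <code>initContainers</code> approach, using the deployment names from the question (the image names and sleep duration are assumptions; polling the dependency, e.g. with <code>nslookup</code>, is more robust than a fixed sleep):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: localmaster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: localmaster
  template:
    metadata:
      labels:
        app: localmaster
    spec:
      initContainers:
      - name: wait-for-zk-and-security
        image: busybox:1.28
        # crude ordering: give zk and security time to come up first
        command: ['sh', '-c', 'sleep 10']
      containers:
      - name: localmaster
        image: localmaster:latest
</code></pre>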
|
<p>I am looking for a way to create/retrieve/update/delete a user in Kubernetes, such that I can allow him certain stuff via RoleBindings.</p>
<p>Everything I have found is more or less manual work on the master node. However, I imagine a service deployed in Kubernetes I could call via an API to do the magic for me without doing any manual work. Is such a thing available?</p>
|
<p>From <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes</a></p>
<blockquote>
<p>All Kubernetes clusters have two categories of users: service accounts
managed by Kubernetes, and normal users.</p>
<p>Kubernetes does not have objects which represent
normal user accounts. Normal users cannot be added to a cluster
through an API call.</p>
<p>Even though a normal user cannot be added via an API call, any user
that presents a valid certificate signed by the cluster's certificate
authority (CA) is considered authenticated.</p>
</blockquote>
<p>So there is no API call to create a normal user. However, you can create service accounts that can have RoleBindings <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples" rel="nofollow noreferrer">bound</a> to them.</p>
<p>Another possibility is to create a TLS certificate, sign it with the Kubernetes cluster CA (using <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/" rel="nofollow noreferrer">CSRs</a>) and use it as a "normal user".</p>
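<p>A minimal sketch of such a <code>CertificateSigningRequest</code> (the user name and the base64-encoded request are placeholders you generate yourself, e.g. with <code>openssl</code>):</p>
<pre><code>apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  # base64-encoded PKCS#10 CSR, e.g.:
  # openssl req -new -key jane.key -subj "/CN=jane/O=dev" | base64 | tr -d '\n'
  request: <base64-encoded-csr>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
</code></pre>
<p>After approving it with <code>kubectl certificate approve jane</code>, the issued certificate can be used in a kubeconfig, and RoleBindings can reference the CN (<code>jane</code>) as the user name.</p>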
|
<p>I am new to Kubernetes. I have a K8 cluster with multiple deployments (more than 150), each having more than 4 pods scaled.
I have a requirement to increase resource limits for all deployments in the cluster; and I'm aware I can increase this directly via my deployment YAML.
However, I'm wondering if there is any way I can increase the resources for all deployments in one go.</p>
<p>Thanks for your help in advance.</p>
|
<p>There are few things to point out here:</p>
<ol>
<li>There is a <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="noreferrer">kubectl patch</a> command that allows you to:</li>
</ol>
<blockquote>
<p>Update field(s) of a resource using strategic merge patch, a JSON
merge patch, or a JSON patch.</p>
<p>JSON and YAML formats are accepted.</p>
</blockquote>
<p>See examples below:</p>
<pre><code>kubectl patch deploy deploy1 deploy2 --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
</code></pre>
<p>or:</p>
<pre><code>kubectl patch deploy $(kubectl get deploy -o go-template --template '{{range .items}}{{.metadata.name}}{{" "}}{{end}}') --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
</code></pre>
<p>For further reference see <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="noreferrer">this doc</a>.</p>
<ol start="2">
<li>If you add proper labels to your deployments, you can update their resources in bulk via the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-resources-em-" rel="noreferrer">kubectl set</a> command:</li>
</ol>
<hr />
<pre><code>kubectl set resources deployment -l key=value --limits memory=120Mi
</code></pre>
<ol start="3">
<li>Also, you can use some additional CLI like <code>sed</code>, <code>awk</code> or <code>xargs</code>. For example:</li>
</ol>
<hr />
<pre><code>kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
</code></pre>
<p>or:</p>
<pre><code>kubectl get deployments -o name | awk '{print $1 }' | xargs kubectl patch deployment $0 -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
<ol start="4">
<li>It is also worth noting that configuration files should be stored in version control before being pushed to the cluster. See the <a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="noreferrer">Configuration Best Practices</a> for more details.</li>
</ol>
|
<ul>
<li><p>Throttling stats are available in <code>/sys/fs/cgroup/cpu/cpu.stat</code></p>
</li>
<li><p>As per documentation of k8s CPU requests, the number is translated<br />
into a value that goes into <code>/sys/fs/cgroup/cpu/cpu.shares</code></p>
</li>
<li><p>If container A on a node has <code>cpu.shares</code> value twice that of
container B, then it will get twice amount of time if both are trying to run simultaneously.</p>
</li>
<li><p>Is this considered as throttling for container B based on
<code>cpu.shares</code> value?</p>
</li>
<li><p>If so, how can we measure this kind of throttling?</p>
</li>
</ul>
|
<p>First of all, CPU shares only come into play when there is CPU contention. As long as there is enough CPU, all tasks may run for as much time as they want.</p>
<p>Second, CPU requests do not cause throttling, only limits do. For more details please read more about <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Quality of Service for Pods</a>.</p>
<p>You can measure CPU throttling due to CPU limits using Metrics-server or Prometheus like that: <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/resource_alerts.libsonnet#L143" rel="nofollow noreferrer">https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/resource_alerts.libsonnet#L143</a></p>
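<p>For example, with cAdvisor metrics scraped by Prometheus, a hedged sketch of a per-container throttling ratio query could be:</p>
<pre><code># fraction of CFS periods in which the container was throttled by its CPU limit
sum(rate(container_cpu_cfs_throttled_periods_total{container!=""}[5m])) by (namespace, pod, container)
  /
sum(rate(container_cpu_cfs_periods_total{container!=""}[5m])) by (namespace, pod, container)
</code></pre>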
|
<p>I have deployed application on kubernetes cluster and for monitoring using prometheus and grafana. For kubernetes pods monitoring using Grafana dashboard: <em>Kubernetes cluster monitoring (via Prometheus) <a href="https://grafana.com/grafana/dashboards/315" rel="noreferrer">https://grafana.com/grafana/dashboards/315</a></em></p>
<p>I imported the dashboard using ID 315, but it renders without pod name and container name; instead I'm getting <code>pod_name</code>. Can anyone please help me get the pod name and container name in the dashboard?</p>
<p><a href="https://i.stack.imgur.com/T4drF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/T4drF.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/dSsBk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dSsBk.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/I1cXK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/I1cXK.png" alt="enter image description here"></a></p>
|
<p>The provided tutorial was last updated <strong>2 years ago</strong>.</p>
<p>The current version of Kubernetes is <a href="https://kubernetes.io/docs/setup/release/notes/#v1-17-0" rel="noreferrer">1.17</a>. As per its tags, the tutorial was tested on <code>Prometheus v1.3.0</code>, <code>Kubernetes v1.4.0</code> and <code>Grafana v3.1.1</code>, which are quite old at the moment.</p>
<p>In requirements you have statement:</p>
<blockquote>
<p>Prometheus will use metrics provided by <strong>cAdvisor via kubelet</strong> service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.</p>
</blockquote>
<p>In <code>Kubernetes 1.16</code> the metrics labels <code>pod_name</code> and <code>container_name</code> were removed. Instead you need to use <code>pod</code> and <code>container</code>. You can verify it <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#removed-metrics" rel="noreferrer">here</a>.</p>
<blockquote>
<p>Any Prometheus queries that match <code>pod_name</code> and <code>container_name</code> labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead. </p>
</blockquote>
<p>Please check <a href="https://github.com/istio/istio/issues/18827" rel="noreferrer">this Github Thread</a> about dashboard bug for more information.</p>
<p><strong>Solution</strong></p>
<p>Please change <code>pod_name</code> to <code>pod</code> in your query.</p>
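<p>For example, a CPU panel query would change roughly like this (the dashboard variable is illustrative):</p>
<pre><code># before (Kubernetes < 1.16)
sum(rate(container_cpu_usage_seconds_total{pod_name=~"$pod"}[5m])) by (pod_name, container_name)

# after (Kubernetes >= 1.16)
sum(rate(container_cpu_usage_seconds_total{pod=~"$pod"}[5m])) by (pod, container)
</code></pre>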
|
<p>I can't connect Docker CLI to the remote Docker demon inside minikube.</p>
<p>I've done <code>minikube delete</code> and then <code>minikube start --driver=hyperv</code> but when I do & <code>minikube -p minikube docker-env | Invoke-Expression</code> it comes back with a weird error which is:</p>
<pre><code>You: The term 'You' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Invoke-Expression: Cannot bind argument to parameter 'Command' because it is an empty string.
Invoke-Expression:
Line |
1 | & minikube -p minikube docker-env | Invoke-Expression
| ~~~~~~~~~~~~~~~~~
| The string is missing the terminator: '.
</code></pre>
<p>Can anybody help with this?</p>
|
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already discussed in the comments the solution is to use the <a href="https://minikube.sigs.k8s.io/docs/commands/docker-env/" rel="nofollow noreferrer">minikube docker-env command</a>:</p>
<blockquote>
<pre><code>minikube docker-env
</code></pre>
<p>Configure environment to use minikube’s Docker daemon</p>
<p><strong>Synopsis</strong></p>
<p>Sets up docker env variables; similar to <code>$(docker-machine env)</code>.</p>
<pre><code>minikube docker-env [flags]
</code></pre>
</blockquote>
<p>Notice the <code>--shell</code> option:</p>
<pre><code> --shell string Force environment to be configured for a specified shell: [fish, cmd, powershell, tcsh, bash, zsh], default is auto-detect
</code></pre>
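<p>So in PowerShell you can force the shell explicitly, which should avoid the mis-detected output shown in the question, for example:</p>
<pre><code>& minikube -p minikube docker-env --shell powershell | Invoke-Expression
</code></pre>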
|
<p>I use kubernetes on docker for windows.<br>
And I want to use Kaniko but I could not build an image on local kubernetes.</p>
<p>Dockerfile</p>
<pre><code>FROM ubuntu:18.04
RUN apt update
RUN apt install -y ssh
</code></pre>
<p>kanikopod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
containers:
- image: gcr.io/kaniko-project/executor:latest
args:
- --dockerfile=/mnt/Dockerfile
- --context= /mnt
- --no-push
name: kaniko
command:
- sleep
- infinity
volumeMounts:
- mountPath: /mnt
name: mount-volume
restartPolicy: Never
volumes:
- name: mount-volume
persistentVolumeClaim:
claimName: kaniko-workspace
</code></pre>
<p>But ContainerCannotRun error occurred.</p>
<p>kubectl describe pods kaniko<br>
result is</p>
<pre><code>Name: kaniko
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: docker-desktop/192.168.65.3
Start Time: Mon, 06 May 2019 18:13:47 +0900
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"kaniko","namespace":"default"},"spec":{"containers":[{"args":["--dock...
Status: Pending
IP:
Containers:
kaniko:
Container ID:
Image: gcr.io/kaniko-project/executor:latest
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
infinity
Args:
--dockerfile=/mnt/Dockerfile
--context= /mnt
--no-push
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from mount-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spjnr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mount-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: kaniko-workspace
ReadOnly: false
default-token-spjnr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spjnr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned default/kaniko to docker-desktop
</code></pre>
<p>kubectl logs kaniko<br>
But there was no output.</p>
<p>I think "--destination=" option is needed for kaniko, but I cannot find the information.<br>
What should I do?</p>
|
<p>Try removing these lines - overriding <code>command</code> replaces the Kaniko executor's entrypoint with <code>sleep</code>, so the build never starts:</p>
<pre><code> command:
- sleep
- infinity
</code></pre>
|
<p>I'm trying to setup <code>Nginx-ingress controller</code> to manage two paths on the same <code>hostname</code> in bare metal based cluster.</p>
<p>In the <strong>app1</strong> namespace i have below nginx resource:-</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app1-ingress
namespace: app1
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
</code></pre>
<p>And in the <strong>app2</strong> namespace i have below nginx resource:-</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app2-ingress
namespace: app2
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>My <code>app1-service</code> was applied first and it is running fine; now when I applied the second <code>app2-service</code> it shows the warning below and I am not able to access it in the browser.</p>
<pre><code>Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
</code></pre>
<p>How do i configure my nginx ingress resource to connect multiple service paths on the same hostname?</p>
|
<p>Default Nginx Ingress controller doesn't support having different <code>Ingress</code> resources with the same hostname. You can have one <code>Ingress</code> resource that contains multiple paths, but in this case all apps should live in one namespace. Like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app1-ingress
namespace: app1
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>Splitting ingresses between namespaces is currently not supported by standard Nginx Ingress controller.</p>
<p>You may however take a look at an alternative implementation of <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Ingress</a> by Nginx Inc. They have support for <a href="https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/mergeable-ingress-types" rel="nofollow noreferrer">Mergeable Ingresses</a>.</p>
|
<p>I'm deploying HA Vault on K8s (EKS) and getting this error on one of the Vault pods, which I think is causing the other pods to fail as well.
This is the output of <code>kubectl get events</code>:<br />
search for: <code>nodes are available: 1 Insufficient memory</code></p>
<pre><code>26m Normal Created pod/vault-1 Created container vault
26m Normal Started pod/vault-1 Started container vault
26m Normal Pulled pod/vault-1 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
7m40s Warning BackOff pod/vault-1 Back-off restarting failed container
2m38s Normal Scheduled pod/vault-1 Successfully assigned vault-foo/vault-1 to ip-10-101-0-103.ec2.internal
2m35s Normal SuccessfulAttachVolume pod/vault-1 AttachVolume.Attach succeeded for volume "pvc-acfc7e26-3616-4075-ab79-0c3f7b0f6470"
2m35s Normal SuccessfulAttachVolume pod/vault-1 AttachVolume.Attach succeeded for volume "pvc-19d03d48-1de2-41f8-aadf-02d0a9f4bfbd"
48s Normal Pulled pod/vault-1 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
48s Normal Created pod/vault-1 Created container vault
99s Normal Started pod/vault-1 Started container vault
60s Warning BackOff pod/vault-1 Back-off restarting failed container
27m Normal TaintManagerEviction pod/vault-2 Cancelling deletion of Pod vault-foo/vault-2
28m Warning FailedScheduling pod/vault-2 0/4 nodes are available: 1 Insufficient memory, 4 Insufficient cpu.
28m Warning FailedScheduling pod/vault-2 0/5 nodes are available: 1 Insufficient memory, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
27m Normal Scheduled pod/vault-2 Successfully assigned vault-foo/vault-2 to ip-10-101-0-103.ec2.internal
27m Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-fb91141d-ebd9-4767-b122-da8c98349cba"
27m Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-95effe76-6e01-49ad-9bec-14e091e1a334"
27m Normal Pulling pod/vault-2 Pulling image "hashicorp/vault-enterprise:1.5.0_ent"
27m Normal Pulled pod/vault-2 Successfully pulled image "hashicorp/vault-enterprise:1.5.0_ent"
26m Normal Created pod/vault-2 Created container vault
26m Normal Started pod/vault-2 Started container vault
26m Normal Pulled pod/vault-2 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
7m26s Warning BackOff pod/vault-2 Back-off restarting failed container
2m36s Warning FailedScheduling pod/vault-2 0/7 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 4 Insufficient cpu.
114s Warning FailedScheduling pod/vault-2 0/8 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 4 Insufficient cpu.
104s Warning FailedScheduling pod/vault-2 0/9 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
93s Normal Scheduled pod/vault-2 Successfully assigned vault-foo/vault-2 to ip-10-101-0-82.ec2.internal
88s Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-fb91141d-ebd9-4767-b122-da8c98349cba"
88s Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-95effe76-6e01-49ad-9bec-14e091e1a334"
83s Normal Pulling pod/vault-2 Pulling image "hashicorp/vault-enterprise:1.5.0_ent"
81s Normal Pulled pod/vault-2 Successfully pulled image "hashicorp/vault-enterprise:1.5.0_ent"
38s Normal Created pod/vault-2 Created container vault
37s Normal Started pod/vault-2 Started container vault
38s Normal Pulled pod/vault-2 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
4s Warning BackOff pod/vault-2 Back-off restarting failed container
2m38s Normal Scheduled pod/vault-agent-injector-d54bdc675-qwsmz Successfully assigned vault-foo/vault-agent-injector-d54bdc675-qwsmz to ip-10-101-2-91.ec2.internal
2m37s Normal Pulling pod/vault-agent-injector-d54bdc675-qwsmz Pulling image "hashicorp/vault-k8s:latest"
2m36s Normal Pulled pod/vault-agent-injector-d54bdc675-qwsmz Successfully pulled image "hashicorp/vault-k8s:latest"
2m36s Normal Created pod/vault-agent-injector-d54bdc675-qwsmz Created container sidecar-injector
2m35s Normal Started pod/vault-agent-injector-d54bdc675-qwsmz Started container sidecar-injector
28m Normal Scheduled pod/vault-agent-injector-d54bdc675-wz9ws Successfully assigned vault-foo/vault-agent-injector-d54bdc675-wz9ws to ip-10-101-0-87.ec2.internal
28m Normal Pulled pod/vault-agent-injector-d54bdc675-wz9ws Container image "hashicorp/vault-k8s:latest" already present on machine
28m Normal Created pod/vault-agent-injector-d54bdc675-wz9ws Created container sidecar-injector
28m Normal Started pod/vault-agent-injector-d54bdc675-wz9ws Started container sidecar-injector
3m22s Normal Killing pod/vault-agent-injector-d54bdc675-wz9ws Stopping container sidecar-injector
3m22s Warning Unhealthy pod/vault-agent-injector-d54bdc675-wz9ws Readiness probe failed: Get https://10.101.0.73:8080/health/ready: dial tcp 10.101.0.73:8080: connect: connection refused
3m18s Warning Unhealthy pod/vault-agent-injector-d54bdc675-wz9ws Liveness probe failed: Get https://10.101.0.73:8080/health/ready: dial tcp 10.101.0.73:8080: connect: no route to host
28m Normal SuccessfulCreate replicaset/vault-agent-injector-d54bdc675 Created pod: vault-agent-injector-d54bdc675-wz9ws
2m38s Normal SuccessfulCreate replicaset/vault-agent-injector-d54bdc675 Created pod: vault-agent-injector-d54bdc675-qwsmz
28m Normal ScalingReplicaSet deployment/vault-agent-injector Scaled up replica set vault-agent-injector-d54bdc675 to 1
2m38s Normal ScalingReplicaSet deployment/vault-agent-injector Scaled up replica set vault-agent-injector-d54bdc675 to 1
28m Normal EnsuringLoadBalancer service/vault-ui Ensuring load balancer
28m Normal EnsuredLoadBalancer service/vault-ui Ensured load balancer
26m Normal UpdatedLoadBalancer service/vault-ui Updated load balancer with new hosts
3m24s Normal DeletingLoadBalancer service/vault-ui Deleting load balancer
3m23s Warning PortNotAllocated service/vault-ui Port 32476 is not allocated; repairing
3m23s Warning ClusterIPNotAllocated service/vault-ui Cluster IP 172.20.216.143 is not allocated; repairing
3m22s Warning FailedToUpdateEndpointSlices service/vault-ui Error updating Endpoint Slices for Service vault-foo/vault-ui: failed to update vault-ui-crtg4 EndpointSlice for Service vault-foo/vault-ui: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "vault-ui-crtg4": the object has been modified; please apply your changes to the latest version and try again
3m16s Warning FailedToUpdateEndpoint endpoints/vault-ui Failed to update endpoint vault-foo/vault-ui: Operation cannot be fulfilled on endpoints "vault-ui": the object has been modified; please apply your changes to the latest version and try again
2m52s Normal DeletedLoadBalancer service/vault-ui Deleted load balancer
2m39s Normal EnsuringLoadBalancer service/vault-ui Ensuring load balancer
2m36s Normal EnsuredLoadBalancer service/vault-ui Ensured load balancer
96s Normal UpdatedLoadBalancer service/vault-ui Updated load balancer with new hosts
28m Normal NoPods poddisruptionbudget/vault No matching pods found
28m Normal SuccessfulCreate statefulset/vault create Pod vault-0 in StatefulSet vault successful
28m Normal SuccessfulCreate statefulset/vault create Pod vault-1 in StatefulSet vault successful
28m Normal SuccessfulCreate statefulset/vault create Pod vault-2 in StatefulSet vault successful
2m40s Normal NoPods poddisruptionbudget/vault No matching pods found
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-0 in StatefulSet vault successful
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-1 in StatefulSet vault successful
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-2 in StatefulSet vault successful
</code></pre>
<p>And this is my helm :</p>
<pre><code># Vault Helm Chart Value Overrides
global:
enabled: true
tlsDisable: false
injector:
enabled: true
# Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
image:
repository: "hashicorp/vault-k8s"
tag: "latest"
resources:
requests:
memory: 256Mi
cpu: 250m
limits:
memory: 256Mi
cpu: 250m
server:
# Use the Enterprise Image
image:
repository: "hashicorp/vault-enterprise"
tag: "1.5.0_ent"
# These Resource Limits are in line with node requirements in the
# Vault Reference Architecture for a Small Cluster
resources:
requests:
memory: 8Gi
cpu: 2000m
limits:
memory: 16Gi
cpu: 2000m
# For HA configuration and because we need to manually init the vault,
# we need to define custom readiness/liveness Probe settings
readinessProbe:
enabled: true
path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
livenessProbe:
enabled: true
path: "/v1/sys/health?standbyok=true"
initialDelaySeconds: 60
# extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
# used to include variables required for auto-unseal.
extraEnvironmentVars:
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
# extraVolumes is a list of extra volumes to mount. These will be exposed
# to Vault in the path .
#extraVolumes:
# - type: secret
# name: tls-server
# - type: secret
# name: tls-ca
# - type: secret
# name: kms-creds
extraVolumes:
- type: secret
name: vault-server-tls
# This configures the Vault Statefulset to create a PVC for audit logs.
# See https://www.vaultproject.io/docs/audit/index.html to know more
auditStorage:
enabled: true
standalone:
enabled: false
# Run Vault in "HA" mode.
ha:
enabled: true
replicas: 3
raft:
enabled: true
setNodeId: true
config: |
ui = true
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
tls_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
storage "raft" {
path = "/vault/data"
retry_join {
leader_api_addr = "http://vault-0.vault-internal:8200"
leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
}
retry_join {
leader_api_addr = "http://vault-1.vault-internal:8200"
leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
}
retry_join {
leader_api_addr = "http://vault-2.vault-internal:8200"
leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
}
}
service_registration "kubernetes" {}
# Vault UI
ui:
enabled: true
serviceType: "LoadBalancer"
serviceNodePort: null
externalPort: 8200
# For Added Security, edit the below
#loadBalancerSourceRanges:
# - < Your IP RANGE Ex. 10.0.0.0/16 >
# - < YOUR SINGLE IP Ex. 1.78.23.3/32 >
</code></pre>
<p>what did I not configure right?</p>
|
<p>There are several issues here and they are all represented by error messages like:</p>
<pre><code>0/9 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
</code></pre>
<p>You got 9 Nodes but none of them are available for scheduling due to a different set of conditions. Note that each Node can be affected by multiple issues and so the numbers can add up to more than what you have on total nodes.</p>
<p>Let's break them down one by one:</p>
<ul>
<li><p><code>Insufficient memory</code>: Execute <code>kubectl describe node <node-name></code> to check how much free memory is available there. Check the requests and limits of your pods. Note that Kubernetes will reserve the full amount of memory a pod requests regardless of how much the pod actually uses.</p>
</li>
<li><p><code>Insufficient cpu</code>: Analogical as above.</p>
</li>
<li><p><code>node(s) didn't match pod affinity/anti-affinity</code>: Check your <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">affinity/anti-affinity</a> rules.</p>
</li>
<li><p><code>node(s) didn't satisfy existing pods anti-affinity rules</code>: Same as above.</p>
</li>
<li><p><code>node(s) had volume node affinity conflict</code>: Happens when a pod cannot be scheduled because it cannot reach its volume from another Availability Zone. You can fix this by creating a <code>storageclass</code> for a single zone and then using that <code>storageclass</code> in your PVC.</p>
</li>
<li><p><code>node(s) were unschedulable</code>: This is because the node is marked as <code>Unschedulable</code>. Which leads us to the next issue below:</p>
</li>
<li><p><code>node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate</code>: This corresponds to the <code>NodeCondition</code> <code>Ready</code> = <code>False</code>. You can use <code>kubectl describe node</code> to check taints and <code>kubectl taint nodes <node-name> <taint-name>-</code> in order to remove them (see the commands after this list). Check <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a> for more details.</p>
</li>
</ul>
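<p>A short sketch of the inspection commands mentioned above (the node name is a placeholder):</p>
<pre><code># check allocatable resources, current requests/limits and taints on a node
kubectl describe node <node-name>

# remove a taint, e.g. the not-ready taint seen in the events (the trailing "-" removes it)
kubectl taint nodes <node-name> node.kubernetes.io/not-ready-
</code></pre>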
<p>Also there is a <a href="https://github.com/hashicorp/consul-helm/issues/243" rel="nofollow noreferrer">GitHub thread</a> with a similar issue that you may find useful.</p>
<p>Try checking/eliminating those issues one by one (starting from the first listed above) as they can cause a "chain reaction" in some scenarios.</p>
|
<p>We only have namespace level access and was wondering how we could deploy Istio on a specific namespace which will be specific to applications deployed on that specific namespace. Is this a possibility?</p>
<p>I couldn't find any content online so was wondering if this has been tried out.</p>
|
<p>Of course it can. The easiest way to do that is to use the Helm chart: <a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio" rel="nofollow noreferrer">https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio</a>.
Just specify the desired namespace when doing <code>helm install</code> and that's all.</p>
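<p>As a rough sketch (Helm 2 syntax, which that chart targets; the release and namespace names below are placeholders, and depending on the Istio version you may need to install the <code>istio-init</code> chart for CRDs first):</p>
<pre><code>kubectl create namespace my-apps
helm install install/kubernetes/helm/istio --name istio --namespace my-apps
</code></pre>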
|
<p>I'm trying to install Kubernetes with dashboard but I get the following issue:</p>
<pre><code>test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-ksc9n 0/1 CrashLoopBackOff 14 (2m15s ago) 49m
kube-system coredns-6d4b75cb6d-27m6b 0/1 ContainerCreating 0 4h
kube-system coredns-6d4b75cb6d-vrgtk 0/1 ContainerCreating 0 4h
kube-system etcd-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-apiserver-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-controller-manager-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-proxy-6v8w6 1/1 Running 1 (106m ago) 4h
kube-system kube-scheduler-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kubernetes-dashboard dashboard-metrics-scraper-7bfdf779ff-dfn4q 0/1 Pending 0 48m
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-9kh7h 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-5676d8b865-q459s 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-6cdd697d84-kqnxl 0/1 Pending 0 48m
test@ubuntukubernetes1:~$
</code></pre>
<p>Log files:</p>
<pre><code>test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0808 23:40:17.324664 1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0808 23:40:17.324753 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0808 23:40:17.547453 1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods "kube-flannel-ds-ksc9n" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
test@ubuntukubernetes1:~$
</code></pre>
<p>Do you know how this issue can be solved? I tried the following installation:</p>
<pre><code>swapoff -a
Remove following line from /etc/fstab
/swap.img none swap sw 0 0
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl proxy --address 192.168.1.133 --accept-hosts '.*'
</code></pre>
<p>Can you advise?</p>
|
<p>I had the same situation on a new deployment today. It turns out the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.</p>
<p>I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except under the roleRef heading (it threw an error when I added it there). All pods came up as 'Running' after the new yml was applied.</p>
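<p>For illustration, the relevant excerpt of the RBAC manifest ends up looking roughly like this (only a sketch, not the full file; the point is that the ServiceAccount's namespace must match the namespace the DaemonSet runs in, while <code>roleRef</code> itself gets no namespace):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel   # was kube-system
</code></pre>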
|
<p>I'm trying to filter out all paths that begin with <code>/something</code>. While the regex seems PCRE valid by online testers, the result is <code>404</code> for all paths:</p>
<pre class="lang-json prettyprint-override"><code>kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: myhost.com
http:
paths:
- backend:
serviceName: myservice
servicePort: 80
path: /^([^something].*)
</code></pre>
<p>Tried to play with the regex (e.g, <code>path: /(^[^something])(.*)</code>), but still get <code>404</code> for all.</p>
<p>What am I missing?</p>
<p>Using <code>v1.12.2</code> client with <code>v1.14.1</code> server.</p>
|
<p>If you want to deny all traffic to <code>/something</code> you should use the <code>Nginx</code> annotation called <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">server-snippet</a>. It allows you to add custom configuration to the generated server block.</p>
<p>It would look like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-snippet
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location /something {
        deny all;
      }
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
<p>A similar example can be found in this <a href="https://github.com/nginxinc/kubernetes-ingress/issues/161#issuecomment-322224255" rel="nofollow noreferrer">Github</a> thread.</p>
<p>You can also consider second option with 2 <code>ingress</code> objects and <code>authentication</code>. This was mentioned in another <a href="https://stackoverflow.com/a/51894604/11148139">StackOverflow question</a>.</p>
<p>In addition, you can deny access not only by location but also for specific IPs. That can be achieved with the annotation called <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range" rel="nofollow noreferrer">whitelist-source-range</a>.</p>
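<p>As a sketch, that annotation takes a comma-separated list of CIDRs (the ranges below are placeholders):</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16,1.2.3.4/32"
</code></pre>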
|
<p>I've created an <code>EKS</code> cluster using <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="nofollow noreferrer">this terraform module</a>:</p>
<pre><code>module "eks" {
...
worker_groups = [
{
name = "worker-group-1"
instance_type = "t2.small"
asg_desired_capacity = 2
}
]
}
</code></pre>
<p>As you can see, It creates a worker group of <code>t2.small</code> <code>EC2</code> instances. <code>t2.small</code> instances are <code>EBS-backed</code> and have <code>20 GiB</code> of <code>EBS</code> volume by default. What happens when one of these nodes, consumes all of it's allocated <code>EBS</code> volume?</p>
<ul>
<li>Does the <code>cluster autoscaler</code> (which is enabled in my <code>EKS</code> cluster)
create a new worker node?</li>
<li>Or, Does the allocated <code>EBS</code> volume get increased?</li>
</ul>
<p>If none of the above scenarios happens, How should I deal with it in my cluster? What's the best approach?</p>
|
<blockquote>
<p>What happens when one of these nodes, consumes all of it's allocated
<code>EBS</code> volume?</p>
</blockquote>
<ul>
<li>The node reports a condition when a compute resource is under pressure. The scheduler views that condition as a signal to dissuade placing additional pods on the node. In this case, the node condition would be <code>DiskPressure</code>, which means that no new Pods are scheduled to the node (see the commands after this list for how to check it). You can find more details regarding that topic by checking the <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">Configure Out of Resource Handling</a> docs.</li>
</ul>
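<p>For example, you can check the condition like this (the node name is a placeholder):</p>
<pre><code># full view of conditions, capacity and allocated resources
kubectl describe node <node-name>

# just the DiskPressure condition
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
</code></pre>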
<blockquote>
<p>Does the cluster autoscaler (which is enabled in my EKS cluster)
create a new worker node? Or, Does the allocated EBS volume get increased?</p>
</blockquote>
<ul>
<li>The autoscaler checks the cluster for pods that cannot be scheduled on any existing nodes because of inadequate CPU or memory resources or because the pod’s node affinity rules or taint tolerations do not match an existing node. If the cluster has unschedulable pods, the autoscaler will check its managed node pools to decide if adding a node would unblock the pod. If so, it will add a node if the node pool can be increased in size. All the specifics of AWS implementation of the Cluster Autoscaler can be found <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">here</a>.</li>
</ul>
|
<p>I'm trying to send a GET request from Lambda to a pod without exposing it externally. That pod got a ClusterIP service attached to it. I can access this pod directly via internet (via ingress) so I know it works properly.</p>
<p>Here is the part of the service attached to a pod:</p>
<pre><code>spec:
clusterIP: 10.xxx.xxx.xx
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: app_name
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>I attached lambda to the vpc and subnets but if I use below code I get an error. I tried using pod IP address and cluster IP address but with the same error. It works for google/other sites and when the pod is exposed to the internet.</p>
<pre><code> const http = require('http');
exports.handler = async (event, context) => {
return new Promise((resolve, reject) => {
const options = {
host: 'www.google.com',
path: '/api/',
port: 80,
method: 'GET'
};
const req = http.request(options, (res) => {
let data = '';
// A chunk of data has been recieved.
res.on('data', (chunk) => {
data += chunk;
});
// The whole response has been received. Print out the result.
res.on('end', () => {
console.log(JSON.parse(data));
});
});
req.write('');
req.end();
});
};
</code></pre>
<p>Response:</p>
<pre><code>{
"errorMessage": "Task timed out after 3.00 seconds"
}
</code></pre>
<p>I understand all below and I'm happy to change the service type but I don't know how do I supposed to address the pod in my Lambda (replace www.google.com with something). Happy to try any other code or python script.</p>
<pre><code>A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.
</code></pre>
<p>Anyone tried something like that?</p>
|
<p>One of the easiest ways is to expose the service behind an internal Load Balancer. This way your service will not be exposed to the Internet but will still be available within the VPC.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
type: LoadBalancer
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>A little more advanced but more flexible solution is to use the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress</a> controller behind the same kind of internal Load Balancer.</p>
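<p>Once the Service is up, you can read the internal DNS name that AWS assigns to the load balancer and use it as the <code>host</code> in your Lambda request options, for example:</p>
<pre><code>kubectl get service my-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
</code></pre>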
|
<p>Colleagues, can someone suggest a UI for easily setting up Prometheus alerts? JSON/YAML config is certainly cool, but it's uncomfortable to work with, and I think not only for me.
Thank you.</p>
|
<p>The most common mix is <code>Prometheus</code> and <code>Grafana</code>.</p>
<p><a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> also providing plugi-in for <a href="https://grafana.com/grafana/plugins/prometheus" rel="nofollow noreferrer">Prometheus</a> and <a href="https://grafana.com/grafana/plugins/camptocamp-prometheus-alertmanager-datasource/installation" rel="nofollow noreferrer">Prometheus AlertManager</a>. You can find many tutorials online with installation, configuration and integration of both like <a href="https://www.scaleway.com/en/docs/configure-prometheus-monitoring-with-grafana/" rel="nofollow noreferrer">this</a>.</p>
<p>You could also check other UI's like <a href="https://github.com/pjhampton/kibana-prometheus-exporter#installing" rel="nofollow noreferrer">Kibana</a> or <a href="https://kiali.io/" rel="nofollow noreferrer">Kiali</a>, however I think <code>Grafana</code> would be best for your needs.</p>
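<p>If you run on Kubernetes, one common way to get Prometheus, Alertmanager and Grafana together is the <code>kube-prometheus-stack</code> Helm chart. A minimal sketch, assuming Helm 3 is installed (the release name is a placeholder):</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack
</code></pre>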
|
<p>I am unable to upload a file through a deployment YAML in Kubernetes.<br />
The deployment YAML</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: openjdk:14
ports:
- containerPort: 8080
volumeMounts:
- name: testing
mountPath: "/usr/src/myapp/docker.jar"
workingDir: "/usr/src/myapp"
command: ["java"]
args: ["-jar", "docker.jar"]
volumes:
- hostPath:
path: "C:\\Users\\user\\Desktop\\kubernetes\\docker.jar"
type: File
name: testing
</code></pre>
<p>I get the following error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned default/test-64fb7fbc75-mhnnj to minikube
Normal Pulled 13s (x3 over 15s) kubelet Container image "openjdk:14" already present on machine
Warning Failed 12s (x3 over 14s) kubelet Error: Error response from daemon: invalid mode: /usr/src/myapp/docker.jar
</code></pre>
<p>When I remove the volumeMount it runs with the error unable to access docker.jar.</p>
<pre><code> volumeMounts:
- name: testing
mountPath: "/usr/src/myapp/docker.jar"
</code></pre>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>That is a known issue with Docker on Windows. Right now it is not possible to correctly mount Windows directories as volumes.</p>
<p>You could try some of the workarounds mentioned by @CodeWizard in <a href="https://github.com/kubernetes/kubernetes/issues/59876" rel="nofollow noreferrer">this github thread</a> like <a href="https://github.com/kubernetes/kubernetes/issues/59876#issuecomment-628955935" rel="nofollow noreferrer">here</a> or <a href="https://github.com/kubernetes/kubernetes/issues/59876#issuecomment-390452420" rel="nofollow noreferrer">here</a>.</p>
<p>Also, if you are using VirtualBox, you might want to check <a href="https://stackoverflow.com/a/52887435/11560878">this solution</a>:</p>
<blockquote>
<p>On Windows, you can not directly map a Windows directory to your
container, because your containers reside inside a VirtualBox VM.
So your docker -v command actually maps the directory between the VM
and the container.</p>
<p>So you have to do it in two steps:</p>
<ol>
<li>Map a Windows directory to the VM through the VirtualBox manager.</li>
<li>Map a directory in your container to that directory in your VM.</li>
</ol>
<p>You'd better use the Kitematic UI to help you. It is much easier.</p>
</blockquote>
<p>Alternatively, you can deploy your setup in a Linux environment to avoid this class of issues entirely.</p>
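<p>As a possible workaround on minikube specifically, a minimal sketch assuming the VirtualBox driver (which by default shares <code>C:\Users</code> into the VM as <code>/c/Users</code>) would point the hostPath at the in-VM path instead of the Windows path:</p>
<pre><code>volumes:
- name: testing
  hostPath:
    # assumes the default C:\Users -> /c/Users share of the VirtualBox driver
    path: /c/Users/user/Desktop/kubernetes/docker.jar
    type: File
</code></pre>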
|
<p>I have a simple Node.js server running on some Kubernetes pods. When I delete a deployment with:</p>
<pre><code>kubectl delete deployment <deployment-name>
</code></pre>
<p>sometimes pods have a status of terminating for 5-8 seconds. Seems excessive - is there a way to kill them faster somehow?</p>
|
<p>If you really want to <em>kill</em> them instead of shutting them down <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">gracefully</a>, then:</p>
<pre><code>kubectl delete deployment <deployment-name> --grace-period=0
</code></pre>
<p>Also, if you have any <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer"><code>preStop</code> handlers configured</a>, you could investigate if they are causing any unnecessary delay.</p>
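<p>Another option is to lower the grace period permanently in the pod template of the Deployment, so every deletion is faster without extra flags. A minimal sketch (container name and image are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      terminationGracePeriodSeconds: 2   # default is 30s; the pod is force-killed after this
      containers:
      - name: node-server
        image: node:alpine
</code></pre>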
|
<p>The example has:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: usernamekey
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: passwordkey
restartPolicy: Never
</code></pre>
<p>the above from:</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
<p>I've created a secret like this:</p>
<pre><code>kubectl --namespace=mycustomnamespace create secret generic mysecret --from-literal=passwordkey="abc123" --from-literal=usernamekey="mememe"
</code></pre>
<p>I understand that the above secrets exist under the namespace.</p>
<p>But if I try this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
namespace: mycustomnamespace
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: usernamekey
namespace: mycustomnamespace
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: passwordkey
namespace: mycustomnamespace
restartPolicy: Never
</code></pre>
<p>(note that I added a namespace declaration under metadata)</p>
<p>I get this error:</p>
<blockquote>
<p>Error validating data:
[ValidationError(Pod.spec.containers[1].env[2].valueFrom.secretKeyRef):
unknown field "namespace" in io.k8s.api.core.v1.SecretKeySelector,
ValidationError(Pod.spec.containers[1].env[6].valueFrom.secretKeyRef):
unknown field "namespace" in io.k8s.api.core.v1.SecretKeySelector];</p>
</blockquote>
<p>If I take out the namespace(s) (under the secretKeyRef(s))....the pod fails..with</p>
<blockquote>
<p>Warning Failed 2s (x8 over 1m) kubelet, minikube Error:
secret "mysecret" not found</p>
</blockquote>
<p>Yes, my secrets are in the namespace:</p>
<pre><code>kubectl get secrets --namespace mycustomnamespace
NAME TYPE DATA AGE
default-token-55bzp kubernetes.io/service-account-token 3 10m
mysecret Opaque 2 10m
</code></pre>
<p>APPEND : (resolution)</p>
<p>It was an error on my part. Check my comment under Vasily's answer.</p>
<p>But basically, the magic-sauce is that the below yml....</p>
<pre><code>metadata:
name: secret-env-pod
namespace: mycustomnamespace
</code></pre>
<p>the above yml should "drive" the namespaces (aka, set the scope of the namespace) for the rest of the configuration (yml) .... </p>
<p>(if you are a future reader of this question, double and triple check that you have everything under the correct namespace. ALL of your normal "get" statements need to use -n (aka --namespace) as a part.</p>
<p>example</p>
<pre><code>kubectl get pods
</code></pre>
<p>the above will only get pods under "default".</p>
<p>you have to do</p>
<pre><code>kubectl get pods --namespace mycustomnamespace
</code></pre>
|
<p>Simply remove <code>namespace: mycustomnamespace</code> from the pod's secretKeyRef definitions.</p>
<p>Also, your secret create command should look like this:</p>
<pre><code>kubectl --namespace=mycustomnamespace create secret generic mysecret --from-literal=passwordkey="abc123" --from-literal=usernamekey="mememe"
</code></pre>
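<p>For reference, the <code>env</code> section from the question then becomes:</p>
<pre><code>env:
  - name: SECRET_USERNAME
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: usernamekey
  - name: SECRET_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: passwordkey
</code></pre>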
|
<p>I'm studying the main components of kubernetes.
I was momentarily stuck regarding the concept of creating (deleting) a pod. In many charts or figures the pods are depicted inside the worker nodes and for this reason I was convinced that they were objects created directly in the worker node.</p>
<p>In depth this concept I came across some pages that see the pod as a simple placeholder in the API server instead.</p>
<p>In this reference <a href="https://banzaicloud.com/blog/k8s-custom-scheduler/" rel="nofollow noreferrer">link</a> it is said that in the first point the pod is created and in the fourth point that the pod is associated with the node from the API server. <br>
In this reference <a href="https://dzone.com/articles/kubernetes-lifecycle-of-a-pod" rel="nofollow noreferrer">link</a> it is said that "a new Pod object is created on API server but is not bound to any node." <br>
In this reference <a href="https://stackoverflow.com/questions/41012246/steps-involved-in-creating-a-pod-in-kubernetes">link</a> it is said that "A Pod has one container that is a placeholder generated by the Kubernetes API"</p>
<p>All this makes me think that a pod is not actually created in a worker node.
Could someone give me an explanation to clarify this idea for me?</p>
|
<p>Simply speaking, the process of running a pod is the following:</p>
<ol>
<li>A user makes an API request to create a pod in a namespace.</li>
<li>The API server validates the request, making sure that the user has the necessary authorization to create a pod in the given namespace and that the request conforms to the PodSpec.</li>
<li>If the request is valid, the API server creates an API object of kind "Pod" in its etcd database.</li>
<li>Kube-scheduler watches Pods and sees that there is a new Pod object. It then evaluates the Pod's resources, affinity rules, nodeSelectors, tolerations and so on and finally makes a decision on which node the pod should run. If there are no nodes available due to lack of resources or other constraints, the Pod remains in state Pending. Kube-scheduler periodically retries scheduling decisions for Pending pods.</li>
<li>After the Pod is scheduled, kube-scheduler binds it to the selected node, and the kubelet on that node picks it up.</li>
<li>The kubelet is then responsible for actually starting the pod's containers on that worker node (see the commands below for how to observe this).</li>
</ol>
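<p>You can observe this flow yourself (the pod name is a placeholder):</p>
<pre><code># watch the pod go from Pending to Running and see which node it was bound to
kubectl get pod mypod -o wide --watch

# the "Scheduled" event shows the kube-scheduler's decision
kubectl describe pod mypod
</code></pre>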
|
<p>I want to run Eventstore in Kubernetes node. I start the node with <code>minikube start</code>, then I apply this yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: eventstore-deployment
spec:
selector:
matchLabels:
app: eventstore
replicas: 1
template:
metadata:
labels:
app: eventstore
spec:
containers:
- name: eventstore
image: eventstore/eventstore
ports:
- containerPort: 1113
protocol: TCP
- containerPort: 2113
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: eventstore
spec:
selector:
app: eventstore
ports:
- protocol: TCP
port: 1113
targetPort: 1113
---
apiVersion: v1
kind: Service
metadata:
name: eventstore-dashboard
spec:
selector:
app: eventstore
ports:
- protocol: TCP
port: 2113
targetPort: 2113
nodePort: 30113
type: NodePort
</code></pre>
<p>the deployment, the replica set and the pod start, but nothing happens: Eventstore doesn't print to the log and I can't open its dashboard. Also, other services can't connect to <em>eventstore:1113</em>. There are no errors and the pod doesn't crash.
The only thing I see in the logs is "The selected container has not logged any messages yet".</p>
<p><a href="https://i.stack.imgur.com/9fTDD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9fTDD.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/YG1EH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YG1EH.png" alt="enter image description here"></a></p>
<p>I've tried a clean vanilla Minikube node with different VM drivers, and also a node with Ambassador + Linkerd configured. The results are the same. </p>
<p>But when I run Eventstore in Docker with this yaml file via <em>docker-compose</em></p>
<pre><code>eventstore:
image: eventstore/eventstore
ports:
- '1113:1113'
- '2113:2113'
</code></pre>
<p>Everything works fine: Eventstore outputs to logs, other services can connect to it and I can open its dashboard on 2113 port.</p>
<p><strong>UPDATE:</strong> Eventstore started working about 30-40 minutes after deployment. I've tried several times and had to wait each time. Other pods start working almost immediately (30 secs - 1 min) after deployment. </p>
|
<p>As @ligowsky confirmed in the comment section, the issue was caused by VM performance. Posting this as Community Wiki for better visibility. </p>
<p><code>Minikube</code> by default runs with <code>2 CPUs</code> and <code>2048 MB</code> of memory. More details can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L103" rel="nofollow noreferrer">here</a>.</p>
<p>You can change this if your VM has more resources.</p>
<p><strong>- During Minikube start</strong></p>
<pre><code>$ sudo minikube start --cpus 2 --memory 8192 --vm-driver=<driverType>
</code></pre>
<p><strong>- When Minikube already exists; note that it needs to be deleted and started again for the change to take effect</strong></p>
<pre><code>$ minikube config set memory 4096
⚠️ These changes will take effect upon a minikube delete and then a minikube start
</code></pre>
<p>More commands can be found in <a href="https://minikube.sigs.k8s.io/docs/examples/" rel="nofollow noreferrer">Minikube docs</a>.</p>
<p>In my case, when <code>Minikube</code> resources were 4 CPUs and 8192 MB of memory, I didn't have any issues with <code>eventstore</code>.</p>
<p><strong>OP's Solution</strong></p>
<p>OP used <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">Kind</a> to run the <code>eventstore</code> deployment. </p>
<blockquote>
<p>Kind is a tool for running local Kubernetes clusters using Docker
container "nodes". kind is primarily designed for testing Kubernetes
1.11+</p>
</blockquote>
<p><code>Kind</code> documentation can be found <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">here</a>.</p>
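<p>For reference, a minimal way to try the same deployment on <code>Kind</code> (the cluster name and the manifest file name are placeholders):</p>
<pre><code>kind create cluster --name eventstore
kubectl cluster-info --context kind-eventstore
kubectl apply -f eventstore.yaml
</code></pre>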
|
<p>I am using the following command to gracefully delete any stale pods in <em>Pending</em> state:</p>
<p><code>kubectl get pod -n my-namespace | grep Pending | awk '{print $1}' | xargs kubectl delete pod -n my-namespace</code></p>
<p>However, I would like to add a condition that deletes only those pods who have been in pending state for more than N hours. There is the <code>AGE</code> column returned with <code>get pods</code> but its time unit varies and I am assuming there is a better way.</p>
<p>Also would appreciate if anyone can mention any best practices around this as I aim to run this command periodically to cleanup the pending Pods.</p>
|
<p>It is difficult to calculate how much time a Pod has spent in a particular status using <code>kubectl</code> alone and without the help of some 3rd party tools. However, I have a solution that you may find useful.</p>
<p>You can list all Pods that are in <code>Pending</code> state <strong>and</strong> are older than X days. For example, the command below will list all <code>Pending</code> Pods that are older than 5 days:</p>
<pre><code>kubectl get pods --field-selector=status.phase=Pending --sort-by=.metadata.creationTimestamp | awk 'match($5,/[6-9]d|[0-9][0-9]d|[0-9][0-9][0-9]d/) {print $0}'
</code></pre>
<p>Then you can use the following command to delete these pods:</p>
<pre><code>kubectl delete pod $(kubectl get pods --field-selector=status.phase=Pending --sort-by=.metadata.creationTimestamp | awk 'match($5,/[6-9]d|[0-9][0-9]d|[0-9][0-9][0-9]d/) {print $0}')
</code></pre>
<p>The value can and should be adjusted by modifying the <code>awk</code> scripting in order to match your use case.</p>
|
<p>When trying to execute profiling on a kubernetes cluster with JProfiler (13.0 build 13073) for macOS</p>
<p>"Could not execute kubectl executable" is thrown</p>
<p>Cannot run program ""/usr/local/Cellar/kubernetes-cli/1.23.4/bin/kubectl"": error=2, No such file or directory (kubectl installed with homebrew)</p>
<p>It's the same if selecting physical file or simlink /usr/local/bin/kubectl as the value in Setup Executable > Local kubectl executable.</p>
<p>It's as if the entire process is in a sandbox and can't access/see the files.</p>
|
<blockquote>
<p>This is a bug in 13.0 and will be fixed in 13.0.1.</p>
</blockquote>
<p>13.0.1 download link: <a href="https://download-gcdn.ej-technologies.com/jprofiler/jprofiler_macos_13_0_1.dmg" rel="nofollow noreferrer">https://download-gcdn.ej-technologies.com/jprofiler/jprofiler_macos_13_0_1.dmg</a></p>
|
<p>Due to a number of settings outside of my control, the default location of my kubectl cache file is on a very slow drive on my Windows PC. This ended up being the root cause for much of the slowness in my kubectl interactions.</p>
<p>I have a much faster location in mind. However, I can't find a way to permanently change the location; I must either temporarily alter the home directory environment variables or explicitly set it on each command. </p>
<p>Is there a way to alter my .kube/config file to permanently/persistently set my cache location? </p>
|
<p>The best way is to move the whole home directory to the fast drive.</p>
<p>But if you don't want to move the whole directory, you can <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/set-alias?view=powershell-6" rel="nofollow noreferrer">set a PowerShell alias</a> for your command like
<code>PS> Set-Alias -Name kubectl -Value "Path\to\kubectl --kubeconfig=PLACE_FOR_YOUR_CONFIG"</code></p>
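<p>Alternatively, <code>kubectl</code> itself has a <code>--cache-dir</code> flag that can be pointed at the faster drive. A sketch (the paths are placeholders); note that on PowerShell you would wrap this in a function rather than an alias, since <code>Set-Alias</code> cannot carry arguments:</p>
<pre><code># one-off usage
kubectl --cache-dir="D:\fast\kube-cache" get pods

# PowerShell function wrapper (path to kubectl.exe is a placeholder)
function kubectl { & "C:\tools\kubectl.exe" --cache-dir="D:\fast\kube-cache" @args }
</code></pre>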
|
<p>I followed <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">this</a> to install Kubernetes on my cloud.
When I run the command <code>kubectl get nodes</code> I get this error: </p>
<pre><code>The connection to the server localhost:6443 was refused - did you specify the right host or port?
</code></pre>
<p>How can I fix this?</p>
|
<p>If you followed only the mentioned docs, it means that you have only installed <code>kubeadm</code>, <code>kubectl</code> and <code>kubelet</code>.</p>
<p>If you want to run <code>kubeadm</code> properly you need to do 3 more steps.</p>
<p><strong>1. Install docker</strong> </p>
<p>Install the <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">Docker Ubuntu version</a>. If you are using another system, choose it from the left-side menu.</p>
<p><strong>Why:</strong></p>
<p>If you do not install Docker, you will receive an error like the one below:</p>
<pre><code>preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": e
xecutable file not found in $PATH
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p><strong>2. Initialization of <code>kubeadm</code></strong></p>
<p>You have properly installed <code>kubeadm</code> and <code>docker</code>, but now you need to initialize <code>kubeadm</code>. Docs can be found <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">here</a>.</p>
<p>In short, you have to run the command:</p>
<p><code>$ sudo kubeadm init</code> </p>
<p>After initialization you will receive instructions to run commands like:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>and a token to join another VM to the cluster. It looks like:</p>
<pre><code>kubeadm join 10.166.XX.XXX:6443 --token XXXX.XXXXXXXXXXXX \
--discovery-token-ca-cert-hash sha256:aXXXXXXXXXXXXXXXXXXXXXXXX166b0b446986dd05c1334626aa82355e7
</code></pre>
<p>If you want to run some special action in the init phase, please check <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/" rel="nofollow noreferrer">these docs</a>.</p>
<p><strong>3. Change node status to <code>Ready</code></strong></p>
<p>After the previous step you will be able to execute:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-kubeadm NotReady master 4m29s v1.16.2
</code></pre>
<p>But your node will be in <code>NotReady</code> status. If you describe it with <code>$ kubectl describe node</code> you will see the error:</p>
<pre><code>Ready False Wed, 30 Oct 2019 09:55:09 +0000 Wed, 30 Oct 2019 09:50:03 +0000 KubeletNotReady runtime network not ready: Ne
tworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>It means that you have to install one of the CNI plugins. A list of them can be found <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">here</a>.</p>
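<p>For example, Calico can be installed with a single manifest (the URL may differ between Calico versions, so check their docs for the one matching your cluster):</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>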
<p><strong>EDIT</strong></p>
<p>Also, one more thing comes to my mind.</p>
<p>Sometimes when you turn the VM off and on, you need to restart the
kubelet and docker services. You can do it using:</p>
<pre><code>$ service docker restart
$ systemctl restart kubelet
</code></pre>
<p>Hope it helps.</p>
|
<p>For services, pods and namespaces we don't get CPU utilization or memory utilization metrics; we get memory request, CPU request, memory allocated and CPU allocated using CoreV1Api.
How do we get utilization for the above?</p>
<p>For the cluster, we took the average CPU utilization of the nodes (EC2 instance IDs), so we have CPU/memory utilization for the cluster. So can we assume the CPU utilization for a pod will be the utilization of the node on which it is running? And for a namespace, of the node on which it exists?</p>
|
<p>You can get resources <code>requests</code>, <code>limits</code> or <code>utilization</code> in a couple of ways. </p>
<p><code>$ kubectl top pods</code> or <code>$ kubectl top nodes</code>. You can check <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Quota</a> or <code>kubectl describe node/pod</code> to check the information inside.
You can also specify if you need pods from only one namespace, like <code>kubectl top pod --namespace=kube-system</code>.</p>
<p>To do that you will need the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics server</a>, which is usually installed from the beginning. To check whether it is installed, list the pods in the <code>kube-system</code> namespace.</p>
<pre><code>$ kubectl get pods -n kube-system
NAME
...
metrics-server-v0.3.1-57c75779f-wsmfk 2/2 Running 0 6h24m
...
</code></pre>
<p>Then you can check the current metrics in a few ways. Check <a href="https://github.com/feiskyer/kubernetes-handbook/blob/master/en/addons/metrics.md#metrics-api" rel="nofollow noreferrer">this thread</a>. When you list the <code>raw</code> metrics it is good to pipe them through <code>jq</code>:
<code>$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq .</code></p>
<p>Another thing is that you could use <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> for metrics and alerting (depends on your needs). If you want only CPU and memory utilization, the <code>metrics server</code> is enough; however, <code>Prometheus</code> also installs the <code>custom.metrics</code> API, which will allow you to get metrics from all Kubernetes objects. </p>
<p>Later you can also install some UI like <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a>.</p>
<blockquote>
<p>So can we assume, CPU Utilization for pod will be utilization of node on which it is running</p>
</blockquote>
<p>CPU utilization for a node reflects all the pods running on that node, even if they are in different namespaces, so a single pod's utilization is not the same as the utilization of its node.</p>
<p>I would encourage you to check <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="nofollow noreferrer">this article</a>.</p>
|
<p>I'm trying to restart my kubernetes deployment via the kubernetes api using the
@kubernetes/client-node Library. I'm not using deployment scale because i only need one deployment (db and service container) per app.</p>
<p>I also tried to restart a single container inside the deployment via exec (/sbin/reboot or kill), but it seems to not work with the nodejs library because it fails to upgrade to websocket connection, what is needed by the kubernetes exec endpoint as it seems. The other idea was to restart the whole deployment by setting the scale to 0 and then 1 again. But I dont get it working via the nodejs library. I tried to find an example for that, but was not successful.</p>
<p>The rolling restart is not working for me, becuase my application doesnt support multiple instances.</p>
<p>I tried it like this to scale:</p>
<pre><code>await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
spec: { replicas: 0 },
});
await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
spec: { replicas: 1 },
});
</code></pre>
<p>and to reboot the container I tried this:</p>
<pre><code>await coreV1Api.connectPostNamespacedPodExec(
podName,
'default',
'/sbin/reboot',
'web',
false,
false,
false,
false
);
</code></pre>
<hr />
<p>Extra input:</p>
<p>When trying to use patchNamespacedDeployment i get the following error back by kubernetes api:</p>
<pre><code>statusCode: 415,
statusMessage: 'Unsupported Media Type',
</code></pre>
<p>And response body:</p>
<pre><code>V1Scale {
apiVersion: 'v1',
kind: 'Status',
metadata: V1ObjectMeta {
annotations: undefined,
clusterName: undefined,
creationTimestamp: undefined,
deletionGracePeriodSeconds: undefined,
deletionTimestamp: undefined,
finalizers: undefined,
generateName: undefined,
generation: undefined,
labels: undefined,
managedFields: undefined,
name: undefined,
namespace: undefined,
ownerReferences: undefined,
resourceVersion: undefined,
selfLink: undefined,
uid: undefined
},
spec: undefined,
status: V1ScaleStatus { replicas: undefined, selector: undefined }
</code></pre>
<p>when trying the exec approach i get the following response:</p>
<pre><code>kind: 'Status',
apiVersion: 'v1',
metadata: {},
status: 'Failure',
message: 'Upgrade request required',
reason: 'BadRequest',
code: 400
</code></pre>
<p>I already looked up the upgrade request error, and it seems like the library isn't aware of this; the library was generated from function signatures or something, so it is not aware of websockets.</p>
|
<p>It really seems like there is a bug in the Node.js Kubernetes client library.
On PATCH requests it should set the content type to "application/json-patch+json", but instead it sends the content type as "application/json".
That's why you get Unsupported Media Type back from the API.</p>
<p>Furthermore you need to use the JSON Patch format for the body you send: <a href="http://jsonpatch.com" rel="noreferrer">http://jsonpatch.com</a></p>
<p>To manually set the content type you can pass custom headers to the function call.</p>
<p>This worked for me:</p>
<pre><code>const patch = [
{
op: 'replace',
path: '/spec/replicas',
value: 0,
},
];
await k8sApi.patchNamespacedDeployment(
`mydeployment-name`,
'default',
patch,
undefined,
undefined,
undefined,
undefined,
{ headers: { 'content-type': 'application/json-patch+json' } }
);
</code></pre>
<p>After some google searching I found that this problem is already existing since 2018: <a href="https://github.com/kubernetes-client/javascript/issues/19" rel="noreferrer">https://github.com/kubernetes-client/javascript/issues/19</a></p>
|
<p>I have configured a multi-master k8s setup repeatedly.
It looks like the previous master configuration is still in place.
The calico-node pod didn't work properly.</p>
<p>I want to remove all files of k8s.
Do you know how to remove all the files?</p>
|
<p>Much depends on how you created the cluster.</p>
<p>If you used <code>kubeadm</code> you can use <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/" rel="nofollow noreferrer">kubeadm reset</a>, which will revert the changes made by <code>kubeadm init</code> or <code>kubeadm join</code>.</p>
<p>Besides that, you need to delete <code>kubectl</code>, <code>kubeadm</code>, etc. If you installed them using <code>apt-get</code> (this depends on the distro; the example below is for Ubuntu) you can <code>purge</code> them.</p>
<pre><code>$ sudo apt-get purge kubeadm kubectl kube*
</code></pre>
<p>Then use <code>autoremove</code> to get rid of all related dependencies.</p>
<pre><code>$ sudo apt-get autoremove
</code></pre>
<p>And at the end you should remove rest of config using:</p>
<ul>
<li><strong>$ sudo rm -rf ~/.kube</strong></li>
<li><strong>ipvsadm --clear</strong> or <strong>iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X</strong> or similar to clean-up iptables rules </li>
</ul>
<p>If this doesn't answer your question, please add more information about your cluster.</p>
|
<p>I installed kubectl following the official instructions, but when I try <code>kubectl apply -f</code> I get an "<code>Error from server (NotFound): the server could not find the requested resource</code>" error.</p>
<p>The Internet says that it's because the client and server versions of kubectl are different.</p>
<p>I checked the version of kubectl:</p>
<p><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"} </code></p>
<p><code>Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"dirty", BuildDate:"2017-06-22T04:31:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></p>
<p>If it is an official install, why are the versions so different? And is that really the cause of this error?</p>
<p>I also have docker, docker-compose and minikube.</p>
<p>OS Linux Mint</p>
|
<p>Posting as Community Wiki, as the root cause was mentioned by @David Maze.</p>
<p>As was pointed out in the comments, your versions are very different.
<a href="https://www.infoq.com/news/2017/07/kubernetes-1.7" rel="nofollow noreferrer">Kubernetes 1.7</a> was released around <strong>July 2017</strong>, while <a href="https://kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">Kubernetes 1.17</a> was released in <strong>Jan 2020</strong> (almost a 2.5-year difference). Another thing is that the versions of <code>Docker</code> and <code>Minikube</code> must support the <code>Kubernetes</code> version.</p>
<p>As an example, if you try to run Kubernetes 1.6.4 on the latest <code>Minikube</code> version, an error occurs:</p>
<pre><code>minikube v1.7.3 on Ubuntu 16.04
✨ Using the none driver based on user configuration
⚠️ Specified Kubernetes version 1.6.4 is less than the oldest supported version: v1.11.10
💣 Sorry, Kubernetes 1.6.4 is not supported by this release of minikube
</code></pre>
<p>Also, there was a huge change in <code>apiVersions</code> between versions 1.15 and 1.16. More details can be found <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">here</a>.</p>
<p>In <a href="https://stackoverflow.com/questions/38230452/what-is-command-to-find-detailed-information-about-kubernetes-masters-using-ku">this Stackoverflow thread</a> was explained what is shown in <code>kubectl version</code>. </p>
<blockquote>
<p>The second line ("Server Version") contains the apiserver version.</p>
</blockquote>
<p>For example, the <code>Network Policy API</code> was introduced in Kubernetes 1.7, so if you try to use it against a 1.6 server, you will get an error as the API server cannot recognize it.</p>
<p>I've reproduced your issue.</p>
<pre><code>minikube:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"dirty", BuildDate:"2017-05-12T10:50:10Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"}
minikube:~$ kubectl get pods
Error from server (NotAcceptable): the server was unable to respond with a content type that the client supports (get pods)
minikube:~$ kubectl get nodes
Error from server (NotAcceptable): the server was unable to respond with a content type that the client supports (get nodes)
minikube:~$ kubectl run nginx --image=nginx
WARNING: New generator "deployment/apps.v1" specified, but it isn't available. Falling back to "deployment/apps.v1beta1".
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1"
</code></pre>
<p>As I mentioned before, <code>Network Policy</code> was introduced in 1.7. When you try to apply <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource" rel="nofollow noreferrer">this config</a> from the official Kubernetes docs, it will show the same error you have.</p>
<pre><code>minikube:~$ kubectl apply -f network.yaml
Error from server (NotFound): the server could not find the requested resource.
</code></pre>
<p>The most recommended way is to install the newest versions of Docker, Kubernetes and Minikube (for security and the newest features), based on the <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">Docker docs</a>, the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">Kubernetes kubectl docs</a> and the <a href="https://minikube.sigs.k8s.io/docs/start/linux/" rel="nofollow noreferrer">Minikube docs</a>. </p>
<p>Another option is to downgrade all components.</p>
|
<p>I'm trying to deploy metrics to Kubernetes and something really strange is happening. I have one worker and one master. I have the following pod list:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default php-apache-774ff9d754-d7vp9 1/1 Running 0 2m43s 192.168.77.172 master-node <none> <none>
kube-system calico-kube-controllers-6b9d4c8765-x7pql 1/1 Running 2 4h11m 192.168.77.130 master-node <none> <none>
kube-system calico-node-d4rnh 0/1 Running 1 4h11m 10.221.194.166 master-node <none> <none>
kube-system calico-node-hwkmd 0/1 Running 1 4h11m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system coredns-6955765f44-kf4dr 1/1 Running 1 4h20m 192.168.178.65 free5gc-virtual-machine <none> <none>
kube-system coredns-6955765f44-s58rf 1/1 Running 1 4h20m 192.168.178.66 free5gc-virtual-machine <none> <none>
kube-system etcd-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-apiserver-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-controller-manager-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-proxy-brvdg 1/1 Running 1 4h19m 10.221.194.166 master-node <none> <none>
kube-system kube-proxy-lfzjw 1/1 Running 1 4h20m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-scheduler-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system metrics-server-86c6d8b9bf-p2hh8 1/1 Running 0 2m43s 192.168.77.171 master-node <none> <none>
</code></pre>
<p>When I try to get the metrics I see the following:</p>
<pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache <unknown>/50% 1 10 1 3m58s
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top pods --all-namespaces
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
<p>Lastly, I see the log (v=6) the output of metrics-server:</p>
<pre><code>free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl logs metrics-server-86c6d8b9bf-p2hh8 -n kube-system
I0206 18:16:18.657605 1 serving.go:273] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0206 18:16:19.367356 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 7 milliseconds
I0206 18:16:19.370573 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.373245 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.375024 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] listing is available at https://:4443/swaggerapi
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] https://:4443/swaggerui/ is mapped to folder /swagger-ui/
I0206 18:16:19.421207 1 healthz.go:83] Installing healthz checkers:"ping", "poststarthook/generic-apiserver-start-informers", "healthz"
I0206 18:16:19.421641 1 serve.go:96] Serving securely on [::]:4443
I0206 18:16:19.421873 1 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421891 1 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421914 1 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421929 1 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.423052 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0 200 OK in 1 milliseconds
I0206 18:16:19.424261 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0 200 OK in 2 milliseconds
I0206 18:16:19.425586 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?resourceVersion=38924&timeoutSeconds=481&watch=true 200 OK in 0 milliseconds
I0206 18:16:19.433545 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?resourceVersion=39246&timeoutSeconds=582&watch=true 200 OK in 0 milliseconds
I0206 18:16:49.388514 1 manager.go:99] Beginning cycle, collecting metrics...
I0206 18:16:49.388598 1 manager.go:95] Scraping metrics from 2 sources
I0206 18:16:49.395742 1 manager.go:120] Querying source: kubelet_summary:free5gc-virtual-machine
I0206 18:16:49.400574 1 manager.go:120] Querying source: kubelet_summary:master-node
I0206 18:16:49.413751 1 round_trippers.go:405] GET https://10.221.194.166:10250/stats/summary/ 200 OK in 13 milliseconds
I0206 18:16:49.414317 1 round_trippers.go:405] GET https://10.221.195.58:10250/stats/summary/ 200 OK in 18 milliseconds
I0206 18:16:49.417044 1 manager.go:150] ScrapeMetrics: time: 28.428677ms, nodes: 2, pods: 13
I0206 18:16:49.417062 1 manager.go:115] ...Storing metrics...
I0206 18:16:49.417083 1 manager.go:126] ...Cycle complete
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl logs metrics-server-86c6d8b9bf-p2hh8 -n kube-system
I0206 18:16:18.657605 1 serving.go:273] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0206 18:16:19.367356 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 7 milliseconds
I0206 18:16:19.370573 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.373245 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.375024 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] listing is available at https://:4443/swaggerapi
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] https://:4443/swaggerui/ is mapped to folder /swagger-ui/
I0206 18:16:19.421207 1 healthz.go:83] Installing healthz checkers:"ping", "poststarthook/generic-apiserver-start-informers", "healthz"
I0206 18:16:19.421641 1 serve.go:96] Serving securely on [::]:4443
I0206 18:16:19.421873 1 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421891 1 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421914 1 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421929 1 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.423052 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0 200 OK in 1 milliseconds
I0206 18:16:19.424261 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0 200 OK in 2 milliseconds
I0206 18:16:19.425586 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?resourceVersion=38924&timeoutSeconds=481&watch=true 200 OK in 0 milliseconds
I0206 18:16:19.433545 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?resourceVersion=39246&timeoutSeconds=582&watch=true 200 OK in 0 milliseconds
I0206 18:16:49.388514 1 manager.go:99] Beginning cycle, collecting metrics...
I0206 18:16:49.388598 1 manager.go:95] Scraping metrics from 2 sources
I0206 18:16:49.395742 1 manager.go:120] Querying source: kubelet_summary:free5gc-virtual-machine
I0206 18:16:49.400574 1 manager.go:120] Querying source: kubelet_summary:master-node
I0206 18:16:49.413751 1 round_trippers.go:405] GET https://10.221.194.166:10250/stats/summary/ 200 OK in 13 milliseconds
I0206 18:16:49.414317 1 round_trippers.go:405] GET https://10.221.195.58:10250/stats/summary/ 200 OK in 18 milliseconds
I0206 18:16:49.417044 1 manager.go:150] ScrapeMetrics: time: 28.428677ms, nodes: 2, pods: 13
I0206 18:16:49.417062 1 manager.go:115] ...Storing metrics...
I0206 18:16:49.417083 1 manager.go:126] ...Cycle complete
</code></pre>
<p>Using the log output with v=10 I can see even the details of health of each pod, but nothing while running the <code>kubectl get hpa</code> or <code>kubectl top nodes</code>. Can someone give me a hint? Furthermore, my metrics manifest is:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
args:
- /metrics-server
- --metric-resolution=30s
- --requestheader-allowed-names=aggregator
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --v=6
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
#- --kubelet-preferred-address-types=InternalIP
ports:
- name: main-port
containerPort: 4443
protocol: TCP
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
nodeSelector:
beta.kubernetes.io/os: linux
kubernetes.io/arch: "amd64"
</code></pre>
<p>And I can see the following:</p>
<pre><code>free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
creationTimestamp: "2020-02-06T18:57:28Z"
name: v1beta1.metrics.k8s.io
resourceVersion: "45583"
selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
uid: ca439221-b987-4c13-b0e0-8d2bb237e612
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
port: 443
version: v1beta1
versionPriority: 100
status:
conditions:
- lastTransitionTime: "2020-02-06T18:57:28Z"
message: 'failing or missing response from https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1:
Get https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.110.144.114:443:
connect: no route to host'
reason: FailedDiscoveryCheck
status: "False"
type: Available
</code></pre>
|
<p>I have reproduced your issue (on <code>Google Compute Engine</code>) and tried a few scenarios to find a workaround/solution for it.</p>
<p>The first thing I want to mention is that you have provided only the <code>ServiceAccount</code> and <code>Deployment</code> YAML. You also need a <code>ClusterRoleBinding</code>, <code>RoleBinding</code>, <code>ApiService</code>, etc. All the needed YAMLs can be found in <a href="https://github.com/kubernetes-sigs/metrics-server/tree/master/deploy/kubernetes" rel="nofollow noreferrer">this Github repo</a>.</p>
<p>To quickly deploy <code>metrics-server</code> with all the required config you can use:</p>
<pre><code>$ git clone https://github.com/kubernetes-sigs/metrics-server.git
$ cd metrics-server/deploy/
$ kubectl apply -f kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
</code></pre>
<p>The second thing: I would advise you to check your <code>CNI</code> pods (<code>calico-node-d4rnh</code> and <code>calico-node-hawked</code>). They were created 4h11m ago but are still showing <code>Ready 0/1</code>.</p>
<p>The last thing concerns gathering CPU and memory data from pods and nodes. </p>
<p><strong>Using Calico</strong></p>
<p>With a single-node <code>kubeadm</code> cluster it will work correctly; however, with more than one node in <code>kubeadm</code> this setup causes some issues. There are many similar threads on Github regarding this. I've tried various flags in <code>args:</code>, but with no success. In the <code>metrics-server</code> logs (<code>-v=6</code>) you will be able to see that metrics are being gathered. In <a href="https://github.com/kubernetes-sigs/metrics-server/issues/278#issuecomment-546638711" rel="nofollow noreferrer">this Github thread</a>, one of the Github users posted an answer which is a workaround for this issue. It's also mentioned in the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#hostport-services-do-not-work" rel="nofollow noreferrer">K8s docs</a> about <code>hostNetwork</code>.</p>
<blockquote>
<p>Adding <code>hostNetwork: true</code> is what finally got <code>metrics-server</code> working for me. Without it, nada. Without the <code>kubelet-preferred-address-types line</code>, I could query my master node but not my two worker nodes, nor could I query pods, obviously undesirable results. Lack of <code>kubelet-insecure-tls</code> also results in an inoperable <code>metrics-server</code> installation.</p>
</blockquote>
<pre><code>spec:
hostNetwork: true
containers:
- args:
- --kubelet-insecure-tls
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --v=6
image: k8s.gcr.io/metrics-server-amd64:v0.3.6
imagePullPolicy: Always
</code></pre>
<p>If you deploy with this config, it will work. </p>
<pre><code>$ kubectl describe apiservice v1beta1.metrics.k8s.io
Name: v1beta1.metrics.k8s.io
...
Status:
Conditions:
Last Transition Time: 2020-02-20T09:37:59Z
Message: all checks passed
Reason: Passed
Status: True
Type: Available
Events: <none>
</code></pre>
<p>In addition, you can see the difference when using <code>hostNetwork: true</code> if you check <code>iptables</code>. There are many more entries compared to a deployment without this config.</p>
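<p>A rough way to compare (just an illustrative check run on the node before and after the change - the absolute numbers will differ per cluster):</p>
<pre><code>$ sudo iptables-save | wc -l
$ sudo iptables-save | grep -c metrics-server
</code></pre>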
<p>After that, you can edit the deployment and remove or comment out <code>hostNetwork: true</code>.</p>
<pre><code>$ kubectl edit deploy metrics-server -n kube-system
deployment.apps/metrics-server edited
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx-6db489d4b7-2qhzw 0m 3Mi
nginx-6db489d4b7-9fvrj 0m 2Mi
nginx-6db489d4b7-dgbf9 0m 2Mi
nginx-6db489d4b7-dvcz5 0m 2Mi
</code></pre>
<p>Also, you will be able to find metrics using:</p>
<pre><code>$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
</code></pre>
<p>For better readability you can also use <code>jq</code>.</p>
<pre><code>$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .
</code></pre>
<p><strong>Using Weave Net</strong></p>
<p>If you use <a href="https://www.weave.works/oss/net/" rel="nofollow noreferrer">Weave Net</a> instead of Calico, it will work without setting <code>hostNetwork</code>.</p>
<pre><code>$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
</code></pre>
<p>However, you will then need to work with <code>certificates</code>. If you don't care about security, you can just use <code>--kubelet-insecure-tls</code> as in the previous example, where <code>Calico</code> was used.</p>
|
<p>I have created a free account on GCP as also my first cluster.</p>
<p>I want to deploy <code>istio</code> on my GKE cluster, so I am following the <a href="https://istio.io/docs/setup/kubernetes/install/platform/gke/" rel="nofollow noreferrer">official instructions</a>.</p>
<p>At some point, the instructions indicate that I should</p>
<blockquote>
<p>Ensure that the Google Kubernetes Engine API is enabled for your
project (also found by navigating to “APIs & Services” -> “Dashboard”
in the navigation bar)</p>
</blockquote>
<p>What is that supposed to mean?
Isn't the API already active since I have created and I am running a cluster?</p>
<p>How can a cluster be running <strong>without</strong> the API being enabled?</p>
|
<p>Enabling the GKE API is a prerequisite for running GKE, so if you already have a running GKE cluster the API is already enabled and you can skip this part.</p>
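<p>If you want to double-check from the command line, a quick sketch using the gcloud CLI (it uses the project from your active configuration):</p>
<pre><code># list enabled APIs and look for the Kubernetes Engine API
gcloud services list --enabled | grep container.googleapis.com

# enable it explicitly (a no-op if it is already enabled)
gcloud services enable container.googleapis.com
</code></pre>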
<p>You can enable Istio as part of GKE cluster creation. Here is a good instruction page from Google: <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing" rel="nofollow noreferrer">https://cloud.google.com/istio/docs/istio-on-gke/installing</a></p>
|
<p>I am trying to deploy my EKS cluster using Python CDK. I am following this(<a href="https://github.com/pahud/aws-cdk-python-workshop/tree/master/Lab6" rel="nofollow noreferrer">https://github.com/pahud/aws-cdk-python-workshop/tree/master/Lab6</a>) link for implementation.
Everything is working well, but when I do 'cdk deploy', it shows the following error:
<a href="https://i.stack.imgur.com/SBip0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SBip0.png" alt="enter image description here" /></a></p>
<p>On Cloudformation console it is showing following error:</p>
<p><a href="https://i.stack.imgur.com/qmS7I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qmS7I.png" alt="enter image description here" /></a></p>
<p>I tried changing version to 1.20, 1.20.4, 1.16, 1.16.5, etc(<a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html</a>) in <strong>cdk_pycon_eks_stack.py</strong> in following way,</p>
<pre><code>...................
# create the cluster
cluster = aws_eks.Cluster(self, 'cluster',
masters_role=eks_admin_role,
vpc=vpc,
default_capacity=0,
version='1.20',
output_cluster_name=True
)
...........................................
</code></pre>
<p>, but then it shows following error:</p>
<p><a href="https://i.stack.imgur.com/1XHTA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1XHTA.png" alt="enter image description here" /></a></p>
<p>Any help would be appreciated!!
Thanks</p>
<hr />
<p>Edit:</p>
<p>Additional images for reference in comments/Answers:
for <code>version = aws_eks.KubernetesVersion.of("v1_20")</code>
<a href="https://i.stack.imgur.com/hMHjU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hMHjU.png" alt="enter image description here" /></a></p>
<p>for <code>version = aws_eks.KubernetesVersion().V1_20</code></p>
<p><a href="https://i.stack.imgur.com/keK2j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/keK2j.png" alt="enter image description here" /></a></p>
|
<ul>
<li>First, you need <code>aws-cdk-lib-2.0.0rc7</code> to support Kubernetes version 1.20 on EKS</li>
<li>Then follow the post <a href="https://dev.to/vumdao/using-iam-service-account-instead-of-instance-profile-for-eks-pods-262p" rel="nofollow noreferrer">https://dev.to/vumdao/using-iam-service-account-instead-of-instance-profile-for-eks-pods-262p</a> to bring up your EKS cluster with an IAM service account</li>
<li>Snippet:</li>
</ul>
<pre><code> # Create EKS cluster
self.eks_cluster = eks.Cluster(
scope=self, id='EKSDevCluster',
vpc=eks_private_vpc,
default_capacity=0,
cluster_name='eks-dev',
masters_role=eks_admin_role,
core_dns_compute_type=eks.CoreDnsComputeType.EC2,
version=eks.KubernetesVersion.V1_20,
role=node_role
)
</code></pre>
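<p>If it helps, installing the release-candidate library (assuming a pip-based setup) would look something like this before running <code>cdk deploy</code>:</p>
<pre><code>pip install "aws-cdk-lib==2.0.0rc7"
</code></pre>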
|
<p>On my vagrant VM,I created kops cluster</p>
<pre><code> kops create cluster \
--state "s3://kops-state-1000" \
--zones "eu-central-1a","eu-central-1b" \
--master-count 3 \
--master-size=t2.micro \
--node-count 2 \
--node-size=t2.micro \
--ssh-public-key ~/.ssh/id_rsa.pub\
--name jh.mydomain.com \
--yes
</code></pre>
<p>I can list cluster with kops get cluster</p>
<pre><code>jh.mydomain.com aws eu-central-1a,eu-central-1b
</code></pre>
<p>The problem is that I cannot find the .kube directory. When I go for </p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Is this somehow related to my virtual box deployment or not? I am aware that the command <em>kubectl</em> works on the master node because that's where <em>kube-apiserver</em> runs. Does this mean that my deployment is a worker node?</p>
<p>I tried what Vasily suggested</p>
<pre><code>kops export kubecfg $CLUSTER_NAME
W0513 15:44:17.286965 3279 root.go:248] no context set in kubecfg
--name is required
</code></pre>
|
<p>You need to obtain kubeconfig from your cluster and save it as ${HOME}/.kube/config:</p>
<pre><code>kops export kubecfg $CLUSTER_NAME
</code></pre>
<p>After that you may start using kubectl.</p>
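<p>Based on the <code>--name is required</code> error from your edit, the cluster name and state store were not picked up, so pass them explicitly (the values below are the ones from your <code>kops create cluster</code> command):</p>
<pre><code>export KOPS_STATE_STORE=s3://kops-state-1000
kops export kubecfg jh.mydomain.com

# or in one line:
kops export kubecfg --name jh.mydomain.com --state s3://kops-state-1000
</code></pre>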
|
<p>I have a simple single-pod postgresql deployment running on AWS EKS (code <a href="https://github.com/seandavi/postgresql_zombodb_docker/tree/master/kubernetes" rel="nofollow noreferrer">here</a>). I have exposed the pod using a load balancer.</p>
<pre><code>kubectl get services/postgres-lb -o yaml
</code></pre>
<p>This yields the following:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "false"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
creationTimestamp: 2019-04-23T02:36:54Z
labels:
app: postgres
name: postgres-lb
namespace: default
resourceVersion: "1522157"
selfLink: /api/v1/namespaces/default/services/postgres-lb
  uid: <HASH-REMOVED>
spec:
clusterIP: 10.100.94.170
externalTrafficPolicy: Cluster
ports:
- nodePort: 32331
port: 5434
protocol: TCP
targetPort: 5432
selector:
app: postgres
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: ...aaadz-example.elb.us-east-1.amazonaws.com
</code></pre>
<p>This works and I can access the pod as expected. However, the connection to postgresql seems to drop about every minute or so if not active. I am pretty sure that at least some AWS load balancers behave this way to "drain connections"; hence the annotation above to NOT drain connections. However, I still see the same behavior of dropping connections if idle. </p>
<p>What is the best practice on AWS EKS for hosting a database, for example, and then exposing its single port to the internet? Web searches have turned up many variations, but all seem either overly complicated or not directly applicable. I have used GCE and found it to be much more straightforward with respect to network and exposing ports, so I feel like I am missing something obvious on AWS.</p>
|
<p>Try setting <code>service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout</code> annotation to some larger value (by default it is 60 seconds in AWS).</p>
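<p>For example, a sketch based on the Service from your question (the value is in seconds and 3600 is just an arbitrary example; the annotation is handled by the AWS cloud provider, so verify it takes effect with your NLB setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
  - port: 5434
    targetPort: 5432
    protocol: TCP
</code></pre>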
|
<p>I have two pods running for a specific service. I want to get logs related to both the pods and check for specific text.</p>
<p>For that I used ,as described <a href="https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller">here</a>:</p>
<p><code>kubectl logs -l app=my-app -c my-app-container --since=25m | grep -i "search-text" |wc -l</code></p>
<p>This does not output anything, even though there are matching text for <code>search-text</code></p>
<p>Then tried with <code>deployments</code></p>
<p><code>kubectl logs deployment/my-app-deployment -c my-app-container --since=90m | grep -i "search-text" |wc -l</code></p>
<p>How can I search for this specific string in all related pods?</p>
<p><code>kubectl logs my-pod-1 -c my-app-container --since=90m | grep -i "search-text" |wc -l</code>, this gives the proper count.</p>
<p>References :</p>
<p><a href="https://stackoverflow.com/questions/55851250/get-all-logs-from-a-specific-container-in-a-replica-set">Get all Logs from a specific container in a replica set</a>
<br>
<a href="https://stackoverflow.com/questions/60518658/how-to-get-logs-of-deployment-from-kubernetes">how to get logs of deployment from kubernetes</a></p>
|
<p><code>kubectl logs</code> is limited to viewing a single pod’s logs at a time. However, you can use the <code>-l</code> flag to pass a selector (label query) to filter on. Note that repeating <code>-l</code> does not combine selectors (the last one wins), so to match several values use a set-based expression. For example:</p>
<pre><code>kubectl logs -l 'app in (nginx,php)'
</code></pre>
<p>Use <code>-c</code> flag if you need to see container logs. More supported flags and examples can be found <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">here</a>.</p>
<p>When you are able to see the logs from desired Pods/Containers it is time to use <code>grep</code> to filter out the output. For example, I got some logs from a Pod:</p>
<pre><code>~$ kubectl logs nginx-app-b8b875889-4nn52
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
</code></pre>
<p>and I would like to only see lines with the word "configuration", so I execute:</p>
<pre><code>$ kubectl logs nginx-app-b8b875889-4nn52 | grep configuration
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
</code></pre>
<p>or if I would like to count lines with the word "info" than:</p>
<pre><code>$ kubectl logs nginx-app-b8b875889-4nn52 | grep info |wc -l
2
</code></pre>
<p>More details can be found in the <a href="https://man7.org/linux/man-pages/man1/grep.1.html" rel="nofollow noreferrer">grep manual</a>. Bear in mind that if you don't specify arguments like <code>--since=</code> or <code>--tail=</code>, and the Pod you are trying to view logs from has been running for a longer period of time, the results may be misleading.</p>
<p>Normally, I would also suggest to use 3rd party tools like <a href="https://github.com/stern/stern" rel="nofollow noreferrer">Stern</a> or <a href="https://github.com/johanhaleby/kubetail" rel="nofollow noreferrer">Kubetail</a> which are more powerful than simple <code>kubectl logs</code> but in your use case combining both:</p>
<ul>
<li><code>kubectl logs -l</code></li>
</ul>
<p>and:</p>
<ul>
<li><code>| grep</code></li>
</ul>
<p>is the way to go.</p>
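<p>One more thing worth checking (an assumption on my side, but it matches the symptom of a zero count): when a label selector is used, <code>kubectl logs</code> only returns the last 10 lines per pod by default, so older matches are silently dropped. You can lift that limit explicitly:</p>
<pre><code>kubectl logs -l app=my-app -c my-app-container --since=25m --tail=-1 | grep -ic "search-text"
</code></pre>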
<p><strong>EDIT:</strong></p>
<p>Also make sure you are grepping the proper resources. From your question it seems that you run <code>kubectl logs deployment/my-app-deployment</code> and then <code>kubectl logs my-pod-1 -c my-app-container</code>, which does not correspond to the <code>my-app-deployment</code> deployment. List all deployments, pods and labels to be confident that you are checking the right resource. Use:</p>
<pre><code>kubectl get deploy,pods --show-labels
</code></pre>
|
<p>My team has a special requirement to delete all pod logs every X hours. This is because the logs contain some sensitive info - we read and process them with fluentbit, but it's an issue that the logs are still there afterwards.
I couldn't find any normal way to rotate them by time, only recommendations on the docker daemon logging driver that rotates by file size.
Is it possible to create a k8s cronjob to do something like "echo ''> /path/to/logfile" per pod/container? If yes, how?</p>
<p>I'd appreciate any help here.
Thanks!</p>
|
<p>Kubernetes doesn’t provide built-in log rotation, but this functionality is available in many tools.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="noreferrer">Kubernetes Logging Architecture</a>:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log
rotation, so that logs don't consume all available storage on the
node. Kubernetes is not responsible for rotating logs, but rather a
deployment tool should set up a solution to address that. For example,
in Kubernetes clusters, deployed by the kube-up.sh script, there is a
<a href="https://linux.die.net/man/8/logrotate" rel="noreferrer">logrotate</a> tool configured to run each hour. You can also set up a
container runtime to rotate an application's logs automatically.</p>
</blockquote>
<p>Below are some examples of how the log rotation can be implemented:</p>
<ul>
<li><p><a href="https://vividcode.io/enable-log-rotation-in-kubernetes-cluster/" rel="noreferrer">Enable Log Rotation in Kubernetes Cluster</a></p>
</li>
<li><p><a href="https://github.com/mkilchhofer/logrotate-container" rel="noreferrer">logrotate-container</a></p>
</li>
</ul>
<p>You can use them as a guide.</p>
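<p>For illustration only, a node-level <code>logrotate</code> rule along these lines could work (the paths assume the Docker <code>json-file</code> logging driver - adjust for your container runtime - and the <code>hourly</code> directive only takes effect if logrotate itself is invoked hourly, e.g. from cron or a systemd timer):</p>
<pre><code>/var/lib/docker/containers/*/*.log {
    hourly
    rotate 0
    missingok
    notifempty
    copytruncate
}
</code></pre>
<p><code>copytruncate</code> empties the file in place so the container runtime keeps writing to the same file, and <code>rotate 0</code> discards the rotated copy immediately, which matches your "delete the logs" requirement.</p>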
|
<p>Using <code>kubeadm</code> to create a cluster, I have a master and work node. </p>
<p>Now I want to share a <code>persistentVolume</code> in the work node, which will be bound with <code>Postgres</code> pod.</p>
<p>Expecting the code will create <code>persistentVolume</code> in the path <code>/postgres</code> of work node, but it seems the <code>hostPath</code> will not work in a cluster, how should I assign this property to the specific node?</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-postgres
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/postgres"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
persistentVolumeClaim:
claimName: pvc-postgres
containers:
- name: postgres
image: postgres:12
imagePullPolicy: Always
env:
- name: DB_USER
value: postgres
- name: DB_PASS
value: postgres
- name: DB_NAME
value: postgres
ports:
- name: postgres
containerPort: 5432
volumeMounts:
- mountPath: "/postgres"
name: vol-postgres
livenessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 5
timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- name: postgres
port: 5432
targetPort: postgres
selector:
app: postgres
</code></pre>
|
<p>As per <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">docs</a>.</p>
<blockquote>
<p>A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.</p>
</blockquote>
<p>In short, the <code>hostPath</code> type refers to a resource on the node (machine or VM) where the Pod will be scheduled. It means that this folder already needs to exist on that node.
To pin resources to a specific node you have to use a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">nodeSelector</a>/node affinity in your <code>Deployment</code> and <code>PV</code>.</p>
<p>Depending on the scenario, using <code>hostPath</code> is not the best idea; however, I will provide example YAMLs below which should show you the concept. They are based on your YAMLs, but with an <code>nginx</code> image.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-postgres
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/postgres" ## this folder need exist on your node. Keep in minds also who have permissions to folder. Used tmp as it have 3x rwx
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /home ## path to folder inside container
name: vol-postgres
affinity: ## specified affinity to schedule all pods on this specific node with name ubuntu18-kubeadm-worker1
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
persistentVolumeClaim:
claimName: pvc-postgres
persistentvolume/pv-postgres created
persistentvolumeclaim/pvc-postgres created
deployment.apps/postgres created
</code></pre>
<p>Unfortunately, a PV is bound to a PVC in a 1:1 relationship, so each time you would need to create a new PV and PVC pair. </p>
<p>However, if you are using <code>hostPath</code>, it's enough to specify <code>nodeAffinity</code>, <code>volumeMounts</code> and <code>volumes</code> in the <code>Deployment</code> YAML, without a <code>PV</code> and <code>PVC</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: nginx:latest
name: nginx
volumeMounts:
- mountPath: /home
name: vol-postgres
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
hostPath:
path: /tmp/postgres
deployment.apps/postgres created
user@ubuntu18-kubeadm-master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-77bc9c4566-jgxqq 1/1 Running 0 9s
user@ubuntu18-kubeadm-master:~$ kk exec -ti postgres-77bc9c4566-jgxqq /bin/bash
root@ubuntu18-kubeadm-worker1:/# cd home
root@ubuntu18-kubeadm-worker1:/home# ls
test.txt txt.txt
</code></pre>
|
<p>I am trying to authenticate my Kafka rest proxy with SASL but I am having trouble transferring the configs made in my local docker compose to Kubernetes.</p>
<p>I am using JAAS configuration to achieve this.
My JAAS file looks like this.</p>
<pre><code>KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="rest"
password="rest-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="rest"
password="restsecret";
};
</code></pre>
<p>and then in my docker compose I have done:</p>
<p><code>KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf</code></p>
<p>How will I transfer this same logic to Kubernetes?
I have tried passing the env variable like this:</p>
<pre><code>env:
- name: KAFKA_OPTS
value: |
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="rest"
password="rest-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="rest"
password="rest-secret";
};
</code></pre>
<p>but it still fails. Here is what my logs say:</p>
<pre><code>Error: Could not find or load main class KafkaClient
/bin/sh: 3: org.apache.kafka.common.security.plain.PlainLoginModule: not found
/bin/sh: 6: Syntax error: "}" unexpected
</code></pre>
<p>Your help will be highly appreciated.</p>
|
<p>Save your Kafka JAAS config file as rest_jaas.conf. Then execute:</p>
<pre><code>kubectl create secret generic kafka-secret --from-file=rest_jaas.conf
</code></pre>
<p>Then in your deployment you insert:</p>
<pre><code> env:
- name: KAFKA_OPTS
value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
volumeMounts:
- name: kafka-secret
      mountPath: /etc/kafka/secrets/rest_jaas.conf # must match the path referenced in KAFKA_OPTS
      subPath: rest_jaas.conf
volumes:
- name: kafka-secret
secret:
secretName: kafka-secret
</code></pre>
|
<p>I am setting up a secret containing the certificate for ingress controller but getting the below error when I check the ingress logs</p>
<p>Ingress logs:</p>
<pre><code>W0304 05:47:32.020497 7 controller.go:1153] Error getting SSL certificate "default/auth-tls": local SSL certificate default/auth-tls was not found. Using default certificate
W0304 05:47:32.020516 7 controller.go:1407] Error getting SSL certificate "default/auth-tls": local SSL certificate default/auth-tls was not found
I0304 05:47:32.114777 7 main.go:117] "successfully validated configuration, accepting" ingress="hello-kubernetes-ingress" namespace="default"
</code></pre>
<p>Secret:</p>
<pre><code>$ kubectl create secret tls auth-tls --cert key.pem --key out.key
$ kubectl describe secret auth-tls
Name: auth-tls
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 3231 bytes
tls.key: 1732 bytes
</code></pre>
<p>Below is my yaml file for ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-kubernetes-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-url: https://externalauthentication/authorize
spec:
rules:
- host: hw1.yourdomain
http:
paths:
- backend:
serviceName: hello-kubernetes-first
servicePort: 80
- host: hw2.yourdomain
http:
paths:
- backend:
serviceName: hello-kubernetes-second
servicePort: 80
tls:
- hosts:
- externalauthentication
- hw1.yourdomain
secretName: auth-tls
</code></pre>
|
<p>Both the <code>Ingress</code> and the <code>Secret</code> are namespaced resources. You can check yourself with:</p>
<pre><code>$ kubectl api-resources --namespaced=true
NAME SHORTNAMES APIGROUP NAMESPACED KIND
...
secrets true Secret
...
ingresses ing extensions true Ingress
ingresses ing networking.k8s.io true Ingress
</code></pre>
<p>They can only work within their <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a>. So in your use case you need to put both of them (<code>Ingress</code> and <code>Secret</code>) in the same namespace.</p>
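<p>You can quickly verify that both objects live in the same namespace (based on your output the <code>Secret</code> is in <code>default</code>, so the <code>Ingress</code> has to be there as well):</p>
<pre><code>kubectl get ingress hello-kubernetes-ingress -n default
kubectl get secret auth-tls -n default
</code></pre>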
|
<p>I am trying to create a pod using my own docker image on localhost.</p>
<p>This is the dockerfile used to create the image :</p>
<pre><code>FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
</code></pre>
<p>The yaml file used to create the pod is this :</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: server
labels:
app: server
spec:
containers:
- name: server
imagePullPolicy: Never
image: localhost:5000/server
ports:
- containerPort: 80
root@node1:~/test/server# docker images | grep server
server latest 82c5228a553d 3 hours ago 948MB
localhost.localdomain:5000/server latest 82c5228a553d 3 hours ago 948MB
localhost:5000/server latest 82c5228a553d 3 hours ago 948MB
</code></pre>
<p>The image has been pushed to localhost registry.</p>
<p>Following is the error I receive.</p>
<pre><code>root@node1:~/test/server# kubectl get pods
NAME READY STATUS RESTARTS AGE
server 0/1 CrashLoopBackOff 5 5m18s
</code></pre>
<p>The output of describe pod :</p>
<pre><code> root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
</code></pre>
<p>I get no logs :</p>
<pre><code>root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
</code></pre>
<p>I am unable to figure out whether the issue is with the container or yaml file for creating pod. Any help would be appreciated.</p>
|
<p>Posting this as <code>Community Wiki</code>.</p>
<p>As pointed by @David Maze in comment section.</p>
<blockquote>
<p>If docker run exits immediately, a Kubernetes Pod will always go into <code>CrashLoopBackOff</code> state. Your <code>Dockerfile</code> needs to <code>COPY</code> in or otherwise install an application and set its <code>CMD</code> to run it.</p>
</blockquote>
<p>The root cause can also be determined from the <code>Exit Code</code>. In the <a href="https://containersolutions.github.io/runbooks/posts/kubernetes/crashloopbackoff/" rel="nofollow noreferrer">3) Check the exit code</a> article, you can find a few exit codes like 0, 1, 128 and 137 with descriptions.</p>
<blockquote>
<p>3.1) Exit Code 0</p>
<p>This exit code implies that the specified container command completed ‘successfully’, but too often for Kubernetes to accept as working.</p>
</blockquote>
<p>In short, your container was created, all the actions mentioned were executed, and as there was nothing else to do, it exited with <code>Exit Code 0</code>.</p>
<blockquote>
<p>A <code>CrashLoopBackOff</code> error occurs when a pod startup fails repeatedly in Kubernetes.</p>
</blockquote>
<p>Your image, based on <code>centos</code> with a few additional installations, did not leave any long-running process in the foreground, so the container was categorized as <code>Completed</code>. As this happened so fast, Kubernetes restarted it and it fell into a loop.</p>
<pre><code>$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
</code></pre>
<pre><code>$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
</code></pre>
<p>But if you had used <code>sleep 3600</code> in your container, the <code>sleep</code> command would have kept it running for an hour. After that time it would also exit with <code>Exit Code 0</code>.</p>
<p>Hope this clarifies it.</p>
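<p>If you just want this particular Pod to stay up so you can <code>exec</code> into it, here is a quick sketch based on the Pod spec from your question (the <code>sleep</code> is only a placeholder - in the long run the image's <code>CMD</code> should run your actual server process):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
    - name: server
      imagePullPolicy: Never
      image: localhost:5000/server
      # keep a foreground process running so the container does not exit immediately
      command: ["/usr/bin/bash", "-c", "sleep infinity"]
      ports:
        - containerPort: 80
</code></pre>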
|
<p>I have created several PersistentVolumes through PersistentVolumeClaims on the "azure-file" StorageClass in AKS. Now the mount options of the StorageClass as it is delivered by Azure didn't fit our needs, and I had to update/reconfigure it with different MountOptions.</p>
<p>Do I have to manually destroy bound PersistentVolumes now to force a recreation and a reconfiguration (different mount) or is the provisioner taking care of that?</p>
<p>What would be the best way of forcing that?</p>
<ul>
<li>Delete the PersistentVolume itself?</li>
<li>Delete the Claim?</li>
<li>Delete the Pods where the volumes are bound (I guess not)</li>
<li>Delete and recreate the whole StatefulSet?</li>
</ul>
|
<p>@SahadatHossain is right with his answer but I would like to expand it a bit with more details and sources.</p>
<p>It is important to understand the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim" rel="nofollow noreferrer">Lifecycle of a volume and claim</a>. The interaction between PVs and PVCs follows this lifecycle:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#provisioning" rel="nofollow noreferrer">Provisioning</a> - which can be <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static" rel="nofollow noreferrer">static</a> or <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="nofollow noreferrer">dynamic</a>.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="nofollow noreferrer">Binding</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#using" rel="nofollow noreferrer">Using</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="nofollow noreferrer">Reclaiming</a></p>
</li>
</ul>
<p>The Reclaiming step brings us to your actual use case:</p>
<blockquote>
<p>When a user is done with their volume, they can delete the PVC objects
from the API that allows reclamation of the resource. The reclaim
policy for a PersistentVolume tells the cluster what to do with the
volume after it has been released of its claim. Currently, volumes can
either be Retained, Recycled, or Deleted.</p>
</blockquote>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">Retain</a> - The <code>Retain</code> reclaim policy allows for manual reclamation of the resource.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete" rel="nofollow noreferrer">Delete</a> - For volume plugins that support the <code>Delete</code> reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycle" rel="nofollow noreferrer">Recycle</a> - If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub (<code>rm -rf /thevolume/*</code>) on the volume and makes it available again for a new claim. <strong>Warning</strong>: The <code>Recycle</code> reclaim policy is deprecated (<a href="https://github.com/kubernetes/kubernetes/issues/59060" rel="nofollow noreferrer">source</a>). Instead, the recommended approach is to use "Delete" policy (default for dynamic provisioning) or "Retain" if your data is valuable and needs to be persisted between pod runs (see <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="nofollow noreferrer">docs</a>).</p>
</li>
</ul>
<p>When it comes to updating Pod specs, you can consider <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">Updating a Deployment</a> (if possible) with various <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">update strategies</a>, like for example <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a>:</p>
<blockquote>
<p>The Deployment updates Pods in a rolling update fashion when
<code>.spec.strategy.type==RollingUpdate</code>. You can specify <code>maxUnavailable</code>
and <code>maxSurge</code> to control the rolling update process.</p>
</blockquote>
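<p>For your concrete case, the flow could look roughly like this (a sketch; <code>&lt;pv-name&gt;</code> and <code>&lt;pvc-name&gt;</code> are placeholders for your actual objects, and the Retain step is only needed if you want to keep the existing data):</p>
<pre><code># check which PVs are bound and what their current reclaim policy is
kubectl get pv

# optionally protect the data before touching the claim
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# delete the claim (and scale down / delete the StatefulSet pods using it) so a new PV
# gets provisioned with the updated StorageClass mount options
kubectl delete pvc <pvc-name>
</code></pre>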
|
<p>Friends, I am trying to implement a HPA following the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">hpa tutorial</a> of k8s and I am having the following error:</p>
<p><strong>ValidationError(HorizontalPodAutoscaler.status): missing required field "conditions" in io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerStatus</strong>.</p>
<p>I couldn't find anything about this field "conditions". Does someone have an idea what I may be doing wrong? Here its the YAML of my HPA:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.name }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.name}}
minReplicas: {{ .Values.deployment.minReplicas }}
maxReplicas: {{ .Values.deployment.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
status:
observedGeneration: 1
lastScaleTime: <some-time>
currentReplicas: 2
desiredReplicas: 2
currentMetrics:
- type: Resource
resource:
name: cpu
current:
averageValue: 0
</code></pre>
<p>And here the manifest of my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ .Values.deployment.replicaCount }}
selector:
matchLabels:
app: {{ .Values.labels}}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
labels:
app: {{ .Values.labels }}
spec:
initContainers:
- name: check-rabbitmq
image: {{ .Values.initContainers.image }}
command: ['sh', '-c',
'until wget http://$(RABBITMQ_DEFAULT_USER):$(RABBITMQ_DEFAULT_PASS)@rabbitmq:15672/api/aliveness-test/%2F;
do echo waiting; sleep 2; done;']
envFrom:
- configMapRef:
name: {{ .Values.name }}
- name: check-mysql
image: {{ .Values.initContainers.image }}
command: ['sh', '-c', 'until nslookup mysql-primary.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']
containers:
- name: {{ .Values.name }}
image: {{ .Values.deployment.image }}
ports:
- containerPort: {{ .Values.ports.containerPort }}
resources:
limits:
cpu: 500m
requests:
cpu: 200m
envFrom:
- configMapRef:
name: {{ .Values.name }}
</code></pre>
|
<h3>Background</h3>
<p>I'm not sure why you want to create an <code>HPA</code> with a <code>status</code> section. If you remove this section, the <code>HPA</code> will be created without any issue.</p>
<p>In documentation <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status" rel="nofollow noreferrer">Understanding Kubernetes Objects - Object Spec and Status</a> you can find information:</p>
<blockquote>
<p>Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object <code>spec</code> and the object <code>status</code>. For objects that have a <code>spec</code>, you have to set this when you create the object, providing a description of the characteristics you want the resource to have: its desired state.</p>
</blockquote>
<blockquote>
<p>The <code>status</code> describes the current state of the object, <strong>supplied and updated by the Kubernetes system and its components.</strong> The Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.</p>
</blockquote>
<p>Your situation is partially described in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-horizontal-pod-autoscaler-status-conditions" rel="nofollow noreferrer">Appendix: Horizontal Pod Autoscaler Status Conditions</a></p>
<blockquote>
<p>When using the <code>autoscaling/v2beta2</code> form of the <code>HorizontalPodAutoscaler</code>, you will be able to see status conditions set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether or not the <code>HorizontalPodAutoscaler</code> is able to <code>scale</code>, and whether or not it is currently restricted in any way.</p>
</blockquote>
<h3>Example from my GKE test cluster</h3>
<p>As I mentioned before, if you remove the <code>status</code> section, you will be able to create the <code>HPA</code>.</p>
<pre><code>$ kubectl apply -f - <<EOF
> apiVersion: autoscaling/v2beta2
> kind: HorizontalPodAutoscaler
> metadata:
> name: hpa-apache
> spec:
> scaleTargetRef:
> apiVersion: apps/v1
> kind: Deployment
> name: php-apache
> minReplicas: 1
> maxReplicas: 3
> metrics:
> - type: Resource
> resource:
> name: cpu
> target:
> type: Utilization
> averageUtilization: 50
> EOF
horizontalpodautoscaler.autoscaling/hpa-apache created
</code></pre>
<p>Following the <code>HPA</code> documentation, I've created the <code>PHP Deployment</code>.</p>
<pre><code>$ kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
deployment.apps/php-apache created
service/php-apache created
</code></pre>
<p>When you executed the <code>kubectl autoscale</code> command, you created an <code>HPA</code> for the <code>php-apache</code> deployment.</p>
<pre><code>$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
</code></pre>
<p>Now you are able to see the <code>hpa</code> resource using <code>kubectl get hpa</code> or <code>kubectl get hpa.v2beta2.autoscaling</code>. The output is the same.</p>
<p>The first command will show <code>HPA</code> objects under any <code>apiVersion</code> (<code>v2beta2</code>, <code>v2beta1</code>, etc.), while the second command explicitly asks for the <code>hpa.v2beta2.autoscaling</code> version of the resource. My cluster uses <code>v2beta2</code> by default, so the output of both commands is the same.</p>
<pre><code>$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 76s
$ kubectl get hpa.v2beta2.autoscaling
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 84s
</code></pre>
<p>When you execute the command below, a new file with the <code>hpa</code> configuration will be created. This file is based on the <code>HPA</code> already created by the previous <code>kubectl autoscale</code> command.</p>
<pre><code>$ kubectl get hpa.v2beta2.autoscaling -o yaml > hpa-v2.yaml
# If I would use command `kubectl get hpa hpa-apache > hpa-v2.yaml` file would look the same
$ cat hpa-v2.yaml
apiVersion: v1
items:
- apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
...
status:
conditions:
- lastTransitionTime: "2020-12-11T10:44:43Z"
message: recent recommendations were higher than current one, applying the highest
recent recommendation
reason: ScaleDownStabilized
status: "True"
type: AbleToScale
...
currentMetrics:
- resource:
current:
averageUtilization: 0
averageValue: 1m
</code></pre>
<h3>Conclusion</h3>
<p>The <code>status</code> describes the current state of the object, <strong>supplied and updated by the Kubernetes system and its components.</strong></p>
<p>If you want to create a resource from a <code>YAML</code> manifest that includes <code>status</code>, you have to provide a value in <code>status.conditions</code>, where <code>conditions</code> requires an <code>array</code> value.</p>
<pre><code>status:
conditions:
- lastTransitionTime: "2020-12-11T10:44:43Z"
</code></pre>
<h3>Quick solution</h3>
<p>Just remove the <code>status</code> section from your YAML.</p>
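<p>Based on the template from your question, the trimmed manifest would simply end after the <code>metrics</code> block:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.name }}
  minReplicas: {{ .Values.deployment.minReplicas }}
  maxReplicas: {{ .Values.deployment.maxReplicas }}
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
</code></pre>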
<p>Let me know if you still encounter any issues after removing the <code>status</code> section from the YAML manifest.</p>
|
<p>I have created a Kubernetes cluster with a sample Spring Boot application and it works well from the public IP. Now I want to access the endpoint of the Spring Boot application inside the Kubernetes cluster. I have already followed the tutorial from Google for Configuring Serverless VPC Access (<a href="https://cloud.google.com/vpc/docs/configure-serverless-vpc-access?hl=bg" rel="nofollow noreferrer">https://cloud.google.com/vpc/docs/configure-serverless-vpc-access?hl=bg</a>). I have created the Serverless VPC Access connector and used it in one of my Cloud Functions. </p>
<p>Now my problem is: how can I connect to the internal IP of the Kubernetes cluster from my Cloud Function? I have written the code in Go. </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>package p
import (
"fmt"
"io/ioutil"
"net/http"
)
func HelloWorld(w http.ResponseWriter, r *http.Request) {
fmt.Println("Starting the application...")
response, err := http.Get("http://10.59.247.177:47002/")
if err != nil {
fmt.Fprint(w, "The HTTP request failed with error %s\n", err)
} else {
data, _ := ioutil.ReadAll(response.Body)
fmt.Fprint(w, string(data))
}
}</code></pre>
</div>
</div>
</p>
<p>But I am getting error: The HTTP request failed with error %s
Get <a href="http://10.59.247.177:47002/" rel="nofollow noreferrer">http://10.59.247.177:47002/</a>: dial tcp 10.59.247.177:47002: i/o timeout</p>
|
<p>By default, Kubernetes services are internal to the Kubernetes cluster. You have to expose services so that applications from outside of Kubernetes can connect to them. </p>
<p>There are 3 main ways to expose service in Kubernetes:</p>
<ol>
<li>Public load balancer. The service is exposed to the Internet. </li>
<li>Internal load balancer. The service is exposed internally within the VPC and region.</li>
<li>NodePort. The service is exposed on the Kubernetes nodes' IP addresses on some high-numbered port. This makes the service reachable internally, and between regions within the VPC.</li>
</ol>
<p>Read more here <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a> and here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p>
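<p>For your use case (a Cloud Function reaching the cluster over Serverless VPC Access), an internal load balancer is usually the right fit. A minimal sketch - the selector is an assumption and must match your Spring Boot Deployment's pod labels, and depending on the GKE version the annotation may be <code>networking.gke.io/load-balancer-type</code> instead:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: springboot-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: springboot        # placeholder - must match your Deployment's pod labels
  ports:
  - port: 80
    targetPort: 47002      # the port you were calling in your question; adjust to your container port
</code></pre>
<p>Your Cloud Function would then call the internal load balancer IP (shown under EXTERNAL-IP in <code>kubectl get svc</code>) instead of the cluster-internal Service IP.</p>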
|
<p>I have multiple configuration files in two directories. For example, </p>
<ul>
<li>conf.d/parentconf1.conf</li>
<li>con.d/node1/child1.conf</li>
<li>conf.d/node2/child2.conf</li>
</ul>
<p>I need to mount these configuration files in the same directory structure to kubernetes pod using <code>ConfigMap</code>. </p>
<p>Tried using the</p>
<pre><code>kubectl create configmap --from-file=./conf.d --from-file=./conf.d/node1/child1.conf --from-file=./conf.d/node2/child2.conf.
</code></pre>
<p>The ConfigMap gets created but, as expected, it cannot express the nested directory structure. </p>
<p>Is it possible to create ConfigMap recursively from folders and still retain the folder structure in the name of the key entry for the ConfigMap - since the intention is to mount these ConfigMaps into pods?</p>
|
<p>Unfortunately, reflecting a directory structure in a ConfigMap is not currently supported. A workaround is to express the directory hierarchy through the <code>items</code>/<code>path</code> mapping when mounting the volume, like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: testconfig
data:
file1: |
This is file1
file2: |
This is file2 in subdir directory
---
apiVersion: v1
kind: Pod
metadata:
name: testpod
spec:
restartPolicy: Never
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh","-c", "sleep 1000" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: testconfig
items:
- key: file2
path: subdir/file2
- key: file1
path: file1
</code></pre>
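<p>If you prefer to keep creating the ConfigMap from the files on disk instead of inlining the data, you can combine custom keys with the same <code>items</code>/<code>path</code> mapping shown above (keys cannot contain <code>/</code>, so the directory structure is flattened into the key names here - the names are just an illustration):</p>
<pre><code>kubectl create configmap testconfig \
  --from-file=parentconf1.conf=./conf.d/parentconf1.conf \
  --from-file=node1_child1.conf=./conf.d/node1/child1.conf \
  --from-file=node2_child2.conf=./conf.d/node2/child2.conf
</code></pre>
<p>Then map, for example, the key <code>node1_child1.conf</code> back to <code>path: node1/child1.conf</code> in the volume's <code>items</code> list.</p>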
|
<p>I looked up RBD disk space usage, but found different statistics from Ceph and the host which mounts the disk. </p>
<p>From Ceph:</p>
<pre><code>$ rbd -p rbd du
NAME PROVISIONED USED
kubernetes-dynamic-pvc-13a2d932-6be0-11e9-b53a-0a580a800339 40GiB 37.8GiB
</code></pre>
<p>From the host which mounts the disk</p>
<pre><code>$ df -h
Filesystem Size Used Available Use% Mounted on
/dev/rbd0 39.2G 26.6G 10.6G 72% /data
</code></pre>
<p>How could I explain the difference? </p>
|
<p>Check the mount options of the /dev/rbd0 device - most likely the 'discard' option is not set. Without that option the filesystem cannot report reclaimed (freed) space back to Ceph, so Ceph has no idea how much space is actually occupied on the RBD volume and keeps counting deleted blocks as used. This is not a big problem and can be safely ignored; you can rely on the stats reported by kubelet.</p>
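<p>To confirm this, and to reclaim the space once if you want the numbers to line up, something along these lines on the host that mounts the volume should work:</p>
<pre><code># check whether the filesystem is mounted with the discard option
mount | grep rbd0

# tell Ceph about the freed blocks once, manually
fstrim -v /data
</code></pre>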
|
<p>Anyone experienced this <strong>UnexpectedAdmissionError</strong> :
<em>Update plugin resources failed due to failed to write checkpoint file "kubelet_internal_checkpoint": mkdir /var: file exists, which is unexpected.</em></p>
<p><em>$ kubectl get pods</em></p>
<pre><code>NAME READY STATUS RESTARTS AGE
test-6b5ddf5dd4-22tlr 0/1 UnexpectedAdmissionError 0 9m
</code></pre>
<p>This pod is from a deployment</p>
<p><em>$ kubectl describe pod test-6b5ddf5dd4-22tlr</em></p>
<pre><code>Name: test-6b5ddf5dd4-22tlr
Priority: 0
Node: node-1
Start Time: Mon, 07 Dec 2020 18:37:46 +0000
Labels: pod-template-hash=6b5ddf5dd4
Annotations: kubernetes.io/psp: 99-restricted
seccomp.security.alpha.kubernetes.io/pod: docker/default
Status: Failed
Reason: UnexpectedAdmissionError
Message: Pod Update plugin resources failed due to failed to write checkpoint file "kubelet_internal_checkpoint": mkdir /var: file exists, which is unexpected.
IP:
IPs: <none>
Controlled By: ReplicaSet/test-6b5ddf5dd4
Containers:
test-pod:
Image: xxxx
Port: 8070/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/app/config/ from application-config (ro)
/app/data_vol from datavolume (rw)
/app/logback/ from logback-config (ro)
/app/logs_vol from logvolume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kbwnx (ro)
Volumes:
logvolume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: server1
Path: /release
ReadOnly: false
datavolume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: server2
Path: /ivol/
ReadOnly: false
application-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: configmap
Optional: false
logback-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: logback-config-map
Optional: false
default-token-kbwnx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kbwnx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned test-6b5ddf5dd4-22tlr to node-1
Warning UnexpectedAdmissionError 28m kubelet, node-1 Update plugin resources failed due to failed to write checkpoint file "kubelet_internal_checkpoint": mkdir /var: file exists, which is unexpected.
</code></pre>
|
<p>Posting this as <code>Community Wiki</code> for better visibility.</p>
<p>The <strong>solution</strong> to this issue was <strong>rebooting the affected node.</strong></p>
<p>The OP confirmed that the issue affected only one node, as pods on other nodes were not affected. Since this issue cannot be replicated and it's now impossible to determine the root cause, please find some guidelines and troubleshooting documents below which might help resolve a similar issue in the future.</p>
<p>It's best to start with the most general troubleshooting, like <code>$ kubectl describe node <nodename></code>, to get overall information about the node (Capacity, Allocation, Conditions, Events, etc.).</p>
<p>Later it's good to check <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a> and verify that the node has enough resources - <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">Verify Node resources</a>.</p>
<p>The next step is to check the logs; this is well described in the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/" rel="nofollow noreferrer">Troubleshoot Clusters</a> document. It's worth mentioning that checking the <code>kubelet</code> logs using <code>$ journalctl -u kubelet</code> might also provide very useful information.</p>
<p>More detailed node troubleshooting can be found also in documentations below:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/" rel="nofollow noreferrer">Apply Node problem detector</a></li>
</ul>
<blockquote>
<p>Node problem detector is a DaemonSet monitoring the node health. It collects node problems from various daemons and reports them to the apiserver as NodeCondition and Event.</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/setup/best-practices/node-conformance/" rel="nofollow noreferrer">Validate node setup</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">Handling out of resources situation</a></li>
</ul>
|
<p>Ref: <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers</a></p>
<p>According to the docs</p>
<blockquote>
<p>Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.</p>
</blockquote>
<p><code>When set as the first provider, the resource will be decrypted as new values are written.</code> sounds confusing. If resources are written as is with no encryption into etcd, what does <code>decrypted as new values are written</code> mean?</p>
<p>And following that</p>
<blockquote>
<p>By default, the identity provider is used to protect secrets in etcd, which provides no encryption.</p>
</blockquote>
<p>What kind of security does <code>identity</code> provider give if no encryption happens and if encryption happens, what kind of encryption is it?</p>
|
<p>As stated in <code>etcd</code> about <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/security.md#does-etcd-encrypt-data-stored-on-disk-drives" rel="nofollow noreferrer">security</a></p>
<blockquote>
<p>Does etcd encrypt data stored on disk drives?</p>
</blockquote>
<p>No. etcd doesn't encrypt key/value data stored on disk drives. If a user needs to encrypt data stored in etcd, there are some options:</p>
<ul>
<li>Let client applications encrypt and decrypt the data</li>
<li>Use a feature of underlying storage systems for encrypting stored data like dm-crypt</li>
</ul>
<h2>First part of the question:</h2>
<blockquote>
<p>By default, the <code>identity</code> provider is used to protect secrets in etcd, which provides no encryption.</p>
</blockquote>
<p>It means that by default the Kubernetes API server uses the <code>identity</code> provider while storing secrets in <code>etcd</code> and it <strong>doesn't provide any encryption.</strong></p>
<p>Using <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">EncryptionConfiguration</a> with only one provider, <code>identity</code>, gives you the same result as not using <code>EncryptionConfiguration</code> at all (assuming you didn't have any encrypted secrets before).
All secret data will be stored in plain text in <code>etcd</code>.</p>
<p>Example:</p>
<pre><code> providers:
- identity: {}
</code></pre>
<h2>Second part of your question:</h2>
<blockquote>
<p>Resources written as-is without encryption.</p>
</blockquote>
<p>This is described and explained in the first part of the question.</p>
<blockquote>
<p>When set as the first provider, the resource will be decrypted as new values are written.</p>
</blockquote>
<p>Take a look at this example:</p>
<pre><code> providers:
- aescbc:
keys:
- name: key1
secret: <BASE 64 ENCODED SECRET>
- identity: {}
</code></pre>
<p>What this configuration means for you:</p>
<ul>
<li>The new provider introduced into your <code>EncryptionConfiguration</code> does not affect existing data.</li>
<li>All existing <code>secrets</code> in <code>etcd</code> (before this configuration has been applied) are still in plain text.</li>
<li>Starting with this configuration all new <code>secrets</code> will be saved using <code>aescbc</code> encryption. All new <code>secrets</code> in <code>etcd</code> will have prefix <code>k8s:enc:aescbc:v1:key1</code>.</li>
<li>In this scenario you will have in <code>etcd</code> a mixture of encrypted and not encrypted data.</li>
</ul>
<p>So the question is: why are we using those two providers?</p>
<ul>
<li>provider: <code>aescbc</code> is used to write new <code>secrets</code> as encrypted data during write operation and to decrypt existing <code>secrets</code> during read operation.</li>
<li>provider: <code>identity</code> is still necessary to read all not encrypted secrets.</li>
</ul>
<p>Now we are switching our providers in <code>EncryptionConfiguration</code>:</p>
<pre><code> providers:
- identity: {}
- aescbc:
keys:
- name: key1
secret: <BASE 64 ENCODED SECRET>
</code></pre>
<ul>
<li>In this scenario you will have in <code>etcd</code> a mixture of encrypted and not encrypted data.</li>
<li>Starting with this configuration all new <code>secrets</code> will be saved in plain text.</li>
<li>For all existing secrets in <code>etcd</code> with the prefix <code>k8s:enc:aescbc:v1:key1</code>, the <code>aescbc</code> provider configuration will be used to decrypt them during read operations.</li>
</ul>
<blockquote>
<p>When set as the first provider, the resource will be decrypted as new values are written</p>
</blockquote>
<p>In order to switch from a <code>mixture of encrypted and not encrypted data</code> to a scenario where there is only "not encrypted" data, you should perform a read/write operation for all secrets:</p>
<p><code>$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -</code></p>
<blockquote>
<p>why's it there if it offers no encryption but the docs seem to talk about decryption and how it protects.</p>
</blockquote>
<p>It's necessary to have the provider type of <code>identity</code> if you have a mixture of encrypted and not encrypted data
or if you want to decrypt all existing <code>secrets</code> (stored in etcd) encrypted by another provider.</p>
<p>The following command reads all secrets and then updates them to apply server side encryption. More details can be found in <a href="https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#ensuring-all-secrets-are-encrypted" rel="nofollow noreferrer">this paragraph</a></p>
<p><code>$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -</code></p>
<p>Depending on your <code>EncryptionConfiguration</code>, all <code>secrets</code> will be saved as not encrypted -if the first provider is: <code>identity</code> or encrypted if the first provider is different type.</p>
<p><strong>In addition</strong></p>
<p><code>EncryptionConfiguration</code> is disabled by default. To use it, you have to add the <code>--encryption-provider-config</code> flag to your <code>kube-apiserver</code> configuration. <code>Identity</code> does not encrypt any data; as per the <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers" rel="nofollow noreferrer">Providers documentation</a> it has 3x <code>N/A</code>.</p>
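<p>For reference, a minimal sketch of a complete <code>EncryptionConfiguration</code> file that wires the snippets above together (the file path is only an example):</p>
<pre><code># /etc/kubernetes/enc/enc.yaml (example path)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE 64 ENCODED SECRET>
      - identity: {}
</code></pre>
<p>The <code>kube-apiserver</code> is then started with <code>--encryption-provider-config=/etc/kubernetes/enc/enc.yaml</code> pointing at that file.</p>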
|
<p>I need to monitor my container memory usage running on a Kubernetes cluster. After reading some articles, there are two recommended metrics: "container_memory_rss" and "container_memory_working_set_bytes".</p>
<p>The definitions of both metrics are said (from the cAdvisor code)</p>
<ul>
<li>"container_memory_rss" : The amount of anonymous and swap cache memory</li>
<li>"container_memory_working_set_bytes": The amount of working set memory, this includes recently accessed memory, dirty memory, and kernel memory</li>
</ul>
<p>I think both metrics represent the number of bytes of physical memory that the process uses, but there are some differences between the two values on my Grafana dashboard.</p>
<p><strong>My question is:</strong></p>
<ul>
<li>What is the difference between two metrics?</li>
<li>Which metric is more appropriate for monitoring memory usage? Some posts said both, because when one of those metrics reaches the limit, that container is OOM killed.</li>
</ul>
|
<p>You are right. I will try to address your questions in more detail.</p>
<blockquote>
<p>What is the difference between two metrics?</p>
</blockquote>
<p><code>container_memory_rss</code> equals the value of <code>total_rss</code> from the <code>/sys/fs/cgroup/memory/memory.stat</code> file:</p>
<pre><code>// The amount of anonymous and swap cache memory (includes transparent
// hugepages).
// Units: Bytes.
RSS uint64 `json:"rss"`
</code></pre>
<p>It is the total amount of anonymous and swap cache memory (it includes transparent hugepages), and it equals the value of <code>total_rss</code> from the <code>memory.stat</code> file. This should not be confused with the true <code>resident set size</code> or the amount of physical memory used by the cgroup. <code>rss + file_mapped</code> will give you the resident set size of the cgroup. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.</p>
<hr />
<p><code>container_memory_working_set_bytes</code> (as already mentioned by Olesya) is the <code>total usage</code> - <code>inactive file</code>. It is an estimate of how much memory cannot be evicted:</p>
<pre><code>// The amount of working set memory, this includes recently accessed memory,
// dirty memory, and kernel memory. Working set is <= "usage".
// Units: Bytes.
WorkingSet uint64 `json:"working_set"`
</code></pre>
<p>Working Set is the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process.</p>
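<p>To see where these numbers come from on a node, here is a small sketch (assuming cgroup v1 paths, run inside the container or against its cgroup directory on the host) that reproduces the working-set calculation cAdvisor performs:</p>
<pre><code># working_set = memory usage - inactive file cache
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
inactive_file=$(grep -w total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')
echo "working_set_bytes = $((usage - inactive_file))"

# the value reported as container_memory_rss
grep -w total_rss /sys/fs/cgroup/memory/memory.stat
</code></pre>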
<hr />
<blockquote>
<p>Which metrics are much proper to monitor memory usage? Some post said
both because one of those metrics reaches to the limit, then that
container is oom killed.</p>
</blockquote>
<p>If you are <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="noreferrer">limiting the resource usage</a> for your pods, then you should monitor both, as they will cause an oom-kill if they reach a particular resource limit.</p>
<p>I also recommend <a href="https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d" rel="noreferrer">this article</a> which shows an example explaining the below assertion:</p>
<blockquote>
<p>You might think that memory utilization is easily tracked with
<code>container_memory_usage_bytes</code>, however, this metric also includes
cached (think filesystem cache) items that can be evicted under memory
pressure. The better metric is <code>container_memory_working_set_bytes</code> as
this is what the OOM killer is watching for.</p>
</blockquote>
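<p>Building on that, a hedged PromQL sketch for spotting containers that approach their memory limit; the inner comparison filters out containers without a limit, where <code>container_spec_memory_limit_bytes</code> is reported as 0:</p>
<pre><code># Containers whose working set exceeds 90% of their configured memory limit
container_memory_working_set_bytes{container!=""}
  / (container_spec_memory_limit_bytes{container!=""} > 0)
  > 0.9
</code></pre>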
<p><strong>EDIT:</strong></p>
<p>Adding some additional sources as a supplement:</p>
<ul>
<li><p><a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66" rel="noreferrer">A Deep Dive into Kubernetes Metrics — Part 3 Container Resource Metrics</a></p>
</li>
<li><p><a href="https://github.com/google/cadvisor/issues/1744" rel="noreferrer">#1744</a></p>
</li>
<li><p><a href="https://www.magalix.com/blog/memory_working_set-vs-memory_rss" rel="noreferrer">Understanding Kubernetes Memory Metrics</a></p>
</li>
<li><p><a href="https://medium.com/@eng.mohamed.m.saeed/memory-working-set-vs-memory-rss-in-kubernetes-which-one-you-should-monitor-8ef77bf0acee" rel="noreferrer">Memory_working_set vs Memory_rss in Kubernetes, which one you should monitor?</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="noreferrer">Managing Resources for Containers</a></p>
</li>
<li><p><a href="https://github.com/google/cadvisor/blob/50b23f4ed9bc53cf068316b67bee04c4145f1e73/info/v1/container.go#L379" rel="noreferrer">cAdvisor code</a></p>
</li>
</ul>
|
<p>What is the best way of monitoring a Kubernetes cluster? I have a social network website built with Node.js and React. I need to know how many resources the app needs per user (example: for 100 users per hour we must have 3 replicas of the backend).
What is the best solution to do this?</p>
|
<p>The only valid way is to do stress testing of your application (using, for example, ab or JMeter) and monitor the consumed resources using Prometheus and Grafana. You should probably also enable the Horizontal Pod Autoscaler for your application to test how it scales with load.</p>
<p>If you don't have Prometheus and Grafana, here is a simple but powerful way to install them: <a href="https://github.com/coreos/kube-prometheus" rel="nofollow noreferrer">https://github.com/coreos/kube-prometheus</a></p>
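<p>As a sketch of the autoscaling part (assuming a deployment named <code>backend</code> with CPU requests set), the HPA can be created imperatively and watched while the load test runs:</p>
<pre><code># Scale between 2 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment backend --cpu-percent=70 --min=2 --max=10

# Watch how it reacts while the stress test is running
kubectl get hpa backend --watch
</code></pre>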
|
<p>I have a K8s cluster which runs independent jobs (each job has one pod) and I expect them to run to completion. The scheduler, however, sometimes reschedules them on a different node. My jobs need to be single-run, and restarting them on a different node is not an acceptable outcome for me. </p>
<p>I was looking at Pod disruption budgets (PDB), but from what I understand their selectors apply to a label of pods. Since every one of my jobs is different and has a separate label, how do I use PDB to tell K8s that <strong><em>all</em></strong> of my pods have a maxUnavailable of 0?</p>
<p>I have also used this annotation</p>
<pre><code>"cluster-autoscaler.kubernetes.io/safe-to-evict": false
</code></pre>
<p>but this does not affect pod evictions under resource pressure.</p>
<p>Ideally, I should be able to tell K8s that none of my Pods should be evicted unless they are complete.</p>
|
<p>You should specify resources in order for your jobs to get the Guaranteed quality of service class:</p>
<pre><code>resources:
limits:
memory: "200Mi"
cpu: "700m"
requests:
memory: "200Mi"
cpu: "700m"
</code></pre>
<p>Requests should be equal to limits - then your pod will get the Guaranteed QoS class and will no longer be evicted due to resource pressure. </p>
<p>Read more: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod</a></p>
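<p>A quick way to verify the assigned QoS class after applying the resources (the pod name is a placeholder):</p>
<pre><code>kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
# Expected output: Guaranteed
</code></pre>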
|
<p>I'm trying to run a simple service in minikube with the nginx image.</p>
<p>I ran the deployment and service using the commands below:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl run meu-nginx --image nginx --port 80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/meu-nginx created
felipeflores@GMEPN004052:~$ kubectl expose deployment meu-nginx
service/meu-nginx exposed
</code></pre>
<p>My service was created correctly, but I can't access the endpoint on its port:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 55d
meu-nginx ClusterIP 10.108.164.133 <none> 80/TCP 20m
</code></pre>
<p>But</p>
<pre><code>felipeflores@GMEPN004052:~$ curl 10.108.164.133
curl: (7) Failed to connect to 10.108.164.133 port 80
</code></pre>
<p>For more information, here are the describe outputs for the service and deployment:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl describe service meu-nginx
Name: meu-nginx
Namespace: default
Labels: run=meu-nginx
Annotations: <none>
Selector: run=meu-nginx
Type: ClusterIP
IP: 10.108.164.133
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 172.17.0.15:80
Session Affinity: None
Events: <none>
felipeflores@GMEPN004052:~$ kubectl describe deploy meu-nginx
Name: meu-nginx
Namespace: default
CreationTimestamp: Mon, 29 Apr 2019 09:07:46 -0300
Labels: run=meu-nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=meu-nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=meu-nginx
Containers:
meu-nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: meu-nginx-546db5bbc9 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 27m deployment-controller Scaled up replica set meu-nginx-546db5bbc9 to 1
</code></pre>
|
<p>A Service of type ClusterIP is not reachable from outside of the Kubernetes cluster. You need to use a Service of type LoadBalancer or NodePort.</p>
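<p>For example, a minimal sketch of exposing the existing deployment via a NodePort service instead (the ClusterIP service created earlier is removed first so the name can be reused):</p>
<pre><code># Replace the ClusterIP service with a NodePort one
kubectl delete service meu-nginx
kubectl expose deployment meu-nginx --type=NodePort --port=80

# Minikube prints a reachable URL such as http://<minikube-ip>:<node-port>
minikube service meu-nginx --url
</code></pre>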
<p>Please read: </p>
<p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service</a>
<a href="https://medium.com/@xcoulon/deploying-your-first-web-app-on-minikube-6e98d2884b3a" rel="nofollow noreferrer">https://medium.com/@xcoulon/deploying-your-first-web-app-on-minikube-6e98d2884b3a</a></p>
|
<p>So I have 4 nodes. 1 is System, 1 is Dev, 1 is Qa and 1 is UAT.</p>
<p>My affinity is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
namespace: dev
labels:
app: auth
environment: dev
app-role: api
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
environment: dev
app-role: api
tier: backend
annotations:
build: _{Tag}_
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- auth
topologyKey: kubernetes.io/hostname
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: environment
operator: In
values:
- dev
containers:
- name: companyauth
image: company.azurecr.io/auth:_{Tag}_
imagePullPolicy: Always
env:
- name: ConnectionStrings__DevAuth
value: dev
ports:
- containerPort: 80
imagePullSecrets:
- name: ips
</code></pre>
<p>It is my intention to make sure that, on my production cluster (which has 3 nodes in 3 different availability zones), all the pods will be scheduled on different nodes/availability zones. However, it appears that if I already have pods scheduled on a node, then when I do a deployment it will not replace the pods that already exist.</p>
<p>0/4 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match node selector.</p>
<p>However, if I remove the podAffinity, it works fine and will overwrite the current node with the new pod from the deployment. What is the correct way to do this to ensure my deployment on my production cluster will always have a pod scheduled on a different node in a different availability zone and also be able to update the existing nodes?</p>
|
<p>Your goal can be achieved using only <a href="https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/pod_affinity.html" rel="nofollow noreferrer">PodAntiAffinity</a>.</p>
<p>I have tested this with my <code>GKE</code> test cluster, but it should work similar on <code>Azure</code>.</p>
<h3>Current Issue</h3>
<p>In your current setup, you have set <code>podAntiAffinity</code> with <code>nodeAffinity</code>.</p>
<blockquote>
<p><code>Pod anti-affinity</code> can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod.</p>
</blockquote>
<p>In your <code>Deployment</code> setup, new pods will have <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels</a> like:</p>
<ul>
<li><code>app: auth</code></li>
<li><code>environment: dev</code></li>
<li><code>app-role: api</code></li>
<li><code>tier: backend</code></li>
</ul>
<p><code>PodAntiAffinity</code> was configured to <strong>not allow</strong> deploying a new pod if there is already a pod with the label <code>app: auth</code> on the node.</p>
<p><code>NodeAffinity</code> was configured to deploy <strong>only on the node</strong> with label <code>environment: dev</code>.</p>
<p>To sum up, your error:</p>
<pre><code>0/4 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match node selector.
</code></pre>
<blockquote>
<p>1 node(s) didn't match pod affinity/anti-affinity</p>
</blockquote>
<p>Your setup allows deploying only on the node with the label <code>environment: dev</code>, and only one pod with the label <code>app: auth</code> per node.</p>
<p>As you mention</p>
<blockquote>
<p>if I already have pods scheduled on a node, then when I do a deployment it will not overwrite the pods that already exist.</p>
</blockquote>
<p><code>PodAntiAffinity</code> behavior worked and didn't allow deploying a new pod with the label <code>app: auth</code>, as there was already one on that node.</p>
<blockquote>
<p>3 node(s) didn't match node selector.</p>
</blockquote>
<p><code>NodeAffinity</code> allows deploying pods only on the node with the label <code>environment: dev</code>. The other nodes probably have labels like <code>environment: system</code>, <code>environment: uat</code>, <code>environment: qa</code>, which didn't match the <code>environment: dev</code> label and thus didn't match the <code>node selector</code>.</p>
<h3>Solution</h3>
<p>Easiest way is to remove <code>NodeAffinity</code>.</p>
<p>As long as <code>topologyKey</code> is set to <code>kubernetes.io/hostname</code> in <code>PodAntiAffinity</code>, it's enough.</p>
<blockquote>
<p>The topologyKey uses the default label attached to a node to dynamically filter on the name of the node.</p>
</blockquote>
<p>For more information, please check <a href="https://thenewstack.io/strategies-for-kubernetes-pod-placement-and-scheduling/" rel="nofollow noreferrer">this article</a>.</p>
<p>If you describe your <code>nodes</code> and <code>grep</code> the output for <code>kubernetes.io/hostname</code>, you will get a unique value per node:</p>
<pre><code>$ kubectl describe node | grep kubernetes.io/hostname
kubernetes.io/hostname=gke-affinity-default-pool-27d6eabd-vhss
kubernetes.io/hostname=gke-affinity-default-pool-5014ecf7-5tkh
kubernetes.io/hostname=gke-affinity-default-pool-c2afcc97-clg9
</code></pre>
<h3>Tests</h3>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
labels:
app: auth
environment: dev
app-role: api
tier: backend
spec:
replicas: 3
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
environment: dev
app-role: api
tier: backend
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- auth
topologyKey: kubernetes.io/hostname
containers:
- name: nginx
image: nginx
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>After deploying this YAML:</p>
<pre><code>$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
auth-7fccf5f7b8-4dkc4 1/1 Running 0 9s 10.0.1.9 gke-affinity-default-pool-c2afcc97-clg9 <none> <none>
auth-7fccf5f7b8-5qgt4 1/1 Running 0 8s 10.0.2.6 gke-affinity-default-pool-5014ecf7-5tkh <none> <none>
auth-7fccf5f7b8-bdmtw 1/1 Running 0 8s 10.0.0.9 gke-affinity-default-pool-27d6eabd-vhss <none> <none>
</code></pre>
<p>If you increase the replicas to 7, no more pods will be deployed. All new pods will be stuck in the <code>Pending</code> state, as <code>podAntiAffinity</code> worked (each node already has a pod with the label <code>app: auth</code>).</p>
<pre><code>$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
auth-7fccf5f7b8-4299k 0/1 Pending 0 79s <none> <none> <none> <none>
auth-7fccf5f7b8-4dkc4 1/1 Running 0 2m1s 10.0.1.9 gke-affinity-default-pool-c2afcc97-clg9 <none> <none>
auth-7fccf5f7b8-556h5 0/1 Pending 0 78s <none> <none> <none> <none>
auth-7fccf5f7b8-5qgt4 1/1 Running 0 2m 10.0.2.6 gke-affinity-default-pool-5014ecf7-5tkh <none> <none>
auth-7fccf5f7b8-bdmtw 1/1 Running 0 2m 10.0.0.9 gke-affinity-default-pool-27d6eabd-vhss <none> <none>
auth-7fccf5f7b8-q4s2c 0/1 Pending 0 79s <none> <none> <none> <none>
auth-7fccf5f7b8-twb9j 0/1 Pending 0 79s <none> <none> <none> <none>
</code></pre>
<p>Similar solution was described in <a href="https://www.alibabacloud.com/blog/high-availability-deployment-of-pods-on-multi-zone-worker-nodes_595085" rel="nofollow noreferrer">High-Availability Deployment of Pods on Multi-Zone Worker Nodes</a> blog.</p>
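<p>Since the production requirement is one pod per availability zone, the same pattern can be switched to a zone-level topology key. A sketch, assuming the nodes carry the standard zone label (<code>topology.kubernetes.io/zone</code> on newer clusters, <code>failure-domain.beta.kubernetes.io/zone</code> on older ones):</p>
<pre><code>      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - auth
            topologyKey: topology.kubernetes.io/zone
</code></pre>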
|
<p>I'm trying to find out and understand how the OOM-killer works for containers.</p>
<p>To figure it out, I've read lots of articles and found out that the OOM-killer kills containers based on the <code>oom_score</code>, and <code>oom_score</code> is determined by <code>oom_score_adj</code> and the memory usage of that process.</p>
<p>There are two metrics, <code>container_memory_working_set_bytes</code> and <code>container_memory_rss</code>, from cAdvisor for monitoring the memory usage of the container.</p>
<p>It seems that RSS memory (<code>container_memory_rss</code>) has an impact on <code>oom_score</code>, so I can understand that with the <code>container_memory_rss</code> metric, if that metric reaches the memory limit, the OOM-killer will kill the process.</p>
<ul>
<li><a href="https://github.com/torvalds/linux/blob/v3.10/fs/proc/base.c#L439" rel="noreferrer">https://github.com/torvalds/linux/blob/v3.10/fs/proc/base.c#L439</a></li>
<li><a href="https://github.com/torvalds/linux/blob/v3.10/mm/oom_kill.c#L141" rel="noreferrer">https://github.com/torvalds/linux/blob/v3.10/mm/oom_kill.c#L141</a></li>
<li><a href="https://github.com/torvalds/linux/blob/v3.10/include/linux/mm.h#L1136" rel="noreferrer">https://github.com/torvalds/linux/blob/v3.10/include/linux/mm.h#L1136</a></li>
</ul>
<p>But from the articles like below:</p>
<ul>
<li><a href="https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d" rel="noreferrer">https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d</a></li>
<li><a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66" rel="noreferrer">https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66</a></li>
</ul>
<blockquote>
<p>The better metric is <code>container_memory_working_set_bytes</code> as this is what the OOM killer is watching for.</p>
</blockquote>
<p><strong>I cannot understand the fact that the OOM-killer is watching the container's working set memory.</strong> I don't think I understand the meaning of the working set memory of the container, which is 'total usage - inactive file'.</p>
<ul>
<li><a href="https://github.com/google/cadvisor/issues/2582#issuecomment-644883028" rel="noreferrer">https://github.com/google/cadvisor/issues/2582#issuecomment-644883028</a></li>
</ul>
<p>Where can I find the reference? Or could you explain the relationship between working set memory and OOM-kill on the container?</p>
|
<p>As you already know, <code>container_memory_working_set_bytes</code> is:</p>
<blockquote>
<p>the amount of working set memory and it includes recently accessed
memory, dirty memory, and kernel memory. Therefore, working set is
less than or equal to (<=) "usage".</p>
</blockquote>
<p>The <code>container_memory_working_set_bytes</code> is being used for OoM decisions because it <strong>excludes cached data</strong> (<a href="https://www.thomas-krenn.com/en/wiki/Linux_Page_Cache_Basics" rel="noreferrer">Linux Page Cache</a>) that can be evicted in memory pressure scenarios.</p>
<p>So, if the <code>container_memory_working_set_bytes</code> is increased to the limit, it will lead to oomkill.</p>
<p>You can find the fact that when Linux kernel checking available memory, it calls <a href="https://github.com/torvalds/linux/blob/master/mm/util.c#L872" rel="noreferrer"><code>vm_enough_memory()</code></a> to find out how many pages are potentially available.</p>
<p>Then, when the machine is low on memory, old page frames including the cache will be reclaimed, but the kernel may still find that it was unable to free enough pages to satisfy a request. Now it's time to call <a href="https://github.com/torvalds/linux/blob/master/mm/oom_kill.c#L1048" rel="noreferrer"><code>out_of_memory()</code></a> to kill a process. To determine the candidate process to be killed, it uses <code>oom_score</code>.</p>
<p>So when the working set bytes reach the limit, it means that the kernel cannot find available pages even after reclaiming old pages including the cache, so the kernel will trigger the OOM-killer to kill the process.</p>
<p>You can find more details on the Linux kernel documents:</p>
<ul>
<li><a href="https://www.kernel.org/doc/gorman/html/understand/understand016.html" rel="noreferrer">https://www.kernel.org/doc/gorman/html/understand/understand016.html</a></li>
<li><a href="https://www.kernel.org/doc/gorman/html/understand/understand013.html" rel="noreferrer">https://www.kernel.org/doc/gorman/html/understand/understand013.html</a></li>
</ul>
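<p>To see the kernel-side values this decision is based on, you can inspect them directly on the node (PID <code>1234</code> is only a placeholder for the container's main process):</p>
<pre><code># Current badness score the OOM-killer uses to pick a victim
cat /proc/1234/oom_score

# Adjustment set by the kubelet/runtime based on the pod's QoS class
cat /proc/1234/oom_score_adj
</code></pre>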
|
<p>I was trying to override the image tag in Helm 3 using the upgrade command by setting the variable on the command line, but it did not work. Has someone tried this feature in Helm 3?
I've been stuck for the last couple of days, so it would be helpful to know your views.</p>
<p>The deployment manifest file looks like this:</p>
<blockquote>
<pre><code> containers:
- image: {{ .Values.image.repository }}:{{.Values.image.tag}}
imagePullPolicy: Always
</code></pre>
</blockquote>
<p>Executing this command from the command line:</p>
<pre><code>> helm upgrade resources-dev resources --set image.tag=72615 --dry-run --debug
</code></pre>
<p>does not override the image tag value from 72626 to 72615:</p>
<pre><code> containers:
- image: aksresourcesapi.azurecr.io/microservicesinitiative:72626
imagePullPolicy: Always
</code></pre>
<p>Deployment file
<a href="https://i.stack.imgur.com/kr8tB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kr8tB.png" alt="enter image description here"></a></p>
<p>Command results of
<code>helm upgrade resources-dev resources --set image.tag=72615 --reuse-values</code>:
<a href="https://i.stack.imgur.com/o4sAf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o4sAf.png" alt="enter image description here"></a>
Command results of
<code>helm upgrade resources-dev resources --set-string image.tag=72615</code>:
<a href="https://i.stack.imgur.com/5KALa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5KALa.png" alt="enter image description here"></a></p>
|
<p>The issue is identified: it's not with the <code>--set</code> flag but with the directory structure I have for the charts.</p>
<p><a href="https://i.stack.imgur.com/VzdMh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VzdMh.png" alt="enter image description here"></a></p>
<p>while executing the command </p>
<blockquote>
<p>helm upgrade resources-dev resources --set image.tag=72615</p>
</blockquote>
<p>one level up, where the resources (charts) folder is, Helm looks for <code>image.tag</code> in the "Values.yaml" file of the resources folder and not in the "Values.yaml" file of the backend subchart, and thus the tags are not replaced.</p>
<p>Executing the below command with <strong>backend.image.tag</strong> (the value prefixed with the subchart name) worked:
<code>helm upgrade resources-dev resources --install --set backend.image.tag=72615</code></p>
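<p>For completeness, a sketch of the equivalent change in the parent chart's values file; the exact keys are assumed from the directory layout above, so treat them as illustrative:</p>
<pre><code># resources/values.yaml (parent chart)
backend:
  image:
    repository: aksresourcesapi.azurecr.io/microservicesinitiative
    tag: "72615"
</code></pre>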
|
<p>I am looking for a programmatic way to get available Kubernetes versions in AWS EKS. Something similar to the following Azure CLI command:</p>
<pre><code>az aks get-versions --location eastus --output table
</code></pre>
|
<p>As mentioned earlier, there is no API that explicitly returns the list of Kubernetes versions available in AWS EKS.
However, there is a somewhat hacky way to get this by describing all available add-on versions and getting the K8s versions they are compatible with.</p>
<p>I guess it would be a fair assumption that all available K8s versions in EKS would be compatible with some add-on or the other. In which case, the below CLI command will return the list of available Kubernetes versions present in EKS which can be used.</p>
<pre><code>aws eks describe-addon-versions | jq -r ".addons[] | .addonVersions[] | .compatibilities[] | .clusterVersion" | sort | uniq
</code></pre>
<p>The command gets all add-ons for EKS and each add-on's compatible versions, and then uses the jq utility to get the unique Kubernetes versions.</p>
|
<p>I would like <code>kubectl config get-contexts</code> to show all, or any arbitrary subset, of the columns shown in default output.</p>
<p>Currently, <code>kubectl config get-contexts</code> shows <code>CURRENT NAME CLUSTER AUTHINFO</code> and <code>NAMESPACE</code>. On my terminal, that's a total of 221 columns, with <code>NAME</code>, <code>CLUSTER</code>, and <code>AUTHINFO</code> being identical for all contexts.</p>
<p><code>kubectl config get-contexts</code> documentation shows only one output option: <code>-o=name</code>. Attempts to override this with <code>-o=custom-columns="CURRENT:.metadata.current,NAME:.metadata.name"</code> (for example) result in an error.</p>
<p>Am I doing something wrong or is the <code>custom-columns</code> option that is common to <code>kubectl get</code> a missing feature?</p>
<p><strong>Update:</strong> maintainers decided that there was no clean way of implementing output options; see <a href="https://github.com/kubernetes/kubectl/issues/1052" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/1052</a></p>
|
<p>As indicated by the error message:</p>
<pre><code>error: output must be one of '' or 'name'
</code></pre>
<p>and described in <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong-" rel="nofollow noreferrer">the docs</a>:</p>
<pre><code>output o Output format. One of: name
</code></pre>
<p>only the value of <code>name</code> (or the default empty value) can be used as the output format for <code>kubectl config get-contexts</code>; the <a href="https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns" rel="nofollow noreferrer">custom-columns</a> option is not supported for this command.</p>
<p>The other option that you have left is to list the current context with:</p>
<pre><code>kubectl config current-context
</code></pre>
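<p>If you only need an arbitrary subset of fields, one workaround (a sketch, not a replacement for the missing feature) is to read them from the kubeconfig itself via <code>kubectl config view</code> with a jsonpath expression:</p>
<pre><code># Print context name and namespace, tab-separated, one context per line
kubectl config view -o jsonpath='{range .contexts[*]}{.name}{"\t"}{.context.namespace}{"\n"}{end}'
</code></pre>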
|
<p>I've set up a Kubernetes (1.17.11) cluster (Azure), and I've installed nginx-ingress-controller via</p>
<p><code>helm install nginx-ingress --namespace z1 stable/nginx-ingress --set controller.publishService.enabled=true</code></p>
<p>The setup seems to be OK and it's working, but every now and then it fails. When I check the running pods (<code>kubectl get pod -n z1</code>) I see a number of restarts for the ingress-controller pod.</p>
<p>I thought maybe there is a huge load, so it would be better to increase replicas, so I ran <code>helm upgrade --namespace z1 stable/ingress --set controller.replicasCount=3</code>, <s>but still only one of the pods (out of 3) seems to be in use</s> and one sometimes fails due to CrashLoopBackOff (not constantly).</p>
<p>One thing worth mentioning: the installed nginx-ingress version is 0.34.1, but 0.41.2 is also available. Do you think the upgrade will help, and how can I upgrade the installed version to the new one (AFAIK <code>helm upgrade</code> won't replace the chart with a newer version, but I may be wrong)?</p>
<p>Any idea?</p>
<p><code>kubectl describe pod </code> result:</p>
<pre><code>Name: nginx-ingress-controller-58467bccf7-jhzlx
Namespace: z1
Priority: 0
Node: aks-agentpool-41415378-vmss000000/10.240.0.4
Start Time: Thu, 19 Nov 2020 09:01:30 +0100
Labels: app=nginx-ingress
app.kubernetes.io/component=controller
component=controller
pod-template-hash=58467bccf7
release=nginx-ingress
Annotations: <none>
Status: Running
IP: 10.244.1.18
IPs:
IP: 10.244.1.18
Controlled By: ReplicaSet/nginx-ingress-controller-58467bccf7
Containers:
nginx-ingress-controller:
Container ID: docker://719655d41c1c8cdb8c9e88c21adad7643a44d17acbb11075a1a60beb7553e2cf
Image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
Image ID: docker-pullable://us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=z1/nginx-ingress-default-backend
--election-id=ingress-controller-leader
--ingress-class=nginx
--configmap=z1/nginx-ingress-controller
State: Running
Started: Thu, 19 Nov 2020 09:54:07 +0100
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Thu, 19 Nov 2020 09:50:41 +0100
Finished: Thu, 19 Nov 2020 09:51:12 +0100
Ready: True
Restart Count: 8
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-controller-58467bccf7-jhzlx (v1:metadata.name)
POD_NAMESPACE: z1 (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-7rmtk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nginx-ingress-token-7rmtk:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-token-7rmtk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned z1/nginx-ingress-controller-58467bccf7-jhzlx to aks-agentpool-41415378-vmss000000
Normal Killing 58m kubelet, aks-agentpool-41415378-vmss000000 Container nginx-ingress-controller failed liveness probe, will be restarted
Warning Unhealthy 57m (x4 over 58m) kubelet, aks-agentpool-41415378-vmss000000 Readiness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 57m kubelet, aks-agentpool-41415378-vmss000000 Readiness probe failed: Get http://10.244.1.18:10254/healthz: read tcp 10.244.1.1:54126->10.244.1.18:10254: read: connection reset by peer
Normal Pulled 57m (x2 over 59m) kubelet, aks-agentpool-41415378-vmss000000 Container image "us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1" already present on machine
Normal Created 57m (x2 over 59m) kubelet, aks-agentpool-41415378-vmss000000 Created container nginx-ingress-controller
Normal Started 57m (x2 over 59m) kubelet, aks-agentpool-41415378-vmss000000 Started container nginx-ingress-controller
Warning Unhealthy 57m kubelet, aks-agentpool-41415378-vmss000000 Liveness probe failed: Get http://10.244.1.18:10254/healthz: dial tcp 10.244.1.18:10254: connect: connection refused
Warning Unhealthy 56m kubelet, aks-agentpool-41415378-vmss000000 Liveness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 23m (x10 over 58m) kubelet, aks-agentpool-41415378-vmss000000 Liveness probe failed: Get http://10.244.1.18:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 14m (x6 over 57m) kubelet, aks-agentpool-41415378-vmss000000 Readiness probe failed: Get http://10.244.1.18:10254/healthz: dial tcp 10.244.1.18:10254: connect: connection refused
Warning BackOff 9m28s (x12 over 12m) kubelet, aks-agentpool-41415378-vmss000000 Back-off restarting failed container
Warning Unhealthy 3m51s (x24 over 58m) kubelet, aks-agentpool-41415378-vmss000000 Readiness probe failed: Get http://10.244.1.18:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>Some logs from the controller</p>
<pre><code> NGINX Ingress controller
Release: v0.34.1
Build: v20200715-ingress-nginx-2.11.0-8-gda5fa45e2
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.1
-------------------------------------------------------------------------------
I1119 08:54:07.267185 6 main.go:275] Running in Kubernetes cluster version v1.17 (v1.17.11) - git (clean) commit 3a3612132641768edd7f7e73d07772225817f630 - platform linux/amd64
I1119 08:54:07.276120 6 main.go:87] Validated z1/nginx-ingress-default-backend as the default backend.
I1119 08:54:07.430459 6 main.go:105] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
W1119 08:54:07.497816 6 store.go:659] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I1119 08:54:07.617458 6 nginx.go:263] Starting NGINX Ingress controller
I1119 08:54:08.748938 6 backend_ssl.go:66] Adding Secret "z1/z1-tls-secret" to the local store
I1119 08:54:08.801385 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"z2", Name:"zalenium", UID:"8d395a18-811b-4852-8dd5-3cdd682e2e6e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"13667218", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress z2/zalenium
I1119 08:54:08.801908 6 backend_ssl.go:66] Adding Secret "z2/z2-tls-secret" to the local store
I1119 08:54:08.802837 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"z1", Name:"zalenium", UID:"244ae6f5-897e-432e-8ec3-fd142f0255dc", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"13667219", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress z1/zalenium
I1119 08:54:08.839946 6 nginx.go:307] Starting NGINX process
I1119 08:54:08.840375 6 leaderelection.go:242] attempting to acquire leader lease z1/ingress-controller-leader-nginx...
I1119 08:54:08.845041 6 controller.go:141] Configuration changes detected, backend reload required.
I1119 08:54:08.919965 6 status.go:86] new leader elected: nginx-ingress-controller-58467bccf7-5thwb
I1119 08:54:09.084800 6 controller.go:157] Backend successfully reloaded.
I1119 08:54:09.096999 6 controller.go:166] Initial sync, sleeping for 1 second.
</code></pre>
|
<p>As OP confirmed in comment section, I am posting solution for this issue.</p>
<blockquote>
<p>Yes I tried and I replaced the deprecated version with the latest version, it completely solved the nginx issue.</p>
</blockquote>
<p>In this setup OP used a <a href="https://helm.sh/" rel="nofollow noreferrer">helm chart</a> from the <code>stable</code> repository. On the GitHub page dedicated to <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">stable/nginx-ingress</a> there is information that this specific chart is <strong>DEPRECATED</strong>. It was updated 12 days ago, so this is a fresh change.</p>
<blockquote>
<p>This chart is deprecated as we have moved to the upstream repo ingress-nginx The chart source can be found here: <a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx</a></p>
</blockquote>
<p>The <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">Nginx Ingress Controller</a> deployment guide for the <code>Helm</code> option already uses the new repository.</p>
<p>To list the repositories currently configured, use the command <code>$ helm repo list</code>.</p>
<pre><code>$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
ingress-nginx https://kubernetes.github.io/ingress-nginx
</code></pre>
<p>If you don't have new <code>ingress-nginx</code> repository, you have to:</p>
<ul>
<li>Add new repository:
<ul>
<li><code>$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx</code></li>
</ul>
</li>
<li>Update it:
<ul>
<li><code>$ helm repo update</code></li>
</ul>
</li>
<li>Deploy <code>Nginx Ingress Controller</code>:
<ul>
<li><code>$ helm install my-release ingress-nginx/ingress-nginx </code></li>
</ul>
</li>
</ul>
<p><strong>Disclaimer!</strong></p>
<p>Above commands are specific to <code>Helm v3</code>.</p>
|
<p>When I googled, there were some answers saying that in Kubernetes, 100m CPU means that you are going to use 1/10 of the time of one CPU core, and 2300m CPU means that you are going to use 2 cores fully and 3/10 of the time of another CPU core. Is that correct?</p>
<p>I just wonder if multiple threads can run in parallel on multiple cores at the same time when using under 1000m of CPU requests in Kubernetes.</p>
|
<p>Regarding the first part, it's true that you can use a fraction of a CPU core to run your tasks.
In the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Kubernetes documentation - Managing Resources for Containers</a> you can find information that you can specify minimal resource requirements - <code>requests</code> - to run a pod, and <code>limits</code> which cannot be exceeded.</p>
<p>It's well described in <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">this article</a></p>
<blockquote>
<p>Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.</p>
</blockquote>
<p><strong>CPU Requests/Limits:</strong></p>
<blockquote>
<p>CPU resources are defined in millicores. If your container needs two full cores to run, you would put the value <code>2000m</code>. If your container only needs ¼ of a core, you would put a value of <code>250m</code>.
One thing to keep in mind about CPU requests is that if you put in a value larger than the core count of your biggest node, your pod will never be scheduled.</p>
</blockquote>
<p><strong>Regarding the second part, you can use multiple threads in parallel.</strong> A good example of this is a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Kubernetes Job</a>.</p>
<blockquote>
<p>A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.</p>
</blockquote>
<p>See especially the part about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs" rel="nofollow noreferrer">Parallel execution for Jobs</a>.</p>
<p>You can also check <a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">Parallel Processing using Expansions</a> to run multiple <code>Jobs</code> based on a common template. <strong>You can use this approach to process batches of work in parallel.</strong> In this documentation you can find an example with a description of how it works.</p>
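<p>As a small illustration of the idea (the image, command and counts are placeholders), a Job that runs several pods in parallel while each pod stays well under one full core:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work
spec:
  parallelism: 3      # up to 3 pods running at the same time
  completions: 9      # the Job finishes after 9 successful pods
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
</code></pre>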
|
<p>When receiving UDP packets, I want to get the source IP and source port from the packet, and I expect them not to change if the packets come from the same source (same IP and same port). My packets are sent through kube-proxy in iptables mode, but when my packets pause for several seconds, the source port changes, and setting <code>sessionAffinity</code> to "ClientIP" doesn't help. It seems that a UDP session can only be kept for several seconds. Is there any way to extend the session time, or to keep the port the same when the sender's IP and port haven't changed?</p>
|
<p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already mentioned in the comments, you can try to use the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>. The <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">Exposing TCP and UDP services</a> documentation says:</p>
<blockquote>
<p>Ingress does not support TCP or UDP services. For this reason this
Ingress controller uses the flags <code>--tcp-services-configmap</code> and
<code>--udp-services-configmap</code> to point to an existing config map where
the key is the external port to use and the value indicates the
service to expose using the format: <code><namespace/service name>:<service port>:[PROXY]:[PROXY]</code></p>
</blockquote>
<p>The example shows how to expose the service <code>kube-dns</code>, running in the namespace <code>kube-system</code> on port <code>53</code>, using port <code>53</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: ingress-nginx
data:
53: "kube-system/kube-dns:53"
</code></pre>
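<p>For the controller to pick up this ConfigMap, it has to be started with the corresponding flag. A sketch of the relevant part of the controller container's args (the namespace and ConfigMap name must match the one above):</p>
<pre><code>args:
  - /nginx-ingress-controller
  - --udp-services-configmap=ingress-nginx/udp-services
</code></pre>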
<p>If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
- name: proxied-tcp-9000
port: 9000
targetPort: 9000
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
|
<p>I'm checking whether the <code>resources.limits</code> key is provided in a Kubernetes deployment using OPA Rego code. Below is the code; I'm trying to fetch the <code>resources.limits</code> key and it always returns TRUE, regardless of whether resources are provided or not.</p>
<pre><code>package resourcelimits

violation[{"msg": msg}] {
    some container; input.request.object.spec.template.spec.containers[container]
    not container.resources.limits.memory
    msg := "Resources for the pod needs to be provided"
}
</code></pre>
|
<p>You can try something like this:</p>
<pre><code>import future.keywords.in
violation[{"msg": msg}] {
input.request.kind.kind == "Deployment"
some container in input.request.object.spec.template.spec.containers
not container.resources.limits.memory
msg := sprintf("Container '%v/%v' does not have memory limits", [input.request.object.metadata.name, container.name])
}
</code></pre>
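<p>To iterate on the rule locally before wiring it into the admission controller, you can evaluate it against a saved AdmissionReview JSON with <code>opa eval</code>; the file names are placeholders and this assumes the rule lives in the <code>resourcelimits</code> package as in the question:</p>
<pre><code>opa eval --format pretty \
  --data policy.rego \
  --input admission_review.json \
  'data.resourcelimits.violation'
</code></pre>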
|
<p>I'm trying to deploy a ReactJs app and an Express-GraphQL server through Kubernetes. But I'm having trouble setting up an ingress to route traffic to both services. Specifically I can no longer reach my back-end.</p>
<p>When I made the React front-end and Express back-end as separate services and exposed them, it ran fine. But now I'm trying to enable HTTPS and DNS, and route to both of them through an Ingress.</p>
<p>Here are my service yaml files</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bpmclient
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 5000
selector:
run: bpmclient
type: NodePort
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bpmserver
namespace: default
spec:
ports:
- port: 3090
protocol: TCP
targetPort: 3090
selector:
run: bpmserver
type: NodePort
</code></pre>
<p>and my Ingress...</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: bpm-nginx
annotations:
kubernetes.io/ingress.global-static-ip-name: bpm-ip
networking.gke.io/managed-certificates: bpmclient-cert
ingress.kubernetes.io/enable-cors: "true"
ingress.kubernetes.io/cors-allow-origin: "https://example.com"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /v2/*
backend:
serviceName: bpmserver
servicePort: 3090
- path: /*
backend:
serviceName: bpmclient
servicePort: 80
</code></pre>
<p>Through this setup I've been able to visit the client successfully using https. But I can't reach my back-end anymore through the client or just browsing to it. I'm getting a 502 server error. But I check the logs for the back-end pod and don't see anything besides 404 logs.</p>
<p>My front-end is reaching the back-end through example.com/v2/graphql. When I run it locally on my machine I go to localhost:3090/graphql. So I don't see why I'm getting a 404 if the routing is done correctly.</p>
|
<p>I see few things that might be wrong here:</p>
<ol>
<li><p>Ingress objects should be created in the same namespace as the services they route to. I see that you have specified <code>namespace: default</code> in your services' YAMLs but not in the Ingress.</p></li>
<li><p>I don't know which version of the ingress controller you are using, but according to the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">documentation</a>, after 0.22.0:</p></li>
</ol>
<blockquote>
<p>ingress definitions using the annotation
nginx.ingress.kubernetes.io/rewrite-target are not backwards
compatible with previous versions. In Version 0.22.0 and beyond, any
substrings within the request URI that need to be passed to the
rewritten path must explicitly be defined in a capture group.</p>
</blockquote>
<ol start="3">
<li><code>path:</code> should be nested after <code>backend:</code>, and a capture group should be referenced in <code>nginx.ingress.kubernetes.io/rewrite-target</code> via a numbered placeholder like <code>$1</code></li>
</ol>
<p>So you should try something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: bpm-nginx
namespace: default
annotations:
kubernetes.io/ingress.global-static-ip-name: bpm-ip
networking.gke.io/managed-certificates: bpmclient-cert
ingress.kubernetes.io/enable-cors: "true"
ingress.kubernetes.io/cors-allow-origin: "https://example.com"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: bpmserver
servicePort: 3090
path: /v2/?(.*)
- backend:
serviceName: bpmclient
servicePort: 80
path: /?(.*)
</code></pre>
<p>Please let me know if that helped.</p>
|
<p>I'm able to create a GKE cluster using the golang container lib <a href="https://godoc.org/google.golang.org/api/container/v1" rel="nofollow noreferrer">here</a>.
Now for my golang k8s client to be able to deploy my k8s deployment files there, I need to get the kubeconfig from the GKE cluster. However I can't find the relevant API for that in the <strong>container</strong> lib above. Can anyone please point out what I am missing?</p>
|
<p>As per @Subhash's suggestion, I am posting the answer from <a href="https://stackoverflow.com/questions/56191900/is-there-a-golang-sdk-equivalent-of-gcloud-container-clusters-get-credentials/56192493#56192493">this</a> question:</p>
<blockquote>
<p>The GKE API does not have a call that outputs a kubeconfig file (or
fragment). The specific processing between fetching a full cluster
definition and updating the kubeconfig file are implemented in python
in the gcloud tooling. It isn't part of the Go SDK so you'd need to
implement it yourself. </p>
<p>You can also try using <code>kubectl config set-credentials</code> (see
<a href="https://github.com/ahmetb/kubernetes.github.io/blob/master/docs/user-guide/kubectl/kubectl_config_set-credentials.md" rel="nofollow noreferrer">this</a>) and/or see if you can vendor the libraries that implement
that function if you want to do it programmatically.</p>
</blockquote>
|
<p>I have recently gotten into Istio, and I'm trying to wrap my head around the gateway concept.</p>
<p>so fundamentally, I get what it is: an entryway into the service-mesh.</p>
<p>however what I don't understand is how best to use the gateways.</p>
<p>I have installed istio via helm on my k8s cluster, and ran through the bookinfo tutorial.</p>
<p>I created the <code>bookinfo-gateway</code>:</p>
<pre><code>spec:
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
selector:
istio: ingressgateway
</code></pre>
<p>and can access the service via the ingress-gateway created by istio </p>
<p>(found via <code>kubectl get svc -n istio-system</code>).</p>
<p>It seems the gateway I created is tied to the gateway <code>LOADBALANCER</code> created by istio via the <code>selector</code>. </p>
<p>I created a virtualservice, and pointed it to the bookinfo gateway:</p>
<pre><code>spec:
hosts:
- '*'
gateways:
- bookinfo-gateway
http:
- match:
....
</code></pre>
<p>What I don't understand is when/why I would create another gateway. I can also create ANOTHER virtualservice, and point it to the <code>bookinfo-gateway</code> as well.</p>
<p>So when would I create another Gateway? would it only be when I created another istio-ingress-gateway (one with a different IP)?</p>
|
<p>I am somewhat new to Istio as well. Here are a few things to keep in mind.</p>
<ol>
<li>The Istio Ingress Gateway by default lets nothing into the cluster.</li>
<li>You define a Gateway to let traffic in on the port(s) and protocol(s) you specify with it. The Gateway does NOT aim traffic at anything; it just allows it in.</li>
<li>To aim traffic from a Gateway definition at an actual Kubernetes service, you use a VirtualService (which is really a route). It is the VirtualService that connects a Gateway to a Kubernetes service and aims traffic at it when it meets certain criteria - in particular certain labels, or a certain host the traffic is coming from.</li>
<li>The Kubernetes Service is the stable IP load balancer in front of the application, which is physically deployed on one or more pods.</li>
</ol>
<p>So to clarify: the Istio Ingress Gateway is the single entry point into the cluster. Nothing comes in until you provide a Gateway. In the Gateway you specify a port and protocol, like HTTP on 80. This allows that traffic in, but it won't go anywhere yet.</p>
<p>Try not to think of the Gateway as another hop along the traffic flow. It's more a directive to the actual gateway, which is always the Istio Ingress Gateway. It just says: let this kind of traffic in on this port.</p>
<p>Now, if you notice, the VirtualService checks labels and, based on those labels, directs traffic to services, which are themselves selected by labels. So you might have more than one VirtualService using the same Gateway to connect to different services.</p>
<p>So I think of it this way: Gateways subdivide the traffic coming through the Istio Ingress Gateway by port and protocol. Again, they allow a certain type of traffic in but do not aim it. A VirtualService (route) always routes traffic that has been let in by a Gateway to one or more services based on labels.</p>
<p>I don't know if you can have two gateways using the same port and protocol.</p>
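<p>To make the "when would I create another Gateway" part concrete, here is a sketch of a second Gateway that lets different traffic in (HTTPS on 443 for a specific host) while still reusing the same istio-ingressgateway deployment via the selector; the host name and TLS secret are only placeholders:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: admin-gateway
spec:
  selector:
    istio: ingressgateway   # same Istio ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https-admin
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: admin-tls-cert   # placeholder TLS secret
    hosts:
    - "admin.example.com"
</code></pre>
<p>A VirtualService for that application would then reference <code>admin-gateway</code> instead of <code>bookinfo-gateway</code>, keeping the HTTP/80 and HTTPS/443 traffic separated by Gateway definitions while both still flow through the same ingress gateway pod.</p>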
|
<p>The <a href="https://www.weave.works/docs/net/latest/concepts/router-encapsulation/" rel="nofollow noreferrer">sleeve mode</a> of Weave Net allows adding nodes behind NAT to the mesh, e.g. machines in a company network without external IP.</p>
<p>When Weave Net is used with Kubernetes, such nodes can be added to the cluster. The only drawback (besides the performance compared to <a href="https://www.weave.works/docs/net/latest/tasks/manage/fastdp/" rel="nofollow noreferrer">fastdp</a>) seems to be that the Kubernetes API server can't reach the Kubelet port, so attaching to a Pod or getting logs doesn't work.</p>
<p>Is it somehow possible to work around this issue, e.g. by connecting to the Kubelet port of a NATed node through the weave network instead? </p>
|
<p>Taking into consideration how <code>kubectl exec</code> works and looking at the Weave Net documentation, it does not seem possible to fix this cluster connectivity problem with the Weave CNI.</p>
<p>Weave uses the underlying network for sending packets to the node. I can't find any information saying that it is supported to put a cluster node behind NAT. More details can be found <a href="https://www.weave.works/docs/net/latest/concepts/fastdp-how-it-works/" rel="nofollow noreferrer">here</a></p>
<p>Therefore it is impossible to work around this issue as you suggested.</p>
<p>I hope it helps.</p>
|
<p>I'm currently learning Kubernetes and started to deploy an ELK stack on a minikube cluster (running on a <strong>linux EC2</strong> instance). Though I was able to run all the objects successfully, I'm not able to access any of the tools from my <strong>windows browser</strong>. I'm looking for some input on how to access all the exposed ports below from my windows browser.</p>
<p>Cluster details:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/elasticsearch-deployment-5c7d5cb5fb-g55ft 1/1 Running 0 3m43s
pod/kibana-deployment-76d8744864-ddx4h 1/1 Running 0 3m43s
pod/logstash-deployment-56849fcd7b-bjlzf 1/1 Running 0 3m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-service ClusterIP XX.XX.XX.XX <none> 9200/TCP 3m43s
service/kibana-service ClusterIP XX.XX.XX.XX <none> 5601/TCP 3m43s
service/kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5m15s
service/logstash-service ClusterIP 10.XX.XX.XX <none> 9600/TCP,5044/TCP 3m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch-deployment 1/1 1 1 3m43s
deployment.apps/kibana-deployment 1/1 1 1 3m43s
deployment.apps/logstash-deployment 1/1 1 1 3m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-deployment-5c7d5cb5fb 1 1 1 3m43s
replicaset.apps/kibana-deployment-76d8744864 1 1 1 3m43s
replicaset.apps/logstash-deployment-56849fcd7b 1 1 1 3m43s
</code></pre>
<p>Note: I also tried to run all the above services as NodePort, and using the <code>minikube ip</code> I was able to hit curl commands to check the status of the application, but I am still not able to access any of it via my browser</p>
|
<p>Generally, if you want to expose anything outside the cluster you need to use a <code>service type</code> of
<a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>, or use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>. If you check the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">Minikube documentation</a>, you will find that <code>Minikube</code> supports all those types.</p>
<p>If you want to go with <code>LoadBalancer</code>, you can use <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel" rel="nofollow noreferrer">minikube tunnel</a>.</p>
<p>When you are using a <code>cloud environment</code> and non-standard ports, you should also check the <code>firewall rules</code> to make sure the port/traffic is open.</p>
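<p>As a quick way to test from the Windows browser, you could also forward a service onto all interfaces of the EC2 host and open the EC2 public IP (a sketch; it assumes the EC2 security group allows the chosen port, and the service name is taken from your output):</p>
<pre><code>kubectl port-forward --address 0.0.0.0 service/kibana-service 5601:5601
</code></pre>
<p>Then browse to <code>http://<EC2-public-IP>:5601</code> from Windows.</p>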
<p>Regarding the error from the comment, it seems that you have an issue with <code>Kibana</code> port <code>5601</code>.</p>
<p>Did you check similar threads like <a href="https://discuss.elastic.co/t/curl-7-failed-to-connect-to-1-x-x-x-port-5601-connection-refused/146848" rel="nofollow noreferrer">this</a> or <a href="https://discuss.elastic.co/t/kibana-port-5601-connection-refused/71809" rel="nofollow noreferrer">this</a>? If this won't be helpful, please provide Kibana configuration.</p>
|
<p>First of all: I readed other posts like <a href="https://stackoverflow.com/questions/51946393/kubernetes-pod-warning-1-nodes-had-volume-node-affinity-conflict">this</a>.</p>
<p>My staging cluster is allocated on AWS using <strong>spot instances</strong>.</p>
<p>I have arround 50+ pods (runing diferent services / products) and 6 StatefulSets.</p>
<p>I created the StatefulSets this way:</p>
<p>OBS: I do not have PVs and PVCs created manually, they are being created from the StatefulSet</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
serviceName: "redis"
replicas: 1
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:alpine
imagePullPolicy: Always
ports:
- containerPort: 6379
name: client
volumeMounts:
- name: data
mountPath: /data
readOnly: false
volumeClaimTemplates:
- metadata:
name: data
labels:
name: redis-gp2
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
app: redis
spec:
ports:
- port: 6379
name: redis
targetPort: 6379
selector:
app: redis
type: NodePort
</code></pre>
<p>I do have node and pod autoscalers configured.</p>
<p>In the past week, after deploying some extra micro-services during the "usage peak", the node autoscaler triggered.</p>
<p>During the scale down some pods(StatefulSets) crashed with the error <code>node(s) had volume node affinity conflict</code>.</p>
<p>My first reaction was to delete and "recreate" the PVs/PVCs with high priority. That "fixed" the pending pods at the time.</p>
<p>Today I forced another scale-up, so I was able to check what was happening.</p>
<p>The problem occurs during the scale up and takes a long time to go back to normal (+/- 30 min), even after the scaling down.</p>
<p>Describe Pod:</p>
<pre><code>Name: redis-0
Namespace: ***-staging
Priority: 1000
Priority Class Name: prioridade-muito-alta
Node: ip-***-***-***-***.sa-east-1.compute.internal/***.***.*.***
Start Time: Mon, 03 Jan 2022 09:24:13 -0300
Labels: app=redis
controller-revision-hash=redis-6fd5f59c5c
statefulset.kubernetes.io/pod-name=redis-0
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: ***.***.***.***
IPs:
IP: ***.***.***.***
Controlled By: StatefulSet/redis
Containers:
redis:
Container ID: docker://4928f38ed12c206dc5915c863415d3eba98b9592f2ab5c332a900aa2fa2cef64
Image: redis:alpine
Image ID: docker-pullable://redis@sha256:4bed291aa5efb9f0d77b76ff7d4ab71eee410962965d052552db1fb80576431d
Port: 6379/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 03 Jan 2022 09:24:36 -0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ngc7q (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-redis-0
ReadOnly: false
default-token-***:
Type: Secret (a volume populated by a Secret)
SecretName: *****
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 59m (x4 over 61m) default-scheduler 0/7 nodes are available: 1 Too many pods, 1 node(s) were unschedulable, 5 node(s) had volume node affinity conflict.
Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 Too many pods, 1 node(s) had taint {ToBeDeletedByClusterAutoscaler: 1641210902}, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) had volume node affinity conflict.
Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 node(s) had taint {ToBeDeletedByClusterAutoscaler: 1641210902}, that the pod didn't tolerate, 1 node(s) were unschedulable, 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 57m (x2 over 58m) default-scheduler 0/7 nodes are available: 2 Too many pods, 2 node(s) were unschedulable, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 50m (x9 over 57m) default-scheduler 0/6 nodes are available: 1 node(s) were unschedulable, 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 48m (x2 over 49m) default-scheduler 0/5 nodes are available: 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 35m (x10 over 48m) default-scheduler 0/5 nodes are available: 1 Too many pods, 4 node(s) had volume node affinity conflict.
Normal NotTriggerScaleUp 30m (x163 over 58m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had volume node affinity conflict
Warning FailedScheduling 30m (x3 over 33m) default-scheduler 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
Normal SuccessfulAttachVolume 29m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-23168a78-2286-40b7-aa71-194ca58e0005"
Normal Pulling 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Pulling image "redis:alpine"
Normal Pulled 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Successfully pulled image "redis:alpine" in 3.843908086s
Normal Created 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Created container redis
Normal Started 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Started container redis
</code></pre>
<p>PVC:</p>
<pre><code>Name: data-redis-0
Namespace: ***-staging
StorageClass: gp2
Status: Bound
Volume: pvc-23168a78-2286-40b7-aa71-194ca58e0005
Labels: app=redis
name=redis-gp2
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
volume.kubernetes.io/selected-node: ip-***-***-***-***.sa-east-1.compute.internal
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: redis-0
Events: <none>
</code></pre>
<p>PV:</p>
<pre><code>Name: pvc-23168a78-2286-40b7-aa71-194ca58e0005
Labels: failure-domain.beta.kubernetes.io/region=sa-east-1
failure-domain.beta.kubernetes.io/zone=sa-east-1b
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: ***-staging/data-redis-0
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: failure-domain.beta.kubernetes.io/zone in [sa-east-1b]
failure-domain.beta.kubernetes.io/region in [sa-east-1]
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://sa-east-1b/vol-061fd23a65185d42c
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
</code></pre>
<p>This happend in 4 of my 6 StatefulSets.</p>
<p><strong>Question:</strong></p>
<p>If I create PVs and PVCs manually setting:</p>
<pre><code>volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- sa-east-1
</code></pre>
<p>will the scale up/down stop messing with the StatefulSets?</p>
<p>If not, what can I do to avoid this problem?</p>
|
<p>First of all, it's better to move the <code>allowedTopologies</code> stanza to the <code>StorageClass</code>. It's more flexible because you can create multiple zone-specific storage classes.</p>
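<p>A sketch of what that could look like for your gp2 EBS volumes (the zone value is taken from the PV in your question; the exact topology keys are an assumption, since newer clusters use <code>topology.kubernetes.io/zone</code> instead of the beta labels):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-sa-east-1b
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - sa-east-1b
</code></pre>
<p>You would then reference it from the <code>volumeClaimTemplates</code> with <code>storageClassName: gp2-sa-east-1b</code>. <code>WaitForFirstConsumer</code> also delays volume creation until the pod is scheduled, so new volumes are created in the zone the pod lands in.</p>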
<p>And yes, this should obviously solve your current problem and create another. You are basically trading high availability for cost/convenience. It's totally up to you, there is no one-size-fits-all recommendation here, but I just want to make sure you know the options.</p>
<p>You may still have volumes not tied to specific zones if you always have enough node capacity in every AZ. This can be <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler" rel="nofollow noreferrer">achieved</a> using cluster-autoscaler. Generally, you create separate node groups per each AZ and autoscaler will do the rest.</p>
<p>Another option is to run distributed storage like Ceph or Portworx, which allows mounting volumes from another AZ. That will greatly increase your cross-AZ traffic costs and it needs to be maintained properly, but I know companies that do that.</p>
|
<p>Would there be any reason why a pod I'm trying to run on a k8s cluster stays in the 'Completed' status forever but is never in the ready status ('Ready 0/1') ... although the core-dns, kube-proxy, etc. pods are running successfully under each node in the nodepool assigned to the k8s cluster ... all the worker nodes seem to be in a healthy state</p>
|
<p>This sounds like the pod lifecycle has ended, probably because your pod has finished the task it is meant for.</p>
<p>Something like the next example will not do anything: it will be created successfully and started, it will pull the image, and then it will be marked as completed.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: test-pod
image: busybox
resources:
</code></pre>
<p>Here is how it will look:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 0/1 Completed 1 3s
default testapp-1 0/1 Pending 0 92m
default web-0 1/1 Running 0 3h2m
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 123m
kube-system fluentd-gcp-v3.1.1-pt8zp 2/2 Running 0 3h2m
kube-system fluentd-gcp-v3.1.1-xwhkn 2/2 Running 5 172m
kube-system fluentd-gcp-v3.1.1-zh29w 2/2 Running 0 7d19h
</code></pre>
<p>For this case, I recommend you check your YAML: what kind of pod are you running, and what is it meant for?</p>
<p>Also, for testing purposes, you can add a command and args to keep it running.</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
</code></pre>
<p>something like:</p>
<pre><code>args: ["-c", "while true; do echo hello; sleep 10;done"]
</code></pre>
<p>The yaml with commands added would look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    resources:
    command: ["/bin/sh","-c"]
    args: ["while true; do echo hello; sleep 10;done"]
</code></pre>
<p>Here is how it will look:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 1/1 Running 0 8m56s
default testapp-1 0/1 Pending 0 90m
default web-0 1/1 Running 0 3h
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 122m
</code></pre>
<p>Another thing that will help is a <code>kubectl describe pod $POD_NAME</code>, to analyze this further.</p>
|
<p>I need to make available a mysql database hosted in namespace A to an application that is deployed in namespace 'B'. </p>
<p>Thus far, I've tried a few methods, most promising of which was using a combination of Endpoint and service like so:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: mysql
spec:
ports:
- port: 3306
targetPort: 31234
---
kind: Endpoints
apiVersion: v1
metadata:
name: mysql
subsets:
- addresses:
- ip: 12.34.567.8
ports:
- port: 31234
</code></pre>
<p>Meanwhile, one instance of a mysql container has already been spun up in namespace 'A' and exposed at 31234 via a NodePort configuration.</p>
<p>The Application has an init container, init-mysql that pings the mysql instance, with host name 'mysql' and correct credentials.
I expected the Application to start up as usual, but it was stuck in pod initializing state.
When I tried to check the logs of the init-mysql, I got only the following:</p>
<pre><code>Warning: Using a password on the command line interface can be insecure.
</code></pre>
<p>The commands that are in use for the initContainer 'init-mysql' are:</p>
<pre><code> command:
- sh
- -c
- 'mysqladmin ping -hmysql -P3306 -uusername -ppassword'
</code></pre>
|
<p>This question was asked <a href="https://stackoverflow.com/questions/37221483/service-located-in-another-namespace">here</a></p>
<p>I am posting the accepted answer from Paul (community wiki) for better visibility:</p>
<blockquote>
<p>I stumbled over the same issue and found a nice solution which does
not need any static ip configuration:</p>
<p>You can access a service via it's <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS name</a> (as mentioned by you):
<em>servicename.namespace.svc.cluster.local</em></p>
<p>You can use that DNS name to reference it in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">another namespace via a
local service</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: service-y
namespace: namespace-a
spec:
type: ExternalName
externalName: service-x.namespace-b.svc.cluster.local
ports:
- port: 80
</code></pre>
</blockquote>
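<p>Adapted to the setup in the question, a minimal sketch could look like this (the namespace names <code>a</code> and <code>b</code> are placeholders for your namespaces 'A' and 'B', and it assumes a ClusterIP Service named <code>mysql</code> already exists in namespace A):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: mysql
  namespace: b
spec:
  type: ExternalName
  externalName: mysql.a.svc.cluster.local
  ports:
  - port: 3306
</code></pre>
<p>With that in place, the init container's <code>-hmysql</code> hostname in namespace B resolves via the CNAME to the service in namespace A.</p>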
|
<p>I'm managing a small Kubernetes cluster on Azure with Postgres. This cluster is accessible through an Nginx controller with a static IP.</p>
<p>The ingress routes to a ClusterIP, to a pod which uses a Postgres instance. This Postgres instance has all IPs blocked, with a few exceptions for my own IP and the static IP of the ingress.
This worked well until I pushed an update this morning, where to my amazement I see in the logs an error that the pod's IP address differs from the static ingress IP, and it gets a permission error because of it.</p>
<p>My question: how is it possible that my pod, with ClusterIP, has a different outer IP address than the ingress static IP I assigned it?
Note that the pod is easily reached, through the Ingress.</p>
|
<p><code>Ingresses</code> and <code>Services</code> handle only incoming pod traffic. The IP of a pod's outgoing traffic depends on the Kubernetes networking implementation you use. By default, all outgoing connections from pods are source NAT-ed at the node level, which means the pod's traffic will appear to come from the IP of the node it runs on. So you might want to allow the worker node IP addresses in your Postgres.</p>
|
<p>I'm trying to template variables from a map inside the values.yaml into my final Kubernetes ConfigMap YAML.</p>
<p>I've read through <a href="https://github.com/helm/helm/issues/2492" rel="noreferrer">https://github.com/helm/helm/issues/2492</a> and <a href="https://helm.sh/docs/chart_template_guide/" rel="noreferrer">https://helm.sh/docs/chart_template_guide/</a> but can't seem to find an answer.</p>
<p>For some context, this is roughly what I'm trying to do:</p>
<p><strong>values.yaml</strong></p>
<pre><code>config:
key1: value
key2: value-{{ .Release.Name }}
</code></pre>
<p><strong>configmap.yaml</strong></p>
<pre><code>kind: ConfigMap
data:
config-file: |
{{- range $key, $value := .Values.config }}
{{ $key }} = {{ $value }}
{{- end }}
</code></pre>
<p>Where the desired output with would be:</p>
<p><strong>helm template --name v1 mychart/</strong></p>
<pre><code>kind: ConfigMap
data:
config-file: |
key1 = value
key2 = value-v1
</code></pre>
<p>I've tried a few variations using template functions and pipelining, but to no avail:</p>
<pre><code>{{ $key }} = {{ tpl $value . }}
{{ $key }} = {{ $value | tpl . }}
{{ $key }} = {{ tpl $value $ }}
</code></pre>
|
<p>The above can also be made to work in this way:</p>
<p><strong>values.yaml</strong></p>
<pre><code>config:
key1: "value"
key2: "value-{{ .Release.Name }}"
</code></pre>
<p><strong>configmap.yaml</strong></p>
<pre><code>kind: ConfigMap
data:
config-file: |
{{- range $key, $value := .Values.config }}
{{ $key }} = {{ tpl $value $ }}
{{- end }}
</code></pre>
<p>What I changed: I put the values in quotes in <code>values.yaml</code> and used the <code>tpl</code> template function in the ConfigMap.</p>
|
<p>Trying to pull Docker images to a s390x architecture (available as Hyperprotect VS on IBM public Cloud) and the web based dockerhub search interface doesn't really have a way to only list the specific tags where a Docker image exists for a particular architecture. </p>
<p>I tried using <code>docker pull</code>, <code>docker search</code>, and <code>docker manifest</code>, along with some of the "experimental" features. If a Docker image exists, the command will pull it (for example <code>docker pull node:8.11.2</code>), but what if I wanted to see which Node images actually were in dockerhub (or any other repository for that matter) for the s390x, arm, ppcle64 architectures?</p>
<p>Ideas anyone?</p>
<pre><code>$ docker search node
docker pull node:8.11.2-alpine
8.11.2-alpine: Pulling from library/node
no matching manifest for unknown in the manifest list entries
</code></pre>
|
<p>I am posting the answer from <a href="https://stackoverflow.com/questions/31251356/how-to-get-a-list-of-images-on-docker-registry-v2">this question</a>:</p>
<blockquote>
<p>For the latest (as of 2015-07-31) version of Registry V2, you can get
<a href="https://registry.hub.docker.com/u/distribution/registry/" rel="nofollow noreferrer">this image</a>
from DockerHub:</p>
<pre><code>docker pull distribution/registry:master
</code></pre>
<p>List all repositories (effectively images):</p>
<pre><code>curl -X GET https://myregistry:5000/v2/_catalog
> {"repositories":["redis","ubuntu"]}
</code></pre>
<p>List all tags for a repository:</p>
<pre><code>curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
> {"name":"ubuntu","tags":["14.04"]}
</code></pre>
</blockquote>
|
<p>I've deployed my bitnami/mongodb helm chart:</p>
<pre><code>helm install mongodb bitnami/mongodb \
--set architecture="replicaset" \
--set auth.enabled=false
</code></pre>
<p>Services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-arbiter-headless ClusterIP None <none> 27017/TCP 30m
mongodb-headless ClusterIP None <none> 27017/TCP 30m
</code></pre>
<p>Both nodes are accessible behind <code>mongodb-headless</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>Name: mongodb-headless
Namespace: salut
Labels: app.kubernetes.io/component=mongodb
app.kubernetes.io/instance=mongodb
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=mongodb
helm.sh/chart=mongodb-10.30.11
Annotations: meta.helm.sh/release-name: mongodb
meta.helm.sh/release-namespace: salut
Selector: app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
Type: ClusterIP
IP Families: <none>
IP: None
IPs: None
Port: mongodb 27017/TCP
TargetPort: mongodb/TCP
Endpoints: 10.42.0.9:27017,10.42.2.5:27017 <<<<<<<<<<<<<<<<<<<<<<<<<<
Session Affinity: None
Events: <none>
</code></pre>
<p>I've created this Traefik <code>IngressRoute</code> in order to get access to my deployed replicaset mongo:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: mongodb
spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host(`mongodb.localhost`)
services:
- kind: Service
name: mongodb-headless
port: 27017
</code></pre>
<p>I need to get access from my host to this deployed replicaset, but I'm not able to reach it:</p>
<pre><code>mongo mongodb://mongodb.localhost:80/mpi
MongoDB shell version v5.0.5
connecting to: mongodb://mongodb.localhost:80/mpi?compressors=disabled&gssapiServiceName=mongodb
...
...
...
...
waiting, but never reached
</code></pre>
<p>Any ideas?</p>
|
<p><code>IngressRoute</code> is for HTTP services. MongoDB is a TCP service so you should use <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp" rel="nofollow noreferrer"><code>IngressRouteTCP</code></a> instead.</p>
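<p>A minimal sketch of what that could look like (the <code>mongo</code> entrypoint is an assumption: you need a dedicated TCP entrypoint in Traefik's static configuration, e.g. <code>--entrypoints.mongo.address=:27017</code>, and <code>HostSNI(`*`)</code> is required for non-TLS TCP routing):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb
spec:
  entryPoints:
    - mongo
  routes:
  - match: HostSNI(`*`)
    services:
    - name: mongodb-headless
      port: 27017
</code></pre>
<p>You would then connect with <code>mongo mongodb://mongodb.localhost:27017/mpi</code> (or whatever port that entrypoint listens on) instead of going through the HTTP entrypoint.</p>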
|
<p>I've read through all the docs and a few SO posts and can't find an answer to this question:</p>
<p>Where does minikube persist its persistent volumes in my local mac filing system?</p>
<p>Thanks</p>
|
<p>First of all, keep in mind that Kubernetes is running in a <a href="https://minikube.sigs.k8s.io/docs/" rel="noreferrer">Minikube</a> cluster. <code>Minikube</code> itself runs in a virtual machine, so all data is stored in this VM, not on your macOS filesystem.</p>
<p>When you want to point to the exact place where you would like to save this data in Kubernetes, you can choose between:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostpath</a></li>
</ul>
<blockquote>
<p>A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a></li>
</ul>
<blockquote>
<p>A local volume represents a mounted local storage device such as a disk, partition or directory.</p>
<p>Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.</p>
<p>Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.</p>
</blockquote>
<p>However, <code>Minikube</code> <strong>supports only</strong> <code>hostpath</code>.</p>
<p>In this case you should check <code>Minikube documentation</code> about <a href="https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/" rel="noreferrer">Persistent Volumes</a></p>
<blockquote>
<p>minikube supports <code>PersistentVolumes</code> of type <code>hostPath</code> out of the box. These PersistentVolumes are mapped to a directory inside the running minikube instance (usually a VM, unless you use <code>--driver=none</code>, <code>--driver=docker</code>, or <code>--driver=podman</code>). For more information on how this works, read the Dynamic Provisioning section below.</p>
<p>minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.</p>
<ul>
<li>/data</li>
<li>/var/lib/minikube</li>
<li>/var/lib/docker</li>
<li>/tmp/hostpath_pv</li>
<li>/tmp/hostpath-provisioner</li>
</ul>
</blockquote>
<p>If you would like to mount a directory from the host, you would need to use <code>minikube mount</code>.</p>
<pre><code>$ minikube mount <source directory>:<target directory>
</code></pre>
<p>For more details, please check <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="noreferrer">Minikube Mounting filesystems</a> documentation.</p>
|
<p>I'm following this tutorial for deploying pgAdmin in a kubernetes cluster:
<a href="https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes" rel="nofollow noreferrer">https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes</a></p>
<p>Mostly it works, but I get errors about the ACL permissions of the volume:</p>
<pre><code>WARNING: Failed to set ACL on the directory containing the configuration database:
[Errno 1] Operation not permitted: '/var/lib/pgadmin'
HINT : You may need to manually set the permissions on
/var/lib/pgadmin to allow pgadmin to write to it.
</code></pre>
<p>Since I had seen some similar errors before, I adjusted the securityContext in the StatefulSet to:</p>
<pre><code> spec:
securityContext:
runAsUser: 5050
runAsGroup: 5050
fsGroup: 5050
containers:
...
</code></pre>
<p>Some of the issues are gone thanks to this, but not the one above. In the pgAdmin docs I can only find how to solve this by using <code>chmod</code> on the folder, but I want these permissions in the yml files for a stable deployment.</p>
<p>How can I do this in my configuration files?</p>
|
<p>In a Kubernetes cluster the tutorial seems to have multiple flaws: the ACL permissions can fail, which can be dealt with using an initContainer, and the namespaces must match. After fixing those, it works like a charm.</p>
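<p>For the permissions part, a hedged sketch of such an initContainer (the volume name <code>pgadmin-data</code> is an assumption and has to match the volume/volumeClaimTemplate used in your StatefulSet):</p>
<pre><code>      initContainers:
        - name: fix-pgadmin-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 5050:5050 /var/lib/pgadmin"]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
</code></pre>
<p>It runs as root once, fixes the ownership for UID/GID 5050 (as used in the securityContext above), and then the main pgAdmin container starts with the correct permissions.</p>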
|
<p>For some reason my master node can no longer connect to my cluster after upgrading from kubernetes 1.11.9 to 1.12.9 via kops (version 1.13.0). In the manifest I'm upgrading <code>kubernetesVersion</code> from 1.11.9 -> 1.12.9. This is the only change I'm making. However when I run <code>kops rolling-update cluster --yes</code> I get the following error:</p>
<pre><code>Cluster did not pass validation, will try again in "30s" until duration "5m0s" expires: machine "i-01234567" has not yet joined cluster.
Cluster did not validate within 5m0s
</code></pre>
<p>After that if I run a <code>kubectl get nodes</code> I no longer see that master node in my cluster.</p>
<p>Doing a little bit of debugging by sshing into the disconnected master node instance I found the following error in my api-server log by running <code>sudo cat /var/log/kube-apiserver.log</code>:</p>
<p><code>controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused</code></p>
<p>I suspect the issue might be related to etcd, because when I run <code>sudo netstat -nap | grep LISTEN | grep etcd</code> there is no output.</p>
<p>Anyone have any idea how I can get my master node back in the cluster or have advice on things to try?</p>
|
<p>I have done some research and got a few ideas for you:</p>
<ol>
<li><p>If there is no output for the etcd grep, it means that your etcd server is down. Look for the 'Exited' etcd container (e.g. <code>docker ps -a | grep Exited | grep etcd</code>) and then get its logs (<code>docker logs <etcd-container-id></code>)</p></li>
<li><p>Try these <a href="https://github.com/kubernetes/kops/issues/5864#issuecomment-426771043" rel="nofollow noreferrer">instructions</a> I found:</p></li>
</ol>
<blockquote>
<p>1 - I removed the old master from de etcd cluster using etcdctl. You
will need to connect on the etcd-server container to do this.</p>
<p>2 - On the new master node I stopped kubelet and protokube services.</p>
<p>3 - Empty Etcd data dir. (data and data-events)</p>
<p>4 - Edit /etc/kubernetes/manifests/etcd.manifests and
etcd-events.manifest changing ETCD_INITIAL_CLUSTER_STATE from new to
existing.</p>
<p>5 - Get the name and PeerURLS from new master and use etcdctl to add
the new master on the cluster. (etcdctl member add "name"
"PeerULR")You will need to connect on the etcd-server container to do
this.</p>
<p>6 - Start kubelet and protokube services on the new master.</p>
</blockquote>
<ol start="3">
<li>If that is not the case, then you might have a problem with the certs. They are provisioned during the creation of the cluster and some of them contain the allowed master endpoints. If that is the case, you'd need to create new certs and roll them out for the api server/etcd clusters.</li>
</ol>
<p>Please let me know if that helped. </p>
|
<p>I have a persistent volume created locally in my Kubernetes cluster running in VULTR managed K8S.</p>
<p>When I then deploy multiple pods (for example a Webservice where one can upload images - deployed with multiple replicas) that use this persistent volume via a persistent volume claim, (mounted to a specific path in the pod via volumeMounts) -></p>
<p>How is it possible, when an image gets uploaded via that described Webservice, that runs in the described replicated pods, to make these uploads (uploaded on a specific pod of that service, that runs on a specific node), available to the other pods/nodes?</p>
<p>What happens is that an image uploaded via the Webservice on pod-x gets saved in the volumeMounts path ONLY on that pod-x, and is not available on the other pods' volumeMounts path.</p>
<p>As using Kubernetes in most cases assumes running an application in more than 1 pod/node, what am I missing here?</p>
<p>PS I am not using a cloud provider's storage, I applied the PV via mode "local".</p>
<p>I also guess that this is not the issue. The issue must be on the PVC side, as this is where the pods access the bound PV.</p>
<p>Maybe someone has experience with this topic.</p>
<p>Franz</p>
|
<p>Kubernetes does not replicate data across local volumes. You need to use shared storage like NFS or SMB.</p>
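<p>A minimal sketch of what that could look like with NFS (the server address and export path are placeholders; an NFS server has to exist and be reachable from every node):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: uploads-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/uploads   # placeholder exported path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
</code></pre>
<p>All replicas can then mount the same <code>uploads-nfs</code> claim with <code>ReadWriteMany</code>, so an upload done on one pod is visible to the others.</p>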
|
<p>I set up a Kubernetes cluster with Calico.
The setup is "simple":</p>
<ul>
<li>1x master (local network, ok)</li>
<li>1x node (local network, ok)</li>
<li>1x node (cloud server, not ok)</li>
</ul>
<p>All debian buster with docker 19.03</p>
<p>On the cloud server the calico pods do not come up:</p>
<p>calico-kube-controllers-token-x:</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 47m (x50 over 72m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "calico-kube-controllers-token-x" : failed to sync secret cache: timed out waiting for the condition
Normal SandboxChanged 3m41s (x78 over 43m) kubelet Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p>calico-node-x:</p>
<pre><code> Warning Unhealthy 43m (x5 over 43m) kubelet Liveness probe failed: calico/node is not ready: Felix is not live: Get "http://localhost:9099/liveness": dial tcp [::1]:9099: connect: connection refused
Warning Unhealthy 14m (x77 over 43m) kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning BackOff 4m26s (x115 over 39m) kubelet Back-off restarting failed container
</code></pre>
<p>My guess is that there is something wrong with IP/Network config, but did not figure out which.</p>
<ul>
<li>Required ports (k8s&BGP) are forwarded from the router, also tried the master directly connected to the internet</li>
<li>--control-plane-endpoint is a hostname and publicly resolvable</li>
<li>Calico is using BGP peering (using public ip as peer)</li>
</ul>
<p>This entry worries me the most:</p>
<ul>
<li>displays the local IP: kubectl get --raw /api</li>
</ul>
<p>I tried to find a way to change this to the public IP of the master, without success.</p>
<p>Anyone got a clue what to try next?</p>
|
<p>After spending some additional time on analysis, the problem turned out to be that the API server address handed out to the nodes was the local IP, not the DNS name.</p>
<p>I created a VPN with WireGuard from the cloud node to the local master, so the local IP of the master is reachable from the cloud node.</p>
<p>I don't know if that is the cleanest solution, but it works.</p>
|
<p>I am trying to deploy a <code>mongo db</code> deployment together with service, as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:5.0
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-user
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: mongo
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>Even though everything seems to be configured right and deployed, it gets into a <code>CrashLoopBackOff</code> state instead of <code>Running</code>. Using <code>kubectl logs <deployment-name></code> I get the following error:</p>
<pre><code>MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
</code></pre>
<p>Does anybody know what to do?</p>
|
<p>To solve this issue I had to run an older <code>mongo-db</code> docker image version (4.4.6), as follows:</p>
<pre><code>image: mongo:4.4.6
</code></pre>
<p>Reference:</p>
<p><a href="https://github.com/docker-library/mongo/issues/485" rel="noreferrer">Mongo 5.0.0 crashes but 4.4.6 works #485</a></p>
|
<p>So I have 2 PVCs in 2 namespaces binding to 1 PV:</p>
<p>The following are the PVCs:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-git
namespace: mlo-dev
labels:
type: local
spec:
storageClassName: mlo-git
volumeMode: Filesystem
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-git
namespace: mlo-stage
labels:
type: local
spec:
storageClassName: mlo-git
volumeMode: Filesystem
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>and the PV:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-git
labels:
type: local
spec:
storageClassName: mlo-git
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
hostPath:
path: /git
</code></pre>
<p>in the namespace "mlo-dev", the binding is successful:</p>
<pre><code>$ kubectl describe pvc pvc-git -n mlo-dev
Name: pvc-git
Namespace: mlo-dev
StorageClass: mlo-git
Status: Bound
Volume: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events: <none>
</code></pre>
<p>Whereas in the namespace "mlo-stage", the binding is failed with the error message: storageclass.storage.k8s.io "mlo-git" not found</p>
<pre><code>$ kubectl describe pvc pvc-git -n mlo-stage
Name: pvc-git
Namespace: mlo-stage
StorageClass: mlo-git
Status: Pending
Volume:
Labels: type=local
Annotations: Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 3m4s (x302 over 78m) persistentvolume-controller storageclass.storage.k8s.io "mlo-git" not found
</code></pre>
<p>As I know, PV is not scoped to namespace, so it should be possible for PVCs in different namespaces to bind to the same PV?</p>
<p>+++++
Added:
+++++</p>
<p>when "kubectl describe pv pv-git", I got the following:</p>
<pre><code>$ kubectl describe pv pv-git
Name: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: mlo-git
Status: Bound
Claim: mlo-dev/pvc-git
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /git
HostPathType:
Events: <none>
</code></pre>
|
<p>I've tried to reproduce your scenario (although for an exact reproduction you would need to provide your <code>storageclass</code> YAML, and I changed the <code>AccessMode</code> for the tests), and in my opinion this behavior is correct (it works as designed).</p>
<p>When you want to check whether a specific object is <code>namespaced</code> you can use the command:</p>
<pre><code>$ kubectl api-resources | grep pv
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
</code></pre>
<p>As <code>PVC</code> shows true, it means <code>pvc</code> is namespaced, and <code>PV</code> is not.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">PersistentVolumeClaim</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a> bind in a 1:1 relationship. When your first PVC bound to the PV, that PV was <code>taken</code> and cannot be used again at that moment. <strong>You should create a second <code>PV</code>.</strong> This can change depending on the <code>reclaimPolicy</code> and what happens with the <code>pod</code>/<code>deployment</code>.</p>
<p>I guess you are using <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static" rel="nofollow noreferrer">Static</a> provisioning.</p>
<blockquote>
<p>A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.</p>
</blockquote>
<p>In this case you have to create one <code>PV</code> per <code>PVC</code>.</p>
<p>If you would use cloud environment, you would use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="nofollow noreferrer">Dynamic</a> provisioning.</p>
<blockquote>
<p>When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.</p>
</blockquote>
<p>For example, I've tried to reproduce it on <a href="https://cloud.google.com/kubernetes-engine" rel="nofollow noreferrer">GKE</a> and each <code>PVC</code> bound to a <code>PV</code>. As GKE uses <code>Dynamic provisioning</code>, when you define only the <code>PVC</code> it uses the <code>default storageclass</code> and automatically creates a <code>PV</code>.</p>
<pre><code>$ kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-git 1Gi RWO Retain Bound mlo-dev/pvc-git mlo-git 15s
persistentvolume/pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0 1Gi RWO Delete Bound mlo-stage/pvc-git mlo-git 6s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mlo-dev persistentvolumeclaim/pvc-git Bound pv-git 1Gi RWO mlo-git 10s
mlo-stage persistentvolumeclaim/pvc-git Bound pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0 1Gi RWO mlo-git 9s
</code></pre>
<p><strong>Solution</strong></p>
<p>To fix this issue, you should create another <code>PersistentVolume</code> to bind the second <code>PVC</code>.</p>
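<p>Based on the PV from your question, the second one can be a copy with a different name (a sketch; since it is a <code>hostPath</code> volume, both PVs can point at the same <code>/git</code> directory on the node):</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git-stage
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git
</code></pre>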
<p><strong>For more details</strong> about binding you can check <a href="https://stackoverflow.com/questions/57839938/kubernetes-pvcs-sharing-a-single-pv">this topic</a>. If you would like more information about <code>PVC</code>, check <a href="https://stackoverflow.com/questions/63412552/why-readwriteonce-is-working-on-different-nodes">this SO thread</a>.</p>
<p><strong>If a second <code>PV</code> doesn't help</strong>, please provide more details about your environment (Minikube/Kubeadm, K8s version, OS, etc.) and your <code>storageclass</code> YAML.</p>
|
<p>I'm using an AKS cluster with version 1.19, and I found that this version of K8s uses <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd" rel="nofollow noreferrer">Containerd</a> instead of Dockershim as the container runtime.
I also use Fluentd to collect logs from my Spring apps; with k8s version 1.18 it works okay, but with k8s version 1.19 I can't collect logs from my Spring app.
I use <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-azureblob.yaml" rel="nofollow noreferrer">this file</a> for my Fluentd DaemonSet.
I wonder whether the log files of my applications no longer live in /var/log/containers. Is this correct?</p>
|
<p>I found a solution here: <a href="https://github.com/fluent/fluentd-kubernetes-daemonset#use-cri-parser-for-containerdcri-o-logs" rel="nofollow noreferrer">use-cri-parser-for-containerdcri-o-logs</a></p>
<blockquote>
<p>By default, these images use json parser for /var/log/containers/
files because docker generates json formatted logs. On the other hand,
containerd/cri-o use different log format. To parse such logs, you
need to use cri parser instead.</p>
</blockquote>
<p>We need to build a new Fluentd image using the cri parser; that worked for me.</p>
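<p>For reference, the relevant change is in the tail source for container logs, roughly like this (a sketch based on the linked README; it assumes the <code>fluent-plugin-parser-cri</code> plugin is available in the image, which is why a custom image build is needed):</p>
<pre><code><source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type cri
  </parse>
</source>
</code></pre>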
|
<p>I'm trying to use gRPC between my services. I have just three services. Two of them run in k8s, one in GCP.
I make calls from the service in GCP to the other services. I use the same NettyChannelBuilder for my stubs. The clients have the same requests-per-second rate. I've set keepAliveWithoutCalls=true, keepAliveTime, and idleTimeout in the builder.
Unfortunately I periodically get this error, from only one client: </p>
<pre><code>io.grpc.StatusRuntimeException: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
</code></pre>
<p>Could anyone please help me find a solution?
I've read many issues on GitHub already. Now I'm going to set a retry policy for the calls, but it's not the best solution. </p>
|
<p>From <a href="https://github.com/grpc/grpc-web/issues/528" rel="nofollow noreferrer">github</a></p>
<blockquote>
<p>We faced the same error. The reason for us was because tcp_keepalive was set too high for our upstream service. We changed keepalive_time to 300 seconds and the problem went away.</p>
</blockquote>
<blockquote>
<p>It makes sense in our case because we have Envoy pointing to a network load balancer (aws) which has a 350s idle timeout.</p>
</blockquote>
<blockquote>
<p>We added this to our envoy config.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>clusters:
- name: grpc-service
connect_timeout: 0.25s
http2_protocol_options: {}
upstream_connection_options:
tcp_keepalive:
keepalive_time: 300
</code></pre>
|
<p>My company has one project which required 3-4 days of deployment time. I thought about it and tried to make a deployment model for this project using Kubernetes. </p>
<p>I read all about it, but getting into the project level created some problems.
What has been done till now...</p>
<ol>
<li>Created Kubernetes cluster with one master node and one worker node in ubuntu VM.</li>
<li>Understand I need to create a Deployment file, Service file, Persistent volume, and claim.</li>
<li>Created a custom image with the base image as CentOS7 and python2.7 with certain requirements and uploaded them on the docker hub.</li>
</ol>
<p>Now I created one Deployment.yml file to pull that image, but it is showing a CrashLoopBackOff error and it is NOT able to pull the image through the Deployment.yml file</p>
<p>Note: I pulled the image separately using docker and it is working. </p>
<p>Thanks in advance :)</p>
|
<p>It is a very wide area but I could give you certain high level points with respect to kubernetes.</p>
<ul>
<li>Create different clusters for different projects. Also create different cluster for different environment like QA, Dev, Production.</li>
<li>Set resource quotas for individual projects. Also your deployments should have resource limits for RAM and CPUs. Precisely estimate the resource demand for each and every application.</li>
<li>Use namespaces for logical separation and using tags is always a good approach.</li>
<li>If you want to follow template based approach, you could search about helm charts.</li>
<li>Your k8s nodes, disks, deployments, services, ingress any other kind of kubernetes object you create should have labels.</li>
<li>Use node auto scaling (cloud specific) and horizontal pod auto scaling techniques for better scaling and resilience.</li>
<li>Always try to distribute your k8s deployments across region for fail-over strategy. If anything goes down in some part of your hosted region then your application should sustain it.</li>
<li>In case your want to move project to some reputed cloud provider, try to integrate cloud provided security and firewall rules with your k8s cluster.</li>
</ul>
<p>I hope this would help.</p>
|
<p>I would like to use <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> (Kubernetes in Docker) to test a tool I am writing. I would love to have a test matrix of different Kubernetes versions to test against, is there a way to configure the kubernetes version in <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> somehow?</p>
|
<p>You can specify the image to be used for the nodes and choose any other published version:</p>
<pre><code>kind create cluster --image "kindest/node:v1.14.1"
</code></pre>
<p>Available tags can be found at <a href="https://hub.docker.com/r/kindest/node/tags" rel="noreferrer">https://hub.docker.com/r/kindest/node/tags</a></p>
|
<p>I have tried the database application for mysql and postgres too. When I use the PV type OSS and deploy the application, the pods go into CrashLoopBackOff. The error which I am constantly getting is </p>
<pre><code>chown: changing ownership of '/var/lib/mysql/': Input/output error
</code></pre>
<p>(For PostgreSQL I get the same error with <code>var/lib/postgresql/data</code>.)</p>
<p>The path for which it is giving the error is inside the container. What I tried: before uploading files to OSS I changed the ownership of the files and folders from 999 to root and then uploaded them to OSS.
But I am still getting this error for every database.
Please give me a solution for this, as I have been stuck for a long time.
Thanks in advance </p>
|
<p>If I understand you correctly there are few things you can do:</p>
<ol>
<li><p>Launch the db container as <code>root</code> and then <code>chown</code> the directory. In the case of mysql, if you still can't change it, try running <code>sudo chown -R mysql:mysql /var/lib/mysql</code>, as <code>mysql:mysql</code> is the default ownership there.</p></li>
<li><p>Use an <code>initContainer</code> in order to change the ownership of the target folder <code>/var/lib/mysql/</code> (see the sketch further below)</p></li>
<li><p>Use <code>securityContext</code> for <code>containers</code>. For example:</p></li>
</ol>
<pre><code>containers:
- name: mysql
image: <msql_image>
securityContext:
runAsUser: 0
</code></pre>
<p>It should all also work for <code>postgresql</code>.</p>
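<p>If you go with the <code>initContainer</code> approach from option 2, a hedged sketch could look like this (the volume name <code>data</code> is an assumption and must match the volume mounted at <code>/var/lib/mysql</code>; 999 is the mysql UID/GID mentioned in your question):</p>
<pre><code>initContainers:
- name: fix-mysql-permissions
  image: busybox
  command: ["sh", "-c", "chown -R 999:999 /var/lib/mysql"]
  securityContext:
    runAsUser: 0
  volumeMounts:
  - name: data
    mountPath: /var/lib/mysql
</code></pre>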
<p>Please let me know if that helped. </p>
|
<p>I am trying to pull images from my docker-hub repo. I followed the documentation found here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
<p>However, after typing in the command:</p>
<blockquote>
<p>kubectl create secret generic docker-credentials --from-file=/my/local/path/to/.docker/config.json --type=kubernetes.io/dockerconfigjson</p>
</blockquote>
<p>I get the following error:</p>
<blockquote>
<p>The Secret "docker-credentials" is invalid: data[.dockerconfigjson]:
Required value</p>
</blockquote>
<p>I tried deleting the config.json and re-logging in but with no change in behaviour.</p>
<p>docker version prints:</p>
<pre><code>Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:29:52 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:28:22 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
</code></pre>
<p>kubectl version prints:</p>
<pre><code>Client Version: version.Info{
Major:"1",
Minor:"17",
GitVersion:"v1.17.2",
GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89",
GitTreeState:"clean",
BuildDate:"2020-01-18T23:30:10Z",
GoVersion:"go1.13.5",
Compiler:"gc",
Platform:"linux/amd64"
}
Server Version: version.Info{
Major:"1",
Minor:"15",
GitVersion:"v1.15.2",
GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568",
GitTreeState:"clean",
BuildDate:"2019-08-05T09:15:22Z",
GoVersion:"go1.12.5",
Compiler:"gc",
Platform:"linux/amd64"
}
</code></pre>
<p>the config.json looks like this:</p>
<pre><code>{
"auths": {
"https://index.docker.io/v1/": {
"auth": "secret-stuff"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.5 (linux)"
}
}
</code></pre>
<p>For the moment I can enter the credentials manually but I'd like to understand what's going wrong.</p>
|
<p>I ran into the same issue with <code>kubectl create secret generic --type=kubernetes.io/dockerconfigjson</code> ("<em>The Secret "xxx" is invalid: data[.dockerconfigjson]: Required value</em>") and it was because of a mistake in the command line, which I think Marcus's command has too.</p>
<p>I had:</p>
<p><code>--from-file=/run/user/xxxx/containers/auth.json</code></p>
<p>It was supposed to be:</p>
<p><code>--from-file=.dockerconfigjson=/run/user/xxxx/containers/auth.json</code></p>
<p>I misinterpreted the --from-file option as taking just a file path, but it takes a key=value pair, and in this case the key is supposed to be ".dockerconfigjson".</p>
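<p>Applied to the command from the question, the fix would look like this:</p>
<pre><code>kubectl create secret generic docker-credentials \
  --from-file=.dockerconfigjson=/my/local/path/to/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
</code></pre>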
<p>The answer by Crou hints at --from-file as the culprit, but I thought I would add an answer to spell out what's missing. (And I think the point about the file <code>type</code> being missing was misleading, because the --type option is there in the command, that's why I skipped over that answer originally when looking at this question for help.)</p>
<p>Note that I got a hint of what was wrong by running kubectl in verbose mode (--v=N, see <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging</a>), which showed kubectl telling the API server the structure of the Secret resource, and the structure didn't say ".dockerconfigjson" like I expected from the Secret YAML examples in the Kubernetes docs.</p>
|
<p>Recently I deployed a Kubernetes cluster which is running a WordPress instance and phpMyAdmin. I'm using the Nginx ingress controller to perform path-based routing for both services. Requests to <code>/</code> work without any hassle, but when I request <code>domain.com/phpmyadmin/</code> I get a login page, after which I am redirected to <code>domain.com/index.php</code> instead of <code>domain.com/phpmyadmin/index.php</code>. Please suggest a possible workaround for this. Thank you guys for the support :)</p>
<p>My ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/add-base-url : "true"
nginx.ingress.kubernetes.io/rewrite-target: "/$2"
# ingress.kubernetes.io/rewrite-target: "^/phpmyadmin/"
spec:
rules:
- host: example.domain.com
http:
paths:
- path: /
backend:
serviceName: wordpress
servicePort: 80
- path: /phpmyadmin(/|$)(.*)
backend:
serviceName: phpmyadmin
servicePort: 80
</code></pre>
|
<p>I'd say issue is not on <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Nginx Ingress</a> side.</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: "/$2"
...
- path: /phpmyadmin(/|$)(.*)
</code></pre>
<p>Should work properly for you.</p>
<p>However, there is a second part: the configuration of <code>phpmyadmin</code>. As you didn't provide this configuration, I can only guess what could cause this issue.</p>
<p>Like mentioned in <a href="https://docs.phpmyadmin.net/en/latest/config.html#config" rel="nofollow noreferrer">phpmyadmin docs</a>, sometimes you need to set <code>$cfg['PmaAbsoluteUri']</code></p>
<blockquote>
<p>In some setups (like separate SSL proxy or load balancer) you might have to set $cfg['PmaAbsoluteUri'] for correct redirection.</p>
</blockquote>
<p>As I based my tests on <a href="https://gist.github.com/dnaroma/178b3b187aa329c01b27d90a7b38709c" rel="nofollow noreferrer">this configuration</a>, a lot depends on how you configured <code>PMA_ABSOLUTE_URI</code>: is it <code>http://somedomain.com/phpmyadmin</code> or something different?
This is important, as you might encounter a situation like:</p>
<ul>
<li>When you enter to <code>http://somedomain.com/phpmyadmin</code> and login you will be redirected to <code>http://somedomain.com/</code> so <code>Ingress</code> will redirect you to <code>path: /</code> set in ingress</li>
<li>If you will again enter <code>http://somedomain.com/phpmyadmin</code> you will be able to see <code>phpmyadmin</code> content, like you would be already logged in.</li>
</ul>
<p>You could try to add an <code>env</code> entry to your <code>phpmyadmin</code> deployment. It would look similar to the below:</p>
<pre><code> env:
- name: PMA_ABSOLUTE_URI
value: http://somedomain.com/myphpadmin/
</code></pre>
<p>Last thing, its not recommended to use expose <code>phpmyadmin</code> without <code>https</code>.</p>
<p>For some extra information you can read <a href="https://blog.dnaroma.eu/2020/07/31/study-k8s-with-microk8s/" rel="nofollow noreferrer">this</a> article.</p>
<p><strong>In short:</strong></p>
<ul>
<li>Nginx ingress configuration looks ok</li>
<li>Check your <code>phpmyadmin</code> configuration, especially <code>PMA_ABSOLUTE_URI</code>.</li>
</ul>
|
<p>When I'm creating resources for OpenShift/K8s, I might be out of a coverage area. I'd like to be able to get the schema definitions while offline.</p>
<p>How can I get the schema for a kind from the command line? For example, I would like to get a generic schema for a Deployment, DeploymentConfig, Pod or Secret.
Is there a way to get the schema without using Google? Ideally, I could also get some documentation describing it.</p>
|
<p>Posting @Graham Dumpleton's comment as a community wiki answer, based on the OP's response saying that it solved their problem:</p>
<blockquote>
<p>Have you tried running <code>oc explain --recursive Deployment</code>? You still
need to be connected when you generate it, so you would need to save
it to a file for later reference. Maybe also get down and read the
free eBook at openshift.com/deploying-to-openshift which mentions this
command and lots of other stuff as well. – Graham Dumpleton</p>
</blockquote>
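<p>As a concrete sketch of the "save it to a file" step (the file names are just examples):</p>
<pre><code># Must be run while still connected to the cluster; the files can then be read offline
oc explain --recursive Deployment > deployment-schema.txt
oc explain --recursive DeploymentConfig > deploymentconfig-schema.txt
</code></pre>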
|
<p>I need some beginner help to KrakenD. I am running it on Ubuntu. The config is provided below.</p>
<p>I am able to reach the /healthz API without problem.</p>
<p>My challenge is that the /hello path returns error 500. I want this path to redirect to a Quarkus app that runs at <a href="http://getting-started36-getting-going.apps.bamboutos.hostname.us/" rel="nofollow noreferrer">http://getting-started36-getting-going.apps.bamboutos.hostname.us/</a>.</p>
<p>Why is this not working? If I modify the /hello backend and use a fake host, I get the exact same result. This suggests that KrakenD is not even trying to connect to the backend.</p>
<p>In the logs, KrakenD says:</p>
<p><code>Error #01: invalid character 'H' looking for beginning of value</code></p>
<p>kraken.json:</p>
<pre><code>{
"version": 2,
"port": 9080,
"extra_config": {
"github_com/devopsfaith/krakend-gologging": {
"level": "DEBUG",
"prefix": "[KRAKEND]",
"syslog": false,
"stdout": true,
"format": "default"
}
},
"timeout": "3000ms",
"cache_ttl": "300s",
"output_encoding": "json",
"name": "KrakenD API Gateway Service",
"endpoints": [
{
"endpoint": "/healthz",
"extra_config": {
"github.com/devopsfaith/krakend/proxy": {
"static": {
"data": { "status": "OK"},
"strategy": "always"
}
}
},
"backend": [
{
"url_pattern": "/",
"host": ["http://fake-backend"]
}
]
},
{
"endpoint": "/hello",
"extra_config": {},
"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]
}
]
}
</code></pre>
<p>What am I missing?</p>
|
<p>add "encoding": "string" to the backend section.</p>
<pre><code>"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"encoding": "string" ,
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]
</code></pre>
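<p>A quick way to verify after restarting KrakenD, assuming it runs locally on port 9080 as in your config:</p>
<pre><code>curl -i http://localhost:9080/hello
</code></pre>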
|
<p>Kubernetes v1.19</p>
<p>Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 80 of other Pods in the same namespace.</p>
<p>Ensure that the new NetworkPolicy:</p>
<ul>
<li>does not allow access to Pods not listening on port 80</li>
<li>does not allow access from Pods not in namespace internal</li>
</ul>
<p>I need to know whether I can do this without adding labels to the namespace and Pod.</p>
|
<ol>
<li>You need to label the namespace first</li>
</ol>
<p>For example: <strong>kubectl label ns staging env=staging</strong></p>
<p>(The example uses a namespace called <code>staging</code>; for the task in the question, label the <code>internal</code> namespace instead and adjust the manifest accordingly.)</p>
<p>2. Then create the NetworkPolicy:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-port-from-namespace
namespace: staging
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
env: staging
ports:
- protocol: TCP
port: 80
</code></pre>
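<p>To sanity-check a policy like this afterwards (a sketch; <code>POD_IP</code> is a placeholder for a target Pod's IP, and the namespaces should be adjusted to where the policy lives, e.g. <code>internal</code> for the question's task):</p>
<pre><code># From a Pod inside the policy's namespace, port 80 should be reachable
kubectl run tester --rm -it -n internal --image=busybox:1.36 -- \
  wget -qO- --timeout=2 http://POD_IP:80

# From a Pod in another namespace, the same request should time out
kubectl run tester --rm -it -n default --image=busybox:1.36 -- \
  wget -qO- --timeout=2 http://POD_IP:80
</code></pre>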
|
<p>How do I clear old deployments? I'm able to shrink a deployment to 0 replicas via <code>kubectl scale deployment.v1.apps/hello-kubernetes3 --replicas=0</code>, but as shown below they're still present in some form.</p>
<pre><code>$ kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
hello-kubernetes-5cb547b7d 1 1 1 27m hello-kubernetes paulbouwer/hello-kubernetes:1.8 app=hello-kubernetes,pod-template-hash=5cb547b7d
hello-kubernetes-6d9fd679cd 0 0 0 32m hello-kubernetes paulbouwer/hello-kubernetes:1.8 app=hello-kubernetes,pod-template-hash=6d9fd679cd
hello-kubernetes3-6d9fd679cd 0 0 0 25m hello-kubernetes paulbouwer/hello-kubernetes:1.8 app=hello-kubernetes,pod-template-hash=6d9fd679cd
</code></pre>
|
<p>This is a community wiki answer, as part of it is based on @meaningqo's comment, but I would like to shed some more light on this topic with the help of the official documentation.</p>
<p>What you were doing in the first place is not deleting a deployment but actually scaling it to 0. In order to delete a deployment or any other resource, you should use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete" rel="nofollow noreferrer">kubectl delete command</a>:</p>
<blockquote>
<p>Delete resources by filenames, stdin, resources and names, or by
resources and label selector.</p>
<p>JSON and YAML formats are accepted. Only one type of the arguments may
be specified: filenames, resources and names, or resources and label
selector.</p>
</blockquote>
<p>Note that:</p>
<blockquote>
<p>Some resources, such as pods, support graceful deletion. These
resources define a default period before they are forcibly terminated
(the grace period) (...) Because these resources often represent
entities in the cluster, deletion may not be acknowledged immediately.</p>
</blockquote>
<p>So you may want to wait a bit before seeing the results.</p>
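<p>For example, with the names from your output (a sketch; adjust the names to your own resources):</p>
<pre><code># Deleting the Deployment also garbage-collects its ReplicaSets and Pods
kubectl delete deployment hello-kubernetes3

# Or delete a leftover ReplicaSet directly
kubectl delete rs hello-kubernetes3-6d9fd679cd
</code></pre>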
<p>Referring to your second question, there are also other options aimed at working with <code>ReplicaSets</code> specifically:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#deleting-a-replicaset-and-its-pods" rel="nofollow noreferrer">Deleting a ReplicaSet and its Pods</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#deleting-just-a-replicaset" rel="nofollow noreferrer">Deleting just a ReplicaSet</a></p>
</li>
</ul>
<p>I also recommend familiarizing yourself with the whole <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet guide</a> for better understanding of this particular topic.</p>
|
<p>We want to disable <code>oc get/describe</code> for <code>secrets</code> to prevent token login.</p>
<p>The current policy prevents create, update, and delete, but not the viewing of secrets:</p>
<pre><code>package admission
import data.k8s.matches
# Deny all user for doing secret ops except policyadmin
deny[query] {
matches[[resource]]
not "policyadmin" == resource.userInfo.username
"Secret" == resource.kind.kind
msg := sprintf("Custom Unauthorized user: %v", [resource.userInfo.username])
query = {
"id": "policy-admin-for-secret-only",
"resource": {
"kind": kind,
"namespace": namespace,
"name": name
},
"resolution": {
"message": msg
},
}
}
</code></pre>
<p>The data in the resource object is just: </p>
<blockquote>
<p>{\"kind\": {\"group\": \"\", \"kind\": \"Secret\", \"version\":
\"v1\"}, \"name\": \"s5-token-n6v6q\", \"namespace\": \"demo\",
\"operation\": \"DELETE\", \"resource\": {\"group\": \"\",
\"resource\": \"secrets\", \"version\": \"v1\"}, \"uid\":
\"748cdab2-1c1d-11ea-8b11-080027f8814d\", \"userInfo\": {\"groups\":
[\"system:cluster-admins\", \"system:masters\",
\"system:authenticated\"], \"username\": \"system:admin\"}</p>
</blockquote>
<p>The example in <a href="https://github.com/raffaelespazzoli/openshift-opa/blob/master/examples/authorization-webhooks/unreadable_secrets.rego" rel="nofollow noreferrer">https://github.com/raffaelespazzoli/openshift-opa/blob/master/examples/authorization-webhooks/unreadable_secrets.rego</a> uses the <strong>resource.spec</strong> object, but I don't think it's available in my <code>input/AdmissionReview</code> object?</p>
<p>I am using </p>
<ul>
<li>minishift 1.24 </li>
<li>openshift v3.9.0+2e78773-56 </li>
<li>kubernetes v1.9.1+a0ce1bc657 </li>
<li>etcd 3.2.16</li>
</ul>
|
<p>Admission control in Kubernetes does NOT let you control a <code>get</code>. It only lets you control <code>create</code>, <code>update</code>, <code>delete</code>, and <code>connect</code>. The API docs for the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#validatingwebhookconfiguration-v1-admissionregistration-k8s-io" rel="nofollow noreferrer">validating webhook</a> and its descendant RuleWithOperations (no handy link) don't make this clear, but the <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#admission-control" rel="nofollow noreferrer">docs introducing API access</a> state it explicitly.</p>
<p>To control <code>get</code>, you need to use <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#authorization" rel="nofollow noreferrer">authorization</a>. You could use <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> to restrict who can <code>get</code> any of the <code>Secret</code>s. To use OPA for authorization you would need the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">authorization webhook mode</a>.</p>
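<p>As an illustration of the RBAC route (a sketch only; the role and binding names are placeholders, while the <code>demo</code> namespace and <code>policyadmin</code> user are taken from your example): if read access to Secrets is granted only through a dedicated role binding, any user not bound to it is denied <code>get</code>/<code>list</code> on Secrets by default.</p>
<pre><code># Grants read access to Secrets in the demo namespace to policyadmin only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: demo
subjects:
- kind: User
  name: policyadmin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Keep in mind this only restricts users who are not already granted broader access through other bindings (for example <code>cluster-admin</code> or <code>system:masters</code>).</p>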
<p>In Andrew's code that you link to, he is using an authorization webhook--not an admission control webhook. That's why some of the data he is using from <code>input</code> isn't the same as what you see from an admission control webhook. Taking a quick look at his writeup, it seems you need to follow his instructions to <a href="https://github.com/raffaelespazzoli/openshift-opa#enable-authorization" rel="nofollow noreferrer">Enable Authorization</a>. </p>
|