Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I'm trying to deploy my web application using Kubernetes. I used Minikube to create the cluster and successfully exposed my frontend React app using an ingress. Yet when I set the backend service's URL in the "env" field of the frontend's deployment.yaml, it does not work: when I try to connect to the backend service from inside the frontend pod, the connection is refused.</p>
<p>frontend deployment yaml</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: <image_name>
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        env:
        - name: REACT_APP_API_V1_ENDPOINT
          value: http://backend-svc
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: frontend
</code></pre>
<p>Ingress for frontend</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "12h"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: front-testk.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80
</code></pre>
<p>Backend deployment yaml</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
  name: backend
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: <image_name>
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      restartPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 80
</code></pre>
<pre><code>% kubectl get svc backend-svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
backend-svc ClusterIP 10.109.107.145 <none> 80/TCP 21h app=backend
</code></pre>
<p>I connected to the frontend pod and tried to reach the backend using the ENV variable created during the deploy:</p>
<pre><code>β― kubectl exec frontend-75579c8499-x766s -it sh
/app # apk update && apk add curl
OK: 10 MiB in 20 packages
/app # env
REACT_APP_API_V1_ENDPOINT=http://backend-svc
/app # curl $REACT_APP_API_V1_ENDPOINT
curl: (7) Failed to connect to backend-svc port 80: Connection refused
/app # nslookup backend-svc
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: backend-svc.default.svc.cluster.local
Address: 10.109.107.145
** server can't find backend-svc.cluster.local: NXDOMAIN
</code></pre>
<p>Exec into my backend pod</p>
<pre><code># netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1/node
# netstat -lnturp
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
</code></pre>
| efgdh | <p>As I suspected, your application listens on port 8080. If you look closely at your output from <code>netstat</code> you will notice that <code>Local Address</code> is <code>0.0.0.0:8080</code>:</p>
<pre><code># netstat -tulpn
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp        0      0 0.0.0.0:8080        0.0.0.0:*           LISTEN      1/node   <-- listening on 8080
</code></pre>
<p>In order to fix that you have to correct your <code>targetPort</code> in your service:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 8080 # <-- change this to 8080
</code></pre>
<p>There is no need to change the port on the deployment side since the <code>containerPort</code> is primarily informational. Not specifying a port there does not prevent that port from being exposed. Any port which is listening on the default <code>"0.0.0.0"</code> address inside a container will be accessible.</p>
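<p>Once the corrected Service is applied, the same check from the question should succeed; a quick sketch (the pod name will differ after a redeploy, and curl was installed in the pod earlier):</p>
<pre class="lang-sh prettyprint-override"><code># the env var is expanded inside the container, not locally, because of the single quotes
kubectl exec -it frontend-75579c8499-x766s -- sh -c 'curl -v "$REACT_APP_API_V1_ENDPOINT"'
</code></pre>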
| acid_fuji |
<p>For some troubleshooting, I need to manually change the status of a running job from <code>active</code> to <code>successful</code> to make it completed. The job itself is an infinite loop that does not finish. The option to delete the job cannot be used because it puts the job in a failed state.</p>
<p>Update: The job actually does not fail, instead it gets stuck, and therefore I delete it which makes it go to the failed state. Also, it is not possible to change the code of the job (it is not a bash script).</p>
<p>Thanks</p>
| imriss | <p>It looks to me like you are more interested in treating the symptoms of your problem than the actual reasons behind them.</p>
<blockquote>
<p>This is for quick troubleshooting where I do not want to stop the rest
to add a bypass for the status of this job.</p>
</blockquote>
<p>I think that the quicker way would be to actually make sure that your other jobs are less dependent on this one, instead of trying to force Kubernetes to mark this Job/Pod as successful.</p>
<p>The closest thing I could get to your goal was to <code>curl</code> the api-server directly through <code>kubectl proxy</code>. But that solution only works if the job has failed first, and unfortunately it does not work with running pods.</p>
<p>For this example I used a job that exits with status 1:</p>
<pre class="lang-yaml prettyprint-override"><code>  containers:
  - name: job
    image: busybox
    args:
    - /bin/sh
    - -c
    - date; echo sleeping....; sleep 5s; exit 1;
</code></pre>
<p>Then run <code>kubectl proxy</code>:</p>
<pre class="lang-java prettyprint-override"><code>β ~ kubectl proxy --port=8080 &
[1] 18372
β ~ Starting to serve on 127.0.0.1:8080
</code></pre>
<p>And post the status to api-server:</p>
<pre class="lang-sh prettyprint-override"><code>curl localhost:8080/apis/batch/v1/namespaces/default/jobs/job3/status -XPATCH -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" -d '{"status": {"succeeded": 1}}'
</code></pre>
<pre class="lang-java prettyprint-override"><code> ],
"startTime": "2021-01-28T14:02:31Z",
"succeeded": 1,
"failed": 1
}
}%
</code></pre>
<p>If I then check the job status I can see that it was marked as completed.</p>
<pre class="lang-java prettyprint-override"><code>β ~ k get jobs
NAME COMPLETIONS DURATION AGE
job3 1/1 45s 45s
</code></pre>
<p>PS. I tried this way to set the status to successful/completed for either the job or the pod, but that was not possible. The status changed for a moment and then <code>controller-manager</code> reverted it to running. Perhaps this small <code>window</code> with the changed status might be what you want, and it will allow your other jobs to move on. I'm merely assuming this since I don't know the details.</p>
<p>For more reading how to access API in that way please have a look at the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#using-kubectl-proxy" rel="nofollow noreferrer">using kubectl</a> docs.</p>
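<p>To see whether the patch stuck, the job's status field can be read back directly; a minimal check, reusing the <code>job3</code> name from above:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get job job3 -o jsonpath='{.status.succeeded}'
</code></pre>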
| acid_fuji |
<p>We are planning to use Prometheus, Alertmanager and Grafana for monitoring AKS, but it has been found that we cannot obtain the kubelet metrics; I don't know whether it is a black box/hidden by Azure or not. In addition, container CPU usage, i.e. container_cpu_usage_seconds_total, cannot be obtained in Prometheus. Does anyone have experience using Prometheus to monitor AKS?</p>
<p>Remark: I am using this <a href="https://github.com/camilb/prometheus-kubernetes" rel="nofollow noreferrer">https://github.com/camilb/prometheus-kubernetes</a> to install Prometheus on AKS</p>
| John Jin | <p>I assume the kubelet is not being detected as a target to scrape metrics from. This has to do with your AKS version: in versions prior to 1.15 the metrics-server was started as follows:</p>
<ul>
<li>command:
<ul>
<li>/metrics-server</li>
<li>--kubelet-port=10255</li>
<li>--deprecated-kubelet-completely-insecure=true</li>
<li>--kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP</li>
</ul>
</li>
</ul>
<p>while in recent versions of aks:</p>
<ul>
<li>command:
<ul>
<li>/metrics-server</li>
<li>--kubelet-insecure-tls</li>
<li>--kubelet-preferred-address-types=InternalIP</li>
</ul>
</li>
</ul>
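<p>If the kubelet/cAdvisor targets are missing from Prometheus entirely, a scrape job has to be added for them, since <code>container_cpu_usage_seconds_total</code> is exposed by the kubelet's cAdvisor endpoint. Below is a minimal sketch of such a job, assuming Prometheus runs in-cluster with a service account that can read nodes (the job name is an assumption):</p>
<pre class="lang-yaml prettyprint-override"><code>scrape_configs:
- job_name: kubernetes-cadvisor          # name is an assumption
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __metrics_path__
    replacement: /metrics/cadvisor       # kubelet's cAdvisor endpoint
</code></pre>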
| talal |
<p>I am running a deployment on a cluster of 1 master and 4 worker nodes (2 machines with 32GB and 2 with 4GB of memory). I want to run a maximum of 10 pods on the 4GB machines and 50 pods on the 32GB machines.</p>
<p>Is there a way to assign different number of pods to different nodes in Kubernetes for same deployment?</p>
| love | <blockquote>
<p>I want to run a maximum of 10 pods on 4GB machines and 50 pods in 32GB
machines.</p>
</blockquote>
<p>This is possible by <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">configuring</a> the kubelet to limit the maximum pod count on the node:</p>
<pre class="lang-golang prettyprint-override"><code>// maxPods is the number of pods that can run on this Kubelet.
MaxPods int32 `json:"maxPods,omitempty"`
</code></pre>
<p>Github can be found <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/config/v1beta1/types.go#L480-L488" rel="nofollow noreferrer">here</a>.</p>
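<p>For instance, a kubelet configuration file sketch that caps a 4GB node at 10 pods; the value is per node, so the 32GB nodes would get their own file with <code>maxPods: 50</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10
</code></pre>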
<blockquote>
<p>Is there a way to assign different number of pods to different nodes
in Kubernetes for same deployment?</p>
</blockquote>
<p>Adding this to your request makes it not possible. There is no native mechanism in Kubernetes at this point to satisfy it, and that more or less follows the spirit of how Kubernetes works and its principles: basically you schedule your application and let the scheduler decide where it should go, unless a very specific resource is required, like a GPU, and that is possible with labels, affinity, etc.</p>
<p>If you look at the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#deployment-v1-apps" rel="nofollow noreferrer">Kubernetes-API</a> you will notice that there is no field that would make your request possible. However, the API functionality can be extended with custom resources, and this problem could be tackled by creating your own scheduler. But that is not the easy way of fixing this.</p>
<p>You may also want to set appropriate memory requests. Higher requests will tell the scheduler to place more pods on the node that has more memory resources. It's not ideal, but it is something.</p>
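<p>A rough sketch of such a request; the value is an illustrative assumption (around 600Mi would let roughly 6 pods fit on a 4GB node and around 50 on a 32GB node, before system overhead):</p>
<pre class="lang-yaml prettyprint-override"><code>    containers:
    - name: app                    # hypothetical container name
      resources:
        requests:
          memory: "600Mi"          # illustrative value; tune to get the desired pods-per-node ratio
</code></pre>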
| acid_fuji |
<p>I have the following code</p>
<pre><code>#/bin/bash
set -e
set -x
requestResponse=$(ssh jump.gcp.xxxxxxx """source dev2 spi-dev
kubectl get pods -o json | jq '.items[] |select(.metadata.name[0:3]=="fea")' | jq .status.podIP
2>&1""")
echo $requestResponse
</code></pre>
<p>In the above code <code>source dev2 spi-dev</code> means we have moved to the spi-dev namespace inside the dev2 cluster. <code>kubectl get pods -o json | jq '.items[] |select(.metadata.name[0:3]=="fea")' | jq .status.podIP 2>&1""") </code> is meant to print the IP address of the pod whose name starts with fea. If I run the kubectl command manually, it works. I have also tried escaping fea like <code>\"fea\"</code></p>
| Sriram Chowdary | <p>These triple quotes <code>"""</code> are not working as you expect.
Try to change it like this:</p>
<pre><code>ssh jump.gcp.xxxxxxx << EOF
source dev2 spi-dev
# keep line continuations outside the single-quoted jq filter,
# otherwise the backslash is passed to jq and breaks the program
kubectl get pods -o json | \
  jq '.items[] | select(.metadata.name[0:3]=="fea")' | \
  jq .status.podIP 2>&1
EOF
</code></pre>
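<p>One more detail worth noting: with an unquoted <code>EOF</code> delimiter, <code>$variables</code> and backticks are expanded on the local machine before the text is sent to the remote host. If everything should be evaluated remotely, the delimiter can be quoted; a minimal sketch:</p>
<pre><code># quoting the delimiter prevents local expansion of the here-document
ssh jump.gcp.xxxxxxx <<'EOF'
source dev2 spi-dev
kubectl get pods -o json | \
  jq '.items[] | select(.metadata.name[0:3]=="fea")' | \
  jq .status.podIP 2>&1
EOF
</code></pre>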
| Ivan |
<p>Is there way in istio to set a custom request header (x-custom-header) with value as dynamic value (uuid) or setting value of the custom header from an already existing header?</p>
<p>I am using gateway + virtualservice + envoy(sidecar)</p>
<p>Example configuration options in nginx =
proxy_set_header x-custom-header $connection:$connection_requests;</p>
<p>I tried below snippet in my virtual service and got literal value "{{ uuid }}" forwarded to my application.</p>
<pre><code>http:
- headers:
    request:
      set:
        x-custom-header: '{{ uuid() }}'
  match:
  - uri:
      prefix: /
</code></pre>
| aayush kumar | <p>I was able to solve this by using following snippet in virtual service</p>
<pre><code>http:
- headers:
    request:
      set:
        x-custom-header: '%REQ(x-request-id)%'
</code></pre>
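<p>For context, a fuller sketch of where that header block sits in a VirtualService; the gateway, host and destination names below are placeholders, not values from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice        # placeholder
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway                   # placeholder
  http:
  - headers:
      request:
        set:
          x-custom-header: '%REQ(x-request-id)%'   # copies Envoy's per-request id
    route:
    - destination:
        host: my-app.default.svc.cluster.local     # placeholder service
        port:
          number: 8080
</code></pre>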
| aayush kumar |
<p>I deployed <strong>Spring Boot Thymeleaf Application</strong> to <strong>AWS Kubernetes Cluster</strong>.</p>
<p>Locally it works fine:</p>
<pre><code>http://localhost:8097
</code></pre>
<p>But when I deploy it to <strong>AWS Kubernetes Cluster</strong>, I see the following error:</p>
<p><strong>Whitelabel Error Page 404</strong></p>
<p>Here are some files of my application:</p>
<p><strong>application.properties:</strong></p>
<pre><code>### server port
server.port=8097
</code></pre>
<p><strong>WebController.java:</strong></p>
<pre><code>@Controller
public class WebController {

    @Autowired
    private CustomerDAO customerDAO;

    @GetMapping(path = "/")
    public String index() {
        return "external";
    }
}
</code></pre>
<p><strong>pom.xml:</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.6</version>
<relativePath />
</parent>
<groupId>skyglass</groupId>
<artifactId>customer-management-keycloak</artifactId>
<version>1.0.0</version>
<name>customer-management-keycloak</name>
<description>Customer Management Application</description>
<properties>
<java.version>11</java.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.keycloak.bom</groupId>
<artifactId>keycloak-adapter-bom</artifactId>
<version>13.0.1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<addResources>true</addResources>
</configuration>
</plugin>
<!-- Docker Spotify Plugin -->
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.4.13</version>
<executions>
<execution>
<id>default</id>
<goals>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<repository>skyglass/${project.name}</repository>
<tag>${project.version}</tag>
<skipDockerInfo>true</skipDockerInfo>
</configuration>
</plugin>
</plugins>
</build>
</project>
</code></pre>
<p><strong>Dockerfile:</strong></p>
<pre><code>FROM adoptopenjdk/openjdk11:alpine-jre
VOLUME /tmp
EXPOSE 8097
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
</code></pre>
<p><strong>customermgmt.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: customermgmt
  labels:
    app: customermgmt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customermgmt
  template:
    metadata:
      labels:
        app: customermgmt
    spec:
      containers:
      - name: customermgmt
        image: skyglass/customer-management-keycloak:1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8097
          hostPort: 8097
---
apiVersion: v1
kind: Service
metadata:
  name: customermgmt
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 8097
  selector:
    app: customermgmt
</code></pre>
<p><strong>traefik-ingress.yaml:</strong></p>
<pre><code>apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-customermgmt-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
  - host: "keycloak.skycomposer.net"
    http:
      paths:
      - path: "/customermgmt"
        backend:
          serviceName: "customermgmt"
          servicePort: 80
</code></pre>
| Mykhailo Skliar | <p>The following configuration in <strong>application.properties</strong> fixed the issue:</p>
<pre><code>server.servlet.context-path=/customermgmt
</code></pre>
<p>The <strong>path name</strong> should be equal to the <strong>ingress path</strong></p>
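<p>A quick way to verify the change after redeploying; a sketch, the exact page served depends on the application's controller mappings:</p>
<pre><code>curl -i https://keycloak.skycomposer.net/customermgmt/
</code></pre>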
<p>See the full code on my <strong>github</strong>:</p>
<p><strong><a href="https://github.com/skyglass-examples/customer-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/customer-management-keycloak</a></strong></p>
| Mykhailo Skliar |
<p>I have deployed the Bitnami helm chart of elasticsearch on the Kubernetes environment.</p>
<p><a href="https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch</a></p>
<p>Unfortunately, I am getting the following error for the coordinating-only pod. However, the cluster is restricted.</p>
<p><strong>Pods "elasticsearch-elasticsearch-coordinating-only-5b57786cf6-" is forbidden: unable to validate against any pod security policy:
[spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]; Deployment does not have minimum availability.</strong></p>
<p>Is there anything I need to adapt or add in the default <strong>values.yaml</strong>?</p>
<p>Any suggestion to get rid of this error?
Thanks.</p>
| kishorK | <p>Your pod can't be validated because your cluster is restricted with a pod security policy. In your situation someone (presumably an administrator) has blocked the option to run privileged containers for you.</p>
<p>Here's an example of how pod security policy blocks privileged containers:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
</code></pre>
<p>What is required for you is to have an appropriate <code>Role</code> that references a <code>PodSecurityPolicy</code> resource, and a <code>RoleBinding</code>, that will allow you to run privileged containers.</p>
<p>This is very well explained in kubernetes documentation at <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies" rel="nofollow noreferrer">Enabling pod security policy</a></p>
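<p>A minimal sketch of that RBAC wiring, assuming a permissive policy named <code>privileged</code> already exists in the cluster and the chart runs under the <code>default</code> service account (adjust both names to your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged-user            # hypothetical name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]        # the PSP that allows privileged pods
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-elasticsearch   # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged-user
subjects:
- kind: ServiceAccount
  name: default                        # or the chart's service account
  namespace: default
</code></pre>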
| acid_fuji |
<p>In <a href="https://github.com/behrangsa/hello-k8s/tree/stackoverflow" rel="nofollow noreferrer">this</a> project, let's say I set the value of <a href="https://github.com/behrangsa/hello-k8s/blob/stackoverflow/deployment/kustomize/base/virtualservice.yaml#L7" rel="nofollow noreferrer">hosts</a> to <code>hello</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-k8s
spec:
  hosts:
  - ???
  http:
  - name: hello-k8s
    match:
    - uri:
        prefix: /
    route:
    - destination:
        host: hello # <--------------
        port:
          number: 8080
</code></pre>
<p>After deploying the project to my local minikube:</p>
<ul>
<li>What domain name should I send my requests to? <code>hello.local</code>?</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ curl what.should.be.here:8080
</code></pre>
<ul>
<li>Is there a way to find it out using <code>kubectl</code> and introspection?</li>
</ul>
<hr />
<p><em>Update 1:</em></p>
<p>I changed the host to <code>hello.local</code>.</p>
<p>Then I verified that <code>istio</code> is enabled in <code>minikube</code>:</p>
<pre><code>β minikube addons list
|-----------------------------|----------|--------------|
| ADDON NAME | PROFILE | STATUS |
|-----------------------------|----------|--------------|
| ambassador | minikube | disabled |
| csi-hostpath-driver | minikube | disabled |
| dashboard | minikube | disabled |
| default-storageclass | minikube | enabled β |
| efk | minikube | disabled |
| freshpod | minikube | disabled |
| gcp-auth | minikube | disabled |
| gvisor | minikube | disabled |
| helm-tiller | minikube | disabled |
| ingress | minikube | disabled |
| ingress-dns | minikube | disabled |
| istio | minikube | enabled β |
| istio-provisioner | minikube | enabled β |
| kubevirt | minikube | disabled |
| logviewer | minikube | disabled |
| metallb | minikube | disabled |
| metrics-server | minikube | disabled |
| nvidia-driver-installer | minikube | disabled |
| nvidia-gpu-device-plugin | minikube | disabled |
| olm | minikube | disabled |
| pod-security-policy | minikube | disabled |
| registry | minikube | disabled |
| registry-aliases | minikube | disabled |
| registry-creds | minikube | disabled |
| storage-provisioner | minikube | enabled β |
| storage-provisioner-gluster | minikube | disabled |
| volumesnapshots | minikube | disabled |
|-----------------------------|----------|--------------|
</code></pre>
<p>I deployed the app and verified everything is working:</p>
<pre><code>β kubectl apply -k base/
service/hello-k8s unchanged
deployment.apps/hello-k8s unchanged
virtualservice.networking.istio.io/hello-k8s configured
β kubectl get deployments/hello-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
hello-k8s 1/1 1 1 20h
β kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-k8s-6d9cc7d655-plzs8 1/1 Running 0 20h
β kubectl get service/hello-k8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-k8s ClusterIP 10.98.144.226 <none> 8080/TCP 21h
β kubectl get virtualservice/hello-k8s
NAME GATEWAYS HOSTS AGE
hello-k8s ["hello.local"] 20h
β kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-8577c95547-6c9sk 1/1 Running 0 21h
istiod-6ccd677dc7-7cvr2 1/1 Running 0 21h
prometheus-7767dfd55-x2pv6 2/2 Running 0 21h
</code></pre>
<p>Not sure why I have to do this, but apparently it should be done:</p>
<pre><code>β kubectl label namespace default istio-injection=enabled --overwrite
namespace/default labeled
β kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
default Active 21h enabled
istio-operator Active 21h
istio-system Active 21h disabled
kube-node-lease Active 21h
kube-public Active 21h
kube-system Active 21h
playground Active 21h
</code></pre>
<p>I inspected the ip of <code>minikube</code> and added an entry to /etc/hosts for <code>hello.local</code>:</p>
<pre><code>β minikube ip
192.168.49.2
β tail -n 3 /etc/hosts
# Minikube
192.168.49.2 hello.local
</code></pre>
<p>Then I queried the port of <code>istio-ingressgateway</code> according to <a href="https://shekhargulati.com/2019/03/05/istio-minikube-tutorial-deploying-a-hello-world-example-application-with-istio-in-kubernetes/" rel="nofollow noreferrer">this blog post</a>:</p>
<pre><code>β kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
30616
</code></pre>
<p>Finally I sent a request to <code>hello.local:30616</code>, but the requests are not arriving at my app:</p>
<pre><code>β curl -iv hello.local:30616/hello
* Trying 192.168.49.2:30616...
* TCP_NODELAY set
* Connected to hello.local (192.168.49.2) port 30616 (#0)
> GET /hello HTTP/1.1
> Host: hello.local:30616
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< date: Thu, 25 Feb 2021 14:51:18 GMT
date: Thu, 25 Feb 2021 14:51:18 GMT
< server: istio-envoy
server: istio-envoy
< content-length: 0
content-length: 0
<
* Connection #0 to host hello.local left intact
</code></pre>
| Behrang Saeedzadeh | <p><strong>What domain name should I send my requests to? <code>hello.local</code>?</strong></p>
<p>The answer would be that you can use whatever domain you want, but you will be required to add this domain to <code>/etc/hosts</code>, resolving to the IP address of the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">istio gateway loadbalancer</a>. Thanks to that you will be able to use the domain instead of the ingress gateway IP.</p>
<p><strong>Moving on there are couple of things about hosts/host in Virtual Service to consider:</strong></p>
<p>If you set virtual service <code>hosts</code> value to just <code>hello</code> then as per documentation Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the <code>default</code> namespace with host <code>hello</code> will be interpreted as <code>hello.default.svc.cluster.local</code>.</p>
<p>You can also reference services along domains in hosts.</p>
<p><strong>So to avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.</strong></p>
<p>Now coming next to <code>destination.host</code>:</p>
<blockquote>
<p>The name of a service from the service registry. Service names are
looked up from the platformβs service registry (e.g., Kubernetes
services, Consul services, etc.) and from the hosts declared by
<a href="https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry" rel="nofollow noreferrer">ServiceEntry</a>.
Traffic forwarded to destinations that are not found in either of the
two, will be dropped.</p>
</blockquote>
<p>Here's also a <a href="https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b" rel="nofollow noreferrer">good example</a> of how to configure MetalLB as a load balancer and Istio in minikube. Having MetalLB will let your service automatically be configured as a LoadBalancer.</p>
<hr />
<p>After further checking into your case I figured that the problem might be related to your <a href="https://istio.io/latest/docs/reference/config/networking/gateway/." rel="nofollow noreferrer">gateway</a> not being used at all:</p>
<pre class="lang-sh prettyprint-override"><code>β kubectl get virtualservice/hello-k8s
NAME GATEWAYS HOSTS AGE
hello-k8s ["hello.local"] 20h ## Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.
</code></pre>
<blockquote>
<p>"Gateway describes a load balancer operating at the edge of the mesh
receiving incoming or outgoing HTTP/TCP connections. Using Istio
Gateway configuration you can bind VirtualService via
virtualservice.spec.gateways with specific gateways (allowing external
traffic or internal traffic inside istio service mesh) "The reserved
word mesh is used to imply all the sidecars in the mesh. When this
field is omitted, the default gateway (mesh) will be used"
<a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService</a></p>
</blockquote>
<p>Make sure you configure <code>Gateway</code> and <code>VirtualService</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway ### this should be verified with istio ingress gateway pod labels
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*" ### for specific hosts it can be f.e. hello.local
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-k8s
spec:
  hosts:
  - hello.local
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: hello-k8s.default.svc.cluster.local
        port:
          number: 8080
</code></pre>
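<p>With the <code>Gateway</code> applied and referenced from the <code>VirtualService</code>, the original test from the question should go through; a sketch, reusing the NodePort discovered above:</p>
<pre class="lang-sh prettyprint-override"><code>curl -i http://hello.local:30616/
</code></pre>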
| acid_fuji |
<p>I'm trying to use <code>Kubernetes</code> with <code>Docker</code>. My image runs with Docker. I have one master-node and two worker-nodes. I also created a local registry like this <code>$ docker run -d -p 5000:5000 --restart=always --name registry registry:2</code> and pushed my image into it. Everything worked fine so far.</p>
<p>I added <code>{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }</code> to the <code>daemon.json</code> file at <code>/etc/docker</code>. And I also changed the content of the <code>docker-file</code> at <code>/etc/default/</code>to <code>DOCKER_OPTS="--config-file=/etc/docker/daemon.json"</code>. I made the changes on all nodes and I restarted the docker daemon afterwards.</p>
<p>I am able to pull my image from every node with the following command: </p>
<p><code>sudo docker pull xxx.xxx.xxx.xxx:5000/helloworldimage</code></p>
<p>I try to create my container from the master node with the command below:</p>
<p><code>sudo kubectl run test --image xxx.xxx.xxx.xxx:5000/helloworldimage</code></p>
<p>Than I get the following error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-775f99f57-m9r4b to rpi-2
Normal BackOff 18s (x2 over 44s) kubelet, rpi-2 Back-off pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 18s (x2 over 44s) kubelet, rpi-2 Error: ImagePullBackOff
Normal Pulling 3s (x3 over 45s) kubelet, rpi-2 Pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Failed to pull image "xxx.xxx.xxx.xxx:5000/helloworldimage": rpc error: code = Unknown desc = failed to pull and unpack image "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to resolve reference "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to do request: Head https://xxx.xxx.xxx.xxx:5000/v2/helloworldimage/manifests/latest: http: server gave HTTP response to HTTPS client
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Error: ErrImagePull
</code></pre>
<p>This is the <code>docker</code> version I use:</p>
<pre><code>Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:37:22 2019
OS/Arch: linux/arm
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:31:17 2019
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
</code></pre>
<p>This is the <code>Kubernetes</code> version I use:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
</code></pre>
| angel285 | <p>It appears that in some situations the solution described <a href="https://github.com/distribution/distribution/issues/1874#issuecomment-468101614" rel="nofollow noreferrer">here</a> solves the problem (the resulting systemd drop-in is sketched after the steps):</p>
<ol>
<li><code>sudo systemctl edit docker</code></li>
<li>Add below lines:</li>
</ol>
<ul>
<li><code>[Service]</code></li>
<li><code>ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry registry:5000</code></li>
</ul>
<ol start="3">
<li><code>sudo systemctl daemon-reload</code></li>
<li><code>systemctl restart docker</code></li>
<li><code>systemctl status docker</code></li>
</ol>
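<p>For reference, a sketch of what those steps end up writing, using the registry address from the question; note the empty <code>ExecStart=</code> line, which systemd requires before redefining <code>ExecStart</code> in a drop-in:</p>
<pre><code># equivalent to `systemctl edit docker` and pasting the override
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry xxx.xxx.xxx.xxx:5000
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
</code></pre>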
| acid_fuji |
<p>I have successfully exposed two microservices on <strong>AWS</strong> with <strong>Traefik Ingress Controller</strong> and <strong>AWS HTTPS Load Balancer</strong> on my registered domain.</p>
<p>Here is the source code:
<a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></p>
<p>I can easily access both microservices with https url:</p>
<pre><code>https://users.skycomposer.net/usermgmt/swagger-ui/index.html
https://users.skycomposer.net/whoami
</code></pre>
<p>So, it seems that <strong>Traefik Ingress Controller</strong> and <strong>AWS HTTPS Load Balancer</strong> configured correctly.</p>
<p>Unfortunately, <strong>Keycloak Server</strong> doesn't work in this environment.
When I try to access it by https url:</p>
<pre><code>https://users.skycomposer.net/keycloak
</code></pre>
<p>I receive the following response:</p>
<pre><code>404 page not found
</code></pre>
<p>Do I miss something in my configuration?</p>
<p>Here are some <strong>keycloak kubernetes manifests</strong>, which I use:</p>
<p><strong>keycloak-config.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: keycloak
data:
KEYCLOAK_USER: admin@keycloak
KEYCLOAK_MGMT_USER: mgmt@keycloak
JAVA_OPTS_APPEND: '-Djboss.bind.address.management=0.0.0.0'
PROXY_ADDRESS_FORWARDING: 'true'
KEYCLOAK_LOGLEVEL: INFO
ROOT_LOGLEVEL: INFO
DB_VENDOR: H2
</code></pre>
<p><strong>keycloak-deployment.yaml:</strong></p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: keycloak
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: jboss/keycloak:12.0.4
imagePullPolicy: Always
ports:
- containerPort: 9990
hostPort: 9990
volumeMounts:
- name: keycloak-data
mountPath: /opt/jboss/keycloak/standalone/data
env:
- name: KEYCLOAK_USER
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_USER
- name: KEYCLOAK_MGMT_USER
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_MGMT_USER
- name: JAVA_OPTS_APPEND
valueFrom:
configMapKeyRef:
name: keycloak
key: JAVA_OPTS_APPEND
- name: DB_VENDOR
valueFrom:
configMapKeyRef:
name: keycloak
key: DB_VENDOR
- name: PROXY_ADDRESS_FORWARDING
valueFrom:
configMapKeyRef:
name: keycloak
key: PROXY_ADDRESS_FORWARDING
- name: KEYCLOAK_LOGLEVEL
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_LOGLEVEL
- name: ROOT_LOGLEVEL
valueFrom:
configMapKeyRef:
name: keycloak
key: ROOT_LOGLEVEL
- name: KEYCLOAK_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak
key: KEYCLOAK_PASSWORD
- name: KEYCLOAK_MGMT_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak
key: KEYCLOAK_MGMT_PASSWORD
volumes:
- name: keycloak-data
persistentVolumeClaim:
claimName: keycloak-pvc
</code></pre>
<p><strong>keycloak-service.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
spec:
ports:
- protocol: TCP
name: web
port: 80
targetPort: 9990
selector:
app: keycloak
</code></pre>
<p><strong>traefik-ingress.yaml:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
name: traefik-lb
spec:
controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-usermgmt-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/usermgmt"
backend:
serviceName: "usermgmt"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-whoami-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/whoami"
backend:
serviceName: "whoami"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-keycloak-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/keycloak"
backend:
serviceName: "keycloak"
servicePort: 80
</code></pre>
<p>See all other files on my <strong>github</strong>: <a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></p>
<p>I also checked the logs for keycloak pod, running on my K3S Kubernetes Cluster:</p>
<pre><code>20:57:34,147 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started in 43054ms - Started 687 of 972 services (687 services are lazy, passive or on-demand)
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
</code></pre>
<p>Everything seems to be fine, Admin console is listening on <a href="http://127.0.0.1:9990" rel="nofollow noreferrer">http://127.0.0.1:9990</a></p>
<p>I also tried using <strong>9990</strong> target port in deployment and service manifests, instead of <strong>8080</strong>, but still the same result.</p>
| Mykhailo Skliar | <p>Finally solved the issue.</p>
<p>The following configuration is required to run <strong>keycloak</strong> behind <strong>traefik</strong>:</p>
<pre><code> PROXY_ADDRESS_FORWARDING=true
KEYCLOAK_HOSTNAME=${YOUR_KEYCLOAK_HOSTNAME}
</code></pre>
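<p>In the setup from the question this can be wired through the existing <code>keycloak</code> ConfigMap (and referenced from the Deployment env like the other keys); a sketch, assuming the externally visible hostname is the host used in the ingress rule:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak
data:
  # ...the other keys from the original ConfigMap stay as they are
  PROXY_ADDRESS_FORWARDING: 'true'
  KEYCLOAK_HOSTNAME: keycloak.skycomposer.net   # assumption: the ingress host
</code></pre>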
<p>Also, I had to use the root path "<strong>/</strong>" for the ingress rule:</p>
<pre><code>apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-keycloak-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
  - host: "keycloak.skycomposer.net"
    http:
      paths:
      - path: "/"
        backend:
          serviceName: "keycloak"
          servicePort: 80
</code></pre>
<p>Here, you can find other configuration properties, which you might find useful:
<a href="https://github.com/Artiume/docker/blob/master/traefik-SSO.yml" rel="nofollow noreferrer">https://github.com/Artiume/docker/blob/master/traefik-SSO.yml</a></p>
<p>Believe it or not, this is the only resource on the internet that mentioned <strong>KEYCLOAK_HOSTNAME</strong> as the fix for my problem. Two days of searching for the keyword "<strong>keycloak traefik 404</strong>" and no results!</p>
<p>You can find the full fixed code, with correct configuration, on my github:
<strong><a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></strong></p>
| Mykhailo Skliar |
<p>I have successfully exposed two microservices on <strong>AWS</strong> with <strong>Traefik Ingress Controller</strong> and <strong>AWS HTTPS Load Balancer</strong> on my registered domain.</p>
<p>Here is the source code:
<a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></p>
<p>I can easily access both microservices with https url:</p>
<pre><code>https://users.skycomposer.net/usermgmt/swagger-ui/index.html
https://users.skycomposer.net/whoami
</code></pre>
<p>So, it seems that <strong>Traefik Ingress Controller</strong> and <strong>AWS HTTPS Load Balancer</strong> configured correctly.</p>
<p>Unfortunately, <strong>Keycloak Server</strong> doesn't work in this environment.
When I try to access it by https url:</p>
<pre><code>https://users.skycomposer.net/keycloak
</code></pre>
<p>I receive the following response:</p>
<pre><code>404 page not found
</code></pre>
<p>Do I miss something in my configuration?</p>
<p>Here are some <strong>keycloak kubernetes manifests</strong>, which I use:</p>
<p><strong>keycloak-config.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: keycloak
data:
KEYCLOAK_USER: admin@keycloak
KEYCLOAK_MGMT_USER: mgmt@keycloak
JAVA_OPTS_APPEND: '-Djboss.bind.address.management=0.0.0.0'
PROXY_ADDRESS_FORWARDING: 'true'
KEYCLOAK_LOGLEVEL: INFO
ROOT_LOGLEVEL: INFO
DB_VENDOR: H2
</code></pre>
<p><strong>keycloak-deployment.yaml:</strong></p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: keycloak
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: jboss/keycloak:12.0.4
imagePullPolicy: Always
ports:
- containerPort: 9990
hostPort: 9990
volumeMounts:
- name: keycloak-data
mountPath: /opt/jboss/keycloak/standalone/data
env:
- name: KEYCLOAK_USER
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_USER
- name: KEYCLOAK_MGMT_USER
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_MGMT_USER
- name: JAVA_OPTS_APPEND
valueFrom:
configMapKeyRef:
name: keycloak
key: JAVA_OPTS_APPEND
- name: DB_VENDOR
valueFrom:
configMapKeyRef:
name: keycloak
key: DB_VENDOR
- name: PROXY_ADDRESS_FORWARDING
valueFrom:
configMapKeyRef:
name: keycloak
key: PROXY_ADDRESS_FORWARDING
- name: KEYCLOAK_LOGLEVEL
valueFrom:
configMapKeyRef:
name: keycloak
key: KEYCLOAK_LOGLEVEL
- name: ROOT_LOGLEVEL
valueFrom:
configMapKeyRef:
name: keycloak
key: ROOT_LOGLEVEL
- name: KEYCLOAK_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak
key: KEYCLOAK_PASSWORD
- name: KEYCLOAK_MGMT_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak
key: KEYCLOAK_MGMT_PASSWORD
volumes:
- name: keycloak-data
persistentVolumeClaim:
claimName: keycloak-pvc
</code></pre>
<p><strong>keycloak-service.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
spec:
ports:
- protocol: TCP
name: web
port: 80
targetPort: 9990
selector:
app: keycloak
</code></pre>
<p><strong>traefik-ingress.yaml:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
name: traefik-lb
spec:
controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-usermgmt-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/usermgmt"
backend:
serviceName: "usermgmt"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-whoami-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/whoami"
backend:
serviceName: "whoami"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-keycloak-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/keycloak"
backend:
serviceName: "keycloak"
servicePort: 80
</code></pre>
<p>See all other files on my <strong>github</strong>: <a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></p>
<p>I also checked the logs for keycloak pod, running on my K3S Kubernetes Cluster:</p>
<pre><code>20:57:34,147 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started in 43054ms - Started 687 of 972 services (687 services are lazy, passive or on-demand)
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
</code></pre>
<p>Everything seems to be fine, Admin console is listening on <a href="http://127.0.0.1:9990" rel="nofollow noreferrer">http://127.0.0.1:9990</a></p>
<p>I also tried using <strong>9990</strong> target port in deployment and service manifests, instead of <strong>8080</strong>, but still the same result.</p>
| Mykhailo Skliar | <p>I have found one small workaround, but unfortunately, this is not the best solution for me.</p>
<p>I forwarded the port:</p>
<pre><code>kubectl port-forward --address 0.0.0.0 service/keycloak 32080:http
</code></pre>
<p>Now Keycloak Server is available on:</p>
<pre><code>http://localhost:32080/auth/
</code></pre>
<p>But how to make it available externally by this url ?</p>
<pre><code>https://keycloak.skycomposer.net/keycloak/auth
</code></pre>
<p>It is still not clear to me, why the keycloak is not visible from the outside, with my current configuration.</p>
| Mykhailo Skliar |
<p>I am attempting to assign a static global LB IP in GCP to an ingress controller (Ingress-Nginx) in GKE but whenever I do so I get an error message regarding</p>
<pre><code>Error syncing load balancer: failed to ensure load balancer: requested ip "<IP_ADDRESS>" is neither static nor assigned to the LB
</code></pre>
<p>If I attempt to set that LB IP value to a static regional IP that is in the same region as my cluster it works. If I don't pass in a LB IP value that also works and an IP is assigned to me.</p>
<p>I am using the command</p>
<p><code>helm install ingress-nginx --set controller.service.loadBalancerIP="<IP_ADDRESS>" ingress-nginx/ingress-nginx</code></p>
<p>Is there an additional <code>set</code> flag that I am not using or does my GKE cluster need to be in more than 1 region?</p>
| rk92 | <p>This was due to assigning a global IP to the ingress controller. It should use a regional IP instead of a global one.</p>
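<p>A sketch of creating a matching regional address and passing it to the chart; the region and address name are placeholders:</p>
<pre><code>gcloud compute addresses create ingress-nginx-ip --region us-central1
gcloud compute addresses describe ingress-nginx-ip --region us-central1 --format='value(address)'

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.loadBalancerIP="<REGIONAL_IP_ADDRESS>"
</code></pre>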
| rk92 |
<p>I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service.</p>
<p>My testing setup is a single Kubernetes node, with a deployment of two nginx pods.
The deployment is then exposed with a NodePort service.</p>
<p>I've then deployed an ingress contoller (I've tried both <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Kubernetes Nginx Ingress Controller</a> and <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Kubernetes Ingress Controller</a>, separately) and created an ingress rule for the NodePort service.</p>
<p>I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then <code>curl</code>s the NodePort service 100 times. It <code>grep</code>s "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end.</p>
<p>As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods.</p>
<p>Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well.</p>
<p><strong>However</strong>, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please?</p>
<p>I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.</p>
| hiiamelliott | <p>Looks like there are 2 versions of ingress controller.</p>
<ol>
<li><p>Which K8S community has been maintaining which is <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
</li>
<li><p>Which Nginx is maintaining(Opensource & Paid): <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress</a></p>
</li>
</ol>
<p>The second one seems to be doing strict round robin (still testing) after adding <code>nginx.org/lb-method: "round_robin"</code>, while the first one does 50:50 aggregate load balancing between replicas; an example of the annotation on an Ingress is sketched below.</p>
<p>In my opinion it's an important difference, but with a lot of confusion around the names; the difference between them can be read <a href="https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options/" rel="nofollow noreferrer">here</a></p>
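<p>A minimal sketch of where that annotation goes; the names and host are placeholders, and the annotation is read by the NGINX Inc. controller, not the community one:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress                     # placeholder
  annotations:
    nginx.org/lb-method: "round_robin"
spec:
  ingressClassName: nginx
  rules:
  - host: echo.example.com               # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-svc               # placeholder
            port:
              number: 80
</code></pre>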
<p>I composed this answer with help of comments from @hiiamelliott...</p>
| Laukik |
<p>I have created EKS cluster with Fargate. I deployed two microservices. Everything is working properly with ingress and two separate application load balancers. I am trying to create ingress with one alb which will route the traffic to the services. The potential problem is that both services use the same port (8080). How to create ingress for this problem? Also I have got a registered domain in route 53.</p>
| Bakula33 | <p>You can have a common ALB for your services running inside EKS, even if they use the same port; you can associate it with different listener rules on ALB based on path.</p>
<p>If you are using an ingress controller, your ingress can be configured to handle the creation of these different listener rules.</p>
<p>For eg. if you are using aws alb ingress controller, you can have one common alb and then create ingresses with annotation:</p>
<p><code>alb.ingress.kubernetes.io/group.name: my-group</code></p>
<p>All ingresses part of this group will be under the same alb associated to the group.</p>
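<p>A sketch of one such Ingress for one of the services; the names, host and paths below are placeholders, and <code>target-type: ip</code> is needed because the pods run on Fargate:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a                                    # placeholder
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: my-group   # shared by both services' ingresses
    alb.ingress.kubernetes.io/target-type: ip        # required for Fargate pods
spec:
  rules:
  - host: app.example.com                            # placeholder for the Route 53 domain
    http:
      paths:
      - path: /service-a
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 8080
</code></pre>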
<p>Check out <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">userguide-alb-ingress</a> for more info</p>
| Kaish kugashia |
<p>Today my kubernetes cluster(v1.15.2) node disk full and cause pods give this tips:</p>
<pre><code>Update plugin resources failed due to failed to write checkpoint file "kubelet_internal_checkpoint": write /var/lib/kubelet/device-plugins/.261578065: no space left on device, which is unexpected.
MountVolume.SetUp failed for volume "default-token-xnrwt" : mkdir /opt/k8s/k8s/kubelet/pods/67eaa71b-adf4-4365-a1c7-42045d5e9426: no space left on device
</code></pre>
<p>I logged into the server and found the disk usage was 100%, so I removed some log files and released 10GB+ of disk space, but now it seems the pod does not recover automatically and still shows this error:</p>
<p><a href="https://i.stack.imgur.com/bTlcL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bTlcL.png" alt="enter image description here"></a></p>
<p>What should I do to fix this problem? I tried to restart all pods, and all pods work fine. But I found that the error message still says no space left and does not disappear automatically. I checked the node status and found the node has no disk pressure. How do I make the error message disappear?</p>
| Dolphin | <p>Another possibility is incorrect unit values for resource requests/limits (e.g., using <code>mi</code> instead of <code>Mi</code>).</p>
<p>For example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
spec:
  containers:
  - name: {container_name}
    resources:
      limits:
        memory: "512mi" # incorrect; should be "512Mi"
        cpu: "200m"
</code></pre>
| v1d3rm3 |
<p>I have a number of <code>.yml</code> files with deployments and their services and need to set <code>.spec.progressDeadlineSeconds</code> to all of them preferrably from one common place.</p>
| noname7619 | <p>I would recommend checking out <a href="https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#field-name-commonannotations" rel="nofollow noreferrer">kustomize</a> for doing tasks like this. From the limited information it sounds like <code>commonAnnotations</code> might do the trick for your use case, but if not there are other transformers that could like <code>Patch</code> or <code>PatchStrategicMerge</code>.</p>
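<p>A sketch of what that could look like with a single patch applied to every Deployment; the file names are placeholders:</p>
<pre class="lang-yaml prettyprint-override"><code># kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment-a.yaml      # placeholder file names
- deployment-b.yaml
patches:
- target:
    kind: Deployment     # matches every Deployment listed in resources
  patch: |-
    - op: add
      path: /spec/progressDeadlineSeconds
      value: 600
</code></pre>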
| BasicExp |
<p>I am trying to install my rancher(RKE) kubernetes cluster bitnami/mongodb-shared . But I couldn't create a valid PV for this helm chart.</p>
<p>The error that I am getting:
no persistent volumes available for this claim and no storage class is set</p>
<p>This is the helm chart documentation section about PersistenceVolume: <a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded/#persistence" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded/#persistence</a></p>
<p>This is the StorageClass and PersistentVolume yamls that I created for this helm chart PVCs':</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-nfs-storage
provisioner: nope
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: db-nfs
spec:
  storageClassName: ssd-nfs-storage # same storage class as pvc
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 142.251.33.78 # ip address of nfs server
    path: "/bitnami/mongodb" # path to directory
</code></pre>
<p>This is the PVC yaml that created by the helm chart:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2021-06-06T17:50:40Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app.kubernetes.io/component: shardsvr
app.kubernetes.io/instance: sam-db
app.kubernetes.io/name: mongodb-sharded
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/instance: {}
f:app.kubernetes.io/name: {}
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2021-06-06T17:50:40Z"
name: datadir-sam-db-mongodb-sharded-shard1-data-0
namespace: default
resourceVersion: "960381"
uid: c4313ed9-cc99-42e9-a64f-82bea8196629
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
</code></pre>
<p>Can you tell me what I am missing?</p>
| noway | <p>I am giving the <code>bitnami/mongodb-sharded</code> installation instructions with an NFS server on Rancher (v2.5.8).</p>
<p>I have three CentOS 8 VMs: one NFS server (let's say 1.1.1.1) and two k8s nodes (let's say 8.8.8.8 and 9.9.9.9) in the k8s cluster. I am using RKE (aka Rancher Kubernetes Engine).</p>
<ol>
<li>We will create a NFS server</li>
<li>We will bind the nodes to the NFS server</li>
<li>We will add <code>nfs-subdir-external-provisioner</code> HELM repository to the Rancher Chart Repositories</li>
<li>We will install <code>nfs-subdir-external-provisioner</code> via rancher charts</li>
<li>We will add <code>bitnami</code> HELM repo to the Rancher Chart Repositories</li>
<li>We will install <code>mongodb-sharded</code> via Rancher charts</li>
</ol>
<hr />
<ol>
<li>Create a NFS server</li>
</ol>
<pre><code># nfs server install
dnf install nfs-utils -y
systemctl start nfs-server.service
systemctl enable nfs-server.service
systemctl status nfs-server.service
# you can verify the version
rpcinfo -p | grep nfs
# nfs daemon config: /etc/nfs.conf
# nfs mount config: /etc/nfsmount.conf
mkdir /mnt/storage
# allows creation from client
# for mongodb-sharded: /mnt/storage
chown -R nobody: /mnt/storage
chmod -R 777 /mnt/storage
# restart service again
systemctl restart nfs-utils.service
# grant access to the client
vi /etc/exports
/mnt/storage 8.8.8.8(rw,sync,no_all_squash,root_squash)
/mnt/storage 9.9.9.9(rw,sync,no_all_squash,root_squash)
# check exporting
exportfs -arv
exportfs -s
# exporting 8.8.8.8:/mnt/storage
# exporting 9.9.9.9:/mnt/storage
</code></pre>
<hr />
<ol start="2">
<li>Bind the k8s nodes to the NFS server</li>
</ol>
<pre><code># nfs client install
dnf install nfs-utils nfs4-acl-tools -y
# see from the client shared folder
showmount -e 1.1.1.1
# create mounting folder for client
mkdir /mnt/cstorage
# mount server folder to the client folder
mount -t nfs 1.1.1.1:/mnt/storage /mnt/cstorage
# check mounted folder vis nfs
mount | grep -i nfs
# mount persistent upon a reboot
vi /etc/fstab
# add following codes
1.1.1.1:/mnt/storage /mnt/cstorage nfs defaults 0 0
# all done
</code></pre>
<p><strong>Bonus:</strong> Unbind nodes.</p>
<pre><code># un mount and delete from client
umount -f -l /mnt/cstorage
rm -rf /mnt/cstorage
# delete added volume from fstab
vi /etc/fstab
</code></pre>
<hr />
<ol start="3">
<li>Add nfs-subdir-external-provisioner helm repository</li>
</ol>
<p><strong>Helm Repository URL:</strong> <code>https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/</code></p>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Chart Repositories --></li>
<li>Create --></li>
<li>Add the URL like in <a href="https://i.stack.imgur.com/WQFAa.png" rel="nofollow noreferrer">this screenshot</a> --></li>
<li>Save --></li>
</ul>
<hr />
<ol start="4">
<li>Install <code>nfs-subdir-external-provisioner</code> via Charts</li>
</ol>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Charts --></li>
<li><a href="https://i.stack.imgur.com/5gNLH.png" rel="nofollow noreferrer">find nfs-subdir-external-provisioner chart</a> --></li>
<li>Select --></li>
<li>Give a name(like nfs-pr) --></li>
<li>Select Values YAML --></li>
<li><a href="https://i.stack.imgur.com/zKOvQ.png" rel="nofollow noreferrer">set path, server ip and StorageClass name(we will use this class name later)</a> --></li>
<li>Install --></li>
</ul>
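<p>If you prefer the command line over the Rancher UI, roughly the same install can be done with plain Helm. This is only a sketch; the release name, server IP, export path and StorageClass name mirror the values used in these steps:</p>
<pre><code># add the chart repo and install the provisioner (assumption: helm 3 is available)
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-pr nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=1.1.1.1 \
  --set nfs.path=/mnt/storage \
  --set storageClass.name=nfs-client
</code></pre>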
<hr />
<ol start="5">
<li>Add <code>bitnami</code> HELM repo to the Rancher Chart Repositories</li>
</ol>
<p>Bitnami HELM URL: <code>https://charts.bitnami.com/bitnami</code></p>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Chart Repositories --></li>
<li>Create --></li>
<li>Add url like step 3's screenshot --></li>
<li>Save --></li>
</ul>
<hr />
<ol start="6">
<li>Install <code>mongodb-sharded</code> via Rancher Charts</li>
</ol>
<ul>
<li><p>Rancher --></p>
</li>
<li><p>Cluster Explorer --></p>
</li>
<li><p>Apps & Marketplace</p>
</li>
<li><p>Charts --></p>
</li>
<li><p>Find <code>mongodb-sharded</code> --></p>
</li>
<li><p>Select --></p>
</li>
<li><p>Give a name(my-db) --></p>
</li>
<li><p>Select Values YAML --></p>
</li>
<li><p><a href="https://i.stack.imgur.com/IpCKF.png" rel="nofollow noreferrer">Add global.storageClassname: nfs-client</a>(we set this value step 5) --></p>
</li>
<li><p>Install</p>
</li>
</ul>
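<p>For reference, the equivalent Helm CLI install would look roughly like this (a sketch; note that in current Bitnami charts the value is usually called <code>global.storageClass</code>, so double-check the chart version you install):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/mongodb-sharded \
  --set global.storageClass=nfs-client
</code></pre>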
| noway |
<p>I am trying to setup a basic k8s cluster </p>
<p>After doing a kubeadm init --pod-network-cidr=10.244.0.0/16, the coredns pods are stuck in ContainerCreating status</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-6955765f44-2cnhj 0/1 ContainerCreating 0 43h
coredns-6955765f44-dnphb 0/1 ContainerCreating 0 43h
etcd-perf1 1/1 Running 0 43h
kube-apiserver-perf1 1/1 Running 0 43h
kube-controller-manager-perf1 1/1 Running 0 43h
kube-flannel-ds-amd64-smpbk 1/1 Running 0 43h
kube-proxy-6zgvn 1/1 Running 0 43h
kube-scheduler-perf1 1/1 Running 0 43h
</code></pre>
<p>OS-IMAGE: Ubuntu 16.04.6 LTS
KERNEL-VERSION: 4.4.0-142-generic
CONTAINER-RUNTIME: docker://19.3.5</p>
<p>Errors from journalctl -xeu kubelet command</p>
<pre><code>Jan 02 10:31:44 perf1 kubelet[11901]: 2020-01-02 10:31:44.112 [INFO][10207] k8s.go 228: Using Calico IPAM
Jan 02 10:31:44 perf1 kubelet[11901]: E0102 10:31:44.118281 11901 cni.go:385] Error deleting kube-system_coredns-6955765f44-2cnhj/12cd9435dc905c026bbdb4a1954fc36c82ede1d703b040a3052ab3370445abbf from
Jan 02 10:31:44 perf1 kubelet[11901]: E0102 10:31:44.118828 11901 remote_runtime.go:128] StopPodSandbox "12cd9435dc905c026bbdb4a1954fc36c82ede1d703b040a3052ab3370445abbf" from runtime service failed:
Jan 02 10:31:44 perf1 kubelet[11901]: E0102 10:31:44.118872 11901 kuberuntime_manager.go:898] Failed to stop sandbox {"docker" "12cd9435dc905c026bbdb4a1954fc36c82ede1d703b040a3052ab3370445abbf"}
Jan 02 10:31:44 perf1 kubelet[11901]: E0102 10:31:44.118917 11901 kuberuntime_manager.go:676] killPodWithSyncResult failed: failed to "KillPodSandbox" for "e44bc42f-0b8d-40ad-82a9-334a1b1c8e40" with
Jan 02 10:31:44 perf1 kubelet[11901]: E0102 10:31:44.118939 11901 pod_workers.go:191] Error syncing pod e44bc42f-0b8d-40ad-82a9-334a1b1c8e40 ("coredns-6955765f44-2cnhj_kube-system(e44bc42f-0b8d-40ad-
Jan 02 10:31:47 perf1 kubelet[11901]: W0102 10:31:47.081709 11901 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "747c3cc9455a7d
Jan 02 10:31:47 perf1 kubelet[11901]: 2020-01-02 10:31:47.113 [INFO][10267] k8s.go 228: Using Calico IPAM
Jan 02 10:31:47 perf1 kubelet[11901]: E0102 10:31:47.118526 11901 cni.go:385] Error deleting kube-system_coredns-6955765f44-dnphb/747c3cc9455a7db202ab14576d15509d8ef6967c6349e9acbeff2207914d3d53 from
Jan 02 10:31:47 perf1 kubelet[11901]: E0102 10:31:47.119017 11901 remote_runtime.go:128] StopPodSandbox "747c3cc9455a7db202ab14576d15509d8ef6967c6349e9acbeff2207914d3d53" from runtime service failed:
Jan 02 10:31:47 perf1 kubelet[11901]: E0102 10:31:47.119052 11901 kuberuntime_manager.go:898] Failed to stop sandbox {"docker" "747c3cc9455a7db202ab14576d15509d8ef6967c6349e9acbeff2207914d3d53"}
Jan 02 10:31:47 perf1 kubelet[11901]: E0102 10:31:47.119098 11901 kuberuntime_manager.go:676] killPodWithSyncResult failed: failed to "KillPodSandbox" for "52ffb25e-06c7-4cc6-be70-540049a6be20" with
Jan 02 10:31:47 perf1 kubelet[11901]: E0102 10:31:47.119119 11901 pod_workers.go:191] Error syncing pod 52ffb25e-06c7-4cc6-be70-540049a6be20 ("coredns-6955765f44-dnphb_kube-system(52ffb25e-06c7-4cc6-
</code></pre>
<p>I have tried <code>kubeadm reset</code> as well but no luck so far.</p>
| Harish Gupta | <p>It looks like the issue was caused by my earlier switch of the CNI from Calico to Flannel. Following the steps mentioned here resolved the issue for me:</p>
<p><a href="https://stackoverflow.com/questions/53900779/pods-failed-to-start-after-switch-cni-plugin-from-flannel-to-calico-and-then-fla">Pods failed to start after switch cni plugin from flannel to calico and then flannel</a></p>
<p>Additionally you may have to clear the contents of <code>/etc/cni/net.d</code></p>
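<p>A minimal sketch of that cleanup, assuming Flannel is the CNI you want to keep (run on every node):</p>
<pre><code># inspect what is there first
ls /etc/cni/net.d
# remove the leftover calico config files (or move them aside if unsure)
sudo rm /etc/cni/net.d/*calico*
sudo systemctl restart kubelet
</code></pre>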
| Harish Gupta |
<p>I am trying to do an <a href="https://rancher.com/docs/k3s/latest/en/installation/airgap/#install-options" rel="nofollow noreferrer">offline setup of k3s</a>, i.e. without internet connectivity, for the <em>Single Server Configuration</em> using the steps below, but at the end the k3s service status is <code>loaded</code> instead of <code>active</code> and the <code>default/kube-system</code> pods are not coming up.</p>
<p>I Downloaded k3s binary from <a href="https://github.com/k3s-io/k3s/releases/tag/v1.22.3+k3s1" rel="nofollow noreferrer">Assets</a> and <a href="https://get.k3s.io/" rel="nofollow noreferrer">install.sh script</a>, then:</p>
<ol>
<li><code>cp /home/my-ubuntu/k3s /usr/local/bin/</code></li>
<li><code>cd /usr/local/bin/</code></li>
<li><code>chmod 770 k3s</code> - giving executable rights to k3s binary</li>
<li>placed <a href="https://github.com/k3s-io/k3s/releases/download/v1.22.3%2Bk3s1/k3s-airgap-images-amd64.tar" rel="nofollow noreferrer">airgap-images-amd64.tar</a> image at <code>/var/lib/rancher/k3s/agent/images/</code></li>
<li><code>mkdir /etc/rancher/k3s</code></li>
<li><code>cp /home/my-ubuntu/k3s.yaml /etc/rancher/k3s</code> - copying the k3s config file from a different machine (because when I tried without a config file, I can't set the export variable (Step 7) & can't get to see the default pods with <code>kubectl get all -A</code>). I think I am mistaken at this step, please confirm.</li>
<li><code>chmod 770 /etc/rancher/k3s/k3s.yaml</code></li>
<li><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml</code> - setting KUBECONFIG env. variable</li>
<li><code>INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh</code></li>
</ol>
<p>Error in <code>journalctl -xe</code>:</p>
<pre class="lang-text prettyprint-override"><code> -- Unit k3s.service has begun starting up.
Nov 09 19:11:51 my-ubuntu sh[14683]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Nov 09 19:11:51 my-ubuntu sh[14683]: /bin/sh: 1: /usr/bin/systemctl: not found
Nov 09 19:11:51 my-ubuntu k3s[14695]: time="2021-11-09T19:11:51.488895919+05:30" level=fatal msg="no default routes found in \"/proc/net/route\" or \"/proc/net/ipv6_route\""
Nov 09 19:11:51 my-ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 09 19:11:51 my-ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 09 19:11:51 my-ubuntu systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: Unit k3s.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Unit k3s.service has failed.
-- The result is RESULT.
Nov 09 19:11:56 shreya-Virtual-Machine systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Nov 09 19:11:56 shreya-Virtual-Machine systemd[1]: k3s.service: Scheduled restart job, restart counter is at 20.
-- Subject: Automatic restarting of a unit has been scheduled
</code></pre>
<p><strong>PS:</strong> At this stage, the moment I connect this machine to the internet, the default pods below start appearing in the output of <code>kubectl get all -A</code>:</p>
<pre class="lang-text prettyprint-override"><code>NAMESPACE NAME READY
kube-system metrics-server-86cbb8457f-rthkn 1/1
kube-system helm-install-traefik-crd-w6wgf 1/1
kube-system helm-install-traefik-m7lkg 1/1
kube-system svclb-traefik-x6qbc 2/2
kube-system traefik-97b44b794-98nkl 1/1
kube-system local-path-provisioner-5ff76fc89d-l8825 1/1
kube-system coredns-7448499f4d-br6tm 1/1
</code></pre>
<p><strong>My aim is simply to install k3s without internet connectivity (offline) and get all these pods up and running.</strong> Please let me know what I am missing here.</p>
| Thor | <p>There is an <a href="https://github.com/k3s-io/k3s/issues/1103" rel="nofollow noreferrer">open issue</a> for offline installation - a default gateway needs to be set.</p>
<p>Follow <a href="https://github.com/k3s-io/k3s/issues/1144#issuecomment-559316077" rel="nofollow noreferrer">this comment</a>; it should work.</p>
<blockquote>
<p>[aiops@7 ~]$ ip route<br/>
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1<br/>
192.168.100.0/24 dev ens192 proto kernel scope link src 192.168.100.7 metric 100<br/>
sudo ip route add default via 192.168.100.1</p>
</blockquote>
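<p>A sketch of applying that workaround and verifying it (the gateway IP is whatever makes sense on your subnet; as discussed in the linked issue, the route just needs to exist):</p>
<pre><code>sudo ip route add default via <your-gateway-ip>
sudo systemctl restart k3s
systemctl status k3s
kubectl get pods -A
</code></pre>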
| gaurav sinha |
<p>I have a bare-metal Kubernetes cluster, and I want a logical separation of nodes where the DEV environment runs on less powerful machines while the production environment runs on the most powerful machines. I implemented the PodNodeSelector admission controller. It works well for newly created namespaces, but when I edit an existing namespace and add the annotation to it, the existing pods of a StatefulSet do not move to the specified powerful machines; they just keep running. How can I apply the new annotation and have it take effect on the StatefulSet without recreating or deleting the pods/StatefulSets? I can't afford downtime on Elassandra.</p>
| Jagi Sarcilla | <p>You cannot. This is how it works at the moment. Once a pod is scheduled on a node it stays on a node.</p>
<p>Same applies to affinity/antiaffinity.</p>
<p>But in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">k8s docs</a> you can read:</p>
<blockquote>
<p>In the future we plan to offer
<strong>requiredDuringSchedulingRequiredDuringExecution</strong> which will be just
like requiredDuringSchedulingIgnoredDuringExecution except that it
will evict pods from nodes that cease to satisfy the pods' node
affinity requirements.</p>
</blockquote>
<p>And this is what you are looking for, but it is not implemented yet.
It looks like deleting the pods to force them to reschedule is the only option available at the moment.</p>
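<p>A sketch of doing that for a StatefulSet without full downtime (pod and namespace names are placeholders): delete the pods one ordinal at a time and let the controller recreate them, so the scheduler re-evaluates the new PodNodeSelector annotation on each reschedule.</p>
<pre><code>kubectl -n <namespace> delete pod <statefulset-name>-2
# wait until the pod is Running and Ready again before deleting the next ordinal
kubectl -n <namespace> get pod <statefulset-name>-2 -w
</code></pre>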
| Matt |
<p>I'm using the following code:</p>
<pre><code>kubeconfig = resolveKubeconfig(kubeconfig)
if kubeconfig == "" {
return nil, fmt.Errorf("%s: %w", msgCouldNotCreateClientConfig, ErrLocatingKubeconfig)
}
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
</code></pre>
<p>Now I use unstructured objects to apply resources to the cluster (5 simple deployments).</p>
<p>And I saw that the apply takes a long time, as if throttling kicks in.</p>
<p>When I change the config like this:</p>
<pre><code>config.QPS = 250
</code></pre>
<p>it works better and I don't face any issue.</p>
<p>I understand that it's related to the rate limit of the k8s client against the API server.</p>
<p>Two questions:</p>
<ol>
<li>Why does it happen if I don't apply that many resources?</li>
<li>If I increase the value, is it OK or can it cause issues?</li>
</ol>
<p>Should I also use Burst?</p>
<p>I didn't find a lot of docs on QPS and Burst ...</p>
| Jenny M | <p>By default in <code>client-go</code> the Burst is 10 and the QPS is 5,
so it should not block you if you only issue a handful of requests.</p>
<p>Increasing <code>Burst</code> and <code>QPS</code> has no negative effect on your application itself, but it may create a heavier load on the <code>api-server</code>.</p>
| vincent pli |
<p>I followed this guide <a href="https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/" rel="nofollow noreferrer">https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/</a> on bare-metal Ubuntu 20.04 with 2 nodes.</p>
<p>I chose Docker as my Container Runtime and started the cluster with <code>sudo kubeadm init --pod-network-cidr 10.16.0.0/16</code></p>
<p>Everything seems to run fine at the beginning. <strong>The problem that I'm having</strong> is when a pod needs to connect to kube-dns to resolve a domain name; kube-dns itself is working fine, so it seems that the problem is with the connection between them.
I ran the debugging tool for the DNS <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a> and when I ran <code>kubectl exec -i -t dnsutils -- nslookup kubernetes</code> I got the following output:</p>
<p><a href="https://i.stack.imgur.com/232ly.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/232ly.png" alt="enter image description here" /></a></p>
<p>This are the logs of my kube dns:
<a href="https://i.stack.imgur.com/yegoZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yegoZ.png" alt="enter image description here" /></a></p>
<p>And this is the resolv.conf inside my pod:
<a href="https://i.stack.imgur.com/WLDEr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLDEr.png" alt="enter image description here" /></a></p>
<p>This is my kubectl and kubeadm info:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:09:38Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>[edit with extra information]</strong></p>
<p>Calico Pods Status:
<a href="https://i.stack.imgur.com/4Qw2H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Qw2H.png" alt="enter image description here" /></a></p>
<p>Querying DNS directly:
<a href="https://i.stack.imgur.com/QgCKa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QgCKa.png" alt="enter image description here" /></a></p>
| Ernesto Solano | <p>I use Flannel and had a similar problem. Restarting the coredns deployment solved it for me:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl rollout restart -n kube-system deployment/coredns
</code></pre>
| dac.le |
<p>I want to be able to know the Kubernetes uid of the Deployment that created the pod, from within the pod.</p>
<p>The reason for this is so that the Pod can spawn another Deployment and set the <code>OwnerReference</code> of that Deployment to the original Deployment (so it gets Garbage Collected when the original Deployment is deleted).</p>
<p>Taking inspiration from <a href="https://stackoverflow.com/questions/42274229/kubernetes-deployment-name-from-within-a-pod">here</a>, I've tried*:</p>
<ol>
<li>Using field refs as env vars:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: test-operator
env:
- name: DEPLOYMENT_UID
valueFrom:
fieldRef: {fieldPath: metadata.uid}
</code></pre>
<ol start="2">
<li>Using <code>downwardAPI</code> and exposing through files on a volume:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>containers:
volumeMounts:
- mountPath: /etc/deployment-info
name: deployment-info
volumes:
- name: deployment-info
downwardAPI:
items:
- path: "uid"
fieldRef: {fieldPath: metadata.uid}
</code></pre>
<p>*Both of these are under <code>spec.template.spec</code> of a resource of kind: Deployment.</p>
<p>However for both of these the uid is that of the Pod, not the Deployment. Is what I'm trying to do possible?</p>
| James Cockbain | <p>The behavior is correct: the <code>Downward API</code> exposes <code>pod</code> metadata rather than <code>deployment/replicaset</code> metadata.</p>
<p>So I guess the solution is to set the name of the deployment manually in <code>spec.template.metadata.labels</code>, then use the <code>Downward API</code> to inject the labels as env variables.</p>
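<p>A minimal sketch of that idea (the label key, env var name and image are just illustrative; keep the label value in sync with the Deployment name). The pod can then look the Deployment up by name through the API to read its uid:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-operator
spec:
  selector:
    matchLabels:
      app: test-operator
  template:
    metadata:
      labels:
        app: test-operator
        deployment-name: test-operator   # must match metadata.name above
    spec:
      containers:
      - name: test-operator
        image: my-image
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef: {fieldPath: "metadata.labels['deployment-name']"}
</code></pre>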
| vincent pli |
<p>I am trying to create a kubernetes deployment. Here is the manifest:</p>
<p><code>server-deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
spec:
replicas: 1
selector:
matchLabels:
tier: server
template:
metadata:
labels:
tier: server
spec:
containers:
- image: rocketblast2481/chatto-server
name: server-container
imagePullPolicy: Always
</code></pre>
<p>Then I run the following command:</p>
<pre><code>kubectl apply -f=server-deployment.yaml
</code></pre>
<p>But then I get the following error:</p>
<pre><code>The Deployment "server-deployment" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"tier":"server"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutabl
</code></pre>
| chai | <p>It looks like you are trying to update the deployment selector, which is not possible because the selector is immutable. To change it, first delete the existing deployment with <code>kubectl delete deployment server-deployment</code> and then run <code>kubectl apply -f server-deployment.yaml</code> again.</p>
| Akshay Gopani |
<p>I have docker image with custom user (for example tod). I want to find out information about this user without creating a container.</p>
<p>Base image: centos8</p>
<p>UPD: A little about context.
I run an unprivileged container in k8s and get the following error:
<code>container has runAsNonRoot and image has non-numeric user (tod), cannot verify user is non-root</code></p>
<p>I read <a href="https://stackoverflow.com/questions/49720308/kubernetes-podsecuritypolicy-set-to-runasnonroot-container-has-runasnonroot-and">this</a> answer and cannot understand why it is not possible to get the user id from my container.</p>
| jesmart | <p>Let's try to analyze the following code:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/cea1d4e20b4a7886d8ff65f34c6d4f95efcb4742/pkg/kubelet/kuberuntime/security_context_others.go#L47-L48" rel="nofollow noreferrer">github code reference link</a></p>
<pre><code>case uid == nil && len(username) > 0:
return fmt.Errorf("container has runAsNonRoot and image has non-numeric user (%s), cannot verify user is non-root (pod: %q, container: %s)", username, format.Pod(pod), container.Name)
</code></pre>
<p>This is the code that prints the error you see. You see the error because <code>uid == nil</code> and at the same time <code>username != ""</code>.</p>
<p>But why does username have a value while uid does not? Why couldn't they both have a value?</p>
<p>It turns out that they couldn't, because UID and username are mutually exclusive. Have a look at the description of these parameters:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/3895b1e4a0f8044540365a5a0a086cba0af77340/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go#L5158-L5164" rel="nofollow noreferrer">github code reference link</a>:</p>
<pre><code>// UID that will run the command(s). This is used as a default if no user is
// specified when creating the container. UID and the following user name
// are mutually exclusive.
Uid *Int64Value `protobuf:"bytes,5,opt,name=uid,proto3" json:"uid,omitempty"`
// User name that will run the command(s). This is used if UID is not set
// and no user is specified when creating container.
Username string `protobuf:"bytes,6,opt,name=username,proto3" json:"username,omitempty"`
</code></pre>
<p>So turns out it's not a bug. It's just how the container runtime interface standard got designed, and you can't do much about it.</p>
<hr />
<p>What you could do is change the way you use <a href="https://docs.docker.com/engine/reference/builder/#user" rel="nofollow noreferrer">USER instruction in Dockerfile</a>.</p>
<p>Instead of using username with USER instruction, create a user with a known uid and use this uid instead, like in example below:</p>
<pre><code>RUN useradd -r -u 1001 appuser
USER 1001
</code></pre>
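<p>Alternatively, if rebuilding the image is not an option, you can keep the named user in the image and give the kubelet a numeric UID to verify via the pod's security context, e.g. (a sketch; the UID should match the one the image's user actually has if file permissions depend on it):</p>
<pre><code>spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001   # any non-zero, numeric UID satisfies the runAsNonRoot check
  containers:
  - name: app
    image: my-image
</code></pre>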
| Matt |
<p>Consider the below PersistentVolumeClaim, as well as the Deployment using it.</p>
<p>Being ReadWriteOnce, the PVC can only be mounted by one node at the time. As there shall only be one replica of my deployment, I figured this should be fine. However, upon restarts/reloads, two Pods will co-exist during the switchover.</p>
<p>If Kubernetes decides to start the successor pod on the same node as the original pod, they will both be able to access the volume and the switchover goes fine. But - if it decides to start it on a new node, which it seems to prefer, my deployment ends up in a deadlock:</p>
<blockquote>
<p>Multi-Attach error for volume "pvc-c474dfa2-9531-4168-8195-6d0a08f5df34" Volume is already used by pod(s) test-cache-5bb9b5d568-d9pmd</p>
</blockquote>
<p>The successor pod can't start because the volume is mounted on another node, while the original pod/node, of course, won't let go of the volume until the pod is taken out of service. Which it won't be until the successor is up.</p>
<p>What am I missing here?</p>
<hr>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vol-test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-cache
spec:
selector:
matchLabels:
app: test-cache-deployment
replicas: 1
template:
metadata:
labels:
app: test-cache-deployment
spec:
containers:
- name: test-cache
image: myrepo/test-cache:1.0-SNAPSHOT
volumeMounts:
- mountPath: "/test"
name: vol-mount
ports:
- containerPort: 8080
imagePullPolicy: Always
volumes:
- name: vol-mount
persistentVolumeClaim:
claimName: vol-name
imagePullSecrets:
- name: regcred
</code></pre>
| John-Arne Boge | <p>I figured out a workaround:</p>
<p>While far from ideal, it's an acceptable compromise in my particular case.</p>
<p>ReadWriteOnce volumes apparently don't play well with the default Kubernetes upgrade strategy, "RollingUpdate" (even in the case of single-replica deployments). If instead I use the "Recreate" upgrade strategy, Kubernetes will destroy the original pod before starting the successor, thus detaching the volume before it is mounted again.</p>
<pre><code>...
spec:
selector:
matchLabels:
app: test-cache-deployment
replicas: 1
strategy:
type: Recreate
...
</code></pre>
<p>This solution obviously comes with a major drawback: The deployment will be offline between shutdown and successful startup - which might take anywhere from a few seconds to eternity.</p>
| John-Arne Boge |
<p>I have deployed Kubernetes with Docker before. I have trouble deploying K8s with Kata Container. Does kata container completely replace Docker in Kubernetes? So I have to uninstall Docker from my nodes to make it work? I am so confused right now.</p>
| jarge | <p>Kata Containers work together with Docker, so you do not have to uninstall it. Have a look at this git repo, which has details on setting up Kata with both Docker and Kubernetes:
<a href="https://github.com/kata-containers/packaging/tree/master/kata-deploy" rel="nofollow noreferrer">Kata Deploy</a></p>
| Ratnadeep Bhattacharyya |
<p>I want to use Redis on AWS, and my development environment depends on Skaffold with Kubernetes.</p>
<p>My security group configuration is:</p>
<p><a href="https://i.stack.imgur.com/oRclY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oRclY.png" alt="Inbound rules" /></a></p>
<p><a href="https://i.stack.imgur.com/ceZS2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ceZS2.png" alt="Outbound rules" /></a></p>
<p>And my Redis config like that:</p>
<p><a href="https://i.stack.imgur.com/LZ0yv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LZ0yv.png" alt="Redis config" /></a></p>
<p>I want to connect to this Redis from my Node.js app running in Kubernetes pods. My code snippet looks like this:</p>
<pre><code>let statsQueue = new Queue('statistics', '<Primary-Endpoint-On-AWS-Redis');
</code></pre>
<p>But I getting this error:</p>
<pre><code> triggerUncaughtException(err, true /* fromPromise */);
^
Error: connect EINVAL 0.0.24.235:6379 - Local (0.0.0.0:0)
at internalConnect (node:net:909:16)
at defaultTriggerAsyncIdScope (node:internal/async_hooks:431:12)
at GetAddrInfoReqWrap.emitLookup [as callback] (node:net:1055:9)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:8) {
errno: -22,
code: 'EINVAL',
syscall: 'connect',
address: '0.0.24.235',
port: 6379
}
</code></pre>
<p>Why am I getting this error? How can I solve this?</p>
| akasaa | <p>The error was solved by adding <code>Redis://</code> to the URL.</p>
<p>So the connection looks like this:
<code>let statsQueue = new Queue('statistics', 'Redis://<Primary-Endpoint-On-AWS-Redis');</code></p>
| akasaa |
<p>I have a YAML file as like below which I have exported from an existing cluster:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2019-03-20T23:17:42Z
name: default
namespace: dev4
resourceVersion: "80999"
selfLink: /api/v1/namespaces/dev4/serviceaccounts/default
uid: 5c6e0d09-4b66-11e9-b4e3-0a779a87bb40
secrets:
- name: default-token-tl4dd
- apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"pod-labeler","namespace":"dev4"}}
creationTimestamp: 2020-04-21T05:46:25Z
name: pod-labeler
namespace: dev4
resourceVersion: "113455688"
selfLink: /api/v1/namespaces/dev4/serviceaccounts/pod-labeler
uid: 702dadda-8393-11ea-abd9-0a768ca51346
secrets:
- name: pod-labeler-token-6vgp7
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>If I do the above YAML and apply to a new cluster, I get an error, which is out of the scope of this question.</p>
<p>In summary, I have to get rid of the below attributes:</p>
<pre><code>uid:
selfLink:
resourceVersion:
creationTimestamp:
</code></pre>
<p>So I got a sed command like below which does the trick</p>
<pre><code>sed -i '/uid: \|selfLink: \|resourceVersion: \|creationTimestamp: /d' dev4-serviceaccounts.yaml
</code></pre>
<p>The final YAML file is like below:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: dev4
secrets:
- name: default-token-tl4dd
- apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"pod-labeler","namespace":"dev4"}}
name: pod-labeler
namespace: dev4
secrets:
- name: pod-labeler-token-6vgp7
kind: List
metadata:
</code></pre>
<p>My question is: is this a correct YAML file, given that it now has an empty metadata tag with no values (at the very end of the YAML file)?</p>
<p>I can create the objects - serviceaccounts in this instance - but I just want to make sure that what I am doing is correct, or learn whether there is a better approach.</p>
| learner | <p>In this specific case this is correct, but you need to be careful with it since there is no consistent way to do this for all types of resources.</p>
<p>Historically kubectl had an --export flag that generated yamls ready to apply, but it got deprecated because of many bugs. Check out <a href="https://github.com/kubernetes/kubernetes/pull/73787" rel="nofollow noreferrer">the issue on k8s github repo</a> for more details.</p>
<p>There is also another way to export the resources if you used <code>kubectl apply</code> to create them.</p>
<pre><code>kubectl apply view-last-applied <api-resource> <name> -oyaml
e.g.:
kubectl apply view-last-applied serviceaccount pod-labeler -oyaml
</code></pre>
<p>But remember that this won't work for resources created with helm or other tools.</p>
<p>The best thing you could do is to always keep all your source files in git or similar, so you don't ever need to export it.</p>
| Matt |
<p>Following the documentation, I tried to set up the Seldon-Core quick-start <a href="https://docs.seldon.io/projects/seldon-core/en/v1.11.1/workflow/github-readme.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/v1.11.1/workflow/github-readme.html</a></p>
<p>I don't have a LoadBalancer so I would like to use port-forwarding to access the service.</p>
<p>I run the following script for setup the system:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash -ev
kind create cluster --name seldon
kubectl cluster-info --context kind-seldon
sleep 10
kubectl get pods -A
istioctl install -y
sleep 10
kubectl get pods -A
kubectl create namespace seldon-system
kubens seldon-system
helm install seldon-core seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--set usageMetrics.enabled=true \
--namespace seldon-system \
--set istio.enabled=true
sleep 100
kubectl get validatingwebhookconfigurations
kubectl create namespace modelns
kubens modelns
kubectl apply -f - << END
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: iris-model
namespace: modelns
spec:
name: iris
predictors:
- graph:
implementation: SKLEARN_SERVER
modelUri: gs://seldon-models/v1.12.0-dev/sklearn/iris
name: classifier
name: default
replicas: 1
END
sleep 100
kubectl get pods -A
kubectl get svc -A
INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80 &
</code></pre>
<p>I guess the port-forwarding argument <code>8080:80</code> is probably wrong.</p>
<p>I'm using the following script for testing:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash -ev
export INGRESS_HOST=localhost
export INGRESS_PORT=8080
SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -n kserve-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)
curl -X POST http://$INGRESS_HOST:$INGRESS_PORT/seldon/modelns/iris-model/api/v1.0/predictions \
-H 'Content-Type: application/json' \
-d '{ "data": { "ndarray": [1,2,3,4] } }'
</code></pre>
<p>But I got the following error:</p>
<pre class="lang-sh prettyprint-override"><code>Handling connection for 8080
E1012 10:52:32.074812
52896 portforward.go:400] an error occurred forwarding 8080 -> 8080:
error forwarding port 8080 to pod b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a,
uid : failed to execute portforward in network namespace "/var/run/netns/cni-2b4d8573-3cfe-c70e-1c36-e0dc53cbd936": failed to connect to localhost:8080 inside namespace "b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a",
IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
</code></pre>
<p>Please, can somebody explain how to fix this? What is the right port-forwarding argument?</p>
| ucsky | <p>If you install with istio enabled, you also need to install an istio gateway.</p>
<p>I've tested your flow: it didn't work at first, and it did work after installing the following istio gateway.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: seldon-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
<p>You can read more about istio configuration on Seldon Core here: <a href="https://docs.seldon.io/projects/seldon-core/en/latest/ingress/istio.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/latest/ingress/istio.html</a></p>
| Majolo |
<p>We would like to use an annotation to redirect requests to a different backend service based on URL args (query string).</p>
<p>Example:</p>
<p><a href="https://example.com/foo?differentQueryString=0" rel="nofollow noreferrer">https://example.com/foo?differentQueryString=0</a> -> service-a</p>
<p><a href="https://example.com/foo/bar?queryString=0" rel="nofollow noreferrer">https://example.com/foo/bar?queryString=0</a> - service-b</p>
<ul>
<li>Notes: path does not matter, this can be either /foo/bar or /foo or /bar/foo</li>
</ul>
<p>We followed up on this</p>
<p><a href="https://stackoverflow.com/questions/66994472/kubernetes-nginx-ingress-controller-different-route-if-query-string-exists">Kubernetes NGINX Ingress controller - different route if query string exists</a></p>
<p>and this</p>
<p><a href="https://stackoverflow.com/questions/53858277/kubernetes-ingress-routes-with-url-parameter">Kubernetes ingress routes with url parameter</a></p>
<p>But we don't want to set up a ConfigMap just for this, and we also don't want to duplicate requests to the ingress by rewriting.</p>
<p>This is what we tried</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($args ~ queryString=0){
backend.service.name = service-b
}
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service-a
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: service-b
port:
number: 80
</code></pre>
<p>We were expecting to get the response but we got 502 from the Ingress Nginx</p>
| Matan Baruch | <p>We managed to find a nice solution without rewriting or a ConfigMap.</p>
<p>It works great and also preserves the Nginx Ingress metrics, so we can do HPA accordingly.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($args ~ queryString=0){
set $proxy_upstream_name "default-service-b-80";
set $proxy_host $proxy_upstream_name;
set $service_name "service-b";
}
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service-a
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: service-b
port:
number: 80
</code></pre>
<p>The <code>$proxy_upstream_name</code> convention is NAMESPACE-SERVICE_NAME-PORT.</p>
| Matan Baruch |
<p>By default, in Kubernetes, an internal address is created when a service is deployed. It is possible to reach this service through the following address: <code>my-svc.my-namespace.svc.cluster-domain.example</code> (<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">doc reference</a>)</p>
<p>I would like to add more internal addresses to this service when it gets deployed (I use helm to deploy my services). I want to be able to access the same service through <code>my-svc</code>, <code>my-svc-1</code> and <code>my-svc-2</code>. I was wondering if I could add these aliases into the deployment process so it automatically generates multiple entries for the same address.</p>
<p>As a workaround, I read that I can use CoreDNS and add CNAME entries but that's definitely not the optimal choice since I don't know if I can automatically add these entries into CoreDNS when I deploy the service. If that's possible, I'd be happy to know how it can be achieved.</p>
| Nate | <p>Solution mentioned by Manuel is correct. I tested it and it works. I used the following yamls:</p>
<p><strong>Regular service expose:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
run: my-svc
name: my-svc
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: my-svc
</code></pre>
<p><strong>ExternalName services:</strong></p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: my-svc-1
spec:
type: ExternalName
externalName: my-svc.default.svc.cluster.local
---
apiVersion: v1
kind: Service
metadata:
name: my-svc-2
spec:
type: ExternalName
externalName: my-svc.default.svc.cluster.local
</code></pre>
<p>As expected, ExternalName type service creates CNAME records pointing to <code>my-svc</code></p>
<pre><code>$ dig my-svc-2 +search
;; QUESTION SECTION:
;my-svc-2.default.svc.cluster.local. IN A
;; ANSWER SECTION:
my-svc-2.default.svc.cluster.local. 13 IN CNAME my-svc.default.svc.cluster.local.
my-svc.default.svc.cluster.local. 13 IN A 10.100.222.87
</code></pre>
| Matt |
<p>"<em><strong>oc get deployment</strong></em>" command is returning "<strong>No resources Found</strong>" as the result.</p>
<p>Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.</p>
<p>Whereas, I am getting the correct result of <em><strong>oc get pods</strong></em> command.</p>
<p>Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1</p>
<p>openshift - v3.11.380</p>
| user1227670 | <p>There are other objects that create pods, such as a <code>statefulset</code> or <code>daemonset</code>. Because this is OpenShift, my feeling is that the pods were created by a <code>deploymentconfig</code>, which is a popular way to create applications there.</p>
<p>Anyway, you can make sure which object is the owner of the pods by looking into the pod annotation. This command should work:</p>
<pre><code>oc get pod -o yaml <podname> | grep ownerReference -A 6
</code></pre>
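<p>If the owner turns out to be a ReplicationController managed by a DeploymentConfig, you can list those directly:</p>
<pre><code>oc get deploymentconfig -n <project>
# or the short form
oc get dc -n <project>
</code></pre>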
| Ron Megini |
<p>With my team, we are trying to move our micro-services to OpenJ9; they are running on Kubernetes. However, we encounter a problem with the configuration of JMX (openjdk8-openj9).
We get a connection refused when we try to connect with jvisualvm (via port-forwarding with Kubernetes).
We haven't changed our configuration, except for switching from HotSpot to OpenJ9.</p>
<p>The error :</p>
<pre><code>E0312 17:09:46.286374 17160 portforward.go:400] an error occurred forwarding 1099 -> 1099: error forwarding port 1099 to pod XXXXXXX, uid : exit status 1: 2020/03/12 16:09:45 socat[31284] E connect(5, AF=2 127.0.0.1:1099, 16): Connection refused
</code></pre>
<p>The java options that we use : </p>
<pre><code>-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
</code></pre>
<p>We are using the latest adoptopenjdk/openjdk8-openj9 docker image.
Do you have any ideas?</p>
<p>Thank you ! </p>
<p>Regards.</p>
| alexis_sgra | <p>I managed to figure out why it wasn't working.
It turns out that to pass the JMX options to the service we were using the Kubernetes service descriptor in YAML. It looks like this: </p>
<pre><code> - name: _JAVA_OPTIONS
value: -Dzipkinserver.listOfServers=http://zipkin:9411 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.rmi.port=1099
</code></pre>
<p>I realized that the JMX properties from _JAVA_OPTIONS were not taken into account when the application is not launched via ENTRYPOINT in the docker container.
So I passed the properties directly in the Dockerfile like this and it works.</p>
<pre><code>CMD ["java", "-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.local.only=false", "-Dcom.sun.management.jmxremote.port=1099", "-Dcom.sun.management.jmxremote.rmi.port=1099", "-Djava.rmi.server.hostname=127.0.0.1", "-cp","app:app/lib/*","OurMainClass"]
</code></pre>
<p>It's also possible to keep _JAVA_OPTIONS and setup an ENTRYPOINT in the dockerfile.</p>
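<p>For reference, that ENTRYPOINT variant might look like this (an untested sketch, following the note above that _JAVA_OPTIONS is picked up when the application is launched via ENTRYPOINT):</p>
<pre><code>ENTRYPOINT ["java", "-cp", "app:app/lib/*", "OurMainClass"]
</code></pre>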
<p>Thanks!</p>
| alexis_sgra |
<p>I have this base url <code>api.example.com</code></p>
<p>So, <code>ingress-nginx</code> will get the request for <code>api.example.com</code> and it should do the following things.</p>
<p>Forward <code>api.example.com/customer</code> to <code>customer-srv</code></p>
<p>It doesn't work; it forwards the whole match to customer-srv, i.e. <code>/customer/requested_url</code>.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: api.example.in
http:
paths:
- path: /customer/?(.*)
pathType: Prefix
backend:
service:
name: customer-srv
port:
number: 3000
</code></pre>
<p>I tried using the rewrite annotation but that doesn't work either. However, the following worked, although it is not what I want to achieve:</p>
<pre><code> paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: customer-srv
port:
number: 3000
</code></pre>
<p>For example,</p>
<p><code>api.example.com/customer</code> should go to <code>http://localhost:3000</code> not <code>http://localhost:3000/customer</code></p>
| confusedWarrior | <p>Here is the yaml I used:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: api.example.in
http:
paths:
- path: /customer/?(.*)
pathType: Prefix
backend:
service:
name: customer-srv
port:
number: 3000
</code></pre>
<p>For test purpouses I created an echo server:</p>
<pre><code>kubectl run --image mendhak/http-https-echo customer
</code></pre>
<p>And then a service:</p>
<pre><code>kubectl expose po customer --name customer-srv --port 3000 --target-port 80
</code></pre>
<p>I checked igress ip:</p>
<pre><code>$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service <none> api.example.in 192.168.39.254 80 3m43s
</code></pre>
<p>And I did a curl to check it:</p>
<pre><code>curl 192.168.39.254/customer/asd -H "Host: api.example.in"
{
"path": "/asd",
"headers": {
"host": "api.example.in",
...
}
</code></pre>
<p>Notice that the echo server echoed back a path that it received, and sice it received a path that got rewritten from <code>/customer/asd</code> to <code>/asd</code> it shows this exectly path (/asd).</p>
<p>So as you see this does work.</p>
| Matt |
<p>I am authenticating via the following steps.</p>
<p>First I authenticate to AWS:</p>
<pre><code>aws ecr get-login-password --region cn-north-1 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn
</code></pre>
<p>Then I created the <code>regcred</code> file that I reference in my deployment config</p>
<pre><code>kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/noobskie/.docker/config.json --type=kubernetes.io/dockerconfigjson
</code></pre>
<p>So this was working fine the first 12 hours but now that the AWS token has expired I am having trouble figuring out how to properly refresh it. I have rerun the first command but it doesn't work.</p>
<p>the error I get is</p>
<pre><code>Error response from daemon: pull access denied for xxxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals, repository does not exist or may require 'docker login': denied: Your authorization token has expired. Reauthenticate and try again.
</code></pre>
<p><strong>EDIT</strong></p>
<p>I have just discovered that I can just reconfigure with the following command but I am curious if this is the correct way to handle it and if there are any other AWS ways offered.</p>
<pre><code>kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/noobskie/.docker/config.json --dry-run -o yaml | kubectl apply -f -
</code></pre>
| NooBskie | <p>Use the following command to generate a token, provided aws-cli and aws-iam-authenticator are installed and configured:</p>
<pre><code>aws-iam-authenticator token -i <cluster-name>
</code></pre>
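<p>If you prefer to skip <code>docker login</code> and the config.json file entirely, the secret from your edit can also be rebuilt straight from a fresh ECR token (a sketch using the region and registry from your question; <code>--dry-run=client</code> assumes a reasonably recent kubectl):</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region cn-north-1)" \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre>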
| Venky Ohana |
<p>I have a TCP service. I created a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP readiness probe</a> for my service which appears to be working just fine.</p>
<p>Unfortunately, my EC2 target group wants to perform an HTTP health check on my instance. My service doesn't respond to HTTP requests, so my target group is considering my instance unhealthy.</p>
<p>Is there a way to change my target group's health check from "does it return an HTTP success response?" to "can a TCP socket be opened to it?"</p>
<p>(I'm also open to other ways of solving the problem if what I suggested above isn't possible or doesn't make sense.)</p>
| Jason Swett | <p>TCP is a valid protocol for health checks in 2 cases:</p>
<ol>
<li>the classic flavor of the ELB, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html#health-check-configuration" rel="nofollow noreferrer">see docs</a></li>
<li>The network load balancer, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html#health-check-settings" rel="nofollow noreferrer">see docs</a></li>
</ol>
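<p>If the service is exposed from Kubernetes, one way to end up with TCP health checks is to ask for an NLB instead of the default classic ELB, e.g. (a sketch; the annotation applies to the in-tree AWS cloud provider, and names and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - port: 1234
    targetPort: 1234
    protocol: TCP
  selector:
    app: my-tcp-app
</code></pre>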
<p>in case you're stuck with the Application Load Balancer - the only idea that comes to mind is to add a sidecar container that will respond to HTTP/HTTPS based on your TCP status. You could easily do this with nginx, although it would probably be quite an overkill.</p>
| andrzejwp |
<p>I'm new to k8s and slowly picking it up. I have built out a web api that runs on .net 5 and uses HTTPS only (a previous version used http). I built the image with docker compose and everything works as expected locally with the default aspnetapp.pfx cert. What I am struggling with is that my ingress routing seems to terminate the connection early.</p>
<p>I have created a pfx cert for Kestrel to run with, with the CN name of a.b.com; it was created from the crt and key files that are needed to create the secrets in the documentation. From my understanding Kestrel needs a pfx to run (straight out of the box).</p>
<p>Below are my ingress, service and deployment snippets as well as the entries from the logs.
I believe that my issue is that the logs show an "http" request where it should be https.</p>
<p>Logs:</p>
<pre><code>2021/02/24 09:44:11 [error] 3231#3231: *44168360 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: _, request: "GET /dayofweek/api/1/dayofweek/action HTTP/2.0", upstream: "http://<podip>/api/1/dayofweek/action", host: "<clusterip>:<nodePort>"
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dayofweek-ingress-path
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
#kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: a.b.com
- http:
paths:
- backend:
service:
name: dayofweek-svc
port:
number: 9057
path: /dayofweek/?(.*)
pathType: Prefix
</code></pre>
<p>Service + Deployment</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: dayofweek-svc
labels:
run: dayofweek-svc
spec:
ports:
- port: 9057
targetPort: 3441
protocol: TCP
name: https
selector:
app: dayofweek
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dayofweek
spec:
replicas: 1
selector:
matchLabels:
app: dayofweek
template:
metadata:
labels:
app: dayofweek
spec:
volumes:
- name: cert-volume
persistentVolumeClaim:
claimName: persistentcerts-claim
containers:
- name: dayofweek
image: ...omitted
ports:
- containerPort: 3441
env:
- name: DOTNET_ENVIRONMENT
value: Development
- name: Culture
value: en-US #English by default
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
value: /https/aspnetapp.pfx
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
value: password
volumeMounts:
- mountPath: /https
name: cert-volume
</code></pre>
<p>I followed through with the guide here: <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></p>
<p>And I seem to have it up and running, but I'm not sure if I've overcomplicated it by adding in the "-host" element of the ingress.</p>
<p>Any help would be greatly appreciated!</p>
| jb94 | <p>Service and deployment look correct, but I can see some issues with ingress.</p>
<p>When using ssl-passthrough path based routing doesn't work so you can skip it.</p>
<p>Also, there is a typo in your config:</p>
<pre><code>- host: a.b.com
- http: # <- HERE
</code></pre>
<p>there shouldn't be the second dash.</p>
<p>Here is how it should look like:</p>
<pre><code>spec:
rules:
- host: a.b.com
http:
paths:
</code></pre>
<p>Additionally, have a look what nginx ingres docs has to say about <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough" rel="nofollow noreferrer">ssl-passthrough</a>:</p>
<blockquote>
<p><strong>SSL Passthrough</strong></p>
<p><strong>The --enable-ssl-passthrough flag enables the SSL
Passthrough feature, which is disabled by default. This is required to
enable passthrough backends in Ingress objects</strong>.</p>
<p>Warning</p>
<p>This feature is implemented by intercepting all traffic on the
configured HTTPS port (default: 443) and handing it over to a local
TCP proxy. This bypasses NGINX completely and introduces a
non-negligible performance penalty.</p>
<p>SSL Passthrough leverages SNI and reads the virtual domain from the
TLS negotiation, which requires compatible clients. After a connection
has been accepted by the TLS listener, it is handled by the controller
itself and piped back and forth between the backend and the client.</p>
<p>If there is no hostname matching the requested host name, the request
is handed over to NGINX on the configured passthrough proxy port
(default: 442), which proxies the request to the default backend.</p>
</blockquote>
<hr />
<p>There is also <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">this in docs</a>:</p>
<blockquote>
<p>SSL Passthrough</p>
<p>nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller
to send TLS connections directly to the backend instead of letting
NGINX decrypt the communication. See also TLS/HTTPS in the User guide.</p>
<p>**Note
SSL Passthrough is disabled by default and requires starting the
controller with the --enable-ssl-passthrough flag.</p>
<p><strong>Attention</strong></p>
<p><strong>Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.</strong></p>
</blockquote>
<hr />
<p>So, according to the docs, in order for it to work you need to enable the ssl-passthrough feature first. Once that is done, you can use the ssl-passthrough annotation, but it invalidates all the other annotations and path-based routing stops working.</p>
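<p>A sketch of enabling the flag on a typical installation (the namespace and deployment name depend on how the controller was installed):</p>
<pre><code># check the flags the controller currently runs with
kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
# edit the deployment and append the flag to the container args:
#   - --enable-ssl-passthrough
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
</code></pre>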
| Matt |
<p>We are building a proprietary Java application based on Docker. Right now, we are using a local docker installation and development is in progress. When we want to share this application, we expect it should be deployed to some docker registry. Is Docker registry free and open source? Or how can I securely and freely allow my customers to access my application? Basically, we want a zero-cost, secure deployment option using docker.</p>
| JavaUser | <p>If you're fine with making your docker images public, you can use the <a href="https://hub.docker.com" rel="nofollow noreferrer">docker hub</a>.
If you want to keep them private, you can opt for one of the free private registries, e.g. <a href="https://treescale.com/" rel="nofollow noreferrer">treescale</a>.</p>
<p>See a longer list of free private registries <a href="https://johanbostrom.se/blog/list-of-free-private-docker-registry-and-repository" rel="nofollow noreferrer">here</a></p>
| andrzejwp |
<p>I'm using GKE and Helm v3 and I'm trying to create/reserve a static IP address using ComputeAddress and then to create DNS A record with the previously reserved IP address.</p>
<p>Reserve IP address</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
name: ip-address
annotations:
cnrm.cloud.google.com/project-id: project-id
spec:
location: global
</code></pre>
<p>Get reserved IP address</p>
<pre><code>kubectl get computeaddress ip-address -o jsonpath='{.spec.address}'
</code></pre>
<p>Create DNS A record</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
name: dns-record-a
annotations:
cnrm.cloud.google.com/project-id: project-id
spec:
name: "{{ .Release.Name }}.example.com"
type: "A"
ttl: 300
managedZoneRef:
external: example-com
rrdatas:
- **IP-ADDRESS-VALUE** <----
</code></pre>
<p>Is there a way to reference the IP address value, created by ComputeAddress, in the DNSRecordSet resource?</p>
<p>Basically, I need something similar to the output values in Terraform.</p>
<p>Thanks!</p>
| Milan Ilic | <p>Currently, it is not possible to assign a referenced value (the IP address) as a string in the "rrdatas" field, so you are not able to "call" another resource like the IP address created before. You need to put the value in literally, in the x.x.x.x format.</p>
| blueboy1115 |
<p>I'm trying to install Jaeger into my K8s cluster using the streaming strategy. I need to use the existing Kafka cluster from my cloud provider. It requires a username and password. Jaeger documentation mentions only broker and topic:</p>
<pre><code> spec:
strategy: streaming
collector:
options:
kafka: # <1>
producer:
topic: jaeger-spans
brokers: my-cluster-kafka-brokers.kafka:9092
</code></pre>
<p>How can I configure Kafka credentials in CRD?</p>
<p>-Thanks in advance!</p>
| Timur | <p>Based on the following example from the <a href="https://www.jaegertracing.io/docs/1.21/operator/#streaming-strategy" rel="nofollow noreferrer">jaeger docs</a>:</p>
<pre><code>apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-streaming
spec:
strategy: streaming
collector:
options:
kafka: # <1>
producer:
topic: jaeger-spans
brokers: my-cluster-kafka-brokers.kafka:9092
ingester:
options:
kafka: # <1>
consumer:
topic: jaeger-spans
brokers: my-cluster-kafka-brokers.kafka:9092
ingester:
deadlockInterval: 5s # <2>
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
</code></pre>
<p>and on these example <a href="https://www.jaegertracing.io/docs/1.21/cli/" rel="nofollow noreferrer">cli flags</a>:</p>
<pre><code>--kafka.producer.topic jaeger-spans
The name of the kafka topic
--kafka.producer.brokers 127.0.0.1:9092
The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234'
--kafka.producer.plaintext.password
The plaintext Password for SASL/PLAIN authentication
--kafka.producer.plaintext.username
The plaintext Username for SASL/PLAIN authentication
</code></pre>
<p>I infer that you should be able to do the following:</p>
<pre><code>spec:
strategy: streaming
collector:
options:
kafka: # <1>
producer:
topic: jaeger-spans
brokers: my-cluster-kafka-brokers.kafka:9092
plaintext:
username: <username>
password: <password>
</code></pre>
<p>Notice that I split the cli options at the dots and added them as nested fields in yaml. Do the same for other parameters by analogy, as sketched below.</p>
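<p>A sketch of what that analogy would look like for the ingester side (the CLI also has <code>--kafka.consumer.plaintext.*</code> flags):</p>
<pre><code>  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
          plaintext:
            username: <username>
            password: <password>
</code></pre>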
| Matt |
<p>Why does my pod get the error "Back-off restarting failed container" when I have <code>imagePullPolicy: "Always"</code>? Before, it worked, but today I deployed it on another machine and it shows that error.</p>
<h2>My Yaml:</h2>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: couchdb
labels:
app: couch
spec:
replicas: 3
serviceName: "couch-service"
selector:
matchLabels:
app: couch
template:
metadata:
labels:
app: couch # pod label
spec:
containers:
- name: couchdb
image: couchdb:2.3.1
imagePullPolicy: "Always"
env:
- name: NODE_NETBIOS_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NODENAME
value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
- name: COUCHDB_USER
value: admin
- name: COUCHDB_PASSWORD
value: admin
- name: COUCHDB_SECRET
value: b1709267
- name: ERL_FLAGS
value: "-name couchdb@$(NODENAME)"
- name: ERL_FLAGS
value: "-setcookie b1709267" # the βpasswordβ used when nodes connect to each other.
ports:
- name: couchdb
containerPort: 5984
- name: epmd
containerPort: 4369
- containerPort: 9100
volumeMounts:
- name: couch-pvc
mountPath: /opt/couchdb/data
volumeClaimTemplates:
- metadata:
name: couch-pvc
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
selector:
matchLabels:
volume: couch-volume
</code></pre>
<p>I describe it:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23s default-scheduler Successfully assigned default/couchdb-0 to b1709267node1
Normal Pulled 17s kubelet Successfully pulled image "couchdb:2.3.1" in 4.368553213s
Normal Pulling 16s (x2 over 22s) kubelet Pulling image "couchdb:2.3.1"
Normal Created 10s (x2 over 17s) kubelet Created container couchdb
Normal Started 10s (x2 over 17s) kubelet Started container couchdb
Normal Pulled 10s kubelet Successfully pulled image "couchdb:2.3.1" in 6.131837401s
Warning BackOff 8s (x2 over 9s) kubelet Back-off restarting failed container
</code></pre>
<p>What should I do? Thanks</p>
| cksawd | <p><code>imagePullPolicy</code> doesn't really have much to do with container restarts. It only determines on what occasions the image should be pulled from the container registry, <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">read more here</a>.</p>
<p>If a container in a pod keeps restarting, it's usually because the command that is the container's entrypoint is failing. There are two places where you should find additional information that points you to the solution:</p>
<ul>
<li>logs of the pod (check using <code>kubectl logs _YOUR_POD_NAME_</code>)</li>
<li>description of the pod (check using <code>kubectl describe pod _YOUR_POD_NAME_</code>)</li>
</ul>
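<p>For a container that keeps crash-looping, the logs of the previous (crashed) instance are usually the most telling. For example, with the pod from the events above:</p>
<pre><code># show logs of the container instance that just crashed
kubectl logs couchdb-0 --previous

# show events and state transitions for the pod
kubectl describe pod couchdb-0
</code></pre>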
| andrzejwp |
<p>What would be the best way to create a liveness probe based on incoming TCP traffic on particular port?</p>
<p><code>tcpdump</code> and <code>bash</code> are available inside so it could be achieved by some script checking if there is incoming traffic on that port, but I wonder if there are better (cleaner) ways?</p>
<p>The example desired behaviour:</p>
<p>if there is no incoming traffic on port <code>1234</code> for the last <code>10 seconds</code> the container crashes</p>
| Wiktor Kisielewski | <p>With the configuration below, the container will be restarted if a TCP connection to port 1234 cannot be established when the probe runs (every 10 seconds, with a single failure being enough). Note that a <code>tcpSocket</code> probe checks whether the kubelet can open a connection to the port, not whether other clients are actually sending traffic, and that there is no probe that makes the container crash; probes can only trigger a restart.</p>
<pre><code>livenessProbe:
tcpSocket:
port: 1234
periodSeconds: 10
failureThreshold: 1
</code></pre>
<p>Here is the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">documentation</a></p>
 | Kağan Mersin |
<p>While I was reading <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pausing-and-resuming-a-deployment" rel="nofollow noreferrer">Pausing and Resuming a Deployment</a> in the official Kubernetes docs, I found that I can apply multiple fixes between pausing and resuming without triggering unnecessary rollouts.<br />
Now, when I only change the image of the deployment it works fine, but when I change the number of replicas in the deployment manifest and apply it, pods are created/deleted immediately according to the new replica count, even though the deployment is in the paused state.</p>
<p>My question is: why is this happening? Is it not allowed to change the replicas during a paused deployment, or is only changing the image allowed? The docs do not specify this.</p>
<p>Another thing: if I change both the number of replicas and the image, only the replica change gets applied during the paused deployment, and the new pods are created with the previous image, not the current one.</p>
<p>the manifest file of the deployment that is used is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.16.1
ports:
- containerPort: 80
</code></pre>
<h1>Updated Question (Gave full details)</h1>
<p><strong>What happened</strong>:
While I was reading the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pausing-and-resuming-a-deployment" rel="nofollow noreferrer">Pausing and Resuming a Deployment</a> section of the official Kubernetes docs, I paused a deployment and, while it was paused, changed the number of replicas in the deployment manifest file. When I applied that change using <code>kubectl apply -f deployment.yaml</code>, pods were immediately created/deleted according to the new replica count, even in the paused state, and those pods came up with the previous image.</p>
<p>First I created the deployment with this manifest using the command <code>kubectl apply -f deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.1
ports:
- containerPort: 80
</code></pre>
<p>After that I opened two terminals to monitor the pods and the ReplicaSet, using the commands:</p>
<pre><code>kubectl get pods -w
kubectl get rs -w
</code></pre>
<p>After that I paused the deployment using the <code>kubectl rollout pause deployment/nginx-deployment</code> command.</p>
<p>Then I changed the manifest to look like below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5 # <------ here changed the number of replicas
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.16.1 # <---- here changed the image
ports:
- containerPort: 80
</code></pre>
<p>After applying this manifest using <code>kubectl apply -f deployment.yaml</code>, what I saw is: two new pods get created with the previous image <code>image: nginx:1.14.1</code>.</p>
<p><strong>What I expected to happen</strong>:
According to the pausing concept, changes to the PodTemplateSpec of a paused Deployment will not trigger new rollouts as long as it is paused. So why was the replica change applied during the paused deployment?</p>
<p><strong>Anything else?</strong>:
Also, if I use a HorizontalPodAutoscaler the same thing happens.</p>
<p><strong>Environment</strong>:</p>
<ul>
<li>Kubernetes version (use <code>kubectl version</code>):</li>
</ul>
<pre><code>Client Version: v1.20.1
Server Version: v1.19.1
</code></pre>
| Sahadat Hossain | <p>I don't think this is an issue. I believe this is expected behaviour, and
I will try to explain why I think so.</p>
<p>Pause functionality is part of <code>kubectl rollout</code> command.</p>
<pre><code>kubectl rollout pause ...
</code></pre>
<p>and it is used to pause a specific deployment from rolling update.</p>
<p>But what actually is a rolling update? Let's try to define it.
In <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">k8s docs</a> we can read:</p>
<blockquote>
<p>Rolling updates allow Deployments' update to take place with zero
downtime by incrementally updating Pods instances with new ones</p>
</blockquote>
<p>So we can see that rolling update is being used to apply changes with zero downtime.
This includes changes to the image version, container limits, etc.: basically anything related to how the application runs and behaves.</p>
<p>Now, does changing the number of replicas impact the running application? Does it change the way it behaves? No! It only adds/deletes replicas of the same application. Users can still use the same version that was available before pausing the deployment. No changes are applied to the pod template.</p>
<p>After you resume the deployment, all the changes submitted after the pause will be applied.</p>
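<p>A quick way to see this on your own cluster, using the deployment from the question (a minimal sketch):</p>
<pre><code>kubectl rollout pause deployment/nginx-deployment

# applied immediately, even while paused (scaling is not part of the rollout)
kubectl scale deployment/nginx-deployment --replicas=5

# recorded, but only rolled out after the resume
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

kubectl rollout resume deployment/nginx-deployment
</code></pre>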
<p>So to summarize, I believe this is expected behaviour, since changing the number of replicas does not change the actual application, and in the meantime the actual rollout is indeed paused.</p>
<p>If you are still not convinced, I'd recommend opening an issue on the <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">k8s GitHub repo</a> and asking the developers directly.</p>
| Matt |
<p>I'm attempting to execute a Jenkins & Docker CLI container on Kubernetes. Here are my steps:</p>
<p>I create the pod using:</p>
<pre><code>kubectl --kubeconfig my-kubeconfig.yml run my-jenkins-pod --image=trion/jenkins-docker-client --restart=Never
</code></pre>
<p>Which creates a pod-based on the image <a href="https://hub.docker.com/r/trion/jenkins-docker-client" rel="nofollow noreferrer">https://hub.docker.com/r/trion/jenkins-docker-client</a></p>
<p>I create the deployment using:</p>
<pre><code>kubectl --kubeconfig my-kubeconfig.yml apply -f /kub/kube
</code></pre>
<p><code>/kub/kube</code> contains <code>jenkins-deployment-yaml</code> which I have configured as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-jenkins-pod
spec:
ports:
- protocol: "TCP"
port: 50000
targetPort: 5001
selector:
app: my-jenkins-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-jenkins-pod
spec:
selector:
matchLabels:
app: my-jenkins-pod
replicas: 1
template:
metadata:
labels:
app: my-jenkins-pod
spec:
containers:
- name: ml-services
image: trion/jenkins-docker-client
ports:
- containerPort: 5001
</code></pre>
<p>To access the Jenkins container I expose the IP using:</p>
<pre><code>kubectl --kubeconfig my-kubeconfig.yml expose deployment my-jenkins-pod --type=LoadBalancer --name=my-jenkins-pod-public
</code></pre>
<p>To return the IP of the Jenkins and Docker image I use :</p>
<pre><code>kubectl --kubeconfig my-kubeconfig.yml get services my-jenkins-pod-public
</code></pre>
<p>Which returns:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-jenkins-pod-public LoadBalancer 9.5.52.28 161.61.222.16 5001:30878/TCP 10m
</code></pre>
<p>To test I open the URL at location:</p>
<p><a href="http://161.61.222.16:5001/" rel="nofollow noreferrer">http://161.61.222.16:5001/</a></p>
<p>Which returns:</p>
<pre><code>This page isn't working
161.61.222.16 didn't send any data.
ERR_EMPTY_RESPONSE
</code></pre>
<p>It seems the service has started but the port mappings are incorrect?</p>
<p>The log of the pod my-jenkins-pod contains:</p>
<blockquote>
<p>Running from: /usr/share/jenkins/jenkins.war webroot:
EnvVars.masterEnvVars.get("JENKINS_HOME") 2021-04-03 11:15:42.899+0000
[id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging
initialized @274ms to org.eclipse.jetty.util.log.JavaUtilLog
2021-04-03 11:15:43.012+0000 [id=1] INFO winstone.Logger#logInternal:
Beginning extraction from war file 2021-04-03 11:15:44.369+0000
[id=1] WARNING o.e.j.s.handler.ContextHandler#setContextPath: Empty
contextPath 2021-04-03 11:15:44.416+0000
[id=1] INFO org.eclipse.jetty.server.Server#doStart:
jetty-9.4.39.v20210325; built: 2021-03-25T14:42:11.471Z; git:
9fc7ca5a922f2a37b84ec9dbc26a5168cee7e667; jvm 1.8.0_282-b08 2021-04-03
11:15:44.653+0000
[id=1] INFO o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP
Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2021-04-03 11:15:44.695+0000
[id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart:
DefaultSessionIdManager workerName=node0 2021-04-03 11:15:44.695+0000
[id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: No
SessionScavenger set, using defaults 2021-04-03 11:15:44.696+0000
[id=1] INFO o.e.j.server.session.HouseKeeper#startScavenging: node0
Scavenging every 660000ms 2021-04-03 11:15:45.081+0000
[id=1] INFO hudson.WebAppMain#contextInitialized: Jenkins home
directory: /var/jenkins_home found at:
EnvVars.masterEnvVars.get("JENKINS_HOME") 2021-04-03 11:15:45.203+0000
[id=1] INFO o.e.j.s.handler.ContextHandler#doStart: Started
w.@24f43aa3{Jenkins
v2.286,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
2021-04-03 11:15:45.241+0000
[id=1] INFO o.e.j.server.AbstractConnector#doStart: Started
ServerConnector@4f0f2942{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2021-04-03 11:15:45.241+0000
[id=1] INFO org.eclipse.jetty.server.Server#doStart: Started @2616ms
2021-04-03 11:15:45.245+0000 [id=21] INFO winstone.Logger#logInternal:
Winstone Servlet Engine running: controlPort=disabled 2021-04-03
11:15:46.479+0000 [id=26] INFO jenkins.InitReactorRunner$1#onAttained:
Started initialization 2021-04-03 11:15:46.507+0000
[id=26] INFO jenkins.InitReactorRunner$1#onAttained: Listed all
plugins 2021-04-03 11:15:47.654+0000
[id=27] INFO jenkins.InitReactorRunner$1#onAttained: Prepared all
plugins 2021-04-03 11:15:47.660+0000
[id=26] INFO jenkins.InitReactorRunner$1#onAttained: Started all
plugins 2021-04-03 11:15:47.680+0000
[id=27] INFO jenkins.InitReactorRunner$1#onAttained: Augmented all
extensions 2021-04-03 11:15:48.620+0000
[id=26] INFO jenkins.InitReactorRunner$1#onAttained: System config
loaded 2021-04-03 11:15:48.621+0000
[id=26] INFO jenkins.InitReactorRunner$1#onAttained: System config
adapted 2021-04-03 11:15:48.621+0000
[id=27] INFO jenkins.InitReactorRunner$1#onAttained: Loaded all jobs
2021-04-03 11:15:48.622+0000
[id=27] INFO jenkins.InitReactorRunner$1#onAttained: Configuration for
all jobs updated 2021-04-03 11:15:48.704+0000
[id=40] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Started
Download metadata 2021-04-03 11:15:48.722+0000
[id=40] INFO hudson.util.Retrier#start: Attempt #1 to do the action
check updates server 2021-04-03 11:15:49.340+0000
[id=26] INFO jenkins.install.SetupWizard#init:</p>
<hr />
<hr />
<p>************************************************************* Jenkins initial setup is required. An admin user has been created and a
password generated. Please use the following password to proceed to
installation: ab5dbf74145c405fb5a33456d4b97436 This may also be found
at: /var/jenkins_home/secrets/initialAdminPassword</p>
<hr />
<hr />
<p>************************************************************* 2021-04-03 11:16:08.107+0000
[id=27] INFO jenkins.InitReactorRunner$1#onAttained: Completed
initialization 2021-04-03 11:16:08.115+0000
[id=20] INFO hudson.WebAppMain$3#run: Jenkins is fully up and running
2021-04-03 11:16:08.331+0000
[id=40] INFO h.m.DownloadService$Downloadable#load: Obtained the
updated data file for hudson.tasks.Maven.MavenInstaller 2021-04-03
11:16:08.332+0000 [id=40] INFO hudson.util.Retrier#start: Performed
the action check updates server successfully at the attempt #1
2021-04-03 11:16:08.334+0000
[id=40] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished
Download metadata. 19,626 ms</p>
</blockquote>
<p>Is Jenkins server is started at port 8080? because of this log message:</p>
<blockquote>
<p>11:15:45.241+0000 [id=1] INFO o.e.j.server.AbstractConnector#doStart:
Started ServerConnector@4f0f2942{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2021-04-03</p>
</blockquote>
<p>I tried changing <code>jenkins-deployment-yaml</code> to point at port <code>8080</code> instead of <code>50000</code>, resulting in the updated <code>jenkins-deployment-yaml</code> :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-jenkins-pod
spec:
ports:
- protocol: "TCP"
port: 8080
</code></pre>
<p>But the same error is returned when I attempt to access <a href="http://161.61.222.16:5001/" rel="nofollow noreferrer">http://161.61.222.16:5001/</a></p>
<p>Are my port mappings incorrect? Is this a correct method of adding an existing docker container that is available on the docker hub to a Kubernetes cluster?</p>
<p>Update:</p>
<p>The result of command <code>kubectl describe services my-jenkins-pod-public</code> is :</p>
<pre><code>Name: my-jenkins-pod-public
Namespace: default
Labels: <none>
Annotations: kubernetes.digitalocean.com/load-balancer-id: d46ae9ae-6e8a-4fd8-aa58-43c08310059a
Selector: app=my-jenkins-pod
Type: LoadBalancer
IP Families: <none>
IP: 10.245.152.228
IPs: 10.245.152.228
LoadBalancer Ingress: 161.61.222.16
Port: <unset> 5001/TCP
TargetPort: 5001/TCP
NodePort: <unset> 30878/TCP
Endpoints: 10.214.12.12:5001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Trying to access <a href="http://161.61.222.16:30878/" rel="nofollow noreferrer">http://161.61.222.16:30878/</a> via browser returns:</p>
<blockquote>
<p>This site can't be reached. 159.65.211.46 refused to connect. Try:</p>
<p>Checking the connection Checking the proxy and the firewall
ERR_CONNECTION_REFUSED</p>
</blockquote>
<p>Trying to access <a href="http://161.61.222.16:5001/" rel="nofollow noreferrer">http://161.61.222.16:5001/</a> via browser returns:</p>
<blockquote>
<p>This page isn't working
161.61.222.16 didn't send any data.
ERR_EMPTY_RESPONSE</p>
</blockquote>
<p>Seems the port <code>5001</code> is exposed/accessible but is not sending any data.</p>
<p>I also tried accessing <code>10.214.12.12</code> on ports <code>5001</code> & <code>30878</code> but both requests time out.</p>
| blue-sky | <p>You need to use <a href="http://161.61.222.16:30878/" rel="nofollow noreferrer">http://161.61.222.16:30878/</a> from outside of the host that is running the containers. Port 5001 is only accessible inside the cluster via the internal IP (9.5.52.28 in your case). Whenever you expose your deployment, a NodePort (by default in the range 30000-32767) is automatically assigned to the service for external requests (you can also define it manually).</p>
<p>For service details, run the command below. Its output will give you the NodePort and other details.</p>
<pre><code>kubectl describe services my-service
</code></pre>
<p>Please check related kubernetes <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">documentation</a></p>
<p>Also, you have configured the service with port 5001, but Jenkins is listening on 8080 as far as I can see in the logs. Try changing the target port of the service from 5001 to 8080.</p>
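<p>As a minimal sketch, assuming you keep Jenkins on its default port 8080 (remember to also update <code>containerPort</code> in the Deployment accordingly), the Service could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-jenkins-pod
spec:
  type: LoadBalancer
  selector:
    app: my-jenkins-pod
  ports:
    - protocol: TCP
      port: 8080        # port exposed by the Service
      targetPort: 8080  # port the Jenkins container actually listens on
</code></pre>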
 | Kağan Mersin |
<p>Can we create multiple persistent volumes at a time through CSI ? Basically storage supports number of clones to be created from a source. I want to see if I can leverage this through CSI so that I will have just 1 call to create 10 clones.</p>
| Harrish A | <p>You can use <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamic volume provisioning</a> to have PVs created on demand.</p>
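<p>A minimal sketch of how a clone is requested with dynamic provisioning (the names here are placeholders, and your CSI driver must support volume cloning). Note that each such PVC results in one CreateVolume call to the driver, so ten clones means ten PVCs rather than a single batched call:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-1
spec:
  storageClassName: my-csi-sc        # assumption: a StorageClass backed by your CSI driver
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc                 # assumption: the claim of the existing source volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre>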
| Perryn Gordon |
<p>In my headless service, I configure sessionAffinity so that <strong>connections from a particular client are passed to the same Pod each time</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">as described here</a></p>
<p>Here is the manifest :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service1
spec:
clusterIP: None
selector:
app: nginx
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 30
</code></pre>
<p>I run some nginx pods to test :</p>
<pre><code>$ kubectl create deployment nginx --image=stenote/nginx-hostname
</code></pre>
<p>The problem is that when I curl my service, I am redirected to different pods and sessionAffinity seems to be ignored.</p>
<pre><code>$ kubectl run --generator=run-pod/v1 --rm utils -it --image arunvelsriram/utils bash
root@utils:/# for i in $(seq 1 10) ; do curl service1; done
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-b7fm2
nginx-78d58889-8rpxd
nginx-78d58889-b7fm2
nginx-78d58889-62jlw
nginx-78d58889-8rpxd
nginx-78d58889-62jlw
</code></pre>
<p>NB. When I check with</p>
<pre><code>$ kubectl describe svc service1
Name: service1
Namespace: abdelghani
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Families: <none>
IP: None
IPs: <none>
Session Affinity: ClientIP
Events: <none>
</code></pre>
<p><code>SessionAffinity</code> configuration is present.</p>
<p>Note that my service is headless, i.e. <code>clusterIP: None</code>. SessionAffinity seems to work fine with non-headless services, but I can't find a clear explanation in the documentation. Is this related to the platform not doing any proxying?</p>
<p>Abdelghani</p>
| Abdelghani | <p>When using a headless service (<code>clusterIP: None</code>) there is no proxying involved at all.</p>
<p>From <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">k8s docs</a>:</p>
<blockquote>
<p>For headless Services, a cluster IP is not allocated, kube-proxy does
not handle these Services, and there is no load balancing or proxying
done by the platform for them. How DNS is automatically configured
depends on whether the Service has selectors defined</p>
</blockquote>
<p>So when using a headless service, DNS responds with a randomized list of the IPs of all pods associated with the given service.</p>
<pre><code>/app # dig service1 +search +short
172.17.0.8
172.17.0.10
172.17.0.9
/app # dig service1 +search +short
172.17.0.9
172.17.0.10
172.17.0.8
/app # dig service1 +search +short
172.17.0.8
172.17.0.10
172.17.0.9
/app # dig service1 +search +short
172.17.0.10
172.17.0.9
172.17.0.8
/app # dig service1 +search +short
172.17.0.9
172.17.0.8
172.17.0.10
</code></pre>
<p>and curl just picks one of them and goes with it.</p>
<p>Since this happens on every request, each time you can get a different IP from DNS and therefore connect to a different pod.</p>
| Matt |
<p>I was trying to login to Grafana deployed using <a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">https://prometheus-community.github.io/helm-charts</a> using the credentials <code>admin:admin</code> but it was failing.</p>
<p>I found the correct credentials to log in to Grafana in the secret <code>grafana</code>, which are <code>admin:prom-operator</code>.
To see where this value gets injected into the secret, I went to the template and values.yaml files available at <a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/tree/main/charts/grafana</a></p>
<p>The <code>secret</code> files is written as:</p>
<pre><code>{{- if or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret)) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
type: Opaque
data:
{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
admin-user: {{ .Values.adminUser | b64enc | quote }}
{{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ include "grafana.password" . }}
{{- end }}
{{- end }}
{{- if not .Values.ldap.existingSecret }}
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
{{- end }}
{{- end }}
</code></pre>
<p>and part of the <code>values.yaml</code> file is written as</p>
<pre><code># Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword
# Use an existing secret for the admin user.
admin:
## Name of the secret. Can be templated.
existingSecret: ""
userKey: admin-user
passwordKey: admin-password
</code></pre>
<p>and the <code>_helpers.tml</code> contains</p>
<pre><code>{{/*
Looks if there's an existing secret and reuse its password. If not it generates
new password and use it.
*/}}
{{- define "grafana.password" -}}
{{- $secret := (lookup "v1" "Secret" (include "grafana.namespace" .) (include "grafana.fullname" .) ) }}
{{- if $secret }}
{{- index $secret "data" "admin-password" }}
{{- else }}
{{- (randAlphaNum 40) | b64enc | quote }}
{{- end }}
{{- end }}
</code></pre>
<p>which looks to me like the <code>admin-password</code> value is coming from the secret as it's not a random alphanumeric. This seems like a loop to me. Could you please explain to me how the default password value <code>prom-operator</code> is getting injected to the secret in the key <code>admin-password</code>?</p>
| nidooooz | <p><strong>tl;dr</strong>: Value is set in <code>kube-prometheus-stack</code> values as <code>grafana.adminPassword</code> and passed to grafana subchart</p>
<p><code>kube-prometheus-stack</code> uses the <code>grafana</code> chart as a dependency, so you have to look at both values.yaml files.</p>
<p>In <code>kube-prometheus-stack</code>, there's a default value for <code>grafana.adminPassword</code>:</p>
<p><a href="https://github.com/prometheus-community/helm-charts/blob/b8b561eca1df7d70f0cc1e19e831ad58cb8c37f0/charts/kube-prometheus-stack/values.yaml#L877" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/blob/b8b561eca1df7d70f0cc1e19e831ad58cb8c37f0/charts/kube-prometheus-stack/values.yaml#L877</a></p>
<p>This is passed down to <code>grafana</code> where it's referenced as <code>.Values.adminPassword</code> (see <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart</a>)</p>
<p>The <code>grafana</code> chart prefers this value over generating a random password with the helper block you listed. As it is passed from the parent chart <code>kube-prometheus-stack</code> and not empty, this value is used.</p>
<pre class="lang-yaml prettyprint-override"><code> {{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ include "grafana.password" . }}
{{- end }}
</code></pre>
<p><a href="https://github.com/grafana/helm-charts/blob/de3a51251f5b4fdd93715ba47d0065b282761a79/charts/grafana/templates/secret.yaml#L17-L21" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/blob/de3a51251f5b4fdd93715ba47d0065b282761a79/charts/grafana/templates/secret.yaml#L17-L21</a></p>
| sarcaustech |
<p>I am trying to run a pod where the default limit of my <strong>GKE Autopilot cluster is 500m vCPU</strong>, but I want to run all my pods with only 250m vCPU. I tried the command <code>kubectl run pod-0 --requests="cpu=150m" --restart=Never --image=example/only</code></p>
<p>But I get a warning: <code>Flag --requests has been deprecated, has no effect and will be removed in the future</code>. Then when I describe my pod, it is set to <code>500m</code>. I would like to know how to set resource requests/limits with a plain <code>kubectl run</code>.</p>
| Dean Christian Armada | <p>Since kubectl v1.21, all generators are deprecated.</p>
<p>Github issue: <a href="https://github.com/kubernetes/kubernetes/issues/93100" rel="nofollow noreferrer">Remove generators from kubectl commands</a> quote:</p>
<blockquote>
<p>kubectl command are going to get rid of the dependency to the
generators, and ultimately remove them entirely. <strong>The main goal is to
simplyfy the use of kubectl command</strong>. This is already done in kubectl
run, see <a href="https://github.com/kubernetes/kubernetes/pull/87077" rel="nofollow noreferrer">#87077</a> and <a href="https://github.com/kubernetes/kubernetes/pull/68132" rel="nofollow noreferrer">#68132</a>.</p>
</blockquote>
<p>So it looks like the <code>--limits</code> and <code>--requests</code> flags will no longer be available in the future.</p>
<p>Here is the PR that did it: <a href="https://github.com/kubernetes/kubernetes/pull/99732" rel="nofollow noreferrer">Drop deprecated run flags and deprecate unused ones</a>.</p>
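<p>Since the flags are going away, a simple alternative to <code>kubectl run</code> is to apply a small Pod manifest that sets the requests explicitly. A minimal sketch using the image name from the question:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  restartPolicy: Never
  containers:
    - name: pod-0
      image: example/only
      resources:
        requests:
          cpu: 250m
        limits:
          cpu: 250m
</code></pre>
<p>Save it as <code>pod-0.yaml</code> and create it with <code>kubectl apply -f pod-0.yaml</code>.</p>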
| Matt |
<p>I'm trying to setup a VPN to access my cluster's workloads without setting public endpoints.</p>
<p>Service is deployed using the OpenVPN helm chart, and kubernetes using Rancher v2.3.2</p>
<ul>
<li>replacing the L4 load balancer with a simple service discovery</li>
<li>edit configMap to allow TCP to go through the loadbalancer and reach the VPN</li>
</ul>
<p><strong>What does / doesn't work:</strong> </p>
<ul>
<li>OpenVPN client can connect successfully</li>
<li>Cannot ping public servers</li>
<li>Cannot ping Kubernetes services or pods</li>
<li>Can ping openvpn cluster IP "10.42.2.11"</li>
</ul>
<p><strong>My files</strong></p>
<p><code>vars.yml</code></p>
<pre><code>---
replicaCount: 1
nodeSelector:
openvpn: "true"
openvpn:
OVPN_K8S_POD_NETWORK: "10.42.0.0"
OVPN_K8S_POD_SUBNET: "255.255.0.0"
OVPN_K8S_SVC_NETWORK: "10.43.0.0"
OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
storageClass: "local-path"
service:
externalPort: 444
</code></pre>
<p>Connection works, but I'm not able to hit any ip inside my cluster.
The only ip I'm able to reach is the openvpn cluster ip.</p>
<p><code>openvpn.conf</code>:</p>
<pre><code>server 10.240.0.0 255.255.0.0
verb 3
key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto tcp
port 443
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"
push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"
</code></pre>
<p><code>client.ovpn</code></p>
<pre><code>client
nobind
dev tun
remote xxxx xxx tcp
CERTS CERTS
dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net
</code></pre>
<p>I don't really know how to debug this.</p>
<p>I'm using windows</p>
<p><code>route</code> command from client</p>
<pre><code>Destination Gateway Genmask Flags Metric Ref Use Ifac
0.0.0.0 livebox.home 255.255.255.255 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 256 0 0 eth0
192.168.1.17 0.0.0.0 255.255.255.255 U 256 0 0 eth0
192.168.1.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth0
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth1
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth1
0.0.0.0 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.2.11 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.43.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.240.0.1 10.240.0.5 255.255.255.255 U 0 0 0 eth1
127.0.0.0 0.0.0.0 255.0.0.0 U 256 0 0 lo
127.0.0.1 0.0.0.0 255.255.255.255 U 256 0 0 lo
127.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 lo
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
</code></pre>
<p>And finally <code>ifconfig</code></p>
<pre><code> inet 192.168.1.17 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 2a01:cb00:90c:5300:603c:f8:703e:a876 prefixlen 64 scopeid 0x0<global>
inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2 prefixlen 128 scopeid 0x0<global>
inet6 fe80::603c:f8:703e:a876 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:d8:61:31:22:32 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.0.6 netmask 255.255.255.252 broadcast 10.240.0.7
inet6 fe80::b9cf:39cc:f60a:9db2 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:ff:42:04:53:4d (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 1500
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0xfe<compat,link,site,host>
loop (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
</code></pre>
| ogdabou | <p>For anybody looking for a working sample, the following goes into your OpenVPN deployment alongside your container definition. It enables IP forwarding (<code>net.ipv4.ip_forward=1</code>) inside the pod before OpenVPN starts, so traffic from VPN clients can be routed on to the cluster network:</p>
<pre><code>initContainers:
- args:
- -w
- net.ipv4.ip_forward=1
command:
- sysctl
image: busybox
name: openvpn-sidecar
securityContext:
privileged: true
</code></pre>
| John Fedoruk |
<p>I want to enable the <code>ReadWriteMany</code> access mode for an EKS Persistent Volume. I came across the io2 volume type from AWS EBS, so I am using an io2 type volume.</p>
<p>storage_class.yaml</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: io2
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
type: io2
iopsPerGB: "200"
</code></pre>
<p>persistent_volume.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv
spec:
accessModes:
- ReadWriteMany
awsElasticBlockStore:
fsType: ext4
volumeID: <IO2 type volume ID>
capacity:
storage: 50Gi
storageClassName: io2
volumeMode: Filesystem
</code></pre>
<p>pv_claim.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
volumeMode: Filesystem
volumeName: pv
storageClassName: io2
</code></pre>
<p>When 3 pod replicas are deployed across 2 nodes in the same AZ, 2 replicas (on one node) successfully mount the io2 volume and start running, but the third replica on the other node does not mount the volume.</p>
<blockquote>
<p>Error -> Unable to attach or mount volumes: unmounted volumes['']</p>
</blockquote>
<p>Also, I want to understand whether io2 type volumes are meant to be mounted on multiple nodes (EC2 instances in the same AZ as the volume) in EKS with the ReadWriteMany access mode.</p>
| ateet | <p>It looks like there is an <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/449" rel="nofollow noreferrer">open feature request on the kubernetes-sigs/aws-ebs-csi-driver</a> repo, but no progress on it. So I guess it is not supported at the moment, but you can monitor the issue for updates.</p>
| Matt |
<p>I have installed the nginx ingress helm chart on CentOS 8 with Kubernetes 1.17 and containerd; the ingress pod failed with the error message below. The same helm chart worked on CentOS 7 with Docker.</p>
<pre><code>I0116 04:17:06.624547 8 flags.go:205] Watching for Ingress class: nginx
W0116 04:17:06.624803 8 flags.go:250] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0116 04:17:06.624844 8 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.27.1
Build: git-1257ded99
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.7
-------------------------------------------------------------------------------
I0116 04:17:06.624968 8 main.go:194] Creating API client for https://10.224.0.1:443
I0116 04:17:06.630907 8 main.go:238] Running in Kubernetes cluster version v1.17 (v1.17.0) - git (clean) commit 70132b0f130acc0bed193d9ba59dd186f0e634cf - platform linux/amd64
I0116 04:17:06.633567 8 main.go:91] Validated nginx-ingress/nginx-ingress-default-backend as the default backend.
F0116 04:17:06.843785 8 ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
</code></pre>
<p>If I remove this from the deployment, the ingress pod starts.</p>
<pre><code> capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
</code></pre>
<p>I would like to understand why the same helm chart is failing on containerd.</p>
<pre><code>containerd --version
containerd github.com/containerd/containerd 1.2.0
</code></pre>
<p>Adding the deployment:</p>
<pre><code>containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=nginx-ingress/nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=nginx-ingress/nginx-ingress-controller
- --default-ssl-certificate=nginx-ingress/ingress-tls
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: nginx-ingress
</code></pre>
<p><strong>error message</strong></p>
<pre><code>-------------------------------------------------------------------------------
W0116 16:02:30.074390 8 queue.go:130] requeuing nginx-ingress/nginx-ingress-controller, err
-------------------------------------------------------------------------------
Error: exit status 1
nginx: the configuration file /tmp/nginx-cfg613392629 syntax is ok
2020/01/16 16:02:30 [emerg] 103#103: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: configuration file /tmp/nginx-cfg613392629 test failed
</code></pre>
| sfgroups | <p>I experienced the same. The solution is not to remove the capabilities section but to change the <code>runAsUser</code>.</p>
<p>If you download the new release (0.27.1) deployment of the Nginx ingress controller, you can see:</p>
<pre><code> securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
</code></pre>
<p>The "runAsUser" line has a different user id. the user id in my old deployment was different so I got this error. Since I Changed the runAsUser to ID 101, the id in the kubernetes definitions is the same as the ID used in the new Nginx image and it works again :) </p>
| user12723163 |
<p>I am testing the connection between a persistent volume and a Kubernetes pod by running busybox, but am getting "can't open", "no such file or directory". In order to do further testing, I tried running</p>
<pre><code>echo ls /mntpoint/filename
</code></pre>
<p>This is obviously not the correct command. I have tried a few other iterations - too many to list here.</p>
<p>I want to run ls of the mountpoint and print to the console. How do I do this?</p>
<p>EDIT</p>
<p>My code was closest to Rohit's suggestion (below), so I made the following edits, but the code still does not work. Please help.</p>
<p>Persistent Volume</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: data
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: "/mnt/data"
storageClassName: test
</code></pre>
<p>Persistent Volume Claim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: test
</code></pre>
<p>Pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: persistent-volume
spec:
containers:
- name: busybox
command: ['tail', '-f', '/dev/null']
image: busybox
volumeMounts:
- name: data
mountPath: "/data"
volumes:
- name: data
persistentVolumeClaim:
claimName: data
</code></pre>
<p>EDIT 2</p>
<p>So, after taking a day off, I came back to my (still running) pod and the command (ls) worked. It works as expected on any directory (e.g. "ls /" or "ls /data").</p>
<p>My current interpretation is that I did not wait long enough before running the command - although that does not seem to explain it since I had been monitoring with "kubectl describe pod ." Also I have run the same test several times with short latency between the "apply" and "exec" commands and the behavior has been consistent today.</p>
<p>I am going to continue to keep playing with this, but I think the current problem has been solved. Than you!</p>
| user3877654 | <p>Steps you need to follow when dealing with volumes and Kubernetes resources:</p>
<ol>
<li>Create a Persistent volume.</li>
<li>Create a Persistent volume claim and make sure the state is <strong>bound</strong>.</li>
<li>Once the PV and PVC are bounded, try to use the PV from a pod/deployment through PVC.</li>
<li>Check the logs of the pod/deployment. You might see the entry of command execution.</li>
</ol>
<p>Reference: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a></p>
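<p>To answer the literal question: once the pod is Running, you can list the mount point from your own console with <code>kubectl exec</code>, for example (pod name and mount path taken from the manifests in the question):</p>
<pre><code># list the contents of the mounted volume and print them to your console
kubectl exec persistent-volume -- ls /data
</code></pre>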
<p>Hope this helps; please try to elaborate more and paste the logs from the above-mentioned steps.</p>
| Venkata Sai Hitesh |
<p>There is this example project on <a href="https://github.com/asynkron/protoactor-grains-tutorial" rel="nofollow noreferrer">GitHub</a> that I'm trying to deploy on local Kubernetes cluster (k3d). The developers of Proto.Actor described the k8s deployment pretty much here in the <a href="https://proto.actor/docs/cluster/getting-started-kubernetes/" rel="nofollow noreferrer">official docs</a>. The problem is that the documentation is deploying on Azure Kubernetes Service whereas I want to deploy on local k8s cluster (k3d).</p>
<p>As much as I understand the steps, it's as following:</p>
<ol>
<li>Build docker images for both projects in the solution [I was able to do that step]</li>
</ol>
<pre><code>docker build -f ./ProtoClusterTutorial/Dockerfile . -t proto-cluster-tutorial:1.0.0`
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t smart-bulb-simulator-app:1.0.0`
</code></pre>
<ol start="2">
<li>Push the docker images into a repository</li>
</ol>
<p>Push the docker images where? Local k3d repository? Docker Hub? GitHub Container Registry?</p>
<p>Next question, the file <code>values.yaml</code> in the Helm chart directory consists of a <code>repository</code> field (<a href="https://github.com/asynkron/protoactor-grains-tutorial/blob/main/chart-tutorial/values.yaml#L5" rel="nofollow noreferrer">here</a>). If I push the docker image to ghcr or Docker hub, I'll just put the image link there, but what if I have to use the k3d local repository? What link should I use in that case?</p>
<p>The next question is how does <code>kubectl get pods</code> know that it has to display the k3d cluster pods and not the Docker Desktop Kubernetes which I have enabled?</p>
<p>I would be grateful if you briefly list the steps that I have to accomplish using k3d, Helm chart and kubectl.</p>
| nop | <p>It doesn't matter where you push your images to, as long as it's a valid implementation of the <a href="https://github.com/opencontainers/distribution-spec" rel="nofollow noreferrer">OCI Distribution Spec</a> (a valid container registry). All the registry options you've listed would work, just pick the one that fits your needs.</p>
<p>Regarding the <code>values.yaml</code> file, the <code>repository</code> field is the url to the repository, depending on which container registry you decide to use (<code>docker.io</code> for Docker Hub, <code>ghcr.io</code> for Github Container Registry, etc.) Please check the docs of the container registry you choose for specific instructions of setting up repositories, building, pushing and pulling.</p>
<p><code>kubectl</code> gets its configuration from a kubeconfig file, which can contain multiple clusters. The k3d install script is most likely adding the new cluster as an entry to that config file and setting it as the current context for kubectl.</p>
<p>Back to your problem. A simpler solution might be to import the images in k3d manually as noted <a href="https://stackoverflow.com/a/72120733/13415624">in this answer</a>. I haven't used k3d myself so I can't guarantee this method will work, but it seems like a much simpler approach that can save you a lot of headache.</p>
<p>In case, however, you want to get your hands dirty and learn more about container repositories, helm and k8s, here's an example scenario with a repository hosted on <code>localhost:5000</code> and I strongly encourage you to check the relevant <code>docker/helm/kubernetes</code> docs for each step</p>
<ol>
<li>Login to your registry</li>
</ol>
<pre><code>docker login localhost:5000
</code></pre>
<ol start="2">
<li>Build the images</li>
</ol>
<pre><code>//Note how the image tag includes the repository url where they'll be pushed to
docker build -f ./ProtoClusterTutorial/Dockerfile . -t localhost:5000/proto-cluster-tutorial:1.0.0
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t localhost:5000/smart-bulb-simulator-app:1.0.0
</code></pre>
<ol start="3">
<li>Push the images</li>
</ol>
<pre><code>docker push localhost:5000/proto-cluster-tutorial:1.0.0
docker push localhost:5000/smart-bulb-simulator-app:1.0.0
</code></pre>
<ol start="4">
<li>Edit the <code>values.yaml</code></li>
</ol>
<pre><code> image:
repository: localhost:5000/proto-cluster-tutorial
pullPolicy: IfNotPresent
tag: "1.0.0"
</code></pre>
<ol start="5">
<li>Run <code>helm install</code> with the modified <code>values.yaml</code> file</li>
</ol>
<p>One thing I've noticed is that the guide's helm chart does not include a field for <code>imagePullSecrets</code>, since they are using Azure Container Registry and hosting the cluster on Azure, which handles the authentication automatically. This means that private repositories will not work with the chart in your scenario and you'll have to edit the helm chart and subsequently the <code>values.yaml</code> to make it work. You can read more about <code>imagePullSecrets</code> <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">here</a>.</p>
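<p>If you do end up pulling from a private repository, the secret itself can be created like this (a sketch; the registry URL, secret name and credentials are placeholders), and the chart's Deployment template would then need to reference it under <code>imagePullSecrets</code>, e.g. <code>imagePullSecrets: [{ name: regcred }]</code> in the pod spec:</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=localhost:5000 \
  --docker-username=&lt;your-username&gt; \
  --docker-password=&lt;your-password&gt;
</code></pre>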
| stefanbankow |
<p>I have a Python script that I run on Kubernetes.
After the Python script's process ends, Kubernetes restarts the pod, and I don't want that.</p>
<p>I tried to add a line of code from python script like that:</p>
<pre><code>text = input("please a key for exiting")
</code></pre>
<p>And I get an EOF error, so it seems the container has no stdin attached in my Kubernetes setup.</p>
<p>After that I tried to use restartPolicy: Never, but it is not accepted and I get an error like this:</p>
<pre><code>error validating data: ValidationError(Deployment.spec.template): unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;
</code></pre>
<p>How can I achieve this? I just want no restarts for this pod. The fix can be in the Python script or in the Kubernetes YAML file.</p>
| akasaa | <p>You get <code>unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;</code> because you most probably messed up some indentation.</p>
<p>Here is an example deploymeny with <strong>incorrect indentation</strong> of <code>restartPolicy</code> field:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
</code></pre>
<hr />
<p>Here is a deploymet with <strong>correct indentation</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
</code></pre>
<p>But this will result in error:</p>
<pre><code>kubectl apply -f deploy.yaml
The Deployment "nginx-deployment" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
</code></pre>
<p>Here is an explaination why: <a href="https://stackoverflow.com/questions/55169075/restartpolicy-unsupported-value-never-supported-values-always">restartpolicy-unsupported-value-never-supported-values-always</a></p>
<hr />
<p>If you want to run a one-time pod, use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s Job</a> or use a Pod directly:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: ngx
name: ngx
spec:
containers:
- image: nginx
name: ngx
restartPolicy: Never
</code></pre>
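<p>For completeness, a minimal Job for a run-to-completion script could look like this (the image is a placeholder for the one that runs your Python script; <code>backoffLimit: 0</code> also prevents retries on failure):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-python-script
spec:
  backoffLimit: 0          # do not retry if the script exits with an error
  template:
    spec:
      restartPolicy: Never # Jobs only allow Never or OnFailure
      containers:
        - name: my-python-script
          image: my-registry/my-python-image:latest
</code></pre>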
| Matt |
<p>I have a 3-node cluster running in VirtualBox and I'm trying to create NFS storage using a PV and PVC, but it seems that I'm doing something wrong.</p>
<p>I have the following:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-pv
labels:
type: nfs
spec:
capacity:
storage: 100Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /redis/data
server: 192.168.56.2 #ip of my master-node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 100Mi
storageClassName: slow
selector:
matchLabels:
type: nfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-master
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: redis
resources:
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: data
mountPath: "/redis/data"
ports:
- containerPort: 6379
volumes:
- name: data
persistentVolumeClaim:
claimName: redis-pvc
</code></pre>
<p>I've already installed <code>nfs-common</code> in all my nodes.</p>
<p>Whenever creating the PV, PVC and POD the pod does not start and I get the following:</p>
<pre><code>Warning FailedMount 30s kubelet, kubenode02 MountVolume.SetUp failed for volume "redis-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv --scope -- mount -t nfs -o hard,nfsvers=4.1 192.168.56.2:/redis/data /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv
Output: Running scope as unit run-rc316990c37b14a3ba24d5aedf66a3f6a.scope.
mount.nfs: Connection timed out
</code></pre>
<p>Here is the status of <code>kubectl get pv, pvc</code></p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/redis-pv 100Mi RWO Retain Bound default/redis-pvc slow 8s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/redis-pvc Bound redis-pv 100Mi RWO slow 8s
</code></pre>
<p>Any ideas of what am I missing?</p>
| Juliano Costa | <p>1 - You need to install your NFS server. Follow the instructions in this link:</p>
<p><a href="https://vitux.com/install-nfs-server-and-client-on-ubuntu/" rel="nofollow noreferrer">https://vitux.com/install-nfs-server-and-client-on-ubuntu/</a></p>
<p>2 - Create the shared folder where you want to persist your data, then mount it from the client:</p>
<pre><code>mount 192.168.56.2:/mnt/sharedfolder /mnt/shared/folder_client
</code></pre>
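<p>For reference, the server-side export behind steps 1 and 2 typically looks something like this (a sketch; adjust the path, subnet and options to your environment):</p>
<pre><code># /etc/exports on 192.168.56.2
/mnt/sharedfolder 192.168.56.0/24(rw,sync,no_subtree_check)

# reload the export table after editing
sudo exportfs -ra
</code></pre>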
<p>3 - Change the following in your PV.yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-pv
labels:
type: nfs
spec:
capacity:
storage: 100Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /mnt/sharedfolder
server: 192.168.56.2 #ip of my master-node
</code></pre>
| Yuri Oliveira |
<p>So far I have 2 directories:</p>
<p><code>aws/</code>
<code>k8s/</code></p>
<p>Inside <code>aws/</code> are <code>.tf</code> files describing a VPC, networking, security groups, IAM roles, EKS cluster, EKS node group, and a few EFS mounts. These are all using the AWS provider, the state in stored in S3.</p>
<p>Then in <code>k8s/</code> I'm then using the Kubernetes provider and creating Kubernetes resources inside the EKS cluster I created. This state is stored in the same S3 bucket in a different state file.</p>
<p>I'm having trouble figuring out how to mount the EFS mounts as Persistent Volumes to my pods.</p>
<p>I've found docs describing using an efs-provisioner pod to do this. See <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-pods-efs/" rel="noreferrer">How do I use EFS with EKS?</a>. </p>
<p>In more recent EKS docs they now say to use <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="noreferrer">Amazon EFS CSI Driver</a>. The first step is to do a <code>kubectl apply</code> of the following file.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
images:
- name: amazon/aws-efs-csi-driver
newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver
newTag: v0.2.0
- name: quay.io/k8scsi/livenessprobe
newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-liveness-probe
newTag: v1.1.0
- name: quay.io/k8scsi/csi-node-driver-registrar
newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar
newTag: v1.1.0
</code></pre>
<p>Does anyone know how I would do this in Terraform? Or how in general to mount EFS file shares as PVs to an EKS cluster?</p>
| Taylor Turner | <p>@BMW had it right, I was able to get this all into Terraform.</p>
<p>In the <code>aws/</code> directory I created all my AWS resources, VPC, EKS, workers, etc. and EFS mounts.</p>
<pre><code>resource "aws_efs_file_system" "example" {
creation_token = "${var.cluster-name}-example"
tags = {
Name = "${var.cluster-name}-example"
}
}
resource "aws_efs_mount_target" "example" {
count = 2
file_system_id = aws_efs_file_system.example.id
subnet_id = aws_subnet.this.*.id[count.index]
security_groups = [aws_security_group.eks-cluster.id]
}
</code></pre>
<p>I also export the EFS file system IDs from the AWS provider plan.</p>
<pre><code>output "efs_example_fsid" {
value = aws_efs_file_system.example.id
}
</code></pre>
<p><strong>After the EKS cluster is created I had to manually install the EFS CSI driver into the cluster before continuing.</strong></p>
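<p>At the time, the driver could be installed with something like the following (check the <code>aws-efs-csi-driver</code> repo for the currently recommended release ref before copying this):</p>
<pre><code>kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
</code></pre>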
<p>Then in the <code>k8s/</code> directory I reference the <code>aws/</code> state file so I can use the EFS file system IDs in the PV creation.</p>
<pre><code>data "terraform_remote_state" "remote" {
backend = "s3"
config = {
bucket = "example-s3-terraform"
key = "aws-provider.tfstate"
region = "us-east-1"
}
}
</code></pre>
<p>Then created the Persistent Volumes using the Kubernetes provider.</p>
<pre><code>resource "kubernetes_persistent_volume" "example" {
metadata {
name = "example-efs-pv"
}
spec {
storage_class_name = "efs-sc"
persistent_volume_reclaim_policy = "Retain"
capacity = {
storage = "2Gi"
}
access_modes = ["ReadWriteMany"]
persistent_volume_source {
nfs {
path = "/"
server = data.terraform_remote_state.remote.outputs.efs_example_fsid
}
}
}
}
</code></pre>
| Taylor Turner |
<p>I have 6 nodes, all of them have the label "group:emp", 4 of them have the label "ikind:spot", and 2 of them have the label "ikind:normal".</p>
<p>I use the deployment YAML below to assign one pod to a normal node and the others to spot nodes, but it didn't work.</p>
<p>I started to increase the number of pods from 1 to 6, but as soon as it reached 2, all the pods were assigned to spot nodes.</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: pod-test
namespace: emp
labels:
app: pod-test
spec:
replicas: 2
selector:
matchLabels:
app: pod-test
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: pod-test
spec:
containers:
- name: pod-test
image: k8s.gcr.io/busybox
args: ["sh","-c","sleep 60000"]
imagePullPolicy: Always
resources:
requests:
cpu: 10m
memory: 100Mi
limits:
cpu: 100m
memory: 200Mi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: group
operator: In
values:
- emp
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 70
preference:
matchExpressions:
- key: ikind
operator: In
values:
- spot
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- pod-test
topologyKey: ikind
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- pod-test
topologyKey: "kubernetes.io/hostname"
restartPolicy: Always
terminationGracePeriodSeconds: 10
dnsPolicy: ClusterFirst
schedulerName: default-scheduler
</code></pre>
| scut_yk | <p>I added a preferred <code>matchExpressions</code> entry for the normal nodes with weight 30, and it worked.
To avoid being influenced by the number of nodes of each kind, I adjusted the weights for normal and spot.</p>
<p>When replicas is 1, there is 1 pod on a normal node.</p>
<p>When replicas is 2, there is 1 pod on a normal node and 1 pod on a spot node.</p>
<p>When replicas is 3, there are 2 pods on normal nodes and 1 pod on a spot node.</p>
<pre><code>
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 70
preference:
matchExpressions:
- key: ikind
operator: In
values:
- normal
- weight: 30
preference:
matchExpressions:
- key: ikind
operator: In
values:
- spot
</code></pre>
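<p>To verify the placement after each scale-up, you can check which node each pod landed on and what <code>ikind</code> label each node carries (a quick sketch):</p>
<pre><code>kubectl get nodes -L ikind,group
kubectl get pods -l app=pod-test -o wide
</code></pre>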
| scut_yk |
<p>I use a self-hosted instance of GitLab to store my Docker images. As we've recently set up Project Access Tokens, we want to pull images on AKS using an individual Secret for each project registry. This means we have specific credentials for each image, even though they all live on the same registry host.</p>
<p>Problem is, Deployments have a global <code>imagePullSecrets</code> list that refers to multiple Secrets. And those Secrets, which essentially hold different credentials (one per GitLab Container Registry), share the same Docker Registry URL!</p>
<p>Put simply, here an example Deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
template:
spec:
containers:
- name: one
image: 'gitlab.company.com/project-one:1.0.0'
- name: two
image: 'gitlab.company.com/project-two:1.2.0'
imagePullSecrets:
- name: secret-project-one
- name: secret-project-two
</code></pre>
<p>The Secret 1 (secret-project-one):</p>
<pre class="lang-json prettyprint-override"><code>{
"auths": {
"https://gitlab.company.com": {
"username": "project_111_bot",
"password": "password-project-one",
"auth": "Password"
}
}
}
</code></pre>
<p>And the Secret 2 (secret-project-two):</p>
<pre class="lang-json prettyprint-override"><code>{
"auths": {
"https://gitlab.company.com": {
"username": "project_222_bot",
"password": "password-project-two",
"auth": "Password"
}
}
}
</code></pre>
<p>How is Kubernetes supposed to differentiate which Secret to use? Does it assume you have different URLs and will match the right Secret based on the image name? Or will it just try every Secret until one works?</p>
<p>Documentations don't seem to cover this scenario. Any help would be appreciated!</p>
| Quentin Lamotte | <p>Looking at the source code, I found that the kubelet simply tries them one by one and returns the first one that succeeds.</p>
<p>I am not going to explain the whole process of how I found this, but here is some explanation/proof:</p>
<p>When <a href="https://github.com/kubernetes/kubernetes/blob/a55bd631728590045b51a4f65bba31aed1415571/pkg/kubelet/kuberuntime/kuberuntime_image.go#L31" rel="nofollow noreferrer">PullImage</a> function is called, it <a href="https://github.com/kubernetes/kubernetes/blob/a55bd631728590045b51a4f65bba31aed1415571/pkg/kubelet/kuberuntime/kuberuntime_image.go#L38" rel="nofollow noreferrer">grabs the docker registry credentials</a> and <a href="https://github.com/kubernetes/kubernetes/blob/a55bd631728590045b51a4f65bba31aed1415571/pkg/kubelet/kuberuntime/kuberuntime_image.go#L59-L76" rel="nofollow noreferrer">loops over them</a> trying one by one to get an image, and <a href="https://github.com/kubernetes/kubernetes/blob/a55bd631728590045b51a4f65bba31aed1415571/pkg/kubelet/kuberuntime/kuberuntime_image.go#L72" rel="nofollow noreferrer">returns the first one that it finds</a> that <a href="https://github.com/kubernetes/kubernetes/blob/a55bd631728590045b51a4f65bba31aed1415571/pkg/kubelet/kuberuntime/kuberuntime_image.go#L71" rel="nofollow noreferrer">does not result in error</a>.</p>
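<p>A possible workaround, if the prefix matching in the keyring linked above behaves as it appears to: merge both credentials into a single Secret whose auth entries are scoped to each project path, so the most specific entry wins per image. This is only a sketch using the project paths from the question; I have not verified it against every kubelet version:</p>
<pre><code>cat > config.json <<'EOF'
{
  "auths": {
    "gitlab.company.com/project-one": {
      "username": "project_111_bot",
      "password": "password-project-one"
    },
    "gitlab.company.com/project-two": {
      "username": "project_222_bot",
      "password": "password-project-two"
    }
  }
}
EOF
kubectl create secret generic gitlab-projects \
  --from-file=.dockerconfigjson=config.json \
  --type=kubernetes.io/dockerconfigjson
</code></pre>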
| Matt |
<p>I've installed <strong>Prometheus</strong> and <strong>Grafana</strong> to monitor my <strong>K8S cluster</strong> and <strong>microservices</strong> using <strong>helm charts</strong>:</p>
<pre><code>helm install monitoring prometheus-community/kube-promehteus-stack --values prometheus-values.yaml --version 16.10.0 --namespace monitoring --create-namespace
</code></pre>
<p>The content of <code>prometheus-values.yaml</code> is:</p>
<pre><code>prometheus:
prometheusSpec:
serviceMonitorSelector:
matchLabels:
prometheus: devops
commonLabels:
prometheus: devops
grafana:
adminPassword: test123
</code></pre>
<p>Then I installed <code>kong-ingress-controller</code> using <code>helm-charts</code>:</p>
<pre><code>helm install kong kong/kong --namespace kong --create-namespace --values kong.yaml --set ingressController.installCRDs=false
</code></pre>
<p>The content of the <code>kong.yaml</code> file is:</p>
<pre><code>podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8100"
</code></pre>
<p>I've also changed the value of <code>metricsBindAddress</code> in the <code>kube-proxy</code> <strong>configmap</strong> to <code>0.0.0.0:10249</code>.</p>
<p>Then I installed the <code>kong prometheus plugin</code> using this <code>yaml file</code>:</p>
<pre><code>apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
name: prometheus
annotations:
kubernetes.io/ingress.class: kong
labels:
global: "true"
plugin: prometheus
</code></pre>
<p>My Kong Endpoints object is:</p>
<p><code>$ kubectl edit endpoints -n kong kong-kong-proxy</code></p>
<pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2021-10-27T03:28:25Z"
creationTimestamp: "2021-10-26T04:44:57Z"
labels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
name: kong-kong-proxy
namespace: kong
resourceVersion: "6553623"
uid: 91f2054f-7fb9-4d63-8b65-be098b8f6547
subsets:
- addresses:
- ip: 10.233.96.41
nodeName: node2
targetRef:
kind: Pod
name: kong-kong-69fd7d7698-jjkj5
namespace: kong
resourceVersion: "6510442"
uid: 26c6bdca-e9f1-4b32-91ff-0fadb6fce529
ports:
- name: kong-proxy
port: 8000
protocol: TCP
- name: kong-proxy-tls
port: 8443
protocol: TCP
</code></pre>
<p>Finally, I wrote the <code>ServiceMonitor</code> for <code>kong</code> like this:</p>
<pre><code>
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
generation: 1
labels:
prometheus: devops
name: kong-sm
namespace: kong
spec:
endpoints:
- interval: 30s
port: kong-proxy
scrapeTimeout: 10s
namespaceSelector:
matchNames:
- kong
selector:
matchLabels:
app.kubernetes.io/instance: kong
</code></pre>
<p>After all of this, the <code>targets</code> page in the <code>prometheus dashboard</code> looks like this:</p>
<p><a href="https://i.stack.imgur.com/GikR0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GikR0.png" alt="enter image description here" /></a></p>
<p>What did I miss/do wrong?</p>
| samm13 | <p>Let's take a look at the <strong>Kong deployment</strong> first (pay extra attention to the ports near the bottom of this file):</p>
<p><code>kubectl edit deploy -n kong kong-kong</code> :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
[...]
creationTimestamp: "2021-10-26T04:44:58Z"
generation: 1
labels:
app.kubernetes.io/component: app
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
helm.sh/chart: kong-2.5.0
name: kong-kong
namespace: kong
[...]
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: app
app.kubernetes.io/instance: kong
app.kubernetes.io/name: kong
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
[...]
- env:
[...]
image: kong:2.6
imagePullPolicy: IfNotPresent
[...]
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-tls
protocol: TCP
############################################
## THIS PART IS IMPORTANT TO US : #
############################################
- containerPort: 8100
name: status
protocol: TCP
[...]
</code></pre>
<p>As you can see, in the <code>spec.template.spec.containers[0].ports</code> section there are <strong>3 ports</strong>; port <strong>8100</strong> is the status port used to expose <strong>metrics</strong>, so if you can't see this port in the <strong>kong endpoint</strong>, add it manually to the bottom of the kong endpoint:</p>
<p><code>$ kubectl edit endpoints -n kong kong-kong-proxy</code> :</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2021-10-26T04:44:58Z"
creationTimestamp: "2021-10-26T04:44:57Z"
labels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
name: kong-kong-proxy
namespace: kong
resourceVersion: "7160332"
uid: 91f2054f-7fb9-4d63-8b65-be098b8f6547
subsets:
- addresses:
- ip: 10.233.96.41
nodeName: node2
targetRef:
kind: Pod
name: kong-kong-69fd7d7698-jjkj5
namespace: kong
resourceVersion: "6816178"
uid: 26c6bdca-e9f1-4b32-91ff-0fadb6fce529
ports:
- name: kong-proxy
port: 8000
protocol: TCP
- name: kong-proxy-tls
port: 8443
protocol: TCP
#######################################
## ADD THE 8100 PORT HERE #
#######################################
- name: kong-status
port: 8100
protocol: TCP
</code></pre>
<p>Then save this file and change the <strong>serviceMonitor</strong> of <strong>kong</strong> like this (the <strong>port</strong> name is the <strong>same</strong> as the endpoint port name we just added):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
generation: 1
labels:
prometheus: devops
name: kong-sm
namespace: kong
spec:
endpoints:
- interval: 30s
#############################################################################
## THE NAME OF THE PORT IS SAME TO THE NAME WE ADDED TO THE ENDPOINT FILE #
#############################################################################
port: kong-status
scrapeTimeout: 10s
namespaceSelector:
matchNames:
- kong
selector:
matchLabels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
</code></pre>
<p>Apply the <strong>serviceMonitor</strong> yaml file, and after a few seconds <strong>Prometheus</strong> will detect it as a <strong>target</strong> and scrape Kong's metrics successfully.</p>
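<p>To double-check from outside Prometheus, you can port-forward the status port and look for Kong's metrics yourself (assuming the prometheus plugin exposes them on the status listener, which is the usual behaviour):</p>
<pre><code>kubectl -n kong port-forward deploy/kong-kong 8100:8100 &
curl -s http://localhost:8100/metrics | head
</code></pre>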
| samm13 |
<p>I want my prometheus server to scrape metrics from a pod.</p>
<p>I followed these steps:</p>
<ol>
<li>Created a pod using deployment - <code>kubectl apply -f sample-app.deploy.yaml</code></li>
<li>Exposed the same using <code>kubectl apply -f sample-app.service.yaml</code></li>
<li>Deployed Prometheus server using <code>helm upgrade -i prometheus prometheus-community/prometheus -f prometheus-values.yaml</code></li>
<li>created a serviceMonitor using <code>kubectl apply -f service-monitor.yaml</code> to add a target for prometheus.</li>
</ol>
<p>All pods are running, but when I open prometheus dashboard, <strong>I don't see <em>sample-app service</em> as prometheus target, under status>targets in dashboard UI.</strong></p>
<p>I've verified the following:</p>
<ol>
<li>I can see <code>sample-app</code> when I execute <code>kubectl get servicemonitors</code></li>
<li>I can see sample-app exposes metrics in Prometheus format at <code>/metrics</code></li>
</ol>
<p>At this point I debugged further, exec'd into the Prometheus pod using
<code>kubectl exec -it pod/prometheus-server-65b759cb95-dxmkm -c prometheus-server sh</code>,
and saw that the Prometheus configuration (/etc/config/prometheus.yml) didn't have sample-app as one of the jobs, so I edited the ConfigMap using</p>
<p><code>kubectl edit cm prometheus-server -o yaml</code> and added:</p>
<pre><code> - job_name: sample-app
static_configs:
- targets:
- sample-app:8080
</code></pre>
<p>All other fields, such as the <strong>scrape</strong> interval and scrape_timeout, stay at their defaults.</p>
<p>I can see the same has been reflected in /etc/config/prometheus.yml, but still prometheus dashboard doesn't show <code>sample-app</code> as targets under status>targets.</p>
<p>following are yamls for prometheus-server and service monitor.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"prometheus-server-configmap-reload"},{"name":"prometheus-server"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server-configmap-reload"},{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server"}]},"modified":true}'
deployment.kubernetes.io/revision: "1"
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: prom
creationTimestamp: "2021-06-24T10:42:31Z"
generation: 1
labels:
app: prometheus
app.kubernetes.io/managed-by: Helm
chart: prometheus-14.2.1
component: server
heritage: Helm
release: prometheus
name: prometheus-server
namespace: prom
resourceVersion: "6983855"
selfLink: /apis/apps/v1/namespaces/prom/deployments/prometheus-server
uid: <some-uid>
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: prometheus
component: server
release: prometheus
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: prometheus
chart: prometheus-14.2.1
component: server
heritage: Helm
release: prometheus
spec:
containers:
- args:
- --volume-dir=/etc/config
- --webhook-url=http://127.0.0.1:9090/-/reload
image: jimmidyson/configmap-reload:v0.5.0
imagePullPolicy: IfNotPresent
name: prometheus-server-configmap-reload
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
readOnly: true
- args:
- --storage.tsdb.retention.time=15d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /-/healthy
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 10
name: prometheus-server
ports:
- containerPort: 9090
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /-/ready
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 4
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
- mountPath: /data
name: storage-volume
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
serviceAccount: prometheus-server
serviceAccountName: prometheus-server
terminationGracePeriodSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: prometheus-server
name: config-volume
- name: storage-volume
persistentVolumeClaim:
claimName: prometheus-server
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-06-24T10:43:25Z"
lastUpdateTime: "2021-06-24T10:43:25Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-06-24T10:42:31Z"
lastUpdateTime: "2021-06-24T10:43:25Z"
message: ReplicaSet "prometheus-server-65b759cb95" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p>yaml for service Monitor</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2021-06-24T07:55:58Z","generation":1,"labels":{"app":"sample-app","release":"prometheus"},"name":"sample-app","namespace":"prom","resourceVersion":"6884573","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app","uid":"34644b62-eb4f-4ab1-b9df-b22811e40b4c"},"spec":{"endpoints":[{"port":"http"}],"selector":{"matchLabels":{"app":"sample-app","release":"prometheus"}}}}
creationTimestamp: "2021-06-24T07:55:58Z"
generation: 2
labels:
app: sample-app
release: prometheus
name: sample-app
namespace: prom
resourceVersion: "6904642"
selfLink: /apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app
uid: <some-uid>
spec:
endpoints:
- port: http
selector:
matchLabels:
app: sample-app
release: prometheus
</code></pre>
| chandraSiri | <p>You need to use the <code>prometheus-community/kube-prometheus-stack</code> chart, which includes the Prometheus operator, in order to have Prometheus' configuration update automatically based on ServiceMonitor resources.</p>
<p>The <code>prometheus-community/prometheus</code> chart you used does not include the Prometheus operator that watches for ServiceMonitor resources in the Kubernetes API and updates the Prometheus server's ConfigMap accordingly.</p>
<p>It seems that you have the necessary CustomResourceDefinitions (CRDs) installed in your cluster, otherwise you would not have been able to create a ServiceMonitor resource. These are not included in the <code>prometheus-community/prometheus</code> chart so perhaps they were added to your cluster previously.</p>
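<p>A rough sketch of switching charts (release and namespace names follow the ones in the question). Note that, at least in recent versions of the chart, the operator by default only picks up ServiceMonitors labelled with <code>release: prometheus</code> (the Helm release name), which your ServiceMonitor already has:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade -i prometheus prometheus-community/kube-prometheus-stack -n prom
</code></pre>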
| Arthur Busser |
<p>I'm trying to create a Kubernetes cluster on Google Cloud Platform through Python (3.7) using the google-cloud-container module.</p>
<p>I created a Kubernetes cluster through Google Cloud Platform and was able to successfully retrieve details for that cluster using google-cloud-container (the Python module).</p>
<p>I'm now trying to create a Kubernetes cluster through this module. I created a JSON file with the required key values and passed it as a parameter, but I'm getting errors. I would appreciate a sample of code for creating a Kubernetes cluster on Google Cloud Platform. Thank you in advance.</p>
<pre class="lang-py prettyprint-override"><code>from google.oauth2 import service_account
from google.cloud import container_v1
class GoogleCloudKubernetesClient(object):
def __init__(self, file, project_id, project_name, zone, cluster_id):
credentials = service_account.Credentials.from_service_account_file(
filename=file)
self.client = container_v1.ClusterManagerClient(credentials=credentials)
self.project_id = project_id
self.zone = zone
def create_cluster(self, cluster):
print(cluster)
response = self.client.create_cluster(self.project_id, self.zone, cluster=cluster)
print(f"response for cluster creation: {response}")
def main():
cluster_data = {
"name": "test_cluster",
"masterAuth": {
"username": "admin",
"clientCertificateConfig": {
"issueClientCertificate": True
}
},
"loggingService": "logging.googleapis.com",
"monitoringService": "monitoring.googleapis.com",
"network": "projects/abhinav-215/global/networks/default",
"addonsConfig": {
"httpLoadBalancing": {},
"horizontalPodAutoscaling": {},
"kubernetesDashboard": {
"disabled": True
},
"istioConfig": {
"disabled": True
}
},
"subnetwork": "projects/abhinav-215/regions/us-west1/subnetworks/default",
"nodePools": [
{
"name": "test-pool",
"config": {
"machineType": "n1-standard-1",
"diskSizeGb": 100,
"oauthScopes": [
"https://www.googleapis.com/auth/cloud-platform"
],
"imageType": "COS",
"labels": {
"App": "web"
},
"serviceAccount": "[email protected]",
"diskType": "pd-standard"
},
"initialNodeCount": 3,
"autoscaling": {},
"management": {
"autoUpgrade": True,
"autoRepair": True
},
"version": "1.11.8-gke.6"
}
],
"locations": [
"us-west1-a",
"us-west1-b",
"us-west1-c"
],
"resourceLabels": {
"stage": "dev"
},
"networkPolicy": {},
"ipAllocationPolicy": {},
"masterAuthorizedNetworksConfig": {},
"maintenancePolicy": {
"window": {
"dailyMaintenanceWindow": {
"startTime": "02:00"
}
}
},
"privateClusterConfig": {},
"databaseEncryption": {
"state": "DECRYPTED"
},
"initialClusterVersion": "1.11.8-gke.6",
"location": "us-west1-a"
}
kube = GoogleCloudKubernetesClient(file='/opt/key.json', project_id='abhinav-215', zone='us-west1-a')
kube.create_cluster(cluster_data)
if __name__ == '__main__':
main()
</code></pre>
<p>Actual output:</p>
<pre><code>
Traceback (most recent call last):
File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 118, in <module>
main()
File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 113, in main
kube.create_cluster(cluster_data)
File "/opt/matilda_linux/matilda_linux_logtest/matilda_discovery/matilda_discovery/test/google_auth.py", line 31, in create_cluster
response = self.client.create_cluster(self.project_id, self.zone, cluster=cluster)
File "/opt/matilda_discovery/venv/lib/python3.6/site-packages/google/cloud/container_v1/gapic/cluster_manager_client.py", line 407, in create_cluster
project_id=project_id, zone=zone, cluster=cluster, parent=parent
ValueError: Protocol message Cluster has no "masterAuth" field.
</code></pre>
| Abhinav | <p>Kind of late answer, but I had the same problem and figured it out. Worth writing for future viewers.</p>
<p>You should not write the field names in cluster_data as they appear in the <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters" rel="nofollow noreferrer">REST API</a>.
Instead, you should translate them to the Python convention, with words separated by underscores instead of camelCase.
Thus, instead of writing <code>masterAuth</code> you should write <code>master_auth</code>, instead of <code>loggingService</code> write <code>logging_service</code>, instead of <code>initialClusterVersion</code> write <code>initial_cluster_version</code>, and so on. Make similar changes to the rest of your fields, and then the script should work.</p>
<p>P.S. You aren't using the project_name and cluster_id params in <code>GoogleCloudKubernetesClient.__init__</code>. Not sure what they are for, but you should probably remove them.</p>
| Nimrod Fiat |
<p>I'm trying to replace the in-memory storage of the <code>Grafana</code> deployment with <code>persistent storage</code> using <code>kustomize</code>. What I'm doing is removing the in-memory storage and then mounting persistent storage instead, but when I deploy it I get an error.</p>
<p><strong>Error</strong></p>
<p><strong>The Deployment "grafana" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "grafana-storage"</strong></p>
<p><strong>Kustomize version</strong></p>
<p><code>{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:linux GoArch:amd64}</code></p>
<p><strong>kustomization.yaml</strong></p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/prometheus-operator/kube-prometheus
- grafana-pvc.yaml
patchesStrategicMerge:
- grafana-patch.yaml
</code></pre>
<p><strong>grafana-pvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-storage
namespace: monitoring
labels:
billingType: "hourly"
region: sng01
zone: sng01
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: ibmc-file-bronze
</code></pre>
<p><strong>grafana-patch.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
namespace: monitoring
spec:
template:
spec:
volumes:
# use persistent storage for storing users instead of in-memory storage
- $patch: delete <---- trying to remove the previous volume
name: grafana-storage
- name: grafana-storage
persistentVolumeClaim:
claimName: grafana-storage
containers:
- name: grafana
volumeMounts:
- name: grafana-storage
mountPath: /var/lib/grafana
</code></pre>
<p>please help.</p>
| metadata | <p>The <code>$patch: delete</code> doesn't seem to work as I would expect.</p>
<p>It may be nice to open an issue on kustomize github: <a href="https://github.com/kubernetes-sigs/kustomize/issues" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/issues</a> and ask developers about it.</p>
<hr />
<p>Although here is the patch I tried, and it seems to work:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
namespace: monitoring
spec:
template:
spec:
volumes:
- name: grafana-storage
emptyDir: null
persistentVolumeClaim:
claimName: grafana-storage
containers:
- name: grafana
volumeMounts:
- name: grafana-storage
mountPath: /var/lib/grafana
</code></pre>
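<p>You can check the rendered Deployment before applying anything, to confirm the <code>emptyDir</code> is gone and the claim is in place, with something like:</p>
<pre><code>kubectl kustomize . | grep -B 2 -A 4 "grafana-storage"
</code></pre>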
<hr />
<p>Based on <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md</a></p>
<p>The following should also work in theory:</p>
<pre><code>spec:
volumes:
- $retainKeys:
- name
- persistentVolumeClaim
name: grafana-storage
persistentVolumeClaim:
claimName: grafana-storage
</code></pre>
<p>But in practice it doesn't, and I think that's because kustomize has its own implementation of strategic merge (different from the one in Kubernetes itself).</p>
| Matt |
<p>I have a cluster with Nginx ingress. I receive an API request, for example:</p>
<pre><code>/api/v1/user?json={query}
</code></pre>
<p>I want to redirect this request with ingress to my service. I want to modify it like this:</p>
<pre><code>/api/v2/user/{query}
</code></pre>
| Ruben Aleksanyan | <p>Assuming your domain name is <code>example.com</code> and you have a service called <code>example-service</code> exposing port <code>80</code>, you can achieve this by defining the following Ingress rule.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($arg_json) {
return 302 https://example.com/api/v2/user/$arg_json;
}
nginx.ingress.kubernetes.io/use-regex: 'true'
name: ingress-rule
namespace: default
spec:
rules:
- host: example.com
http:
paths:
- backend:
service:
name: example-service
port:
number: 80
path: /api/v1/user(.*)
pathType: Prefix
</code></pre>
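<p>Assuming DNS for <code>example.com</code> points at the ingress controller, a quick way to check the rewrite (the query value here is arbitrary):</p>
<pre><code>curl -i "http://example.com/api/v1/user?json=123"
# expect a 302 with: Location: https://example.com/api/v2/user/123
</code></pre>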
| Al-waleed Shihadeh |
<p>I've installed <code>kong-ingress-controller</code> using a YAML file on a 3-node k8s cluster,
but I'm getting this (the pod status is <code>CrashLoopBackOff</code>):</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
</code></pre>
<p>There are 2 container declarations in the Kong YAML file: <code>proxy</code> and <code>ingress-controller</code>.
The first one is up and running, but the <code>ingress-controller</code> container is not:</p>
<pre><code>$kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong |less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
</code></pre>
<p>And here is the log of <code>ingress-controller</code> container:</p>
<pre><code>-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
</code></pre>
<p>If someone could help me find a solution, that would be awesome.</p>
<p>============================================================</p>
<p><strong>UPDATE</strong>:</p>
<p>The <code>kong-ingress-controller</code>'s yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
</code></pre>
| samm13 | <p>I had installed <code>kubernetes:1.22</code> and tried to use <code>kong/kubernetes-ingress-controller:1.3</code> .</p>
<p>as @mdaniel said in the comment:</p>
<blockquote>
<p>Upon further investigation into that repo, 1.x only works up to k8s 1.21 so I'll delete my answer and you'll have to downgrade your cluster(!) or find an alternative Ingress controller</p>
</blockquote>
<p>Based on the documentation (you can find this at <a href="https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/references/version-compatibility/" rel="nofollow noreferrer">KIC version compatibility</a>):</p>
<p><a href="https://i.stack.imgur.com/vpCS7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vpCS7.png" alt="version compatibility" /></a></p>
<p>As you can see, <code>Kong/kubernetes-ingress-controller</code> supports a maximum <code>kubernetes</code> version of <code>1.21</code> (at the time of writing this answer). So I decided to downgrade my cluster to version <code>1.20</code>, and this solved my problem.</p>
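<p>To compare versions on your own cluster before deciding, something like this shows the server version and the controller image in use:</p>
<pre><code>kubectl version --short
kubectl -n kong get deploy ingress-kong -o jsonpath='{.spec.template.spec.containers[*].image}'
</code></pre>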
| samm13 |
<p>I'm facing this error in Kubernetes: <em>0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.</em> My application server is down.
First, I just added one file in a DaemonSet; due to memory allocation (we have only one node), all pods failed to schedule and are stuck in the Pending state. If I delete all deployments and run any new deployments, they also show the <strong>Pending</strong> condition. Please help me sort out this issue. I also tried the taint commands, but they don't work.
From my side, can I add a node to the existing cluster, or should I revoke (replace) the instance? Thanks in advance.</p>
| Cyril | <p>You need to configure autoscaling for the cluster (it doesn't work by default):
<a href="https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html</a>
Or you can manually change the desired size of the node group.
Also, make sure that your deployment has resource requests that fit your nodes.</p>
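<p>Before resizing, it can help to confirm the node really is unreachable and to see its taints, for example:</p>
<pre><code>kubectl get nodes -o wide
kubectl describe node <node-name> | grep -i taint
</code></pre>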
| pluk |
<p>I have to create a secret like this, but with Python:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret generic mysecret -n mynamespace \
--from-literal=etcdpasswd=$(echo -n "PASSWORD" | base64)
</code></pre>
<p>How do I do it using the <code>create_namespaced_secret</code> API of the <code>kubernetes</code> Python client library?</p>
| user2553620 | <p>Try this:</p>
<pre><code>from kubernetes import client, config
from kubernetes.client.rest import ApiException
import base64
import traceback
from pprint import pprint
secret_name = 'mysecret'
namespace = 'mynamespace'
# Secret 'data' values must be base64-encoded strings (bytes in / str out for Python 3)
data = {'etcdpasswd': base64.b64encode(b'<PASSWORD>').decode('utf-8')}
config.load_kube_config()
core_api_instance = client.CoreV1Api()
dry_run = None
pretty = 'true'
body = client.V1Secret()
body.api_version = 'v1'
body.data = data
body.kind = 'Secret'
body.metadata = {'name': secret_name}
body.type = 'Opaque'
try:
if dry_run != None:
api_response = core_api_instance.create_namespaced_secret(namespace, body, pretty=pretty, dry_run=dry_run)
else:
api_response = core_api_instance.create_namespaced_secret(namespace, body, pretty=pretty)
pprint(api_response)
except ApiException as e:
print("%s" % (str(e)))
traceback.print_exc()
raise
</code></pre>
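<p>You can then check what ended up in the cluster with, for example:</p>
<pre><code>kubectl get secret mysecret -n mynamespace -o jsonpath='{.data.etcdpasswd}' | base64 -d
</code></pre>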
| pekowski |
<p>I'm trying to run my Kubernetes app using minikube on Ubuntu 20.04 and applied a secret to pull a private Docker image from Docker Hub, but it doesn't seem to work correctly.</p>
<blockquote>
<p>Failed to pull image "xxx/node-graphql:latest": rpc error: code
= Unknown desc = Error response from daemon: pull access denied for xxx/node-graphql, repository does not exist or may require
'docker login': denied: requested access to the resource is denied</p>
</blockquote>
<p>Here's the secret generated by</p>
<pre><code>kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<pathtofile>.docker/config.json \
--type=kubernetes.io/dockerconfigjson
</code></pre>
<p>And here's the secret yaml file I have created</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: xxx9tRXpNakZCSTBBaFFRPT0iCgkJfQoJfQp9
kind: Secret
metadata:
name: node-graphql-secret
uid: xxx-2e18-44eb-9719-xxx
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Did anyone try to pull a private docker image into Kubernetes using a secret? Any kind of help would be appreciated. Thank you!</p>
| DreamBold | <p>I managed to add the secrets config in the following steps.</p>
<p>First, you need to login to docker hub using:</p>
<pre><code>docker login
</code></pre>
<p>Next, you create a k8s secret running:</p>
<pre><code>kubectl create secret generic <your-secret-name>\\n --from-file=.dockerconfigjson=<pathtoyourdockerconfigfile>.docker/config.json \\n --type=kubernetes.io/dockerconfigjson
</code></pre>
<p>And then get the secret in yaml format</p>
<pre><code>kubectl get secret -o yaml
</code></pre>
<p>It should look like this:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
data:
.dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tl
kind: Secret
metadata:
creationTimestamp: "2022-10-27T23:06:01Z"
name: <your-secret-name>
namespace: default
resourceVersion: "513"
uid: xxxx-0f12-4beb-be41-xxx
type: kubernetes.io/dockerconfigjson
kind: List
metadata:
resourceVersion: ""
</code></pre>
<p>And I have copied the content for the secret in the secret yaml file:</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tlci
kind: Secret
metadata:
creationTimestamp: "2022-10-27T23:06:01Z"
name: <your-secret-name>
namespace: default
resourceVersion: "513"
uid: xxx-0f12-4beb-be41-xxx
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>It works! This is a simple approach to using <code>Secret</code> to pull a private docker image for K8s.</p>
<p>As a side note, to apply the secret, run <code>kubectl apply -f secret.yml</code></p>
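<p>For the pull to actually use the secret, it also has to be referenced from the pod spec's <code>imagePullSecrets</code>, or attached to the ServiceAccount so it is added automatically. A sketch of the latter (patching the <code>default</code> ServiceAccount):</p>
<pre><code>kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "<your-secret-name>"}]}'
</code></pre>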
<p>Hope it helps</p>
| DreamBold |
<p>I have managed to run one gRPC service on the root path. But when I tried to add more gRPC services by adding custom path routes in the VirtualService, it didn't work. Any help would be highly appreciated.</p>
<p>This is the gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
<p>I have this VirtualService routing to only one gRPC service, and it <strong>is working fine</strong>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-virtual-svc
spec:
hosts:
- "*"
gateways:
- my-gateway
http:
- name: "my-grpc-1"
match:
- uri:
prefix: "/"
route:
- destination:
port:
number: 9090
host: my-grpc-1-svc
</code></pre>
<p>But when I try something like the following, <strong>it is not working</strong>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-virtual-svc
spec:
hosts:
- "*"
gateways:
- my-gateway
http:
- name: "my-grpc-1"
match:
- uri:
prefix: "/my-grpc-1"
route:
- destination:
port:
number: 9090
host: my-grpc-1-svc
- name: "my-grpc-2"
match:
- uri:
prefix: "/my-grpc-2"
route:
- destination:
port:
number: 9090
host: my-grpc-2-svc
</code></pre>
| Prata | <p>Here are some tips when working with grpc and istio (tested with istio 1.9.2).</p>
<p>1. Set the protocol in the Service (via <code>appProtocol</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  ports:
    - port: 9090
      appProtocol: grpc
</code></pre>
<p>2. Set the protocol in the Gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway
spec:
servers:
- port:
number: 80
name: grpc
protocol: GRPC
hosts:
- "*"
</code></pre>
<p>3. The gRPC-generated URI path prefix will be the same as the <code>package</code> value in the .proto file:</p>
<pre><code>syntax = "proto3";
package somepath;
</code></pre>
<p>use it as <code>match.uri.prefix</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-virtual-svc
spec:
http:
- name: "my-grpc-1"
match:
- uri:
prefix: "/somepath"
</code></pre>
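<p>A quick smoke test from outside the mesh, assuming you have <code>grpcurl</code> installed and server reflection enabled (the service and method names here are placeholders):</p>
<pre><code>grpcurl -plaintext <INGRESS_IP>:80 list
grpcurl -plaintext -d '{}' <INGRESS_IP>:80 somepath.MyService/MyMethod
</code></pre>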
| Matt |
<p>I'm using an Ansible script to deploy StreamSets on the k8s master node. There is a play that checks whether the StreamSets dashboard is accessible via <em><a href="http://127.0.0.1" rel="nofollow noreferrer">http://127.0.0.1</a>:{{streamsets_nodePort}}</em>, where <code>streamsets_nodePort: 30029</code>. The default port is 30024, which is assigned to another service, so I've changed the port.</p>
<p>The service is Up and the pods are running. </p>
<p><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</code></p>
<p><code>service/streamsets-service NodePort 10.104.162.67 <none> 18630:30029/TCP 24m</code></p>
<p>When I look at the logs I can see:</p>
<pre><code>2020-04-30 13:45:58,149 [user:] [pipeline:] [runner:] [thread:main] [stage:] INFO WebServerTask - Running on URI : 'http://streamsets-0.streamsets-service.streamsets-ns.svc.cluster.local:18630'
</code></pre>
<p>Below is my service.yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: streamsets-service
  labels:
    name: streamsets
spec:
  type: NodePort
  ports:
  - port: {{streamsets_port}}
    targetPort: 18630
    nodePort: {{streamsets_nodePort}}
  selector:
    role: streamsets
</code></pre>
<p>These are the assigned port details: </p>
<p><code>streamsets_port: 8630</code></p>
<p><code>streamsets_nodePort: 30029</code></p>
<p><code>streamsets_targetPort: 18630</code></p>
<p>In my play, I'm executing the below block:</p>
<pre><code>- name: Check if Streamsets is accessible.
  uri:
    url: http://localhost:{{streamsets_nodePort}}
    method: GET
    status_code: 200
  register: streamsets_url_status

- debug:
    var: streamsets_url_status.msg
</code></pre>
<p>This is the output I get while executing this block:</p>
<p><code>fatal: [127.0.0.1]: FAILED! => {"changed": false, "content": "", "elapsed": 30, "msg": "Status code was -1 and not [200]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://localhost:30029"}</code></p>
<p><strong>Can someone help me to understand what is the issue?</strong> </p>
| Shuvodeep Ghosh | <p>Perhaps I'm not understanding correctly, but why would the service be responsive on the localhost IP of <code>127.0.0.1</code>?</p>
<p>You're creating a NodePort mapping, which automatically creates a ClusterIP - you can see that in your services listing: <code>10.104.162.67</code>
That IP is the one that should be used to access the application whose port you've exposed with the service, in combination with the 'port' specification you've made (8630 in this case).</p>
<p>Alternatively, if you wanted to directly access the NodePort you created then you would hit the direct internal-IP of the node(s) on which the pod is running. Execute a <code>kubectl get nodes -o wide</code> and note the internal IP address of the Node you're interested in, and then make a call against that IP address in combination with the nodePort you've specified for the service (30029 in this case).</p>
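<p>For example, something along these lines (using the nodePort from your question):</p>
<pre><code>NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -I "http://${NODE_IP}:30029"
</code></pre>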
<p>Depending on which layer you're SSH-ing/exec-ing into (pod, node, conatiner, etc.) the resolution for 127.0.0.1 could be completely different - a container you've exec'd into doesn't resolve 127.0.0.1 to the address of the host it's running on, but rather resolves to the pod it's running in.</p>
| Mitch Barnett |
<p>I'm trying to develop a scraping app in which I run a lot of different selenium scripts that are all dependent upon each other. I figured that using Kubernetes would be a good idea for this. In order for this to work, I need the scripts to be able to communicate with each other so I can trigger them to run after each other. How can I accomplish this?<Hr/>
This is an example of some logic that I want to perform:
<ol>
<li>Fire off a container, X, that eventually creates a JSON file of some gathered data.</li>
<li>Give access to the JSON file to another container, Y.</li>
<li>Trigger the Y container to run</li>
</ol><Hr/>
I would appreciate any help that I can get!</p>
| Lars Mohammed | <p>The concept of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> sounds exactly like the stuff you'd like to achieve.</p>
<blockquote>
<p>A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.</p>
<p>A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).</p>
<p>You can also use a Job to run multiple Pods in parallel.</p>
</blockquote>
<p>Additionally, you might be interested in a concept of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a>.</p>
<blockquote>
<p>Init containers are exactly like regular containers, except:</p>
<p>Init containers always run to completion.
Each init container must complete successfully before the next one starts..</p>
<p>If a Podβs init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. </p>
</blockquote>
<p>And a PV+PVC to store and share the data (JSON, etc.).</p>
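<p>To make that concrete, here is a minimal sketch of a Job where an init container (X) writes the JSON into a shared volume and the main container (Y) runs only after X has succeeded. The image names and commands are placeholders; an <code>emptyDir</code> works for sharing within one pod, while a PV+PVC would be needed to share data across pods:</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: scrape-job
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: shared
          emptyDir: {}
      initContainers:
        - name: script-x            # produces the JSON file
          image: my-scraper-x:latest
          command: ["python", "scrape_x.py", "--out", "/data/result.json"]
          volumeMounts:
            - name: shared
              mountPath: /data
      containers:
        - name: script-y            # starts only after script-x succeeds
          image: my-scraper-y:latest
          command: ["python", "scrape_y.py", "--in", "/data/result.json"]
          volumeMounts:
            - name: shared
              mountPath: /data
EOF
</code></pre>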
| Squid |
<p>When I run <code>kubectl get secrets</code> after doing a <code>helm upgrade --install <release-name></code> in the Kubernetes cluster, our secrets list gets messy.</p>
<p>Is there any way to stop getting the <code>sh.helm.release.v1.</code> entries whenever I run <code>kubectl get secrets</code>?</p>
<p><a href="https://i.stack.imgur.com/NQpku.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NQpku.png" alt="enter image description here" /></a></p>
| Felix Labayen | <p>No, these secrets are where Helm stores its state.</p>
<p>When you install or upgrade a release, Helm creates a new secret. The secret whose name ends in <code>.airflow.v29</code> contains all the information Helm has about revision number <code>29</code> of the <code>airflow</code> release.</p>
<p>Whenever you run commands like <code>helm list</code>, <code>helm history</code>, or <code>helm upgrade</code>, Helm reads these secrets to know what it did in the past.</p>
<p>By default, Helm keeps up to 10 revisions in its state for each release, so up to 10 secrets per release in your namespace. You can have Helm keep a different number of revisions in its state with the <code>--history-max</code> flag.</p>
<p>If you don't want to keep a history of changes made to your release, you can keep as little as a single revision in Helm's state.</p>
<p>Running <code>helm upgrade --history-max=1</code> will keep the number of secrets Helm creates to a minimum.</p>
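<p>If the goal is simply to keep them out of your <code>kubectl get secrets</code> output, you can filter on the secret type (Helm 3 stores its state in secrets of type <code>helm.sh/release.v1</code>):</p>
<pre><code>kubectl get secrets --field-selector type!=helm.sh/release.v1
</code></pre>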
| Arthur Busser |
<p>I am trying to set up remote logging with the Airflow <code>stable/airflow</code> helm chart on <code>v1.10.9</code>. I am using the <strong>Kubernetes executor</strong> and the <code>puckel/docker-airflow</code> image. Here's my <code>values.yaml</code> file.</p>
<pre><code>airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: "s3://xxx"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: "s3://aws_access_key_id:aws_secret_access_key@bucket"
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
persistence:
enabled: true
existingClaim: ''
postgresql:
enabled: true
workers:
enabled: false
redis:
enabled: false
flower:
enabled: false
</code></pre>
<p>but my logs don't get exported to S3, all I get on UI is</p>
<pre><code>*** Log file does not exist: /usr/local/airflow/logs/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log
*** Fetching from: http://icpjobdagicpkubejob-f4144a374f7a4ac9b18c94f058bc7672:8793/log/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='icpjobdagicpkubejob-f4144a374f7a4ac9b18c94f058bc7672', port=8793): Max retries exceeded with url: /log/icp_job_dag/icp-kube-job/2019-02-13T00:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f511c883710>: Failed to establish a new connection: [Errno -2] Name or service not known'))
</code></pre>
<p>Does anyone have more insight into what I could be missing?</p>
<p><strong>Edit:</strong> following @trejas's suggestion below, I created a separate connection and am using that. Here's what my airflow config in <code>values.yaml</code> looks like:</p>
<pre><code>airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
connections:
- id: my_aws
type: aws
extra: '{"aws_access_key_id": "xxxx", "aws_secret_access_key": "xxxx", "region_name":"us-west-2"}'
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
</code></pre>
<p>I still have the same issue.</p>
| Asav Patel | <p>I was running into the same issue and thought I'd follow up with what ended up working for me. The connection is correct but you need to make sure that the worker pods have the same environment variables:</p>
<pre><code>airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
connections:
- id: my_aws
type: aws
extra: '{"aws_access_key_id": "xxxx", "aws_secret_access_key": "xxxx", "region_name":"us-west-2"}'
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
</code></pre>
<p>I also had to set the fernet key for the workers (and in general) otherwise I get an invalid token error:</p>
<pre><code>airflow:
fernet_key: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
config:
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__FERNET_KEY: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
</code></pre>
| ltken123 |
<p>I deployed efs-provisioner to mount aws-efs in k8s as pvc. Then I'm getting "no volume plugin matched" error while creating pvc. </p>
<pre><code>$ kubectl describe pvc efs -n dev
Name: efs
Namespace: dev
StorageClass: aws-efs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-class=aws-efs
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
32m 2m 21 persistentvolume-controller Warning ProvisioningFailed no volume plugin matched
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: aws-efs
provisioner: kubernetes.io/aws-efs
</code></pre>
<p>efs-provisioner-dep.yml at <a href="https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/aws/efs/deploy/deployment.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/aws/efs/deploy/deployment.yaml</a></p>
| hariK | <p>I know this is a bit old and the efs provisioner is officially deprecated, but I still see some clusters and customers using it. I had the same issue and spent quite some time troubleshooting it, so I believe this can be helpful to someone who hits the same issue.</p>
<p>I faced the issue after upgrading the efs provisioner to the latest numbered version, v2.4.0, which can be found <a href="https://quay.io/repository/external_storage/efs-provisioner?tab=tags" rel="nofollow noreferrer">here</a>.</p>
<p>After the upgrade it went to exactly the state described in the question, and I can confirm that it started working again after reverting back to the old version, in my case v0.1.2.</p>
<p>After spending some time carefully analyzing the answers in <a href="https://github.com/kubernetes-retired/external-storage/issues/1111" rel="nofollow noreferrer">this</a> github issue, I was able to fix it with the changes below.</p>
<ol>
<li>Changing the StorageClass provisioner from <code>kubernetes.io/aws-efs</code> to something else such as <code>example.com/aws-efs</code></li>
<li>Updating the efs provisioner deployment with the updated provisioner as below in the environment variables</li>
</ol>
<pre><code> env:
- name: FILE_SYSTEM_ID
value: "{{ .Values.efs_fs_id }}"
- name: AWS_REGION
value: "{{ .Values.region }}"
- name: PROVISIONER_NAME
value: "example.com/aws-efs"
</code></pre>
<p>Make sure to delete the previous StorageClass with <code>kubernetes.io/aws-efs</code> before creating the new one with the updated provisioner.</p>
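<p>For reference, a minimal sketch of the recreated StorageClass with the renamed provisioner (use whatever name you set in the deployment):</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
</code></pre>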
<p>I hope this helps. Thanks.</p>
| Aruna Fernando |
<p>I have <strong>values.yml</strong> file that takes in a list of mountPaths with this format:</p>
<pre><code>global:
mountPath:
hello:
config: /etc/hello/hello.conf
node: /opt/hello/node.jks
key: /opt/hello/key.jks
cert: /opt/hello/cert.jks
</code></pre>
<p>I want the resulting rendered template to be</p>
<pre><code> volumeMounts:
- name: config
mountPath: /etc/hello/hello.conf
subPath: config
- name: node
mountPath: /opt/hello/node.jks
subPath: node
- name: key
mountPath: /opt/hello/key.jks
subPath: key
- name: cert
mountPath: /opt/hello/cert.jks
subPath: cert
</code></pre>
<p>How would I accomplish this? I tried the following in my <strong>deployment.yaml</strong> template file:</p>
<pre><code> volumeMounts:
{{- range $key, $value := pluck .Values.service_name .Values.global.mountPath.serviceName | first }}
- name: {{ $key }}
mountPath: $value
subPath: {{ $key }}
{{- end }}
</code></pre>
<p>This is the helm command I have run, but it doesn't work for me. How do I accomplish getting to the format I want above based on the input?</p>
<pre><code>helm upgrade --install \
--namespace ${NAMESPACE} \
--set service_name=hello \
--set namespace=${NAMESPACE} \
hello . \
-f values.yaml \
</code></pre>
| γ¨γγ― | <p>Here is what I did:</p>
<pre><code> volumeMounts:
{{- range $key, $value := pluck .Values.service_name .Values.global.mountPath | first }}
- name: {{ $key }}
mountPath: {{ $value }}
subPath: {{ $key }}
{{- end }}
</code></pre>
<p><code>helm template --set service_name=hello [...] </code> seems to render exactly what you want.</p>
<p>Notice I changed the line with mountPath field: <code>$value</code> -> <code>{{ $value }}</code>,
and the line with range: <code>.Values.global.mountPath.serviceName</code> -> <code>.Values.global.mountPath</code></p>
| Matt |
<p>I have gcsfuse in a deployment on GKE. It was working fine, and without any changes in the config it started failing yesterday.</p>
<pre><code>Received signal 15; terminating.
</code></pre>
<p>I have it deployed in 2 different clusters. At first I thought it was related to the kubernetes version, because I started to see the issue when I upgraded to <em>1.17.14-gke.1200</em>, but in the other cluster I still have <em>1.17.14-gke.400</em> and both have the same issue.</p>
| Ivan Torrellas | <p>Found my problem, I was using the command as follows <code>gcsfuse -o nonempty --implicit-dirs...</code>, when I removed the <code>-o nonempty</code> flag it started working.</p>
<p>The strange thing is that it was working just fine until 2 days ago, when it suddenly stopped working.</p>
<p>I decided to try without that after reading this:
<a href="https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/mounting.md#mount8-and-fstab-compatibility" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/mounting.md#mount8-and-fstab-compatibility</a></p>
| Ivan Torrellas |
<p>I have installed a kubernetes cluster, and I have the following deployment file for jenkins.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-vol
mountPath: /var/jenkins_vol
spec:
volumes:
- name: jenkins-vol
emptyDir: {}
</code></pre>
<p>The only thing I need is to install the kubernetes client (kubectl) through a curl request.
The problem is that when I enter the pod and make the curl request it returns Permission denied:</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl
Warning: Failed to create the file kubectl: Permission denied
</code></pre>
| Ilia Hanev | <p>Try adding a securityContext to the pod spec in your deployment:</p>
<pre><code>spec:
securityContext:
runAsUser: 0
</code></pre>
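<p>In the Deployment from the question, that pod-level securityContext goes under <code>spec.template.spec</code>, roughly like this (only the relevant part shown):</p>
<pre><code>spec:
  template:
    spec:
      securityContext:
        runAsUser: 0        # run as root so the container can write where it needs to
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
</code></pre>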
<p>If this doesn't work (your jenkins deployment is failing or some other issue), then when you enter the pod (pod exec) check which user it is running as with <code>id</code> or <code>whoami</code></p>
| Kiran |
<p>I delete and re-submit a job with the same name, and I often get a 409 HTTP error with a message that says that the object is being deleted -- my submit comes before the job object is removed.</p>
<p>My current solution is to spin-try until I am able to submit a job. I don't like it. It looks quite ugly, and I wonder if there's a way to call the deletion routine in a way that waits until the object is completely deleted. According to <a href="https://stackoverflow.com/a/52900505/18775">this</a>, kubectl waits until the object is actually deleted before returning from the delete command. I wonder if there's an option for the Python client.</p>
<p>Here's my spin-submit code (not runnable, sorry):</p>
<pre><code># Set up client
config.load_kube_config(context=context)
configuration = client.Configuration()
api_client = client.ApiClient(configuration)
batch_api = client.BatchV1Api(api_client)
job = create_job_definition(...)
batch_api.delete_namespaced_job(job.metadata.name, "my-namespace")
for _ in range(50):
try:
return batch_api.create_namespaced_job(self.namespace, job)
except kubernetes.client.rest.ApiException as e:
body = json.loads(e.body)
job_is_being_deleted = body["message"].startswith("object is being deleted")
if not job_is_being_deleted:
raise
time.sleep(0.05)
</code></pre>
<p>I wish it was</p>
<pre><code>batch_api.delete_namespaced_job(job.metadata.name, "my-namespace", wait=True)
batch_api.create_namespaced_job(self.namespace, job)
</code></pre>
<p>I have found a similar question, and <a href="https://stackoverflow.com/a/65939132/18775">the answer suggests to use watch</a>, which means I need to start a watch in a separate thread, issue delete command, join the thread that waits till the deletion is confirmed by the watch -- seems like a lot of code for such a thing.</p>
| Anton Daneyko | <p>As you have already mentioned, kubectl delete has the <code>--wait</code> flag that does this exact job and is <code>true</code> by default.</p>
<p>Let's have a look at the code and see how kubectl implements this. <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/delete/delete.go#L368-L378" rel="nofollow noreferrer">Source</a>.</p>
<pre><code>waitOptions := cmdwait.WaitOptions{
ResourceFinder: genericclioptions.ResourceFinderForResult(resource.InfoListVisitor(deletedInfos)),
UIDMap: uidMap,
DynamicClient: o.DynamicClient,
Timeout: effectiveTimeout,
Printer: printers.NewDiscardingPrinter(),
ConditionFn: cmdwait.IsDeleted,
IOStreams: o.IOStreams,
}
err = waitOptions.RunWait()
</code></pre>
<p>Additionally here are <a href="https://github.com/kubernetes/kubernetes/blob/d8f9e4587ac1265efd723bce74ae6a39576f2d58/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L233-L265" rel="nofollow noreferrer">RunWait()</a> and <a href="https://github.com/kubernetes/kubernetes/blob/d8f9e4587ac1265efd723bce74ae6a39576f2d58/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L268-L333" rel="nofollow noreferrer">IsDeleted()</a> function definitions.</p>
<p>Now answering your question:</p>
<blockquote>
<p>[...] which means I need to start a watch in a separate thread, issue delete command, join the thread that waits till the deletion is confirmed by the watch -- seems like a lot of code for such a thing</p>
</blockquote>
<p>It does look like so - it's a lot of code, but I don't see any alternative. If you want to wait for the deletion to finish you need to do it manually. There does not seem to be any other way around it.</p>
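<p>If a full watch feels like too much machinery, a middle ground is to poll the API until the object returns a 404. A minimal sketch, assuming the same <code>batch_api</code> client as in the question (the timeout value is arbitrary):</p>
<pre><code>import time
import kubernetes.client

def delete_job_and_wait(batch_api, name, namespace, timeout=60):
    # Foreground propagation: the Job only disappears once its pods are gone too
    batch_api.delete_namespaced_job(
        name, namespace,
        body=kubernetes.client.V1DeleteOptions(propagation_policy="Foreground"),
    )
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            batch_api.read_namespaced_job(name, namespace)
        except kubernetes.client.rest.ApiException as e:
            if e.status == 404:
                return  # the job is fully gone, safe to re-create
            raise
        time.sleep(0.2)
    raise TimeoutError(f"job {name} was not deleted within {timeout}s")
</code></pre>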
| Matt |
<p>I'm running Locust in master-slave mode on k8s, I need to auto delete all locust pods when a test is completed. Any suggestion? Thank you.</p>
| kai | <p>Create a service, and use the Job kind (rather than a Deployment) for the master and slaves. The Job kind in k8s takes care of deleting the pods upon completion.</p>
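<p>A minimal sketch of what that could look like for the master (the image and arguments are only examples, and <code>ttlSecondsAfterFinished</code> assumes a cluster where the TTL-after-finished feature is available):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: locust-master
spec:
  ttlSecondsAfterFinished: 60   # remove the Job and its pods 60s after the test completes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: locust-master
        image: locustio/locust          # example image
        args: ["-f", "/locust/locustfile.py", "--master", "--headless", "--run-time", "10m"]
</code></pre>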
| Vijayasena A |
<p>I have a number of <strong>restful services</strong> within our system</p>
<ul>
<li>Some are our <strong>within</strong> the kubernetes <strong>cluster</strong></li>
<li><strong>Others</strong> are on <strong>legacy</strong> infrasture and are <strong>hosted on VM's</strong></li>
</ul>
<p>Many of our <strong>restful services</strong> make <strong>synchronous calls</strong> to each other (so not asynchronously using message queues)</p>
<p>We also have a number of UI's (fat clients or web apps) that make use of these services</p>
<p>We might define a simple k8s manifest file like this</p>
<ol>
<li>Pod</li>
<li>Service</li>
<li>Ingress</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: "orderManager"
spec:
containers:
- name: "orderManager"
image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
name: "orderManager-service"
spec:
type: NodePort
selector:
app: "orderManager"
ports:
- protocol: TCP
port: 50588
targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: orderManager-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /orders
pathType: Prefix
backend:
service:
name: "orderManager-service"
port:
number: 50588
</code></pre>
<p>I am really not sure what the best way for <strong>restful services</strong> on the cluster to talk to each other.</p>
<ul>
<li>It seems like there is only one good route for callers outside the cluster which is use the url built by the ingress rule</li>
<li>Two options within the cluster</li>
</ul>
<p>This might illustrate it further with an example</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Caller</th>
<th style="text-align: center;">Receiver</th>
<th style="text-align: center;">Example Url</th>
<th style="text-align: right;"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">UI</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">The UI would use the cluster ip and the ingress rule to reach the order manager</td>
</tr>
<tr>
<td style="text-align: left;">Service off cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">Just like the UI</td>
</tr>
<tr>
<td style="text-align: left;">On Cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">Could use ingress rule like the above approach</td>
</tr>
<tr>
<td style="text-align: left;">On Cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://orderManager-service:50588/</td>
<td style="text-align: right;">Could use the service name and port directly</td>
</tr>
</tbody>
</table>
</div>
<p><em>I write <strong>cluster ip</strong> a few times above, but in real life we put something on top so there is a friendly name like http://mycluster/orders</em></p>
<p>So when <strong>caller</strong> and <strong>receiver</strong> are <strong>both on the cluster</strong>, is it either</p>
<ul>
<li><strong>Use</strong> the <strong>ingress rule</strong> which is also used by services and apps outside the cluster</li>
<li><strong>Use</strong> the <strong>nodeport service name</strong> which is used in the ingress rule</li>
<li>Or perhaps something else!</li>
</ul>
<p><strong>One benefit</strong> of using <strong>nodeport service name</strong> is that you do not have to change your base URL.</p>
<ul>
<li>The <strong>ingress</strong> rule <strong>appends</strong> an <strong>extra element</strong> to the route (in the above case <em>orders</em>)</li>
<li>When I move a restful service from legacy to k8s cluster it will increase the complexity</li>
</ul>
| user3210699 | <p>It depends on whether you want requests to be routed through your ingress controller or not.</p>
<p>Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself (NGINX in this case) will proxy the request to the Service. The request will then be routed to a Pod.</p>
<p>Sending the request directly to the Service's URL simply skips your ingress controller. The request is directly routed to a Pod.</p>
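<p>Concretely, with the example manifests above, the two options look something like this (assuming the <code>default</code> namespace):</p>
<pre><code># via the ingress controller (the same URL the UI and off-cluster services use)
curl http://mycluster/orders

# straight to the Service, skipping the ingress controller
curl http://orderManager-service:50588/                              # from the same namespace
curl http://orderManager-service.default.svc.cluster.local:50588/    # fully qualified
</code></pre>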
<p>The trade offs between the two options depend on your setup.</p>
<p>Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.</p>
<p>However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.</p>
<p>For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.</p>
<p>Whether the trade offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.</p>
| Arthur Busser |
<p>Is there a way to find the history of commands applied to the kubernetes cluster by kubectl?
For example, I want to know the last applied command was</p>
<pre><code>kubectl apply -f x.yaml
</code></pre>
<p>or</p>
<pre><code>kubectl apply -f y.yaml
</code></pre>
| Andromeda | <p>You can use <code>kubectl apply view-last-applied</code> command to find the last applied configuration:</p>
<pre><code>β ~ kubectl apply view-last-applied --help
View the latest last-applied-configuration annotations by type/name or file.
The default output will be printed to stdout in YAML format. One can use -o option to change output format.
Examples:
# View the last-applied-configuration annotations by type/name in YAML.
kubectl apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
[...]
</code></pre>
<p>To get the full history from the beginning of a cluster creation you should use <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logs</a> as already mentioned in comments by @Jonas.</p>
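<p>As a rough sketch, audit logging is enabled by pointing the API server at a policy file; a policy like the following would record every write request (the paths and levels here are only an example):</p>
<pre><code># /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # log full request/response bodies for anything that changes the cluster
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
  # ignore everything else
  - level: None
</code></pre>
<p>The kube-apiserver is then started with <code>--audit-policy-file=/etc/kubernetes/audit-policy.yaml</code> and <code>--audit-log-path=/var/log/kubernetes/audit.log</code>.</p>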
<p>Additionally, if you adopt GitOps you could have all your cluster state under version control. That will allow you to trace back all the changes made to your cluster.</p>
| Matt |
<p>I am currently setting up a kubernetes cluster (bare ubuntu servers). I deployed metallb and ingress-nginx to handle the IP and service routing. This seems to work fine: I get a response from nginx when I wget the externalIP of the ingress-nginx-controller service (it works on every node). But this only works inside the cluster network. How do I access my services (the ingress-nginx-controller, because it does the routing) from the internet through a node/master server's IP? I tried to set up routing with iptables, but it doesn't seem to work. What am I doing wrong, and is this the best practice?</p>
<pre><code>echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth0 -p tcp -d <Servers IP> --dport 80 -j DNAT --to <ExternalIP of nginx>:80
iptables -A FORWARD -p tcp -d <ExternalIP of nginx> --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -F
</code></pre>
<p>Here are some more information:</p>
<pre><code>kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.219.111 198.51.100.1 80:31872/TCP,443:31897/TCP 41h
ingress-nginx-controller-admission ClusterIP 10.108.194.136 <none> 443/TCP 41h
</code></pre>
<p>Please share some thoughts
Jonas</p>
| Jonaswinz | <p>I ended up installing HAProxy on the machine I want to resolve my domain to. HAProxy listens on ports 80 and 443 and forwards all traffic to the externalIP of my ingress controller. You can also do this on multiple machines and use DNS failover for high availability.</p>
<p>My haproxy.cfg</p>
<pre><code>frontend unsecure
bind 0.0.0.0:80
default_backend unsecure_pass
backend unsecure_pass
server unsecure_pass 198.51.100.0:80
frontend secure
bind 0.0.0.0:443
default_backend secure_pass
backend secure_pass
server secure_pass 198.51.100.0:443
</code></pre>
| Jonaswinz |
<p>I am using an existing Security Group in the <code>security-groups</code> annotation. But while creating the ALB through the Ingress it is attaching a default SG. Why is it not attaching the existing SG used in my annotation? And I am using the <code>alb-ingress-controller</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: "instance"
alb.ingress.kubernetes.io/security-groups: sg-**********
alb.ingress.kubernetes.io/certificate-arn: arn
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
spec:
rules:
- host: host
http:
paths:
- path: /
backend:
serviceName: serviceName
servicePort: 80
</code></pre>
| Parag Poddar | <p>The actual syntax for the annotation is
<code>alb.ingress.kubernetes.io/security-groups: sg-xxxx, nameOfSg1, nameOfSg2</code></p>
<p>You need to create a pair of SGs: one attached to the ELB, and the other one attached to the worker nodes where the pods are running.</p>
<p>If sg-12345 with the Name tag 'elb_sg' is attached to the ELB and 'worker_sg' is attached to the worker nodes, then the annotation should be:
<code>alb.ingress.kubernetes.io/security-groups: sg-12345, elb_sg, worker_sg</code></p>
<p>And don't forget to add an inbound rule on worker_sg allowing all traffic from elb_sg.</p>
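<p>Applied to the Ingress metadata from the question, that would look roughly like this (using the example names above):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/security-groups: sg-12345, elb_sg, worker_sg
</code></pre>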
| Bond007 |
<p>I am using K8S with helm.</p>
<p>I need to run pods and dependencies with a predefined flow order.</p>
<p>How can I create helm dependencies that run the pod only once (i.e - populate database for the first time), and exits after first success?</p>
<p>Also, if I have several pods, I want to run a pod only when certain conditions occur and after another pod has been created.</p>
<p>I need to build 2 pods, as described in the following:</p>
<p>I have a database.</p>
<p><strong>1st step</strong> is to create the database.</p>
<p><strong>2nd step</strong> is to populate the db.</p>
<p>Once I populate the db, this job needs to finish.</p>
<p><strong>3rd step</strong> is another pod (not the db pod) that uses that database, and always in listen mode (never stops).</p>
<p>Can I define in which order the dependencies are running (and not always parallel).</p>
<p>What I see from the <code>helm create</code> command is that there are templates for deployment.yaml and service.yaml; maybe pod.yaml is a better choice?</p>
<p>What are the best charts types for this scenario?</p>
<p>Also, I need to know what the chart hierarchy is.</p>
<p>i.e.: when having a chart of type listener, one pod for database creation, and one pod for the database population (that is deleted when finished), I may have a chart tree hierarchy that explains the flow.</p>
<p><a href="https://i.stack.imgur.com/FpBys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FpBys.png" alt="enter image description here" /></a></p>
<p>The main chart uses the populated data (after all the sub-charts and templates have run properly - BTW, can I have several templates for the same chart?).</p>
<p>What is the correct tree flow?</p>
<p>Thanks.</p>
| Eitan | <p>You can achieve this using helm hooks and K8s Jobs; below I define the same setup for Rails applications.</p>
<p>The first step is to define a k8s job to create and populate the db:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "my-chart.name" . }}-db-prepare
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: {{ template "my-chart.name" . }}
chart: {{ template "my-chart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
backoffLimit: 4
template:
metadata:
labels:
app: {{ template "my-chart.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ template "my-chart.name" . }}-db-prepare
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]
envFrom:
- configMapRef:
name: {{ template "my-chart.name" . }}-configmap
- secretRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
initContainers:
- name: init-wait-for-dependencies
image: wshihadeh/wait_for:v1.2
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/docker-entrypoint.sh"]
args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
envFrom:
- configMapRef:
name: {{ template "my-chart.name" . }}-configmap
- secretRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
imagePullSecrets:
- name: {{ .Values.imagePullSecretName }}
restartPolicy: Never
</code></pre>
<p>Note the following:
1- The Job definition has helm hooks so that it runs on each deployment and is the first task:</p>
<pre><code> "helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded
</code></pre>
<p>2- The container command will take care of preparing the db:</p>
<pre><code>command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]
</code></pre>
<p>3- The job will not start until the db-connection is up (this is achieved via initContainers)</p>
<pre><code>args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
</code></pre>
<p>The second step is to define the application deployment object. This can be a regular deployment object (make sure that you don't use helm hooks). Example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "my-chart.name" . }}-web
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
labels:
app: {{ template "my-chart.name" . }}
chart: {{ template "my-chart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.webReplicaCount }}
selector:
matchLabels:
app: {{ template "my-chart.name" . }}
release: {{ .Release.Name }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
labels:
app: {{ template "my-chart.name" . }}
release: {{ .Release.Name }}
service: web
spec:
imagePullSecrets:
- name: {{ .Values.imagePullSecretName }}
containers:
- name: {{ template "my-chart.name" . }}-web
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/docker-entrypoint.sh"]
args: ["web"]
envFrom:
- configMapRef:
name: {{ template "my-chart.name" . }}-configmap
- secretRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
{{ toYaml .Values.resources | indent 12 }}
restartPolicy: {{ .Values.restartPolicy }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
</code></pre>
| Al-waleed Shihadeh |
<p>I have been asked to modify a Helm template to accommodate a few changes to check if a value is empty or not as in the code snippet below. I need to check <code>$var.alias</code> inside the <code>printf</code> in the code snippet and write custom logic to print a custom value. Any pointers around the same would be great.</p>
<pre><code>{{- range $key, $value := .Values.testsConfig.keyVaults -}}
{{- range $secret, $var := $value.secrets -}}
{{- if nil $var.alias}}
{{- end -}}
{{ $args = append $args (printf "%s=/mnt/secrets/%s/%s" $var.alias $key $var.name | quote) }}
{{- end -}}
{{- end -}}
</code></pre>
| Avi | <p>I decided to test what madniel wrote in his comment. Here are my files:</p>
<p>values.yaml</p>
<pre><code>someString: abcdef
emptyString: ""
# nilString:
</code></pre>
<p>templates/test.yaml</p>
<pre><code>{{ printf "someEmptyString=%q)" .Values.someString }}
{{ printf "emptyString=%q)" .Values.emptyString }}
{{ printf "nilString=%q)" .Values.nilString }}
{{- if .Values.someString }}
{{ printf "someString evaluates to true" }}
{{- end -}}
{{- if .Values.emptyString }}
{{ printf "emptyString evaluates to true" }}
{{- end -}}
{{- if .Values.nilString }}
{{ printf "nilString evaluates to true" }}
{{- end -}}
{{- if not .Values.emptyString }}
{{ printf "not emptyString evaluates to true" }}
{{- end -}}
{{- if not .Values.nilString }}
{{ printf "not nilString evaluates to true" }}
{{- end -}}
</code></pre>
<p>Helm template output:</p>
<pre><code>β helm template . --debug
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: <REDACTED>
---
# Source: asd/templates/test.yaml
someEmptyString="abcdef")
emptyString="")
nilString=%!q(<nil>))
someString evaluates to true
not emptyString evaluates to true
not nilString evaluates to true
</code></pre>
<p>So yes, it should work if you use <code>{{ if $var.alias }}</code></p>
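<p>And if the custom logic you need is simply "use a fallback when <code>alias</code> is empty", the <code>default</code> function may be enough; for example, falling back to <code>$var.name</code> (the fallback choice is just an illustration):</p>
<pre><code>{{ $args = append $args (printf "%s=/mnt/secrets/%s/%s" ($var.alias | default $var.name) $key $var.name | quote) }}
</code></pre>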
| Matt |
<p>When running the following command to update the kubernetes config and connect to the EKS cluster, I get the error "'NoneType' object is not iterable".</p>
<pre><code>aws eks update-kubeconfig --region us-east-2 --name <cluster name>
</code></pre>
| Abhishek Jain | <p>Do you have an existing k8s config? Running</p>
<p><code>aws eks update-kubeconfig --region <region> --name <cluster name></code></p>
<p>Generates a ~/.kube/config.</p>
<p>If you already have a ~/.kube/config, there could be a conflict between the file to be generated, and the file that already exists that prevents them from being merged.</p>
<p>If you have a ~/.kube/config file, and you aren't actively using it, running</p>
<p><code>rm ~/.kube/config</code></p>
<p>and then attempting</p>
<p><code>aws eks update-kubeconfig --region us-east-2 --name <cluster name></code></p>
<p>afterwards will likely solve your issue.</p>
<p>If you are using your <code>~/.kube/config</code> file, rename it to something else so you can use it later, and then run the eks command again.</p>
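<p>For example (a plain backup-and-retry; nothing here is EKS specific):</p>
<pre><code>mv ~/.kube/config ~/.kube/config.backup
aws eks update-kubeconfig --region us-east-2 --name <cluster name>
</code></pre>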
<p>See a similar issue here:
<a href="https://github.com/aws/aws-cli/issues/4843" rel="noreferrer">https://github.com/aws/aws-cli/issues/4843</a></p>
| emh221 |
<p>We have several microservices(NodeJS based applications) which needs to communicate each other and two of them uses Redis and PostgreSQL. Below are the name of of my microservices. Each of them has their own SCM repository and Helm Chart.Helm version is 3.0.1. We have two environments and we have two values.yaml per environments.We have also three nodes per cluster.</p>
<p>First of all, after end user's action, UI service triggers than it goes to Backend. According to the end user request Backend services needs to communicate any of services such as Market, Auth and API.Some cases API and market microservice needs to communicate with Auth microservice as well.</p>
<ol>
<li>UI --></li>
<li>Backend</li>
<li>Market --> use postgreSQL </li>
<li>Auth --> use Redis</li>
<li>API</li>
</ol>
<p>So my questions are,</p>
<ul>
<li><p>What should we take care to communicate microservices among each other? Is this <code>my-svc-namespace.svc.cluster.local</code> enough to provide developers or should we specify ENV in each pod as well?</p></li>
<li><p>Our microservices is NodeJS application. How developers. will handle this in application source code? Did they use this service name if first question's answer is yes?</p></li>
<li><p>We'd like to expose our application via ingress using host per environments? I guess ingress should be only enabled for UI microservice, am I correct?</p></li>
<li><p>What is the best way to test each service can communicate each other?</p></li>
</ul>
<hr>
<pre><code>kubectl get svc --all-namespaces
NAMESPACE NAME TYPE
database my-postgres-postgresql-helm ClusterIP
dev my-ui-dev ClusterIP
dev my-backend-dev ClusterIP
dev my-auth-dev ClusterIP
dev my-api-dev ClusterIP
dev my-market-dev ClusterIP
dev redis-master ClusterIP
ingress ingress-traefik NodePort
</code></pre>
| semural | <p><strong>Two ways to perform Service Discovery in K8S</strong></p>
<p>There are two ways to perform communication (service discovery) within a Kubernetes cluster.</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables" rel="nofollow noreferrer">Environment variable</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#dns" rel="nofollow noreferrer">DNS</a></li>
</ul>
<p>DNS is the simplest way to achieve service discovery within the cluster.
And it does not require any additional ENV variable setting for each pod.
At its simplest, a service within the same namespace is accessible via its service name, e.g. <a href="http://my-api-dev:PORT" rel="nofollow noreferrer">http://my-api-dev:PORT</a> is accessible for all the pods within the namespace <code>dev</code>.</p>
<p><strong>Standard Application Name and K8s Service Name</strong></p>
<p>As a practice, you can give each application a standard name, e.g. <code>my-ui</code>, <code>my-backend</code>, <code>my-api</code>, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from a developer environment, with an entry in <code>/etc/hosts</code> such as:</p>
<pre><code>127.0.0.1 my-ui my-backend my-api
</code></pre>
<p>(The above has nothing to do with k8s; it is just a practice for letting applications communicate by name in local environments.)</p>
<p>Also, on k8s, you can give each Service the same name as its application. (Try to avoid suffixes like <code>-dev</code> in the service name to reflect the environment (dev, test, prod, etc.); use a namespace or a separate cluster for that instead.) That way, target application endpoints can be configured with their service names in each application's configuration file.</p>
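<p>For example, with the services listed in the question, the backend pods could be handed the auth endpoint through an environment variable that the NodeJS code reads (the variable name and port are just examples):</p>
<pre><code>env:
  - name: AUTH_SERVICE_URL
    value: "http://my-auth-dev.dev.svc.cluster.local:3000"   # port is an example
</code></pre>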
<p><strong>Ingress is for services with external access</strong></p>
<p>Ingress should only be enabled for services which require external access.</p>
<p><strong>Custom Health Check Endpoints</strong></p>
<p>Also, it is good practice to have a custom health check that verifies all the dependent applications are running, which also verifies that the communication between applications is working.</p>
| wpnpeiris |
<p>I would like to extend the default "service port range" in <a href="https://docs.k0sproject.io/v1.21.0+k0s.0/" rel="nofollow noreferrer">K0s Kubernetes distro</a>.</p>
<p>I know that in kubernetes, setting <code>--service-node-port-range</code> option in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> will do the trick.</p>
<p>But, how to do so or where is that option in the <code>K0s</code> distro?</p>
| McLan | <p>It looks like you could use <code>spec.api.extraArgs</code> to pass the <code>service-node-port-range</code> parameter to api-server.</p>
<p><a href="https://docs.k0sproject.io/v1.21.0+k0s.0/configuration/#specapi" rel="nofollow noreferrer">Spec api</a>:</p>
<blockquote>
<p><strong>extraArgs</strong>: Map of key-values (strings) for any extra arguments you wish to pass down to Kubernetes api-server process</p>
</blockquote>
<p>Example:</p>
<pre><code>apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
name: k0s
spec:
api:
extraArgs:
service-node-port-range: 30000-32767
</code></pre>
| Matt |
<p>I'm trying to upgrade the Nginx controller on a Kubernetes cluster version v1.16 (v1.16.10) and unfortunately it did not succeed.</p>
<p>My Nginx setup is configured as a DaemonSet with the helm stable repository. Since the repository has changed to <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a>, I'm trying to use the new repo and upgrade the version to at least 0.33, which is helm release 2.10.0.</p>
<p>Error behavior:</p>
<p>The upgrade half succeeds and gets stuck with "Pod is not ready: kube-system/nginx-ingress-controller-xxxx" in the helm controller.
At that time there were pods created by the DaemonSet on the nodes and they were going to the "CrashLoopBackOff" state and then the "Error" state; the logs displayed the below error:</p>
<pre><code>W0928 05:21:50.497500 6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0928 05:21:50.497572 6 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0928 05:21:50.497777 6 main.go:218] Creating API client for https://172.31.0.1:443
I0928 05:21:50.505692 6 main.go:262] Running in Kubernetes cluster version v1.16 (v1.16.10) - git (clean) commit f3add640dbcd4f3c33a7749f38baaac0b3fe810d - platform linux/amd64
I0928 05:21:50.512138 6 main.go:85] Validated kube-system/nginx-ingress-ingress-nginx-defaultbackend as the default backend.
F0928 05:21:50.517958 6 main.go:91] No service with name kube-system found in namespace nginx-ingress-ingress-nginx-controller: services "nginx-ingress-ingress-nginx-controller" not found
</code></pre>
<p>I can confirm that there was no service running for the Nginx Controller with my current helm release (1.33.1). I'm not sure whether this service is an essential aspect of this version with the NodePort configuration, or whether I'm missing something here.</p>
<p>Service is set to false in my current config in the DaemonSet</p>
<pre><code> service:
enabled: false
</code></pre>
<p>I found that there were a few issues in k8s 1.16 and I'm not sure whether this is related to that. Also, I can confirm that the default backend is registering successfully (a few of the issues I found were related to that as well), so it couldn't be the reason.</p>
<p>I really appreciate your kind and helpful thoughts here. Thanks.</p>
| Aruna Fernando | <p>Finally, I was able to figure it out and it is working perfectly. It was due to the change from chart version 0.32 to 0.33, which checks whether the publish-service flag is defined.</p>
<pre><code>--set controller.publishService.enabled=false
</code></pre>
<p>The above parameter should be explicitly set to avoid this.</p>
<p>Related PR: #<a href="https://github.com/kubernetes/ingress-nginx/pull/5553" rel="nofollow noreferrer">5553</a></p>
<p>working command:</p>
<pre><code>helm install nginx-new ingress-nginx/ingress-nginx --version 3.3.0 --set controller.service.enabled=false --set controller.kind=DaemonSet --set controller.publishService.enabled=false
</code></pre>
| Aruna Fernando |
<p>I have k8s setup that contains 2 deployments: client and server deployed from different images. Both deployments have replica sets inside, liveness and readiness probes defined. The client communicates with the server via k8s' service.</p>
<p>Currently, the deployment scripts for both client and server are separated (separate yaml files applied via kustomization). Rollback works correctly for both parts independently but let's consider the following scenario:
1. deployment is starting
2. both deployment configurations are applied
3. k8s master starts replacing pods of server and client
4. server pods start correctly so new replica set has all the new pods up and running
5. client pods have an issue, so the old replica set is still running</p>
<p>In many cases it's not a problem, because client and server work independently, but there are situations when a breaking change to the server API is released and both client and server must be updated. In that case, if either of the two fails then both should be rolled back (it doesn't matter which one fails - both need to be rolled back to stay in sync).</p>
<p>Is there a way to achieve that in k8s? I spent quite a lot of time searching for a solution, but everything I found so far describes deployments/rollbacks of one thing at a time, and that doesn't solve the issue above.</p>
| Mateusz W | <p>The problem here is something covered in Blue/Green deployments.
<a href="https://www.ianlewis.org/en/bluegreen-deployments-kubernetes" rel="nofollow noreferrer">Here</a> is a good reference of Blue/Green deployments with k8s.</p>
<p>The basic idea is that you deploy the new version (the Green deployment) while keeping the previous version (the Blue deployment) up and running, and only allow traffic to the new version (Green) once everything is working fine.</p>
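<p>In Kubernetes this is typically implemented by labelling the pods with a "color" (or version) and pointing the Services at the live color, so both apps are switched - or rolled back - together by flipping the selectors. A minimal sketch for one of the two Services (names and labels are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    app: server
    color: green   # flip to "blue" (here and on the client's Service) to roll both back together
  ports:
    - port: 80
</code></pre>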
| wpnpeiris |
<p>I'm currently dealing with this situation.</p>
<p>Whenever I create a multi-node cluster using Minikube and then stop and restart it, it loses track of the "middle" nodes; e.g. I create 4 nodes: <code>m1</code>, <code>m2</code>, <code>m3</code>, <code>m4</code>, and for some reason Minikube loses track of <code>m2</code> and <code>m3</code>.</p>
<p><strong>Scenario:</strong></p>
<p>Let's say I want to create a Kubernetes cluster with Vault, so then I create one a profile named "vault-cluster" with 4 nodes (1 control plane and 3 worker nodes):</p>
<pre><code>$ minikube start --nodes 4 -p vault-cluster
</code></pre>
<p>Then when I stop them using:</p>
<pre><code>minikube stop -p vault-cluster
</code></pre>
<p><strong>Expected behaviour:</strong></p>
<p><strong>Output:</strong></p>
<pre><code>β Stopping node "vault-cluster" ...
β Stopping node "vault-cluster-m02" ...
β Stopping node "vault-cluster-m03" ...
β Stopping node "vault-cluster-m04" ...
π 4 nodes stopped.
</code></pre>
<p>So when when I started again:</p>
<p><strong>Output:</strong></p>
<pre><code>$ minikube start -p vault-cluster
π [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
β¨ Using the virtualbox driver based on existing profile
π minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
π‘ To disable this notice, run: 'minikube config set WantUpdateNotification false'
π Starting control plane node vault-cluster in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster" ...
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
π Configuring CNI (Container Networking Interface) ...
π Verifying Kubernetes components...
βͺ Using image kubernetesui/dashboard:v2.1.0
βͺ Using image kubernetesui/metrics-scraper:v1.0.4
βͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
π Starting node vault-cluster-m02 in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster-m02" ...
π Found network options:
βͺ NO_PROXY=192.168.99.120
βͺ no_proxy=192.168.99.120
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
π Verifying Kubernetes components...
π Starting node vault-cluster-m03 in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster-m03" ...
π Found network options:
βͺ NO_PROXY=192.168.99.120,192.168.99.121
βͺ no_proxy=192.168.99.120,192.168.99.121
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
βͺ env NO_PROXY=192.168.99.120,192.168.99.121
π Verifying Kubernetes components...
π Starting node vault-cluster-m04 in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster-m04" ...
π Found network options:
βͺ NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
βͺ no_proxy=192.168.99.120,192.168.99.121,192.168.99.122
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
βͺ env NO_PROXY=192.168.99.120,192.168.99.121
βͺ env NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
π Verifying Kubernetes components...
π Done! kubectl is now configured to use "vault-cluster" cluster and "default" namespace by default
</code></pre>
<p><strong>ACTUAL BEHAVIOUR:</strong></p>
<pre><code>$ minikube stop -p vault-cluster
β Stopping node "vault-cluster" ...
β Stopping node "vault-cluster-m04" ...
β Stopping node "vault-cluster-m04" ...
β Stopping node "vault-cluster-m04" ...
</code></pre>
<p>So when this is what happens when I try to start cluster again:</p>
<pre><code>$ minikube start -p vault-cluster
π [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
β¨ Using the virtualbox driver based on existing profile
π Starting control plane node vault-cluster in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster" ...
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
π Configuring CNI (Container Networking Interface) ...
π Verifying Kubernetes components...
βͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
βͺ Using image kubernetesui/metrics-scraper:v1.0.4
βͺ Using image kubernetesui/dashboard:v2.1.0
π Enabled addons: default-storageclass, dashboard
π Starting node vault-cluster-m04 in cluster vault-cluster
π Restarting existing virtualbox VM for "vault-cluster-m04" ...
π Found network options:
βͺ NO_PROXY=192.168.99.120
βͺ no_proxy=192.168.99.120
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
π Verifying Kubernetes components...
π Starting node vault-cluster-m04 in cluster vault-cluster
π Updating the running virtualbox "vault-cluster-m04" VM ...
π Found network options:
βͺ NO_PROXY=192.168.99.120,192.168.99.123
βͺ no_proxy=192.168.99.120,192.168.99.123
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
βͺ env NO_PROXY=192.168.99.120,192.168.99.123
π Verifying Kubernetes components...
π Starting node vault-cluster-m04 in cluster vault-cluster
π Updating the running virtualbox "vault-cluster-m04" VM ...
π Found network options:
βͺ NO_PROXY=192.168.99.120,192.168.99.123
βͺ no_proxy=192.168.99.120,192.168.99.123
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
βͺ env NO_PROXY=192.168.99.120
βͺ env NO_PROXY=192.168.99.120,192.168.99.123
π Verifying Kubernetes components...
π Done! kubectl is now configured to use "vault-cluster" cluster and "default" namespace by default
</code></pre>
<p>This is the output when I least the nodes:</p>
<pre><code>$ minikube node list -p vault-cluster
vault-cluster 192.168.99.120
vault-cluster-m04 192.168.99.123
vault-cluster-m04 192.168.99.123
vault-cluster-m04 192.168.99.123
</code></pre>
<p><a href="https://i.stack.imgur.com/Hcde6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hcde6.png" alt="enter image description here" /></a></p>
<p><strong>Any ideas what could be wrong?</strong></p>
<p>Environment:</p>
<ul>
<li><p>Windows 10 Pro</p>
</li>
<li><p>Virtual Box 6.1</p>
</li>
</ul>
<hr />
<pre><code>$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
</code></pre>
<hr />
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Eternal_N00B | <p>There seems to be some issue with minikube v1.20.0 and it also happens on linux with kvm2 driver (my setup) so it is not OS or driver specific.</p>
<p>It also happens on minikube v1.21.0, although it doesn't happen until the cluster is stopped a second time. After the first stop and start everything seems to work fine, but after the second stop I see exactly what you see.</p>
<p>If you want, you can create an <a href="https://github.com/kubernetes/minikube/issues" rel="nofollow noreferrer">issue on the minikube github repo</a> and hope the developers fix it.</p>
| Matt |
<p>Sup. I created my service, deployment and persistent volume claim so my mysql should work inside minikube, but it doesn't. I can't figure out why the docker container works fine outside minikube, but when I try to use it inside the minikube cluster my database gets purged somehow. Here's my Dockerfile</p>
<pre><code>FROM alpine:latest
RUN apk update && apk upgrade -a -U
RUN apk add mysql mysql-client openrc supervisor
RUN chown -R mysql:mysql /var/lib/mysql/
COPY ./my.cnf /etc/
COPY ./secure_config.sh /root
RUN rc default
RUN /etc/init.d/mariadb setup
RUN /etc/init.d/mariadb start
RUN chmod 755 /root/secure_config.sh
RUN /root/secure_config.sh
RUN sed -i "s|.*bind-address\s*=.*|bind-address=0.0.0.0|g" /etc/my.cnf
RUN sed -i "s|.*bind-address\s*=.*|bind-address=0.0.0.0|g" /etc/my.cnf.d/mariadb-server.cnf
RUN sed -i "s|.*skip-networking.*|skip-networking|g" /etc/my.cnf
RUN sed -i "s|.*skip-networking.*|skip-networking|g" /etc/my.cnf.d/mariadb-server.cnf
COPY ./wpdb.sh .
COPY ./sql_launch.sh .
RUN chmod 755 /wpdb.sh
RUN chmod 755 /sql_launch.sh
COPY ./supervisord.conf /etc/
EXPOSE 3306
CMD /sql_launch.sh
</code></pre>
<p>wpdb.sh</p>
<pre><code>mysql -e "CREATE DATABASE wordpress;"
mysql -e "CREATE USER 'admin'@'localhost' IDENTIFIED BY 'admin';"
mysql -e "CREATE USER 'lchantel'@'localhost' IDENTIFIED BY 'lchantel';"
mysql -e "CREATE USER 'pstein'@'localhost' IDENTIFIED BY 'pstein'"
mysql -e "CREATE USER 'admins_mom'@'localhost' IDENTIFIED BY 'admins_mom'"
mysql -e "DELETE FROM mysql.user WHERE user = '';"
mysql -e "SET PASSWORD FOR 'admins_mom'@'localhost' = PASSWORD('123456');"
mysql -e "SET PASSWORD FOR 'admin'@'localhost' = PASSWORD('123456');"
mysql -e "SET PASSWORD FOR 'pstein'@'localhost' = PASSWORD('123456');"
mysql -e "SET PASSWORD FOR 'lchantel'@'localhost' = PASSWORD('123456');"
mysql -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'admin'@'localhost' IDENTIFIED BY 'admin';"
mysql -e "FLUSH PRIVILEGES;"
</code></pre>
<p>sql_launch.sh</p>
<pre><code>#!bin/sh
rc default
chmod 777 /wpdb.sh && /wpdb.sh
rc-service mariadb stop
/usr/bin/supervisord -c /etc/supervisord.conf
</code></pre>
<p>This is my mysql output within the container</p>
<pre><code>MariaDB [(none)]> SELECT user FROM mysql.user
-> ;
+-------------+
| User |
+-------------+
| admin |
| admins_mom |
| lchantel |
| mariadb.sys |
| mysql |
| pstein |
| root |
+-------------+
7 rows in set (0.006 sec)
MariaDB [(none)]>
</code></pre>
<p>and this is my outputs inside minikube pod</p>
<pre><code># rc-status
Runlevel: default
mariadb [ stopped ]
Dynamic Runlevel: hotplugged
Dynamic Runlevel: needed/wanted
Dynamic Runlevel: manual
/ # rc-service mariadb start
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/blkio/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/cpu/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/cpu,cpuacct/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/cpuacct/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/cpuset/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/devices/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/freezer/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/hugetlb/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/memory/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/net_cls/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/net_cls,net_prio/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/net_prio/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/perf_event/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/pids/tasks: Read-only file system
/lib/rc/sh/openrc-run.sh: line 100: can't create /sys/fs/cgroup/systemd/tasks: Read-only file system
* Datadir '/var/lib/mysql/' is empty or invalid.
* Run '/etc/init.d/mariadb setup' to create new database.
* ERROR: mariadb failed to start
</code></pre>
<p>And so I guess the problem is in the mountPath section of the deployment in the yaml file. Here are the yaml files:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-mysql
spec:
capacity:
storage: 500Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/lchantel/pv_proj/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-mysql
labels:
app: mysql
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wildboar-mysql-service
labels:
app: mysql
spec:
type: ClusterIP
selector:
app: mysql
ports:
- name: mysql
port: 3306
targetPort: 3306
protocol: TCP
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wildboar-mysql-deploy
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: wildboar-mysql-pod
image: wildboar.mysql:latest
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysqldb-storage
mountPath: /var/lib/mysql/
env:
- name: MYSQL_ROOT_PASSWORD
value: root
imagePullPolicy: Never
volumes:
- name: mysqldb-storage
persistentVolumeClaim:
claimName: pv-claim-mysql
</code></pre>
<p>Google doesn't help and I simply don't know what I should do or where I should start from.</p>
| WildBoar | <p>Well, I solved my problem in the following way:</p>
<ol>
<li>I totally remade my mysql Dockerfile. The way the Alpine wiki suggests to configure mariadb is complicated and doesn't work. The <code>VOLUME /var/lib/mysql/</code> command was pretty useful for persisting data in that directory.</li>
<li>I decided to turn the .sh script into a mysql script. And, as far as I use my database for WordPress, there is no reason to create several users in the mysql database. It's enough to create one admin. Here is my Dockerfile:</li>
</ol>
<pre><code>FROM alpine:latest
RUN apk update && apk upgrade -a -U
RUN apk add mysql mysql-client openrc supervisor
COPY ./mariadb-server.cnf /etc/my.cnf
RUN chmod 755 /etc/my.cnf
RUN mkdir /sql_data
COPY ./wpdb.sql /sql_data
COPY ./launch.sh /
RUN chmod 755 ./launch.sh
RUN chown -R mysql:mysql /sql_data
VOLUME /var/lib/mysql/
RUN mkdir -p /run/mysqld/
EXPOSE 3306
CMD /launch.sh
</code></pre>
<p>and the sql script:</p>
<pre><code>CREATE DATABASE wordpress;
DELETE FROM mysql.user WHERE user = '';
GRANT ALL PRIVILEGES ON wordpress.* TO 'admin'@'%' IDENTIFIED BY 'admin';
SET PASSWORD FOR 'admin'@'%' = PASSWORD('admin');
FLUSH PRIVILEGES;
</code></pre>
<ol start="3">
<li>The supervisor service is useful when you run 2 or more services (supervisord excluded) at the same time. So here it's enough to use a script with the mysql setup, launching it as a daemon:</li>
</ol>
<pre><code>#!/bin/sh
mysql_install_db --skip-test-db --user=mysql --datadir=/var/lib/mysql
mysqld --user=mysql --datadir=/var/lib/mysql --init-file=/sql_data/wpdb.sql
</code></pre>
<ol start="4">
<li>There are some small changes in the yaml file. We created the admin user in the mysql script, so there is no need to create it in the yaml file.</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
type: ClusterIP
selector:
app: mysql
ports:
- name: mysql
port: 3306
targetPort: 3306
protocol: TCP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-mysql
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
restartPolicy: Always
containers:
- name: mysql
image: mysql:latest
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysqldb-storage
mountPath: "/var/lib/mysql/"
imagePullPolicy: Never
volumes:
- name: mysqldb-storage
persistentVolumeClaim:
claimName: pv-claim-mysql
</code></pre>
<ol start="5">
<li>There are some changes in the my.cnf configuration file:</li>
</ol>
<pre><code>#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
user=root
port=3306
datadir=/var/lib/mysql
tmpdir=/tmp
skip-networking=false
socket=/run/mysqld/mysqld.sock
wait_timeout = 600
max_allowed_packet = 64M
# Galera-related settings
#[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# Disabling symbolic links is recommended to prevent assorted security risks
symbolic-links=0
# this is only for embedded server
[embedded]
</code></pre>
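<p>For completeness, here is a minimal sketch of how the image and manifest above could be built, loaded into minikube and verified. The image tag, manifest filename and admin credentials are assumptions taken from the snippets above (the deployment references <code>mysql:latest</code> with <code>imagePullPolicy: Never</code>), so adjust them to your setup:</p>
<pre><code># build the image locally and make it visible to minikube
docker build -t mysql:latest .
minikube image load mysql:latest   # alternatively: eval $(minikube docker-env) before building

# apply the PVC/Service/Deployment manifest (assumed to be saved as mysql.yaml)
kubectl apply -f mysql.yaml

# check that the init script created the wordpress database
kubectl exec -it deploy/mysql -- mysql -h127.0.0.1 -uadmin -padmin -e "SHOW DATABASES;"
</code></pre>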
| WildBoar |
<p>I have a Kubernetes cluster set up and would like to use Datadog to monitor the cluster. I want to set up a monitor to alert me if a container/pod is stuck in the CrashLoopBackOff state.</p>
<p>How do I do this?</p>
| Aman Deep | <p>This is a little counterintuitive, but you need to look for pods that are not flagged by pod.ready. Example: find the number of failing pods on cluster my-cluster in the default namespace for a monitor:</p>
<pre><code>exclude_null(sum:kubernetes_state.pod.ready{condition:false,kube_cluster_name:my-cluster,kube_namespace:default,!pod_phase:succeeded})
</code></pre>
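<p>If you want to sanity-check what that metric should be reporting, a rough equivalent with plain kubectl (assuming access to the same cluster) is to list crash-looping or not-ready pods directly:</p>
<pre><code># pods whose containers are currently in CrashLoopBackOff
kubectl get pods --all-namespaces | grep CrashLoopBackOff

# pods that are not Running and not Succeeded (roughly what condition:false counts)
kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
</code></pre>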
| Samuel Silbory |
<p>I created a k8s cluster installed by k0s on an AWS EC2 instance. In order to make delivering new clusters faster, I tried to make an AMI from it.</p>
<p>However, when I started a new EC2 instance, the internal IP changed and the node became <code>NotReady</code>:</p>
<pre><code>ubuntu@ip-172-31-26-46:~$ k get node
NAME STATUS ROLES AGE VERSION
ip-172-31-18-145 NotReady <none> 95m v1.21.1-k0s1
ubuntu@ip-172-31-26-46:~$
</code></pre>
<p>Would it be possible to reconfigure it?</p>
<hr />
<h2>Work around</h2>
<p>I found a workaround to make the AWS AMI work.</p>
<h4>Short answer</h4>
<ol>
<li>install the node with kubelet's <code>--extra-args</code></li>
<li>update the kube-api address to the new IP and restart the kubelet</li>
</ol>
<h2>Details :: 1</h2>
<p>In a Kubernetes cluster, the <code>kubelet</code> plays the node agent role. It tells the <code>kube-api</code> "Hey, I am here and my name is XXX".</p>
<p>The name of a node is its hostname and cannot be changed after it is created. It can be set with <code>--hostname-override</code>.</p>
<p>If you don't change the node name, the <code>kube-api</code> will try to use the old hostname and then get errors caused by the <code>old-node-name</code> not being found.</p>
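<p>As a sketch, assuming the worker is joined with k0s and that its <code>--kubelet-extra-args</code> option is used to pass the flag through (flag names may differ between k0s versions, so treat this as an illustration only):</p>
<pre><code># hypothetical example: give the node a fixed name instead of the instance hostname
sudo k0s install worker --token-file /path/to/join-token \
  --kubelet-extra-args="--hostname-override=my-fixed-node-name"
sudo k0s start
</code></pre>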
<h2>Details :: 2</h2>
<p>For k0s, the kubelet's KUBECONFIG is put in <code>/var/lib/k0s/kubelet.conf</code>, where there is a kube-api server location:</p>
<pre><code>server: https://172.31.18.9:6443
</code></pre>
<p>In order to connect to the new kube-api location, update it.</p>
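<p>For example, a minimal sketch of that update (the old/new IPs and the k0s systemd unit name are assumptions; adjust them to your instance and node role):</p>
<pre><code>NEW_IP=172.31.99.99   # the new kube-api address
sudo sed -i "s|server: https://172.31.18.9:6443|server: https://${NEW_IP}:6443|" /var/lib/k0s/kubelet.conf

# restart the k0s-managed kubelet (the unit may be k0sworker or k0scontroller)
sudo systemctl restart k0sworker
</code></pre>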
| qrtt1 | <p>Did you check the kubelet logs? Most likely it's a problem with certificates. You cannot just turn an existing node into an AMI and hope it will work, since certificates are signed for a specific IP.</p>
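<p>For reference, one way to look at those logs on a k0s node (assuming k0s was installed as a systemd service; the unit name depends on the node role and is an assumption here):</p>
<pre><code># worker node
sudo journalctl -u k0sworker --no-pager | tail -n 50
# controller (or combined controller+worker) node
sudo journalctl -u k0scontroller --no-pager | tail -n 50
</code></pre>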
<p>Check out the <a href="https://github.com/awslabs/amazon-eks-ami" rel="nofollow noreferrer">awslabs/amazon-eks-ami</a> repo on GitHub. You can check out how AWS builds its k8s AMI.</p>
<p>There is a <a href="https://github.com/awslabs/amazon-eks-ami/blob/5a3df0fdb17e540f8d5a9b405096f32d6b9b0a3f/files/bootstrap.sh" rel="nofollow noreferrer">files/bootstrap.sh</a> file in the repo that is run to bootstrap an instance. It does all sorts of things that are instance specific, which includes getting certificates.</p>
<p>If you want to <em>"make delivering new clusters faster"</em>, I'd recommend creating an AMI with <a href="https://docs.k0sproject.io/v1.21.1+k0s.0/k0s-multi-node/#1-download-k0s" rel="nofollow noreferrer">all dependencies</a> but without the actual k8s bootstrapping. Install k8s (or k0s in your case) after you start the instance from the AMI, not before. (Or figure out how to regenerate the certs and configs that are node specific.)</p>
| Matt |
<p>I am getting the following error in AWS EKS version 16.</p>
<blockquote>
<p>Failed to install app test. Error: unable to build kubernetes objects
from release manifest: error validating "": error validating data:
ValidationError(VerticalPodAutoscaler.spec): unknown field "labels" in
io.k8s.autoscaling.v1.VerticalPodAutoscaler.spec</p>
</blockquote>
<p>This is my yaml. I also tried with apiVersion: autoscaling.k8s.io/v1beta2, but got the same error.</p>
<pre><code>---
apiVersion: "autoscaling.k8s.io/v1"
kind: VerticalPodAutoscaler
metadata:
name: web
spec:
labels:
app: web
targetRef:
apiVersion: "apps/v1"
kind: StatefulSet
name: web
updatePolicy:
updateMode: "Auto"
...
</code></pre>
| user3756483 | <p>Check this repo:</p>
<p><a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L61" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L61</a></p>
<p>Spec has only three fields:</p>
<blockquote>
<p>TargetRef, UpdatePolicy, ResourcePolicy</p>
</blockquote>
<p>This is true for all three available versions: autoscaling.k8s.io/v1, autoscaling.k8s.io/v1beta1, and autoscaling.k8s.io/v1beta2.</p>
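<p>In other words, <code>labels</code> belongs under <code>metadata</code>, not under <code>spec</code>. As a sketch, the manifest from the question could be corrected and applied like this (names taken from the question):</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web
  labels:
    app: web
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: web
  updatePolicy:
    updateMode: "Auto"
EOF
</code></pre>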
| Kiran |
<p>I want to disable SNI on the nginx-ingress. If a call using openssl like below is used:</p>
<pre><code>openssl s_client -showcerts -connect ***********.gr:443
</code></pre>
<p>Then I want nginx-ingress to use only the certificate that I have configured and not the fake-k8s-cert.</p>
<p>The certificate is working if I browse the web app, but I also need to set the default certificate.</p>
<p>An example is below:</p>
<pre><code>[root@production ~]# openssl s_client -showcerts -connect 3dsecureuat.torawallet.gr:443
CONNECTED(00000003)
depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
i:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
-----BEGIN CERTIFICATE-----
---
Server certificate
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
issuer=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
---
Acceptable client certificate CA names
/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
...
</code></pre>
<p>I have also configured ingress to use the secret on all hostnames without specifying host:</p>
<pre><code>tls:
  - secretName: ******wte-ingress
</code></pre>
| vasilis | <p>The <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/tls.md#default-ssl-certificate" rel="noreferrer">Default SSL Certificate</a> flag solved the issue, as the OP mentioned.</p>
<p>In the NGINX documentation you can read:</p>
<blockquote>
<p>NGINX Ingress controller provides the flag <code>--default-ssl-certificate</code>. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.</p>
<p>For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.</p>
<p>The default certificate will also be used for ingress tls: sections that do not have a secretName option.</p>
</blockquote>
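<p>As a sketch, assuming the controller runs as a Deployment named <code>ingress-nginx-controller</code> in the <code>ingress-nginx</code> namespace (names vary between installs), the flag could be wired up like this:</p>
<pre><code># create the TLS secret that should become the default certificate
kubectl create secret tls foo-tls --cert=tls.crt --key=tls.key -n default

# add the flag to the controller's container args, e.g. via kubectl edit
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
#   spec.template.spec.containers[0].args:
#     - --default-ssl-certificate=default/foo-tls
</code></pre>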
| Matt |
<p>I am trying to install metrics-server on my Kubernetes cluster, but it is not reaching the READY state.</p>
<p>I installed metrics-server using this method:</p>
<pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/ITsQL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ITsQL.png" alt="enter image description here" /></a></p>
<p>After installing I tried some commands, kubectl top pods and kubectl top nodes, but I got an error:</p>
<p><strong>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)</strong></p>
<p>The metrics server failed to start:</p>
<p><a href="https://i.stack.imgur.com/VAlmz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VAlmz.png" alt="enter image description here" /></a></p>
| Amjed saleel | <p>Enable the metrics-server addon in the minikube cluster.</p>
<p>Try the following command:</p>
<pre><code>minikube addons enable metrics-server
</code></pre>
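<p>It can take a minute or two for the metrics API to become available after enabling the addon; a quick way to verify (plain kubectl, nothing minikube specific):</p>
<pre><code>kubectl -n kube-system rollout status deployment/metrics-server
kubectl get apiservices | grep metrics
kubectl top nodes
</code></pre>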
| Amjed saleel |
<p>I have built a Kubernetes cluster using kubeadm on Ubuntu 16.04 in my home lab, 1 master and 2 nodes, with Calico as the CNI. All nodes can resolve internet addresses on their consoles, but the issue I am noticing is that the pods I deploy don't have access to the internet. CoreDNS seems to work fine. That being said, is there anything specific I need to do or configure on the Kubernetes cluster so the pods I deploy have access to the internet by default?</p>
<pre><code>cloudadmin@vra-vmwlab-cloud-vm-318:~$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<pre><code>cloudadmin@vra-vmwlab-cloud-vm-318:~$ kubectl exec -ti busybox -- ping google.com
ping: bad address 'google.com'
</code></pre>
<p>From the busybox pod I can see it's pointing to the right DNS IP (below), but it still can't reach google.com, as you can see above:</p>
<pre><code>cloudadmin@vra-vmwlab-cloud-vm-318:~$ kubectl exec -ti busybox -- sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local vmwlab.local
options ndots:5
</code></pre>
<p>Any help on this is appreciated. Thank you.</p>
| Maher AlAsfar | <p>Issue fixed.</p>
<p>The documentation at <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</a> mentions the following:</p>
<pre><code>Letting iptables see bridged traffic
Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
</code></pre>
<p>I also chose to use Weave Net instead of Calico as the CNI.</p>
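<p>For anyone hitting the same thing, a short sketch of how to verify the module/sysctl state on each node and re-test pod connectivity after applying the settings above (the busybox pod name is the one from the question):</p>
<pre><code># on every node: confirm the module is loaded and the sysctls are set
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

# re-test from the pod
kubectl exec -ti busybox -- nslookup google.com
kubectl exec -ti busybox -- ping -c 3 google.com
</code></pre>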
| Maher AlAsfar |
<p>I am trying to contact InfluxDB running on Kubernetes. I am new to InfluxDB and have just started using it. I used the query HTTP API in the following way: <code>curl "http://pod_ip_address/query?q=show+databases"</code>, but the response is <code>{"code":"unauthorized","message":"unauthorized access"}</code>. Right now I only have the user UI, so maybe the problem could be related to that. Does anybody know what could be the issue?</p>
| C1ngh10 | <p>InfluxDB 2.0 requires authentication using an ORG and a Token. You can pass these as HTTP headers in your <code>curl</code> call as shown here: <a href="https://docs.influxdata.com/influxdb/v2.0/api-guide/api_intro/#authentication" rel="nofollow noreferrer">https://docs.influxdata.com/influxdb/v2.0/api-guide/api_intro/#authentication</a></p>
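<p>As a sketch, assuming an InfluxDB 2.0 instance and an API token generated in the UI (the token, org and port below are placeholders/assumptions):</p>
<pre><code>INFLUX_TOKEN="my-secret-token"   # API token from the InfluxDB UI
INFLUX_ORG="my-org"

# v2 API: list buckets for the org
curl -s -H "Authorization: Token ${INFLUX_TOKEN}" \
  "http://pod_ip_address:8086/api/v2/buckets?org=${INFLUX_ORG}"

# the 1.x-compatibility /query endpoint also accepts the token header
curl -s -H "Authorization: Token ${INFLUX_TOKEN}" \
  "http://pod_ip_address:8086/query?q=show+databases"
</code></pre>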
| mhall119 |