Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am trying to understand memory requests in k8s. I have observed that when I set the memory request for a pod, e.g. nginx, to 1Gi, it actually consumes only about 1Mi (I have checked it with <code>kubectl top pods</code>). My question: I have 2Gi of RAM on the node and set the memory request for pod1 and pod2 to 1.5Gi each, but they actually consume only 1Mi of memory. I start pod1 and it should start, because the node has 2Gi of memory and pod1 requests only 1.5Gi. But what happens if I try to start pod2 after that? Would it start? I am not sure, because pod1 consumes only 1Mi of memory but has a request for 1.5Gi. Does the memory request of pod1 influence the scheduling of pod2? How will k8s handle this situation?</p>
| Kirill Bugaev | <p>A memory request is the amount of memory that Kubernetes reserves for a pod. If a pod requests some amount of memory, there is a strong guarantee that it will get it. This is why you can't run pod1 with a 1.5Gi request and pod2 with a 1.5Gi request on a 2Gi node: if Kubernetes allowed it and these pods started using that memory, Kubernetes would not be able to satisfy the requirements, and that is unacceptable.</p>
<p>This is why the sum of the requests of all pods running on a specific node cannot exceed that node's allocatable memory.</p>
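<p>For illustration, a minimal pod manifest with such a request could look like the sketch below (names and values are just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "1.5Gi"   # reserved on the node for scheduling, even if actual usage stays around 1Mi
      limits:
        memory: "1.5Gi"   # optional upper bound on actual usage
</code></pre>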
<blockquote>
<p>"But what happens If I try to start pod2 after that? [...] How k8s
will rule this situation?"</p>
</blockquote>
<p>If you have only one node with 2Gi of memory then pod2 won't start. You would see that this pod is in the Pending state, waiting for resources. If you have spare resources on a different node then Kubernetes would schedule pod2 to that node.</p>
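<p>You can confirm this from the scheduler's point of view with standard commands (the pod name is illustrative); for a pod that cannot be placed, the events include a <code>FailedScheduling</code> entry citing insufficient memory:</p>
<pre><code>kubectl get pod pod2        # STATUS shows Pending
kubectl describe pod pod2   # check the Events section for FailedScheduling
</code></pre>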
<p>Let me know if something is not clear and needs more explanation.</p>
| Matt |
<p>I am using Minikube and here is my configuration:</p>
<p><code>kubectl describe deployment mysql</code></p>
<p>the output:</p>
<pre><code>Name: mysql
Namespace: default
CreationTimestamp: Sat, 12 Nov 2022 02:20:54 +0200
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=mysql
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'mysql-pass'> Optional: false
Mounts:
/docker-entrypoint-initdb.d from mysql-init (rw)
Volumes:
mysql-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mysql-init
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-77fd55bbd9 (1/1 replicas created)
</code></pre>
<p>when I try to connect to it using mysql workbench:</p>
<p><a href="https://i.stack.imgur.com/sGqJt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGqJt.png" alt="workbench config" /></a></p>
<p>it shows me:</p>
<p><a href="https://i.stack.imgur.com/w94tA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w94tA.png" alt="failed to connect to mysql" /></a></p>
<p>However, when I execute this line to create a <strong>mysql-client</strong> to try to connect to mysql server:</p>
<p><code>kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p</code>
and then enter the password, <strong>it works well!</strong> but still I need to use workbench better.</p>
<p>any help please?</p>
<p>edit 1:</p>
<p>Here is the yaml file for the deployment and the service:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-init
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-init
        configMap:
          name: mysql-init
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql
</code></pre>
| Karim Abdel-Qader | <p>If you want to connect directly to your <code>mysql</code> Deployment's Pod via <code>localhost</code>, first, you have to forward a Pod's container port to the <code>localhost</code>.</p>
<pre><code>kubectl port-forward <pod-name> <local-port>:<container-port>
</code></pre>
<p>Then your <code>mysql</code> will be accessible on <code>localhost:<local-port></code>.</p>
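<p>As a concrete sketch (the local port choice is arbitrary), you can forward the whole Deployment and then point MySQL Workbench at <code>127.0.0.1:3306</code>:</p>
<pre><code>kubectl port-forward deployment/mysql 3306:3306
</code></pre>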
<p>The other way to communicate with your Pod is to create a Service object that will pass your requests directly to the Pod. There are a couple of Service types for different kinds of usage. Check the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">documentation</a> to learn more.</p>
<p>The reason the following command</p>
<pre><code>kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p
</code></pre>
<p>connects to the database correctly is that it runs inside the cluster: it starts a temporary <code>mysql-client</code> Pod that can resolve the <code>mysql</code> Service by its DNS name.</p>
<p><strong>Edit 1</strong></p>
<p>If you do not specify the type of the Service, the default is <code>ClusterIP</code>, which does not expose the port outside the cluster.</p>
<p>Because <code>Minikube</code> doesn't handle <code>LoadBalancer</code> Services out of the box, use the <code>NodePort</code> Service type instead.</p>
<p>Your Service YAML manifest should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql
</code></pre>
<p>Finally, since your cluster is provisioned via <code>Minikube</code>, you still need to run the command below to fetch the <code>Minikube</code> IP and the Service’s <code>NodePort</code>:</p>
<pre><code>minikube service <service-name> --url
</code></pre>
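<p>For example, assuming the Service keeps the name <code>mysql</code>, the following prints a URL whose host and port you can enter in Workbench:</p>
<pre><code>minikube service mysql --url
# or look up the pieces yourself:
minikube ip
kubectl get svc mysql -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre>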
| Mikolaj |
<p>When I call kubectl logs pod_name, I get both the stdout/err combined. Is it possible to specify that I only want stdout or stderr? Likewise I am wondering if it is possible to do so through the k8s rest interface. I've searched for several hours and read through the repository but could not find anything.</p>
<p>Thanks!</p>
| Zel | <p>No, this is not possible. To my knowledge, at the moment of writing this, Kubernetes supports only one <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#read-log-pod-v1-core" rel="nofollow noreferrer">logs API endpoint</a>, and it returns all logs (stdout and stderr combined).</p>
<p>If you want to access them separately, you should consider using a different <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">logging driver</a> or querying the logs directly from Docker.</p>
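<p>As a sketch of the Docker option (container name is illustrative): <code>docker logs</code> replays the container's stdout and stderr onto your shell's stdout and stderr respectively, so ordinary redirection can separate them on a Docker-based node:</p>
<pre><code># stdout only
docker logs mycontainer 2>/dev/null
# stderr only
docker logs mycontainer 2>&1 >/dev/null
</code></pre>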
| Matt |
<p>I want to use execute helm on a gitlab-runner on my kubernetes in gitlab pipelines.</p>
<p>My gitlab.ci.yaml:</p>
<pre><code># Deployment step
deploy:
  stage: deploy
  image: alpine/helm:latest
  script:
    - helm --namespace gitlab upgrade initial ./iot/
  tags:
    - k8s
    - dev
</code></pre>
<p>What i have done so far:</p>
<ol>
<li>Installed the gitlab-runner on my kubernetes with helm (<a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a>)</li>
</ol>
<p>My values.yaml:</p>
<pre><code>image: gitlab/gitlab-runner:alpine-v11.6.0
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "mytoken"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
## For RBAC support:
rbac:
  create: true
  ## Define specific rbac permissions.
  # resources: ["pods", "pods/exec", "secrets"]
  # verbs: ["get", "list", "watch", "create", "patch", "delete"]
  ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs cluster-wide or only within namespace
  clusterWideAccess: false
metrics:
  enabled: true
## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: ubuntu:16.04
  locked: false
  tags: "k8s,dev"
  privileged: true
  namespace: gitlab
  pollTimeout: 180
  outputLimit: 4096
  cache: {}
  ## Build Container specific configuration
  ##
  builds: {}
  # cpuLimit: 200m
  # memoryLimit: 256Mi
  # cpuRequests: 100m
  # memoryRequests: 128Mi
  ## Service Container specific configuration
  ##
  services: {}
  # cpuLimit: 200m
  # memoryLimit: 256Mi
  # cpuRequests: 100m
  # memoryRequests: 128Mi
  ## Helper Container specific configuration
  ##
  helpers: {}
securityContext:
  fsGroup: 65533
  runAsUser: 100
## Configure resource requests and limits ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
envVars:
  - name: RUNNER_EXECUTOR
    value: kubernetes
## list of hosts and IPs that will be injected into the pod's hosts file
hostAliases: []
podAnnotations: {}
podLabels: {}
</code></pre>
<ol start="3">
<li>gitlab-runner is succesfully connected with gitlab.com</li>
</ol>
<p>But i get the following message on gitlab when executing the deployment step:</p>
<pre><code> Error: UPGRADE FAILED: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:gitlab:default" cannot list resource "secrets" in API group "" in the namespace "gitlab"
</code></pre>
<p>I've checked my RBAC ClusterRoles and they are all set by default to a wildcard on verbs and resources, but I have also tried to set the needed rights explicitly:</p>
<pre><code> resources: ["pods", "pods/exec", "secrets"]
verbs: ["get", "list", "watch", "create", "patch", "delete"]
</code></pre>
<p>Nothing worked :-(
What have I done wrong?</p>
| rubiktubik | <p>I think I have found the solution to this problem. Try to create a ClusterRoleBinding like this:</p>
<pre><code>kubectl create clusterrolebinding gitlab-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
</code></pre>
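<p>Note that this binding grants cluster-admin to every service account in the cluster. A more narrowly scoped sketch, based on the service account named in the error message (the rule list is an assumption you may need to extend), would be a Role plus RoleBinding for <code>system:serviceaccount:gitlab:default</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-deployer
  namespace: gitlab
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "secrets", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deployer
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-deployer
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab
</code></pre>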
| radikkit |
<p>I deployed sql server 2017 to my kubernetes and I want to put it a sample db such as northwind. There is no a gui for manage sql server. How Can I do that?</p>
| Onur AKKÖSE | <p>You can forward your sql server port to localhost and then connect to the database using SQL Server Management Studio.</p>
<pre><code>kubectl port-forward <sql-pod-name> <localhost-port>:<mssql-port>
</code></pre>
<p>For example:</p>
<pre><code>kubectl port-forward mssql-statefulset-0 1433:1433
</code></pre>
<p>Then your database would be accessible on localhost.</p>
<p><img src="https://i.stack.imgur.com/ERgeO.png" alt="enter image description here" /></p>
<p>Note that in the SQL Server Management Studio connection dialog there is a comma between the address and the port (for example <code>localhost,1433</code>).</p>
<p>If you manage to connect successfully, you can manually create the database using the SQL Server Management Studio tool.</p>
<p>Another way is to connect directly to your database container inside a pod using exec command and then execute sqlcmd commands.</p>
<pre><code>kubectl exec -it <pod-name> -- /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'SA-password'
</code></pre>
<p>Or just like this</p>
<pre><code>kubectl exec -it <pod-name> -- /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'SA-password' -Q 'CREATE DATABASE <database-name>'
</code></pre>
| Mikolaj |
<p>I made a deployment and scaled out to 2 replicas. And I made a service to forward it.</p>
<p>I found that kube-proxy uses iptables for forwarding from Service to Pod. But the load balancing strategy of iptables is RANDOM.</p>
<p>How can I force my service to forward requests to 2 pods using round-robin strategy without switching my kube-proxy to <code>userspace</code> or <code>ipvs</code> mode?</p>
| Peng Deng | <p>You cannot. </p>
<p>But if you really don't want to change the <code>--proxy-mode</code> flag on kube-proxy, you can use a third-party proxy/load balancer (like HAProxy) and point it at your application. This is usually not the best option, though, as you need to make sure it's deployed with HA and it will also add complexity to your deployment.</p>
| Matt |
<p>I am trying to set up Istio for the Airflow webserver.
My current Airflow URL is <a href="http://myorg.com:8080/appv1/airflow" rel="nofollow noreferrer">http://myorg.com:8080/appv1/airflow</a> (without Istio).</p>
<p>After I tried to integrate with Istio, I wrote the VirtualService given below, but I end up getting a 404 Not Found. I am trying to access the URL at <a href="http://myorg.com/v1airlfow" rel="nofollow noreferrer">http://myorg.com/v1airlfow</a>.</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: airflow-vservice
  namespace: "{{ .Release.Namespace }}"
spec:
  hosts:
  - "*"
  gateways:
  - airflow-gateway
  http:
  - name: airflow-http
    match:
    - uri:
        exact: "/v1airflow"
    - uri:
        exact: "/v1airflow/"
    rewrite:
      uri: "/appv1/airflow/"
    route:
    - destination:
        host: {{ .Release.Name }}-airflow-web.{{ .Release.Namespace }}.svc.cluster.local
        port:
          number: 8080
    headers:
      request:
        set:
          X-Forwarded-Proto: "http"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: airflow-gateway
  namespace: "{{ .Release.Namespace }}"
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: airflow-http
      protocol: HTTP
    hosts:
    - "*"
</code></pre>
| rufus-atd | <p>What fixed it was switching the <code>match</code> rules from <code>exact</code> to <code>prefix</code> and using a regex rewrite, roughly:</p>
<pre><code>    match:
    - uri:
        prefix: "/v1airflow"
    - uri:
        prefix: "/v1airflow/home"
    - uri:
        prefix: "/v1airflow/static"
    - uri:
        prefix: "/v1airflow/login"
    rewrite:
      regex: ^/v1airflow/(.*)$
</code></pre>
<p>Got it working.</p>
| rufus-atd |
<p>I would like to add the link to some field and part of that link should differ depends on the used environment. Lets say that I have dev and prod environment.</p>
<ul>
<li>Link for dev should look like: "<em>https://dev.mytest.com</em>"</li>
<li>Link for prod should look like: "<em>https://prod.mytest.com</em>"</li>
</ul>
<p>I think that I should somehow put these variables into application.yaml (maybe Kubernetes would be helpful here?) but I don't know how to access these variables in a TypeScript file. I'm using Spring in my project with the application.yaml file.
Could you give me some tips on how I should achieve that, or send a link to helpful articles?</p>
| Koin Arab | <p>You can use some files, usually called <code>environment.[xyz].ts</code>, to achieve some configuration depending on the type of build configured in your <code>angular.json</code> file.</p>
<p>For more information: <a href="https://angular.io/guide/build" rel="nofollow noreferrer">https://angular.io/guide/build</a></p>
<p>=== EDIT ===</p>
<p>If you need it to be a runtime configuration, you can create a JSON file in <code>src/assets/</code>, then retrieve and parse it with a service by making a simple HTTP call, like <code>this.http.get('/assets/xyz.json')</code>, and store the configuration inside the service.</p>
<p>To be sure that when you use this service (via injection) the configuration is already loaded, you can set its configuration-loading function to be executed on app startup with the <code>APP_INITIALIZER</code> DI token.</p>
<p><a href="https://www.tektutorialshub.com/angular/angular-runtime-configuration/" rel="nofollow noreferrer">This tutorial</a> explains in details what I'm suggesting.</p>
| Thomas Iommi |
<p>In my <a href="https://docs.ovh.com/gb/en/kubernetes/" rel="nofollow noreferrer">OVH Managed Kubernetes</a> cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <code><node-ip>:<node-port></code>.</p>
<p>I followed this tutorial: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/#creating-a-service-for-an-application-running-in-two-pods" rel="nofollow noreferrer">Creating a service for an application running in two pods</a>. I can successfully access the service on <code>localhost:<target-port></code> along with <code>kubectl port-forward</code>, but it doesn't work on <code><node-ip>:<node-port></code> (request timeout) (though it works from inside the cluster).</p>
<p>The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.</p>
<p>The security group seems to allow any traffic:</p>
<p><a href="https://i.stack.imgur.com/1AbNG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AbNG.png" alt="enter image description here" /></a></p>
| sdabet | <p>Well, I can't help any further I guess, but I would check the following:</p>
<ol>
<li>Are you using the public node ip address?</li>
<li>Did you configure your service as a LoadBalancer properly?
<a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a></li>
<li>Do you have a loadbalancer and set it up properly?</li>
<li>Did you install any Ingress controller? (ingress-nginx?) You may need to add a Daemonset for this ingress-controller to duplicate the ingress-controller pod on each node in your cluster</li>
</ol>
<p>Otherwise, I would suggest an Ingress (if this works, you can rule out any firewall-related issues).</p>
<p>This page explains very well:
<a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types?rq=1">What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?</a></p>
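<p>As a quick check of the basics above (the service name is illustrative), the following shows which nodePort was allocated and which node IPs you should be hitting:</p>
<pre><code>kubectl get svc my-service -o wide   # PORT(S) column shows <port>:<node-port>/TCP
kubectl get nodes -o wide            # EXTERNAL-IP (or INTERNAL-IP) is the <node-ip> to use
</code></pre>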
| Wesley van der Meer |
<p>I looked into the iptables rules used by kube-dns, and I'm a little confused by the sub-chain "KUBE-SEP-V7KWRXXOBQHQVWAT"; its content is below.<br>
My question is why we need the target "KUBE-MARK-MASQ" when the source IP address (172.18.1.5) is the kube-dns IP address. Per my understanding, the target IP address should be the kube-dns pod's address 172.18.1.5, not the source IP address, because all the DNS queries come from other addresses (services); the DNS queries cannot originate from the pod itself.</p>
<pre><code># iptables -t nat -L KUBE-SEP-V7KWRXXOBQHQVWAT
Chain KUBE-SEP-V7KWRXXOBQHQVWAT (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.18.1.5 anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:172.18.1.5:53
</code></pre>
<p><strong>Here is the full chain information</strong>:</p>
<pre><code># iptables -t nat -L KUBE-SERVICES
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !172.18.1.0/24 10.0.62.222 /* kube-system/metrics-server cluster IP */ tcp dpt:https
KUBE-SVC-QMWWTXBG7KFJQKLO tcp -- anywhere 10.0.62.222 /* kube-system/metrics-server cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !172.18.1.0/24 10.0.213.2 /* kube-system/healthmodel-replicaset-service cluster IP */ tcp dpt:25227
KUBE-SVC-WT3SFWJ44Q74XUPR tcp -- anywhere 10.0.213.2 /* kube-system/healthmodel-replicaset-service cluster IP */ tcp dpt:25227
KUBE-MARK-MASQ tcp -- !172.18.1.0/24 10.0.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.0.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ udp -- !172.18.1.0/24 10.0.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-TCOU7JCQXEZGVUNU udp -- anywhere 10.0.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-MARK-MASQ tcp -- !172.18.1.0/24 10.0.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- anywhere 10.0.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
# iptables -t nat -L KUBE-SVC-ERIFXISQEP7F7OF4
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target prot opt source destination
KUBE-SEP-V7KWRXXOBQHQVWAT all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
KUBE-SEP-BWCLCJLZ5KI6FXBW all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */
# iptables -t nat -L KUBE-SEP-V7KWRXXOBQHQVWAT
Chain KUBE-SEP-V7KWRXXOBQHQVWAT (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.18.1.5 anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:172.18.1.5:53
</code></pre>
| jianrui | <p>You can think of kubernetes service routing in iptables as the following steps:</p>
<ol>
<li>Loop through chain holding all kubernetes services</li>
<li>If you hit a matching service address and IP, go to service chain</li>
<li>The service chain will randomly select an endpoint from the list of endpoints (using probabilities)</li>
<li>If the endpoint selected has the same IP as the source address of the traffic, mark it for MASQUERADE later (this is the <code>KUBE-MARK-MASQ</code> you are asking about). In other words, if a pod tries to talk to a service IP and that service IP "resolves" to the pod itself, we need to mark it for MASQUERADE later (actual MASQUERADE target is in the POSTROUTING chain because it's only allowed to happen there)</li>
<li>Do the DNAT to selected endpoint and port. This happens regardless of whether 3) occurs or not.</li>
</ol>
<p>If you look at <code>iptables -t nat -L POSTROUTING</code> there will be a rule that is looking for marked packets which is where the MASQUERADE actually happens.</p>
<p>The reason why the <code>KUBE-MARK-MASQ</code> rule has to exist is for hairpin NAT. The details why are a somewhat involved explanation, but here's my best attempt:</p>
<p>If MASQUERADE <strong>didn't</strong> happen, traffic would leave the pod's network namespace as <code>(pod IP, source port -> virtual IP, virtual port)</code> and then be NAT'd to <code>(pod IP, source port-> pod IP, service port)</code> and immediately sent back to the pod. Thus, this traffic would then arrive at the service with the source being <code>(pod IP, source port)</code>. So when this service replies it will be replying to <code>(pod IP, source port)</code>, <strong>but</strong> the pod (the kernel, really) is expecting traffic to come back on the same IP and port it sent the traffic to originally, which is <code>(virtual IP, virtual port)</code> and thus the traffic would get dropped on the way back.</p>
| maxstr |
<p>I am trying to create a scalable varnish cluster on some managed Kubernetes services (azure's, google's, or amazon's Kubernetes service) but I'm having trouble getting started. Any advice or references are helpful, thanks!</p>
| tim kelly | <p>We (Varnish Software) are working on official Helm charts to make k8s deployments a lot easier. For the time being we only have an official Docker Image.</p>
<p>You can find install instructions on <a href="https://www.varnish-software.com/developers/tutorials/running-varnish-docker/" rel="nofollow noreferrer">https://www.varnish-software.com/developers/tutorials/running-varnish-docker/</a>.</p>
<p>However, I have some standalone k8s files that can be a good way to get started.</p>
<h2>Config map</h2>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: varnish
labels:
name: varnish
data:
default.vcl: |+
vcl 4.1;
backend default none;
sub vcl_recv {
if (req.url == "/varnish-ping") {
return(synth(200));
}
if (req.url == "/varnish-ready") {
return(synth(200));
}
return(synth(200,"Welcome"));
}
</code></pre>
<p>This config map contains the VCL file. This VCL file doesn't do anything useful besides having <code>/varnish-ping</code> & <code>/varnish-ready</code> endpoints. Please customize to your needs.</p>
<h2>Service definition</h2>
<p>Here's a basic service definition that exposes port <code>80</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: varnish
labels:
name: varnish
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
protocol: TCP
name: varnish-http
selector:
name: varnish
</code></pre>
<h2>Deployment</h2>
<p>And finally here's the deployment. It uses the official Varnish Docker image and more specifically the <em>6.0 LTS version</em>.</p>
<p>It uses the synthetic <code>/varnish-ping</code> & <code>/varnish-ready</code> endpoints and mounts the config map under <code>/etc/varnish</code> to load the VCL file.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: varnish
labels:
name: varnish
spec:
replicas: 1
selector:
matchLabels:
name: varnish
template:
metadata:
labels:
name: varnish
spec:
containers:
- name: varnish
image: "varnish:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /varnish-ping
port: 80
initialDelaySeconds: 30
periodSeconds: 5
readinessProbe:
httpGet:
path: /varnish-ready
port: 80
initialDelaySeconds: 30
periodSeconds: 5
volumeMounts:
- name: varnish
mountPath: /etc/varnish
volumes:
- name: varnish
configMap:
name: varnish
</code></pre>
<h2>Deploying the config</h2>
<p>Run <code>kubectl apply -f .</code> in the folder with the various k8s files (config map, service definition & deployment). This is the output you'll get:</p>
<pre><code>$ kubectl apply -f .
configmap/varnish created
deployment.apps/varnish created
service/varnish created
</code></pre>
<p>By running <code>kubectl get all</code> you'll see the status of the deployment.</p>
<p>When running this on your local computer, just call <code>kubectl port-forward service/varnish 8080:80</code> to port forward the Varnish service to <code>localhost:8080</code>. This allows you to test Varnish on k8s locally by accessing <code>http://localhost:8080</code>.</p>
<p>Run <code>kubectl delete -f .</code> to tear it down again.</p>
<h2>Disclaimer</h2>
<p>Although these configs were featured in my <a href="https://www.varnish-software.com/developers/varnish-6-by-example-book/" rel="nofollow noreferrer">Varnish 6 by Example</a> book, this is not an official tutorial. These scripts can probably be improved. However, it is a simple way to get started.</p>
| Thijs Feryn |
<p>I am working on improving the utilization of cluster, and the cluster is YARN and will be Kubernetes.</p>
<p>My question is: how do I improve the utilization ratio?
How should I think about this question? Are there established methods?
For YARN, and for Kubernetes?</p>
<p>For YARN, I have read some articles or watched some videos.
YARN has NM and RM.</p>
<ol>
<li>Oversubscription based on historical job running data. (<a href="https://databricks.com/session/oversubscribing-apache-spark-resource-usage-for-fun-and" rel="nofollow noreferrer">https://databricks.com/session/oversubscribing-apache-spark-resource-usage-for-fun-and</a>)<br/>
a. set appropriate MEMORY (5G) and CPU for job<br/>
b. set a buffer for the job (1G)<br/>
c. do preemption to actively on NM</li>
</ol>
<br/>
<ol start="2">
<li><p>Oversubscription based on the real time utilization. (<a href="https://research.facebook.com/publications/ubis-utilization-aware-cluster-scheduling/" rel="nofollow noreferrer">https://research.facebook.com/publications/ubis-utilization-aware-cluster-scheduling/</a>)<br/>
a. do not modify job settings<br/>
b. monitor the utilization and allocation of NM, do oversubscription to the node<br/>
c. do preemption actively on NM<br/></p>
</li>
<li><p>Oversubscription of NM resources<br/>
a. NM has 100G and 30 cores in physical, but announce have 120G and 40 cores.<br/>
b. preemption handled by spark or YARN framework.<br/></p>
</li>
</ol>
| siyu | <p>I have had a lot of success with oversubscription. Classically, users overestimate their requirements.</p>
<p>A cool tool that LinkedIn released is <a href="https://engineering.linkedin.com/blog/2016/04/dr-elephant-open-source-self-serve-performance-tuning-hadoop-spark" rel="nofollow noreferrer">Dr. Elephant</a>. It aimed at helping users tune their own jobs, to educate users and give them the tools to stop oversubscribing: <a href="https://github.com/linkedin/dr-elephant" rel="nofollow noreferrer">https://github.com/linkedin/dr-elephant</a>. It seems to have been quiet for a couple of years, but it might be worthwhile to look at the code to see what they measured, to help you make some educated judgements about oversubscription.</p>
<p>I don't have anything to do with <a href="https://www.pepperdata.com/" rel="nofollow noreferrer">PepperData</a>, but their tuning does use oversubscription to optimize the cluster. So it's definitely a recognized pattern. If you want a service provider to help you with optimizing, they might be a good team to talk to.</p>
<p>I would suggest that you just use a classic performance tuning strategy. Record your existing weekly metrics. Understand what's going on in your cluster. Make a change - bump everything by 10% and see if you get a boost in performance. Understand what's going on in your cluster. If it works and is stable, do it again the following week. Do that until you see an issue or stop seeing improvement. It takes time and careful recording of what's happening, but it's likely the only way to tune your cluster, as your cluster is likely special.</p>
| Matt Andruff |
<p>I'm translating a service manual from Linux to Windows command line, and running into some issues with escape characters. After looking at other entries here and general googling I haven't been able to find anything that works for whatever reason.</p>
<p>In this line:</p>
<blockquote>
<pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "demo-registry"}]}' -n testNamespace
</code></pre>
</blockquote>
<p>I'm unable to find a combination of <code>`</code>, <code>^</code>, or <code>\</code> that allows me to escape the double quotes. I was able to get it to work in the powershell command below though.</p>
<blockquote>
<pre><code>kubectl patch serviceaccount default -p "{\`"imagePullSecrets\`": [{\`"name\`": \`"demo-registry\`"}]}" -n testNamespace
</code></pre>
</blockquote>
| Nyyen8 | <p>Swapping the double and single quotes around the inputs allows the command to run without needing escape characters. The errors I had received were partly due to issues with my local environment.</p>
<pre><code>kubectl patch serviceaccount default -p "{'imagePullSecrets': [{'name': 'demo-registry'}]}" -n testNamespace
</code></pre>
| Nyyen8 |
<p>I wanted to use some AWS EBS volumes as a persistent storage for a deployment. I've configured the storage class and a PV, but I haven't been able to configure a Cloud provider. </p>
<p>The K8s <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">documentation</a> (as far as I understand) is for Kubernetes clusters running on a specific cloud provider, instead of an on-prem cluster using cloud resources. As the title says: Is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster? </p>
<p>If so, can you add a cloud provider to your existing cluster? (Everything I've found online suggests that you add it when running kubeadm init.)</p>
<p>Thank you!</p>
| Gaby | <p>You cannot use EBS storage in the same manner as you would when running on the cloud but you can use <a href="https://aws.amazon.com/storagegateway/features/" rel="nofollow noreferrer">AWS Storage Gateway</a> to store snapshots/backups of your volumes in cloud.</p>
<blockquote>
<p>AWS Storage Gateway is a hybrid cloud storage service that connects
your existing on-premises environments with the AWS Cloud</p>
</blockquote>
<p>The feature you are intrested in is called <strong>Volume Gateway</strong></p>
<blockquote>
<p>The Volume Gateway presents your applications block storage volumes
using the iSCSI protocol. Data written to these volumes can be
asynchronously backed up as point-in-time snapshots of your volumes,
and stored in the cloud as Amazon EBS snapshots.</p>
</blockquote>
<p>Unfortunately you might not be able to automate creation of volumes in a way you could when running directly on AWS so some things you might have to do manually.</p>
| Matt |
<p>I have a running k8s deployment, with one container.</p>
<p>I want to deploy 10 more containers, with a few differences in the deployment manifest (i.e command launched, container name, ...).</p>
<p>Rather than create 10 more .yml files with the whole deployment, I would prefer use templating. What can I do to achieve this ?</p>
<pre><code>---
apiVersion: v1
kind: CronJob
metadata:
  name: myname
  labels:
    app.kubernetes.io/name: myname
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: myname
        spec:
          serviceAccountName: myname
          containers:
          - name: myname
            image: 'mynameimage'
            imagePullPolicy: IfNotPresent
            command: ["/my/command/to/launch"]
          restartPolicy: OnFailure
</code></pre>
| Cowboy | <p>Using Helm, which is a templating engine for Kubernetes manifests, you can create your own template by following along.</p>
<blockquote>
<p>If you have never worked with <code>helm</code> you can check the <a href="https://helm.sh/docs/" rel="nofollow noreferrer">official docs</a></p>
<blockquote>
<p>In order for you to follow make sure you have helm already installed!</p>
</blockquote>
</blockquote>
<h3>- create a new chart:</h3>
<pre class="lang-sh prettyprint-override"><code>helm create cowboy-app
</code></pre>
<p>this will generate a new project for you.</p>
<h3>- <strong>DELETE EVERYTHING WITHIN THE <code>templates</code> DIR</strong></h3>
<h3>- <strong>REMOVE ALL <code>values.yaml</code> content</strong></h3>
<h3>- create a new file <code>deployment.yaml</code> in the <code>templates</code> directory and paste this:</h3>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  labels:
    chart: {{ .Values.appName }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
{{ toYaml .Values.images | indent 8 }}
</code></pre>
<h3>- in <code>values.yaml</code> paste this:</h3>
<pre class="lang-yaml prettyprint-override"><code>appName: cowboy-app
images:
- name: app-1
image: image-1
- name: app-2
image: image-2
- name: app-3
image: image-3
- name: app-4
image: image-4
- name: app-5
image: image-5
- name: app-6
image: image-6
- name: app-7
image: image-7
- name: app-8
image: image-8
- name: app-9
image: image-9
- name: app-10
image: image-10
</code></pre>
<p>So if you are familiar with Helm you can tell that <code>{{ toYaml .Values.images | indent 8 }}</code> in <strong>deployment.yaml</strong> refers to the data specified in <strong>values.yaml</strong> as YAML, and running <code>helm install release-name /path/to/chart</code> will generate and deploy a manifest file which looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: cowboy-app
labels:
chart: cowboy-app
spec:
selector:
matchLabels:
app: cowboy-app
replicas: 1
template:
metadata:
labels:
app: cowboy-app
spec:
containers:
- image: image-1
name: app-1
- image: image-2
name: app-2
- image: image-3
name: app-3
- image: image-4
name: app-4
- image: image-5
name: app-5
- image: image-6
name: app-6
- image: image-7
name: app-7
- image: image-8
name: app-8
- image: image-9
name: app-9
- image: image-10
name: app-10
</code></pre>
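<p>Before installing, you can also preview what Helm will render without touching the cluster (the chart path here is whatever directory <code>helm create</code> produced):</p>
<pre><code>helm template cowboy-app ./cowboy-app
helm install cowboy-app ./cowboy-app
</code></pre>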
| Affes Salem |
<p>Is there any way to access local docker images directly (without using 'docker save') with k3s?</p>
<p>Like minikube accesses local docker images after running this command</p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>A little bit of background.</p>
<p>I have set up a machine using Ubuntu 19.04 as 'master' and raspberry pi as 'worker' using k3s. Now, I want to use a local image to create a deployment on the worker node.</p>
<p><strong>Update</strong></p>
<p>Adding screenshot as said in the comment below.</p>
<p><a href="https://i.stack.imgur.com/R0g6N.png" rel="noreferrer">Screenshot for the image listings</a></p>
<p><a href="https://i.stack.imgur.com/R0g6N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/R0g6N.png" alt="enter image description here"></a></p>
| Naveed | <p>While this doesn't make all Docker images available, a useful work-around is to export local Docker images and import them to your <code>ctr</code>:</p>
<pre><code>docker save my/local-image:v1.2.3 | sudo k3s ctr images import -
</code></pre>
<p>This will make them available on-demand to your k3s cluster.
This is useful for users who cannot get <code>k3s server</code> to work with the <code>--docker</code> flag.</p>
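<p>To confirm the import worked, you can list the images containerd now knows about (the image name is the example from above):</p>
<pre><code>sudo k3s ctr images ls | grep my/local-image
</code></pre>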
| Thomas Anderson |
<p>Inside a namespace, I have created a pod whose spec includes memory limit and memory request parameters. Once it is up and running, I would like to know how I can get the memory utilization of the pod in order to figure out whether the memory utilization is within the specified limit or not. The "kubectl top" command returns a services-related error.</p>
| Vinodh Nagarajaiah | <p><code>kubectl top pod <pod-name> -n <fed-name> --containers</code></p>
<p>FYI, this is on v1.16.2</p>
| Umakant |
<p>The KEDA scaler does not scale with a ScaledObject whose trigger uses pod identity for authentication against a Service Bus queue.
I'm following <a href="https://github.com/kedacore/sample-dotnet-worker-servicebus-queue/blob/main/pod-identity.md" rel="nofollow noreferrer">this</a> KEDA service bus triggered scaling project.<br />
The scaling works fine with a connection string, but when I try to scale using pod identity for the KEDA scaler, the KEDA operator fails to get the Azure identity bound to it, and logs the following error message:</p>
<pre><code>github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).isScaledObjectActive
/workspace/pkg/scaling/scale_handler.go:228
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers
/workspace/pkg/scaling/scale_handler.go:211
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop
/workspace/pkg/scaling/scale_handler.go:145
2021-10-10T17:35:53.916Z ERROR azure_servicebus_scaler error {"error": "failed to refresh token, error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"}\n"}
</code></pre>
<p>Edited on 11/09/2021
I opened a GitHub issue at KEDA, and we did some troubleshooting. But it seems like an issue with AAD Pod Identity, as @Tom suggests. The AAD Pod Identity MIC pod gives logs like this:</p>
<pre><code>E1109 03:15:34.391759 1 mic.go:1111] failed to update user-assigned identities on node aks-agentpool-14229154-vmss (add [2], del [0], update[0]), error: failed to update identities for aks-agentpool-14229154-vmss in MC_Arun_democluster_westeurope, error: compute.VirtualMachineScaleSetsClient#Update: Failure sending request: StatusCode=0 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client 'fe0d7679-8477-48e3-ae7d-43e2a6fdb957' with object id 'fe0d7679-8477-48e3-ae7d-43e2a6fdb957' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/f3786c6b-8dca-417d-af3f-23929e8b4129/resourceGroups/MC_Arun_democluster_westeurope/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-14229154-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/f3786c6b-8dca-417d-af3f-23929e8b4129/resourcegroups/arun/providers/microsoft.managedidentity/userassignedidentities/autoscaler-id' or the linked scope(s) are invalid."
</code></pre>
<p>Any clues how to fix it?</p>
<p>My scaler objects' definition is as below:</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: trigger-auth-service-bus-orders
spec:
  podIdentity:
    provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-scaler
spec:
  scaleTargetRef:
    name: order-processor
  # minReplicaCount: 0 Change to define how many minimum replicas you want
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      namespace: demodemobus
      queueName: orders
      messageCount: '5'
    authenticationRef:
      name: trigger-auth-service-bus-orders
</code></pre>
<p>I'm deploying the Azure identity to the <code>keda</code> namespace where my KEDA deployment resides,
and I install KEDA with the following command to set the <code>pod identity binding</code> using Helm:</p>
<pre><code>helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=app-autoscaler --namespace keda
</code></pre>
<p><strong>Expected Behavior</strong>
The KEDA scaler should have worked fine with the assigned pod identity and access token to perform scaling</p>
<p><strong>Actual Behavior</strong>
The KEDA operator is not able to find the assigned Azure identity, so scaling fails</p>
<p><strong>Scaler Used</strong>
Azure Service Bus</p>
<p><strong>Steps to Reproduce the Problem</strong></p>
<ol>
<li>Create the azure identity and bindings for the KEDA</li>
<li>Install KEDA with the aadpodidentitybinding</li>
<li>Create the scaledobject and triggerauthentication using KEDA pod identity</li>
<li>The scaler fails to authenticate and scale</li>
</ol>
| iarunpaul | <p>First and foremost, I am using AKS with kubenet plugin.</p>
<p>By default
'AAD Pod Identity is disabled by default on Clusters with Kubenet starting from release v1.7.'</p>
<p>This is because Kubenet is vulnerable to ARP spoofing.
Please read it <a href="https://azure.github.io/aad-pod-identity/docs/configure/aad_pod_identity_on_kubenet/" rel="nofollow noreferrer">here</a>.</p>
<p>Even then, you can use a workaround to enable KEDA scaling on a Kubenet-powered AKS cluster. (The script holds good for other CNIs as well, except that you don't need to edit anything in the <code>aad-pod-identity</code> component's <code>nmi</code> DaemonSet definition YAML if it already runs well with your cluster's plugins.)</p>
<p>Below I'm adding an e2e script for the same.
Please visit the <a href="https://github.com/kedacore/keda/issues/2178" rel="nofollow noreferrer">github issue</a> for access to all the discussions.</p>
<pre><code># Define aks name and resource group
$aksResourceGroup = "K8sScalingDemo"
$aksName = "K8sScalingDemo"
# Create resource group
az group create -n $aksResourceGroup -l centralindia
# Create the aks cluster with default kubenet plugin
az aks create -n $aksName -g $aksResourceGroup
# Resourcegroup where the aks resources will be deployed
$resourceGroup = "$(az aks show -g $aksResourceGroup -n $aksName --query nodeResourceGroup -otsv)"
# Set the kubectl context to the newly created aks cluster
az aks get-credentials -n $aksName -g $aksResourceGroup
# Install AAD Pod Identity into the aad-pod-identity namespace using helm
kubectl create namespace aad-pod-identity
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
helm install aad-pod-identity aad-pod-identity/aad-pod-identity --namespace aad-pod-identity
# Check the status of installation
kubectl --namespace=aad-pod-identity get pods -l "app.kubernetes.io/component=mic"
kubectl --namespace=aad-pod-identity get pods -l "app.kubernetes.io/component=nmi"
# the nmi components will Crashloop, ignore them for now. We will make them right later
# Get Resourcegroup Id of our $ResourceGroup
$resourceGroup_ResourceId = az group show --name $resourceGroup --query id -otsv
# Get the aks cluster kubeletidentity client id
$aad_pod_identity_clientid = az aks show -g $aksResourceGroup -n $aksName --query identityProfile.kubeletidentity.clientId -otsv
# Assign required roles for cluster over the resourcegroup
az role assignment create --role "Managed Identity Operator" --assignee $aad_pod_identity_clientid --scope $resourceGroup_ResourceId
az role assignment create --role "Virtual Machine Contributor" --assignee $aad_pod_identity_clientid --scope $resourceGroup_ResourceId
# Create autoscaler azure identity and get client id and resource id of the autoscaler identity
$autoScaleridentityName = "autoscaler-aad-identity"
az identity create --name $autoScaleridentityName --resource-group $resourceGroup
$autoscaler_aad_identity_clientId = az identity show --name $autoScaleridentityName --resource-group $resourceGroup --query clientId -otsv
$autoscaler_aad_identity_resourceId = az identity show --name $autoScaleridentityName --resource-group $resourceGroup --query id -otsv
# Create the app azure identity and get client id and resource id of the app identity
$appIdentityName = "app-aad-identity"
az identity create --name app-aad-identity --resource-group $resourceGroup
$app_aad_identity_clientId = az identity show --name $appIdentityName --resource-group $resourceGroup --query clientId -otsv
$app_aad_identity_resourceId = az identity show --name $appIdentityName --resource-group $resourceGroup --query id -otsv
# Create service bus and queue
$servicebus = 'svcbusdemo'
az servicebus namespace create --name $servicebus --resource-group $resourceGroup --sku basic
$servicebus_namespace_resourceId = az servicebus namespace show --name $servicebus --resource-group $resourceGroup --query id -otsv
az servicebus queue create --namespace-name $servicebus --name orders --resource-group $resourceGroup
$servicebus_queue_resourceId = az servicebus queue show --namespace-name $servicebus --name orders --resource-group $resourceGroup --query id -otsv
# Assign Service Bus Data Receiver role to the app identity created
az role assignment create --role 'Azure Service Bus Data Receiver' --assignee $app_aad_identity_clientId --scope $servicebus_queue_resourceId
# Create a namespace for order app deployment
kubectl create namespace keda-dotnet-sample
# Create a yaml deployment configuration variable
$app_with_identity_yaml= @"
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
name: $appIdentityName
annotations:
aadpodidentity.k8s.io/Behavior: namespaced
spec:
type: 0 # 0 means User-assigned MSI
resourceID: $app_aad_identity_resourceId
clientID: $app_aad_identity_clientId
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
name: $appIdentityName-binding
spec:
azureIdentity: $appIdentityName
selector: order-processor
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-processor
labels:
app: order-processor
spec:
selector:
matchLabels:
app: order-processor
template:
metadata:
labels:
app: order-processor
aadpodidbinding: order-processor
spec:
containers:
- name: order-processor
image: ghcr.io/kedacore/sample-dotnet-worker-servicebus-queue:latest
env:
- name: KEDA_SERVICEBUS_AUTH_MODE
value: ManagedIdentity
- name: KEDA_SERVICEBUS_HOST_NAME
value: $servicebus.servicebus.windows.net
- name: KEDA_SERVICEBUS_QUEUE_NAME
value: orders
- name: KEDA_SERVICEBUS_IDENTITY_USERASSIGNEDID
value: $app_aad_identity_clientId
"@
# Create the app deployment with identity bindings using kubectl apply
$app_with_identity_yaml | kubectl apply --namespace keda-dotnet-sample -f -
# Now the order processor app works with the pod identity and
# processes the queues
# You can refer the [project ](https://github.com/kedacore/sample-dotnet-worker-servicebus-queue/blob/main/pod-identity.md) for that.
# Now start installation of KEDA in namespace keda-system
kubectl create namespace keda-system
# Create a pod identity and binding for autoscaler azure identity
$autoscaler_yaml =@"
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
name: $autoScaleridentityName
spec:
type: 0 # 0 means User-assigned MSI
resourceID: $autoscaler_aad_identity_resourceId
clientID: $autoscaler_aad_identity_clientId
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
name: $autoScaleridentityName-binding
spec:
azureIdentity: $autoScaleridentityName
selector: $autoScaleridentityName
"@
$autoscaler_yaml | kubectl apply --namespace keda-system -f -
# Install KEDA using helm
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=autoscaler-aad-identity --namespace keda-system
# Assign Service Bus Data Owner role to keda autoscaler identity
az role assignment create --role 'Azure Service Bus Data Owner' --assignee $autoscaler_aad_identity_clientId --scope $servicebus_namespace_resourceId
# Apply scaled object definition and trigger authentication provider as `azure`
$aap_autoscaling_yaml = @"
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
name: trigger-auth-service-bus-orders
spec:
podIdentity:
provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: order-scaler
spec:
scaleTargetRef:
name: order-processor
# minReplicaCount: 0 Change to define how many minimum replicas you want
maxReplicaCount: 10
triggers:
- type: azure-servicebus
metadata:
namespace: $servicebus
queueName: orders
messageCount: '5'
authenticationRef:
name: trigger-auth-service-bus-orders
"@
$aap_autoscaling_yaml | kubectl apply --namespace keda-dotnet-sample -f -
# Now KEDA is getting a 401 unauthorized error because the AAD Pod Identity component `nmi` is not running on the system
# To fix it, edit the daemonset for the `nmi` component and
# add the container arg `--allow-network-plugin-kubenet=true` by editing `daemonset.apps/aad-pod-identity-nmi`
kubectl edit daemonset.apps/aad-pod-identity-nmi -n aad-pod-identity
# the container args section should look like this after editing:
    spec:
      containers:
      - args:
        - --node=$(NODE_NAME)
        - --http-probe-port=8085
        - --enableScaleFeatures=true
        - --metadata-header-required=true
        - --operation-mode=standard
        - --kubelet-config=/etc/default/kubelet
        - --allow-network-plugin-kubenet=true
        env:
# Now the KEDA is authenticated by aad-pod-identity metadata endpoint and the orderapp should scale up
# with the queue counts
# If the order app still falls back to errors please delete and redeploy it.
# And that's it you just scaled your app up using KEDA on Kubenet AKS cluster.
</code></pre>
<h5>Note: Read <a href="https://azure.github.io/aad-pod-identity/docs/configure/aad_pod_identity_on_kubenet/" rel="nofollow noreferrer">this instruction</a> before you run AAD Identity On a Kubenet powered AKS.</h5>
| iarunpaul |
<p>I have a running k8s deployment, with one container.</p>
<p>I want to deploy 10 more containers, with a few differences in the deployment manifest (i.e command launched, container name, ...).</p>
<p>Rather than create 10 more .yml files with the whole deployment, I would prefer use templating. What can I do to achieve this ?</p>
<pre><code>---
apiVersion: v1
kind: CronJob
metadata:
  name: myname
  labels:
    app.kubernetes.io/name: myname
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: myname
        spec:
          serviceAccountName: myname
          containers:
          - name: myname
            image: 'mynameimage'
            imagePullPolicy: IfNotPresent
            command: ["/my/command/to/launch"]
          restartPolicy: OnFailure
</code></pre>
| Cowboy | <p>You can either specify a set of containers to be created in a single Deployment, like this:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: container1
image: your-image
- name: container2
image: your-image
- name: container3
image: your-image
</code></pre>
<p>and you can repeat that container definition as many times as you want.</p>
<p>The other way around is to use a templating engine like helm/kustomize as mentioned above.</p>
| MOHAMED AMINE RAJAH |
<p>Is it possible to have the HPA scale based on the number of available running pods?</p>
<p>I have set up a readiness probe that cuts out a pod based on its internal state (idle, working, busy). When a pod is 'busy', it no longer receives new requests, but its CPU and memory demands are low.</p>
<p>I don't want to scale based on cpu, mem, or other metrics.</p>
<p>Seeing as the readiness probe removes it from active service, can I scale based on the average number of active (not busy) pods? When that number drops below a certain point more pods are scaled.</p>
<p>TIA for any suggestions.</p>
| J.E. | <p>You can create a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">custom metric</a>, e.g. the number of <code>busy-pods</code>, for the HPA.
That is, the application should emit a metric value when it is busy, and you use that metric to create the HorizontalPodAutoscaler.</p>
<p>Something like this:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: custom-metric-sd
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metricName: busy-pods
      targetAverageValue: 4
</code></pre>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">Here</a> is another reference for HPA with custom metrics.</p>
| wpnpeiris |
<p>I restarted my Mac (macOS High Sierra) and now Visual Studio Code can't find the kubectl binary even though it is installed via brew.</p>
<pre><code>$ which kubectl
/usr/local/bin/kubectl
</code></pre>
<p><a href="https://i.stack.imgur.com/fKTSg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fKTSg.png" alt="enter image description here"></a></p>
<p>The weird thing is that it could find kubectl before I restarted my laptop.</p>
| user674669 | <p>Adding the below config to the VS Code settings.json may resolve this:</p>
<pre><code>"vs-kubernetes": {
"vscode-kubernetes.minikube-path.mac": "/path/to/minikube",
"vscode-kubernetes.kubectl-path.mac": "/path/to/kubectl",
"vscode-kubernetes.helm-path.mac" : "/path/to/helm"
}
</code></pre>
| psadi |
<p>I was recently trying to create a docker container and connect it with my SQLDeveloper but I started facing some strange issues.
I downloaded the Docker image using the pull command below:</p>
<pre><code>docker pull store/oracle/database-enterprise:12.2.0.1-slim
</code></pre>
<p>then I started the container from my docker-desktop using port 1521. The container started with a warning.</p>
<p><img src="https://i.stack.imgur.com/YKUXA.png" alt="enter image description here" /></p>
<p>terminal message:</p>
<pre><code>docker run -d -it -p 1521:1521 --name oracle store/oracle/database-enterprise:12.2.0.1-slim
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
5ea14c118397ce7ef2880786ac1fac061e8c92f9b09070edffe365653dcc03af
</code></pre>
<p>Now when I try to connect to the db using the below command:</p>
<pre><code>docker exec -it 5ea14c118397 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"
SQL> connect sys as sysdba;
Enter password:
ERROR:
ORA-12547: TNS:lost contact
</code></pre>
<p>it shows this message. The password I use is Oradoc_db1.</p>
<p>Now after seeing some suggestions I tried using the below command for connecting to sqlplus:</p>
<pre><code> docker exec -it f888fa9d0247 bash -c "source /home/oracle/.bashrc; sqlplus / as sysdba"
SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 6 06:15:58 2021
Copyright (c) 1982, 2016, Oracle. All rights reserved.
ERROR:
ORA-12547: TNS:lost contact
</code></pre>
<p>I also tried changing the permissions of the oracle binary in $ORACLE_HOME to grant execute permission as well, but it didn't work.</p>
<p>Please help me out as I am stuck and don't know what to do.</p>
| harshit srivastava | <p>There are two issues here:</p>
<ol>
<li>Oracle Database is not supported on ARM processors, only Intel. See here: <a href="https://github.com/oracle/docker-images/issues/1814" rel="nofollow noreferrer">https://github.com/oracle/docker-images/issues/1814</a></li>
<li>Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: <a href="https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance" rel="nofollow noreferrer">https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance</a></li>
</ol>
<blockquote>
<p>Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: <strong>Oracle Support for Database Running on Docker (Doc ID 2216342.1)</strong></p>
</blockquote>
<p>The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.</p>
<p>Linux being what it is (flexible), lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and it's always <em>when</em>, not <em>if</em> in IT) things don't work as expected. You might not even be able to <em>tell</em> when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the <em>supported</em> hardware architecture and operating system version. Anything else is a complete gamble.</p>
| pmdba |
<p>I'm currently navigating the intricacies of Horizontal Pod Autoscaler (HPA) behavior in the context of pods housing a sidecar container. Here's the context:</p>
<p>I've set up a pod serving images, paired with a sidecar container running nginx for caching. Both containers have their own resource requests defined. While the system is performing well, I'm seeking clarity on the HPA's scaling decision-making process.</p>
<p>Currently, the HPA triggers scaling when CPU request usage hits 80%. However, I'm unsure whether the HPA evaluates metrics for each container independently or if it considers both containers' metrics collectively.</p>
<p>In essence, does the HPA analyze metrics for individual containers separately? If so, how does it prioritize their evaluation? Alternatively, does it treat both containers as a single unit when assessing CPU request consumption?</p>
<p>I'd appreciate any insights into the HPA's metric evaluation approach in this specific setup. Thank you for your help!</p>
| pwoltschk | <p>Per the official Kubernetes <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>The HorizontalPodAutoscaler API also supports a container metric source
where the HPA can track the resource usage of individual containers
across a set of Pods, in order to scale the target resource. This lets
you configure scaling thresholds for the containers that matter most
in a particular Pod. For example, if you have a web application and a
logging sidecar, you can scale based on the resource use of the web
application, ignoring the sidecar container and its resource use.</p>
</blockquote>
<blockquote>
<p>If you revise the target resource to have a new Pod specification with
a different set of containers, you should revise the HPA spec if that
newly added container should also be used for scaling. If the
specified container in the metric source is not present or only
present in a subset of the pods then those pods are ignored and the
recommendation is recalculated.</p>
</blockquote>
<p>To use container resources for autoscaling, define a metric source as follows. In this example the HPA controller scales the target such that the average utilization of the CPU in the <strong>application container</strong> of all the pods is 60%:</p>
<pre><code>type: ContainerResource
containerResource:
name: cpu
container: application
target:
type: Utilization
averageUtilization: 60
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">Additional info with algorithm used by HPA</a></p>
| jmvcollaborator |
<p>Kubernetes pods are not able to update Debian-based package repositories.</p>
<p>I have set up the k8s cluster below steps.</p>
<p>Using Kubernetes official links</p>
<ol>
<li><p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/</a></p>
</li>
</ol>
<pre><code>
sudo apt update
sudo apt install docker.io
sudo apt-get update
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
## Just in the master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<pre><code># Deployed below nginx specs
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type: ClusterIP
selector:
app: nginx
ports:
- port: 8080
targetPort: 80
name: nginx-http
</code></pre>
<pre><code>
kubectl apply -f nginx.yaml
kubectl exec -it nginx-56bcb5bb6c-ts7l2 -- bash
After running update from inside pod, throws below errors.
root@nginx-56bcb5bb6c-ts7l2:/# apt update
Get:1 http://security.debian.org/debian-security bullseye-security InRelease
Get:2 http://deb.debian.org/debian bullseye InRelease
Get:3 http://deb.debian.org/debian bullseye-updates InRelease
Err:1 http://security.debian.org/debian-security bullseye-security InRelease
Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Err:2 http://deb.debian.org/debian bullseye InRelease
Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Err:3 http://deb.debian.org/debian bullseye-updates InRelease
Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Reading package lists... Done
N: See apt-secure(8) manpage for repository creation and user configuration details.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
E: The repository 'http://security.debian.org/debian-security bullseye-security InRelease' is no longer signed.
E: Failed to fetch http://security.debian.org/debian-security/dists/bullseye-security/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: Failed to fetch http://deb.debian.org/debian/dists/bullseye/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://deb.debian.org/debian bullseye InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://deb.debian.org/debian/dists/bullseye-updates/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://deb.debian.org/debian bullseye-updates InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root@nginx-56bcb5bb6c-ts7l2:/#
</code></pre>
<pre><code>Below is the version of the Kubernetes cluster.
osboxes@k-master:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:15:38Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
osboxes@k-master:~$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.2
</code></pre>
<p>Why are these errors coming from inside the pod, while an apt update from the host node runs fine?</p>
| devops-admin | <p>It seems the issue is with your network (note the <code>does the network require authentication?</code> hint in the error).</p>
<p>Please check it using another network, as I am able to run the <code>apt update</code> command from within the container:</p>
<pre><code>$ kubectl run mginx --rm -i --tty --restart=Never --image=nginx -- /bin/sh
If you don't see a command prompt, try pressing enter.
#
#
#
# id
uid=0(root) gid=0(root) groups=0(root)
# apt update
Get:1 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
Get:4 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [154 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 Packages [8182 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2592 B]
Fetched 8539 kB in 1s (7158 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
1 package can be upgraded. Run 'apt list --upgradable' to see it.
#
</code></pre>
| Dhiraj Bansal |
<p>I'm a bit troubled; I want to create an ingress with multiple paths.</p>
<p>Here is my /templates/ingress.yaml:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $appName := .Values.appName -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Values.appName }}-ingress
labels:
app: {{ .Values.appName }}
chart: {{ template "chart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $appName }}-service
servicePort: 80
{{- end }}
{{- end }}
</code></pre>
<p>and here is my values.yaml file </p>
<pre><code>appName: vsemPrivet
replicaCount: 1
image:
repository: kakoito.domen.kg
tag: dev-56739-272faaf
pullPolicy: Always
imagePullSecretName: regcred
nodeSelector:
project: vazhni-project
service: vsem-privet
name:
type: ClusterIP
protocol: TCP
targetPort: 8080
## Configure ingress resourse
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx-prod-01"
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS, HEAD"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "*"
nginx.ingress.kubernetes.io/cors-max-age: "3600"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
hosts:
- mirtebe4elovek.com
path: /letter
hosts:
- mirtebe4elovek.com
path: /swagger-ui
hosts:
- mirtebe4elovek.com
path: /webjars
tls:
- secretName: ssl-secret
hosts:
- qa-ibank.anthill.fortebank.com
</code></pre>
<p>So in my scenario I want to have 3 different paths, but when I run helm install and then kubectl describe ing my-ing I get the following:</p>
<pre><code>Name: service-core-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
ssl-secret terminates mirtebe4elovek.com
Rules:
Host Path Backends
---- ---- --------
mirtebe4elovek.com
/webjars my-service:80 (<none>)
Annotations:
kubernetes.io/ingress.class: nginxnginx
nginx.ingress.kubernetes.io/cors-allow-methods: GET, POST, PUT, DELETE, OPTIONS, HEAD
nginx.ingress.kubernetes.io/cors-max-age: 3600
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/cors-allow-credentials: true
nginx.ingress.kubernetes.io/cors-allow-headers: *
nginx.ingress.kubernetes.io/cors-allow-origin: *
nginx.ingress.kubernetes.io/enable-cors: true
nginx.ingress.kubernetes.io/from-to-www-redirect: true
Events: <none>
</code></pre>
<p>So as you can see I have only 1 path, /webjars, but where are the other 2 (/letter and /swagger-ui) that I described in my values.yaml file?
How can I fix this?</p>
| Joom187 | <p>In Helm, the <code>range</code> operator is used to iterate over a collection.
It looks like you need multiple paths under a single host, <code>mirtebe4elovek.com</code>. Note that in your current values.yaml the <code>hosts</code> and <code>path</code> keys are repeated under <code>ingress</code>, so later entries override earlier ones and only the last path (<code>/webjars</code>) survives.</p>
<p>You may modify the <code>ingress.yaml</code> as follows:</p>
<pre><code>rules:
- host: {{ .Values.ingress.host }}
http:
paths:
{{- range .Values.ingress.paths }}
- path: {{ . }}
backend:
serviceName: {{ $appName }}-service
servicePort: 80
{{- end }}
</code></pre>
<p>And the values.yaml file as:</p>
<pre><code>ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx-prod-01"
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS, HEAD"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "*"
nginx.ingress.kubernetes.io/cors-max-age: "3600"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
host: "mirtebe4elovek.com"
paths:
- "/letter"
- "/swagger-ui"
- "/webjars"
</code></pre>
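<p>A quick way to verify the rendered Ingress before installing is <code>helm template</code> (the chart path here is a placeholder):</p>
<pre><code>helm template ./my-chart -f values.yaml
</code></pre>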
| wpnpeiris |
<p>I have a statefulset with 5 replica. This means I have pods with name pod-0 .. pod-4.</p>
<p>I want to create 2 service.</p>
<ol>
<li><p>A service that only routes the request to pod-0 ( this is my edit server ) : I know I can achieve this by "statefulset.kubernetes.io/pod-name=pod-0" in the service selector. This is fine.</p>
</li>
<li><p>A service that routes the request to all remaining nodes excluding pod-0( even if the application scales up and add more instances those new instances needs to be mapped to this service) : I am unable to achieve this.</p>
</li>
</ol>
<p>Any idea on how to achieve the 2nd service ??</p>
<p>Since it is deployed via 1 stateful manifest file , all pods are having same label .</p>
| Raj | <p>Thanks for the info. I followed this approach:</p>
<ol>
<li>Created a Role + RoleBinding + ServiceAccount which has rights to update the label (see the RBAC sketch below).</li>
<li>At the end of the script that I execute via the Dockerfile [CMD], I installed kubectl and then executed
<em>kubectl label pod ${HOSTNAME} pod-type=${POD_TYPE} --server=kubernetes.default --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt -n ${POD_NAMESPACE}</em></li>
</ol>
<p>Above env variables I am passing in the statefulsets. It served my purpose.</p>
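<p>For completeness, a minimal sketch of the RBAC objects from step 1 (the <code>pod-labeler</code> name and <code>default</code> namespace are assumptions; only the verbs needed to read and patch pod labels are granted):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-labeler
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-labeler
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-labeler
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-labeler
  namespace: default
roleRef:
  kind: Role
  name: pod-labeler
  apiGroup: rbac.authorization.k8s.io
</code></pre>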
| Raj |
<p>I have a single-master kubeadm cluster setup with 3 nodes (2 workers).
I can access the kubernetes-dashboard through kubectl proxy on my local computer until I enable my firewall.
My firewall (ufw) config is:
<strong>master-node</strong></p>
<pre><code>Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
6443/tcp ALLOW Anywhere
2379:2380/tcp ALLOW Anywhere
10250/tcp ALLOW Anywhere
10251/tcp ALLOW Anywhere
10252/tcp ALLOW Anywhere
10255/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
8443/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
6443/tcp (v6) ALLOW Anywhere (v6)
2379:2380/tcp (v6) ALLOW Anywhere (v6)
10250/tcp (v6) ALLOW Anywhere (v6)
10251/tcp (v6) ALLOW Anywhere (v6)
10252/tcp (v6) ALLOW Anywhere (v6)
10255/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
8443/tcp (v6) ALLOW Anywhere (v6)
</code></pre>
<p><strong>worker-nodes</strong></p>
<pre><code>Status: active
To Action From
-- ------ ----
10250/tcp ALLOW Anywhere
10255/tcp ALLOW Anywhere
30000:32767/tcp ALLOW Anywhere
22/tcp ALLOW Anywhere
10250/tcp (v6) ALLOW Anywhere (v6)
10255/tcp (v6) ALLOW Anywhere (v6)
30000:32767/tcp (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
</code></pre>
<p>Is there a port I forgot to allow? or could it come from something else?</p>
<p>Thanks!</p>
| Antoine Coulon | <p>In order to access the dashboard, you need to have a Kubernetes <code>Service</code> expose the dashboard. Assuming you have it installed using the instructions <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">here</a>, you can patch the service to expose the port as a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>.</p>
<p><code>kubectl patch service/kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' --type=merge</code></p>
<p>Then run this command which will return the NodePort number:</p>
<p><code>kubectl get service/kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'</code></p>
<p>Then update your firewall to open that port on one or all of your workers.</p>
<p>Then hit that port on any worker: <code>https://[WorkerIP]:[NodePort]</code></p>
| Tyzbit |
<p>I have the following Dockerfile:</p>
<pre class="lang-sh prettyprint-override"><code>FROM python:3-alpine
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install -qr requirements.txt
COPY target-watch.py .
CMD ["python3", "./target-watch.py"]
</code></pre>
<p>If I deploy this to a Kubernetes cluster the build goes fine, but I get an error in the Kubernetes logs. To verify my image I run the following command:</p>
<pre class="lang-sh prettyprint-override"><code>docker run --rm -it --entrypoint /bin/bash docker-conveyor.xxx.com/myorg/my_cron_jobs:2021.12.08_03.51_abcdef
</code></pre>
<p>Which gives me this response:</p>
<pre class="lang-none prettyprint-override"><code>docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
</code></pre>
<p>How can this be fixed? I assume a shell is missing in my image. How do I have to change my Dockerfile to make it work without any errors?</p>
| Test Me | <p>Your container image doesn't have bash so you should use /bin/sh instead of /bin/bash.</p>
<pre><code>docker run --rm -it --entrypoint /bin/sh docker-conveyor.xxx.com/myorg/my_cron_jobs:2021.12.08_03.51_abcdef
</code></pre>
<p>Alpine docker image doesn't have bash installed by default. You will need to add the following commands to get bash:</p>
<pre><code>RUN apk update && apk add bash
</code></pre>
<p>If you're using Alpine 3.3+ then you can just do</p>
<pre><code>RUN apk add --no-cache bash
</code></pre>
| Hamid Ostadvali |
<p>I have a local Kubernetes cluster based on <a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8s</a> running on an Ubuntu 18.04 machine.</p>
<p><strong>What I want to achieve:</strong> Generally I want to expose my applications to DNS names and test them locally.</p>
<p><strong>My setup:</strong> </p>
<p>I created the following test deployment </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-app
labels:
app: hello-app
tier: backend
version: v1
spec:
selector:
matchLabels:
app: hello-app
replicas: 2
template:
metadata:
labels:
app: hello-app
spec:
containers:
- name: hello-app
image: localhost:5000/a-local-hello-image
ports:
- containerPort: 3000
</code></pre>
<p>I added the following service descriptor:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-app
spec:
selector:
app: hello-app
ports:
- protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>Now I want to see my app available, let's say, at <code>http://hello.someurl.com:3000</code>.</p>
<p><strong>Question:</strong> What do I need to setup in addition to my current configuration to map my application to a DNS name locally?</p>
<p><strong>Note:</strong> I have read <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">the documentation</a> which unfortunately didn't help. I also <a href="https://microk8s.io/docs/addon-dns" rel="nofollow noreferrer">enabled DNS addon</a> on my cluster. </p>
<p>I would appreciate any help, any directions on how to move forward.</p>
| Sasha Shpota | <p>The easiest way to achieve what you want would be using</p>
<pre><code>kubectl port-forward service/hello-app 3000:3000
</code></pre>
<p>and appending following entry to <code>/etc/hosts</code> file</p>
<pre><code>127.0.0.1 hello.someurl.com
</code></pre>
<p>Then you can just open your browser and go to <code>http://hello.someurl.com:3000</code></p>
| Matt |
<p>I am getting this error in the logs:
<code>[error] 117#117: *16706 upstream timed out (110: Operation timed out) while reading response header from upstream</code>. I have tried every possible way to find where this exact 60s timeout comes from.</p>
<p>I can add more detail on how I produce this error if needed. I don't see any timeout when I run the dotnet API (dockerized) locally.
That API runs for more than 5 minutes, but here in the AKS cluster it times out at exactly 60s.</p>
<p>So I am using these settings in my ConfigMap (for the nginx ingress controller). I have checked by removing and adding them one by one, but there is no change in that timeout.</p>
<pre><code> client-header-timeout: "7200"
keep-alive: "300"
keep-alive-requests: "100000"
keepalive-timeout: "300"
proxy-connect-timeout: "7200"
proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
upstream-keepalive-requests: "100000"
upstream-keepalive-timeout: "7200"
</code></pre>
<p>And I have also tried adding these annotations to my ingress resource/rule for that microservice:</p>
<pre><code>nginx.ingress.kubernetes.io/client-body-timeout: "7200"
nginx.ingress.kubernetes.io/client-header-timeout: "7200"
nginx.ingress.kubernetes.io/client-max-body-size: 5000m
nginx.ingress.kubernetes.io/keep-alive: "300"
nginx.ingress.kubernetes.io/keepalive-timeout: "300"
nginx.ingress.kubernetes.io/large-client-header-buffers: 64 128k
nginx.ingress.kubernetes.io/proxy-body-size: 5000m
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/send_timeout: "7200"
</code></pre>
<p>Nginx ingress controller version:</p>
<pre><code> Release: v1.0.5
Build: 7ce96cbcf668f94a0d1ee0a674e96002948bff6f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
</code></pre>
<p>We are using an Azure Kubernetes cluster, and CoreDNS resolves all the API URLs from an Active Directory which is deployed on an Azure Windows virtual machine.</p>
<p>The backend API is in dotnet core <code>(sdk: 6.0.400 and ASP.NET Core Runtime 6.0.8)</code> (all the keepalive and request-timeout settings defined in the code have already been tested).</p>
| basit khan | <p>Found the problem. Maybe I have missed something, but it seems these</p>
<pre><code>proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
</code></pre>
<p>settings don't affect the timeouts for the backend gRPC communication.
I had to add a "server-snippet" to set these:</p>
<pre><code>grpc_read_timeout 120s; grpc_send_timeout 120s; client_body_timeout 120s;
</code></pre>
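<p>For reference, a minimal sketch of how those directives can be attached through an ingress annotation (this assumes your ingress-nginx installation allows snippet annotations):</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
  grpc_read_timeout 120s;
  grpc_send_timeout 120s;
  client_body_timeout 120s;
</code></pre>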
| basit khan |
<p>I have a workflow where a persistent volume is created; however, the volume is not removed after the job has finished successfully.</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: some_volume
spec:
accessModes: ['ReadWriteMany']
resources:
requests:
storage: 2Gi
</code></pre>
<p>I have been looking for some policy that can be set to remove the volume after the workflow has finished successfully; however, I could not find anything.
Someone suggested creating a cronjob to remove the volume. But is there an easier way to define some policies to remove a persistent volume claim after the workflow has finished successfully?</p>
| Marc | <p>The persistent volume will be deleted by setting podGC and volumeClaimGC:</p>
<pre><code>spec:
entrypoint: main
volumeClaimGC:
strategy: OnWorkflowCompletion
podGC:
strategy: OnPodSuccess
volumeClaimTemplates:
- metadata:
name: test-volume
spec:
accessModes: ['ReadWriteMany']
resources:
requests:
storage: 1Gi
</code></pre>
| Marc |
<p>I am working on a microservice application and I am unable to connect my React to my backend api pod.</p>
<p><em>The request will be internal as I am using ServerSideRendering</em>, so when the page loads for the first time, the client pod connects directly to the backend pod. <strong>I am using ingress-nginx to connect them internally as well.</strong></p>
<blockquote>
<p><strong>Endpoint(from React pod --> Express pod):</strong></p>
</blockquote>
<pre><code>http://ingress-nginx-controller.ingress-nginx.svc.cluster.local
</code></pre>
<blockquote>
<p><strong>Ingress details:</strong></p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.245.81.11 149.69.37.110 80:31702/TCP,443:31028/TCP 2d1h
</code></pre>
<blockquote>
<p><strong>Ingress-Config:</strong></p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: cultor.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
- path: /?(.*)
backend:
serviceName: client-srv
servicePort: 3000
</code></pre>
<blockquote>
<p><strong>Ingress log:</strong></p>
</blockquote>
<pre><code>[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
</code></pre>
<p>Also, I am unable to ping <strong>ingress-nginx-controller.ingress-nginx.svc.cluster.local</strong> from <strong>inside of client pod</strong>.</p>
<hr />
<blockquote>
<p><strong>EXTRA LOGS</strong></p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get ns
NAME STATUS AGE
default Active 2d3h
ingress-nginx Active 2d1h
kube-node-lease Active 2d3h
kube-public Active 2d3h
kube-system Active 2d3h
#####
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-mongo-srv ClusterIP 10.245.155.193 <none> 27017/TCP 6h8m
auth-srv ClusterIP 10.245.1.179 <none> 3000/TCP 6h8m
client-srv ClusterIP 10.245.100.11 <none> 3000/TCP 6h8m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d3h
</code></pre>
<hr />
<blockquote>
<p><strong>UPDATE</strong>:
Ingress logs:</p>
</blockquote>
<pre><code>[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
host: cultor.dev
x-request-id: 5cfd15996dc8481114b39a16f0be5f06
x-real-ip: 45.248.29.8
x-forwarded-for: 45.248.29.8
x-forwarded-proto: https
x-forwarded-host: cultor.dev
x-forwarded-port: 443
x-scheme: https
cache-control: max-age=0
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36
sec-fetch-site: none
sec-fetch-mode: navigate
sec-fetch-user: ?1
sec-fetch-dest: document
accept-encoding: gzip, deflate, br
accept-language: en-US,en-IN;q=0.9,en;q=0.8,la;q=0.7
</code></pre>
| Karan Kumar | <p>This is a bug when using the ingress load balancer with DigitalOcean as a proxy to <em>connect pods internally via the load balancer</em>:</p>
<blockquote>
<p><strong>Workaround</strong>:</p>
</blockquote>
<p>DNS record for a custom hostname (at a provider of your choice) must be set up that points to the external IP address of the load-balancer. Afterwards, digitalocean-cloud-controller-manager must be instructed to return the custom hostname (instead of the external LB IP address) in the service ingress status field status.Hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation. Clients may then connect to the hostname to reach the load-balancer from inside the cluster.</p>
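<p>As a sketch (assuming <code>cultor.dev</code> already has a DNS record pointing at the load balancer's external IP, and that the selector/ports below mirror your existing ingress-nginx controller Service), the relevant part is the single annotation:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-hostname: "cultor.dev"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
</code></pre>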
<p><a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/README.md#accessing-pods-over-a-managed-load-balancer-from-inside-the-cluster" rel="nofollow noreferrer">Full official explanation of this bug</a></p>
| Karan Kumar |
<p>I have a service running in a docker container in Kubernetes. It has https/tls hand off by an ingress and then into the container via http. My issue is when the webApp that's running in the container returns a redirect or a request for the resource, it is returning http endpoints not https. </p>
<p>So for example: </p>
<p>Request: <a href="https://my.service" rel="nofollow noreferrer">https://my.service</a> </p>
<p>Returns redirect: <a href="http://my.service/login.html" rel="nofollow noreferrer">http://my.service/login.html</a></p>
<p>Is there any way around this? </p>
<p>Thanks for your help.</p>
| Alizkat | <p>I see your application is returning redirects to <code>http</code> and you are trying to rewrite these <code>http</code> to <code>https</code> in responses.</p>
<p>When using <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">kubernetes nginx ingress controller</a> adding these two annotations to your ingress object will solve your problem:</p>
<pre><code>nginx.ingress.kubernetes.io/proxy-redirect-from: http
nginx.ingress.kubernetes.io/proxy-redirect-to: https
</code></pre>
<p>More details can be found in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-redirect" rel="nofollow noreferrer">ingress controller annotation descriptions</a> and in <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect" rel="nofollow noreferrer">official nginx documentation</a></p>
<p>Let me know it it helped.</p>
| Matt |
<p>I have a problem with injecting a Kubernetes Secret's value into a Pod env.
I have the following <code>pg-secrets.yml</code>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: pg-secrets
type: Opaque
data:
POSTGRES_USER: cG9zdGdyZXMK
POSTGRES_PASSWORD: cGFzc3dvcmQK
# postgres & password
</code></pre>
<p>And then I inject POSTGRES_PASSWORD from it to <code>application-deployment.yml</code> ENV:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
containers:
- name: realo
image: abialiauski/realo
imagePullPolicy: Always
ports:
- containerPort: 8080
env:
- name: PG_USERNAME
valueFrom:
configMapKeyRef:
name: realo-configmap
key: PG_USERNAME
- name: PG_PASSWORD
valueFrom:
secretKeyRef:
name: pg-secrets
key: POSTGRES_PASSWORD
- name: PG_HOST
value: postgres
</code></pre>
<p>And having this <code>application.yml</code>:</p>
<pre><code>spring:
application:
name: realo
datasource:
username: ${PG_USERNAME}
password: ${PG_PASSWORD}
url: jdbc:postgresql://${PG_HOST}:5432/postgres
driver-class-name: org.postgresql.Driver
</code></pre>
<p>So, after starting PostgreSQL, the backend application crashes refusing the database connection with this exception:</p>
<pre><code>Caused by: org.postgresql.util.PSQLException: Something unusual has occurred to cause the driver to fail. Please report this exception.
at org.postgresql.Driver.connect(Driver.java:282) ~[postgresql-42.3.8.jar:42.3.8]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-4.0.3.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-4.0.3.jar:na]
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:266) ~[liquibase-core-4.9.1.jar:na]
... 180 common frames omitted
Caused by: java.lang.IllegalArgumentException: Prohibited character
at org.postgresql.shaded.com.ongres.saslprep.SaslPrep.saslPrep(SaslPrep.java:105) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.common.stringprep.StringPreparations$2.doNormalize(StringPreparations.java:55) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.common.stringprep.StringPreparations.normalize(StringPreparations.java:65) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.common.ScramMechanisms.saltedPassword(ScramMechanisms.java:152) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.common.ScramFunctions.saltedPassword(ScramFunctions.java:59) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.client.ScramSession$ClientFinalProcessor.<init>(ScramSession.java:196) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.client.ScramSession$ClientFinalProcessor.<init>(ScramSession.java:163) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.shaded.com.ongres.scram.client.ScramSession$ServerFirstProcessor.clientFinalProcessor(ScramSession.java:130) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.jre7.sasl.ScramAuthenticator.processServerFirstMessage(ScramAuthenticator.java:147) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:816) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:180) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.Driver.makeConnection(Driver.java:402) ~[postgresql-42.3.8.jar:42.3.8]
at org.postgresql.Driver.connect(Driver.java:261) ~[postgresql-42.3.8.jar:42.3.8]
... 188 common frames omitted
</code></pre>
<p>If I inspect the ENV everything looks OK, but the backend still fails.</p>
<pre><code>kubectl exec -i -t pod/realo-66856b59b4-bp27r -- /bin/sh -c 'echo "username: $PG_USERNAME\npassword: $PG_PASSWORD\n"'
</code></pre>
<p>output:</p>
<pre><code>username: postgres\npassword: password
</code></pre>
| h1alexbel | <p>You used</p>
<pre><code>$ echo postgres | base64
cG9zdGdyZXMK
</code></pre>
<p>to generate your base64 encoded secret.</p>
<p>The problem with that is echo adds a linefeed to its output, you should call it like this:</p>
<pre><code>$ echo -n postgres | base64
cG9zdGdyZXM=
</code></pre>
<p>You need to do the same with password too:</p>
<pre><code>$ echo -n password | base64
cGFzc3dvcmQ=
</code></pre>
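<p>If you want to avoid encoding by hand altogether, the same Secret can be declared with <code>stringData</code> and Kubernetes will do the base64 encoding for you:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: pg-secrets
type: Opaque
stringData:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
</code></pre>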
| Bill Mair |
<p>Getting this message:</p>
<pre><code>Failed to pull image....Error response from daemon: pull access denied for {private_repo}, the repository does not exist or may require 'docker login'
</code></pre>
<p>After deploying a new helm chart using AWS ECR. BUT:</p>
<ol>
<li>Full private repo path is correct and image exists in ECR, in ReplicationController: private_repo/serviceXYZ:latest-develop</li>
<li>Other pods using the SAME repo but different paths ARE working, ex: private_repo/serviceABC (their latest local images are several months old and we did deploy them recently which tells me we don't pull them locally but straight from ECR)</li>
<li><code>~/.docker/config.json</code> shows that it's logged in</li>
<li>There is NO secret in other services (no <strong>imagePullSecrets</strong>) which are pulled successfully</li>
</ol>
<p>Any thoughts appreciated.</p>
| Anton Kim | <p>You need to authenticate to ECR to pull the image. If you haven't done so, follow the <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth" rel="nofollow noreferrer">instructions here</a>. Basically you get an authorization token from AWS and pass it to <code>docker login</code>. The account required by ECR is IAM-based and different from your local Docker account.</p>
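<p>As a sketch (the region, account id and secret name below are placeholders), the token exchange with AWS CLI v2 looks like this; note that ECR tokens expire after about 12 hours:</p>
<pre><code># log the local docker daemon in to ECR
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# optionally store the same token as an image pull secret for the cluster
kubectl create secret docker-registry ecr-pull-secret \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"
</code></pre>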
<p>If you have already done that, the token may have expired. Log in again then.</p>
<p>The reason you don't have to do this for other pods is likely those images have been built or pulled to local so Docker doesn't have to download it (with the <code>imagePullPolicy</code> of the pod set to <code>IfNotPresent</code> as default).</p>
| Son Nguyen |
<p>I've been reading a lot about microservices and kubernetes a lot lately, and also experimenting with the two technologies.
I am aware of the different pieces available in k8s such as pods, services, deployments...
I am also aware that services can be of type ClusterIP to restrict connectivity to within the cluster.</p>
<p>Let's say we have 10 microservices that each expose its functionality using some kind of http server. Basically each microservice is a REST API.
In kubernetes terms, this would be a container managed by a Pod and there would be a Service managing this Pod's connectivity.</p>
<p>My question is the following:
If you are not using a framework such as Spring that utilizes Eureka, and instead have, for example, a bare-bones golang http server in each microservice, how does each service find the other services without passing the service name and port number to each microservice?</p>
<p>For example:
Let's say Service A is exposed on port 8080 which needs to call Service B which is exposed on port 8081 and another service which is running on port 8082; this means that we need to pass the ports as environment variables to Service A...You can clearly see that when the number of services increases, it will be hard maintaining all of these environment variables.</p>
<p>How do people solve such a problem? do they use some kind of centralized service registry? if yes then how would that service registry integrate with the kubernetes Service?</p>
<p>Thanks in advance</p>
<p><strong>UPDATE</strong></p>
<p>The provided accepted answer is spot on with the explanation, the thing that I was failing to see is that for example in kubernetes, typically each deployment object has its own IP address so it's okay to use the same port number for each microservice.</p>
<p>In case anyone is interested with a simple/dummy microservice example, I put together a small project that can be run on a local kubernetes cluster with minikube.</p>
<p><a href="https://github.com/fouadkada/simple_microservice_example" rel="nofollow noreferrer">https://github.com/fouadkada/simple_microservice_example</a></p>
| Fouad | <p>I understand that by saying service you mean microservice,
but to avoid ambiguity, first I would like to define some things. When I say <strong>service</strong> I am referring to a k8s service. When I say <strong>application</strong> I am referring to an application running in a pod.</p>
<p>Now your questions:</p>
<blockquote>
<p>How does each service find the other service without passing the service name and port number to each microservice</p>
</blockquote>
<p>In kubernetes there is a concept of services (<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">link to docs</a>).
Each service is registered in k8s dns server (typically CoreDNS). You can use names of these services as regular FQDN.</p>
<hr>
<blockquote>
<p>Let's say Service A is exposed on port 8080 which needs to call Service B which is exposed on port 8081 and another service which is running on port 8082; this means that we need to pass the ports as environment variables to Service A.</p>
</blockquote>
<p>As @sachin already correctly mentioned, you don't typically use different ports for every application just because you can.
Ports are good for knowing what type of application to expect on a specific port, e.g. when you see port 80 it is almost certainly an HTTP server; when you see 6379 you can be pretty sure it's redis, etc.</p>
<p>In k8s documentation on <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model" rel="nofollow noreferrer">Kubernetes networking model</a> you can find:</p>
<blockquote>
<p>Every Pod gets its own IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.</p>
</blockquote>
<hr>
<p>One last thing you may not have known about is that when k8s starts a pod, some information about already existing services is passed through environment variables. You can check it yourself; just exec into any pod and run <code>set</code> to see all environment variables.</p>
<p>Here is example output (I removed unimportant part):</p>
<pre><code>KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
</code></pre>
<p>But please notice that these environment variables list only the services within the same namespace as your application, and only those that existed when the pod was created. Anything created after the container started will not be reflected in the envs.</p>
<hr>
<p>To summarize and answer your question:</p>
<blockquote>
<p>How do people solve such a problem?</p>
</blockquote>
<p>People use DNS (existing on every k8s cluster) and static ports (ports that don't change randomly).</p>
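<p>As a minimal sketch in Go (the service name <code>service-b</code>, port <code>8080</code> and path are assumptions for illustration), application A just calls application B through its service name, which the cluster DNS resolves:</p>
<pre><code>package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "service-b" is the name of the k8s Service fronting application B.
	// Within the same namespace the short name resolves via CoreDNS;
	// the fully qualified form is service-b.<namespace>.svc.cluster.local.
	resp, err := http.Get("http://service-b:8080/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
</code></pre>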
<p>Of course there are solutions like <a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a>, but out of the box functionality of k8s is sufficient for 90%+ usecases.</p>
| Matt |
<p>I have deployed mysql statefulset following the link <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a> and all the 3 mysql pods are running fine.
I have written an application in Golang which reads the MySQL environment variables from a config.toml file when connecting to the MySQL server on my local machine. The config.toml file contains these variables. These are used when my application is running on my local machine.</p>
<pre><code>MySQLServer = "127.0.0.1"
Port = "3306"
MySQLDatabase = "hss_lte_db"
User = "hss"
Password = "hss"
</code></pre>
<p>Now I would like to deploy my application in my Kubernetes cluster so that it connects to the MySQL StatefulSet service. I created my deployment as shown below, but the pod shows Error and CrashLoopBackOff. I need help on how to connect my application to the MySQL StatefulSet service.
Also, I am not sure if the MySQLServer connection string is right in the ConfigMap.</p>
<pre><code>apiVersion: v1
data:
config.toml: |
MySQLServer = "mysql-0.mysql,mysql-1.mysql,mysql-2.mysql"
Port = "3306"
MySQLDatabase = "hss_lte_db"
User = "root"
Password = ""
GMLCAddressPort = ":8000"
NRFIPAddr = "192.168.31.115"
NRFPort = "30005"
kind: ConfigMap
metadata:
name: vol-config-gmlcapi
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gmlc-instance
namespace: default
spec:
selector:
matchLabels:
app: gmlc-instance
replicas: 1
template:
metadata:
labels:
app: gmlc-instance
version: "1.0"
spec:
nodeName: k8s-worker-node2
containers:
- name: gmlc-instance
image: abrahaa1/gmlcapi:1.0.0
imagePullPolicy: "Always"
ports:
- containerPort: 8000
volumeMounts:
- name: configs
mountPath: /gmlcapp/config.toml
subPath: config.toml
volumeMounts:
- name: gmlc-configs
mountPath: /gmlcapp/profile.json
subPath: profile.json
volumes:
- name: configs
configMap:
name: vol-config-gmlcapi
- name: gmlc-configs
configMap:
name: vol-config-profile
</code></pre>
<p>I have made some variable name changes to the deployment, so the updated deployment is as above, but it still did not connect. The description of the pod is as follows:</p>
<pre><code>ubuntu@k8s-master:~/gmlc$ kubectl describe pod gmlc-instance-5898989874-s5s5j -n default
Name: gmlc-instance-5898989874-s5s5j
Namespace: default
Priority: 0
Node: k8s-worker-node2/192.168.31.151
Start Time: Sun, 10 May 2020 19:50:09 +0300
Labels: app=gmlc-instance
pod-template-hash=5898989874
version=1.0
Annotations: <none>
Status: Running
IP: 10.244.1.120
IPs:
IP: 10.244.1.120
Controlled By: ReplicaSet/gmlc-instance-5898989874
Containers:
gmlc-instance:
Container ID: docker://b756e67a39b7397e24fe394a8b17bc6de14893329903d3eace4ffde86c335213
Image: abrahaa1/gmlcapi:1.0.0
Image ID: docker-pullable://abrahaa1/gmlcapi@sha256:e0c8ac2a3db3cde5015ea4030c2099126b79bb2472a9ade42576f7ed1975b73c
Port: 8000/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 10 May 2020 19:50:33 +0300
Finished: Sun, 10 May 2020 19:50:33 +0300
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 10 May 2020 19:50:17 +0300
Finished: Sun, 10 May 2020 19:50:17 +0300
Ready: False
Restart Count: 2
Environment: <none>
Mounts:
/gmlcapp/profile.json from gmlc-configs (rw,path="profile.json")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-prqdp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
configs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vol-config-gmlcapi
Optional: false
gmlc-configs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vol-config-profile
Optional: false
default-token-prqdp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-prqdp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 9s (x3 over 28s) kubelet, k8s-worker-node2 Pulling image "abrahaa1/gmlcapi:1.0.0"
Normal Pulled 7s (x3 over 27s) kubelet, k8s-worker-node2 Successfully pulled image "abrahaa1/gmlcapi:1.0.0"
Normal Created 7s (x3 over 26s) kubelet, k8s-worker-node2 Created container gmlc-instance
Normal Started 6s (x3 over 26s) kubelet, k8s-worker-node2 Started container gmlc-instance
Warning BackOff 6s (x3 over 21s) kubelet, k8s-worker-node2 Back-off restarting failed container
</code></pre>
<p>Still not able to connect.</p>
<p>Logs output:
ubuntu@k8s-master:~/gmlc$ kubectl logs gmlc-instance-5898989874-s5s5j -n default
2020/05/10 18:13:21 open config.toml: no such file or directory</p>
<p>It looks like the config.toml file is the problem, and my application needs this file to run.</p>
<p>I have 2 files (config.toml and profile.json) that have to be in the /gmlcapp/ directory for the application to run. Because profile.json is too huge to add to the deployment as above, I have created its ConfigMap separately. This is the ConfigMap output:</p>
<pre><code>ubuntu@k8s-master:~/gmlc$ kubectl get configmaps
NAME DATA AGE
mysql 2 4d3h
vol-config-gmlcapi 1 97m
vol-config-profile 1 7h56m
</code></pre>
<p>Also, these are the logs when I comment out vol-config-profile in the deployment:</p>
<pre><code>ubuntu@k8s-master:~/gmlc$ kubectl logs gmlc-instance-b4ddd459f-fd8nr -n default
root:@tcp(mysql-0.mysql,mysql-1.mysql,mysql-2.mysql:3306)/hss_lte_db
2020/05/10 18:39:43 GMLC cannot ping MySQL sever
2020/05/10 18:39:43 Cannot read json file
panic: Cannot read json file
goroutine 1 [running]:
log.Panic(0xc00003dda8, 0x1, 0x1)
/usr/local/go/src/log/log.go:351 +0xac
gmlc-kube/handler.init.0()
/app/handler/init.go:43 +0x5e9
</code></pre>
| tom | <p>I have got it running by changing the volumeMounts in the deployment. The original spec declared <code>volumeMounts:</code> twice under the container, so only the second list (the profile.json mount) took effect and config.toml was never mounted; merging both mounts into a single <code>volumeMounts:</code> list fixes it.</p>
<p>Solution below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gmlc-instance
namespace: default
spec:
selector:
matchLabels:
app: gmlc-instance
replicas: 1
template:
metadata:
labels:
app: gmlc-instance
version: "1.0"
spec:
nodeName: k8s-worker-node2
containers:
- name: gmlc-instance
image: abrahaa1/gmlcapi:1.0.0
imagePullPolicy: "Always"
ports:
- containerPort: 8000
volumeMounts:
- name: configs
mountPath: /gmlcapp/config.toml
subPath: config.toml
readOnly: true
- name: gmlc-configs
mountPath: /gmlcapp/profile.json
subPath: profile.json
volumes:
- name: configs
configMap:
name: vol-config-gmlcapi
- name: gmlc-configs
configMap:
name: vol-config-profile
</code></pre>
| tom |
<p>I'm using a readinessProbe on my container and configured it to work over HTTPS with the scheme attribute.
My server expects to get the certificates. How can I configure the readiness probe to support HTTPS with certificate exchange? I don't want it to skip the certificates.</p>
<pre><code> readinessProbe:
httpGet:
path: /eh/heartbeat
port: 2347
scheme: HTTPS
initialDelaySeconds: 210
periodSeconds: 10
timeoutSeconds: 5
</code></pre>
| Mary1 | <p>You can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">Readiness command</a> instead of HTTP request. This will give you complete control over the check, including the certificate exchange.</p>
<p>So, instead of:</p>
<pre><code>readinessProbe:
httpGet:
path: /eh/heartbeat
port: 2347
scheme: HTTPS
</code></pre>
<p>, you would have something like:</p>
<pre><code>readinessProbe:
exec:
command:
- python
- your_script.py
</code></pre>
<p>Be sure the script returns 0 if all is well, and non-zero value on failure.
(python your_script.py is, of course, just one example. You would know what is the best approach for you)</p>
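<p>For example, a sketch using curl with client certificates (the paths and port are assumptions; curl must be available in the image and the certificates mounted into the container):</p>
<pre><code>readinessProbe:
  exec:
    command:
    - curl
    - --fail
    - --cacert
    - /etc/tls/ca.crt
    - --cert
    - /etc/tls/tls.crt
    - --key
    - /etc/tls/tls.key
    - https://localhost:2347/eh/heartbeat
  initialDelaySeconds: 210
  periodSeconds: 10
  timeoutSeconds: 5
</code></pre>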
| Dragan Bocevski |
<p>I am currently trying to deploy a simple RabbitMQ cluster for my project.
So far, I have found these two examples that suit my needs:
<a href="https://www.linkedin.com/pulse/deploying-rabbitmq-cluster-kubernetes-part-1-darshana-dinushal/" rel="nofollow noreferrer">https://www.linkedin.com/pulse/deploying-rabbitmq-cluster-kubernetes-part-1-darshana-dinushal/</a></p>
<p><a href="https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/messaging/rabbitmq/kubernetes/rabbit-statefulset.yaml" rel="nofollow noreferrer">https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/messaging/rabbitmq/kubernetes/rabbit-statefulset.yaml</a></p>
<p>And since the official website, articles and tutorial videos about RabbitMQ on Kubernetes are aimed at large production deployments, I only have these 2 examples to follow.</p>
<p>When I try it, the RabbitMQ pod is stuck in the Pending state while the other pods are running fine. I am currently using the YAML files from the first example.</p>
<pre><code>kubectl describe pod rabbitmq-0
Name: rabbitmq-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=rabbitmq
controller-revision-hash=rabbitmq-84b847b5d5
statefulset.kubernetes.io/pod-name=rabbitmq-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/rabbitmq
Init Containers:
config:
Image: busybox
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
cp /tmp/config/rabbitmq.conf /config/rabbitmq.conf && ls -l /config/ && cp /tmp/config/enabled_plugins /etc/rabbitmq/enabled_plugins
Environment: <none>
Mounts:
/config/ from config-file (rw)
/etc/rabbitmq/ from plugins-file (rw)
/tmp/config/ from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffj67 (ro)
Containers:
rabbitmq:
Image: rabbitmq:3.8-management
Ports: 4369/TCP, 5672/TCP
Host Ports: 0/TCP, 0/TCP
Environment:
RABBIT_POD_NAME: rabbitmq-0 (v1:metadata.name)
RABBIT_POD_NAMESPACE: default (v1:metadata.namespace)
RABBITMQ_NODENAME: rabbit@$(RABBIT_POD_NAME).rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
RABBITMQ_USE_LONGNAME: true
RABBITMQ_CONFIG_FILE: /config/rabbitmq
RABBITMQ_ERLANG_COOKIE: <set to the key 'RABBITMQ_ERLANG_COOKIE' in secret 'rabbit-secret'> Optional: false
K8S_HOSTNAME_SUFFIX: .rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
Mounts:
/config/ from config-file (rw)
/etc/rabbitmq/ from plugins-file (rw)
/var/lib/rabbitmq from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffj67 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-rabbitmq-0
ReadOnly: false
config-file:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
plugins-file:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rabbitmq-config
Optional: false
kube-api-access-ffj67:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m25s (x39 over 89m) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
</code></pre>
| TaeXtreme | <p>I had already tested deploying RabbitMQ via docker-compose and it worked before.</p>
<p>I discovered a tool called Kompose and used it to convert my RabbitMQ docker-compose file to k8s YAML files.</p>
<p>And it works!</p>
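<p>For reference, the conversion itself is a single command (assuming the compose file is named <code>docker-compose.yml</code>):</p>
<pre><code>kompose convert -f docker-compose.yml
</code></pre>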
<p>YAML files generated by Kompose:</p>
<p>service file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.26.0 (40646f47)
labels:
io.kompose.service: rabbitmq3
name: rabbitmq3
spec:
type: LoadBalancer
ports:
- name: amqp
port: 5672
targetPort: 5672
- name: discovery
port: 15672
targetPort: 15672
selector:
io.kompose.service: rabbitmq3
</code></pre>
<p>deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.26.0 (40646f47)
labels:
io.kompose.service: rabbitmq3
name: rabbitmq3
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: rabbitmq3
strategy: { }
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.26.0 (40646f47)
labels:
io.kompose.service: rabbitmq3
spec:
containers:
- env:
- name: RABBITMQ_DEFAULT_PASS
value: rabbitpassword
- name: RABBITMQ_DEFAULT_USER
value: rabbituser
image: rabbitmq:management-alpine
name: rabbitmq
ports:
- containerPort: 15672
name: discovery
- containerPort: 5672
name: amqp
resources: { }
restartPolicy: Always
</code></pre>
| TaeXtreme |
<p>I'm building a test automation tool that needs to launch a set of tests and collect logs and results. My plan is to build a container with the necessary dependencies for the test framework and launch it in Kubernetes.</p>
<p>Is there any application that abstracts the complexity of managing the pod lifecycle and provides a simple way to achieve this use case, preferably through an API? Basically my test scheduler needs to deploy a container in kubernetes, launch a test and collect the log files at the end.</p>
<p>I already looked at Knative and kubeless - they seem to be complex and may over-complicate what I'm trying to do here.</p>
| bram | <p>Based on the information you provided, all I can recommend is the kubernetes API itself.</p>
<p>You can create a pod with it, wait for it to finish and gather logs. If that's all you need, you don't need any other fancy applications. Here is a <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">list of k8s client libraries</a>. </p>
<p>If you don't want to use client libraries you can always use the REST API. </p>
<p>If you are not sure how to use the REST API, run kubectl commands with the <code>--v=10</code> flag for debug output, where you can see all requests between kubectl and the api-server as a reference guide.</p>
<p>Kubernetes also provided detailed <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">documentation for k8s REST api</a>.</p>
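<p>For illustration, a bare-bones version of that flow with plain kubectl (which is itself just a client of the same API) could look like the sketch below; the image and pod names are assumptions:</p>
<pre><code># launch the test container as a one-off pod (image name is an assumption)
kubectl run test-run-1 --image=my-test-image --restart=Never

# poll until the pod reaches Succeeded or Failed
kubectl get pod test-run-1 -o jsonpath='{.status.phase}'

# collect the logs, then clean up
kubectl logs test-run-1 > test-run-1.log
kubectl delete pod test-run-1
</code></pre>
<p>The same four calls (create pod, read status, read logs, delete pod) map directly onto client library or REST calls.</p>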
| Matt |
<p>I know, for example, that you can get the lastUpdateTime of a Deployment with <strong>kubectl</strong>:</p>
<pre><code>kubectl get deploy <deployment-name> -o jsonpath={.status.conditions[1].lastUpdateTime}
</code></pre>
<p>Or via <strong>client-go</strong>:</p>
<pre><code>func deploymentCheck(namespace string, clientset *kubernetes.Clientset) {
// get the deployments in the namespace
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Fatal("\nNo deployments in the namespace", err)
} else if err != nil {
log.Fatal("\nFailed to fetch deployments in the namespace", err)
}
var dptNames []string
for _, dpt := range deployments.Items {
dptNames = append(dptNames, dpt.Name)
}
// check the last update time of the deployments
for _, dpt := range deployments.Items {
lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime
dptAge := time.Since(lastUpdateTime.Time)
fmt.Printf("\nDeployment %v age: %v", dpt.Name, dptAge)
}
}
</code></pre>
<p>The equivalent of <code>lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime</code> for a StatefulSet doesn't seem to exist.</p>
<p>So, how can I get the lastUpdateTime of a StatefulSet?</p>
| Kaio H. Cunha | <p>I noticed that the only things that change after someone edits a given resource are the resource's lastAppliedConfiguration, Generation and ObservedGeneration. So, I stored them in lists:</p>
<pre><code> for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
</code></pre>
<p>Here's the full function:</p>
<pre><code>func DeploymentCheck(namespace string, clientset *kubernetes.Clientset) ([]string, []string, []int64, []int64) {
var deploymentNames []string
var lastAppliedConfigs []string
var generations []int64
var observedGenerations []int64
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Print("No deployments in the namespace", err)
} else if err != nil {
log.Print("Failed to fetch deployments in the namespace", err)
}
for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
return deploymentNames, lastAppliedConfigs, generations, observedGenerations
}
</code></pre>
<p>I use all this information to instantiate a struct called Namespace, which contains all major resources a k8s namespace can have.</p>
<p>Then, after a given time I check the same namespace again and check if its resources had any changes:</p>
<pre><code>if !reflect.DeepEqual(namespace.stsLastAppliedConfig, namespaceCopy.stsLastAppliedConfig) {
...
}
else if !reflect.DeepEqual(namespace.stsGeneration, namespaceCopy.stsGeneration) {
...
}
else if !reflect.DeepEqual(namespace.stsObservedGeneration, namespaceCopy.stsObservedGeneration) {
...
}
</code></pre>
<p>So, the only workaround I found was to compare the resource's configuration, including StatefulSets', after a given time. Apparently, for some resources you cannot get any information about their lastUpdateTime.</p>
<p>I also found out that lastUpdateTime is actually not reliable, as it treats minor cluster changes as changes to the resource. For example, if a cluster rotates and kills all pods, the lastUpdateTime of a Deployment will be updated. That's not what I wanted. I wanted to detect user changes to resources, like when someone applies an edited yaml file or runs <code>kubectl edit</code>.</p>
<p>@hypperster, I hope it helps.</p>
| Kaio H. Cunha |
<p>I want to create a cluster under EKS in a version that was recently deprecated, 1.15, to test something version-specific.</p>
<p>My command below is failing:</p>
<pre><code> eksctl create cluster --name playgroundkubernetes --region us-east-1 --version 1.15 --nodegroup-name standard-workers --node-type t2.medium --managed
</code></pre>
<p>Is there a workaround where I can create a cluster with version 1.15?</p>
| opensource-developer | <p>In addition to mreferre's comment, if you're trying to just create a Kubernetes cluster and don't need it to be in AWS, you could use Kind (<a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>) or similar to create something much more quickly and probably more cheaply.</p>
| Smarticu5 |
<p>We usually add the <code>Kubernetes</code> configuration file inside the <code>.kube</code> directory in our Home directory when we use either <code>Windows</code> or <code>Linux</code> operating systems.
But when I try to create a <code>.kube</code> directory on <code>Mac OS</code>, it says,</p>
<blockquote>
<p>You can't use a name that begins with the dot because these names are
reserved for the system. Please choose another name.</p>
</blockquote>
<p>How can I do this to put the k8s configuration file into it?</p>
| Kalhara Tennakoon | <p>I created the <code>.kube</code> directory using the terminal app on Mac and also noticed that the <code>.kube</code> directory can be created at any location on your Mac.</p>
<p>Even though you created it, it will not display in Finder. You have to open the Terminal and check its availability using the <code>ls</code> or <code>ls -la</code> commands.</p>
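<p>For example, from Terminal (the kubeconfig path is an assumption):</p>
<pre><code>mkdir -p ~/.kube
ls -la ~   # the .kube directory shows up here even though Finder hides it
cp /path/to/your-kubeconfig ~/.kube/config
</code></pre>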
| Kalhara Tennakoon |
<p>Jenkins version is 2.222.4.
We upgraded the jenkins kubernetes plugin from 1.14.2 --> 1.26.0.
Before the plugin upgrade, the jenkins slave would mount /home/jenkins as read/write so it could use the .gradle files in there for its build. </p>
<p>Post plugin upgrade, /home/jenkins has changed to <strong>readonly</strong>, and instead the dir called /home/jenkins/agent has become the read/write one.
However the build job now has no r/w access to the files in /home/jenkins, which it needs.</p>
<p>I did a df -h on our slave jnlp pod pre upgrade (k8splugin-V1.14.2) and see the following:</p>
<p><code>Filesystem Size Used Available Use% Mounted on
overlay 119.9G 5.6G 109.1G 5% /
/dev/nvme0n1p2 119.9G 5.6G 109.1G 5% /home/jenkins</code></p>
<p>and can see its mounted as read/write</p>
<p><code>cat /proc/mounts | grep -i jenkins
/dev/nvme0n1p2 /home/jenkins ext4 rw,relatime,data=ordered 0 0</code></p>
<p>Post plugin upgrade, if I run a df -h I don't even see /home/jenkins mounted, only:
<code>/dev/nvme0n1p2 120G 5.6G 110G 5% /etc/hosts</code></p>
<p>and if I cat /proc/mounts I only see this post upgrade</p>
<p><code>jenkins@buildpod:~$ cat /proc/mounts | grep -i jenkins
/dev/nvme0n1p2 /home/jenkins/agent ext4 rw,relatime,data=ordered 0 0
/dev/nvme0n1p2 /home/jenkins/.jenkins ext4 rw,relatime,data=ordered 0 0</code></p>
<p>Also seeing this in the jenkins job log but not sure if it is relevant:
<code>WARNING] HOME is set to / in the jnlp container. You may encounter troubles when using tools or ssh client. This usually happens if the uid doesnt have any entry in /etc/passwd. Please add a user to your Dockerfile or set the HOME environment variable to a valid directory in the pod template definition.</code></p>
<p>Any ideas or workarounds would be most welcome, as I'm badly stuck on this issue.
Brian</p>
| b Od | <p>My colleague just figured this out. He found it goes back to a change the plugin developers made sometime in August 2019, to be compatible with Kubernetes 1.18. That's when they changed the default workspace, in release 1.18.0 of the plugin. It was spotted and supposed to be fixed here github.com/jenkinsci/kubernetes-plugin/pull/713, but it persists in our case. The workaround is to hardcode <code>workingDir: '/home/jenkins'</code> under the container in the Jenkinsfile of each job.</p>
| b Od |
<p>I have the following code for pod creation. I have two nodes: one master and one worker node. I am creating two pods; I need one pod to be scheduled on the master and the other pod on the worker node. I have not specified anything for the second pod, testing1, because by default pods are scheduled on worker nodes. But the second pod, testing1, is also scheduled on the master node.</p>
<p>Yaml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
labels:
app: test
spec:
containers:
- name: test
image: test:latest
command: ["sleep"]
args: ["infinity"]
imagePullPolicy: Never
ports:
- containerPort: 8080
nodeSelector:
node_type: master_node
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
name: testing1
labels:
app: testing1
spec:
containers:
- name: testing1
image: testing1:latest
command: ["sleep"]
args: ["infinity"]
imagePullPolicy: Never
</code></pre>
<p>Help in solving this issue is highly appreciated. Thanks!</p>
| Mazia | <p>You can use nodeAffinity / antiAffinity to solve this.</p>
<p>Why would you assign pods to a master node?</p>
<p>The master nodes are the control plane of your k8s cluster, and scheduling pods there may have a negative impact if those pods consume too many resources.</p>
<p>If you really want to assign a pod to a master node, I recommend you untaint a single master node (remove the NoSchedule taint) and then pin the pod to it with nodeAffinity, unless you really need to run this on all your master nodes. A rough sketch of what that could look like is shown below.</p>
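<p>A minimal sketch, assuming the chosen master node is named <code>master-1</code> and carries the usual <code>node-role.kubernetes.io/master</code> taint:</p>
<pre><code># remove the NoSchedule taint from that one master node
kubectl taint nodes master-1 node-role.kubernetes.io/master:NoSchedule-

# then pin the pod to it via nodeAffinity in the pod spec
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master-1
</code></pre>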
| John Peterson |
<p>We are moving to kubernetes and we are totally new to it.</p>
<p>In our current mono-service setup we have: Nginx -> Web application. This way we can protect some static assets via authentication in the web application and use the internal and X-Accel-Redirect features of Nginx to serve static files after authentication takes place.</p>
<p>Now in kubernetes we have Ingress and behind these services:</p>
<ul>
<li>web app</li>
<li>private static service</li>
</ul>
<p>Is there a way to tell the ingress, from the web application, to "redirect" the request (as we kind of do with sendfile) so that the private static service will reply to it? Or is there another way to protect our static assets while keeping the static service separate and independent in the kubernetes setup?</p>
<p>We kind of made it work by chaining the private static service in front of the web application, but it feels there must be a better way to do it.</p>
| meili | <p>Here is how I managed to make it work.</p>
<p>I created two ingresses.
First:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
internal;
nginx.ingress.kubernetes.io/rewrite-target: /some/path/$1
name: static-service-internal
spec:
rules:
- http:
paths:
- backend:
serviceName: private-static-service
servicePort: 80
path: /protected/(.*)
</code></pre>
<p>and the second ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
name: web-app
spec:
rules:
- http:
paths:
- backend:
serviceName: web-app
servicePort: 80
path: /
</code></pre>
<p>What you see above is supposed to work as in <a href="https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/" rel="nofollow noreferrer">this example from nginx documentation</a></p>
<p>When receiving <code>X-Accel-Redirect: /protected/iso.img</code> from <code>web-app</code> then it will request <code>/some/path/iso.img</code> from <code>private static service</code>.</p>
<p>Let me know if this solves you problem.</p>
| Matt |
<p>I am using <a href="https://github.com/bitnami/bitnami-docker-mongodb" rel="nofollow noreferrer">Bitnami MongoDB</a> together with <a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb" rel="nofollow noreferrer">MongoDB Helm Chart</a> (version 10.6.10, image tag being 3.6.17-ol-7-r26) to run a Mongo cluster in AWS under Kubernetes, which was initially created using the Helm chart. I am trying to get backups working using <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html" rel="nofollow noreferrer">EBS Snapshots</a> so that periodically whole volume from the primary MongoDB member is copied, which in case something happens could be restored to a new MongoDB installation (using the same Helm chart).</p>
<p>Currently I'm trying to get a backup process so that the snapshot would be put into a new volume, and a new Mongo namespace could be created in the same Kubernetes cluster, where the existing volume would be mounted. This works so that I'm able to create the Kubernetes volumes manually from the snapshot (Kubernetes Persistent Volume and Persistent Volume Claim) and link them with the MongoDB Helm chart (as those PV + PVC contain proper names), and start the Mongo server.</p>
<p>However, once the pods are running (primary, secondary and arbiter), the previously existing replica set is in place (old local database I guess) and obviously not working, as it's from the snapshot state.</p>
<p>Now I would wish to, following <a href="https://docs.mongodb.com/manual/tutorial/restore-replica-set-from-backup" rel="nofollow noreferrer">MongoDB documentation</a></p>
<ul>
<li>Destroy existing replica set</li>
<li>Reset default settings on arbiter+slave</li>
<li>Create replica set from primary/master (with data being in place for primary)</li>
<li>Attach arbiter and slave to replica set to sync the data similar to what's in the docs.</li>
</ul>
<p>Checking into the state from primary, I get</p>
<pre><code>{
"stateStr" : "REMOVED",
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"codeName" : "InvalidReplicaSetConfig",
.. other fields as well
}
</code></pre>
<p>So the replica set is degraded. When I try to delete the local database and move this to standalone, I get:</p>
<pre><code>rs0:OTHER> rs.slaveOk()
rs0:OTHER> show dbs
admin 0.000GB
local 1.891GB
+ some actual databases for data, so I can see data is in place
rs0:OTHER> use local
switched to db local
rs0:OTHER> db.dropDatabase()
"errmsg" : "not authorized on local to execute command { dropDatabase: 1.0, .. few other fields .., $readPreference: { mode: \"secondaryPreferred\" }, $db: \"local\" }"
rs0:OTHER> db.grantRolesToUser('root', [{ role: 'root', db: 'local' }])
2020-12-08T18:25:31.103+0000 E QUERY [thread1] Error: Primary stepped down while waiting for replication :
</code></pre>
<p>As I'm using the Bitnami Helm chart, it has some start parameters for the Kubernetes cluster which probably aren't in sync with the existing volume, which already has some configuration in place.</p>
<p>So I'm just wondering if I'm trying to do all the wrong things here, and the correct solution would be to start a fresh MongoDB chart and then restore the database using Mongorestore (so basically not using EBS snapshots), or is there a way to launch this from an existing snapshot/volume, so that I would get the benefit of the Helm chart and EBS snapshots?</p>
| mpartan | <p>When using existing data to restore a Mongo cluster created with the <code>bitnami/mongodb</code> helm chart, you will need to set some values on the installation command that ensure all the configuration (configmaps, secrets, etc.) is in sync with the data stored in this volume.</p>
<p>First, in addition to <code>--set architecture=replicaset</code> you must add an existing PVC to the chart:</p>
<pre><code>--set persistence.existingClaim=my-restored-pvc
</code></pre>
<p>If you did not set any other param such as <code>auth.username</code> or <code>auth.database</code>, you will only need to add <code>--set auth.rootPassword=$MONGODB_ROOT_PASSWORD</code> and <code>--set auth.replicaSetKey=$MONGODB_REPLICA_SET_KEY</code> to your install command; the values must be those of the old instance, which you can get by</p>
<pre><code>$ export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongo-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
$ export MONGODB_REPLICA_SET_KEY=$(kubectl get secret --namespace default mongo-mongodb -o jsonpath="{.data.mongodb-replica-set-key}" | base64 --decode)
</code></pre>
<p>On the other hand, if you have set specific auth params, you will also need to add those values to the chart, like <code>--set auth.password=$MONGODB_PASSWORD,auth.username=myuser,auth.database=mydatabase</code>; you can get the password by</p>
<pre><code>$ export MONGODB_PASSWORD=$(kubectl get secret --namespace default mongo-mongodb -o jsonpath="{.data.mongodb-password}" | base64 --decode)
</code></pre>
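<p>Putting it together, the restore install could look roughly like the sketch below; the release name, PVC name and any extra flags are assumptions that depend on your original installation:</p>
<pre><code>helm install mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set persistence.existingClaim=my-restored-pvc \
  --set auth.rootPassword=$MONGODB_ROOT_PASSWORD \
  --set auth.replicaSetKey=$MONGODB_REPLICA_SET_KEY
</code></pre>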
<p>Thanks!</p>
| Daniel Arteaga Barba |
<hr />
<p>So like you said, I created another pod which is of kind: Job and included the script.sh.</p>
<p>In the script.sh file, I run "kubectl exec" against the main pod to run a few commands.</p>
<p>The script gets executed, but I get the error "cannot create resource "pods/exec" in API group".</p>
<p>So I created a ClusterRole with resources: ["pods/exec"] and bound it to the default service account using a ClusterRoleBinding.</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: service-account-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: default
namespace: default
roleRef:
kind: ClusterRole
name: pod-reader
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
  namespace: default
</code></pre>
<p>In the pod, which is of kind: Job, I include the service account as shown below:</p>
<pre><code>  restartPolicy: Never
  serviceAccountName: default
</code></pre>
<p>But I still get the same error. What am I doing wrong here?</p>
<pre><code>Error from server (Forbidden): pods "mongo-0" is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods/exec" in API group "" in the namespace "default"
</code></pre>
| jeril | <p>If this is something that needs to be run regularly for maintenance, look into the Kubernetes DaemonSet object.</p>
| John Peterson |
<p>I'm trying to get the hello-node service running and accessible from outside on an Azure VM with minikube.</p>
<blockquote>
<p>minikube start --driver=virtualbox</p>
</blockquote>
<p>created deployment</p>
<blockquote>
<p>kubectl create deployment hello-node --image=k8s.gcr.io/echoserver</p>
</blockquote>
<p>exposed deployment</p>
<blockquote>
<p>kubectl expose deployment hello-node --type=LoadBalancer --port=8080</p>
</blockquote>
<p>suppose kubectl get services says:</p>
<blockquote>
<p>hello-node LoadBalancer 1.1.1.1 8080:31382/TCP</p>
</blockquote>
<p>The public IP of the azure VM is 2.2.2.2, the private IP is 10.10.10.10 and the virtualbox IP is 192.168.99.1/24</p>
<p>How can I access the service from a browser outside the cluster's network?</p>
| Serve Laurijssen | <p>In your case, you need to use --type=NodePort to create a service object that exposes the deployment. A type=LoadBalancer service is backed by external cloud providers.</p>
<pre><code>kubectl expose deployment hello-node --type=NodePort --name=hello-node-service
</code></pre>
<p>Display information about the Service:</p>
<pre><code>kubectl describe services hello-node-service
</code></pre>
<p>The output should be similar to this:</p>
<pre><code>Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31496/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
</code></pre>
<p>Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 31496.</p>
<p>Get the public IP address of your VM. And then you can use this URL:</p>
<pre><code>http://<public-vm-ip>:<node-port>
</code></pre>
<p>Don't forget to open this port in firewall rules.</p>
| Alex0M |
<p>I have created a helm chart for a varnish cache server which is running in a kubernetes cluster. While testing with the generated "external IP", it's throwing the error shared below.</p>
<pre><code>HTTP/1.1 503 Backend fetch failed
Date: Tue, 17 Mar 2020 08:20:52 GMT
Server: Varnish
Content-Type: text/html; charset=utf-8
Retry-After: 5
X-Varnish: 570521
Age: 0
Via: 1.1 varnish (Varnish/6.3)
X-Cache: uncached
Content-Length: 283
Connection: keep-alive
</code></pre>
<p>Sharing varnish.vcl, values.yaml and deployment.yaml below. Any suggestions on how to resolve this, given that I have hardcoded the backend server as .host="www.varnish-cache.org" with port "80"? My requirement is that on executing curl -IL I should get the response with cached values, not as described above (directly from the backend server).</p>
<p>varnish.vcl:</p>
<pre><code># VCL version 5.0 is not supported so it should be 4.0 or 4.1 even though actually used Varnish version is 6
vcl 4.1;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
{{ .Values.varnishconfigData | indent 2 }}
sub vcl_recv {
if(req.url == "/healthcheck") {
return(synth(200,"OK"));
}
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "cached";
} else {
set resp.http.X-Cache = "uncached";
}
}
</code></pre>
<p>values.yaml:</p>
<pre><code># Default values for tt.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
#varnishBackendService: "www.varnish-cache.org"
#varnishBackendServicePort: "80"
image:
repository: varnish
tag: 6.3
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
# type: ClusterIP
type: LoadBalancer
port: 80
# externalIPs: 192.168.0.10
varnishconfigData: |-
backend default {
.host = "www.varnish-cache.org";
.host = "100.68.38.132"
.port = "80";
.first_byte_timeout = 60s;
.connect_timeout = 300s ;
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
sub vcl_backend_response {
set beresp.ttl = 5m;
}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
memory: 128Mi
requests:
memory: 64Mi
#resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>Deployment.yaml:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ include "varnish.fullname" . }}
labels:
app: {{ include "varnish.name" . }}
chart: {{ include "varnish.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
spec:
volumes:
- name: varnish-config
configMap:
name: {{ include "varnish.fullname" . }}-varnish-config
items:
- key: default.vcl
path: default.vcl
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
# command: ["/bin/sh"]
# args: ["-c", "while true; do service varnishd status, varnishlog; sleep 10;done"]
env:
- name: VARNISH_VCL
value: /etc/varnish/default.vcl
volumeMounts:
- name: varnish-config
mountPath : /etc/varnish/
ports:
- name: http
containerPort: 80
protocol: TCP
targetPort: 80
livenessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
failureThreshold: 3
initialDelaySeconds: 45
timeoutSeconds: 10
periodSeconds: 20
readinessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 5
restartPolicy: "Always"
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
</code></pre>
| Debasis Singh | <p>The <code>HTTP/1.1 503 Backend fetch failed</code> error indicates that Varnish is unable to connect to the backend host and port.</p>
<p>I advise you to try some good old manual debugging:</p>
<ul>
<li>Access the bash shell of one of the containers</li>
<li>Open <code>/etc/varnish/default.vcl</code> and check the exact hostname & port that were parsed into the <code>backend</code> definition</li>
<li>Make sure <code>curl</code> is installed and try to curl the hostname on that specific port</li>
<li>Maybe even install <code>telnet</code> and try to see if the port of the hostname is accepting connections</li>
</ul>
<blockquote>
<p>Basically you'll try to figure out if there is a network configuration that is prohibiting you from making an outbound connection, or if there's something else preventing Varnish from making the fetch.</p>
</blockquote>
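<p>A rough sketch of those steps (the pod name and backend host are placeholders you need to fill in yourself):</p>
<pre><code># open a shell in one of the varnish pods
kubectl exec -it <varnish-pod-name> -- /bin/bash

# check what was actually rendered into the backend definition
cat /etc/varnish/default.vcl

# try to reach the backend host/port that appears in the VCL
curl -v http://<backend-host>:80/
</code></pre>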
| Thijs Feryn |
<p>I am running a multinode k8s cluster on my workstation. I have created the VMs using multipass. The k8s cluster is deployed using microk8s.</p>
<p>Microk8s provides a private registry as an <a href="https://github.com/ubuntu/microk8s/blob/8bf6e65e9ed5f0749f43882c31787765ad91af1b/microk8s-resources/actions/registry.yaml" rel="nofollow noreferrer">addon</a>.</p>
<p>It provides 32000 as the node port for the registry.</p>
<p>I would like to connect to this cluster remotely and push docker images to this registry. </p>
<p>I have added <code>172.**.44.***:32000</code> as an insecure registry in my docker settings on my remote PC.</p>
<p>Note: <code>172.**.44.***</code> is the IP of the cluster (something you get in kubectl cluster-info)</p>
<p>But I am unable to make it work</p>
<pre><code>docker build -t 172.**.44.***:32000/myapp:v1 .
docker push 172.**.44.***:32000/myapp:v1
Get http://172.**.44.***:32000/v2/: dial tcp 172.**.44.***:32000: connect: no route to host
</code></pre>
<hr>
<p>I haven't used <code>microk8s</code> to set up a kubernetes cluster before, but my feeling is that the IP <code>172.xx.xx.xx</code> is the wrong one and you can't connect to it from your local PC. </p>
<p>So, what's the output of the commands below?</p>
<ol>
<li>What are the node IPs, including master and worker nodes?</li>
</ol>
<pre><code>kubectl get nodes -o wide
</code></pre>
<ol start="2">
<li>What's the service setup?</li>
</ol>
<pre><code>kubectl get services
</code></pre>
<p>Can you make sure the service for that private registry server's port is set up and can be connected to? </p>
<ol start="3">
<li>check your own PC's IP</li>
</ol>
<pre><code># for windows
ipconfig
# for linux/macos
ifconfig
</code></pre>
<p>Maybe there are many IPs in the output; make sure you get the proper local IP for your PC. </p>
<p>For example, if it is something like <code>10.xx.xx.xx</code>, then you should use a similar IP to connect to that private registry server; you just need to find it out. </p>
<ol start="4">
<li>check what network CNI you are using, weavenet, flannel, etc. </li>
</ol>
<p>IPs like <code>172.xx.xx.xx</code> are mostly used by these network CNI providers; they can be used inside the kubernetes cluster, but not from your local host. </p>
| piby180 | <p>After you enable the registry on microk8s, run this command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get svc -n container-registry
</code></pre>
<p>You can see that microk8s maps the registry service's NodePort <code>32000</code> to port <code>5000</code>; I then use an <code>ingress</code> to expose it via https. </p>
<p>First, you have to enable <code>ingress</code> on microk8s:</p>
<pre><code>microk8s.enable ingress
</code></pre>
<p>Then, you have to create a <code>tls secret</code> if you want to use https:</p>
<pre class="lang-sh prettyprint-override"><code>openssl genrsa -aes128 -out server.key 2048
openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 -out server.crt
kubectl create secret tls registry-secret-tls --cert=server.crt --key=server.key -n container-registry
</code></pre>
<p>Then use <code>kubectl apply -f</code> to create an <code>ingress</code> as a reverse proxy for the <code>registry</code> service.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: registry
namespace: container-registry
annotations:
nginx.ingress.kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "500m"
nginx.ingress.kubernetes.io/proxy-pass-headers: "Location"
spec:
tls:
- hosts:
- ingress.local
secretName: registry-secret-tls
rules:
- host: ingress.local
http:
paths:
- path: /
backend:
serviceName: registry
servicePort: 5000
</code></pre>
<p>Then, add <code>127.0.0.1 ingress.local</code> to the <code>/etc/hosts</code> file.
Finally, use <code>buildah</code> to push docker images to <code>ingress.local</code>.</p>
<pre class="lang-sh prettyprint-override"><code>buildah push --tls-verify=false 44c92e82c220 docker://ingress.local/datacenter-school
</code></pre>
<p>This time, it looks like everything is OK. But when I try to list images in microk8s, I can't find the image that I just pushed.</p>
<pre class="lang-sh prettyprint-override"><code>microk8s.ctr images ls -q |grep datacenter-school
</code></pre>
<p>That's quite weird!</p>
| zaoying |
<p>I am currently trying to set up Airflow to work in a Kubernetes-like environment. For airflow to be useful, I need to be able to use the Git-Sync features so that the DAGs can be stored separately from the Pod, and thus not be reset when the Pod downscales or restarts. I am trying to set it up with ssh.</p>
<p>I have been searching for good documentation on the Airflow config or tutorials on how to set this up properly, but this has been to no avail. I would very much appreciate some help here, as I have been struggling with this for a while. </p>
<p>Here is how I set the relevant config; please note I have some stand-ins for links and some information due to security reasons:</p>
<pre><code>git_repo = https://<git-host>/scm/<project-name>/airflow
git_branch = develop
git_subpath = dags
git_sync_root = /usr/local/airflow
git_sync_dest = dags
git_sync_depth = 1
git_sync_ssh = true
git_dags_folder_mount_point = /usr/local/airflow/dags
git_ssh_key_secret_name = airflow-secrets
git_ssh_known_hosts_configmap_name = airflow-configmap
dags_folder = /usr/local/airflow/
executor = KubernetesExecutor
dags_in_image = False
</code></pre>
<p>Here is how I have set up my origin/config repo:</p>
<pre><code>-root
|-configmaps/airflow
|-airflow.cfg
|-airflow-configmap.yaml
|-environment
|-<environment specific stuff>
|-secrets
|-airflow-secrets.yaml
|-ssh
|-id_rsa
|-id_rsa.pub
|-README.md
</code></pre>
<p>The airflow-configmap and secrets look like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: airflow-secrets
data:
# key needs to be gitSshKey
gitSshKey: <base64 encoded private sshKey>
</code></pre>
<p>and </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-configmap
data:
known_hosts: |
https://<git-host>/ ssh-rsa <base64 encoded public sshKey>
</code></pre>
<p>The repo that I am trying to sync to has the Public key set as an access key and is just a folder named dags with 1 dag inside.</p>
<p>My issue is that I do not even know what is wrong at this point. I have no way of knowing which part of my config is set correctly and which part is set incorrectly, and documentation on the subject is very lackluster. </p>
<p>If there is more information that is required I will be happy to provide it. </p>
<p>Thank you for your time</p>
| NobiliChili | <p>What's the error you're seeing when doing this?</p>
<p>A couple of things you need to consider:</p>
<ul>
<li><p>Create an SSH key locally using this <a href="https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent" rel="nofollow noreferrer">link</a> and:</p>
<ol>
<li><p>Repository Name > Settings > Deploy Keys > Value of ssh_key.pub</p>
</li>
<li><p>Ensure "write access" is checked</p>
</li>
</ol>
</li>
<li><p>My <code>Dockerfile</code> I'm using looks like:</p>
<pre><code>FROM apache/airflow:2.1.2
COPY requirements.txt .
RUN python -m pip install --upgrade pip
RUN pip install -r requirements.txt
</code></pre>
</li>
<li><p>The <code>values.yaml</code> from the official Airflow Helm repository (<code>helm repo add apache-airflow https://airflow.apache.org</code>) needs the following values updated under <code>gitSync</code>:</p>
<ul>
<li><p><code>enabled: true</code></p>
</li>
<li><p><code>repo: ssh://[email protected]/username/repository-name.git</code></p>
</li>
<li><p><code>branch: master</code></p>
</li>
<li><p><code>subPaths: ""</code> (if DAGs are in repository root)</p>
</li>
<li><p><code>sshKeySecret: airflow-ssh-git-secret</code></p>
</li>
<li><p><code>credentialsSecret: git-credentials</code></p>
</li>
</ul>
</li>
<li><p>Export SSH key and <code>known_hosts</code> to Kubernetes secret for accessing the private repository</p>
<pre><code>kubectl create secret generic airflow-ssh-git-secret \
--from-file=gitSshKey=/path/to/.ssh/id_ed25519 \
--from-file=known_hosts=/path/to/.ssh/known_hosts \
--from-file=id_ed25519.pub=/path/to/.ssh/id_ed25519.pub \
-n airflow
</code></pre>
</li>
<li><p>Create and apply manifests:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: airflow
name: airflow-ssh-git-secret
data:
gitSshKey: <base64_encoded_private_key_id_ed25519_in_one_line>
</code></pre>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: git-credentials
data:
GIT_SYNC_USERNAME: base64_encoded_git_username
GIT_SYNC_PASSWORD: base64_encoded_git_password
</code></pre>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: airflow
name: known-hosts
data:
known_hosts: |
line 1 of known_host file
line 2 of known_host file
line 3 of known_host file
...
</code></pre>
</li>
<li><p>Update Airflow release</p>
<p><code>helm upgrade --install airflow apache-airflow/airflow -n airflow -f values.yaml --debug</code></p>
</li>
<li><p>Get pods in the <em>airflow</em> namespace</p>
<p><code>kubectl get pods -n airflow</code></p>
</li>
<li><p>The <code>airflow-scheduler-SOME-STRING</code> pod is going to have 3 containers running. View the logs of the <code>git-sync-init</code> container if you don't see the pods in Running state (see the example after this list)</p>
</li>
</ul>
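<p>For example, viewing the git-sync-init logs could look like this; the pod name is an assumption, so use whatever <code>kubectl get pods -n airflow</code> returns:</p>
<pre><code>kubectl logs airflow-scheduler-SOME-STRING -c git-sync-init -n airflow
</code></pre>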
| Saurabh |
<p>How do I access the Minio console?</p>
<p><code>minio.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
spec:
clusterIP: None
ports:
- port: 9000
name: minio
selector:
app: minio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: minio
spec:
serviceName: minio
replicas: 4
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
terminationGracePeriodSeconds: 20
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- minio
topologyKey: kubernetes.io/hostname
containers:
- name: minio
env:
- name: MINIO_ACCESS_KEY
value: "hengshi"
- name: MINIO_SECRET_KEY
value: "hengshi202020"
image: minio/minio:RELEASE.2018-08-02T23-11-36Z
args:
- server
- http://minio-0.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-1.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-2.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-3.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
ports:
- containerPort: 9000
- containerPort: 9001
volumeMounts:
- name: minio-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: minio-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 300M
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: NodePort
ports:
- name: server-port
port: 9000
targetPort: 9000
protocol: TCP
nodePort: 30009
- name: console-port
port: 9001
targetPort: 9001
protocol: TCP
nodePort: 30010
selector:
app: minio
</code></pre>
<p><code>curl http://NodeIP:30010</code> fails.<br />
I tried the container arg <code>--console-address ":9001"</code> and the <code>MINIO_BROWSER</code> env variable, but the console is still not accessible.</p>
<p>One more question: what are the latest image startup parameters for Minio? There seems to be something wrong with my args.</p>
<p><a href="https://i.stack.imgur.com/wgXIa.png" rel="nofollow noreferrer">enter image description here</a></p>
| seth | <p>You can specify <code>--console-address :9001</code> in your deployment.yaml file, in the <code>args:</code> section, as below.</p>
<pre><code>args:
- server
- --console-address
- :9001
- /data
</code></pre>
<p>Likewise, your Service and Ingress need to point to port 9001 now with the latest Minio.</p>
<pre><code>ports:
- protocol: TCP
port: 9001
</code></pre>
| Jay Oza |
<p>I have the stable repo configured:</p>
<pre><code>▶ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
</code></pre>
<p>and I know I can perform</p>
<pre><code>helm install stable/jenkins
</code></pre>
<p>Then why isn’t the following command retrieving any results?</p>
<pre><code>▶ helm search repo stable/jenkins
No results found
~
▶ helm search repo jenkins
No results found
</code></pre>
<p>Using</p>
<pre><code>▶ helm version --tls
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.8", GitCommit:"145206680c1d5c28e3fcf30d6f596f0ba84fcb47", GitTreeState:"clean"}
</code></pre>
<p><strong>edit</strong>: even after an update</p>
<pre><code>▶ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "flagger" chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stakater" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
~/ 40m
▶ helm search repo stable/jenkins
No results found
</code></pre>
<p>I even tried to <code>remove</code> and <code>add</code> back again the <code>stable</code> repo; same result.</p>
| pkaramol | <p>You are running <code>helm search repo stable/jenkins</code> and this is helm 3 syntax.</p>
<p>Have a look at this help for <strong>helm3</strong>:</p>
<pre><code>$ helm search --help
Search provides the ability to search for Helm charts in the various places
they can be stored including the Helm Hub and repositories you have added. Use
search subcommands to search different locations for charts.
Usage:
helm search [command]
Available Commands:
hub search for charts in the Helm Hub or an instance of Monocular
repo search repositories for a keyword in charts
</code></pre>
<p>But in your question you wrote:</p>
<blockquote>
<p>helm version --tls <br>
Client: &version.Version{SemVer:"<strong>v2.9.1</strong> ...</p>
</blockquote>
<p>This means that you are using <strong>helm 2</strong>. Now let's have a look at the <strong>helm 2</strong> help command:</p>
<pre><code>$ helm search --help
...
To look for charts with a particular name (such as stable/mysql), try
searching using vertical tabs (\v). Vertical tabs are used as the delimiter
between search fields. For example:
helm search --regexp '\vstable/mysql\v'
To search for charts using common keywords (such as "database" or
"key-value store"), use
helm search database
or
helm search key-value store
Usage:
helm search [keyword] [flags]
</code></pre>
<p><strong>TLDR:</strong> Use:</p>
<pre><code>helm search stable/jenkins
</code></pre>
<p>Let me know if you have any further questions. I'd be happy to help.</p>
| Matt |
<p>I am seeing multiple and different explanations of imperative vs. declarative for Kubernetes - something like: imperative means using yaml files to create resources that describe the state, and declarative the other way around.</p>
<p>What is the real and clear difference between these two? I would really appreciate it if you could group the commands that fall under each approach - for example, whether create is imperative, etc.</p>
| Nag | <p>"Imperative" is a command - like "create 42 widgets".</p>
<p>"Declarative" is a statement of the desired end result - like "I want 42 widgets to exist".</p>
<p>Typically, your yaml file will be declarative in nature: it will say that you want 42 widgets to exist. You'll give that to Kubernetes, and it will execute the steps necessary to end up with having 42 widgets.</p>
<p>"Create" is itself an imperative command, but what you're creating is a Kubernetes cluster. What the cluster should look like is determined by the declarations in the yaml file.</p>
| user12904511 |
<p>In a project, I'm enabling the cluster autoscaler functionality from Kubernetes.</p>
<p>According to the documentation, <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work" rel="nofollow noreferrer">How does scale down work</a>, I understand that when a node's utilization stays below 50% of its capacity for a given time, it is removed, together with all of its pods, which will be recreated on a different node if needed.</p>
<p>But the following problem can happen: what if all the pods related to a specific deployment are contained in a node that is being removed? That would mean users might experience downtime for the application of this deployment.</p>
<p>Is there a way to avoid that the scale down deletes a node whenever there is a deployment which only contains pods running on that node?</p>
<p>I have checked the documentation, and one possible (but not good) solution is to add an annotation to all of the pods containing applications, as described <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">here</a>, but this clearly would not scale down the cluster in an optimal way.</p>
| Rodrigo Boos | <p>In the same documentation:</p>
<blockquote>
<p>What happens when a non-empty node is terminated? As mentioned above, all pods should be migrated elsewhere. Cluster Autoscaler does this by evicting them and tainting the node, so they aren't scheduled there again.</p>
</blockquote>
<p>What is the <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api" rel="nofollow noreferrer">Eviction</a> ?:</p>
<blockquote>
<p>The eviction subresource of a pod can be thought of as a kind of policy-controlled DELETE operation on the pod itself.</p>
</blockquote>
<p>Ok, but what if all pods get evicted at the same time on the node?
You can use Pod Disruption Budget to make sure minimum replicas are always working:</p>
<p>What is PDB?:</p>
<blockquote>
<p>A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions.</p>
</blockquote>
<p>In <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget" rel="nofollow noreferrer">k8s docs</a> you can also read:</p>
<blockquote>
<p>A PodDisruptionBudget has three fields:</p>
<p>A label selector .spec.selector to specify the set of pods to which it applies. This field is required.</p>
<p><code>.spec.minAvailable which is a description of the number of pods from that set that must still be available after the eviction</code>, even in the absence of the evicted pod. minAvailable can be either an absolute number or a percentage.</p>
<p>.spec.maxUnavailable (available in Kubernetes 1.7 and higher) which is a description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage.</p>
</blockquote>
<p>So if you use PDB for your deployment it should not get deleted all at once.</p>
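<p>A minimal sketch, assuming your deployment's pods carry the label <code>app: my-app</code> (on clusters 1.21+ use <code>policy/v1</code> instead):</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
</code></pre>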
<p>But please note that if the node fails for some other reason (e.g. hardware failure), you will still experience downtime. If you really care about high availability, consider using pod anti-affinity to make sure the pods are not all scheduled on one node.</p>
| Matt |
<p>I have an AKS cluster and I want to get the public IP of the virtual machine scale set associated with the cluster's agent pool. I found this <a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#:~:text=To%20list%20the%20public%20IP,instance%2Dpublic%2Dips%20command.&text=You%20can%20also%20display%20the,%2D03%2D30%20or%20higher." rel="nofollow noreferrer">documentation page</a> and tried the following API call: </p>
<p><code>GET https://management.azure.com/subscriptions/{your sub ID}/resourceGroups/{RG name}/providers/Microsoft.Compute/virtualMachineScaleSets/{scale set name}/publicipaddresses?api-version=2017-03-30</code></p>
<p>but i get this response: <code>{"value":[]}</code></p>
| Daniel | <p>By default the virtual machine scale set of AKS doesn't have a public IP. AKS nodes do not require their own public IP addresses for communication.</p>
<p>But you can assign a public IP per node (a preview feature at the time of writing).</p>
<p>Here is the link to official documentation:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-for-your-node-pools-preview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-for-your-node-pools-preview</a></p>
| Alex0M |
<p>I am learning Kubernetes Label Selector at the URL below.</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a></p>
<h3>Set-based requirement</h3>
<blockquote>
<p>Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in, notin and exists (only the key identifier). For example:</p>
</blockquote>
<pre><code>!partition
</code></pre>
<blockquote>
<p>The fourth example (this one) selects all resources without a label with key partition; no values are checked.</p>
</blockquote>
<p>My assumption is that if I specify "!test", all pods will be output,
and that nothing will be output when "!app" is specified.</p>
<p>I want to exclude the key and get output as in the example described here, but I did not get the expected result.</p>
<pre><code>My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l !app
-bash: !app: event not found
My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l "!app"
-bash: !app: event not found
My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l "!test"
-bash: !test: event not found
My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l !test
-bash: !test: event not found
</code></pre>
<p>The result of executing the --show-labels command is as follows.</p>
<pre><code>My@Pc:~/Understanding-K8s/chap04$ kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-pod-a 1/1 Running 0 89m app=photo-view,env=stage
nginx-pod-b 1/1 Running 0 89m app=imagetrain,env=test
nginx-pod-c 1/1 Running 0 89m app=prediction,env=test
nginx-pod-d 1/1 Running 0 89m app=photo-view,env=stage
nginx-pod-e 1/1 Running 0 89m app=imagetrain,env=stage
nginx-pod-f 1/1 Running 0 89m app=photo-view,env=prod
</code></pre>
<p>How can I exclude the intended keys such as app and env?</p>
<hr />
<p>tyzbit has given me the answer, and I have tried it.
I couldn't describe the result well in the comment, so I will add it here.</p>
<pre><code>My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l '!app'
No resources found in default namespace.
My@Pc:~/Understanding-K8s/chap04$ kubectl get pod -l '!test'
NAME READY STATUS RESTARTS AGE
nginx-pod-a 1/1 Running 0 162m
nginx-pod-b 1/1 Running 0 162m
nginx-pod-c 1/1 Running 0 162m
nginx-pod-d 1/1 Running 0 162m
nginx-pod-e 1/1 Running 0 162m
nginx-pod-f 1/1 Running 0 162m
</code></pre>
<hr />
<p>Girdhar Singh Rathore taught me about AWK and jsonpath.</p>
<p>(Maybe, I've written the wrong format...)</p>
<p>AWK</p>
<pre><code>My@Pc:~$ kubectl get pods --show-labels | awk '$6 !~/app/ {print ;}'
NAME READY STATUS RESTARTS AGE LABELS
My@Pc:~$ kubectl get pods --show-labels | awk '$6 !~/test/ {print ;}'
NAME READY STATUS RESTARTS AGE LABELS
nginx-pod-a 1/1 Running 0 18m app=photo-view,env=stage
nginx-pod-d 1/1 Running 0 18m app=photo-view,env=stage
nginx-pod-e 1/1 Running 0 18m app=imagetrain,env=stage
nginx-pod-f 1/1 Running 0 18m app=photo-view,env=prod
</code></pre>
<p>jsonpath</p>
<pre><code>My@Pc:~$ kubectl get pods -o jsonpath='{range .items[*]} {.metadata.name} {.metadata.labels} {"\n"} {end}' | awk '$2 !~/app/ {print $1}'
My@Pc:~$ kubectl get pods -o jsonpath='{range .items[*]} {.metadata.name} {.metadata.labels} {"\n"} {end}' | awk '$2 !~/test/ {print $1}'
nginx-pod-a
nginx-pod-d
nginx-pod-e
nginx-pod-f
</code></pre>
| genmai | <p>The <code>!</code> is getting interpreted by your shell, use a single quote to prevent this. The correct syntax is:</p>
<p><code>kubectl get pods -l '!app'</code></p>
| Tyzbit |
<p>When I use this command in a kubernetes v1.18 jenkins master pod to mount an nfs file system:</p>
<pre><code>root@jenkins-67fff76bb6-q77xf:/# mount -t nfs -o v4 192.168.31.2:/infrastructure/jenkins-share-workspaces /opt/jenkins
mount: permission denied
root@jenkins-67fff76bb6-q77xf:/#
</code></pre>
<p>Why does it show permission denied although I am using the root user? When I use this command on another machine (not in docker), it works fine, which shows the server side works fine. This is my kubernetes jenkins master pod securityContext config in yaml:</p>
<pre><code>securityContext:
runAsUser: 0
fsGroup: 0
</code></pre>
<p>Today I tried another kubernetes pod, mounted the nfs file system, and it threw the same error. It seems mounting NFS from the host works fine, while mounting from a kubernetes pod has a permission problem. Why would this happen? The NFS works fine via a PVC bound to a PV in this kubernetes pod, so why does mounting it from inside the container fail? I am confused.</p>
| Dolphin | <p>There are two ways to mount an nfs volume to a pod.</p>
<p>First (directly in pod spec):</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: pod-using-nfs
spec:
volumes:
- name: nfs-volume
nfs:
server: 192.168.31.2
path: /infrastructure/jenkins-share-workspaces
containers:
- name: app
image: example
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs
</code></pre>
<p>Second (creating persistens nfs volume and volume claim):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: 192.168.31.2
path: "/infrastructure/jenkins-share-workspaces"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: nfs
---
kind: Pod
apiVersion: v1
metadata:
name: pod-using-nfs
spec:
containers:
- name: app
image: example
volumeMounts:
- name: nfs
mountPath: /opt/jenkins
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs
</code></pre>
<p>EDIT:</p>
<p>The solution above is the preferred one, but if you really need to use mount inside the container, you need to add capabilities to the pod:</p>
<pre><code>spec:
containers:
- securityContext:
capabilities:
add: ["SYS_ADMIN"]
</code></pre>
| Matt |
<p>I am new to Ignite and Kubernetes. I have a .NET Core 3.1 web application which is hosted on Azure Linux App Service. </p>
<p>I followed the instructions <a href="https://apacheignite.readme.io/docs/microsoft-azure-deployment" rel="nofollow noreferrer">(Apache Ignite Instructions Offical Site)</a> and Apache Ignite could run on Azure Kubernetes. I could create a sample table and read-write actions worked successfully. Here is the screenshot of my success tests on PowerShell.</p>
<p><a href="https://i.stack.imgur.com/ly7Py.png" rel="nofollow noreferrer">Please see the success test</a></p>
<p>Now, I am trying to connect to Apache Ignite from my .NET Core web app but I couldn't make it work.
My code is below. I tried to connect with IgniteConfiguration and SpringCfgXml, but both of them give errors.</p>
<pre><code>private void Initialize()
{
var cfg = GetIgniteConfiguration();
_ignite = Ignition.Start(cfg);
InitializeCaches();
}
public IgniteConfiguration GetIgniteConfiguration()
{
var appSettingsJson = AppSettingsJson.GetAppSettings();
var igniteNodes = appSettingsJson["AppSettings:IgniteNodes"];
var nodeList = igniteNodes.Split(",");
var config = new IgniteConfiguration
{
Logger = new IgniteLogger(),
DiscoverySpi = new TcpDiscoverySpi
{
IpFinder = new TcpDiscoveryStaticIpFinder
{
Endpoints = nodeList
},
SocketTimeout = TimeSpan.FromSeconds(5)
},
IncludedEventTypes = EventType.CacheAll,
CacheConfiguration = GetCacheConfiguration()
};
return config;
}
</code></pre>
<p>The first error I get:</p>
<blockquote>
<p>Apache.Ignite.Core.Common.IgniteException HResult=0x80131500<br>
Message=Java class is not found (did you set IGNITE_HOME environment
variable?):
org/apache/ignite/internal/processors/platform/PlatformIgnition<br>
Source=Apache.Ignite.Core</p>
</blockquote>
<p>Also, I have no idea what I am supposed to set for IGNITE_HOME, or which username and secret to use for authentication. </p>
| OguzKaanAkyalcin | <p>Solution:
I finally connected to Ignite on Azure Kubernetes.</p>
<p>Here is my connection method.</p>
<pre><code>public void TestConnection()
{
var cfg = new IgniteClientConfiguration
{
Host = "MyHost",
Port = 10800,
UserName = "user",
Password = "password"
};
using (IIgniteClient client = Ignition.StartClient(cfg))
{
var employeeCache1 = client.GetOrCreateCache<int, Employee>(
new CacheClientConfiguration(EmployeeCacheName, typeof(Employee)));
employeeCache1.Put(1, new Employee("Bilge Wilson", 12500, 1));
}
}
</code></pre>
<p>To find the host IP, user name and client secret, please check the images below.</p>
<p>Client Id and Secret
<a href="https://i.stack.imgur.com/79bab.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/79bab.png" alt="enter image description here"></a></p>
<p>IP Addresses
<a href="https://i.stack.imgur.com/oDBBL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oDBBL.png" alt="enter image description here"></a></p>
<p>Note: I don't need to set any IGNITE_HOME or JAVA_HOME variables.</p>
| OguzKaanAkyalcin |
<p>I am new to kubernetes. I am running my kubernetes cluster inside my Docker Desktop VM. Below are the versions</p>
<p>Docker Desktop Community : 2.3.0.4 (Stable)<br>
Engine: 19.03.12<br>
kubernetes: 1.16.5</p>
<p>I created a simple react app. Below is the Dockerfile.</p>
<pre><code>FROM node:13.12.0-alpine
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
# add app files
COPY . ./
# start app
CMD ["npm", "start"]
</code></pre>
<p>I built a Docker image and ran it; it works fine. I added the image to the deployment.yaml below.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test-react-app
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: test-react-app
template:
metadata:
labels:
app: test-react-app
spec:
containers:
- name: test-react
image: myrepo/test-react:v2
imagePullPolicy: Never
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: test-service
namespace: dev
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
nodePort: 31000
selector:
app: test-react-app
</code></pre>
<p>The pod never starts. Below are the events from describe.</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned dev/test-deployment-7766949554-m2fbz to docker-desktop
Normal Pulled 8m38s (x5 over 10m) kubelet, docker-desktop Container image "myrepo/test-react:v2" already present on machine
Normal Created 8m38s (x5 over 10m) kubelet, docker-desktop Created container test-react
Normal Started 8m38s (x5 over 10m) kubelet, docker-desktop Started container test-react
Warning BackOff 26s (x44 over 10m) kubelet, docker-desktop Back-off restarting failed container
</code></pre>
<p>Below are the logs from the container. It looks as if the container is running.</p>
<pre><code>> [email protected] start /app
> react-scripts start
[34mℹ[39m [90m「wds」[39m: Project is running at http://10.1.0.33/
[34mℹ[39m [90m「wds」[39m: webpack output is served from
[34mℹ[39m [90m「wds」[39m: Content not from webpack is served from /app/public
[34mℹ[39m [90m「wds」[39m: 404s will fallback to /
Starting the development server...
</code></pre>
| Seeker | <p>It worked!!!</p>
<p>I built the React app as a production build and copied the Dockerfile from the technique given in this link <a href="https://dev.to/rieckpil/deploy-a-react-application-to-kubernetes-in-5-easy-steps-516j" rel="nofollow noreferrer">https://dev.to/rieckpil/deploy-a-react-application-to-kubernetes-in-5-easy-steps-516j</a>.</p>
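<p>For reference, the approach from that article boils down to a multi-stage build: compile the static production bundle with Node, then serve it with nginx on port 80 (which also matches the <code>containerPort: 80</code> in your deployment). A rough sketch with illustrative image tags:</p>
<pre><code># build stage: produce the static production bundle
FROM node:13.12.0-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build

# serve stage: nginx serves the bundle on port 80
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>The dev-server image from the question listens on port 3000 by default, so the service's <code>targetPort: 80</code> would not have matched it either.</p>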
| Seeker |
<h2>This is different from other cors related questions</h2>
<p>I am running my Node backend API as <strong>microservices on Kubernetes deployed on DigitalOcean</strong>. I have literally read all the blogs/forums related to this issue and haven't found any solution (specifically the ones related to DigitalOcean).</p>
<p>I am unable to connect to the cluster <em>via React application</em> running on '<strong>localhost:3000</strong>' or anywhere outside the kubernetes cluster.</p>
<p>It is giving me below error:</p>
<pre><code>Access to XMLHttpRequest at 'http://cultor.dev/api/users/signin'
from origin 'http://localhost:3000' has been blocked by
CORS policy: Response to preflight request doesn't pass access
control check: Redirect is not allowed for a preflight request.
</code></pre>
<p>The kubernetes cluster's loadbalancer is listening on <strong>"cultor.dev"</strong> which is set as a local domain in <strong>/etc/hosts</strong>.
<strong>I am able to make it work using Postman!</strong></p>
<blockquote>
<p><strong>NOTE:</strong>
I have tried using the cors package as well; it didn't help. Also, it works fine if I run this React app <strong>inside of the kubernetes cluster</strong>, which I do not want.</p>
</blockquote>
<blockquote>
<p><strong>Ingress nginx config</strong> (tried using annotations mentioned on the official website):</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
## this tells ingress to pick up the routing rules mentioned in this config
annotations:
nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
kubernetes.io/ingress.class: nginx
## tells ingress to check for regex in the config file
nginx.ingress.kubernetes.io/use-regex: 'true'
# nginx.ingress.kubernetes.io/enable-cors: 'true'
# nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
# nginx.ingress.kubernetes.io/cors-allow-origin: "*"
# nginx.ingress.kubernetes.io/cors-max-age: 600
# certmanager.k8s.io/cluster-issuer: letsencrypt
# kubernetes.io/ingress.class: nginx
# service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
</code></pre>
<blockquote>
<p><strong>One of the microservices config</strong> (tried cors package as well):</p>
</blockquote>
<pre class="lang-js prettyprint-override"><code>// APP SETTINGS
app.set('trust proxy', true);
app.use(json());
app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Headers', '*');
res.header('Access-Control-Request-Headers', '*');
if (req.method === "OPTIONS") {
res.header('Access-Control-Allow-Methods', '*');
return res.status(200).json({});
}
next();
});
</code></pre>
| Karan Kumar | <p>Okay, after a lot of research and with the help of the other answers I did the following:</p>
<ol>
<li>I changed the request to the backend (from the client side) to <strong>https</strong> instead of <strong>http</strong>. This fixed the error <code>Redirect is not allowed for a preflight request</code></li>
<li>I changed the <strong>ingress nginx</strong> config to get rid of the error <code>multiple values in Access-Control-Allow-Origin</code>:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
kubernetes.io/ingress.class: nginx
## tells ingress to check for regex in the config file
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
add_header Access-Control-Allow-Methods "POST, GET, OPTIONS";
add_header Access-Control-Allow-Credentials true;
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
</code></pre>
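<p>For completeness, the client-side change from point 1 was only the URL scheme; it could look something like this (illustrative only, the actual React code is not shown in the question):</p>
<pre class="lang-js prettyprint-override"><code>// before: the request went to http://cultor.dev/api/..., which triggered the redirect
const credentials = { email: 'user@example.com', password: 'secret' }; // placeholder values
fetch('https://cultor.dev/api/users/signin', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(credentials),
})
  .then((res) => res.json())
  .then(console.log);
</code></pre>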
<p>I hope it helps others as well.</p>
| Karan Kumar |
<p>I created a custom exporter in Python using prometheus-client package. Then created necessary artifacts to find the metric as a target in Prometheus deployed on Kubernetes.</p>
<p>But I am unable to see the metric as a target despite following all available instructions.
Help in finding the problem is appreciated.</p>
<p>Here is the summary of what I did.</p>
<ol>
<li>Installed Prometheus using Helm on the K8s cluster in a namespace <em><strong>prometheus</strong></em></li>
<li>Created a python program with prometheus-client package to create a metric</li>
<li>Created and deployed an image of the exporter in dockerhub</li>
<li>Created a deployment against the metrics image, in a namespace <em><strong>prom-test</strong></em></li>
<li>Created a Service, ServiceMonitor, and a ServiceMonitorSelector</li>
<li>Created a service account, role and binding to enable access to the end point</li>
</ol>
<p>Following is the code.</p>
<p><em><strong>Service & Deployment</strong></em></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-app-exporter
namespace: prom-test
labels:
app: test-app-exporter
spec:
type: ClusterIP
selector:
app: test-app-exporter
ports:
- name: http
protocol: TCP
port: 6000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app-exporter
namespace: prom-test
spec:
selector:
matchLabels:
app: test-app-exporter
template:
metadata:
labels:
app: test-app-exporter
spec:
#serviceAccount: test-app-exporter-sa
containers:
- name: test-app-exporter
image: index.docker.io/cbotlagu/test-app-exporter:2
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 5000
imagePullSecrets:
- name: myregistrykey
</code></pre>
<p><em><strong>Service account and role binding</strong></em></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: test-app-exporter-sa
namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-app-exporter-binding
subjects:
- kind: ServiceAccount
name: test-app-exporter-sa
namespace: prom-test
roleRef:
kind: ClusterRole
name: test-app-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-app-exporter-role
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs: ["get", "list", "watch"]
</code></pre>
<p><em><strong>Service Monitor & Selector</strong></em></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: test-app-exporter-sm
namespace: prometheus
labels:
app: test-app-exporter
release: prometheus
spec:
selector:
matchLabels:
# Target app service
app: test-app-exporter
endpoints:
- port: http
interval: 15s
path: /metrics
namespaceSelector:
matchNames:
- default
- prom-test
#any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: service-monitor-selector
namespace: prometheus
spec:
serviceAccountName: test-app-exporter-sa
serviceMonitorSelector:
matchLabels:
app: test-app-exporter-sm
release: prometheus
resources:
requests:
memory: 400Mi
</code></pre>
| Padmaja | <p>I am able to get the target identified by Prometheus.
But even though the endpoint can be reached from within the cluster as well as via the node IP, Prometheus says the target is down.
In addition to that I am unable to see any other target.
<a href="https://i.stack.imgur.com/h1VKm.png" rel="nofollow noreferrer">Prom-UI</a></p>
<p><strong>Any help is greatly appreciated</strong></p>
<p>Following is my changed code
<strong>Deployment & Service</strong></p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: prom-test
---
apiVersion: v1
kind: Service
metadata:
name: test-app-exporter
namespace: prom-test
labels:
app: test-app-exporter
spec:
type: NodePort
selector:
app: test-app-exporter
ports:
- name: http
protocol: TCP
port: 5000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app-exporter
namespace: prom-test
spec:
replicas: 1
selector:
matchLabels:
app: test-app-exporter
template:
metadata:
labels:
app: test-app-exporter
spec:
serviceAccountName: rel-1-kube-prometheus-stac-operator
containers:
- name: test-app-exporter
image: index.docker.io/cbotlagu/test-app-exporter:2
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 5000
imagePullSecrets:
- name: myregistrykey
</code></pre>
<p><strong>Cluster Roles</strong></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
namespace: prom-test
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
- endpoints
- pods
- services
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-prom-test
namespace: prom-test
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-reader
subjects:
- kind: ServiceAccount
name: rel-1-kube-prometheus-stac-operator
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: monitoring-role
namespace: monitoring
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
- endpoints
- pods
- services
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-prom-test
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: monitoring-role
subjects:
- kind: ServiceAccount
name: rel-1-kube-prometheus-stac-operator
namespace: monitoring
</code></pre>
<p><strong>Service Monitor</strong></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: test-app-exporter-sm
namespace: monitoring
labels:
app: test-app-exporter
release: prometheus
spec:
selector:
matchLabels:
# Target app service
app: test-app-exporter
endpoints:
- port: http
interval: 15s
path: /metrics
namespaceSelector:
matchNames:
- prom-test
- monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: service-monitor-selector
namespace: monitoring
spec:
serviceAccountName: rel-1-kube-prometheus-stac-operator
serviceMonitorSelector:
matchLabels:
app: test-app-exporter
release: prometheus
resources:
requests:
memory: 400Mi
</code></pre>
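<p>When the target still shows as down even though the pod is reachable, it can help to hit the scrape path exactly as Prometheus would, for example (service name, port and path taken from the manifests above):</p>
<pre><code>kubectl -n prom-test port-forward svc/test-app-exporter 5000:5000
curl http://localhost:5000/metrics
</code></pre>
<p>If this returns metrics but Prometheus still reports the target as down, the problem is usually on the Prometheus side (network policies, a wrong port name in the ServiceMonitor, or the scrape being blocked), not in the exporter itself.</p>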
| Padmaja |
<p>My pod is running in k8s and restarts every 20-30 hours, caused by an <code>OOMKilled</code> error. Here are the configured limits:</p>
<pre><code> limits:
cpu: 600m
memory: 1536Mi
requests:
cpu: 150m
memory: 1536Mi
</code></pre>
<p>Inside the container a JVM (Spring Boot) is running with the following options:</p>
<pre><code>-Xms256m -Xmx1G -Xdebug -XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=26
</code></pre>
<p><strong>Analysis of the JVM Metrics</strong></p>
<p>From my point of view, I can see that there is a load peak, but not high enough to reach the memory limits; however, the CPU usage increases rapidly:
<a href="https://i.stack.imgur.com/SRbds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SRbds.png" alt="enter image description here" /></a></p>
<p><strong>Analysis of the Pod Metrics</strong></p>
<p>Let me show you the metrics of the pod provided by k8s:
<a href="https://i.stack.imgur.com/D8ZqM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D8ZqM.png" alt="enter image description here" /></a></p>
<p>Memory increases much more than the JVM shows. CPU throttling is active, but I cannot find the root cause. It seems that the container reaches the limits and not the JVM, but why? Can CPU throttling cause memory issues? Throttling is expected behavior to slow down peaks; I do not expect any memory issues in that case.</p>
| Norbert Koch | <p>It is possible for CPU throttling to indirectly cause memory issues by making garbage collection in the JVM less efficient or by escalating already inefficient memory usage. Throttling slows response times, so more requests queue up and more objects are kept in memory while they wait to be processed.</p>
<p>The JVM flags that you have set are a good starting point. To further investigate any memory leaks you might want to use the following flags to dump the heap on an OOM and analyze the dump with a tool like Java VisualVM to find the root cause.</p>
<pre><code>-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heapdump.bin
</code></pre>
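<p>In a Kubernetes deployment these flags could be wired in via the <code>JAVA_TOOL_OPTIONS</code> environment variable, together with a volume so the dump survives until you copy it out. A minimal sketch (names and paths are illustrative):</p>
<pre><code>spec:
  containers:
    - name: app
      env:
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps/heapdump.hprof"
      volumeMounts:
        - name: heap-dumps
          mountPath: /dumps
  volumes:
    - name: heap-dumps
      emptyDir: {}
</code></pre>
<p>Note that this only helps when the JVM itself throws an OutOfMemoryError; if the container is OOM-killed by the kernel, the dump will not be written.</p>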
<blockquote>
<p>Throttling is an expected behavior for peaks to slow down.</p>
</blockquote>
<p>Yes, but I would consider CPU throttling more of a handbrake here and not the only <em>solution</em>. Instead I would implement an appropriate mechanism (like rate limiting, request queuing, circuit breakers, or backpressure throttling) either in the application or at the load balancer/reverse proxy level to prevent queues from forming.</p>
| Kenan Güler |
<p>I deployed Fluent Bit to Kubernetes and deployed one pod with the annotation <code>fluentbit.io/parser: cri</code>. But it still parses the log with the parser <code>ivyxjc</code> which is configured in <code>INPUT</code>.</p>
<p>fluent-bit config</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
labels:
k8s-app: fluent-bit
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-elasticsearch.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser ivyxjc
DB /var/log/flb_kube.db
Mem_Buf_Limit 200MB
Skip_Long_Lines On
Refresh_Interval 10
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
k8s-logging.parser On
K8S-Logging.Exclude On
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match *
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
HTTP_User ${FLUENT_ELASTICSEARCH_USER}
HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWD}
Logstash_Format On
Replace_Dots On
Retry_Limit False
tls On
tls.verify Off
parsers.conf: |
[PARSER]
# http://rubular.com/r/tjUt3Awgg4
Name cri
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
[PARSER]
Name ivyxjc
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag2>[^ ]*) (?<message2>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
</code></pre>
<p>Pod description:</p>
<pre><code>...
Name: logger-5c6658b5dd-66zkw
Namespace: logger
Priority: 0
Start Time: Fri, 15 Oct 2021 15:28:47 +0800
Labels: app=logger
pod-template-hash=5c6658b5dd
Annotations: fluentbit.io/parser: cri
fluentbit.io/parser_stderr: cri
fluentbit.io/parser_stdout: cri
Status: Running
...
</code></pre>
| fei_hsueh | <blockquote>
<p>k8s-logging.parser</p>
</blockquote>
<p>This option tells the Fluent Bit agent to use the parser suggested by the annotation for the value of the "log" key. The INPUT
parser is still applied as usual; afterwards the "kubernetes" filter picks up the record and the parser dictated by "fluentbit.io/parser: parser_name_here" is applied to the "log" value.</p>
<p>Reference to <a href="https://docs.fluentbit.io/manual/pipeline/filters/kubernetes#processing-the-log-value" rel="nofollow noreferrer">docs</a>.</p>
<p>Unfortunately that does not work in some cases either; I have not been able to chase it down yet, but I will update this answer if I find something.</p>
| david tsulaia |
<p>I use two virtual machines based on VMware Workstation Pro 15.5 to learn and practice k8s.</p>
<p>OS :Ubuntu 18.04.3<br>
docker :18.09.7<br>
kubectl kubeadm kubelet v1.17.3
flannel:v0.11.0-amd64</p>
<p>After I execute <code>kubeadm init --kubernetes-version=v1.17.3 --apiserver-advertise-address 192.168.0.100 --pod-network-cidr=10.244.0.0/16</code> on the master node, everything is ok and <code>kubectl get nodes</code> shows the master node as <code>READY</code>.</p>
<p>But after I run <code>kubeadm join</code> on the slave node, the master node's kube-system pods are reduced; only <code>coredns kube-flannel kube-proxy</code> remain.<br>
<code>systemctl status kubelet</code>
shows
<code>failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "yang"; the object has been modified, please apply your changes to the latest version and try again</code>
and
<code>trying to delete pod kube-........</code></p>
<p>Also <code>kubectl get nodes</code> shows only the master node.<br>
Here are the scripts.<br>
The first one sets up docker, kubeadm, kubelet and kubectl:</p>
<pre><code>#!/bin/bash
apt-get -y autoremove docker docker-engine docker.io docker-ce
apt-get update -y
apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
apt-get install -y vim net-tools
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update -y
# use a faster Docker registry mirror
mkdir -p /etc/docker
echo '{"registry-mirrors":["https://vds6zmad.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
# install docker
apt-get install docker.io -y
# start docker and enable it on boot
systemctl start docker
systemctl enable docker
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# configure the Aliyun Kubernetes apt source
tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# hold the packages to stop automatic upgrades
apt-mark hold kubelet kubeadm kubectl
# enable kubelet on boot
systemctl enable kubelet && systemctl start kubelet
</code></pre>
<p>This one is used to pull the images and tag them:</p>
<pre><code>#! /usr/bin/python3
import os
images=[
"kube-apiserver:v1.17.3",
"kube-controller-manager:v1.17.3",
"kube-scheduler:v1.17.3",
"kube-proxy:v1.17.3",
"pause:3.1",
"etcd:3.4.3-0",
"coredns:1.6.5",
]
for i in images:
pullCMD = "docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/{}".format(i)
print("run cmd '{}', please wait ...".format(pullCMD))
os.system(pullCMD)
tagCMD = "docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/{} k8s.gcr.io/{}".format(i, i)
print("run cmd '{}', please wait ...".format(tagCMD ))
os.system(tagCMD)
rmiCMD = "docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/{}".format(i)
print("run cmd '{}', please wait ...".format(rmiCMD ))
os.system(rmiCMD)
</code></pre>
<p>When I only start the master, the command <code>kubectl get pods --all-namespaces</code> shows this:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-gxq7p 1/1 Running 6 43h
kube-system coredns-6955765f44-xmcbq 1/1 Running 6 43h
kube-system etcd-yang 1/1 Running 61 14h
kube-system kube-apiserver-yang 1/1 Running 48 14h
kube-system kube-controller-manager-yang 1/1 Running 6 14h
kube-system kube-flannel-ds-amd64-v58g6 1/1 Running 5 43h
kube-system kube-proxy-2vcwg 1/1 Running 5 43h
kube-system kube-scheduler-yang 1/1 Running 6 14h
</code></pre>
<p>The command <code>systemctl status kubelet</code> shows this:</p>
<pre><code>● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2020-02-28 13:19:08 CST; 9min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 686 (kubelet)
Tasks: 0 (limit: 4634)
CGroup: /system.slice/kubelet.service
└─686 /usr/bin/kubelet --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/v
2月 28 13:19:27 yang kubelet[686]: W0228 13:19:27.709246 686 pod_container_deletor.go:75] Container "900ab1df52d8bbc9b3b0fc035df30ae242d2c8943486dc21183a6ccc5bd22c9b" not found in pod's containers
2月 28 13:19:27 yang kubelet[686]: W0228 13:19:27.710478 686 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "900ab1df52d8bbc9b3b0fc035df30ae242d2c8943486dc21183a6ccc5bd22c9b"
2月 28 13:19:28 yang kubelet[686]: E0228 13:19:28.512094 686 cni.go:364] Error adding kube-system_coredns-6955765f44-xmcbq/1f179bccaa042b92bd0f9ed97c0bf6f129bc986c574ca32c5435827eecee4f29 to network flannel/cbr0: open /run/flannel/subnet.env: no suc
2月 28 13:19:29 yang kubelet[686]: E0228 13:19:29.002294 686 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "1f179bccaa042b92bd0f9ed97c0bf6f129bc986c574ca32c54358
2月 28 13:19:29 yang kubelet[686]: E0228 13:19:29.002345 686 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-6955765f44-xmcbq_kube-system(7e9ca770-f27c-4143-a025-cd6316a1a7e4)" failed: rpc error: code = Unknown desc = failed to set up s
2月 28 13:19:29 yang kubelet[686]: E0228 13:19:29.002357 686 kuberuntime_manager.go:729] createPodSandbox for pod "coredns-6955765f44-xmcbq_kube-system(7e9ca770-f27c-4143-a025-cd6316a1a7e4)" failed: rpc error: code = Unknown desc = failed to set up
2月 28 13:19:29 yang kubelet[686]: E0228 13:19:29.002404 686 pod_workers.go:191] Error syncing pod 7e9ca770-f27c-4143-a025-cd6316a1a7e4 ("coredns-6955765f44-xmcbq_kube-system(7e9ca770-f27c-4143-a025-cd6316a1a7e4)"), skipping: failed to "CreatePodSan
2月 28 13:19:29 yang kubelet[686]: W0228 13:19:29.802166 686 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-6955765f44-xmcbq_kube-system": CNI failed to retrieve network
2月 28 13:19:29 yang kubelet[686]: W0228 13:19:29.810632 686 pod_container_deletor.go:75] Container "1f179bccaa042b92bd0f9ed97c0bf6f129bc986c574ca32c5435827eecee4f29" not found in pod's containers
2月 28 13:19:29 yang kubelet[686]: W0228 13:19:29.823318 686 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "1f179bccaa042b92bd0f9ed97c0bf6f129bc986c574ca32c5435827eecee4f29"
</code></pre>
<p>But when I start the slave node, the pods are reduced to:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-gxq7p 1/1 Running 3 43h
kube-system coredns-6955765f44-xmcbq 1/1 Running 3 43h
kube-system kube-flannel-ds-amd64-v58g6 1/1 Running 3 43h
kube-system kube-proxy-2vcwg 1/1 Running 3 43h
</code></pre>
| Yangdaxing | <p>Today I successfully set up a k8s cluster using the same scripts. Before doing it, make sure the two virtual machines can SSH to each other, and that the master node has a static IP set. I don't know the precise reason.</p>
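<p>On Ubuntu 18.04 a static IP can be configured with netplan, for example (the interface name and addresses below are illustrative; adjust them to your VM network):</p>
<pre><code># /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: no
      addresses: [192.168.0.100/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [8.8.8.8]
</code></pre>
<p>Apply it with <code>sudo netplan apply</code> and reboot to confirm the address sticks.</p>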
| Yangdaxing |
<p>I try to use priorityClass.</p>
<p>I create two pods, the first has system-node-critical priority and the second cluster-node-critical priority.</p>
<p>Both pods need to run in a node labeled with nodeName: k8s-minion1 but such a node has only 2 cpus while both pods request 1.5 cpu.
I then expect that the second pod runs and the first is in Pending status. Instead, the first pod always runs no matter which priorityClass I assign to the second pod.</p>
<p>I even tried to label the node after I applied my manifest, but it does not change anything.</p>
<p>Here is my manifest :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: firstpod
spec:
containers:
- name: container
image: nginx
resources:
requests:
cpu: 1.5
nodeSelector:
nodeName: k8s-minion1
priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
name: secondpod
spec:
containers:
- name: container
image: nginx
resources:
requests:
cpu: 1.5
priorityClassName: system-node-critical
nodeSelector:
nodeName: k8s-minion1
</code></pre>
<p>It is worth noting that I get an error <code>"unknown object : priorityclass"</code> when I do <code>kubectl get priorityclass</code>, and when I export my running pod as YAML with <code>kubectl get pod secondpod -o yaml</code>, I can't find any <code>priorityClassName:</code> field.</p>
<p>Here Are my version infos:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any ideas why this is not working?</p>
<p>Thanks in advance,</p>
<p>Abdelghani</p>
| Abdelghani | <p>PriorityClasses first appeared in <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md#scheduling-1" rel="nofollow noreferrer">k8s 1.8 as alpha feature</a>.
It <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md#graduated-to-beta" rel="nofollow noreferrer">graduated to beta in 1.11</a></p>
<p>You are using 1.10 and this means that this feature is in alpha.</p>
<p>Alpha features are not enabled by default so you would need to enable it.</p>
<p>Unfortunately k8s version 1.10 is no longer supported, so I'd suggest upgrading at least to 1.14, where the <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">priorityClass feature</a> became stable.</p>
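<p>For reference, once you are on a version where the feature is available, a custom priority class is just a small cluster-scoped object that you reference from the pod via <code>priorityClassName</code>, for example (illustrative values):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Used by pods that should preempt lower-priority pods."
</code></pre>
<p>The built-in <code>system-node-critical</code> and <code>system-cluster-critical</code> classes also exist out of the box on supported versions.</p>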
| Matt |
<p>I am running the latest Rancher Docker image on a Mac M1 laptop, but the container fails to start.
The command I am using is <code>sudo docker run -d -p 80:80 -p 443:443 --privileged rancher/rancher</code>.</p>
<p>Below are the versions for my environment:</p>
<p>$ docker --version
Docker version 20.10.13, build a224086</p>
<p>$ uname -a
Darwin Joeys-MBP 21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T6000 arm64</p>
<p>$ docker images|grep rancher
rancher/rancher latest f09cdb8a8fba 3 weeks ago 1.39GB</p>
<p>Below are the logs from the container.</p>
<pre><code>$ docker logs -f 8d21d7d19b21
2022/04/28 03:34:00 [INFO] Rancher version v2.6.4 (4b4e29678) is starting
2022/04/28 03:34:00 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/04/28 03:34:00 [INFO] Listening on /tmp/log.sock
2022/04/28 03:34:00 [INFO] Waiting for k3s to start
2022/04/28 03:34:01 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/04/28 03:34:03 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/04/28 03:34:05 [INFO] Running in single server mode, will not peer connections
2022/04/28 03:34:05 [INFO] Applying CRD features.management.cattle.io
2022/04/28 03:34:05 [INFO] Waiting for CRD features.management.cattle.io to become available
2022/04/28 03:34:05 [INFO] Done waiting for CRD features.management.cattle.io to become available
2022/04/28 03:34:08 [INFO] Applying CRD navlinks.ui.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD apiservices.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD settings.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD preferences.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD features.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD operations.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD apps.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD bundles.fleet.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.fleet.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD managedcharts.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkeclusters.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD custommachines.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD etcdsnapshots.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD clusters.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinedeployments.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machines.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/04/28 03:34:09 [INFO] Done waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/04/28 03:34:09 [INFO] Creating CRD authconfigs.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD groupmembers.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD groups.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD tokens.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD userattributes.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD users.management.cattle.io
2022/04/28 03:34:09 [INFO] Waiting for CRD tokens.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Done waiting for CRD tokens.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Waiting for CRD userattributes.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Done waiting for CRD userattributes.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Waiting for CRD users.management.cattle.io to become available
2022/04/28 03:34:11 [INFO] Done waiting for CRD users.management.cattle.io to become available
2022/04/28 03:34:11 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD apps.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD apprevisions.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD dynamicschemas.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelineexecutions.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD etcdbackups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelinesettings.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globalrolebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelines.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplateversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globalroles.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecodecredentials.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralerts.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralertgroups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD kontainerdrivers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodedrivers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustercatalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecoderepositories.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusterloggings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodepools.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodetemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralertrules.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustermonitorgraphs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusterscans.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodes.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD composeconfigs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapps.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapprevisions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD monitormetrics.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projects.management.cattle.io
2022/04/28 03:34:11 [INFO] Waiting for CRD sourcecodecredentials.project.cattle.io to become available
2022/04/28 03:34:11 [INFO] Creating CRD rkek8ssystemimages.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD notifiers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD rkek8sserviceoptions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalerts.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD rkeaddons.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalertgroups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD roletemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectcatalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectloggings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD samltokens.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalertrules.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustertemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectmonitorgraphs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustertemplaterevisions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD cisconfigs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD cisbenchmarkversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templateversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templatecontents.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globaldnses.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globaldnsproviders.management.cattle.io
2022/04/28 03:34:11 [INFO] Waiting for CRD nodetemplates.management.cattle.io to become available
2022/04/28 03:34:12 [INFO] Waiting for CRD projectalertgroups.management.cattle.io to become available
2022/04/28 03:34:12 [FATAL] k3s exited with: exit status 1
</code></pre>
| Joey Yi Zhao | <p>I would recommend trying to run it with a specific tag, i.e. <code>rancher/rancher:v2.6.6</code>.</p>
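<p>For example, the command from the question pinned to that release would look like:</p>
<pre><code>sudo docker run -d -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.6
</code></pre>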
<p>Some other things that may interfere: what size setup are you running on?
The minimum requirements are currently 2 CPUs and 4 GB of RAM.</p>
<p>Also, you can try their docker install scripts and check out other documentation here: <a href="https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/</a></p>
<p>Edit: noticed you're running on ARM. There is additional documentation for running rancher on ARM here: <a href="https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/</a></p>
| Slickwarren |
<p>I'm new with Kubernetes, i'm testing with Minikube locally. I need some advice with Kubernetes's horizontal scaling.<br />
In the following scenario :</p>
<ul>
<li>Cluster composed of only 1 node</li>
<li>There is only 1 pod on this node</li>
<li>Only one application running on this pod</li>
</ul>
<p>Is there a benefit to deploying a new pod <strong>on this node only</strong> to scale my application?<br />
If I understand correctly, pods share the node's resources, so if I deploy 2 pods instead of 1 on the same node, there will be no performance increase.</p>
<p>There will be no availability increase either, because if the node fails, the two pods will also go down.</p>
<p>Am i right about my two previous statements ?</p>
<p>Thanks</p>
| ShooShoo | <p>Yes, you are right. Pods on the same node share the same CPU and memory resources and are therefore expected to go down in the event of a node failure.</p>
<p>But you also need to consider it at the pod level. There can be situations where the pod itself fails while the node is working fine. In such cases, multiple replicas can keep serving traffic and make the application highly available.
From a performance perspective, more pods can also serve requests concurrently, thereby reducing latency for your application.</p>
| rkdove96 |
<p>I tried setting up <strong>gitea</strong> in my local Kubernetes cluster. At first it was working and I could access the Gitea home page, but when I rebooted my Raspberry Pi I got the error below on my <code>Service</code>:</p>
<p><a href="https://i.stack.imgur.com/UKgqr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UKgqr.png" alt="enter image description here" /></a></p>
<p>My <code>pod</code> is ok.
<a href="https://i.stack.imgur.com/CO9Ys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CO9Ys.png" alt="enter image description here" /></a></p>
<p>I'm wondering why I only get this error when I reboot my device.</p>
<p>Here is my configuration:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: gitea-service
spec:
type: NodePort
selector:
app: gitea
ports:
- name: gitea-http
port: 3000
targetPort: 3000
nodePort: 30000
- name: gitea-ssh
port: 22
targetPort: 22
nodePort: 30002
</code></pre>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: gitea-deployment
labels:
app: gitea
spec:
replicas: 1
serviceName: gitea-service-headless
selector:
matchLabels:
app: gitea
template:
metadata:
labels:
app: gitea
spec:
containers:
- name: gitea
image: gitea/gitea:1.12.2
ports:
- containerPort: 3000
name: gitea
- containerPort: 22
name: git-ssh
volumeMounts:
- name: pv-data
mountPath: /data
volumes:
- name: pv-data
persistentVolumeClaim:
claimName: gitea-pvc
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: gitea-service-headless
labels:
app: gitea-service-headless
spec:
clusterIP: None
ports:
- port: 3000
name: gitea-http
targetPort: 3000
- port: 22
name: gitea-https
targetPort: 22
selector:
app: gitea
</code></pre>
| Jayson Gonzaga | <blockquote>
<p>I'm wondering why I only get this error when I reboot my device.</p>
</blockquote>
<p>Well, let's look at the error:</p>
<blockquote>
<p>Error updating Endpoint Slices for Service dev-ops/gitea-service: node "rpi4-a" not found</p>
</blockquote>
<p>It looks like the error was triggered because: <code>node "rpi4-a" not found</code>. Why is it not found? While rebooting, the node is down and the pod is not working for a moment, and this is when the service throws the error. When the node boots up, the pod starts working again, but the events remain visible for one hour (by default) before they get automatically deleted.</p>
<p>So don't worry about it. You rebooted the node, so you should expect some errors to appear. Kubernetes tries as hard as it can to keep everything working, so when you reboot without draining the node first, transient errors like this may show up.</p>
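<p>If you want to inspect those events before they expire, you can list them in the namespace from the error message:</p>
<pre><code>kubectl -n dev-ops get events --sort-by=.lastTimestamp
</code></pre>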
| Matt |
<p>I have the following <code>MutatingWebhookConfiguration</code></p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: example-webhook
webhooks:
- name: example-webhook.default.svc.cluster.local
admissionReviewVersions:
- "v1beta1"
sideEffects: "None"
timeoutSeconds: 30
objectSelector:
matchLabels:
example-webhook-enabled: "true"
clientConfig:
service:
name: example-webhook
namespace: default
path: "/mutate"
caBundle: "LS0tLS1CR..."
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]
</code></pre>
<p>I want to inject the <code>webhook</code> pod in an <code>istio</code> enabled namespace with <code>istio</code> having strict TLS mode on.</p>
<p>Therefore, (I thought) TLS should not be needed in my <code>example-webhook</code> service so it is crafted as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-webhook
namespace: default
spec:
selector:
app: example-webhook
ports:
- port: 80
targetPort: webhook
name: webhook
</code></pre>
<p>However when creating a <code>Pod</code> (that does indeed trigger the webhook) I get the following error:</p>
<pre><code>▶ k create -f demo-pod.yaml
Error from server (InternalError): error when creating "demo-pod.yaml": Internal error occurred: failed calling webhook "example-webhook.default.svc.cluster.local": Post "https://example-webhook.default.svc:443/mutate?timeout=30s": no service port 443 found for service "example-webhook"
</code></pre>
<p>Can't I configure the webhook not to be called on <code>443</code> but rather on <code>80</code>? Either way TLS termination is done by the <code>istio</code> sidecar.</p>
<p>Is there a way around this using <code>VirtualService</code> / <code>DestinationRule</code>?</p>
<p><strong>edit</strong>: on top of that, why is it trying to reach the service in the <code>example-webhook.default.svc</code> endpoint? (while it should be doing so in <code>example-webhook.default.svc.cluster.local</code>) ?</p>
<h3>Update 1</h3>
<p>I have tried to use <code>https</code> as follows:</p>
<p>I have created a certificate and private key, using istio's CA.</p>
<p>I can verify that my DNS names in the cert are valid as follows (from another pod)</p>
<pre><code>echo | openssl s_client -showcerts -servername example-webhook.default.svc -connect example-webhook.default.svc:443 2>/dev/null | openssl x509 -inform pem -noout -text
</code></pre>
<pre><code>...
Subject: C = GR, ST = Attica, L = Athens, O = Engineering, OU = FOO, CN = *.cluster.local, emailAddress = [email protected]
...
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:*.default.svc.cluster.local, DNS:example-webhook, DNS:example-webhook.default.svc
...
</code></pre>
<p>but now pod creation fails as follows:</p>
<pre><code>▶ k create -f demo-pod.yaml
Error from server (InternalError): error when creating "demo-pod.yaml": Internal error occurred: failed calling webhook "example-webhook.default.svc.cluster.local": Post "https://example-webhook.default.svc:443/mutate?timeout=30s": x509: certificate is not valid for any names, but wanted to match example-webhook.default.svc
</code></pre>
<h3>Update 2</h3>
<p>The fact that the certs the webhook pod are running with were appropriately created using the <code>istio</code> CA cert, is also validated.</p>
<pre><code>curl --cacert istio_cert https://example-webhook.default.svc
Test
</code></pre>
<p>where <code>istio_cert</code> is the file containing istio's CA certificate</p>
<p>What is going on?</p>
| pkaramol | <p>Not sure if you can use a webhook on port 80: the API server expects to call admission webhooks over HTTPS.</p>
<p>Perhaps some of this will be useful to you. I used the following script to generate certificates; you can change it to suit your needs:</p>
<pre><code>#!/bin/bash
set -e
service=webhook-svc
namespace=default
secret=webhook-certs
csrName=${service}.${namespace}
cat <<EOF >> csr.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${service}
DNS.2 = ${service}.${namespace}
DNS.3 = ${service}.${namespace}.svc
EOF
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=${service}.${namespace}.svc" -out server.csr -config csr.conf
kubectl delete csr ${csrName} 2>/dev/null || true
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: ${csrName}
spec:
groups:
- system:authenticated
request: $(< server.csr base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
sleep 5
kubectl certificate approve ${csrName}
for i in {1..10}
do
serverCert=$(kubectl get csr ${csrName} -o jsonpath='{.status.certificate}')
if [[ ${serverCert} != '' ]]; then
break
fi
sleep 1
done
if [[ ${serverCert} == '' ]]; then
echo "ERROR: After approving csr ${csrName}, the signed certificate did not appear on the resource. Giving up after 10 attempts." >&2
exit 1
fi
echo "${serverCert}" | openssl base64 -d -A -out server-cert.pem
# create the secret with CA cert and server cert/key
kubectl create secret generic ${secret} \
--from-file=key.pem=server-key.pem \
--from-file=cert.pem=server-cert.pem \
--dry-run -o yaml |
kubectl -n ${namespace} apply -f -
</code></pre>
<p>The script creates a secret, which I then mounted into the webhook, deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webhook-deployment
namespace: default
labels:
app: webhook
annotations:
sidecar.istio.io/inject: "false"
spec:
replicas: 1
selector:
matchLabels:
app: webhook
template:
metadata:
labels:
app: webhook
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: webhook
image: webhook:v1
imagePullPolicy: IfNotPresent
volumeMounts:
- name: webhook-certs
mountPath: /certs
readOnly: true
volumes:
- name: webhook-certs
secret:
secretName: webhook-certs
</code></pre>
<p>service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: webhook-svc
namespace: default
labels:
app: webhook
spec:
ports:
- port: 443
targetPort: 8443
selector:
app: webhook
</code></pre>
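<p>Once the secret exists, it can be worth double-checking which SANs actually ended up in the signed certificate, e.g.:</p>
<pre><code>kubectl -n default get secret webhook-certs -o jsonpath='{.data.cert\.pem}' \
  | base64 -d | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
</code></pre>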
| KubePony |
<p>the following script works :</p>
<pre><code>#!/bin/bash
kubectl exec -ti mypod -- bash -c "cat somefile"
</code></pre>
<p>but</p>
<pre><code>#!/bin/bash
command="cat somefile"
kubectl exec -ti mypod -- bash -c $command
</code></pre>
<p>does not.</p>
<pre><code>chmod +x myscript.sh
./myscript.sh
</code></pre>
<p>the prompt never returns!!!</p>
<p>What is wrong with the second script?
Thanks in advance,
Abdelghani</p>
| Abdelghani | <p>You are missing the quotes. <code>command="cat somefile"</code> stores the string <em>cat somefile</em> in the variable, but when you expand it unquoted, word splitting makes <code>bash -c</code> receive only <code>cat</code> as the command string (with <code>somefile</code> becoming <code>$0</code>), so <code>cat</code> waits for input on stdin and the prompt never returns.</p>
<p>The script should look like this:</p>
<pre><code>#!/bin/bash
command="cat somefile"
kubectl exec -ti mypod -- bash -c "$command"
</code></pre>
| Neo Anderson |
<p>I have deployed a Drupal instance, but I see that the instance's Endpoints are not visible, although the containers deployed successfully.</p>
<p>The container logs don't point in any direction.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: drupal-deployment
spec:
replicas: 1
selector:
matchLabels:
app: drupal
type: frontend
template:
metadata:
labels:
app: drupal
spec:
containers:
- name: drupal
image: drupal
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
**********************
apiVersion: v1
kind: Service
metadata:
name: drupal-service
spec:
type: NodePort
ports:
- targetPort: 80
port: 80
nodePort: 30010
selector:
app: drupal
type: frontend
************************`
</code></pre>
<pre><code>root@ip-172-31-32-54:~# microk8s.kubectl get pods
NAME READY STATUS RESTARTS AGE
drupal-deployment-6fdd7975f-l4j2z 1/1 Running 0 9h
drupal-deployment-6fdd7975f-p7sfz 1/1 Running 0 9h
root@ip-172-31-32-54:~# microk8s.kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
drupal-service NodePort 10.152.183.6 <none> 80:30010/TCP 9h
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 34h
***********************
root@ip-172-31-32-54:~# microk8s.kubectl describe service drupal-service
Name: drupal-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=drupal,type=frontend
Type: NodePort
IP: 10.152.183.6
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30010/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Any direction is really helpful.</p>
<p>NOTE: This works perfectly when running a container using the command</p>
<pre><code>docker run --name some-drupal -p 8080:80 -d drupal
</code></pre>
<p>Thank you,
Anish</p>
| anish anil | <p>Your service selector has two values:</p>
<pre><code>Selector: app=drupal,type=frontend
</code></pre>
<p>but your pod has only one of these:</p>
<pre><code>spec:
template:
metadata:
labels:
app: drupal
</code></pre>
<p>Just make sure that all labels required by the service actually exist on the pod.</p>
<p>Like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: drupal-deployment
spec:
replicas: 1
selector:
matchLabels:
app: drupal
type: frontend
template:
metadata:
labels:
app: drupal
type: frontend # <--------- look here
spec:
containers:
- name: drupal
image: drupal
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
</code></pre>
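<p>After applying the corrected labels, the service should pick up the pod, which you can verify with:</p>
<pre><code>microk8s.kubectl get endpoints drupal-service
</code></pre>
<p>The ENDPOINTS column should now show the pod's IP instead of <code>&lt;none&gt;</code>.</p>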
| Matt |
<p>I'm writing up a deployment for Graylog 5.1; however, the problem I'm running into is that Graylog needs to write to its graylog.conf and log4j2.xml, and I'm pulling those from a ConfigMap that I've created.</p>
<p>The problem is that they are always created read-only and Graylog cannot write to them. I read that this can be solved by creating an initContainer that copies the ConfigMap into an emptyDir and then mounting that dir at the container's path /usr/share/graylog/data/config, but for the life of me I cannot figure out how to do this.</p>
<p>Here's an example of the graylog-deployment.yaml:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: graylog
spec:
replicas: 1
selector:
matchLabels:
app: graylog
template:
metadata:
labels:
app: graylog
spec:
volumes:
- name: graylog-config
configMap:
name: graylog-config
initContainers:
- name: copy-config-files
image: alpine
command: ["/bin/sh", "-c"]
args:
- |
mkdir -p /tmp/config
cp /usr/share/graylog/data/config/* /tmp/config/
chmod -R 0775 /tmp/config/
volumeMounts:
- name: graylog-config
mountPath: /tmp/config
containers:
- name: graylog
image: graylog/graylog:5.1
ports:
- containerPort: 9000
- containerPort: 1514
- containerPort: 12201
resources:
requests:
memory: "512Mi" # Set the minimum memory required
cpu: "250m" # Set the minimum CPU required
limits:
memory: "1Gi" # Set the maximum memory allowed
cpu: "1" # Set the maximum CPU allowed
env:
- name: GRAYLOG_MONGODB_URI
value: "mongodb://mongodb:27017/graylog"
- name: GRAYLOG_ELASTICSEARCH_HOSTS
value: "http://elasticsearch:9200"
- name: GRAYLOG_ROOT_PASSWORD_SHA2 #admin pass
value: ****
volumeMounts:
- name: graylog-claim
mountPath: /usr/share/graylog/data
- name: graylog-config
mountPath: /usr/share/graylog/data/config
volumeClaimTemplates:
- metadata:
name: graylog-claim
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: microk8s-hostpath
resources:
requests:
storage: 2Gi
</code></pre>
<p>However, this does not work: the initContainer never finishes and I get output saying that the filesystem is read-only. I have tried many different approaches and I cannot seem to get it right.</p>
<p>Any help is greatly appreciated, I'm at a loss here.</p>
<p>I tried using the emptyDir: {} approach but got the same result.
I have tried chmod as well, but that also doesn't work, since the files that get created (log4j2.xml and graylog.conf) are read-only:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: graylog-config
data:
log4j2.xml: |-
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%d %-5p: %c - %m%n"/>
</Console>
<!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
<Memory name="graylog-internal-logs" bufferSize="500"/>
</Appenders>
<Loggers>
<!-- Application Loggers -->
<Logger name="org.graylog2" level="info"/>
<Logger name="com.github.joschi.jadconfig" level="warn"/>
<!-- Prevent DEBUG message about Lucene Expressions not found. -->
<Logger name="org.elasticsearch.script" level="warn"/>
<!-- Disable messages from the version check -->
<Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
<!-- Silence chatty natty -->
<Logger name="com.joestelmach.natty.Parser" level="warn"/>
<!-- Silence Kafka log chatter -->
<Logger name="org.graylog.shaded.kafka09.log.Log" level="warn"/>
<Logger name="org.graylog.shaded.kafka09.log.OffsetIndex" level="warn"/>
<Logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="warn"/>
<!-- Silence useless session validation messages -->
<Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
<!-- Silence Azure SDK messages -->
<Logger name="com.azure" level="warn"/>
<Logger name="reactor.core.publisher.Operators" level="off"/>
<Logger name="com.azure.messaging.eventhubs.PartitionPumpManager" level="off"/>
<Logger name="com.azure.core.amqp.implementation.ReactorReceiver" level="off"/>
<Logger name="com.azure.core.amqp.implementation.ReactorDispatcher" level="off"/>
<Root level="warn">
<AppenderRef ref="STDOUT"/>
<AppenderRef ref="graylog-internal-logs"/>
</Root>
</Loggers>
</Configuration>
graylog.conf: |-
############################
# GRAYLOG CONFIGURATION FILE
############################
is_master = true
node_id_file = /usr/share/graylog/data/config/node-id
password_secret = hs1hvm7Wi7GaChG5iDsHkvYlOnkayN4YBFeFhMosBgLNwCztbKRIZfcWA4zgx6RL9JF3I5v0mRJMNOYmF9vvuo30G2vuwAYW
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
bin_dir = /usr/share/graylog/bin
data_dir = /usr/share/graylog/data
plugin_dir = /usr/share/graylog/plugin
http_bind_address = 0.0.0.0:9000
elasticsearch_hosts = http://elasticsearch:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = data/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://mongo/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
proxied_requests_thread_pool_size = 32
</code></pre>
| Shdow89 | <p>I think you forgot to mount the <code>emptyDir</code> as your final directory.</p>
<p>I use the following steps as shown in the below excerpt:</p>
<ol>
<li>Create a <code>emptyDir</code> volume (e.g., <code>final-dir</code>)</li>
<li>Add it as <code>volumeMount</code> to both your <code>initContainer</code> (e.g., mounted to <code>/dest</code>) and your <code>container</code>(e.g., mounted to <code>/some/path/your_file</code>)</li>
<li>Add a <code>volume</code> for your <code>configMap</code> and mount it in your <code>initContainer</code> (e.g., mounted to <code>/src</code>)</li>
<li>Execute the <code>cp</code> command in your <code>initContainer</code> (e.g., <code>/src/your_file /dest/your_file</code>)</li>
</ol>
<p>Hope this helps.</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
..
spec:
...
template:
...
spec:
initContainers:
- name: copy-configmap
image: alpine
command: ['sh', '-c', 'cp /src/your_file /dest/your_file']
volumeMounts:
- name: temp-dir
mountPath: /src
- name: final-dir
mountPath: /dest
containers:
- name: your-container-name
...
volumeMounts:
- name: final-dir
mountPath: /some/path/your_file
subPath: your_file
volumes:
- name: final-dir
emptyDir: {}
- name: temp-dir
configMap:
name: your-configMap-name
...
</code></pre>
| marcel h |
<p>If one is running Docker Enterprise with Kubernetes in an on-premises private cloud, is it possible to add clusters in a public cloud like Azure?</p>
| 208_man | <p>On GCP, Anthos is a candidate.<br />
You may have a look at their <a href="https://cloud.google.com/anthos/docs/concepts/overview" rel="nofollow noreferrer">architecture</a> and see if it fits your needs.<br />
Anthos is advertised in most of the GCP architecture courses and offers integration between GKE and both on-prem clusters (the on-prem cluster must meet some prerequisites, or you can use the version provided by Google) and AWS Kubernetes clusters.</p>
<p>Istio is a service mesh and, if I understood your requirements correctly, its <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#multiple-clusters" rel="nofollow noreferrer">multiple clusters</a> and <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#multiple-networks" rel="nofollow noreferrer">multiple networks</a> deployment models could be used.</p>
| Neo Anderson |
<p>Before I start all the services I need a pod that would do some initialization. But I dont want this pod to be running after the init is done and also all the other pods/services should start after this init is done. I am aware of init containers in a pod, but I dont think that would solve this issue, as I want the pod to exit after initialization.</p>
| Bharath | <p>It is usually better to let Kubernetes handle pods automatically rather than managing them manually, when you can. Consider a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Job</a> for run-once tasks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
restartPolicy: Never
containers:
      - name: main       # placeholder name
        image: busybox   # placeholder image; replace with your task's image
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f job.yml
</code></pre>
<p>Kubernetes will create a pod to run that job. The pod will complete as soon as the job exits. Note that the completed job and its pod will be released from consuming resources but not removed completely, to give you a chance to examine its status and log. Deleting the job will remove the pod completely.</p>
<p>Jobs can do more advanced things such as retrying on failure with exponential backoff, running tasks in parallel, and limiting how long they are allowed to run, as shown in the sketch below.</p>
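<p>For illustration, here is a hedged sketch of how those knobs map to Job fields (the name, image, and command are placeholders, not taken from the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: init-once            # placeholder name
spec:
  backoffLimit: 4            # retry a failed pod up to 4 times (with exponential backoff)
  activeDeadlineSeconds: 600 # give up if the job runs longer than 10 minutes
  completions: 1             # number of successful pods required
  parallelism: 1             # number of pods allowed to run at once
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: init
        image: busybox       # placeholder image
        command: ["sh", "-c", "echo running one-off initialization"]
</code></pre>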
| Son Nguyen |
<p>Following the previous question on Stack Overflow at this <a href="https://stackoverflow.com/questions/64590283/external-oauth-authentication-with-nginx-in-kubernetes">link</a>: after successful authentication (at Github.com) I get <em>404 page not found</em> in my browser.</p>
<p>The Ingress configuration below (used by nginx-ingress controller):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
spec:
ingressClassName: nginx
rules:
- host: site.example.com
http:
paths:
- path: /v1
backend:
serviceName: web-service
servicePort: 8080
- path: /
backend:
serviceName: oauth2-proxy
servicePort: 4180
tls:
- hosts:
- site.example.com
secretName: example-tls
</code></pre>
<hr />
<pre><code>$ kubectl get ing -n nginx-ingress
NAME CLASS HOSTS ADDRESS PORTS
ingress nginx site.example.com 80, 443
</code></pre>
<ul>
<li>browser sends GET to <a href="https://site.example.com/" rel="nofollow noreferrer">https://site.example.com/</a>,</li>
<li>browser is redirected to Github login page,</li>
<li>After successful login, browser is redirected to <a href="https://site.example.com/" rel="nofollow noreferrer">https://site.example.com/</a>,</li>
<li>browser sends GET to <a href="https://site.example.com/" rel="nofollow noreferrer">https://site.example.com/</a> with cookie _oauth2_proxy filled</li>
<li>the response is <em>404 page not found</em></li>
</ul>
<p>The node.js web application I'm trying to access via oauth2 has been built with two paths (/ and /v1). The web application is behind the Service <em>web-service</em>.</p>
<p>OAuth2 Github application configuration:</p>
<pre><code>Homepage URL
https://site.example.com/
Authorization callback URL
https://site.example.com/oauth2/callback
</code></pre>
<p>OAuth2 deployment and service:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: nginx-ingress
spec:
replicas: 1
selector:
matchLabels:
k8s-app: oauth2-proxy
template:
metadata:
labels:
k8s-app: oauth2-proxy
spec:
containers:
- args:
- --provider=github
- --email-domain=*
- --upstream=file:///dev/null
- --http-address=0.0.0.0:4180
# Register a new application
# https://github.com/settings/applications/new
env:
- name: OAUTH2_PROXY_CLIENT_ID
value: 32066******52
- name: OAUTH2_PROXY_CLIENT_SECRET
value: ff2b0a***************9bd
- name: OAUTH2_PROXY_COOKIE_SECRET
value: deSF_t******03-HQ==
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: Always
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
</code></pre>
<hr />
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: nginx-ingress
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
k8s-app: oauth2-proxy
</code></pre>
<p>Logs from oauth2-proxy container:</p>
<pre><code>[2020/11/10 19:47:27] [logger.go:508] Error loading cookied session: cookie "_oauth2_proxy" not present, removing session
10.44.0.2:51854 - - [2020/11/10 19:47:27] site.example.com GET - "/" HTTP/1.1 "Mozilla/5.0
[2020/11/10 19:47:27] [logger.go:508] Error loading cookied session: cookie "_oauth2_proxy" not present, removing session
10.44.0.2:51858 - - [2020/11/10 19:47:27] site.example.com GET - "/favicon.ico" HTTP/1.1 "Mozilla/5.0 ....
10.44.0.2:51864 - - [2020/11/10 19:47:28] site.example.com GET - "/oauth2/start?rd=%2F" HTTP/1.1 "Mozilla/5.0 ....
10.44.0.2:52004 - marco.***[email protected] [2020/11/10 19:48:33] [AuthSuccess] Authenticated via OAuth2: Session{email:marco.***[email protected] user:mafi81 PreferredUsername: token:true created:2020-11-10 19:48:32.494549621 +0000 UTC m=+137.822819581}
10.44.0.2:52004 - - [2020/11/10 19:48:32] site.example.com GET - "/oauth2/callback?code=da9c3af9d8f35728d2d1&state=e3280edf2430c507cd74f3d4655500c1%3A%2F" HTTP/1.1 "Mozilla/5.0 ...
10.44.0.2:52012 - marco.****[email protected] [2020/11/10 19:48:33] site.example.com GET - "/" HTTP/1.1 "Mozilla/5.0 ....
10.44.0.2:52014 - marco.****[email protected] [2020/11/10 19:48:33] site.example.com GET - "/favicon.ico" HTTP/1.1 "Mozilla/5.0 .... Chrome/86.0.4240.193 Safari/537.36" 404 19 0.000
</code></pre>
<p>Testing environment:</p>
<ul>
<li>VirtualBox with kubeadm v1.19.3</li>
<li>NGINX Ingress controller Version=1.9.0.</li>
</ul>
<p>I'm still not confident with the paths configuration under the Ingress resource.
Any suggestion on how to go ahead with troubleshooting would be great.</p>
<p><strong>UPDATE</strong>:</p>
<p>Following Matt's answer, which gives the right way to test the authentication, here is the new environment:</p>
<pre><code>NGINX Ingress controller
Release: v0.41.2
Build: d8a93551e6e5798fc4af3eb910cef62ecddc8938
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.4
OAuth2 Pod
image: quay.io/oauth2-proxy/oauth2-proxy
</code></pre>
<p>Ingress manifest:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: web
annotations:
nginx.ingress.kubernetes.io/auth-response-headers: Authorization
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.web.svc.cluster.local:4180/oauth2/auth
nginx.ingress.kubernetes.io/auth-signin: https://site.example.com/oauth2/start?rd=$request_uri
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $name_upstream_1 $upstream_cookie__oauth2_proxy_1;
access_by_lua_block {
if ngx.var.name_upstream_1 ~= "" then
ngx.header["Set-Cookie"] = "_oauth2_proxy_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
end
}
spec:
ingressClassName: nginx-oauth
rules:
- host: site.example.com
http:
paths:
- path: /
backend:
serviceName: web-service
servicePort: 8080
</code></pre>
<hr />
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: oauth2-proxy
namespace: web
spec:
ingressClassName: nginx-oauth
rules:
- host: site.example.com
http:
paths:
- backend:
serviceName: oauth2-proxy
servicePort: 4180
path: /oauth2
tls:
- hosts:
- site.example.com
secretName: tls
</code></pre>
<p>Note that I had to change one annotation to get it working:</p>
<ul>
<li>auth-url: http://oauth2-proxy.web.svc.cluster.local:4180/oauth2/auth (this solves the resolution failure)</li>
</ul>
| Marco_81 | <p>According to <a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview#configuring-for-use-with-the-nginx-auth_request-directive" rel="nofollow noreferrer">oauth-proxy documentation</a> you MUST use <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">kubernetes/ingress-nginx</a>.</p>
<p>Here you can read more about <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">differences between nginxinc/kubernetes-ingress and kubernetes/ingress-nginx Ingress Controllers</a>.</p>
<p>In oath2-proxy docs (mentioned earlier) you can find the following:</p>
<blockquote>
<p>When you use ingress-nginx in Kubernetes, you MUST use kubernetes/ingress-nginx (which includes the Lua module) and the following configuration snippet for your Ingress. Variables set with auth_request_set are not set-able in plain nginx config when the location is processed via proxy_pass and then may only be processed by Lua. Note that nginxinc/kubernetes-ingress does not include the Lua module.</p>
<pre><code>nginx.ingress.kubernetes.io/auth-response-headers: Authorization
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $name_upstream_1 $upstream_cookie_name_1;
access_by_lua_block {
if ngx.var.name_upstream_1 ~= "" then
ngx.header["Set-Cookie"] = "name_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
end
}
</code></pre>
</blockquote>
<p>So, if we can trust the documentation, your authentication won't work because you are using the wrong nginx controller and you are missing the required annotations.</p>
| Matt |
<p>I am following this image architecture from K8s</p>
<p><a href="https://i.stack.imgur.com/2gu6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2gu6N.png" alt="enter image description here" /></a></p>
<p>However, I cannot seem to connect to the socket.io server <strong>from within</strong> the cluster using the service name.</p>
<p>Current situation:</p>
<p>From POD B</p>
<ul>
<li>Can connect directly to App A's pod using WS ( ws://10.10.10.1:3000 ) ✅</li>
<li>Can connect to App A's service using HTTP ( http://orders:8000 ) ✅</li>
<li><strong>Can not connect to App A's service using WS ( ws://orders:8000 )</strong> ❌</li>
</ul>
<p>From outside world / Internet</p>
<ul>
<li><p><strong>Can connect to App A's service using WS ( ws://my-external-ip/orders )</strong> ✅ // using traefik to route my-external-ip/orders to service orders:8000</p>
</li>
<li><p>Can connect to App A's service using HTTP ( http://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000</p>
</li>
</ul>
<p>My current service configuration</p>
<pre><code>spec:
ports:
- name: http
protocol: TCP
port: 8000
targetPort: 3000
selector:
app: orders
clusterIP: 172.20.115.234
type: ClusterIP
sessionAffinity: None
</code></pre>
<p>My Ingress Helm chart</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "app.name" $ }}-backend
annotations:
kubernetes.io/ingress.class: traefik
ingress.kubernetes.io/auth-type: forward
ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
ingress.kubernetes.io/auth-response-headers: authorization
labels:
{{- include "api-gw.labels" $ | indent 4 }}
spec:
rules:
- host: {{ .Values.deploy.host | quote }}
http:
paths:
- path: /socket/events
backend:
serviceName: orders
servicePort: 8000
</code></pre>
<p>My Service Helm chart</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ template "app.name" . }}
spec:
{{ if not $isDebug -}}
selector:
app: {{ template "app.name" . }}
{{ end -}}
type: NodePort
ports:
- name: http
port: {{ template "app.svc.port" . }}
targetPort: {{ template "app.port" . }}
nodePort: {{ .Values.service.exposedPort }}
protocol: TCP
# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
# {{ default "default" .Chart.Name }}
# {{- end -}}
# {{- define "app.port" -}}
# 3000
# {{- end -}}
# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}
</code></pre>
| qkhanhpro | <p>The service's DNS name must be available in your container to access its VIP address.
Kubernetes automatically sets environment variables in all pods which have the <strong>same selector</strong> as the <strong>service</strong>.</p>
<p>In your case, all pods with <strong>selector A</strong> have environment variables set in them when the container is deployed, containing the service's VIP and port.</p>
<p>The other pod, with <strong>selector B</strong>, is not linked as an endpoint of the service, therefore it does not contain the environment variables needed to access the service.</p>
<p>Here is the <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service" rel="nofollow noreferrer">k8s documentation</a> related to your problem.</p>
<p>To solve this, you can set up a DNS service, which k8s offers as a cluster addon.
Just follow the documentation; a quick check is sketched below.</p>
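<p>Assuming cluster DNS (CoreDNS/kube-dns) is running, a hedged check from a throwaway pod (the service name is taken from the question; the namespace and image are assumptions) would be:</p>
<pre><code># resolve the service name from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup orders.default.svc.cluster.local

# if that resolves, the WebSocket client can target
# ws://orders:8000 from the same namespace, or ws://orders.default.svc.cluster.local:8000 from elsewhere
</code></pre>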
| Daniel Karapishchenko |
<p>I have a service running on minikube which, as you can see from the screenshots, is accessible when I ssh into minikube, but when I try to access it through a browser on the PC where minikube is running, it is not accessible. Can you please help me find the problem?</p>
<p><a href="https://i.stack.imgur.com/ObZlq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ObZlq.png" alt="Service" /></a></p>
<p><a href="https://i.stack.imgur.com/E1wka.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1wka.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/0V8MY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0V8MY.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/t4kHG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t4kHG.png" alt="enter image description here" /></a></p>
| funtyper | <p>I used port-forward and that worked.</p>
<pre><code>kubectl port-forward service/frontend -n guestbook-qa 8080:80
</code></pre>
<p><a href="https://i.stack.imgur.com/W9rCH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W9rCH.png" alt="enter image description here" /></a></p>
| funtyper |
<p>I am trying to find events related to my pod with <code>kubectl describe pod <pod_name></code>, but I am seeing blank events.</p>
<pre><code>Events: <none>
</code></pre>
<p>I have the application deployed in AWS EKS. I think this started to happen when one of my nodes got replaced with another one. How do I ensure that I see the events?</p>
<p>When I look at the output of <code>kubectl get pods</code>, I see restart count = 1 for one of my pods, which indicates there should be some events.</p>
<p>Any help on how to investigate this further would be really great, thanks.</p>
<p>Thanks.</p>
| opensource-developer | <p>It is normal to have no events on the pods if no event was generated in the last 60 minutes. I have the same behavior in my cluster as well:</p>
<pre><code>kubectl describe pod prometheus-77566c9987-95g92 -n istio-system | grep -i events
Events: <none>
</code></pre>
<p>The default events TTL (time to live) is 60 minutes.</p>
<p>Actually, while trying to decrease my TTL to reproduce this and see if the events disappear without having to wait, I came across this <a href="https://github.com/aws/containers-roadmap/issues/785" rel="nofollow noreferrer">SR</a>, which asks for this value to be configurable via the AWS web portal.</p>
<p>For longer-lived and more advanced logging, you need to persist the events/logs or leverage the built-in logging systems offered by your cloud provider. If you want to do it yourself, there are plenty of options (Stackdriver, Prometheus, ELK).</p>
<p>However, if you want to increase the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">ttl</a> of the events, you must change the config through the api-server as explained in this <a href="https://stackoverflow.com/a/50356764/13736525">post</a>.</p>
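<p>On a self-managed control plane (not possible on EKS, where the masters are managed for you), that boils down to the API server flag; a hedged sketch:</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm-style static pod manifest)
spec:
  containers:
  - command:
    - kube-apiserver
    - --event-ttl=2h   # keep events for 2 hours instead of the default 1h
    ...
</code></pre>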
| Neo Anderson |
<p>In Azure Kubernetes Service, I am trying to set up a Locust load test; as of now the test consists of 1 master and 2 slave pods. With the default dnsPolicy provided by Kubernetes, the slave pods are able to establish a connection with the master pod, which I confirmed in the Locust web page. But to run the Locust test successfully, the slave pods need to connect to other services, so I had to use a custom dnsPolicy in the slave pods.</p>
<p>Once I apply the custom dnsPolicy to the slave pods, they are no longer able to connect to the master pod. I tried applying the same dnsPolicy the slaves use to the master deployment file, but the slave pods are still not able to establish a connection with the master pod.</p>
<p>I am not sure what I am missing in this case: how do I establish a connection between slave pods with a custom dnsPolicy and a master pod that uses the default DNS policy provided by Azure Kubernetes?</p>
<p>slave deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: slave
name: slave
spec:
replicas: 2
selector:
matchLabels:
io.kompose.service: slave
strategy: {}
template:
metadata:
labels:
io.kompose.service: slave
spec:
imagePullSecrets:
- name: secret
containers:
- args:
- -f
- /usr/app/locustfile.py
- --worker
- --master-host
- master
image: xxxxxxxx/locust-xxxxxx:locust-slave-1.0.2
name: slave
resources: {}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
dnsPolicy: "None"
dnsConfig:
nameservers:
- xx.xx.xx.xx
- xx.xx.xx.xx
searches:
- xxxxxx
- locust-xxxx.svc.cluster.local
- svc.cluster.local
- cluster.local
- xxxxxxxxxxxxxx.jx.internal.cloudapp.net
options:
- name: ndots
value: "0"
restartPolicy: Always
status: {}
</code></pre>
<p>master deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: master
name: master
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: master
strategy: {}
template:
metadata:
labels:
io.kompose.service: master
spec:
imagePullSecrets:
- name: secret
containers:
- image: xxxxxxxxxxxxx/locust-xxxxxxxxxxx:locust-master-1.0.1
name: master
resources: {}
restartPolicy: Always
status: {}
</code></pre>
<p><em>I am new to the networking side of things.</em></p>
| Sar | <p>It was not an issue with Kubernetes; I was able to establish a connection between master and slave with the help of this link: <a href="http://www.github.com/locustio/locust/issues/294" rel="nofollow noreferrer">www.github.com/locustio/locust/issues/294</a>. What was missing were environment variables, so I added these env variables to the slave deployment.yaml file:</p>
<pre><code>env:
- name: LOCUST_MODE
value: slave
- name: LOCUST_MASTER
value: master
</code></pre>
| Sar |
<p><strong>What happened</strong></p>
<p>I am trying to create a custom object in Kubernetes using the Kubernetes Python client, but I am unable to do so. It would be helpful if someone could explain what I am doing wrong here.</p>
<pre class="lang-sh prettyprint-override"><code>Traceback (most recent call last):
File "/home/talha/PycharmProjects/doosra/tasks/cluster_tasks.py", line 585, in <module>
main()
File "/home/talha/PycharmProjects/doosra/tasks/cluster_tasks.py", line 574, in main
resource = api.get_namespaced_custom_object(
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py", line 1484, in get_namespaced_custom_object
return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs) # noqa: E501
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py", line 1591, in get_namespaced_custom_object_with_http_info
return self.api_client.call_api(
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/home/talha/PycharmProjects/venv/lib/python3.8/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 07 Sep 2020 00:41:25 GMT', 'Content-Length': '19'})
HTTP response body: 404 page not found
</code></pre>
<p><strong>What you expected to happen</strong>:
Custom Object found</p>
<p><strong>How to reproduce it (as minimally and precisely as possible)</strong>:</p>
<pre class="lang-py prettyprint-override"><code> from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config
class K8s(object):
def __init__(self, configuration_json):
self.configuration_json = configuration_json
# self._configuration_json = None
@property
def client(self):
k8_loader = kube_config.KubeConfigLoader(self.configuration_json)
call_config = type.__call__(Configuration)
k8_loader.load_and_set(call_config)
Configuration.set_default(call_config)
return client
def main():
cls = {
"kind": "Config",
"users": [
{
"name": "xyx",
"user": {
"client-key-data": "daksjdkasjdklj==",
"client-certificate-data": "skadlkasldk"
}
}
],
"clusters": [
{
"name": "cluster1",
"cluster": {
"server": "https://cyx.cloud.ibm.com",
"certificate-authority-data": "sldklaksdl="
}
}
],
"contexts": [
{
"name": "cluster1/admin",
"context": {
"user": "admin",
"cluster": "cluster1",
"namespace": "default"
}
}
],
"apiVersion": "v1",
"preferences": {},
"current-context": "cluster1/admin"
}
config_trgt = K8s(configuration_json=cls).client
api = config_trgt.CustomObjectsApi()
resource = api.get_namespaced_custom_object(
group="velero.io",
version="v1",
namespace="velero",
plural="backups",
name="apple"
)
print(resource)
if __name__ == "__main__":
main()
</code></pre>
<p><strong>Anything else we need to know?</strong>:
I am able to use all other APIs, including create_namespaced_custom_object().
I can view this custom object using "kubectl get backup -n velero"</p>
<p><strong>CRD</strong>
<a href="https://gist.githubusercontent.com/talhalatifkhan/c6eba420e327cf5ef8da1087c326e0a1/raw/d030950775b87a60ecfdb4370fbd518169118d26/gistfile1.txt" rel="nofollow noreferrer">https://gist.githubusercontent.com/talhalatifkhan/c6eba420e327cf5ef8da1087c326e0a1/raw/d030950775b87a60ecfdb4370fbd518169118d26/gistfile1.txt</a></p>
<p><strong>CUSTOM OBJECT</strong></p>
<p><a href="https://gist.githubusercontent.com/talhalatifkhan/0537695a1f08b235cbe87d83a3f83296/raw/030cf503a33a9162251a61380105c719037b90ad/gistfile1.txt" rel="nofollow noreferrer">https://gist.githubusercontent.com/talhalatifkhan/0537695a1f08b235cbe87d83a3f83296/raw/030cf503a33a9162251a61380105c719037b90ad/gistfile1.txt</a></p>
<p><strong>Environment</strong>:</p>
<ul>
<li>Kubernetes version (v1.18.6)</li>
<li>OS (Linux 20.04):</li>
<li>Python version (3.8.2)</li>
<li>Python client version (12.0.0a1)</li>
</ul>
| Talha Latif | <p>I switched to Python 3.6 and it no longer returns a 404 status code.</p>
| Talha Latif |
<p>I am confused about the <code>previous</code> parameter in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#read-log-pod-v1-core" rel="nofollow noreferrer">Read Pod Log API</a> in Kubernetes. The parameter called <code>previous</code> has this description in the Kubernetes API documentation:</p>
<p><code>Return previous terminated container logs. Defaults to false.</code></p>
<p>What exactly does this mean or do? I decided to try and investigate the behavior around this and I came up with the following example. Consider this simple pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: log-pod-demo
spec:
containers:
- name: log-container
image: ubuntu
command: [ "/bin/sh", "-c", "echo 'Hello, World!'; echo 'Hello!'; echo 'Hola!'; echo 'OK I AM DONE NOW'" ]
</code></pre>
<p>If I run <code>kubectl apply -f log-pod-demo.yaml</code> followed by <code>kubectl logs log-pod-demo -p</code>, kubectl returns the expected log messages (<code>Hello, World!, Hello, Hola, OK I AM DONE NOW</code>). At other times kubectl returns this error: <code>unable to retrieve container logs for docker://1ceb99245689d3616bcace8de7b7dfcdbab297258e3b37b92340f7deb4a3e62f</code>. See a picture of this here:</p>
<p><a href="https://i.stack.imgur.com/SipEu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SipEu.png" alt="enter image description here" /></a></p>
<p>I can't figure out what's going on. When I omit the <code>-p</code> parameter I do not get any errors. I also tried adding some additional logging to the <code>kubectl</code> command like this: <code>kubectl logs log-pod-demo -p -v=99</code> and I get this error: <code>previous terminated container \"log-container\" in pod \"log-pod-demo\" not found</code>.</p>
<p>I am confused by this behavior and I don't understand what adding this parameter is supposed to do. Can someone please explain what the purpose of the <code>previous</code> parameter is, when I would want to use it and why I am getting these errors?</p>
| Rob L | <p>Here is what my kubectl help shows:</p>
<pre><code>$ kubectl logs --help | grep '\-\-previous'
-p, --previous=false: If true, print the logs for the previous instance of the container in a pod if it exists.
</code></pre>
<p>My help message is different from yours because (according to <a href="https://github.com/kubernetes/kubernetes/commit/3a3633a32ec83675b63c126932d00390165d3d54" rel="nofollow noreferrer">this commit</a>) you are using kubectl version 1.11 or older, and I also believe the newer help text provides a clearer explanation of this parameter.</p>
<p>But to explain it further, let's think about why this is useful. Imagine that some container got restarted and you want to know why. It would be very hard to tell if you didn't keep any logs from before the restart happened. Since those logs are kept, you can recall them using the --previous flag and hopefully find the issue that caused the restart.</p>
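<p>A small hedged demo (the pod name and image are made up): a container that exits gets restarted by the kubelet, and only then does <code>--previous</code> have something to show:</p>
<pre><code># restartPolicy defaults to Always for a pod created this way, so the container is restarted after it exits
kubectl run crash-demo --image=busybox -- sh -c 'echo "log from $(date)"; sleep 5'

# once at least one restart has happened:
kubectl logs crash-demo        # logs of the current container
kubectl logs crash-demo -p     # logs of the previously terminated container

# before any restart, -p returns the
# "previous terminated container ... not found" error seen in the question
</code></pre>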
| Matt |
<p>How do I map a service containing a static webpage under a subpath using Ambassador in Kubernetes?</p>
<p>This is my YAML file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: getambassador.io/v2
kind: Mapping
metadata:
name: grafana
spec:
prefix: /grafana/
method: GET
service: monitoring-grafana.default:80
timeout_ms: 40000
</code></pre>
<p>and this is the response I get when trying to navigate:</p>
<p>If you're seeing this Grafana has failed to load its application files</p>
<ol>
<li><p>This could be caused by your reverse proxy settings.</p>
</li>
<li><p>If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath</p>
</li>
<li><p>If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build</p>
</li>
<li><p>Sometimes restarting grafana-server can help</p>
</li>
</ol>
| Ismael GraHms | <p>Have you read through the Monitoring page on the Ambassador Docs? There's a bit about implementing Prometheus + Grafana if that helps: <a href="https://www.getambassador.io/docs/latest/howtos/prometheus/" rel="nofollow noreferrer">https://www.getambassador.io/docs/latest/howtos/prometheus/</a></p>
| Casey |
<p>How can I make every container run as non-root in Kubernetes?</p>
<p>Containers that do not specify a user, as in this example, and also do not specify a SecurityContext in the corresponding deployment, should still be able to be executed in the cluster - but without running as root. What options do you have here?</p>
<pre><code>FROM debian:jessie
RUN apt-get update && apt-get install -y \
git \
python \
vim
CMD ["echo", "hello world"]
</code></pre>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
namespace: mynamespace
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: hello-world
name: hello-world
</code></pre>
| user5580578 | <p>You can add a Pod Security Policy to your cluster; there is an option (below) you can set to prevent any deployment from running without specifying a non-root user:</p>
<pre><code>spec:
runAsUser:
rule: MustRunAsNonRoot
</code></pre>
<p>For more info about Pod Security Policy, please see this link:
<a href="https://kubernetes.io/docs/concepts/security/pod-security-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/security/pod-security-policy/</a></p>
| mohalahmad |
<p>It seems to me that we cannot use IPv6 in Kubernetes service loadBalancerSourceRanges. I simplified the repro to a very simple configuration like below:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: apple-app
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
spec:
selector:
app: apple
type: "LoadBalancer"
loadBalancerSourceRanges:
- "2600:1700:aaaa:aaaa::aa/32"
ports:
- port: 5678 # Default port for image
</code></pre>
<p>Deploying it on GKE and I got the following failure when I "kubectl describe service apple-service":</p>
<pre><code> Warning KubeProxyIncorrectIPVersion 13m (x11 over 62m) kube-proxy, gke-xxxx
2600:1700:aaaa:aaaa::aa/32 in loadBalancerSourceRanges has incorrect IP version
Normal EnsuringLoadBalancer 51s (x18 over 62m) service-controller
Ensuring load balancer
Warning SyncLoadBalancerFailed 46s (x17 over 62m) service-controller
Error syncing load balancer: failed to ensure load balancer: googleapi: Error 400: Invalid
value for field 'resource.sourceRanges[1]': '2600:1700::/32'. Must be a valid IPV4 CIDR address range., invalid
</code></pre>
<p>Just want to confirm my conclusion (i.e. that this is not supported in k8s), or, if my conclusion is not correct, what is the fix. Maybe there is a way for the whole cluster to be on IPv6 so that this will work?</p>
<p>Thank you very much!</p>
| Wei | <p>You are seeing this error because IPv6 cannot be used alongside IPv4 in k8s (you could run k8s in IPv6-only mode, but this would not work in GCP since GCP does not allow using IPv6 addresses for internal communication).</p>
<p><a href="https://cloud.google.com/vpc/docs/vpc" rel="nofollow noreferrer">GCP VPC docs</a>:</p>
<blockquote>
<p><strong>VPC networks only support IPv4 unicast traffic</strong>. They <strong>do not support</strong> broadcast, multicast, or <strong>IPv6</strong> traffic within the network; VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. However, it is possible to create an IPv6 address for a global load balancer.</p>
</blockquote>
<p>K8s 1.16+ provides a dual-stack feature, in an early development (alpha) stage, that allows for IPv6 and can be enabled with <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">feature-gates</a>. But since you are using GKE, the control plane is managed by GCP so you can't enable it (and since it is an alpha feature you probably should not want to).</p>
<p>You can find a bit more about this dual stack feature here:
<a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/" rel="nofollow noreferrer">dual-stack</a>
and here:
<a href="https://kubernetes.io/docs/tasks/network/validate-dual-stack/" rel="nofollow noreferrer">validate-dual-stack</a></p>
<p>Here is the latest pull request I have found on GitHub relating to this feature: <a href="https://github.com/kubernetes/kubernetes/pull/91824" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/91824</a></p>
<p>I think we can expect the beta version to appear soon in one of the k8s releases, but since GKE is about two versions behind the latest release, I infer that it will take some time before we can use IPv6 with GKE.</p>
| Matt |
<p>Is there a way to stop Ambassador from polling services for OpenAPI docs?</p>
<p>I have tried disabling the developer portal mapping, but it is still not working.</p>
<pre><code>time="2020-06-11 04:59:49" level=error msg="Bad HTTP response" func=github.com/datawire/apro/cmd/amb-sidecar/devportal/server.HTTPGet.func1 file="github.com/datawire/apro@/cmd/amb-sidecar/devportal/server/fetcher.go:165" status_code=404 subsystem=fetcher url="https://127.0.0.1:8443/<nameofservice>/api/auth/info/.ambassador-internal/openapi-docs"
</code></pre>
<p>Kubernetes version : 1.16
AES version: 1.4.3</p>
| manish | <p>You can disable the doc polling in version 1.5.0+ by setting the environment variable POLL_EVERY_SECS to 0.</p>
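<p>As a hedged example (assuming the default AES install, where the deployment is named <code>ambassador</code> in the <code>ambassador</code> namespace), the variable could be set with:</p>
<pre><code>kubectl set env deployment/ambassador -n ambassador POLL_EVERY_SECS=0
</code></pre>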
| Casey |
<p>I have recently been learning Kubernetes. I have already deployed Jaeger (all-in-one) via Istio on Kubernetes, and everything works well. Although I can see the trace information in the Jaeger UI, I don't know how to extract this trace data with Python. I want to use the data to locate root causes in microservices.
I think there must be some API to access this data directly from Python, but I haven't found it. Or can I access Cassandra using Python to get this data?
I have been searching the net for a long time, with no luck. Please help or give some ideas on how to achieve this.</p>
| gxxxh | <p>I realized that I had gone in a completely wrong direction. I thought that I had to access the backend storage to get the trace data, which actually makes the problem much more complex. I got the answer from a GitHub discussion; here is the address: <a href="https://github.com/jaegertracing/jaeger/discussions/2876#discussioncomment-477176" rel="nofollow noreferrer">https://github.com/jaegertracing/jaeger/discussions/2876#discussioncomment-477176</a></p>
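<p>For reference, one way to get trace data without touching the storage backend is to query the jaeger-query component over HTTP (the internal, unofficial API the UI itself uses). A hedged sketch, assuming the query service is named <code>jaeger-query</code> and listens on port 16686:</p>
<pre><code># port-forward the query service locally (service name/namespace are assumptions)
kubectl port-forward svc/jaeger-query 16686:16686

# fetch recent traces for a service; the same URL can be called from Python (e.g. with requests)
curl "http://localhost:16686/api/traces?service=my-service&limit=20"
</code></pre>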
| gxxxh |
<p>I know that the kube-system pods (API server, etcd, scheduler, controller manager) get created using the static pod deployment method. If you look at the manifests, you will see that in the metadata the namespace is set to kube-system, since you can't create pods in a non existing namespace, how does the kube-system namespace gets created initially and where the definition of this object is persisted since etcd isn't deployed yet.</p>
| Yassine359 | <p>From <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="nofollow noreferrer">this GCP blog</a>:</p>
<blockquote>
<p>In most Kubernetes distributions, the cluster comes out of the box with a Namespace called “default.” In fact, there are actually three namespaces that Kubernetes ships with: default, kube-system (used for Kubernetes components), and kube-public (used for public resources).</p>
</blockquote>
<p>This means that k8s has these 3 namespaces out of the box and that they cannot be deleted (unlike any other namespaces that are created by a user):</p>
<pre><code>$ kubectl delete ns default
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
$ kubectl delete ns kube-system
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
</code></pre>
| Matt |
<h1>EDIT 1</h1>
<p>In response to the comments I have included additional information.</p>
<pre><code>$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-lkwfn 0/1 ContainerCreating 0 7m8s
coredns-66bff467f8-pcn6b 0/1 ContainerCreating 0 7m8s
etcd-masternode 1/1 Running 0 7m16s
kube-apiserver-masternode 1/1 Running 0 7m16s
kube-controller-manager-masternode 1/1 Running 0 7m16s
kube-proxy-7zrjn 1/1 Running 0 7m8s
kube-scheduler-masternode 1/1 Running 0 7m16s
</code></pre>
<h2>More systemd logs</h2>
<pre><code>...
Jun 16 16:18:59 masternode kubelet[6842]: E0616 16:18:59.313433 6842 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-pcn6b_kube-system_d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08_0(cc72c59e22145274e47ca417c274af99591d0008baf2bf13364538b7debb57d3): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:18:59 masternode kubelet[6842]: E0616 16:18:59.313512 6842 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-pcn6b_kube-system(d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08)" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-pcn6b_kube-system_d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08_0(cc72c59e22145274e47ca417c274af99591d0008baf2bf13364538b7debb57d3): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:18:59 masternode kubelet[6842]: E0616 16:18:59.313532 6842 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-pcn6b_kube-system(d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08)" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-pcn6b_kube-system_d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08_0(cc72c59e22145274e47ca417c274af99591d0008baf2bf13364538b7debb57d3): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:18:59 masternode kubelet[6842]: E0616 16:18:59.313603 6842 pod_workers.go:191] Error syncing pod d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08 ("coredns-66bff467f8-pcn6b_kube-system(d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-pcn6b_kube-system(d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-66bff467f8-pcn6b_kube-system(d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08)\" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-pcn6b_kube-system_d5fe7a46-c32d-4fa3-b1b3-fe5a28983e08_0(cc72c59e22145274e47ca417c274af99591d0008baf2bf13364538b7debb57d3): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Jun 16 16:19:09 masternode kubelet[6842]: E0616 16:19:09.256408 6842 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-lkwfn_kube-system_f0187bfd-89a2-474c-b843-b00875183c77_0(1aba005509e85f3ea7da3fc48ab789ae3a10ba0ffefc152d1c4edf65693befe2): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:19:09 masternode kubelet[6842]: E0616 16:19:09.256498 6842 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-lkwfn_kube-system(f0187bfd-89a2-474c-b843-b00875183c77)" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-lkwfn_kube-system_f0187bfd-89a2-474c-b843-b00875183c77_0(1aba005509e85f3ea7da3fc48ab789ae3a10ba0ffefc152d1c4edf65693befe2): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:19:09 masternode kubelet[6842]: E0616 16:19:09.256525 6842 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-lkwfn_kube-system(f0187bfd-89a2-474c-b843-b00875183c77)" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-lkwfn_kube-system_f0187bfd-89a2-474c-b843-b00875183c77_0(1aba005509e85f3ea7da3fc48ab789ae3a10ba0ffefc152d1c4edf65693befe2): failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 16 16:19:09 masternode kubelet[6842]: E0616 16:19:09.256634 6842 pod_workers.go:191] Error syncing pod f0187bfd-89a2-474c-b843-b00875183c77 ("coredns-66bff467f8-lkwfn_kube-system(f0187bfd-89a2-474c-b843-b00875183c77)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-lkwfn_kube-system(f0187bfd-89a2-474c-b843-b00875183c77)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-66bff467f8-lkwfn_kube-system(f0187bfd-89a2-474c-b843-b00875183c77)\" failed: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-66bff467f8-lkwfn_kube-system_f0187bfd-89a2-474c-b843-b00875183c77_0(1aba005509e85f3ea7da3fc48ab789ae3a10ba0ffefc152d1c4edf65693befe2): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
... (repeats over and over again)
</code></pre>
<p>I have successfully installed Kubernetes 1.18 with CRI-O 1.18 and set up a cluster using kubeadm init --pod-network-cidr=192.168.0.0/16. However, the coredns pods are stuck at "ContainerCreating". I followed the official Kubernetes install instructions.</p>
<h2>What I have tried</h2>
<p>I tried installing Calico but that didn't fix it. I also tried manually changing the cni0 interface to UP but that also didn't work. The problem apparently lies somewhere with the bridged traffic but I followed the Kubernetes tutorial and enabled it.</p>
<p>In my research of the problem I stumbled upon promising solutions and tutorials but none of them solved the problem. (<a href="https://github.com/rancher/rke/issues/1788" rel="noreferrer">Rancher GitHub Issue</a>, <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/kubeadm.md" rel="noreferrer">CRI-O GitHub Page</a>, <a href="https://github.com/projectcalico/calico/issues/2322" rel="noreferrer">Projectcalico</a>, <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noreferrer">Kubernetes tutorial</a>)</p>
<h2>Firewall-cmd</h2>
<pre><code>$ sudo firewall-cmd --state
running
$ sudo firewall-cmd --version
0.7.0
</code></pre>
<h2>Systemd logs</h2>
<p><a href="https://i.stack.imgur.com/b2KSe.png" rel="noreferrer">Image of the log</a>
because pasting the entire log would be ugly. </p>
<h2>uname -r</h2>
<pre><code>4.18.0-147.8.1.el8_1.x86_64 (Centos 8)
</code></pre>
<h2>CRI-O</h2>
<pre><code>crio --version
crio version
Version: 1.18.1
GitCommit: 5cbf694c34f8d1af19eb873e39057663a4830635
GitTreeState: clean
BuildDate: 2020-05-25T19:01:44Z
GoVersion: go1.13.4
Compiler: gc
Platform: linux/amd64
Linkmode: dynamic
</code></pre>
<h2>runc</h2>
<pre><code>$ runc --version
runc version spec: 1.0.1-dev
</code></pre>
<h2>Kubernetes</h2>
<p>1.18</p>
<h2>Podman version</h2>
<p>1.6.4</p>
<h2>iptables/nft</h2>
<p>I am using nft with the iptables compatability layer.</p>
<pre><code>$ iptables --version
iptables v1.8.2 (nf_tables)
</code></pre>
<h2>Provider of host:</h2>
<p>Contabo VPS</p>
<h2>sysctl</h2>
<pre><code>$ sysctl net.bridge
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
</code></pre>
<h2>selinux disabled</h2>
<pre><code>$ cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
</code></pre>
<h2>ip addr list</h2>
<pre><code>$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether REDACTED brd ff:ff:ff:ff:ff:ff
inet REDACTED scope global noprefixroute eth0
valid_lft forever preferred_lft forever
3: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether c6:00:41:85:da:ad brd ff:ff:ff:ff:ff:ff
inet 10.85.0.1/16 brd 10.85.255.255 scope global noprefixroute cni0
valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.249.128/32 brd 192.168.249.128 scope global tunl0
valid_lft forever preferred_lft forever
</code></pre>
| Riki | <p>Holy Hand Grenade of Antioch! I finally fixed it! It only took me, what, about a bazillion years and a restless night. Sweet Victory! Well... ehm. On to the solution.</p>
<p>I finally understood the comments by @Arghya Sadhu and @Piotr Malec, and they were right: I didn't configure my CNI plugin correctly. I am using Flannel as the network provider and it requires a 10.244.0.0/16 subnet. In my crio-bridge.conf found in /etc/cni/net.d/, the default subnet was different (10.85.0.0/16 or something). I thought it would be enough to specify the CIDR on the kubeadm init command, but I was wrong. You need to set the correct CIDR in crio-bridge.conf and podman.conflist (or similar files in that directory). I also thought those files installed with CRI-O were configured with reasonable defaults and, to be honest, I didn't fully understand what they were for.</p>
<p>Also something strange happened: According to Flannel the subnet for CRI-O should be /16 but when I checked the logs with journalctl -u kubelet it mentioned a /24 subnet. </p>
<pre><code>failed to set bridge addr: \"cni0\" already has an IP address different from 10.244.0.1/24"
</code></pre>
<p>So I had to change the subnet in crio.conf to /24 and it worked. I probably have to change the subnet in the podman.conflist too, but I am not sure.</p>
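<p>For anyone hitting the same thing, here is a hedged sketch of what such an edited bridge config in <code>/etc/cni/net.d/</code> could look like (the values are illustrative; the subnet must line up with what Flannel hands out to the node, e.g. the 10.244.0.0/24 from the kubelet error above):</p>
<pre><code>{
  "cniVersion": "0.3.1",
  "name": "crio-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
</code></pre>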
<p>Anyway, thanks to Arghya and Piotr for their help!</p>
| Riki |
<p>I'm testing out HPA with custom metrics from my application, exposed to K8s using prometheus-adapter.</p>
<p>My app exposes a "jobs_executing" custom metric, a numeric-valued gauge (prometheus-client) in golang reporting the number of jobs executed by the app (pod).</p>
<p>Now, to cater for this in the HPA, here is what my HPA configuration looks like:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: myapp
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp
minReplicas: 1
maxReplicas: 10
metrics:
- type: Pods
pods:
metric:
name: jobs_executing
target:
type: AverageValue
averageValue: 5
</code></pre>
<p>I want the autoscaler to scale my pods when the average number of jobs executed across pods equals 5. This works, but sometimes the HPA configuration shows values like this:</p>
<pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
my-autoscaler Deployment/my-scaling-sample-app 7700m/5 1 10 10 38m
</code></pre>
<p>Here the target shows up as "7700m/5" even though the average number of jobs executed across pods was 7.7. This makes the HPA scale out aggressively. I don't understand why it is putting "7700m" in the current target value.</p>
<p>My question is whether there is a way to define a floating-point value here in the HPA that doesn't confuse a normal number with 7700m (a CPU-style unit?),</p>
<p>or what am I missing? Thank you.</p>
| Ahsan Nasir | <p>From the docs:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-quantities" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-quantities</a></p>
<blockquote>
<p>All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known in Kubernetes as a quantity. For example, the quantity 10500m would be written as 10.5 in decimal notation. The metrics APIs will return whole numbers without a suffix when possible, and will generally return quantities in milli-units otherwise. This means you might see your metric value fluctuate between 1 and 1500m, or 1 and 1.5 when written in decimal notation.</p>
</blockquote>
<p>So it does not seem like you are able to change the unit of measurement that the HPA reports; it always uses the generic Quantity notation.</p>
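<p>In practice that means <code>7700m</code> is simply the milli-notation for 7.7, and the target can be written either way; a hedged fragment of the same spec:</p>
<pre><code>    target:
      type: AverageValue
      averageValue: "5000m"   # equivalent to 5; a status of 7700m means 7.7 jobs per pod on average
</code></pre>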
| Jack Barger |
<p>I'm going to deploy my docker image to the k8s cluster using jenkins CICD.</p>
<p>I installed Kubernetes CLI and SSH Agent in Jenkins.</p>
<p>I used the below code.</p>
<pre><code>stage('List pods') {
withKubeConfig([credentialsId: 'kubectl-user']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl get pods -n dev'
}
}
</code></pre>
<p>And, I'm getting the below error.</p>
<pre><code>org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 39: Unknown stage section "withKubeConfig". Starting with version 0.5, steps in a stage must be in a ‘steps’ block. @ line 39, column 2.
stage('List pods') {
^
WorkflowScript: 39: Expected one of "steps", "stages", or "parallel" for stage "List pods" @ line 39, column 2.
stage('List pods') {
^
2 errors
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:127)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:571)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:523)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:337)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
</code></pre>
<p>Have I missed anything?</p>
| prime | <p>It seems like you are using <a href="https://www.jenkins.io/doc/book/pipeline/#scripted-pipeline-fundamentals" rel="nofollow noreferrer">scripted pipeline syntax</a> inside a declarative pipeline and therefore you are seeing the error.</p>
<p>If you want to use the declarative pipeline syntax you must follow the strict <a href="https://www.jenkins.io/doc/book/pipeline/syntax/" rel="nofollow noreferrer">formatting guidelines</a>; in your case you are missing the steps directive under the stage directive.<br />
Your code should look something like:</p>
<pre><code>pipeline {
agent any
stages {
stage('List Pods') {
steps {
withKubeConfig([credentialsId: 'kubectl-user']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl get pods -n dev'
}
}
}
}
}
</code></pre>
<p>If you want to use the scripted pipeline syntax it will look something like:</p>
<pre><code>node {
stage('List Pods') {
withKubeConfig([credentialsId: 'kubectl-user']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl get pods -n dev'
}
}
}
</code></pre>
<p>You can <a href="https://www.jenkins.io/doc/book/pipeline/syntax/#compare" rel="nofollow noreferrer">read more here</a> about the differences between the two pipelines syntaxes.</p>
| Noam Helmer |
<p>GKE Autoscaler is not scaling nodes up after 15 nodes (former limit)</p>
<p>I've changed the <code>Min</code> and <code>Max</code> values in Cluster to 17-25</p>
<p><a href="https://i.stack.imgur.com/AB51g.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AB51g.png" alt="enter image description here"></a>
However, the node count is stuck at 14-15 and is not going up. Right now my cluster is full and no more pods fit in, so every new deployment should trigger a node scale-up and schedule itself onto the new node, which is not happening.</p>
<p>When I create a deployment, it is stuck in the <code>Pending</code> state with a message:</p>
<pre><code>pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 max cluster cpu, memory limit reached
</code></pre>
<p>"Max cluster cpu, memory limit reached" sounds like the maximum node count is somehow still 14-15. How is that possible? Why is it not triggering a node scale-up?</p>
<p>ClusterAutoscalerStatus:</p>
<pre><code>apiVersion: v1
data:
status: |+
Cluster-autoscaler status at 2020-03-10 10:35:39.899329642 +0000 UTC:
Cluster-wide:
Health: Healthy (ready=14 unready=0 notStarted=0 longNotStarted=0 registered=14 longUnregistered=0)
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 09:49:11.965623459 +0000 UTC m=+4133.007827509
ScaleUp: NoActivity (ready=14 registered=14)
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 08:40:47.775200087 +0000 UTC m=+28.817404126
ScaleDown: NoCandidates (candidates=0)
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 09:49:49.580623718 +0000 UTC m=+4170.622827779
NodeGroups:
Name: https://content.googleapis.com/compute/v1/projects/project/zones/europe-west4-b/instanceGroups/adjust-scope-bff43e09-grp
Health: Healthy (ready=14 unready=0 notStarted=0 longNotStarted=0 registered=14 longUnregistered=0 cloudProviderTarget=14 (minSize=17, maxSize=25))
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 09:46:19.45614781 +0000 UTC m=+3960.498351857
ScaleUp: NoActivity (ready=14 cloudProviderTarget=14)
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 09:46:19.45614781 +0000 UTC m=+3960.498351857
ScaleDown: NoCandidates (candidates=0)
LastProbeTime: 2020-03-10 10:35:39.608193389 +0000 UTC m=+6920.650397445
LastTransitionTime: 2020-03-10 09:49:49.580623718 +0000 UTC m=+4170.622827779
</code></pre>
<p>The deployment is very small (200m CPU, 256Mi memory), so it would surely fit if a new node were added.</p>
<p>This looks like a bug in the node pool/autoscaler: 15 was my former node count limit, and somehow it looks like it still thinks 15 is the limit.</p>
<p><strong>EDIT:</strong>
I created a new node pool with bigger machines and autoscaling turned on in GKE, but after some time the same issue appears, even though the nodes have free resources.
Top from nodes:</p>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-infrastructure-n-autoscaled-node--0816b9c6-fm5v 805m 41% 4966Mi 88%
gke-infrastructure-n-autoscaled-node--0816b9c6-h98f 407m 21% 2746Mi 48%
gke-infrastructure-n-autoscaled-node--0816b9c6-hr0l 721m 37% 3832Mi 67%
gke-infrastructure-n-autoscaled-node--0816b9c6-prfw 1020m 52% 5102Mi 90%
gke-infrastructure-n-autoscaled-node--0816b9c6-s94x 946m 49% 3637Mi 64%
gke-infrastructure-n-autoscaled-node--0816b9c6-sz5l 2000m 103% 5738Mi 101%
gke-infrastructure-n-autoscaled-node--0816b9c6-z6dv 664m 34% 4271Mi 75%
gke-infrastructure-n-autoscaled-node--0816b9c6-zvbr 970m 50% 3061Mi 54%
</code></pre>
<p>And yet I still get the message <code>1 max cluster cpu, memory limit reached</code>. This also happens when updating a deployment: the new version sometimes gets stuck in <code>Pending</code> because it won't trigger the scale-up.</p>
<p><strong>EDIT2:</strong>
While describing cluster with cloud command, I've found this:</p>
<pre><code>autoscaling:
autoprovisioningNodePoolDefaults:
oauthScopes:
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
serviceAccount: default
enableNodeAutoprovisioning: true
resourceLimits:
- maximum: '5'
minimum: '1'
resourceType: cpu
- maximum: '5'
minimum: '1'
resourceType: memory
</code></pre>
<p>How does this work with autoscaling turned on? Does it not trigger a scale-up once those limits are reached? (The sum is already above them.)</p>
| Josef Korbel | <p>I ran into the same issue and was bashing my head against the wall trying to figure out what was going on. Even support couldn't figure it out.</p>
<p>The issue is that if you enable node auto-provisioning at the cluster level, you are setting the actual min/max CPU and memory allowed for the entire cluster. At first glance the UI seems to suggest the min/max CPU and memory you would want per auto-provisioned node, but that is not correct. So if, for example, you wanted a maximum of 100 nodes with 8 CPUs per node, then your max CPU should be 800. I know a maximum for the cluster is obviously useful so things don't get out of control, but the way it is presented is not intuitive. Since you actually don't have control over what gets picked for your machine type, don't you think it would be useful to not let Kubernetes pick a 100-core machine for a 1-core task? That is what I thought it was asking when I was configuring it.</p>
<p>Node auto-provisioning is useful because if auto-scaling on your own node pool can't meet your demands for some reason (for example quota issues), the cluster-level node auto-provisioner will figure out a different node pool machine type that it can provision to meet your demands. In my scenario I was using C2 CPUs and there was a scarcity of those CPUs in the region, so my node pool stopped auto-scaling.</p>
<p>To make things even more confusing, most people start by specifying their node pool machine type, so they are already used to customizing these limits on a per-node basis. But then something stops working, like a quota issue you have no idea about, so you get desperate and configure the node auto-provisioner at the cluster level, and then you get totally screwed because you thought you were specifying the limits for the new potential machine type.</p>
<p>Hopefully this helps clear some things up.</p>
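<p>For reference, the cluster-wide auto-provisioning limits can be raised with <code>gcloud</code>. This is only a minimal sketch; the cluster name and the numbers are placeholders and should be sized for your workload (roughly max nodes × resources per node):</p>
<pre><code># raise the cluster-level CPU/memory ceilings used by node auto-provisioning
gcloud container clusters update my-cluster \
  --enable-autoprovisioning \
  --min-cpu 1 --max-cpu 800 \
  --min-memory 1 --max-memory 3000
</code></pre>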
| Sean Montgomery |
<p>I tried combining things I have found on the syntax but this is as close as I can get. It creates multiple stages but says they have no steps.</p>
<p>I can get it to run a bunch of parallel steps on the same agent if I move the agent syntax down to where the "test" stage is defined, but I want to spin up a separate pod for each one so I can actually use the Kubernetes cluster effectively and do my work in parallel.</p>
<p>attached is an example Jenkinsfile for reference</p>
<pre><code>def parallelStagesMap
def generateStage(job) {
return {
stage ("$job.key") {
agent {
kubernetes {
cloud 'kubernetes'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: name
image: image
command:
- sleep
args:
- infinity
"""
}
}
steps {
sh """
do some important stuff
"""
}
}
}
}
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = [
"name" : "aparam",
"name2" : "aparam2"
]
parallelStagesMap = map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
stage('Test') {
steps {
script {
parallel parallelStagesMap
}
}
}
stage('Release') {
agent etc
steps {
etc
}
}
}
}
</code></pre>
| Digital Powers | <p>To run your dynamically created jobs in parallel you will have to use scripted pipeline syntax.<br />
The equivalent syntax for the declarative <code>kubernetes</code> agent in the scripted pipeline is <code>podTemplate</code> and <code>node</code> (see the full <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">documentation</a>):</p>
<pre class="lang-groovy prettyprint-override"><code>podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- sleep
args:
- 99d
''') {
node(POD_LABEL) {
...
}
}
</code></pre>
<p>Notice that the <code>podTemplate</code> can receive the <code>cloud</code> parameter in addition to the yaml but it defaults to <code>kubernetes</code> so there is no need to pass it.</p>
<p>So in your case you can use this syntax to run the jobs in parallel on different agents:</p>
<pre class="lang-groovy prettyprint-override"><code>// Assuming yaml is same for all nodes - if not it can be passed as parameter
podYaml= """
apiVersion: v1
kind: Pod
spec:
containers:
- name: name
image: image
command:
- sleep
args:
- infinity
"""
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = ["name" : "aparam",
"name2" : "aparam2"]
parallel map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
}
}
def generateStage(job) {
return {
stage(job.key) {
podTemplate(yaml:podYaml) {
node(POD_LABEL) {
// Each execution runs on its own node (pod)
sh "do some important stuff with ${job.value}"
}
}
}
}
}
</code></pre>
| Noam Helmer |
<p>I'm using Azure Kubernetes (AKS) at the moment. I have two services which I need to expose via the same domain, and I want to add a different path for each service.</p>
<p>ingress files as follows</p>
<pre><code>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kibana-ingress
namespace : {{ .Values.namespace }}
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- {{ .Values.kibana.ingressdomain }}
secretName: abb-aks-cert
rules:
- host: {{ .Values.kibana.ingressdomain }}
http:
paths:
- path: /app/kibana
backend:
serviceName: kibana-service
servicePort: 5601
- path: /grafana
backend:
serviceName: monitor-grafana
servicePort: 80
</code></pre>
<p>When I define it like this I'm getting 404 errors. Is there any solution for this?</p>
<p>Kubernetes version is 1.16</p>
| Chinthaka Hasakelum | <p>This is a very common mistake people make.
Web applications usually serve with the base path <code>/</code> by default.
On the first request, the website responds correctly but with incorrect paths, because it is not aware that it is running behind a proxy and that something is rewriting the paths.</p>
<p>You need to set the root/base path for both applications accordingly and remove the rewrite, because it is not needed.</p>
<hr />
<p>For kibana you need to set:</p>
<blockquote>
<h3>server.basePath:</h3>
<p>Enables you to specify a path to mount Kibana at if you are running behind a proxy. Use the server.rewriteBasePath setting to tell Kibana if it should remove the basePath from requests it receives, and to prevent a deprecation warning at startup. This setting cannot end in a slash (/).</p>
</blockquote>
<p>More in <a href="https://www.elastic.co/guide/en/kibana/current/settings.html" rel="nofollow noreferrer">kibana docs</a></p>
<hr />
<p>For grafana you need to set:</p>
<blockquote>
<h3>root_url</h3>
<p>This is the full URL used to access Grafana from a web browser. This is important if you use Google or GitHub OAuth authentication (for the callback URL to be correct).</p>
<blockquote>
<p>Note: This setting is also important if you have a reverse proxy in front of Grafana that exposes it through a subpath. In that case add the subpath to the end of this URL setting.</p>
</blockquote>
<h3>serve_from_sub_path</h3>
<p>Serve Grafana from subpath specified in root_url setting. By default it is set to false for compatibility reasons.</p>
<p>By enabling this setting and using a subpath in root_url above, e.g. root_url = http://localhost:3000/grafana, Grafana is accessible on http://localhost:3000/grafana</p>
</blockquote>
<p>More in <a href="https://grafana.com/docs/grafana/latest/administration/configuration/#root_url" rel="nofollow noreferrer">grafana docs</a></p>
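<p>For reference, a minimal sketch of the two settings (the host and subpaths below are placeholders; adjust them to match your ingress paths):</p>
<pre><code># kibana.yml
server.basePath: "/kibana"
server.rewriteBasePath: true

# grafana.ini
[server]
root_url = https://your-domain.example/grafana
serve_from_sub_path = true
</code></pre>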
| Matt |
<p>I've created an ingress on GKE with an annotation that has a list of whitelisted IPs. The problem is the list got too big and I can't see what's at the end of it (that's how I see it with <code>kubectl describe ingress <name></code>, with the 3 dots at the end):</p>
<pre class="lang-sh prettyprint-override"><code>nginx.ingress.kubernetes.io/whitelist-source-range:
xx, yy, zz ...
</code></pre>
<p>It is truncated to the point where I see dots at the end, and after 20+ minutes of looking I can't find a command to describe my ingress in a way that shows its full annotations. Any thoughts?</p>
<p>the ingress itself was made like so</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
...
annotations:
kubernetes.io/ingress.class: nginx
...
nginx.ingress.kubernetes.io/whitelist-source-range: {{ index .Values.ingress "whitelist" | quote }}
</code></pre>
<p>and the list just as plain string with coma delimiter</p>
<p>I thought I could use something like <code>kubectl describe ingress -o jsonpath='{.metadata.annotations}'</code>, but it doesn't work on <code>describe</code>, only on <code>get</code> commands.</p>
| potatopotato | <p>OK, the answer was way simpler: just running <code>kubectl get ingress <name> -o json</code> showed the full annotation list.</p>
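<p>If you only need that single annotation, a jsonpath query should also work (dots inside the annotation key have to be escaped):</p>
<pre><code>kubectl get ingress <name> -o jsonpath='{.metadata.annotations.nginx\.ingress\.kubernetes\.io/whitelist-source-range}'
</code></pre>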
| potatopotato |
<p><strong>Using Superset on Kubernetes, I would like to modify the events logged to standard output.</strong></p>
<p>In the <a href="https://apache-superset.readthedocs.io/en/latest/installation.html#event-logging" rel="nofollow noreferrer">documentation</a> there's a python class that can be used to modify the logger.<br />
In the chart's values I've set :</p>
<pre><code>extraConfigFiles:
stats_logger.py: |-
class JSONStdOutEventLogger(AbstractEventLogger):
def log(self, user_id, action, *args, **kwargs):
...
</code></pre>
<p>And</p>
<pre><code>configFile: |-
...
EVENT_LOGGER = JSONStdOutEventLogger()
...
</code></pre>
<p>Unfortunately, the pod doesn't find the class:</p>
<pre><code>NameError: name 'JSONStdOutEventLogger' is not defined
</code></pre>
<p>There's no documentation besides that, so I'm lost in the dark abyss of event logging...</p>
<p>Help would be greatly appreciated! Thanks.</p>
| Doctor | <p>I had the same issue.<br />
Superset doesn't give any clue but in my case I had to import <code>AbstractEventLogger</code> and <code>json</code>.</p>
<p>Just put these two lines above your <code>JSONStdOutEventLogger</code> class :</p>
<pre><code>from superset.utils.log import AbstractEventLogger
import json
</code></pre>
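<p>For reference, a minimal sketch of how the chart value could look with the imports in place (the body of <code>log</code> here is only illustrative, not Superset's reference implementation):</p>
<pre><code>extraConfigFiles:
  stats_logger.py: |-
    from superset.utils.log import AbstractEventLogger
    import json

    class JSONStdOutEventLogger(AbstractEventLogger):
        def log(self, user_id, action, *args, **kwargs):
            # print one JSON object per event to stdout (illustrative only)
            print(json.dumps(dict(user_id=user_id, action=action)))
</code></pre>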
| Clement Phu |
<p>I created an AKS cluster and deployed the Apache Ignite service on it.
When I check the pods I can see they are working.</p>
<p><a href="https://i.stack.imgur.com/eFI5M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eFI5M.png" alt="Pods Status" /></a></p>
<p>Also, I can get the load balancer IP.</p>
<p><a href="https://i.stack.imgur.com/oSuDT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSuDT.png" alt="AKS Load Balancer IP" /></a></p>
<p>I followed Apache's official instructions and tried to connect to Ignite with the thin client.
I share my code and the instructions' code below.</p>
<p><strong>Here is my code:</strong></p>
<pre><code>public void ConnectIgnite()
{
var cfg = new IgniteClientConfiguration
{
Endpoints = new[] { "20.101.12.***:10800" }
};
var client = Ignition.StartClient(cfg);
}
</code></pre>
<p>But my code is getting below errors;</p>
<blockquote>
<p>System.AggregateException: 'Failed to establish Ignite thin client
connection, examine inner exceptions for details.'</p>
<p>Inner Exception ExtendedSocketException: A connection attempt failed
because the connected party did not properly respond after a period of
time, or established connection failed because connected host has
failed to respond.</p>
</blockquote>
<p><strong>and here is the Apache's instruction code;</strong></p>
<pre><code>ClientConfiguration cfg = new ClientConfiguration().setAddresses("13.86.186.145:10800");
IgniteClient client = Ignition.startClient(cfg);
ClientCache<Integer, String> cache = client.getOrCreateCache("test_cache");
cache.put(1, "first test value");
System.out.println(cache.get(1));
client.close();
</code></pre>
<p>Also, here is the official instruction <a href="https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment#connecting-with-thin-clients" rel="nofollow noreferrer">link</a></p>
<p>I don't understand what is wrong. Also, the instructions say I don't need a client ID and client secret, but I don't want to connect without any security; that is a completely separate issue, though.</p>
| OguzKaanAkyalcin | <p>I found what is wrong. Apache's official page says to use the below XML for the configuration:</p>
<pre><code><bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<constructor-arg>
<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
<property name="namespace" value="default" />
<property name="serviceName" value="ignite" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
</code></pre>
<p>But first, that XML needs a <code>beans</code> tag at the top.
Also, the <code>namespace</code> and <code>serviceName</code> values do not match Apache's official instruction page. If you followed Apache's page for the setup,</p>
<p>you have to use the below values</p>
<pre><code><property name="namespace" value="ignite" />
<property name="serviceName" value="ignite-service" />
</code></pre>
<p>instead of</p>
<pre><code><property name="namespace" value="default" />
<property name="serviceName" value="ignite" />
</code></pre>
<p>At the end of the changes, your XML will look like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<constructor-arg>
<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
<property name="namespace" value="ignite" />
<property name="serviceName" value="ignite-service" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
</code></pre>
<p>I changed my configuration XML and restarted the pods with the below command, and it worked:</p>
<pre><code>kubectl -n service rollout restart deployment ignite-cluster
</code></pre>
| OguzKaanAkyalcin |
<p>I have a Kubernetes pod environment variable:</p>
<pre><code>JOBID=111
</code></pre>
<p>I am changing this env variable from inside a shell script like below. This change happens inside an infinite loop, so the script never terminates.</p>
<pre><code>export JOBID=$(echo $line)
</code></pre>
<p>Inside the script the value of the variable is changed to the new value. But if I check the value of the env variable outside the script, inside a new terminal, it is still 111.</p>
| user3553913 | <blockquote>
<p>Inside the script the value of the variable is changed to the new value. But if I check the value of the env variable outside the script, inside a new terminal, it is still 111</p>
</blockquote>
<p>This is how environment variables work, and you cannot change it. You can only change the variable for a specific process, and that value will propagate to every process started from that process.</p>
<p>But you cannot overwrite the global value, only the local value (the process's copy). Every other process (e.g. one started by <code>kubectl exec</code>) will have the "old" value of the env variable.</p>
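<p>If the goal is for new processes in the pod to see a different value, the variable has to be changed in the pod spec itself, which means recreating the pod. A minimal sketch, assuming the pod is managed by a deployment (the deployment name is a placeholder):</p>
<pre><code># updates the env var in the pod template and triggers a rolling restart
kubectl set env deployment/<your-deployment> JOBID=222
</code></pre>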
| Matt |
<p>Hello, I'm new to minikube and I can't connect to an exposed service. I created a .NET Core API, built the image, pushed it into my private registry and then created a deployment with a YAML file, which works.
But I can't reach that deployment after exposing it as a service. Every time after I expose it, everything looks fine, but I can't connect to it via the port and the minikube IP address.
If I try to connect to ipaddress:port I get connection refused.</p>
<h2>Deployment yml file:</h2>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: testapp1-deployment
labels:
app: testapp1
spec:
replicas: 2
selector:
matchLabels:
app: testapp1
template:
metadata:
labels:
app: testapp1
version: v0.1
spec:
containers:
- name: testapp1-deployment
image: localhost:5000/testapp1
imagePullPolicy: Never
resources:
requests:
cpu: 120m
ports:
- containerPort: 80
</code></pre>
<p>Service yml file:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: testapp1-service
spec:
type: NodePort
selector:
app: testapp1
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
| Yatesu | <p>The problem was my Dockerfile, and I wasn't enabling Docker support in my ASP.NET Core app. I enabled Docker support and changed the Dockerfile a bit, then I rebuilt it and it worked for me.</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app

COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
</code></pre>
<p>That's the Dockerfile I'm using for my app at the moment, so if someone else faces the same problem as me, try using this Dockerfile. If it still doesn't work, look at the previous comments.</p>
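<p>As a side note, when testing a NodePort service on minikube, letting minikube print the reachable URL can help rule out IP/port mistakes (the service name below matches the one from the question):</p>
<pre><code>minikube service testapp1-service --url
</code></pre>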
| Yatesu |