Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am using a Helm chart for my Kubernetes service deployment. I have different services now, called x1, x2 up to x10. So far I have created x1.yaml inside my templates folder and run <code>helm install ./mychart</code>, and I now get the deployment inside my Kubernetes cluster.</p>
<p>Can I add .yaml files (x2.yaml to x10.yaml) for all my Kubernetes services inside the templates folder, and can I deploy them all together by using one chart?</p>
<p>I have not properly understood the hierarchy of a Helm chart for Kubernetes resource deployment.</p>
| Mr.DevEng | <p>Anything that you put into the templates/ folder will be rendered as a Kubernetes manifest. If you add 10 manifests there, 10 manifests will be applied on "helm install". It is up to you how you want this to work.</p>
<p>You can put all your apps into a single Helm chart and create one values.yaml for all your applications. This is an absolutely valid practice, although not very popular. Whenever you change values.yaml and issue "helm upgrade", the changed manifests will be reapplied.</p>
<p>Or you can create a separate chart per application; that's how most charts are structured. In that case you will upgrade applications separately from each other. I think this method is preferred.</p>
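<p>As an illustration (a sketch, not part of the original answer), a single-chart layout for the services named in the question could look like this, with one shared values.yaml:</p>
<pre><code>mychart/
  Chart.yaml
  values.yaml          # one shared values file for all services
  templates/
    x1.yaml
    x2.yaml
    ...
    x10.yaml
</code></pre>
<p>On <code>helm install ./mychart</code> every file under templates/ is rendered and applied, so all ten services are deployed together by the single chart.</p>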
| Vasili Angapov |
<p>I have deployed Prometheus on a Kubernetes cluster (EKS). I was able to successfully scrape <code>prometheus</code> and <code>traefik</code> with the following:</p>
<pre><code>scrape_configs:
  # A scrape configuration containing exactly one endpoint to scrape:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus.kube-monitoring.svc.cluster.local:9090']

  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik.kube-system.svc.cluster.local:8080']
</code></pre>
<p>But node-exporter, deployed as a <code>DaemonSet</code> with the following definition, is not exposing the node metrics.</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      name: node-exporter
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.18.1
          args:
            - "--path.procfs=/host/proc"
            - "--path.sysfs=/host/sys"
          ports:
            - containerPort: 9100
              hostPort: 9100
              name: scrape
          resources:
            requests:
              memory: 30Mi
              cpu: 100m
            limits:
              memory: 50Mi
              cpu: 200m
          volumeMounts:
            - name: proc
              readOnly: true
              mountPath: /host/proc
            - name: sys
              readOnly: true
              mountPath: /host/sys
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
</code></pre>
<p>and the following scrape_configs in Prometheus:</p>
<pre><code>scrape_configs:
  - job_name: 'kubernetes-nodes'
    scheme: http
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.kube-monitoring.svc.cluster.local:9100
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
</code></pre>
<p>I also tried to <code>curl http://localhost:9100/metrics</code> from one of the containers, but got <code>curl: (7) Failed to connect to localhost port 9100: Connection refused</code></p>
<p>What am I missing here in the configuration?</p>
<p>After the suggestion to install Prometheus with Helm, I installed it on a test cluster and tried to compare my original configuration with the Helm-installed Prometheus.</p>
<p>Following pods were running :</p>
<pre><code>NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0 2/2 Running 0 4m33s
prometheus-grafana-66c7bcbf4b-mh42x 2/2 Running 0 4m38s
prometheus-kube-state-metrics-7fbb4697c-kcskq 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-6bf9f 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-gbrzr 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-j6l9h 1/1 Running 0 4m38s
prometheus-prometheus-oper-operator-648f9ddc47-rxszj 1/1 Running 0 4m38s
prometheus-prometheus-prometheus-oper-prometheus-0 3/3 Running 0 4m23s
</code></pre>
<p>I didn't find any configuration for node exporter in pod <code>prometheus-prometheus-prometheus-oper-prometheus-0</code> at <code>/etc/prometheus/prometheus.yml</code></p>
| roy | <p>The previous advice to use Helm is highly valid, I would also recommend that.</p>
<p>Regarding your issue: the thing is that you are not scraping nodes directly, you're using node-exporter for that. So <code>role: node</code> is incorrect, you should instead use <code>role: endpoints</code>. For that you also need to create a Service for the pods of your DaemonSet.</p>
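<p>A minimal sketch of such a Service, assuming the <code>app: node-exporter</code> label and the <code>kube-monitoring</code> namespace from the question's DaemonSet (note that the scrape config example below matches the answerer's own names, namespace <code>monitoring</code>, service label <code>exporter-node</code> and port name <code>metrics</code>, so whichever names you pick must stay consistent with your own relabel rules):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: kube-monitoring
  labels:
    app: node-exporter
spec:
  clusterIP: None            # headless is enough, Prometheus only needs the endpoints
  selector:
    app: node-exporter
  ports:
    - name: metrics          # the port name referenced by the endpoint port relabel rule
      port: 9100
      targetPort: 9100
</code></pre>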
<p>Here is a working example from my environment (installed by Helm):</p>
<pre><code>- job_name: monitoring/kube-prometheus-exporter-node/0
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - monitoring
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_label_app]
      separator: ;
      regex: exporter-node
      replacement: $1
      action: keep
    - source_labels: [__meta_kubernetes_endpoint_port_name]
      separator: ;
      regex: metrics
      replacement: $1
      action: keep
    - source_labels: [__meta_kubernetes_namespace]
      separator: ;
      regex: (.*)
      target_label: namespace
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_pod_name]
      separator: ;
      regex: (.*)
      target_label: pod
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_service_name]
      separator: ;
      regex: (.*)
      target_label: service
      replacement: $1
      action: replace
    - source_labels: [__meta_kubernetes_service_name]
      separator: ;
      regex: (.*)
      target_label: job
      replacement: ${1}
      action: replace
    - separator: ;
      regex: (.*)
      target_label: endpoint
      replacement: metrics
      action: replace
</code></pre>
| Vasili Angapov |
<p>I've been trying to complete the first step of <a href="https://istio.io/docs/tasks/security/authz-http/" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/authz-http/</a>. With the following file, I'm supposed to activate authorization on the default namespace of my cluster.</p>
<p>However, when I run the following script:</p>
<pre><code>apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
name: default
spec:
mode: 'ON_WITH_INCLUSION'
inclusion:
namespaces: ["default"]
</code></pre>
<p>which is an exact copy of the script on the website, I get the following error:
<code>error: unable to recognize "5-authorization/yaml-files/rbac-config-ON.yaml": no matches for kind "ClusterRbacConfig" in version "rbac.istio.io/v1alpha1"</code>.</p>
<p>Unless Istio's documentation is severely out of date and the apiVersion is no longer the correct one, I don't know what causes this.</p>
| Patrick Weyn | <p>As <a href="https://stackoverflow.com/users/7298328/szymig">szymig</a> mentioned, the wrong version of Istio was used. GKE runs 1.2.2.</p>
| Patrick Weyn |
<p>I have a Kubernetes cluster running in a 1 master, 2 worker setup on Linux servers. I have an HAProxy forwarding my requests to Nginx controllers. My complete setup is behind a corporate proxy, and the DNS entry is enabled in this corporate proxy.
Requests get to the nginx controller, but won't be forwarded to the service.
I installed the ingress controller as described by many tutorials, with the files in <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>.</p>
<p>I'm new to Stack Overflow, so if I should give more specific information just let me know. I hope someone can help me with my issue, thank you in advance :D</p>
<p><strong>My Ingress with missing Address:</strong></p>
<pre><code>Name: app-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
test.kubetest.lff.bybn.de
/abc app-service:80 (10.244.2.4:3000)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
</code></pre>
<p><strong>Yaml Files of Deployment, Service and Ingress, IngressClass, ConfigMap</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
      version: 0.0.1
  template:
    metadata:
      labels:
        run: app
        version: 0.0.1
    spec:
      containers:
        - name: app
          image: errm/versions:0.0.1
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    run: app
    version: 0.0.1
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: test.kubetest.lff.bybn.de
      http:
        paths:
          - path: /abc
            backend:
              serviceName: app-service
              servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  # annotations:
  #   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
</code></pre>
<p><strong>Curl from outside of the Cluster and Logs from Controller Pod</strong></p>
<pre><code>curl test.kubetest.lff.bybn.de/abc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 93 0 93 0 0 1 0 --:--:-- 0:00:50 --:--:-- 26<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
E0131 19:44:11.949261 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0131 19:45:06.894791 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0131 19:45:48.532075 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
10.48.25.57 - - [31/Jan/2021:19:46:35 +0000] "GET /abc HTTP/1.1" 499 0 "-" "curl/7.73.0" "-"
E0131 19:46:37.902444 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0131 19:47:15.346193 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0131 19:47:48.536636 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0131 19:48:21.890770 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
</code></pre>
| jergan95 | <p>Looking at the Ingress definition, it may be missing a usable ingress class. Unless you defined an IngressClass annotated as the default class to use, that may be the reason your Ingress is not working at the moment.</p>
<p>An Ingress Class is basically a category which specifies who needs to serve and manage the Ingress. This is necessary since in a cluster you can have more than one Ingress controller, each one with its own rules and configurations.</p>
<p>Depending on the Kubernetes version, the ingress class can be defined with an annotation on the ingress (before v1.18) such as:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ...
</code></pre>
<p>Or with a whole resource and then referred into the Ingress as shown in the documentation (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class</a>)</p>
<p>Even in new versions of Kubernetes, the old annotation may still be supported, depending on the controller.</p>
<p>If you are unsure about which ingress class you should use: that is defined by the controller, and you probably decided on one when you installed it, or you used the default one (which most of the time is nginx).</p>
| AndD |
<p>I have two deployments in one service:
the first deployment is the application backend;
the second deployment is an LDAP server that stores configuration for the backend.</p>
<p>I want to initialize LDAP, then the backend, and only after that perform a rolling update for both deployments.</p>
<p>I understand that I can use init containers and readiness probes if I want to wait for a new deployment to initialize before updating, but how can I achieve the same for two deployments?</p>
| Сергей Татунов | <p>You may use an initContainer for LDAP which initializes it before LDAP itself starts. You may also use an initContainer in your backend app which waits for the LDAP service to become available. This way your backend will always wait for LDAP initialization. This is a common practice for creating application dependencies in the Kube world.</p>
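<p>A minimal sketch of the second part, assuming the LDAP deployment is exposed as a Service named <code>ldap</code> on port 389 (names and images are placeholders):</p>
<pre><code>spec:
  initContainers:
    - name: wait-for-ldap
      image: busybox:1.31
      # loop until the LDAP service answers on its port
      command: ['sh', '-c', 'until nc -z ldap 389; do echo "waiting for ldap"; sleep 2; done']
  containers:
    - name: backend
      image: my-backend:latest   # placeholder image
</code></pre>
<p>The backend's main container only starts once the init container exits successfully, i.e. once the LDAP port answers.</p>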
| Vasili Angapov |
<p>I'm creating a HorizontalPodAutoscaler in Kubernetes and I need to configure the downscale stabilization window to be smaller than the default. The code used and error are below:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: busy-autoscaler
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: busy-worker
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
</code></pre>
<pre><code>$ kubectl create -f some-autoscale.yaml
error validating "some-autoscale.yaml": error validating data: ValidationError(HorizontalPodAutoscaler.spec): unknown field "behavior" in io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerSpec
</code></pre>
<p>My understanding is that the <code>behavior</code> field should be supported from Kubernetes 1.17 as stated in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">docs</a>. Running <code>kubectl version</code> gives the following output:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#horizontalpodautoscalerspec-v2beta2-autoscaling" rel="nofollow noreferrer">API reference</a> doesn't have the <code>behavior</code> field for <code>v2beta2</code> which makes this more confusing.</p>
<p>I'm running Minikube 1.6.2 locally. What am I doing wrong?</p>
| cjm | <p>Client Version: v1.20.2
Server Version: v1.18.9-eks-d1db3c</p>
<p>Make sure (via <code>kubectl api-versions</code>) that your cluster supports autoscaling/v2beta2.</p>
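<p>For instance, a quick check could look like this:</p>
<pre><code># verify that the API server exposes the autoscaling/v2beta2 group
kubectl api-versions | grep autoscaling
</code></pre>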
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "ks.fullname" . }}-keycloak
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "ks.fullname" . }}-keycloak
  minReplicas: {{ .Values.keycloak.hpa.minpods }}
  maxReplicas: {{ .Values.keycloak.hpa.maxpods }}
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.keycloak.hpa.memory.averageUtilization }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.keycloak.hpa.cpu.averageUtilization }}
  behavior:
    scaleDown:
      stabilizationWindowSeconds: {{ .Values.keycloak.hpa.stabilizationWindowSeconds }}
      policies:
        - type: Pods
          value: 1
          periodSeconds: {{ .Values.keycloak.hpa.periodSeconds }}
</code></pre>
| Kiruthika kanagarajan |
<p>I would like to list pods created within the last 24 hours. I didn't find any kubectl command or anything else to get those. Could anyone please help me with the kubectl command to get only the pods created in the last 24 hours?</p>
| Vivek Subramani | <p>Not the most beautiful solution but this should work (or give you an idea if you want to further improve the command)</p>
<pre><code>kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}} {{.metadata.creationTimestamp}}{{"\n"}}{{end}}' | awk '$2 > "'$(date -d 'yesterday' -Ins --utc | sed 's/+0000/Z/')'" { print $1 }'
</code></pre>
<p>This lists all pod names and filters the rows whose creation timestamp is newer than one day ago.</p>
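<p>If exact filtering is not required, a simpler related option (not part of the original answer) is to sort pods by creation time and read the newest ones off the bottom of the list:</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp
</code></pre>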
| AndD |
<p>I'm looking to deploy a bare metal k8s cluster. </p>
<p>Typically when I deploy k8s clusters, I have two networks. Control Plane, and Nodes. However, in this cluster I'd like to leverage <a href="https://rook.io/" rel="nofollow noreferrer">rook</a> to present storage (ceph/nfs).</p>
<p>Most advice I get and articles I read say that systems like ceph need their own backend, isolated cluster network for replication etc - <a href="http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/" rel="nofollow noreferrer">ceph reference docs</a>. Moreover, a common datacenter practice is to have a separate network for NFS.</p>
<p>How are these requirements and practices adopted in a k8s world? Can the physical network just be flat, and the k8s SDN does all the heavy lifting here? Do I need to configure network policies and additional interfaces to provide physical segregation for my resources? </p>
| thisguy123 | <p>Ceph best practice is to have separate "cluster network" for replication/rebalancing and client-facing network (so called "public network") which is used by clients (like K8s nodes) to connect to Ceph. Ceph cluster network is totally different from K8s cluster network. Those are simply two different things. Ideally they should live on different NICs and switches/switchports.</p>
<p>If you have separate NICs towards Ceph cluster then you can create interfaces on K8s nodes to interact with Ceph's "public network" using those dedicated NICs. So there will be separate interfaces for K8s management/inter-pod traffic and separate interfaces for storage traffic. </p>
| Vasili Angapov |
<p>Given the following kustomize patch:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:${PASSWORD}@domain.de
</code></pre>
<p>I want to use <code>kubectl apply -k</code> and somehow pass a value for <code>${PASSWORD}</code> which I can set from my build script. </p>
<p>The only solution I got to work so far was replacing the <code>${PASSWORD}</code> with <code>sed</code>, but I would prefer a kustomize solution.</p>
| user2074945 | <p>As @Jonas already suggested you should consider using <code>Secret</code>. It's nicely described in <a href="https://blog.stack-labs.com/code/kustomize-101/" rel="noreferrer">this</a> article.</p>
<blockquote>
<p>I want to use kubectl apply -k and somehow pass a value for
${PASSWORD} which I can set from my build script.</p>
</blockquote>
<p>I guess your script can store the generated password as a variable or save it to some file. You can easily create a <code>Secret</code> as follows:</p>
<pre><code>$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
</code></pre>
<p>or from a file:</p>
<pre><code>$ kustomize edit add secret sl-demo-app --from-file=file/path
</code></pre>
<p>As you can read in the mentioned article:</p>
<blockquote>
<p>These commands will modify your <code>kustomization.yaml</code> and add a
<code>SecretGenerator</code> inside it.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - custom-env.yaml
  - replica-and-rollout-strategy.yaml
secretGenerator:
  - literals:
      - db-password=12345
    name: sl-demo-app
    type: Opaque
</code></pre>
</blockquote>
<p><code>kustomize build</code>, run in your project directory, will create, among others, the following <code>Secret</code>:</p>
<pre><code>apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
</code></pre>
<p>More details you can fine in the <a href="https://blog.stack-labs.com/code/kustomize-101/" rel="noreferrer">article</a>.</p>
<blockquote>
<p>If we want to use this secret from our deployment, we just have, like
before, to add a new layer definition which uses the secret.</p>
<p>For example, this file will mount the db-password value as an environment variable</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: "DB_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: sl-demo-app
                  key: db.password
</code></pre>
</blockquote>
<p>In your <code>Deployment</code> definition file it may look similar to this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          env:
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: git-secret
                  key: git.password
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:${PASSWORD}@domain.de
</code></pre>
| mario |
<p>I'm learning Spark, and I'm getting confused about running a Docker container which contains Spark code on a <code>Kubernetes</code> cluster.</p>
<ul>
<li><p>I read that <code>spark</code> utilizes multiple nodes (servers) and can run code on different nodes, in order to complete jobs faster (and use the memory of each node when the data is too big)</p>
</li>
<li><p>On the other hand, I read that a Kubernetes pod (which contains one or more containers) runs on one node.</p>
</li>
</ul>
<p>For example, I'm running the following <code>spark</code> code from <code>docker</code>:</p>
<pre><code>num = [1, 2, 3, 4, 5]
num_rdd = sc.parallelize(num)
double_rdd = num_rdd.map(lambda x: x * 2)
</code></pre>
<p>Some notes and reminders (from my understanding):</p>
<ul>
<li>When using the <code>map</code> command, each value of the <code>num</code> array maps to a different Spark node (slave worker)</li>
<li><code>k8s</code> pod run on one node</li>
</ul>
<ol>
<li>So I'm confused: how does Spark utilize multiple nodes when the pod runs on one node?</li>
<li>Do the <code>spark slave workers</code> run on different nodes, and is this how the pod which runs the code above can communicate with those nodes in order to utilize the Spark framework?</li>
</ol>
| Boom | <p>When you run Spark on Kubernetes, you have a few ways to set things up. The most common way is to set Spark to run in client-mode.</p>
<p>Basically Spark can run on Kubernetes in a Pod. The application itself, having the endpoints for the k8s masters, is then able to <strong>spawn its own worker Pods</strong>, as long as everything is correctly configured.</p>
<p>What is needed for this setup is to deploy the Spark application on Kubernetes (usually with a StatefulSet but it's not a must) along with a headless ClusterIP Service (which is required to make worker Pods able to communicate with the master application that spawned them)</p>
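<p>A minimal sketch of such a headless Service (the name, label and ports here are assumptions and have to match whatever the Spark driver Pod actually uses):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: spark-driver-headless
spec:
  clusterIP: None          # headless: DNS resolves directly to the driver Pod IP
  selector:
    app: spark-driver      # assumed label on the driver Pod / StatefulSet
  ports:
    - name: driver-rpc
      port: 7077           # assumed value of spark.driver.port
    - name: blockmanager
      port: 7079           # assumed value of spark.driver.blockManager.port
</code></pre>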
<p>You also need to give the Spark application all the correct configurations such as the k8s masters endpoint, the Pod name and other parameters to set things up.</p>
<p>There are other ways to set up Spark; there's no obligation to spawn worker Pods. You can run all the stages of your code locally (and the configuration is easy; if you have small jobs with a small amount of data to execute, you don't need workers)</p>
<p>Or you can execute the Spark application externally from the Kubernetes cluster, so not in a pod, but give it the Kubernetes master endpoints so that it can still spawn workers on the cluster (aka cluster-mode)</p>
<p>You can find a lot more info in the Spark documentation, which explains mostly everything to set things up (<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#client-mode" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#client-mode</a>)
And can read about StatefulSets and their usage of headless ClusterIP Services here (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a>)</p>
| AndD |
<p>My kubernetes application is made of several flavors of nodes, a couple of “schedulers” which send tasks to quite a few more “worker” nodes. In order for this app to work correctly all the nodes must be of exactly the same code version.</p>
<p>The deployment is performed using a standard ReplicaSet and when my CICD kicks in it just does a simple rolling update. This causes a problem though since during the rolling update, nodes of different code versions co-exist for a few seconds, so a few tasks during this time get wrong results.</p>
<p>Ideally what I would want is that deploying a new version would create a completely new application that only communicates with itself and has time to warm its cache, then on a flick of a switch this new app would become active and start to get new client requests. The old app would remain active for a few more seconds and then shut down.</p>
<p>I’m using Istio sidecar for mesh communication.</p>
<p>Is there a standard way to do this? How is such a requirement usually handled?</p>
| shoosh | <p>I also had such a situation. Kubernetes alone cannot satisfy your requirement, and I was also not able to find any tool that allows coordinating multiple deployments together (although <a href="https://flagger.app/" rel="nofollow noreferrer">Flagger</a> looks promising).</p>
<p>So the only way I found was by using CI/CD: Jenkins in my case. I don't have the code, but the idea is the following:</p>
<ol>
<li><p>Deploy all application deployments using single Helm chart. Every Helm release name and corresponding Kubernetes labels must be based off of some sequential number, e.g. Jenkins <code>$BUILD_NUMBER</code>. Helm release can be named like <code>example-app-${BUILD_NUMBER}</code> and all deployments must have label <code>version: $BUILD_NUMBER</code> . Important part here is that your <code>Services</code> should not be a part of your Helm chart because they will be handled by Jenkins.</p>
</li>
<li><p>Start your build with detecting the current version of the app (using bash script or you can store it in <code>ConfigMap</code>).</p>
</li>
<li><p>Start <code>helm install example-app-{$BUILD_NUMBER}</code> with <code>--atomic</code> flag set. Atomic flag will make sure that the release is properly removed on failure. And don't delete previous version of the app yet.</p>
</li>
<li><p>Wait for Helm to complete and in case of success run <code>kubectl set selector service/example-app version=$BUILD_NUMBER</code>. That will instantly switch Kubernetes <code>Service</code> from one version to another. If you have multiple services you can issue multiple <code>set selector</code> commands (each command executes immediately).</p>
</li>
<li><p>Delete previous Helm release and optionally update <code>ConfigMap</code> with new app version.</p>
</li>
</ol>
<p>Depending on your app you may want to run tests on non user facing <code>Services</code> as a part of step 4 (after Helm release succeeds).</p>
<p>Another good idea is to have <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer"><code>preStop</code></a> hooks on your worker pods so that they can finish their jobs before being deleted.</p>
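<p>A minimal sketch of such a hook on a worker container (the drain script is hypothetical, it stands for whatever lets your worker finish in-flight tasks):</p>
<pre><code>containers:
  - name: worker
    image: example/worker:latest      # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/drain.sh"]   # hypothetical drain script
</code></pre>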
| Vasili Angapov |
<p>prometheus:v2.15.2
kubernetes:v1.14.9</p>
<p>I have a query that shows exactly the maximum over time during the set period.
But I would like to join it with the metric already set in the kube_pod_container resource.</p>
<p>I would like to know whether the actual usage is close to the configured value or not, displaying it as a percentage.</p>
<p>I have other examples working with this same structure of metric</p>
<p><code>jvm_memory_bytes_used{instance="url.instance.com.br"} / jvm_memory_bytes_max{area="heap"} * 100 > 80</code></p>
<p>but this one is not working.</p>
<p><code>max_over_time(sum(rate(container_cpu_usage_seconds_total{pod="pod-name-here",container_name!="POD", container_name!=""}[1m])) [1h:1s]) / kube_pod_container_resource_requests_cpu_cores * 100 < 70</code></p>
<p>Well the first idea was to create a query to collect the maximum historical cpu usage of a container in a pod in a brief period:</p>
<p><code>max_over_time(sum(rate(container_cpu_usage_seconds_total{pod="xpto-92838241",container_name!="POD", container_name!=""}[1m])) [1h:1s])</code></p>
<p><strong>Element</strong>: {} <strong>Value:</strong> 0.25781324101515</p>
<p>If we execute it this way:</p>
<p><code>container_cpu_usage_seconds_total{pod="xpto-92838241",container_name!="POD", container_name!=""}</code></p>
<p><strong>Element:</strong> container_cpu_usage_seconds_total{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_instance_type="t3.small",beta_kubernetes_io_os="linux",cluster="teste.k8s.xpto",container="xpto",container_name="xpto",cpu="total",failure_domain_beta_kubernetes_io_region="sa-east-1",failure_domain_beta_kubernetes_io_zone="sa-east-1c",generic="true",id="/kubepods/burstable/poda9999e9999e999e9-/99999e9999999e9",image="nginx",instance="kubestate-dev.internal.xpto",job="kubernetes-cadvisor",kops_k8s_io_instancegroup="nodes",kubernetes_io_arch="amd64",kubernetes_io_hostname="ip-99-999-9-99.sa-east-1.compute.internal",kubernetes_io_os="linux",kubernetes_io_role="node",name="k8s_nginx_nginx-99999e9999999e9",namespace="nmpc",pod="pod-92838241",pod_name="pod-92838241",spot="false"} <strong>Value:</strong> 22533.2</p>
<p>Now we have what is configured:</p>
<p><code>kube_pod_container_resource_requests_cpu_cores{pod="xpto-92838241"}</code></p>
<p><strong>Element:</strong> kube_pod_container_resource_requests_cpu_cores{container="xpto",instance="kubestate-dev.internal.xpto",job="k8s-http",namespace="nmpc",node="ip-99-999-999-99.sa-east-1.compute.internal",pod="pod-92838241"} <strong>Value:</strong> 1</p>
<p>Well, my idea was to use these two metrics and get the percentage in a similar way, like this:</p>
<p><code>max_over_time(sum(rate(container_cpu_usage_seconds_total{pod="xpto-dev-92838241",container_name!="POD", container_name!=""}[1m])) [1h:1s]) / kube_pod_container_resource_requests_cpu_cores * 100 < 70</code></p>
<p><strong>Element:</strong> <em>no data</em> <strong>Value:</strong> </p>
<p>But these two metrics do not interact; I cannot understand why, and I cannot find anything about it in the documentation.</p>
<p>Regards</p>
| Vinicius Peres | <p>As you can see <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#removed-metrics" rel="nofollow noreferrer">here</a>, only in <strong>Kubernetes 1.16</strong> <em>cadvisor metric labels <code>pod_name</code> and <code>container_name</code></em> were removed and substituted by <code>pod</code> and <code>container</code> respectively.
As you are using <strong>Kubernetes 1.14</strong>, you should still use <code>pod_name</code> and <code>container_name</code>.</p>
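<p>So, as a sketch, the cadvisor side of your expression should select on the old label names (keeping the question's placeholder pod name):</p>
<pre><code>container_cpu_usage_seconds_total{pod_name="xpto-92838241",container_name!="POD",container_name!=""}
</code></pre>
<p>while <code>kube_pod_container_resource_requests_cpu_cores</code>, coming from kube-state-metrics, keeps using the <code>pod</code> label, as shown in your own output above.</p>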
<p>Let me know if it helps.</p>
| mario |
<p>Is there a way to run a scheduled job that would pull some files regularly onto a mounted shared volume?</p>
<p>I have tried a CronJob, but apparently it doesn't support external filesystems.</p>
<p>Thanks in advance.</p>
| zozo6015 | <p>CronJobs should be able to mount a PVC just as any other resource which spawns Pods; you can just add a <code>volumeMounts</code> section under the <code>container</code> template, and then a <code>volumes</code> section under <code>template</code>.</p>
<p>Something like the following:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-name
spec:
  schedule: '0 * * * *'
  jobTemplate:
    spec:
      completions: 1
      template:
        spec:
          containers:
            - name: example-container-name
              image: your-docker-repo/your-docker-image:the-tag
              volumeMounts:
                - name: data
                  mountPath: /internal/path/to/mount/pvc
          restartPolicy: OnFailure   # Job pods need Never or OnFailure
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: example-claim
</code></pre>
<p>This should mount <code>example-claim</code> PVC to the CronJob's Pod when the Pod is spawned.</p>
<p>Basically there are two sections: under each container, volumeMounts lists the volumes mounted by the container, at which path, and a few more configuration options. All the volumeMounts entries should be defined once in the volumes section, which associates names (that act as keys for the spec) with claims or empty-dirs.</p>
<p>As for creating the PVC, let me link you the documentation (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a>)</p>
<p>What you want to do basically is to create a Persistent Volume which points to your mounted shared volume (what is it, an NFS storage? The declaration changes slightly, depending on what exactly you want to mount) and then a Claim (PVC) in the same namespace as the CronJob, which will be bound to the PV.</p>
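<p>For instance, if the shared volume happens to be an NFS export, a minimal PV + PVC sketch could look like this (server, path, size and names are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com      # placeholder NFS server
    path: /exported/share        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim            # matches the claimName used in the CronJob above
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the manually created PV, not a dynamic class
  volumeName: shared-data-pv
  resources:
    requests:
      storage: 10Gi
</code></pre>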
<p>If you are unsure about the correct indentation of the various objects or where to put things, check the practical API Reference Docs (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#cronjob-v1beta1-batch" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#cronjob-v1beta1-batch</a>)</p>
| AndD |
<p>I'm trying to understand what happens when a container is configured with a CPU request and without a limit, and it tries to use more CPU than requested while the node is fully utilized, but there is another node with available resources.</p>
<p>Will k8s keep the container throttled on its current node, or will it be moved to another node with available resources? Do we know how/when k8s decides to move the container when it's throttled in such a case?</p>
<p>I would appreciate any extra resources to read on this matter, as I couldn't find anything that goes into detail for this specific scenario.</p>
| Trickster | <p><strong>Q1) what happens when a container is configured with a CPU request and without a limit ?</strong></p>
<p><strong>ANS:</strong></p>
<p><strong>If you do not specify a CPU limit</strong></p>
<p>If you do not specify a CPU limit for a Container, then one of these situations applies:</p>
<p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.</p>
<p>The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.</p>
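<p>As a minimal sketch, the scenario from the question (a CPU request with no limit) looks like this in a Pod spec:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-only
spec:
  containers:
    - name: app
      image: nginx                 # placeholder image
      resources:
        requests:
          cpu: "250m"              # guaranteed share, used for scheduling
        # no limits.cpu: the container may use any spare CPU on its node
</code></pre>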
<p><strong>If you specify a CPU limit but do not specify a CPU request</strong></p>
<p>If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.</p>
<p><strong>Q2) it tries to use more CPU than requested while the node is fully utilized, but there is another node with available resources?</strong></p>
<p><strong>ANS:</strong></p>
<p>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/#:%7E:text=kube%2Dscheduler-,kube%2Dscheduler,-Synopsis" rel="nofollow noreferrer">Kubernetes scheduler</a> is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. See scheduling for more information about scheduling and the kube-scheduler component.</p>
<p><strong>Scheduling, Preemption and Eviction</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/#:%7E:text=Preemption%20and%20Eviction-,Scheduling%2C%20Preemption%20and%20Eviction,-In%20Kubernetes%2C%20scheduling" rel="nofollow noreferrer">In Kubernetes</a>, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Preemption is the process of terminating Pods with lower Priority so that Pods with higher Priority can schedule on Nodes. Eviction is the process of terminating one or more Pods on Nodes.</p>
<p><strong>Q3) Will k8s keep the container throttled in its current node or will it be moved to another node with available resources?</strong></p>
<p><strong>ANS:</strong></p>
<p><strong>Pod Disruption</strong></p>
<p>Pod disruption is the process by which Pods on Nodes are terminated either voluntarily or involuntarily.</p>
<p>Voluntary disruptions are started intentionally by application owners or cluster administrators. Involuntary disruptions are unintentional and can be triggered by unavoidable issues like Nodes running out of resources, or by accidental deletions.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions" rel="nofollow noreferrer">Voluntary and involuntary disruptions</a>
Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.</p>
<p>We call these unavoidable cases involuntary disruptions to an application.</p>
<p><strong>Examples are:</strong></p>
<ul>
<li>a hardware failure of the physical machine backing the node</li>
<li>cluster administrator deletes VM (instance) by mistake</li>
<li>cloud provider or hypervisor failure makes VM disappear</li>
<li>a kernel panic</li>
<li>the node disappears from the cluster due to cluster network partition</li>
<li>eviction of a pod due to the node being <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">out-of-resources</a>.</li>
</ul>
<p><strong>Suggestion:</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a> work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.</p>
<p><strong>Command:</strong></p>
<pre><code>kubectl taint nodes node1 key1=value1:NoSchedule
</code></pre>
<p><strong><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">Example</a>:</strong></p>
<pre><code>kubectl taint nodes node1 key1=node.kubernetes.io/disk-pressure:NoSchedule
</code></pre>
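<p>A Pod tolerating the first taint above would carry a matching toleration in its spec, for example:</p>
<pre><code>tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
</code></pre>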
| MadProgrammer |
<p>I want to set up a Kubernetes cluster with multiple Helm charts installed. I like the idea of having configurations versioned in a Git repository. I'm wondering if there is any tool (or recommended best practice) for how the state of installed Helm charts can be "versioned".</p>
<p>For example, I would like to have a yaml file similar to the following example with multiple helm charts and a tool (that's the tool I'm searching for) which will take care of applying this file to my Kubernetes cluster:</p>
<pre><code>- name: gitlab
  chart: gitlab/gitlab-runner
  repository: https://charts.gitlab.io
  values:
    - gitlab-runner/values.yaml
    - local/gitlab-runner-override.yaml
  namespace: gitlab-runner
- name: metallb
  chart: stable/metallb
  values:
    - metallb/configuration.yaml
...
</code></pre>
<p>This way it is possible to manage the contents of the Kubernetes cluster in a programmatic way.</p>
<p>Any recommendations?</p>
| Matthias Lohr | <p>It looks like <a href="https://github.com/roboll/helmfile" rel="nofollow noreferrer">helmfile</a> is the solution you need:</p>
<blockquote>
<p>Helmfile is a declarative spec for deploying helm charts. It lets
you...</p>
<p>Keep a directory of chart value files and maintain changes in version
control. Apply CI/CD to configuration changes. Periodically sync to
avoid skew in environments.</p>
</blockquote>
<p>You can read more about it in <a href="https://medium.com/@naseem_60378/helmfile-its-like-a-helm-for-your-helm-74a908581599" rel="nofollow noreferrer">this</a> article.</p>
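<p>As a sketch, the desired file from the question maps almost one-to-one onto a <code>helmfile.yaml</code> (repository names and file paths kept from the question):</p>
<pre><code>repositories:
  - name: gitlab
    url: https://charts.gitlab.io

releases:
  - name: gitlab-runner
    namespace: gitlab-runner
    chart: gitlab/gitlab-runner
    values:
      - gitlab-runner/values.yaml
      - local/gitlab-runner-override.yaml
  - name: metallb
    chart: stable/metallb
    values:
      - metallb/configuration.yaml
</code></pre>
<p>Running <code>helmfile apply</code> then reconciles the cluster with whatever is committed in the repository.</p>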
| mario |
<p>My Helm chart (run by Rancher pipeline)'s structure is:</p>
<pre><code>charts/web/0.3.0/templates
  deployment.yaml
  service.yaml
  ingress.yaml
  webinspect-runner.yaml
</code></pre>
<p>The <code>deployment.yaml</code> deploys a web server (say flask-restful or django), and I set a service to it:</p>
<h2>service.yaml</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: {{ include "web.name" . }}-svc
  labels:
    {{- include "iiidevops.labels" . | nindent 4 }}
spec:
  type: NodePort
  selector:
    app: {{ include "web.name" . }}
  ports:
    - port: {{ .Values.web.port }}
      protocol: TCP
</code></pre>
<p>And I want to create a web scan (by webinspect) service to scan it, so I write this:</p>
<h2>webinspect-runner.yaml</h2>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": post-install,post-upgrade,post-rollback
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  name: "{{ .Release.Name }}-webinspect-{{ .Values.pipeline.sequence }}"
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-webinspect-{{ .Values.pipeline.sequence }}"
      labels:
        {{- include "iiidevops.labels" . | nindent 8 }}
    spec:
      containers:
        - name: "webinspect-runner-{{ .Values.pipeline.sequence }}"
          image: "{{ .Values.harbor.host }}/{{ .Values.harbor.cache }}/iiiorg/webinspect-runner:{{ .Values.webinspect.runnerVersion }}"
          command: ['sh', '-c', 'cd /usr/src/app; node /usr/src/app/app.js;']
          env:
            - name: inspect_url
              value: http://{{ .Release.Name }}.{{ .Values.ingress.externalHost }}{{ .Values.webinspect.startPath }}
            ...
      restartPolicy: Never
  backoffLimit: 4
</code></pre>
<p>The point is the <code>inspect_url</code> env variable. I currently use an external domain and an Ingress to redirect it, but that requires a domain name.</p>
<p>Since <code>kubectl</code> can get the service's IP and port like <code>http://10.50.40.99:32147</code>, can I also write something to let <code>inspect_url</code> also become <code>http://10.50.40.99:32147</code>?</p>
| Romulus Urakagi Ts'ai | <p>I'm not sure if you need the scan to be external or if you are okay with an internal scan as well, but either way, you could obtain IP and port by manually requesting them at creation time.</p>
<p><strong>If it is okay to use internal IP and port:</strong></p>
<p>First, for the ClusterIP assigned to a service, it is possible to manually require it (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address</a>)</p>
<p>You just need to specify the <code>.spec.clusterIP</code> field of the Service resource.</p>
<p>Second, for the port you can just use the internal one, the one that you define with:</p>
<pre><code>- port: {{ .Values.web.port }}
</code></pre>
<p><strong>If you want to scan it externally at the NodePort port:</strong></p>
<p>First, you can scan the service via whichever node of the cluster, since the NodePort service will expose the communication on all nodes' IP addresses. For the IP you can then just choose the address of one of the cluster's nodes.</p>
<p>Instead, for the external port, you can specify it manually, simply by adding the optional field:</p>
<pre><code># By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: your_port_here
</code></pre>
<p>in the Service declaration.</p>
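<p>Applied to the question's <code>service.yaml</code>, both choices together could look like this sketch (the concrete IP and port are placeholders and must fall inside your cluster's service CIDR and NodePort range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: {{ include "web.name" . }}-svc
spec:
  type: NodePort
  clusterIP: 10.43.0.100           # placeholder, must be inside the service CIDR
  selector:
    app: {{ include "web.name" . }}
  ports:
    - port: {{ .Values.web.port }}
      protocol: TCP
      nodePort: 30080              # placeholder, default range is 30000-32767
</code></pre>
<p>The scan job could then reach the application at the cluster IP and port internally, or at any node IP plus the nodePort externally, without needing a domain name.</p>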
| AndD |
<p>We have 2 clusters on <strong>GKE</strong>: <code>dev</code> and <code>production</code>. I tried to run this command on <code>dev</code> cluster:</p>
<pre><code>gcloud beta container clusters update "dev" --update-addons=NodeLocalDNS=ENABLED
</code></pre>
<p>And everything went great, node-local-dns pods are running and all works, next morning I decided to run same command on <code>production</code> cluster and node-local-dns fails to run, and I noticed that both <strong>PILLAR__LOCAL__DNS</strong> and <strong>PILLAR__DNS__SERVER</strong> in yaml aren't changed to proper IPs, I tried to change those variables in config yaml, but <strong>GKE</strong> keeps overwriting them back to yaml with <strong>PILLAR__DNS__SERVER</strong> variables... </p>
<p>The only difference between clusters is that <code>dev</code> runs on <code>1.15.9-gke.24</code> and production <code>1.15.11-gke.1</code>.</p>
| Kikun | <p><strong>Apparently <code>1.15.11-gke.1</code> version has a bug.</strong></p>
<p>I recreated it first on <code>1.15.11-gke.1</code> and can confirm that <code>node-local-dns</code> <code>Pods</code> fall into <code>CrashLoopBackOff</code> state:</p>
<pre><code>node-local-dns-28xxt 0/1 CrashLoopBackOff 5 5m9s
node-local-dns-msn9s 0/1 CrashLoopBackOff 6 8m17s
node-local-dns-z2jlz 0/1 CrashLoopBackOff 6 10m
</code></pre>
<p>When I checked the logs:</p>
<pre><code>$ kubectl logs -n kube-system node-local-dns-msn9s
2020/04/07 21:01:52 [FATAL] Error parsing flags - Invalid localip specified - "__PILLAR__LOCAL__DNS__", Exiting
</code></pre>
<h3>Solution:</h3>
<p><strong>Upgrade to <code>1.15.11-gke.3</code> helped.</strong> First you need to upgrade your <strong>master-node</strong> and then your <strong>node pool</strong>. It looks like on this version everything runs nice and smoothly:</p>
<pre><code>$ kubectl get daemonsets -n kube-system node-local-dns
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
node-local-dns 3 3 3 3 3 addon.gke.io/node-local-dns-ds-ready=true 44m
$ kubectl get pods -n kube-system -l k8s-app=node-local-dns
NAME READY STATUS RESTARTS AGE
node-local-dns-8pjr5 1/1 Running 0 11m
node-local-dns-tmx75 1/1 Running 0 19m
node-local-dns-zcjzt 1/1 Running 0 19m
</code></pre>
<p>When it comes to manually fixing this particular daemonset <code>yaml</code> file, I wouldn't recommend it, as you can be sure that <strong>GKE's</strong> <strong>auto-repair</strong> and <strong>auto-upgrade</strong> features will overwrite it sooner or later anyway.</p>
<p>I hope it was helpful.</p>
| mario |
<p>Pod A uses the ClusterIP service type, so incoming requests from external sources are not allowed.
Pod A executes outgoing requests to 3rd party services (such as Google APIs).
And I want to specify, on Google's side, the IP address that these requests come from, for security reasons.</p>
<p>Is there a way to find the IP address this pod uses for outgoing HTTP requests?</p>
| Cod3n0d3 | <p>If it is a public cluster, where each node in the cluster has a public IP address, the public IP will be the address of the node the pod is on.
If it is a private cluster, you can deploy a NAT gateway for all the nodes and specify static IP addresses.</p>
<p>You can use this Terraform module for a private cluster:
<a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster" rel="nofollow noreferrer">https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster</a></p>
<p>Plus a NAT gateway from here:
<a href="https://cloud.google.com/nat/docs/gke-example#terraform" rel="nofollow noreferrer">https://cloud.google.com/nat/docs/gke-example#terraform</a></p>
| mn0o7 |
<p>I have installed a Kubernetes cluster using three VMs in VirtualBox. My host machine's IP is 192.168.50.166, and the node information in the cluster is:</p>
<pre><code>vm1 192.168.50.28 worker-node-1
vm2 192.168.50.29 worker-node-2
vm3 192.168.50.30 master-node
</code></pre>
<p>If I can have a single public IP (140.112.1.1) on my host machine, how can I expose my services like</p>
<pre><code>http://140.112.1.1:xxxx/services
</code></pre>
<p>I think maybe I should buy another network interface for my host machine and assign the public IP to this interface, but I don't know how to make it communicate with my cluster.</p>
| tom | <p>What you are searching for is the LoadBalancer service and the Ingress resource.</p>
<p>Kubernetes can offer LoadBalancing service (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a>), which basically acts as a way to transfer traffic from an external LoadBalancer to the backend Pods.</p>
<p>If you are hosting your Kubernetes cluster on top of a cloud service (Azure, Google and so on), there are a good amount of chances that you already have something that provides Load Balancer functionalities for your cluster.</p>
<p>If that's not the case and you want to host a Load Balancer service on top of the Kubernetes cluster, so that all your nodes participate in serving the public IP (or more than one public IP), a common approach is to deploy MetalLB on Kubernetes (<a href="https://metallb.universe.tf/" rel="nofollow noreferrer">https://metallb.universe.tf/</a>)</p>
<p>Second, by using Ingress resources (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>), it is possible to manage external access to different Pods (and Services) based on the path of the request, typically HTTP or HTTPS.</p>
<p>It's basically a way to route incoming traffic to different Services (and different Pods) in the cluster, based on the path of the request, plus can offer SSL and a lot of other functionalities.</p>
<p>A common approach to serve Ingress resources on a cluster, is by using NGINX Ingress (<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a>)</p>
<p>With the combination of LoadBalancer + Ingress you can expose all your services behind a LoadBalancer attached to an external IP address, everything nicely in HTTP or HTTPS with certificates and so on.</p>
<p>With the supposition that you are hosting your Kubernetes cluster on almost-bare-metal (normal VMs like if they were bare-metal machines), you could:</p>
<ul>
<li>Have the public address you have at your disposal available for the VMs to use on their network interfaces</li>
<li>Install MetalLB on the cluster; this will provide internal load balancing, and you can specify which IP range (or single IP) it can use (a configuration sketch follows this list)</li>
<li>Install NGINX Ingress on the cluster, this will provide support for Ingress resources. When installing this, the nginx-controller should receive the external IP in LoadBalancing by MetalLB</li>
<li>Lastly, create an Ingress to serve all the services that you want under the paths that you want. If everything works correctly, the nginx-controller should start serving your services on the external IP address</li>
</ul>
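<p>As a sketch of the MetalLB step above, a layer-2 configuration handing out the single public IP from the question could look like this (the legacy ConfigMap format; the pool name is arbitrary):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: public
        protocol: layer2
        addresses:
          - 140.112.1.1/32
</code></pre>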
| AndD |
<p>I installed kubectl and tried to enable shell autocompletion for zsh.
When I'm using <code>kubectl</code>, autocompletion works fine. However, when I try to use autocompletion with the alias <code>k</code>, the shell returns:</p>
<pre><code>k g...(eval):1: command not found: __start_kubectl 8:45
(eval):1: command not found: __start_kubectl
(eval):1: command not found: __start_kubectl
</code></pre>
<p>In my <code>.zshrc</code> file I have:</p>
<pre><code>source <(kubectl completion zsh)
alias k=kubectl
compdef __start_kubectl k
</code></pre>
| Dawid Krok | <p>Can you try this one:</p>
<pre><code>compdef _kubectl k
</code></pre>
| Vasili Angapov |
<p>I am creating a deployment using <a href="https://pastebin.com/nCLTigF7" rel="nofollow noreferrer">this yaml</a> file. It creates a replica of 4 busybox pods. All fine till here.</p>
<p>But when I edit this deployment using the command <code>kubectl edit deployment my-dep2</code>, only changing the version of busybox image to 1.31 (a downgrade but still an update from K8s point of view), the ReplicaSet is not completely replaced.</p>
<p>The output of <code>kubectl get all --selector app=my-dep2</code> post the edit is:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/my-dep2-55f67b974-5k7t9 0/1 ErrImagePull 2 5m26s
pod/my-dep2-55f67b974-wjwfv 0/1 CrashLoopBackOff 2 5m26s
pod/my-dep2-dcf7978b7-22khz 0/1 CrashLoopBackOff 6 12m
pod/my-dep2-dcf7978b7-2q5lw 0/1 CrashLoopBackOff 6 12m
pod/my-dep2-dcf7978b7-8mmvb 0/1 CrashLoopBackOff 6 12m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-dep2 0/4 2 0 12m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-dep2-55f67b974 2 2 0 5m27s
replicaset.apps/my-dep2-dcf7978b7 3 3 0 12m
</code></pre>
<p>As you can see from the output above, there are 2 ReplicaSets existing in parallel. I expected the old ReplicaSet to be completely replaced by the new ReplicaSet (containing the 1.31 version of busybox). But this is not happening. What am I missing here?</p>
| Gautam Somani | <p>This is a totally normal, expected result, related to the <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">Rolling Update</a> mechanism in <strong>Kubernetes</strong>.</p>
<p>Take a quick look at the following working example, in which I used <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">sample nginx <code>Deployment</code></a>. Once it's deployed, I run:</p>
<pre><code>kubectl edit deployments.apps nginx-deployment
</code></pre>
<p>and removed the image tag, which is actually equivalent to performing an update to <code>nginx:latest</code>. Immediately after applying the change you can see the following:</p>
<pre><code>$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-bvmln 0/1 Terminating 0 2m6s
pod/nginx-deployment-574b87c764-zfzmh 1/1 Running 0 2m6s
pod/nginx-deployment-574b87c764-zskkk 1/1 Running 0 2m7s
pod/nginx-deployment-6fcf476c4-88fdm 0/1 ContainerCreating 0 1s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 3s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 2 3 2m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 2 2 2 2m7s
replicaset.apps/nginx-deployment-6fcf476c4 2 2 1 3s
</code></pre>
<p>As you can see, at a certain point in time there are running pods in both replicasets. It's because of the mentioned rolling update mechanism, which ensures your app's availability while it is being updated.</p>
<p>When the update process ends, the replica count in the old <code>replicaset</code> is reduced to 0, so there are no running pods managed by this <code>replicaset</code>, as the new one has achieved its desired state:</p>
<pre><code>$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-6fcf476c4-88fdm 1/1 Running 0 10s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 12s
pod/nginx-deployment-6fcf476c4-db5z7 1/1 Running 0 8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 2m16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 0 0 0 2m16s
replicaset.apps/nginx-deployment-6fcf476c4 3 3 3 12s
</code></pre>
<p>You may ask yourself: why is it still there? Why is it not deleted immediately after the new one becomes ready? Try the following:</p>
<pre><code>$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
</code></pre>
<p>As you can see, there are 2 revisions of our rollout for this deployment. So now we may want to simply undo this recent change:</p>
<pre><code>$ kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back
</code></pre>
<p>Now, when we look at our ReplicaSets we can observe the reverse process:</p>
<pre><code>$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-6j7l5 0/1 ContainerCreating 0 1s
pod/nginx-deployment-574b87c764-m7956 1/1 Running 0 4s
pod/nginx-deployment-574b87c764-v2r75 1/1 Running 0 3s
pod/nginx-deployment-6fcf476c4-88fdm 0/1 Terminating 0 3m25s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 3m27s
pod/nginx-deployment-6fcf476c4-db5z7 0/1 Terminating 0 3m23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 5m31s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 3 3 2 5m31s
replicaset.apps/nginx-deployment-6fcf476c4 1 1 1 3m27s
</code></pre>
<p>Note that there is no need to create a 3rd <code>replicaset</code>, as the old one is still there and can be used to undo our recent change. The final result looks as follows:</p>
<pre><code>$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-6j7l5 1/1 Running 0 40s
pod/nginx-deployment-574b87c764-m7956 1/1 Running 0 43s
pod/nginx-deployment-574b87c764-v2r75 1/1 Running 0 42s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 6m10s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 3 3 3 6m10s
replicaset.apps/nginx-deployment-6fcf476c4 0 0 0 4m6s
</code></pre>
<p>I hope the above example helped you understand why this old <code>replicaset</code> isn't immediately removed and what it can still be useful for.</p>
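<p>As a side note: if you want to limit how many of these old ReplicaSets Kubernetes keeps around for rollbacks, you can set <code>spec.revisionHistoryLimit</code> on the Deployment (the default is 10). A minimal sketch; the labels and the sleep command are assumptions, adapt them to your actual spec:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep2
spec:
  revisionHistoryLimit: 2   # keep only the 2 most recent old ReplicaSets around
  replicas: 3
  selector:
    matchLabels:
      app: my-dep2
  template:
    metadata:
      labels:
        app: my-dep2
    spec:
      containers:
      - name: busybox
        image: busybox:1.31
        command: ["sleep", "3600"]   # assumption: keeps the container running
</code></pre>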
| mario |
<p>I'm trying to get access to my kubernetes cluster in my self hosted gitlab instance as it is described in the <a href="https://gitlab.jaqua.de/help/user/project/clusters/deploy_to_cluster.md#deployment-variables" rel="noreferrer">docs</a>.</p>
<pre><code>deploy:
stage: deployment
script:
- kubectl create secret docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_DEPLOY_USER" --docker-password="$CI_DEPLOY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL" -o yaml --dry-run=client | kubectl apply -f -
</code></pre>
<p>But I do get the error</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret"
Name: "gitlab-registry", Namespace: "gitlab"
from server for: "STDIN": secrets "gitlab-registry" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "secrets" in API group "" in the namespace "gitlab"
</code></pre>
<p>I do not understand the error. Why do I get a forbidden error?</p>
<hr />
<p><em>Update</em></p>
<p>The kubernetes cluster is integrated in gitlab at instance level.</p>
<p>But running <code>kubectl config view</code> in the CI pipeline gives me</p>
<pre><code>apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
</code></pre>
<hr />
<p><em>Update2</em></p>
<p>Thanks to AndD, the secret can be created with this role / service account:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: gitlab
name: gitlab-deploy
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: use-secrets
namespace: gitlab
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab
roleRef:
kind: ClusterRole
name: gitlab-deploy
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>But running a simple apply for this namespace.yaml file</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: myns
</code></pre>
<p>gives me a similar error:</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=namespaces", GroupVersionKind: "/v1, Kind=Namespace"
Name: "myns", Namespace: ""
from server for: "namespace.yaml": namespaces "myns" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "namespaces" in API group "" in the namespace "myns"
</code></pre>
<p>I used ClusterBinding to get this working even for a different namespace. What am I doing wrong?</p>
| user3142695 | <p>Kubernetes makes use of a Role-based access control (RBAC) to prevent Pods and Users from being able to interact with resources in the cluster, unless they are not authorized.</p>
<p>From the error, you can see that GitLab is trying to use the <code>secrets</code> resource and that it is doing so as the <code>default</code> <code>ServiceAccount</code> in its namespace.</p>
<p>This means that GitLab is not configured to use a particular ServiceAccount, so it falls back to the default one (there's a <code>default</code> service account in each namespace of the cluster).</p>
<hr />
<p>You can grant permissions to service accounts by using <code>Role</code> / <code>ClusterRole</code> and <code>RoleBinding</code> / <code>ClusterRoleBinding</code>.</p>
<p>Roles or ClusterRoles describe permissions. For example, a Role could be:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: gitlab
name: secret-user
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>and this states that "whoever has this role, can do whatever (all the verbs) with secrets <strong>but only</strong> in the namespace <code>gitlab</code>"</p>
<p>If you want to give generic permissions in all namespaces, you can use a ClusterRole instead, which is very similar.</p>
<p>Once the Role is created, you then can attach it to a User, a Group or a ServiceAccount, for example:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: use-secrets
namespace: gitlab
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab
roleRef:
# "roleRef" specifies the binding to a Role / ClusterRole
kind: Role # this must be Role or ClusterRole
name: secret-user # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>and this bind the role previously created to the <code>ServiceAccount</code> called default in the namespace <code>gitlab</code>.</p>
<p>Then, <strong>all</strong> Pods running in the namespace <code>gitlab</code> and using the <code>default</code> service account, will be able to use <code>secrets</code> (use the verbs listed in the Role) <strong>but only</strong> in the namespace specified by the Role.</p>
<hr />
<p>As you can see, this aspect of Kubernetes is pretty complex and powerful, so have a look at the docs because they explain things <strong>really</strong> well and are also full of examples:</p>
<p>Service Accounts - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
<p>RBAC - <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
<p>A list of RBAC resources - <a href="https://stackoverflow.com/questions/57872201/how-to-refer-to-all-subresources-in-a-role-definition">How to refer to all subresources in a Role definition?</a></p>
<hr />
<p><strong>UPDATE</strong></p>
<p>You are doing nothing wrong. It's just that you are trying to use the <code>namespace</code> resource, but GitLab has no binding that gives access to that resource type. With your <code>ClusterRole</code> you only gave it access to <code>secrets</code>, nothing more.</p>
<p>Consider giving the ClusterRole more permissions, changing it to list all resources that you need to access:</p>
<pre><code>rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "namespaces", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>For example this will give access to secrets, namespaces and Pods.</p>
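<p>You can also verify whether the permissions are in place for the service account, without going through the pipeline, by using <code>kubectl auth can-i</code> and impersonating it:</p>
<pre><code>kubectl auth can-i get secrets --as=system:serviceaccount:gitlab:default -n gitlab
kubectl auth can-i create namespaces --as=system:serviceaccount:gitlab:default
</code></pre>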
<p>As an alternative, you can bind Gitlab's service account to <code>cluster-admin</code> to directly give it access to <strong>everything</strong>.</p>
<pre><code>kubectl create clusterrolebinding gitlab-is-now-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=gitlab:default
</code></pre>
<p>Before doing so tho, consider the following:</p>
<blockquote>
<p>Fine-grained role bindings provide greater security, but require more
effort to administrate. Broader grants can give unnecessary (and
potentially escalating) API access to ServiceAccounts, but are easier
to administrate.</p>
</blockquote>
<p>So, it is way more secure to first decide which resources can be used by Gitlab and then create a Role / ClusterRole giving access to only those resources (and with the verbs that you need)</p>
| AndD |
<p>I would like to route traffic to pods based on headers - with a fallback.</p>
<p>The desired result would be a k8s cluster where multiple versions of the same service could be deployed and routed to using header values.</p>
<p>svcA
svcB
svcC</p>
<p>Each of these services (the main branch of the git repo) would be deployed either to the default namespace or labelled 'main'. Any feature branch of each service can also be deployed, either into its own namespace or labelled with the branch name.</p>
<p>Ideally, by setting a header <code>X-svcA</code> to a value matching a branch name, we would route any traffic to the matching namespace or label. If there is no such namespace or label, route the traffic to the default (main) pod.</p>
<pre><code>if HEADERX && svcX:label
route->svcX:label
else
route->svcX
</code></pre>
<p>The first question - is this (or something like it) even possible with Istio or Linkerd?</p>
| Ian Wood | <p>You can do that using Istio <code>VirtualService</code></p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
</code></pre>
<p>Read more <a href="https://istio.io/latest/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity" rel="nofollow noreferrer">here</a>.</p>
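<p>Adapted to the naming in the question, a hedged sketch could look like the following. It assumes the branch deployments of svcA sit behind the same Kubernetes Service <code>svca</code> and are distinguished by a <code>version</code> label; the names, labels and the branch value <code>feature-x</code> are placeholders:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: svca
spec:
  host: svca
  subsets:
  - name: main
    labels:
      version: main
  - name: feature-x
    labels:
      version: feature-x
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: svca
spec:
  hosts:
  - svca
  http:
  - match:
    - headers:
        x-svca:            # header names are matched lowercase
          exact: feature-x
    route:
    - destination:
        host: svca
        subset: feature-x
  - route:                 # fallback when the header does not match
    - destination:
        host: svca
        subset: main
</code></pre>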
| Vasili Angapov |
<p>I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube.<br />
I followed the instructions on the official GitHub, so:</p>
<ul>
<li>Install the CRD</li>
<li>Install the necessary roles and role-bindings</li>
<li>Install the Operator</li>
<li>Deploy the ReplicaSet</li>
</ul>
<p>By default, the operator creates three pods, each of them automatically linked to a new persistent volume claim bound to a new persistent volume also created by the operator (so far so good).</p>
<p>However, I would like the data to be saved in a specific volume, mounted at a specific host path. So I would need to create three persistent volumes, each mounted at a specific host path, and then configure the replica set so that each pod automatically connects to its respective persistent volume (perhaps using the matchLabels selector).
So I created three volumes by applying the following file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-00
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/00"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-01
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/01"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-02
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/02"
</code></pre>
<p>and then I set up the replica set configuration file in the following way, but it still fails to connect the pods to the volumes:</p>
<pre><code>apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongo-rs
namespace: $NAMESPACE
spec:
members: 3
type: ReplicaSet
version: "4.4.0"
persistent: true
podSpec:
persistence:
single:
labelSelector:
matchLabels:
type: local
service: mongo
storage: 5Gi
storageClass: manual
statefulSet:
spec:
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: [ "ReadWriteOnce", "ReadWriteMany" ]
resources:
requests:
storage: 5Gi
selector:
matchLabels:
type: local
service: mongo
storageClassName: manual
security:
authentication:
modes: ["SCRAM"]
users:
- ...
additionalMongodConfig:
storage.wiredTiger.engineConfig.journalCompressor: zlib
</code></pre>
<p>I can't find any documentation online, except the <a href="https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_custom_volume_cr.yaml" rel="nofollow noreferrer">mongodb.com_v1_custom_volume_cr.yaml</a>, has anyone faced this problem before? How could I make it work?</p>
| Federico Barusco | <p>I think you could be interested in using the <code>local</code> volume type. It works like this:</p>
<p><strong>First</strong>, you create a storage class for the local volumes. Something like the following:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Since it has <code>no-provisioner</code>, it will be usable only if you manually create local PVs. <code>WaitForFirstConsumer</code>, instead, prevents binding a PV to a PVC of a Pod which cannot be scheduled on the host on which the PV is available.</p>
<p><strong>Second</strong>, you create the local PVs, similarly to how you created them in your example. Something like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /path/on/the/host
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- the-node-hostname-on-which-the-storage-is-located
</code></pre>
<p>Notice the definition: it specifies the path on the host, the capacity, and on which node of the cluster such a PV can be used (with the <code>nodeAffinity</code>). It also links it to the storage class we created earlier, so that if someone (a claim template) requests storage with that class, it will now find this PV.</p>
<p>You can create 3 PVs, on 3 different nodes.. or 3 PVs on the same node at different paths, you can organize things as you desire.</p>
<p><strong>Third</strong>, you can now use the <code>local-storage</code> class in claim template. The claim template could be something similar to this:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: the-name-of-the-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 5Gi
</code></pre>
<p>And each Pod of the StatefulSet will try to be scheduled on a node with a <code>local-storage</code> PV available.</p>
<hr />
<p>Remember that with local storage or, in general, with volumes that use host paths, you may want to spread the Pods of your app across different nodes, so that the app can survive the failure of a single node.</p>
<hr />
<p>In case you want to be able to decide which Pod links to which volume, the easiest way is to create one PV at a time and wait for its claim to become <code>Bound</code> before creating the next one. It's not optimal, but it's the easiest way.</p>
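<p>For example, you can watch the claims until the binding happens before creating the next PV (the namespace is a placeholder):</p>
<pre><code>kubectl get pvc -n my-namespace -w
# wait until the STATUS column of the claim shows Bound, then create the next PV
</code></pre>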
| AndD |
<p>Good afternoon</p>
<p>I really need some help getting a group of sentinels up so that they can monitor and perform elections for my redis pods, which are running without issue. At the bottom of this message I have included the sentinel config, which spells out the volumes. The first sentinel, sentinel0, sits at Pending, while the rest of the redis instances are READY 1/1, for all three.</p>
<p>But they don't get scheduled. When I attempt to apply the sentinel statefulset, I get the following schedule error. The sentinel statefulset config is at the bottom of this post</p>
<blockquote>
<p>Warning FailedScheduling 5s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 4s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.</p>
</blockquote>
<p>About my kubernetes setup:</p>
<p>I am running a four-node baremetal kubernetes cluster; one master node and three worker nodes respectively.</p>
<p>For storage, I am using a 'local-storage' StorageClass shared across the nodes. Currently I am using a single persistent volume configuration file which defines three volumes across three nodes. This seems to be working out for the redis statefulset, but not sentinel. (sentiel config at bottom)</p>
<p>See below config of persistent volume (all three pv-volume-node-0, 1, 2 all are bound)</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-1
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-1
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-2
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-2
</code></pre>
<p>Note: the path "/var/opt/mssql" is the stateful directory data pt for the redis cluster. It's a misnomer and in no way reflects a sql database (I just used this directory from a walkthrough), and it works.</p>
<p>Presently all three redis pods are successfully deployed with a functioning statefulset, see below for the redis config (all working)</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.0-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /var/opt/mssql
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
clusterIP: None
ports:
- port: 6379
targetPort: 6379
name: redis
selector:
app: redis
</code></pre>
<p>The real issue I'm having, I believe, stems from how I've configured the sentinel StatefulSet. The pods won't schedule, and the printed reason is that no available persistent volumes were found to bind.</p>
<p>SENTINEL STATEFULSET CONFIG, problem here, can't figure out how to set it up right with the volumes I made.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sentinel
spec:
serviceName: sentinel
replicas: 3
selector:
matchLabels:
app: sentinel
template:
metadata:
labels:
app: sentinel
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
REDIS_PASSWORD=a-very-complex-password-here
nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local
for i in ${nodes//,/ }
do
echo "finding master at $i"
MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
if [ "$MASTER" == "" ]; then
echo "no master found"
MASTER=
else
echo "found $MASTER"
break
fi
done
echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
echo "port 5000
$(cat /tmp/master)
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster $REDIS_PASSWORD
" > /etc/redis/sentinel.conf
cat /etc/redis/sentinel.conf
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
containers:
- name: sentinel
image: redis:6.0-alpine
command: ["redis-sentinel"]
args: ["/etc/redis/sentinel.conf"]
ports:
- containerPort: 5000
name: sentinel
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: data
mountPath: /var/opt/mssql
volumes:
- name: redis-config
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: sentinel
spec:
clusterIP: None
ports:
- port: 5000
targetPort: 5000
name: sentinel
selector:
app: sentinel
</code></pre>
<p>This is my first post here. I am a big fan of stackoverflow!</p>
| 007chungking | <p>You may try to create three PVs using this template:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: default
name: data-redis-0
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
</code></pre>
<p>The important part here is the <code>claimRef</code> field, which ties the PV to the PVC that the StatefulSet creates from its <code>volumeClaimTemplates</code>.
The claim name has to follow a special format.</p>
<p>Read more here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#using_a_preexisting_disk_in_a_statefulset" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#using_a_preexisting_disk_in_a_statefulset</a></p>
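<p>For reference, StatefulSet-generated claims are named after the claim template, the StatefulSet name and the pod ordinal, so for the sentinel StatefulSet in the question the three PVs would need to reference <code>data-sentinel-0</code>, <code>data-sentinel-1</code> and <code>data-sentinel-2</code>, e.g.:</p>
<pre><code>  claimRef:
    namespace: default
    name: data-sentinel-0
</code></pre>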
| Vasili Angapov |
<p>How do I completely uninstall minikube from Ubuntu 20.04?</p>
<p>I'm getting an error from my current minikube when starting:</p>
<p><code>minikube start </code>
gets
<code>🐳 Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...| ❌ Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory</code></p>
| feiz | <blockquote>
<p>how to completely uninstall minikube from ubuntu 20.04</p>
</blockquote>
<p>First, run <code>minikube delete</code> to remove <strong>minikube VM</strong> (or container if run with <code>docker</code> driver), <strong>virtual network interfaces</strong> configured on the host machine and all other traces of <strong>minikube</strong> cluster.</p>
<p>Only then you can safely remove its binary. The way how you should do it depends on how you've installed it, but as you can see <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">here</a>, there are not so many options.</p>
<p>If you've installed it by running:</p>
<pre><code>sudo install minikube-linux-amd64 /usr/local/bin/minikube
</code></pre>
<p>you can simply remove the <code>/usr/local/bin/minikube</code> binary, as what the above command basically does is copy the binary to the destination directory. If it's installed somewhere else, you can always check the location by running:</p>
<pre><code>which minikube
</code></pre>
<p>If it was installed using <code>dpkg</code> package manager:</p>
<pre><code>sudo dpkg -i minikube_latest_amd64.deb
</code></pre>
<p>you can search for it with the following command:</p>
<pre><code>dpkg -l | grep minikube
</code></pre>
<p>If it shows you something like:</p>
<pre><code>ii minikube 1.17.1 amd64 Minikube
</code></pre>
<p>you can completely remove it (with all its configuration files) by running:</p>
<pre><code>sudo dpkg --purge minikube
</code></pre>
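<p>Regardless of the installation method, you may also want to clean up the leftover configuration and cache. A hedged sketch (the <code>--purge</code> flag removes the <code>~/.minikube</code> directory, <code>--all</code> deletes all profiles):</p>
<pre><code>minikube delete --all --purge
# remove any remaining local state and the kubectl context
rm -rf ~/.minikube
kubectl config delete-context minikube
</code></pre>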
| mario |
<p>I have a Docker Desktop Kubernetes cluster set up on my local machine and it is working fine.
Now I'm trying to deploy a .NET Core gRPC server and a .NET Core console load generator to my cluster.</p>
<p>I'm using Visual Studio 2019's default template for a gRPC application.</p>
<p><strong>Server:</strong></p>
<p>proto file</p>
<pre><code>syntax = "proto3";
option csharp_namespace = "KubernetesLoadSample";
package greet;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply);
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings.
message HelloReply {
string message = 1;
}
</code></pre>
<p>.net core gRPC application</p>
<pre><code>public class GreeterService : Greeter.GreeterBase
{
private readonly ILogger<GreeterService> _logger;
public GreeterService(ILogger<GreeterService> logger)
{
_logger = logger;
}
public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
{
_logger.LogInformation("Compute started");
double result = 0;
for (int i = 0; i < 10000; i++)
{
for (int j = 0; j < i; j++)
{
result += Math.Sqrt(i) + Math.Sqrt(j);
}
}
return Task.FromResult(new HelloReply
{
Message = "Completed"
}); ;
}
}
</code></pre>
<p>and DockerFile for this project as follows,</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["KubernetesLoadSample.csproj", "KubernetesLoadSample/"]
RUN dotnet restore "KubernetesLoadSample/KubernetesLoadSample.csproj"
WORKDIR "/src/KubernetesLoadSample"
COPY . .
RUN dotnet build "KubernetesLoadSample.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "KubernetesLoadSample.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "KubernetesLoadSample.dll"]
</code></pre>
<p>I was able to verify that this image works locally using</p>
<pre><code>PS C:\Users\user> docker run -it -p 8000:80 kubernetesloadsample:latest
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
info: KubernetesLoadSample.GreeterService[0]
Compute started // called from BloomRPC Client
</code></pre>
<p><strong>Client</strong></p>
<p>Client is a .net console application, that calls server in a loop</p>
<pre><code> static async Task Main(string[] args)
{
var grpcServer = Environment.GetEnvironmentVariable("GRPC_SERVER");
Channel channel = new Channel($"{grpcServer}", ChannelCredentials.Insecure);
Console.WriteLine($"Sending load to port {grpcServer}");
while(true)
{
try
{
var client = new Greeter.GreeterClient(channel);
var reply = await client.SayHelloAsync(
new HelloRequest { Name = "GreeterClient" });
Console.WriteLine("result: " + reply.Message);
await Task.Delay(1000);
}
catch (Exception ex)
{
Console.WriteLine($"{DateTime.UtcNow} : tried to connect : {grpcServer} Crashed : {ex.Message}");
}
}
}
</code></pre>
<p>Docker file for client:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["GrpcClientConsole.csproj", "GrpcClientConsole/"]
RUN dotnet restore "GrpcClientConsole/GrpcClientConsole.csproj"
WORKDIR "/src/GrpcClientConsole"
COPY . .
RUN dotnet build "GrpcClientConsole.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "GrpcClientConsole.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "GrpcClientConsole.dll"]
</code></pre>
<p>and deployment file as follows,</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: core-load
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: compute-server
namespace: core-load
spec:
replicas: 4
selector:
matchLabels:
app: compute-server-svc
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: compute-server-svc
spec:
containers:
- env:
image: kubernetesloadsample:latest
imagePullPolicy: Never
name: compute-server-svc
ports:
- containerPort: 80
name: grpc
resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: compute-server-svc
namespace: core-load
spec:
clusterIP: None
ports:
- name: grpc
port: 5000
targetPort: 80
protocol: TCP
selector:
app: compute-server-svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: compute-client
namespace: core-load
spec:
replicas: 1
selector:
matchLabels:
app: compute-client
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: compute-client
spec:
containers:
- env:
- name: GRPC_SERVER
value: compute-server-svc.core-load.svc.cluster.local:5000
image: grpc-client-console:latest
imagePullPolicy: Never
name: compute-client
resources: {}
status: {}
---
</code></pre>
<p><strong>Problem</strong></p>
<p>The client is not able to connect to the gRPC server using the name compute-server-svc.core-load.svc.cluster.local:5000. I tried compute-server-svc.core-load as well, but I'm facing the issue below.</p>
<pre><code>PS E:\study\core\k8sgrpc\KubernetesLoadSample> k get pods -n core-load
NAME READY STATUS RESTARTS AGE
compute-client-bff5f666-cjwf5 1/1 Running 0 15s
compute-server-545567f589-5blkv 1/1 Running 0 15s
compute-server-545567f589-bv4r2 1/1 Running 0 15s
compute-server-545567f589-mdp2x 1/1 Running 0 15s
compute-server-545567f589-wdff5 1/1 Running 0 15s
PS E:\study\core\k8sgrpc\KubernetesLoadSample> k logs compute-client-bff5f666-cjwf5 -n core-load --tail 5
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
</code></pre>
<p>I didn't get a solution from the Stack Overflow questions similar to this, so I'm creating this one.</p>
<p>Can anyone please let me know what I've missed or am doing wrong?</p>
<p>TIA</p>
| WPFUser | <p>You defined your service with the:</p>
<pre><code>clusterIP: None
</code></pre>
<p>which is used to create a headless service. This may be the cause of the problem, so removing it could resolve your error.</p>
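<p>For reference, the Service from your question without the headless setting would look like this (everything else unchanged):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: compute-server-svc
  namespace: core-load
spec:
  ports:
  - name: grpc
    port: 5000
    targetPort: 80
    protocol: TCP
  selector:
    app: compute-server-svc
</code></pre>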
<hr />
<p>When you create a <code>ClusterIP</code> type service (which is the default type) Kubernetes automatically assigns the service a virtual IP (also called cluster IP, as the type suggests) which is then used to proxy communication towards the Pods selected by the service in question.</p>
<p>This means that there is a "new" IP address (visible only from inside the cluster), different from the various IPs assigned to the Pods (or single Pod) behind the service, which then routes the traffic with a sort of load balancing to the Pods standing behind.</p>
<p>If you specify</p>
<pre><code>clusterIP: None
</code></pre>
<p>you create a headless service. You are basically telling Kubernetes that you don't want a virtual IP to be assigned to the service. There is no load balancing by the proxy, as there is no IP to load balance.</p>
<p>Instead, the DNS configuration will return A records (the IP addresses) for each of the Pods behind (selected) by the service.</p>
<p>This can be useful if your application needs to discover each Pod behind the service and then do whatever it wants with the IP addresses on its own.</p>
<p>Maybe to load balance with an internal implementation, maybe because different Pods (behind the same service) are used for different things.. or maybe because each one of those Pods wants to discover the other Pods (think about multi-instance primary applications such as Kafka or Zookeeper, for example).</p>
<hr />
<p>I'm not sure what exactly your problem could be; it may depend on how the hostname is resolved by that particular app.. but you shouldn't use a headless service unless you need to decide which of the Pods selected by the svc you want to contact.</p>
<p>Using DNS round robin to load balance is also (almost always) not a good idea compared to a virtual IP.. as applications could cache the DNS resolution and, if Pods then change IP address (since Pods are ephemeral, they change IP address whenever they restart, for example), there could be network problems in reaching them.. and more.</p>
<p>There's a huge amount of info in the docs:
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
| AndD |
<p>I have a pod with 3 containers (ca, cb, cc).</p>
<p>This pod is owned by TeamA. TeamA creates and owns ca; the other two containers are developed by two other teams: TeamB for cb and TeamC for cc.</p>
<p>Both cb and cc also run independently (outside the TeamA pod) as services within this same cluster.</p>
<p>How can TeamA pod find out when cb or cc deploy a newer version in the cluster and how can we ensure that it triggers a refresh?</p>
<p>As you may have guesses cb and cc are services that ca relies on heavily and they are also services in their own right.</p>
<p>Is there a way to ensure that TeamA pod keeps cb and cc updated whenever TeamB and TeamC deploy new versions to the cluster?</p>
| anuruddha kulatunga | <p>This is not a task for Kubernetes. You should configure that in your CI/CD tool. For example, whenever a new commit is pushed to service A, it will first trigger the pipeline for A, and then trigger corresponding pipelines for services B and C. Every popular CI/CD system has this ability and this is how it's normally done.</p>
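<p>As a hedged illustration, if the services happen to be built with GitLab CI, the pipeline of cb or cc could trigger a downstream (multi-project) pipeline that redeploys TeamA's pod; the project path and stage name below are placeholders:</p>
<pre><code># .gitlab-ci.yml of service cb
redeploy-team-a:
  stage: deploy
  trigger:
    project: team-a/combined-pod   # hypothetical project path
    branch: main
</code></pre>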
<p>Proper CI/CD tests will also protect you from mistakes. If service A update breaks compatibility with B and C, your pipeline should fail and notify you about that.</p>
| Vasili Angapov |
<p>Recently, the managed pod in my mongo deployment onto GKE was automatically deleted and a new one was created in its place. As a result, all my db data was lost.</p>
<p>I specified a PV for the deployment and the PVC was bound too, and I used the standard storage class (google persistent disk). The Persistent Volume Claim had not been deleted either.</p>
<p>Here's an image of the result from <code>kubectl get pv</code>:
<a href="https://i.stack.imgur.com/emsMe.png" rel="nofollow noreferrer">pvc</a></p>
<p>My mongo deployment along with the persistent volume claim and service deployment were all created by using kubernets' <code>kompose</code> tool from a docker-compose.yml for a <a href="https://v1.prisma.io/docs/1.34/prisma-server/local-prisma-setup-je3i/" rel="nofollow noreferrer">prisma 1 + mongodb</a> deployment.</p>
<p>Here are my yamls:</p>
<p><code>mongo-deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: mongo
name: mongo
namespace: dbmode
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: mongo
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: mongo
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_PASSWORD
value: prisma
- name: MONGO_INITDB_ROOT_USERNAME
value: prisma
image: mongo:3.6
imagePullPolicy: ""
name: mongo
ports:
- containerPort: 27017
resources: {}
volumeMounts:
- mountPath: /var/lib/mongo
name: mongo
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: mongo
persistentVolumeClaim:
claimName: mongo
status: {}
</code></pre>
<p><code>mongo-persistentvolumeclaim.yaml</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongo
name: mongo
namespace: dbmode
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
</code></pre>
<p><code>mongo-service.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: mongo
name: mongo
namespace: dbmode
spec:
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongo
status:
loadBalancer: {}
</code></pre>
<p>I've tried checking the contents mounted in <code>/var/lib/mongo</code> and all I got was an empty <code>lost+found/</code> folder, and I've tried to search the Google Persistent Disks but there was nothing in the root directory and I didn't know where else to look.</p>
<p>I guess that for some reason the mongo deployment is not pulling from the persistent volume for the old data when it starts a new pod, which is extremely perplexing.</p>
<p>I also have another kubernetes project where the same thing happened, except that the old pod still showed but had an <code>evicted</code> status.</p>
| Jonathan Lynn | <blockquote>
<p>I've tried checking the contents mounted in /var/lib/mongo and all I
got was an empty lost+found/ folder,</p>
</blockquote>
<p>OK, but have you checked that it was actually saving data there before the <code>Pod</code> restart and data loss? I guess it was never saving any data in that directory.</p>
<p>I checked the image you used by running a simple <code>Pod</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-pod
image: mongo:3.6
</code></pre>
<p>When you connect to it by running:</p>
<pre><code>kubectl exec -ti my-pod -- /bin/bash
</code></pre>
<p>and check the default mongo configuration file:</p>
<pre><code>root@my-pod:/var/lib# cat /etc/mongod.conf.orig
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
dbPath: /var/lib/mongodb # 👈
journal:
enabled: true
# engine:
# mmapv1:
# wiredTiger:
</code></pre>
<p>you can see among other things that <code>dbPath</code> is actually set to <code>/var/lib/mongodb</code> and <strong>NOT</strong> to <code>/var/lib/mongo</code>.</p>
<p>So chances are that your mongo wasn't actually saving any data to your <code>PV</code> i.e. to <code>/var/lib/mongo</code> directory, where it was mounted, but to <code>/var/lib/mongodb</code> as stated in its configuration file.</p>
<p>You should be able to check it easily by <code>kubectl exec</code> to your running mongo pod:</p>
<pre><code>kubectl exec -ti <mongo-pod-name> -- /bin/bash
</code></pre>
<p>and verify where the data is saved.</p>
<p>If you didn't overwrite in any way the original config file (e.g. by providing a <code>ConfigMap</code>), <code>mongo</code> should save its data to <code>/var/lib/mongodb</code> and this directory, not being a mount point for your volume, is part of a <code>Pod</code> filesystem and its ephemeral.</p>
<h3>Update:</h3>
<p>The above mentioned <code>/etc/mongod.conf.orig</code> is only a template so it doesn't reflect the actual configuration that has been applied.</p>
<p>If you run:</p>
<pre><code>kubectl logs your-mongo-pod
</code></pre>
<p>it will show where the data directory is located:</p>
<pre><code>$ kubectl logs my-pod
2020-12-16T22:20:47.472+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=my-pod
2020-12-16T22:20:47.473+0000 I CONTROL [initandlisten] db version v3.6.21
...
</code></pre>
<p>As we can see, data is saved in <code>/data/db</code>:</p>
<pre><code>dbpath=/data/db
</code></pre>
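<p>So, assuming you don't override <code>dbPath</code> yourself, the fix would be to mount the claim at <code>/data/db</code> instead of <code>/var/lib/mongo</code> in the Deployment from your question, e.g.:</p>
<pre><code>        volumeMounts:
        - mountPath: /data/db
          name: mongo
</code></pre>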
| mario |
<h2>Scenario 1:</h2>
<p>I have 3 local-persistent-volumes provisioned, each pv is mounted on different node:</p>
<ul>
<li>10.30.18.10</li>
<li>10.30.18.11</li>
<li>10.30.18.12</li>
</ul>
<p>When I start my app with 3 replicas using:</p>
<pre><code>kind: StatefulSet
metadata:
name: my-db
spec:
replicas: 3
...
...
volumeClaimTemplates:
- metadata:
name: my-local-vol
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-local-sc"
resources:
requests:
storage: 10Gi
</code></pre>
<p>Then I notice pods and pvs are on the same host:</p>
<ul>
<li>pod1 with ip <code>10.30.18.10</code> has claimed the pv that is mounted on <code>10.30.18.10</code></li>
<li>pod2 with ip <code>10.30.18.11</code> has claimed the pv that is mounted on <code>10.30.18.11</code></li>
<li>pod3 with ip <code>10.30.18.12</code> has claimed the pv that is mounted on <code>10.30.18.12</code></li>
</ul>
<p>(What's not happening is: pod1 with ip <code>10.30.18.10</code> claiming the pv that is mounted on a different node, e.g. <code>10.30.18.12</code>, etc.)</p>
<p>The only common config between pv and pvc is <code>storageClassName</code>, so I didn't configure this behavior.</p>
<p><strong>Question:</strong>
So, who is responsible for this magic? Kubernetes scheduler? Kubernetes provisioner?</p>
<hr />
<h2>Scenario 2:</h2>
<p>I have 3 local-persistent-volumes provisioned:</p>
<ul>
<li>pv1 has capacity.storage of 10Gi</li>
<li>pv2 has capacity.storage of 100Gi</li>
<li>pv3 has capacity.storage of 100Gi</li>
</ul>
<p>Now, I start my app with 1 replica</p>
<pre><code>kind: StatefulSet
metadata:
name: my-db
spec:
replicas: 1
...
...
volumeClaimTemplates:
- metadata:
name: my-local-vol
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-local-sc"
resources:
requests:
storage: 10Gi
</code></pre>
<p>I want to ensure that this StatefulSet <em>always</em> claims pv1 (10Gi), even if it is on a different node, and doesn't claim pv2 (100Gi) or pv3 (100Gi).</p>
<p><strong>Question:</strong></p>
<p>Does this happen automatically?</p>
<p>How do I ensure the desired behavior? Should I use a separate storageClassName to ensure this?</p>
<p>What is the PersistentVolumeClaim policy? Where can I find more info?</p>
<hr />
<p>EDIT:</p>
<p>yml used for StorageClass:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: my-local-pv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
| Sida Zhou | <p>With local Persistent Volumes, this is the expected behaviour. Let me try to explain what happens when using local storage.</p>
<p>The usual setup for local storage on a cluster is the following:</p>
<ul>
<li>A local storage class, configured to be <code>WaitForFirstConsumer</code></li>
<li>A series of local persistent volumes, linked to the local storage class</li>
</ul>
<p>And this is all well documented with examples in the official documentation: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#local</a></p>
<p>With this done, Persistent Volume Claims can request storage from the local storage class and StatefulSets can have a <code>volumeClaimTemplate</code> which requests storage of the local storage class.</p>
<hr />
<p>Let me take as example your StatefulSet with 3 replicas, each one requires local storage with the <code>volumeClaimTemplate</code>.</p>
<ul>
<li><p>When the Pods are first created, they request a storage of the required <code>storageClass</code>. For example your <code>my-local-sc</code></p>
</li>
<li><p>Since this storage class is manually created and does <strong>not</strong> support dynamic provisioning of new PVs (like, for example, Ceph or similar), it is checked whether a PV attached to the storage class is <code>Available</code> to be bound.</p>
</li>
<li><p>If a PV is selected, it is bound to the newly created PVC (and from now on, the claim can be used only with that particular PV, since it is now <code>Bound</code>).</p>
</li>
<li><p>Since the PV is of type <code>local</code>, the PV has a <code>nodeAffinity</code> required which selects a node.</p>
</li>
<li><p>This <strong>forces</strong> the Pod, now bound to that PV, to be <strong>scheduled only on that particular node</strong>.</p>
</li>
</ul>
<p>This is why each Pod was scheduled on the same node as its bound persistent volume. And this means that the Pod is restricted to run on that node only.</p>
<p>You can test this easily by draining / cordoning one of the nodes and then trying to restart the Pod bound to the PV available on that particular node. What you should see is that the Pod will <strong>not</strong> start, as the PV is restricted from its <code>nodeAffinity</code> and the node is not available.</p>
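<p>For example (the node name is a placeholder, the pod name follows from the StatefulSet in the question):</p>
<pre><code>kubectl cordon &lt;node-name&gt;          # mark the node unschedulable
kubectl delete pod my-db-1          # the StatefulSet recreates the Pod
kubectl get pod my-db-1             # stays Pending: the PV's nodeAffinity cannot be satisfied
kubectl uncordon &lt;node-name&gt;
</code></pre>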
<hr />
<p>Once each Pod of the StatefulSet is bound to a PV, that Pod will be scheduled only on a specific node.. Pods will not change the PV that they are using, unless the PVC is removed (which forces the Pod to request a new PV to bind to).</p>
<p>Since local storage is handled manually, PVs which were bound and whose related PVC has been removed from the cluster enter the <code>Released</code> state and cannot be claimed anymore; they must be handled by someone.. maybe by deleting them and then recreating new ones at the same location (and maybe cleaning the filesystem as well, depending on the situation).</p>
<p>This means that local storage is OK to be used only:</p>
<ul>
<li><p>If HA is not a problem.. for example, I don't care if my app is blocked by a single node not working</p>
</li>
<li><p>If HA is handled directly by the app itself. For example, a StatefulSet with 3 Pods like a multi-primary database (Galera, Clickhouse, Percona for examples) or ElasticSearch or Kafka, Zookeeper or something like that.. all will handle the HA on their own as they can resist one of their nodes being down as long as there's quorum.</p>
</li>
</ul>
<hr />
<p><strong>UPDATE</strong></p>
<p>Regarding Scenario 2 of your question: let's say you have multiple <code>Available</code> PVs and a single Pod which starts and wants to bind to one of them. This is normal behaviour and the control plane will select one of those PVs on its own (as long as it matches the requests in the claim).</p>
<p>There's a specific way to pre-bind a PV and a PVC, so that they will always bind together. This is described in the docs as "<strong>reserving a PV</strong>": <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume</a></p>
<p>But the problem is that this cannot be applied to volume claim templates, as it requires the claim to be created manually with special properties.</p>
<p>The volume claim template, though, has a <code>selector</code> field which can be used to restrict the selection of a PV based on labels. It can be seen in the API specs ( <a href="https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#persistentvolumeclaimspec-v1-core" rel="nofollow noreferrer">https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#persistentvolumeclaimspec-v1-core</a> )</p>
<p>When you create a PV, you label it with what you want.. for example you could label it like the following:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-small-pv
labels:
size-category: small
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-big-pv
labels:
size-category: big
spec:
capacity:
storage: 100Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node-2
</code></pre>
<p>And then the claim template can select a category of volumes based on the label, as sketched below. Or maybe it doesn't care, doesn't specify a <code>selector</code>, and can use any of them (provided that the size is enough for its claim request).</p>
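<p>As a sketch, based on the claim template from your question (the label value is a placeholder and the <code>selector</code> field is the one from the PersistentVolumeClaimSpec linked above):</p>
<pre><code>volumeClaimTemplates:
- metadata:
    name: my-local-vol
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "my-local-sc"
    selector:
      matchLabels:
        size-category: small   # only PVs labeled like this can be selected
    resources:
      requests:
        storage: 10Gi
</code></pre>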
<p>This could be useful.. but it's not the only way to restrict which PVs can be selected, because when the PV is first bound, if the storage class is <code>WaitForFirstConsumer</code>, the following is also applied:</p>
<blockquote>
<p>Delaying volume binding ensures that the PersistentVolumeClaim binding
decision will also be evaluated with any other node constraints the
Pod may have, such as node resource requirements, node selectors, Pod
affinity, and Pod anti-affinity.</p>
</blockquote>
<p>Which means that if the Pod has a node affinity to one of the nodes, it will select for sure a PV on that node (if the local storage class used is WaitForFirstConsumer)</p>
<hr />
<p>Last, let me quote the offical documentation for things that I think they could answer your questions:</p>
<p>From <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p>
<blockquote>
<p>A user creates, or in the case of dynamic provisioning, has already
created, a PersistentVolumeClaim with a specific amount of storage
requested and with certain access modes. A control loop in the master
watches for new PVCs, finds a matching PV (if possible), and binds
them together. If a PV was dynamically provisioned for a new PVC, the
loop will always bind that PV to the PVC. Otherwise, the user will
always get at least what they asked for, but the volume may be in
excess of what was requested. Once bound, PersistentVolumeClaim binds
are exclusive, regardless of how they were bound. A PVC to PV binding
is a one-to-one mapping, using a ClaimRef which is a bi-directional
binding between the PersistentVolume and the PersistentVolumeClaim.</p>
<p>Claims will remain unbound indefinitely if a matching volume does not
exist. Claims will be bound as matching volumes become available. For
example, a cluster provisioned with many 50Gi PVs would not match a
PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to
the cluster.</p>
</blockquote>
<p>From <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#local</a></p>
<blockquote>
<p>Compared to hostPath volumes, local volumes are used in a durable and
portable manner without manually scheduling pods to nodes. The system
is aware of the volume's node constraints by looking at the node
affinity on the PersistentVolume.</p>
<p>However, local volumes are subject to the availability of the
underlying node and are not suitable for all applications. If a node
becomes unhealthy, then the local volume becomes inaccessible by the
pod. The pod using this volume is unable to run. Applications using
local volumes must be able to tolerate this reduced availability, as
well as potential data loss, depending on the durability
characteristics of the underlying disk.</p>
</blockquote>
| AndD |
<p>We are using Azure and I'm creating a Elasticsearch instance through the following snippet with terraform on a managed AKS cluster:</p>
<pre><code>resource "helm_release" "elasticsearch" {
name = "elasticsearch"
repository = "https://helm.elastic.co"
chart = "elasticsearch"
version = "7.12.1"
timeout = 900
set {
name = "volumeClaimTemplate.storageClassName"
value = "elasticsearch-ssd"
}
set {
name = "volumeClaimTemplate.resources.requests.storage"
value = "5Gi"
}
set {
name = "imageTag"
value = "7.12.1"
}
...
}
</code></pre>
<p>So far no problem. Elasticsearch spins up and is ready to use. Everything is in a virtual net. So the nodes of Elasticsearch get attributed a Cluster IP.</p>
<p>Now, to deploy something on my Kubernetes Cluster that actually uses Elasticsearch, I need to pass it the cluster IP of the Elasticsearch.</p>
<p>Does anybody know a way to retrieve the Cluster IP in an automated way? So that I can pass it to the following terraform modules in my configuration? I think I scanned all the outputs of the helm release, but I'm unable to find the cluster IP...</p>
<p>In the example below, it would be the "10.0.169.174":</p>
<pre><code>ppaulis@ppaulis-sb3:~/PhpstormProjects/baywa/infra/terraform-final/5_tms-api$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-master ClusterIP 10.0.169.174 <none> 9200/TCP,9300/TCP 25h app=elasticsearch-master,chart=elasticsearch,release=tms-dv-elasticsearch
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 25h app=elasticsearch-master
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 28h <none>
</code></pre>
<p>Any help is appreciated!
Thanks,
Pascal</p>
| Pascal Paulis | <p>First of all, it's bad practice to rely on a Service IP address instead of its DNS name. If you're trying to reach ES from outside of Kubernetes then you might want to automatically create a DNS record for it using <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer">external-dns</a>. I think this is the best way.</p>
<p>However, nothing stops you from adjusting the ES Helm chart to your needs. Just go ahead and modify the YAML here: <a href="https://github.com/elastic/helm-charts/blob/master/elasticsearch/templates/service.yaml" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/blob/master/elasticsearch/templates/service.yaml</a></p>
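<p>If you do need the ClusterIP itself in an automated way, it can be read straight from the Service object; for in-cluster consumers, though, the stable DNS name <code>elasticsearch-master.your-namespace.svc.cluster.local:9200</code> (namespace is a placeholder) is usually the better value to pass to downstream modules:</p>
<pre><code>kubectl get service elasticsearch-master -o jsonpath='{.spec.clusterIP}'
</code></pre>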
| Vasili Angapov |
<p>I have a cluster that has numerous services running as pods from which I want to pull logs with fluentd. <strong>All</strong> services show logs when doing <code>kubectl logs service</code>. However, some logs don't show up in those folders:</p>
<ul>
<li>/var/log</li>
<li>/var/log/containers</li>
<li>/var/log/pods</li>
</ul>
<p>although the other containers are there. The containers that ARE there are created as a Cronjob, or as a Helm chart, like a MongoDB installation.</p>
<p>The containers that aren't logging are created by me with a Deployment file like so:</p>
<pre><code>kind: Deployment
metadata:
namespace: {{.Values.global.namespace | quote}}
name: {{.Values.serviceName}}-deployment
spec:
replicas: {{.Values.replicaCount}}
selector:
matchLabels:
app: {{.Values.serviceName}}
template:
metadata:
labels:
app: {{.Values.serviceName}}
annotations:
releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC"| quote }}
spec:
containers:
- name: {{.Values.serviceName}}
# local: use skaffold, dev: use passed tag, test: use released version
image: {{ .Values.image }}
{{- if (eq .Values.global.env "dev") }}:{{ .Values.imageConfig.tag}}{{ end }}
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
envFrom:
- configMapRef:
name: {{.Values.serviceName}}-config
{{- if .Values.resources }}
resources:
{{- if .Values.resources.requests }}
requests:
memory: {{.Values.resources.requests.memory}}
cpu: {{.Values.resources.requests.cpu}}
{{- end }}
{{- if .Values.resources.limits }}
limits:
memory: {{.Values.resources.limits.memory}}
cpu: {{.Values.resources.limits.cpu}}
{{- end }}
{{- end }}
imagePullSecrets:
- name: {{ .Values.global.imagePullSecret }}
restartPolicy: {{ .Values.global.restartPolicy }}
{{- end }}
</code></pre>
<p>and a Dockerfile CMD like so:
<code>CMD ["node", "./bin/www"]</code></p>
<p>One assumption might be that the CMD doesn't pipe to STDOUT, but why would the logs show up in <code>kubectl logs</code> then?</p>
| rStorms | <p>This is how I would proceed to find out where a container is logging:</p>
<ol>
<li><p>Identify the node on which the Pod is running with:</p>
<pre><code>kubectl get pod pod-name -owide
</code></pre>
</li>
<li><p>SSH on that node, you can check which logging driver is being used by the node with:</p>
<pre><code>docker info | grep -i logging
</code></pre>
<p>if the output is <code>json-file</code>, then the logs are being written to file as expected. If there is something different, then it may depend on what the driver does (there are many drivers; they could write to <code>journald</code> for example, or to other destinations)</p>
</li>
<li><p>If the logging driver writes to file, you can check the current output for a specific Pod once you know the container id of that Pod. To get it, on a control-plane node (or anywhere <code>kubectl</code> is configured):</p>
<pre><code>kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
</code></pre>
<p>(if there are more containers in the same pod, the index to use may vary, depending on which container you want to inspect)</p>
</li>
<li><p>With the id extracted, which will be something like <code>docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c</code>, you can inspect the container with docker, on the node on which the container is running. Just remove the <code>docker://</code> part from the id, SSH again on the node you identified before, then do a:</p>
<pre><code>docker inspect container-id | grep -i logpath
</code></pre>
</li>
</ol>
<p>Which should output where the container is actively writing its logs to file.</p>
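<p>Putting the steps together, a rough sketch (assuming a Docker-based node you can SSH into, and using the placeholder <code>pod-name</code> from above) could look like this:</p>
<pre><code># On a machine with kubectl access: get the container ID and strip the docker:// prefix
CONTAINER_ID=$(kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's|docker://||')

# On the node running the Pod: print where Docker writes this container's log file
docker inspect "$CONTAINER_ID" --format '{{.LogPath}}'
</code></pre>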
<hr />
<p>In my case, the particular container I tried this procedure on is currently logging to:</p>
<pre><code>/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
</code></pre>
| AndD |
<p>I am currently trying to set up an EKS cluster on AWS with CloudFormation. I have been following the guide on <a href="https://en.sokube.ch/post/aws-kubernetes-aws-elastic-kubernetes-service-eks" rel="nofollow noreferrer">https://en.sokube.ch/post/aws-kubernetes-aws-elastic-kubernetes-service-eks</a>.</p>
<p>However, after my EKS cluster is successfully created I am unable to interact with it through kubectl as I always get <code>error: You must be logged in to the server (Unauthorized)</code>. I have been stuck on what I am doing wrong.</p>
<p>One hint that may be the problem is that I created the stack via the AWS Console, and not the AWS CLI, so it is different users. But I don't see why this should be an issue when the CLI user has the full permissions, and I could find no information on how to allow other IAM Users in that case.</p>
<p>The IAM user that I am logged in with in my AWS CLI has the <code>AdministratorAccess</code> policy</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
</code></pre>
<p>The console command I run</p>
<pre><code>~/workspace/Archipelago(master*) » aws eks --region us-west-2 describe-cluster --name archipelago-alpha-eks --query "cluster.status" --output text | cat
ACTIVE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/workspace/Archipelago(master*) » aws eks --region us-west-2 update-kubeconfig --name archipelago-alpha-eks
Added new context arn:aws:eks:us-west-2:075174350620:cluster/archipelago-alpha-eks to /home/kasper/.kube/config
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/workspace/Archipelago(master*) » kubectl get node
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>My full CloudFormation</p>
<pre><code>AWSTemplateFormatVersion: "2010-09-09"
Description: ""
Parameters:
env:
Type: "String"
Default: "local"
Mappings:
ServicePrincipals:
aws-cn:
ec2: ec2.amazonaws.com.cn
aws:
ec2: ec2.amazonaws.com
Resources:
eksVPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-vpc"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksInternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-InternetGateway"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksVPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref eksInternetGateway
VpcId: !Ref eksVPC
eksPublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref eksVPC
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-RouteTable"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksPublicRoute:
DependsOn: eksVPCGatewayAttachment
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref eksPublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref eksInternetGateway
eksPublicSubnet01:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: us-west-2a
MapPublicIpOnLaunch: true
CidrBlock: 10.0.0.0/24
VpcId:
Ref: eksVPC
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-PublicSubnet01"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksPublicSubnet02:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: us-west-2b
MapPublicIpOnLaunch: true
CidrBlock: 10.0.1.0/24
VpcId:
Ref: eksVPC
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-PublicSubnet02"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksPublicSubnet01RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref eksPublicSubnet01
RouteTableId: !Ref eksPublicRouteTable
eksPublicSubnet02RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref eksPublicSubnet02
RouteTableId: !Ref eksPublicRouteTable
eksSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster communication with worker nodes
VpcId: !Ref eksVPC
Tags:
- Key: Name
Value: !Sub "archipelago-${env}-eks-SecurityGroup"
- Key: Project
Value: !Sub "archipelago-${env}-eks"
eksIAMRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- eks.amazonaws.com
Action:
- "sts:AssumeRole"
RoleName: EKSClusterRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
eksCluster:
Type: AWS::EKS::Cluster
Properties:
Name: !Sub "archipelago-${env}-eks"
Version: 1.19
RoleArn:
"Fn::GetAtt": ["eksIAMRole", "Arn"]
ResourcesVpcConfig:
SecurityGroupIds:
- !Ref eksSecurityGroup
SubnetIds:
- !Ref eksPublicSubnet01
- !Ref eksPublicSubnet02
DependsOn: [eksIAMRole, eksPublicSubnet01, eksPublicSubnet02, eksSecurityGroup]
eksNodeInstanceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- !FindInMap [ServicePrincipals, !Ref "AWS::Partition", ec2]
Action:
- "sts:AssumeRole"
ManagedPolicyArns:
- !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
- !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
- !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
Path: /
eksNodeGroup:
Type: AWS::EKS::Nodegroup
Properties:
ClusterName: !Sub "archipelago-${env}-eks"
NodeRole:
"Fn::GetAtt": ["eksNodeInstanceRole", "Arn"]
AmiType: AL2_x86_64
InstanceTypes:
- t3a.medium
NodegroupName: !Sub "archipelago-${env}-eks-NodeGroup01"
RemoteAccess:
Ec2SshKey: !Sub "archipelago-${env}-eks-key"
ScalingConfig:
MinSize: 1
DesiredSize: 1
MaxSize: 3
Labels:
Project: !Sub "archipelago-${env}-eks"
Subnets:
- !Ref eksPublicSubnet01
- !Ref eksPublicSubnet02
DependsOn: [eksCluster, eksNodeInstanceRole]
</code></pre>
| Androme | <p>The user or role that created the EKS cluster <strong>is the only</strong> IAM entity that initially has access to the EKS cluster. From the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC configuration in the control plane. This IAM entity does not appear in the ConfigMap, or any other visible configuration, so make sure to keep track of which IAM entity originally created the cluster. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.</p>
</blockquote>
<p>Kubernetes has its own permissions model, so you need to use the above link to add additional users to your EKS cluster.</p>
<p>You may edit the aws-auth ConfigMap to look like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapUsers: |
- userarn: YOUR_IAM_USER_ARN
username: YOUR_USER_NAME
groups:
- system:masters
</code></pre>
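<p>A minimal sketch of applying this (run it with the IAM identity that created the cluster, since that one already has <code>system:masters</code>):</p>
<pre><code># Opens the ConfigMap in your editor; add the mapUsers section shown above and save
kubectl edit configmap aws-auth -n kube-system

# Afterwards the other IAM user should be able to run:
kubectl get nodes
</code></pre>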
| Vasili Angapov |
<p>I have a kubernetes cluster with nginx configured to route traffic to my angular application.
That works fine; however, when I access <code>myipaddress/api/v1</code> I want nginx to route the traffic to my express application, which is listening on port 3000, and I don't want the angular application to treat <code>myipaddress/api/v1</code> as an angular route component, as it doesn't exist.</p>
<p>Here's my kubernetes nginx ingress for express</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: 3000
selector:
app: backend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backend-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /api/?(.*)
pathType: Prefix
backend:
service:
name: backend
port:
number: 3000
</code></pre>
<p>Here's my ingress for angular</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: webapp
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: webapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp
port:
number: 80
</code></pre>
<p>What I am trying to achieve for angular</p>
<pre><code>myipaddres.com -> serve angular application`
myipaddress.com/users -> serve angular router for /users
</code></pre>
<p>What I am trying to achieve for express:</p>
<pre><code>myipaddress.com/api/v1/users -> call the users v1 endpoint in express
myipaddress.com/api/v2/users -> call the users v2 endpoint in express
</code></pre>
| Ilia Hanev | <p>Ok, I managed to find the solution myself; posting this in case someone needs it.
The newest networking API for kubernetes (as of the date of posting) supports regex, but you have to explicitly enable it with an annotation.</p>
<pre><code>---
# ingress traffic
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backend-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true" # enabling regex annotation
spec:
rules:
- http:
paths:
- path: /api/* # works fine with regex enabled
pathType: Prefix
backend:
service:
name: backend
port:
number: 3000
</code></pre>
<p>The same applies for any other ingress path you need to reverse proxy.</p>
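<p>If you also want nginx to strip the <code>/api</code> prefix before the request reaches express (what the original <code>rewrite-target: /</code> attempt was aiming at), a sketch using a capture group could look like this (service name and port taken from the question):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # forwards /api/v1/users to the backend as /v1/users
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /api/(.*)
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 3000
</code></pre>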
| Ilia Hanev |
<p>I'm trying to monitor Kubernetes PVC disk usage. I need the memory that is in use for Persistent Volume Claim. I found the command:</p>
<blockquote>
<p>kubectl get --raw /api/v1/persistentvolumeclaims</p>
</blockquote>
<p>Return:</p>
<pre><code>"status":{
"phase":"Bound",
"accessModes":[
"ReadWriteOnce"
],
"capacity":{
"storage":"1Gi"
}
}
</code></pre>
<p>But it only brings me the full capacity of the disk, and as I said I need the used one</p>
<p>Does anyone know which command could return this information to me?</p>
| Danilo Marquiori | <p>+1 to <a href="https://stackoverflow.com/users/14425365/touchmarine">touchmarine's</a> answer, however I'd like to expand it a bit and also add my three cents.</p>
<blockquote>
<p>But it only brings me the full capacity of the disk, and as I said I
need the used one</p>
</blockquote>
<p><code>PVC</code> is an abstraction which represents <strong>a request for storage</strong> and simply doesn't store such information as disk usage. As a higher level abstraction it doesn't care at all how the underlying storage is used by its consumer.</p>
<p>@touchmarine, instead of using a <code>Pod</code> whose only function is to <code>sleep</code>, so that every time you need to check the disk usage you have to attach to it manually, I would propose to use something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: media
persistentVolumeClaim:
claimName: media
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/data"
name: media
- name: busybox
image: busybox
command: ["/bin/sh"]
args: ["-c", "while true; do du -sh /data; sleep 10;done"]
volumeMounts:
- mountPath: "/data"
name: media
</code></pre>
<p>It can of course be a single-container <code>busybox</code> <code>Pod</code> as in @touchmarine's example, but here I decided to also show how it can be used as a sidecar running next to the <code>nginx</code> container within a single <code>Pod</code>.</p>
<p>As it runs a simple bash script - an infinite while loop which prints the current disk usage to standard output - it can be read with <code>kubectl logs</code> without the need of using <code>kubectl exec</code> and attaching to the <code>Pod</code>:</p>
<pre><code>$ kubectl logs nginx-deployment-56bb5c87f6-dqs5h busybox
20.0K /data
20.0K /data
20.0K /data
</code></pre>
<p>I guess it can also be used more effectively to configure some sort of monitoring of disk usage.</p>
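<p>If you prefer not to touch the workload at all, another option (a sketch, assuming you can reach the kubelet's summary API through the API server proxy; the node name below is a placeholder) is to read the volume stats the kubelet already collects:</p>
<pre><code># Returns JSON with usedBytes / capacityBytes per volume for every Pod on that node
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
</code></pre>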
| mario |
<p>I'm a beginner to Kubernetes, Helm and Yaml. I'm trying to access the QuestDB Console via a Kubernetes Ingress Controller setup in my minikube, but I'm getting the below error when running a helm upgrade. Could anyone advise how I can correct this?</p>
<pre><code>Error: UPGRADE FAILED: failed to create resource: Ingress.extensions "questdb" is invalid: spec: Invalid value: []networking.IngressRule(nil): either `defaultBackend` or `rules` must be specified
</code></pre>
<p>Here's my overriding value.yaml</p>
<pre><code>ingress:
enabled: true
rules:
- host: localhost
http:
paths:
- path: /questdb
backend:
serviceName: questdb-headless
servicePort: 9000
- path: /influxdb
backend:
serviceName: questdb-headless
servicePort: 9009
</code></pre>
<p>I've installed the QuestDB helm chart using a local version which has only slightly modified the original ingress.yaml to reference networking.k8s.io/v1 instead of networking.k8s.io/v1beta1. Here's what it is locally:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "questdb.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else -}}
apiVersion: extensions/v1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "questdb.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>I'm running on these versions:</p>
<pre><code>- helm : v3.6.0
- Kubernetes :
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
NAME NAMESPACE CHART APP VERSION
kubernetes-ingress default kubernetes-ingress-1.15.2 1.6.2
questdb default questdb-0.8.0 6.0.3
</code></pre>
<p>More details on the original chart and templates can be found here: <a href="https://github.com/questdb/questdb-kubernetes/tree/master/charts/questdb" rel="nofollow noreferrer">https://github.com/questdb/questdb-kubernetes/tree/master/charts/questdb</a></p>
| louis xie | <p>The Ingress template expects things to stay under <code>.Values.ingress.hosts</code>, but in your values they are under <code>.Values.ingress.rules</code>.</p>
<p>Additionally, paths need to stay directly under the hosts items, not under http, because the ingress is using them with a</p>
<pre><code>{{- range .paths }}
</code></pre>
<p>under <code>.Values.ingress.hosts</code> items. And paths are just strings, since the service name and port are taken directly from the <code>fullname</code> and <code>.Values.service.port</code>.</p>
<hr />
<p>I would try changing your values to something like:</p>
<pre><code>ingress:
enabled: true
hosts:
- host: localhost
paths:
- "/questdb"
- "/influxdb"
</code></pre>
<p>or something close to this.</p>
<p>Additionally, you can check the output of a helm upgrade or install command if you add the parameters <code>--debug --dry-run</code>, which can greatly help you identify problems like these by showing the definitions as they will be created (if there's no error while building the template, of course).</p>
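<p>For example (assuming the release is called <code>questdb</code> and the chart lives in <code>./questdb</code>; adjust to your setup):</p>
<pre><code>helm upgrade questdb ./questdb -f values.yaml --debug --dry-run
</code></pre>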
<hr />
<p><strong>Update</strong>: since you also changed the Ingress template to use <code>networking.k8s.io/v1</code>, you need to also change how the template is created, because the new kind of Ingress expects things in a different way, as you can see in the documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>Rules could becomes something like this:</p>
<pre><code>rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
service:
name: {{ .svc }}
port:
number: {{ .port }}
{{- end }}
{{- end }}
</code></pre>
<p>and remove the declarations of</p>
<pre><code>{{- $fullName := include "questdb.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
</code></pre>
<p>which are now useless. With this, you can change your values in the following:</p>
<pre><code>ingress:
enabled: true
hosts:
- host: localhost
paths:
- path: "/questdb"
svc: questdb-headless
port: 9000
- path: "/influxdb"
svc: questdb-headless
port: 9009
</code></pre>
<p>But the service that you specify in the values must of course be created somewhere (it is only referenced by the ingress), and it needs to expose the desired ports.</p>
| AndD |
<p>I have deployed pods running nginx using helm, but when I do minikube service service_name, I see my service running on localhost as shown below.
<a href="https://i.stack.imgur.com/IEACP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IEACP.png" alt="enter image description here" /></a></p>
<p>I thought that you need to access the service via the cluster IP not localhost?</p>
<p>I tried to access it using the cluster ip with the port of the service, but it doesn't seem to work.</p>
<p>I also tried to run it again after stopping docker, but it seems that docker is required to start the kubernetes cluster.</p>
<p>I'm following this <a href="https://www.youtube.com/watch?v=vQX5nokoqrQ&t=1214s" rel="nofollow noreferrer">kubecon demo</a> , in the demo she can access it using the cluster ip just fine.</p>
| allen | <p>This is achieved using the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel" rel="nofollow noreferrer"><code>minikube tunnel</code></a> command executed in a separate terminal. It creates a tunnel and adds a route to the ClusterIP range.</p>
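<p>A minimal sketch of the workflow (the service name, ClusterIP and port are whatever <code>kubectl get svc</code> shows in your cluster):</p>
<pre><code># Terminal 1: keep this running (it may ask for elevated privileges to add the route)
minikube tunnel

# Terminal 2: the ClusterIP printed by kubectl is now reachable from the host
kubectl get svc
curl http://<cluster-ip>:<port>
</code></pre>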
| Vasili Angapov |
<pre class="lang-scala prettyprint-override"><code>package learn.spark
import org.apache.spark.{SparkConf, SparkContext}
object MasterLocal2 {
def main(args: Array[String]): Unit = {
val conf = new SparkConf()
conf.setAppName("spark-k8s")
conf.setMaster("k8s://https://192.168.99.100:16443")
conf.set("spark.driver.host", "192.168.99.1")
conf.set("spark.executor.instances", "5")
conf.set("spark.kubernetes.executor.request.cores", "0.1")
conf.set("spark.kubernetes.container.image", "spark:latest")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 5).map(_ * 10).collect().mkString(", "))
sc.stop()
}
}
</code></pre>
<p>I am trying to speed up running the Spark program locally, but I get the exceptions below. I don't know how to configure Spark so that my compiled classes (the JVM artifacts) are available to the executors.</p>
<pre class="lang-php prettyprint-override"><code>Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 8, 10.1.1.217, executor 4): java.lang.ClassNotFoundException: learn.spark.MasterLocal2$$anonfun$main$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
</code></pre>
| Time Killer | <p>Mount the IntelliJ IDEA compilation output directory into the executor, then point <code>spark.executor.extraClassPath</code> at the mount path.</p>
<pre class="lang-scala prettyprint-override"><code>conf.set("spark.kubernetes.executor.volumes.hostPath.anyname.options.path", "/path/to/your/project/out/production/examples")
conf.set("spark.kubernetes.executor.volumes.hostPath.anyname.mount.path", "/intellij-idea-build-out")
conf.set("spark.executor.extraClassPath", "/intellij-idea-build-out")
</code></pre>
<p>Make sure that your compilation output directory can actually be mounted into the executor container via a <a href="https://kubernetes.io/docs/concepts/storage/volumes" rel="nofollow noreferrer">K8S volume</a>; a <code>hostPath</code> volume like the one above only works if that path exists on the Kubernetes node running the executor.</p>
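<p>An alternative sketch that avoids mounting anything into the executor (the jar path below is hypothetical; it assumes you first package the compiled classes into a jar, e.g. with <code>sbt package</code>) is to let Spark ship the jar itself:</p>
<pre class="lang-scala prettyprint-override"><code>// Hypothetical path to the jar produced by your build; Spark distributes it to the executors
conf.setJars(Seq("/path/to/your/project/target/scala-2.12/examples.jar"))
</code></pre>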
| Margrett |
<p>I am running a k8s cluster on GKE.</p>
<p>It has 4 node pools with different configurations.</p>
<p><strong>Node pool : 1</strong> (single node, cordoned status)</p>
<p>Running <strong>Redis & RabbitMQ</strong></p>
<p><strong>Node pool : 2</strong> (single node, cordoned status)</p>
<p>Running <strong>Monitoring & Prometheus</strong></p>
<p><strong>Node pool : 3</strong> (Big large single node)</p>
<p>Application pods</p>
<p><strong>Node pool : 4</strong> (Single node with auto-scaling enabled)</p>
<p>Application pods</p>
<p>Currently I am running a single replica of each service on GKE, except for the main service, which mostly manages everything and runs 3 replicas.</p>
<p>When scaling this main service with HPA, I have sometimes seen the node crash or the <code>kubelet</code> restart frequently, and the PODs go into an Unknown state.</p>
<p>How to handle this scenario? If the node crashes, GKE takes time to auto-repair it, which causes service downtime.</p>
<p><strong>Question : 2</strong></p>
<p>Node pools 3-4 run the application PODs. Inside the application there are 3-4 memory-intensive micro services, and I am thinking of using a <strong>Node selector</strong> to pin them to one node.</p>
<p>Meanwhile only the small node pool would run the main service, which has HPA, and node auto scaling would work for that node pool.</p>
<p>However, I feel like a Node selector is not the best way to do it.</p>
<p>It's always best to run more than one replica of each service, but currently we are running only a single replica of each service, so please keep that in mind when suggesting a solution.</p>
| chagan | <p>As <a href="https://stackoverflow.com/users/9231144/patrick-w">Patrick W</a> rightly suggested in his comment:</p>
<blockquote>
<p><strong>if you have a single node, you leave yourself with a single point of
failure</strong>. Also keep in mind that <strong>autoscaling takes time to kick in and
is based on resource requests</strong>. <strong>If your node suffers OOM because of
memory intensive workloads, you need to readjust your memory requests
and limits</strong> – Patrick W Oct 10 at</p>
</blockquote>
<p>you may need to redesign your infrastructure a bit so you have more than a single node in every nodepool, as well as <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/" rel="nofollow noreferrer">readjust memory requests and limits</a></p>
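<p>As a minimal sketch of what readjusted requests and limits look like in a container spec (the numbers below are placeholders and must be tuned to your actual workloads):</p>
<pre><code>containers:
- name: main-service        # placeholder name
  image: your-image:tag     # placeholder image
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
</code></pre>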
<p>You may want to take a look at the following sections in the <strong>official kubernetes docs</strong> and <strong>Google Cloud blog</strong>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Managing Resources for Containers</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">Assign CPU Resources to Containers and Pods</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/" rel="nofollow noreferrer">Configure Default Memory Requests and Limits for a Namespace</a></li>
<li><a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a></li>
<li><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Kubernetes best practices: Resource requests and limits</a></li>
</ul>
<blockquote>
<p>How to handle this scenario? If the node crashes, GKE takes time
to auto-repair it, which causes service downtime.</p>
</blockquote>
<p>That's why having more than just one node for a single node pool can be a much better option. It greatly reduces the likelihood that you'll end up in the situation described above. The <strong>GKE</strong> <strong>autorepair</strong> feature needs to take its time (usually a few minutes) and if this is your only node, you cannot do much about it and need to accept possible downtimes.</p>
<blockquote>
<p>Node pools 3-4 run the application PODs. Inside the application
there are 3-4 memory-intensive micro services, and I am thinking of
using a Node selector to pin them to one node.</p>
<p>Meanwhile only the small node pool would run the main service, which has HPA,
and node auto scaling would work for that node pool.</p>
<p>However, I feel like a Node selector is not the best way to do it.</p>
</blockquote>
<p>You may also take a look at <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">node affinity and anti-affinity</a> as well as <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">taints and tolerations</a></p>
| mario |
<p>I have 3 nodes of Vault running on k8s. Everything was fine, and suddenly today
I got an event warning that says:</p>
<pre><code>Readiness probe failed: Key Value --- ----- Seal Type shamir Initialized true Sealed true Total Shares 5 Threshold 3 Unseal Progress 0/3 Unseal Nonce n/a Version 1.6.1 Storage Type raft HA Enabled true
</code></pre>
<p>When I look at the node-1 and node-2 logs I can see that the server is up and running:</p>
<pre><code>==> Vault server configuration:
Api Address: https://10.xxx.0.xxx:8200
Cgo: disabled
Cluster Address: https://vault-1.vault-internal:8201
Go Version: go1.15.4
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
Log Level: info
Mlock: supported: true, enabled: false
Recovery Mode: false
Storage: raft (HA available)
Version: Vault v1.6.1
Version Sha: 6d2db3f033e02e70xxxx360062b88b03
==> Vault server started! Log data will stream in below:
2021-01-26T10:11:14.437Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
</code></pre>
<p>also here is the pod describe:</p>
<pre><code>$ kubectl describe pod vault-1 -n vault-foo
Name: vault-1
Namespace: vault-foo
Priority: 0
Node: ip-10-101-0-98.ec2.internal/xxx.xxx.0.98
Start Time: Tue, 26 Jan 2021 12:11:05 +0200
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-7694f4b78c
helm.sh/chart=vault-0.9.0
statefulset.kubernetes.io/pod-name=vault-1
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.6.1
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: xxx.xxx.0.191
IPs:
IP: xxx.xxx.0.191
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: docker://077b501aef3eaeb5f9e75dc144f288d51dbff96edb093c157401e89e5738a447
Image: vault:1.6.1
Image ID: docker-pullable://vault@sha256:efe6036315aafbab771939cf518943ef704f5e02a96a0e1b2643666a4aab1ad4
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Tue, 26 Jan 2021 12:11:14 +0200
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-1 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault-foo (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-1 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
HOME: /home/vault
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-pb4vc (ro)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/vault-tls from userconfig-vault-tls (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-1
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-vault-tls:
Type: Secret (a volume populated by a Secret)
SecretName: vault-tls
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-pb4vc:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-pb4vc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 2m24s (x32639 over 45h) kubelet Readiness probe failed: Key Value
--- -----
Seal Type shamir
Initialized true
Sealed true
Total Shares 5
Threshold 3
Unseal Progress 0/3
Unseal Nonce n/a
Version 1.6.1
Storage Type raft
HA Enabled true
</code></pre>
<p>What am I missing here? What are those warnings?</p>
| user63898 | <p>Looks like your Vault was restarted. Every time you restart Vault you need to unseal it (see <code>Unseal Progress 0/3</code> in the output). Read more: <a href="https://www.vaultproject.io/docs/concepts/seal" rel="nofollow noreferrer">https://www.vaultproject.io/docs/concepts/seal</a></p>
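<p>A minimal sketch of unsealing a restarted pod (pod name and namespace taken from the question; you will be prompted for a key share, and you need 3 of the 5 shares):</p>
<pre><code>kubectl exec -ti vault-1 -n vault-foo -- vault operator unseal
# repeat until "Sealed" shows false, then do the same for every other sealed pod
kubectl exec -ti vault-1 -n vault-foo -- vault status
</code></pre>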
| Vasili Angapov |
<p>I'm experimenting with kubectl -o=custom-columns and I wonder if there is a way to get node status.
I can get the node name with this:</p>
<blockquote>
<p>k get nodes -o=custom-columns=NAME:.metadata.name</p>
</blockquote>
<p>But is there a way to get the node status (Ready, NotReady)?</p>
| Manuel Castro | <p>Try running <code>kubectl get nodes</code> as follows:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get nodes -o custom-columns=STATUS:status.conditions[-1].type
</code></pre>
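<p>Combined with the name column from the question, something like the following should work (this assumes the Ready condition is the last entry in <code>status.conditions</code>, which is the usual ordering):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,STATUS:.status.conditions[-1].type,READY:.status.conditions[-1].status
</code></pre>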
| xxw |
<p>I recently enabled RBAC on Kubernetes. Since then, Jenkins (running on Kubernetes, creating agent-pods on the very same Kubernetes) is able to create agent-pods, but is unable to connect to JNLP via port 50000.</p>
<p>I noticed a reference to <code>Connecting to jenkins.example.de:50000</code>, but did not find where this is configured; it must resolve cluster-internally (Kube-DNS), as the port is not exposed to the outside.</p>
<p>I noticed (and updated) the configuration at <code>Configure System</code> > <code>Jenkins Location</code> > <code>Jenkins URL</code>, leading to failed RBAC logins (Keycloak), as the redirect URL is then set incorrectly. Further, it does not feel correct to configure a cluster-internal endpoint there just for JNLP. I can choose between JNLP being able to work with the cluster-internal URL or being able to log in using RBAC:</p>
<p><a href="https://i.stack.imgur.com/i6v1j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i6v1j.png" alt="enter image description here" /></a></p>
<h2>Questions</h2>
<ul>
<li>How to configure the Jenkins URL correctly (https://jenkins.example.com)?</li>
<li>How to configure Jenkins JNLP correctly (jenkins-svc.jenkins.cluster.local:50000)? Where to do so?</li>
</ul>
<h2>Pod Information</h2>
<pre><code>kubectl get all -o wide -n jenkins
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/jenkins-64ff7ff784-nq8jh 2/2 Running 0 22h 192.168.0.35 kubernetes-slave02 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/jenkins-svc ClusterIP 10.105.132.134 <none> 8080/TCP,50000/TCP 68d app=jenkins
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/jenkins 1/1 1 1 68d jenkins jenkins/jenkins:latest app=jenkins
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/jenkins-64ff7ff784 1 1 1 68d jenkins jenkins/jenkins:latest app=jenkins,pod-template-hash=64ff7ff784
</code></pre>
<pre><code>kubectl describe -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
Name: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
Namespace: jenkins
Priority: 0
Node: kubernetes-slave/192.168.190.116
Start Time: Fri, 08 Jan 2021 17:16:56 +0100
Labels: istio.io/rev=default
jenkins=jenkins-slave
jenkins/label=worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897
jenkins/label-digest=9f81f8f2dabeba69de7d48422a0fc3cbdbaa8ce0
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
service.istio.io/canonical-revision=latest
Annotations: buildUrl: https://jenkins.example.de/job/APP-Kiali/job/master/63/
cni.projectcalico.org/podIP: 192.168.4.247/32
cni.projectcalico.org/podIPs: 192.168.4.247/32
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
runUrl: job/APP-Kiali/job/master/63/
sidecar.istio.io/status:
{"version":"e2cb9d4837cda9584fd272bfa1f348525bcaacfadb7e9b9efbd21a3bb44ad7a1","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Terminating (lasts <invalid>)
Termination Grace Period: 30s
IP: 192.168.4.247
IPs:
IP: 192.168.4.247
Init Containers:
istio-init:
Container ID: docker://182de6a71b33e7350263b0677f510f85bd8da9c7938ee5c6ff43b083efeffed6
Image: docker.io/istio/proxyv2:1.8.1
Image ID: docker-pullable://istio/proxyv2@sha256:0a407ecee363d8d31957162b82738ae3dd09690668a0168d660044ac8fc728f0
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
-b
*
-d
15090,15021,15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 Jan 2021 17:17:01 +0100
Finished: Fri, 08 Jan 2021 17:17:02 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Environment:
DNS_AGENT:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
Containers:
kubectl:
Container ID: docker://fb2b1ce8374799b6cc59db17fec0bb993b62369cd7cb2b71ed9bb01c363649cd
Image: lachlanevenson/k8s-kubectl:latest
Image ID: docker-pullable://lachlanevenson/k8s-kubectl@sha256:47e2096ae077b6fe7fdfc135c53feedb160d3b08001b8c855d897d0d37fa8c7e
Port: <none>
Host Port: <none>
Command:
cat
State: Running
Started: Fri, 08 Jan 2021 17:17:03 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/home/jenkins/agent from workspace-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
jnlp:
Container ID: docker://58ee7b399077701f3f0a99ed97eb6f1e400976b7946d209d2bee64be32a94885
Image: jenkins/inbound-agent:4.3-4
Image ID: docker-pullable://jenkins/inbound-agent@sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 08 Jan 2021 17:17:04 +0100
Finished: Fri, 08 Jan 2021 17:17:15 +0100
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 256Mi
Environment:
JENKINS_PROTOCOLS: JNLP4-connect
JENKINS_SECRET: ****
JENKINS_AGENT_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
JENKINS_DIRECT_CONNECTION: jenkins.example.de:50000
JENKINS_INSTANCE_IDENTITY: ****
JENKINS_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
JENKINS_AGENT_WORKDIR: /home/jenkins/agent
Mounts:
/home/jenkins/agent from workspace-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
istio-proxy:
Container ID: docker://9a87cafa07779cfc98c58678f484e48e28e354060573c19db9d3d9c86be7a496
Image: docker.io/istio/proxyv2:1.8.1
Image ID: docker-pullable://istio/proxyv2@sha256:0a407ecee363d8d31957162b82738ae3dd09690668a0168d660044ac8fc728f0
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b.jenkins
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--concurrency
2
State: Running
Started: Fri, 08 Jan 2021 17:17:11 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
Environment:
JWT_POLICY: first-party-jwt
PILOT_CERT_PROVIDER: istiod
CA_ADDR: istiod.istio-system.svc:15012
POD_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b (v1:metadata.name)
POD_NAMESPACE: jenkins (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
HOST_IP: (v1:status.hostIP)
CANONICAL_SERVICE: (v1:metadata.labels['service.istio.io/canonical-name'])
CANONICAL_REVISION: (v1:metadata.labels['service.istio.io/canonical-revision'])
PROXY_CONFIG: {"proxyMetadata":{"DNS_AGENT":""}}
ISTIO_META_POD_PORTS: [
]
ISTIO_META_APP_CONTAINERS: kubectl,jnlp
ISTIO_META_CLUSTER_ID: Kubernetes
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_METAJSON_ANNOTATIONS: {"buildUrl":"https://jenkins.example.de/job/APP-Kiali/job/master/63/","runUrl":"job/APP-Kiali/job/master/63/"}
ISTIO_META_WORKLOAD_NAME: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
ISTIO_META_OWNER: kubernetes://apis/v1/namespaces/jenkins/pods/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
ISTIO_META_MESH_ID: cluster.local
TRUST_DOMAIN: cluster.local
DNS_AGENT:
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7htdh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
workspace-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-7htdh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7htdh
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26s default-scheduler Successfully assigned jenkins/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b to kubernetes-slave
Normal Pulling 24s kubelet Pulling image "docker.io/istio/proxyv2:1.8.1"
Normal Pulled 21s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.8.1" in 2.897659504s
Normal Created 21s kubelet Created container istio-init
Normal Started 21s kubelet Started container istio-init
Normal Pulled 19s kubelet Container image "lachlanevenson/k8s-kubectl:latest" already present on machine
Normal Created 19s kubelet Created container kubectl
Normal Started 19s kubelet Started container kubectl
Normal Pulled 19s kubelet Container image "jenkins/inbound-agent:4.3-4" already present on machine
Normal Created 19s kubelet Created container jnlp
Normal Started 18s kubelet Started container jnlp
Normal Pulling 18s kubelet Pulling image "docker.io/istio/proxyv2:1.8.1"
Normal Pulled 11s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.8.1" in 7.484694118s
Normal Created 11s kubelet Created container istio-proxy
Normal Started 11s kubelet Started container istio-proxy
Warning Unhealthy 9s kubelet Readiness probe failed: Get "http://192.168.4.247:15021/healthz/ready": dial tcp 192.168.4.247:15021: connect: connection refused
Normal Killing 6s kubelet Stopping container kubectl
Normal Killing 6s kubelet Stopping container istio-proxy
</code></pre>
<h2>Logs: Jenkins Agent</h2>
<pre><code>fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b
error: a container name must be specified for pod worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b, choose one of: [kubectl jnlp istio-proxy] or one of the init containers: [istio-init]
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c kubectl
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c jnlp
unable to retrieve container logs for docker://58ee7b399077701f3f0a99ed97eb6f1e400976b7946d209d2bee64be32a94885fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-2jm7b -c jnlp -c jnlppod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Error from server (BadRequest): container "jnlp" in pod "worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw" is waiting to start: PodInitializing
fabiansc@Kubernetes-Master:~$ kubectl logs -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jan 08, 2021 4:18:07 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among []
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins.example.de
Agent port: 50000
Identity: cd:35:f9:1a:60:54:e4:91:07:86:59:49:0b:b6:73:c4
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000
fabiansc@Kubernetes-Master:~$ kubectl logs -f -n jenkins -c jnlp pod/worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: worker-c82ea4bd-52e1-47c6-bad7-4a416a1e6897-z1bn0-t57rw
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jan 08, 2021 4:18:07 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Jan 08, 2021 4:18:07 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among []
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins.example.de
Agent port: 50000
Identity: cd:35:f9:1a:60:54:e4:91:07:86:59:49:0b:b6:73:c4
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jan 08, 2021 4:18:07 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000
Jan 08, 2021 4:18:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.example.de:50000 (retrying:2)
java.io.IOException: Failed to connect to jenkins.example.de:50000
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:247)
at hudson.remoting.Engine.connectTcp(Engine.java:844)
at hudson.remoting.Engine.innerRun(Engine.java:722)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:645)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:205)
... 3 more
Jan 08, 2021 4:18:17 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Trying protocol: JNLP4-connect
Jan 08, 2021 4:18:18 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:743)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecvClosed(AckFilterLayer.java:283)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:117)
at java.lang.Thread.run(Thread.java:748)
Jan 08, 2021 4:18:18 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:828)
at hudson.remoting.Engine.innerRun(Engine.java:768)
at hudson.remoting.Engine.run(Engine.java:518)
</code></pre>
| Fabiansc | <p>Found the answer. <code>Istio</code> was delaying connectivity of <code>JNLP</code>. Details on <a href="https://github.com/jenkinsci/docker-inbound-agent/issues/146" rel="noreferrer">Github Issue #146</a>. Further, <code>Jenkins URL</code> <strong>and</strong> <code>Jenkins Tunnel</code> must be configured (otherwise it fails, see <a href="https://github.com/jenkinsci/docker/issues/788" rel="noreferrer">Github Issue #788</a>):</p>
<p><a href="https://i.stack.imgur.com/Jl2Wm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Jl2Wm.png" alt="enter image description here" /></a></p>
<p>Two solutions:</p>
<ul>
<li>Disable <code>Istio</code> for the agent pods (see the sketch below)</li>
<li>Create your own custom <code>JNLP</code> image that retries with a delay (graceful degradation). None is provided as of February 2020.</li>
</ul>
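<p>For the first option, a rough sketch (assuming you use the Kubernetes plugin's "Raw YAML for the Pod" field or the <code>yaml</code> parameter of <code>podTemplate</code>) is to disable sidecar injection only for the agent pods via the standard Istio annotation:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "false"   # keep the Istio sidecar away from the JNLP connection
</code></pre>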
| Fabiansc |
<p>I have been using Openshift/Kubernates for some time and this has been the understanding.
For service to service communication</p>
<ul>
<li>use DNS name of <code>${service-name}</code> if they are under the same namespace</li>
<li>use DNS name of <code>${service-name}.${namespace}.svc.cluster.local</code> if they are from different namespaces (network is joined)</li>
</ul>
<p>Recently i was introduced with the topic of "we should add a dot after the svc.cluster.local to make it FQDN, for better DNS lookup speed". Done some testing and indeed with lookup is much faster with the dot. (~100ms without dot, 10ms with dot)</p>
<p>After some research, it was caused by the default dns setting from the kubernates</p>
<pre><code>sh-4.2$ cat /etc/resolv.conf
search ${namespace}.svc.cluster.local svc.cluster.local cluster.local
nameserver X.X.X.X
options ndots:5
</code></pre>
<p>the ndots = 5 will perform a local search (sequential) if the dns name does not contain 5 dots.
In the case of <code>${service-name}.${namespace}.svc.cluster.local</code>, the local search will be as such</p>
<ol>
<li><code>${service-name}.${namespace}.svc.cluster.local</code> + <code>${namespace}.svc.cluster.local</code> // FAILED LOOKUP</li>
<li><code>${service-name}.${namespace}.svc.cluster.local</code> + <code>svc.cluster.local</code> // FAILED LOOKUP</li>
<li><code>${service-name}.${namespace}.svc.cluster.local</code> + <code>cluster.local</code> // FAILED LOOKUP</li>
<li><code>${service-name}.${namespace}.svc.cluster.local</code> // SUCCESS LOOKUP</li>
</ol>
<p>And for <code>${service-name}.${namespace}.svc.cluster.local.</code>, the local search will be as such</p>
<ol>
<li><code>${service-name}.${namespace}.svc.cluster.local</code> // SUCCESS LOOKUP</li>
</ol>
<p>References</p>
<ol>
<li><a href="https://pracucci.com/kubernetes-dns-resolution-ndots-options-and-why-it-may-affect-application-performances.html" rel="nofollow noreferrer">link</a></li>
<li><a href="https://rcarrata.com/openshift/dns-deep-dive-in-openshift/" rel="nofollow noreferrer">how to debug</a></li>
</ol>
<p>Questions:</p>
<ol>
<li>Since the <code>ndots = 5</code> is the default setting for kubernetes, why <code>${service-name}.${namespace}.svc.cluster.local.</code> is not documented on the official side ?</li>
<li>Should we change all service call to <code>${service-name}.${namespace}.svc.cluster.local.</code> ? any potential downsides ?</li>
</ol>
| bLaXjack | <blockquote>
<p>Since the <code>ndots = 5</code> is the default setting for kubernetes, why
<code>${service-name}.${namespace}.svc.cluster.local.</code> is not documented on
the official side ?</p>
</blockquote>
<p>Well, it's a really good question. I searched through the official docs and it looks like this is not a documented feature. For this reason much better place for posting your doubts and also request for documentation improvement is the official <strong>GitHub</strong> site of <a href="https://github.com/kubernetes/dns" rel="nofollow noreferrer">Kubernetes DNS</a>.</p>
<blockquote>
<p>Should we change all service call to
<code>${service-name}.${namespace}.svc.cluster.local.</code> ? any potential
downsides ?</p>
</blockquote>
<p>If it works well for you and definitely increases the performance, I would say <em>- Why not ?</em> I can't see any potential downsides here. By adding the last dot you're simply omitting those first 3 lookups that are doomed to failure anyway if you use <code>Service</code> domain name in a form of <code>${service-name}.${namespace}.svc.cluster.local</code></p>
<p>Inferring from the lookup process you described and your tests, I guess that if you use only <code>${service-name}</code> (of course only within the same <code>namespace</code>), the DNS lookup should also be much faster and closer to the ~10ms you observed with the trailing dot, as the name is then matched against the very first entry of the search list (<code>${namespace}.svc.cluster.local</code>).</p>
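<p>If you want to verify this yourself, here is a quick sketch you can run from inside any pod that has <code>nslookup</code> available (the service and namespace names below are placeholders; exact timings will of course vary per cluster):</p>
<pre><code># resolved via the search list (several queries before a hit)
time nslookup my-service.my-namespace.svc.cluster.local

# fully qualified with the trailing dot (single query)
time nslookup my-service.my-namespace.svc.cluster.local.
</code></pre>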
| mario |
<p>I want to find the region and zone of my node; I need this to log monitoring data.
The Kubernetes spec and metadata don't provide this information. I checked out
<a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a> which looks promising but I can't find
the info I am looking for.</p>
<p>Any suggestion? Thanks</p>
| RandomQuests | <p>If you are using GKE then node zone and region should be in node's labels:</p>
<pre><code>failure-domain.beta.kubernetes.io/region
failure-domain.beta.kubernetes.io/zone
topology.kubernetes.io/region
topology.kubernetes.io/zone
</code></pre>
<p>You can see node labels using <code>kubectl get nodes --show-labels</code></p>
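<p>If you only need those particular labels, printing them as columns can be handier than <code>--show-labels</code> (a small sketch; the exact label keys depend on your provider and Kubernetes version):</p>
<pre><code>kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
</code></pre>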
| Vasili Angapov |
<p>I'm trying to forward Kubernetes event logs to Elasticsearch using fluentd. I currently use <code>fluent/fluentd-kubernetes-daemonset:v1.10.1-debian-elasticsearch7-1.0</code> as the container image to forward my application logs to the Elasticsearch cluster. I've searched around and my problem is that this image doesn't have enough documentation on how to accomplish this task (i.e. forwarding Kubernetes event related logs).</p>
<p>I've found <a href="https://rubygems.org/gems/fluent-plugin-kubernetes-objects" rel="nofollow noreferrer">this</a> plugin from Splunk which has the desired output, but using it involves overhead like:</p>
<ul>
<li><p>add above plugin's gem to bundler.</p>
</li>
<li><p>install essential tools like <code>make</code> etc.</p>
</li>
<li><p>install the plugin .</p>
</li>
</ul>
<p>Sure, I can do the above steps using an <code>init-container</code>, but these operations add ~200MB of disk space. I'd like to know if this can be accomplished with a smaller footprint or in some other way.</p>
<p>Any help is appreciated.</p>
<p>Thanks.</p>
| YoganA | <p>You can try this: <a href="https://github.com/opsgenie/kubernetes-event-exporter" rel="nofollow noreferrer">https://github.com/opsgenie/kubernetes-event-exporter</a></p>
<p>It is able to export Kube events to Elasticsearch.</p>
| Vasili Angapov |
<p>After learning that we should have used a <code>StatefulSet</code> instead of a <code>Deployment</code> in order to be able to attach the same persistent volume to multiple pods and especially pods on different nodes, I tried changing our config accordingly.</p>
<p>However, even when using the same name for the volume claim as before, it seems to be creating an entirely new volume instead of using our existing one, hence the application loses access to the existing data when run as a <code>StatefulSet</code>.</p>
<p>Here's the volume claim part of our current <code>Deployment</code> config:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in a claim with the same name.</p>
<p>And here's the template for the <code>StatefulSet</code>:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in new claims for every pod, with the pod name and an ID per claim, like e.g. <code>gitea-server-data-gitea-server-0</code>.</p>
<p>The new claims are now using a new volume instead of the existing one. So I tried specifying the existing volume explicitly, like so:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
volumeName: pvc-c87ff507-fd77-11e8-9a7b-420101234567
resources:
requests:
storage: 20Gi
</code></pre>
<p>However, this results in pods failing to be scheduled and the new claim being "pending" indefinitely:</p>
<blockquote>
<p>pod has unbound immediate PersistentVolumeClaims (repeated times)</p>
</blockquote>
<p>So the question is: how can we migrate the volume claim(s) in a way that allows us to use the existing persistent volume and access the current application data from a new <code>StatefulSet</code> instead of the current <code>Deployment</code>?</p>
<p>(In case it is relevant, we are using Kubernetes on GKE.)</p>
| raucao | <p>In StatefulSet, when you try to use PVC to store your data, you actually define your PVC by using <code>volumeClaimTemplates</code> like:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>In this scenario, the following things could happen:</p>
<ul>
<li>If the StatefulSet name is <code>gitea-server</code> and the replica is <code>1</code> then
the only pod of the StatefulSet will use the PVC named <code>gitea-server-data-gitea-server-0</code>(if already exist in the cluster) or create a new one named <code>gitea-server-data-gitea-server-0</code>(if doesn't exist in the cluster).</li>
<li>If the StatefulSet name is <code>gitea-server</code> and the replica is <code>2</code> then
the two Pod of the StatefulSet will use the PVCs named <code>gitea-server-data-gitea-server-0</code> and <code>gitea-server-data-gitea-server-1</code> repectively(if already exist in the cluster) or create new PVCs named <code>gitea-server-data-gitea-server-0</code> an <code>gitea-server-data-gitea-server-1</code>(if doesn't exist in the cluster) and so on.</li>
</ul>
<p>Generally, in a StatefulSet, generated PVC names follow the convention:</p>
<pre><code><volumeClaimTemplates name>-<StatefulSet name>-<Pod ordinal>
</code></pre>
<p>Now, if you create a PVC named <code>gitea-server-data-gitea-server-0</code> and the rest of it looks like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>then after creating the PVC, if you try to create a StatefulSet with replica <code>1</code> and with the above configuration defined in <code>volumeClaimTemplates</code>, the StatefulSet will use this PVC (<code>gitea-server-data-gitea-server-0</code>).</p>
<p>You can also use this PVC in another workload (like a Deployment) by specifying the field <code>spec.accessModes</code> as <code>ReadWriteMany</code>, provided the underlying storage supports it.</p>
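<p>For the migration itself, a minimal sketch of such a pre-created PVC that points at the existing volume from the question. Note that this only works if the PV is <code>Available</code> again (reclaim policy set to <code>Retain</code> and its old <code>claimRef</code> removed), otherwise the new claim will stay <code>Pending</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-server-data-gitea-server-0   # <volumeClaimTemplates name>-<StatefulSet name>-<ordinal>
  labels:
    app: gitea
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-c87ff507-fd77-11e8-9a7b-420101234567   # the existing PV from the question
  resources:
    requests:
      storage: 20Gi
</code></pre>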
| Sayf Uddin Al Azad Sagor |
<p>I'm having a hard time getting started with Kubernetes; the documentation isn't really simple and I want to achieve a simple thing for the moment. I'm familiar with Docker and I want to learn orchestration with Kubernetes.
I've written the following yml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
selector:
matchLabels:
app: test
  replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: ubuntu:20.04
command: ["/scripts/ubuntu.sh"]
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: scripts
mountPath: /scripts
volumes:
- name: scripts
configMap:
name: scripts
defaultMode: 0744
</code></pre>
<p>in my sh file i've got the following :</p>
<pre><code>#!/bin/sh
apt install -y apache2 && service apache2 start && tail -f /dev/null
</code></pre>
<p>The idea is to launch an Ubuntu container, install apache2, start the service, keep the container alive with a tail, and be able to reach the mapped port 80 of the container from my host.</p>
<p>I think I'm doing something wrong: when I run <code>kubectl apply -f</code> I get
<code>deployment.apps/test-deployment created</code>
but nothing shows up on my localhost, unfortunately...</p>
<p>Has anyone already had that problem?</p>
<p>PS: I'm using Docker Desktop on Windows.</p>
<p>Edit :</p>
<p>Now I've got my pods running, but I cannot access Apache on localhost:80. Here's my actual config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: apache
spec:
selector:
matchLabels:
app: test
  replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: ubuntu:20.04
command: ["/bin/sh", "-c", "apt update -y && apt install -y apache2 && service apache2 start && tail -f /dev/null"]
ports:
- name: http
containerPort: 80
env:
- name: DEBIAN_FRONTEND
value: noninteractive
---
apiVersion: v1
kind: Service
metadata:
name: apache-service
spec:
selector:
app: apache
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30001
</code></pre>
| kevP-Sirius | <p>Problem Fixed :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: apache
spec:
selector:
matchLabels:
app: test
  replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: ubuntu:20.04
command: ["/bin/sh", "-c", "apt update -y && apt install -y apache2 && service apache2 start && tail -f /dev/null"]
ports:
- name: http
containerPort: 80
env:
- name: DEBIAN_FRONTEND
value: noninteractive
---
apiVersion: v1
kind: Service
metadata:
name: apache-service
spec:
selector:
app: test
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: http
# nodePort: 30001
</code></pre>
<p>My problem was coming from the selector in the service, which I was using wrongly. I'll check how it works exactly, but now my service exposes the Ubuntu/Apache container on localhost perfectly.</p>
<p>Edit:
after checking, I've re-enabled the selector <code>app: test</code> under the service spec, because it wasn't working without it; now everything is OK!</p>
| kevP-Sirius |
<p>With App Engine GAE, we will usually have yaml with different cron tasks as follows:</p>
<pre><code>cron:
# Notifications Job
- description: "Remove Notifications Cron Weekly run"
url: /tasks/notifications
schedule: every monday 09:00
timezone: Australia/NSW
# Jobs job
- description: "Remove Deleted Jobs / completed"
url: /tasks/jobs/deleted_completed_drafts
schedule: every monday 09:00
timezone: Australia/NSW
# Marketplace job
- description: "Remove Deleted Products / soldout"
url: /tasks/products/deleted_soldout_drafts
schedule: every monday 09:00
timezone: Australia/NSW
</code></pre>
<p>I moved to GKE, I can't figure out yet exactly how to run the above cron tasks from one file:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: Cron Task
spec:
schedule: "*/1 0 0 * * 0" #"*/1 * * * *"
startingDeadlineSeconds: 104444
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: callout
image: gcr.io/my-site/mysite-kubernetes:v0.0.16
args:
- /bin/sh
- -ec
- curl https://www.ksite.com/tasks/notifications
restartPolicy: Never
</code></pre>
<p>So how do I arrange the GKE CronJob file to accommodate all the above tasks?
Do I have to write a separate manifest for each task?</p>
<p>The schedule should be every Monday 09:00 timezone: Australia/NSW. Is schedule: <strong>"*/1 0 0 * * 0"</strong> a correct representation of that?</p>
<p>Do I have to specify an image, given that the web-deployment manifest already has the image specified?</p>
| LearnToday | <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer"><code>CronJob</code></a> in <strong>kubernetes</strong> uses standard <a href="https://en.wikipedia.org/wiki/Cron" rel="nofollow noreferrer">Cron</a> syntax:</p>
<pre><code># ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * * <command to execute>
</code></pre>
<p>So if you want to run your job <em>every Monday at 09:00</em> it should look as follows:</p>
<pre><code> ┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
│ │ │ │ │ 7 is also Sunday on some systems)
│ │ │ │ │
│ │ │ │ │
0 9 * * 1 <command to execute>
</code></pre>
<p>If your script was integrated with the image, you wouldn't need to use curl to execute it. Even if it's not part of the image but it is available locally on your node, you can think about mounting it as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Volume</a> e.g. <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a>, which is the simplest way of mounting a file located on your kubernetes node so it becomes available to your pods. In such case you simply need to put as your command the full path to the script:</p>
<pre><code>args:
- /bin/sh
- -c
- /full/path/to/script.sh
</code></pre>
<p>Otherwise you can use any image containing <strong>curl</strong> as <a href="https://stackoverflow.com/users/4945535/user140547">user140547</a> already suggested.</p>
<p>As to:</p>
<blockquote>
<p>It is probably better to write three Cronjobs than to try to cram all
three calls into one.</p>
</blockquote>
<p>I would also strongly recommend using 3 separate <code>CronJobs</code>, as such an approach is much simpler and easier to troubleshoot if anything goes wrong with any of those jobs.</p>
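<p>For completeness, a minimal sketch of what one of those three <code>CronJobs</code> could look like. The curl image is only an assumption (any small image that ships curl will do) and the URL is the one from your manifest. Also note that <code>metadata.name</code> must be a valid DNS-1123 name, so a name like "Cron Task" with a space and capital letters would be rejected, and that with <code>batch/v1beta1</code> there is no per-job timezone field; the schedule is interpreted in the timezone of the kube-controller-manager:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-notifications
spec:
  schedule: "0 9 * * 1"              # every Monday at 09:00
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: callout
            image: curlimages/curl     # assumption: any image containing curl
            command:
            - /bin/sh
            - -ec
            - curl -fsS https://www.ksite.com/tasks/notifications
          restartPolicy: Never
</code></pre>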
| mario |
<p>I'm deploying Prometheus-operator to my cluster with a Helm chart, but I have implemented a custom service to monitor my application, and I need to add my service to Prometheus-operator to see my metrics data.</p>
<p>How I can do that?</p>
| Waseem Awashra | <p>At first, you need to deploy Prometheus-operator by Helm or manually:</p>
<pre class="lang-sh prettyprint-override"><code># By Helm:
$ helm install stable/prometheus-operator --generate-name
# By manual: for release `release-0.41`
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml
</code></pre>
<p>If your cluster is RBAC enabled then you need to install RBAC stuff for <code>Prometheus</code> object:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: default
</code></pre>
<p>Then you need to deploy <code>Promethues</code> object:</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
labels:
prometheus: prometheus
spec:
replicas: 1
serviceAccountName: prometheus
serviceMonitorSelector:
matchLabels:
k8s-app: prometheus
serviceMonitorNamespaceSelector:
matchLabels:
prometheus: prometheus
resources:
requests:
memory: 400Mi
</code></pre>
<p>Here, <code>Prometheus</code> object will select all <code>ServiceMonitor</code> that meet up the below conditions:</p>
<ul>
<li><code>ServiceMonitor</code> will have the <code>k8s-app: prometheus</code> label.</li>
<li><code>ServiceMonitor</code> will be created in that namespaces which have <code>prometheus: prometheus</code> label.</li>
</ul>
<p>The ServiceMonitor has a label selector to select Services and their underlying Endpoint objects. The Service object for the example application selects the Pods by the <code>app</code> label having the <code>example-app</code> value. The Service object also specifies the port on which the metrics are exposed.</p>
<pre class="lang-sh prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: example-app
labels:
app: example-app
spec:
selector:
app: example-app
ports:
- name: web
port: 8080
</code></pre>
<p>This Service object is discovered by a ServiceMonitor, which selects in the same way. The <code>app</code> label must have the value <code>example-app</code>.</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: example-app
labels:
k8s-app: prometheus
spec:
selector:
matchLabels:
app: example-app
namespaceSelector:
# matchNames:
# - demo
any: true
endpoints:
- port: web
</code></pre>
<p>Here, <code>namespaceSelector</code> with <code>any: true</code> is used to select all namespaces where the service is created. You can restrict it to specific namespaces using <code>matchNames</code>.</p>
<p>You can also create a <code>ServiceMonitor</code> in any namespace as you want. But you need to specify it in <code>Prometheus</code> cr's <code>spec</code>, like:</p>
<pre class="lang-sh prettyprint-override"><code> serviceMonitorNamespaceSelector:
matchLabels:
prometheus: prometheus
</code></pre>
<p>The above <code>serviceMonitorNamespaceSelector</code> is used by the <code>Prometheus</code> operator to select namespaces that have the label <code>prometheus: prometheus</code>. Suppose you have a namespace <code>demo</code> and in this <code>demo</code> namespace you have created a <code>ServiceMonitor</code>; then you need to add the label <code>prometheus: prometheus</code> to the <code>demo</code> namespace using patch:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}'
</code></pre>
<p>You can find more details here:</p>
<ul>
<li><p>Helm: <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a></p>
</li>
<li><p>Manual: <a href="https://github.com/prometheus-operator/prometheus-operator/blob/release-0.41/Documentation/user-guides/getting-started.md" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator/blob/release-0.41/Documentation/user-guides/getting-started.md</a></p>
</li>
<li><p>namespaceSelector: <a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md</a></p>
</li>
</ul>
| Sayf Uddin Al Azad Sagor |
<p>In the context of Kubernetes, I've come across the terms <code>Block Storage</code>, <code>File Storage</code> and <code>Object Storage</code> but I don't understand how they are really used (mounted) inside a container. I have a few questions,</p>
<ol>
<li>Are these all storage types backed by raw block devices?</li>
<li>Is <code>Block Storage</code> a term used to mean a <strong>logical abstraction of block devices</strong>?</li>
<li>Is <code>Block Storage</code> mounted to a path inside a container just like we mount a file system on linux? which also implies the question whether the <code>Block Storage</code> is a formatted file system?</li>
<li>How <code>Object Storage</code> is presented to a container? How does the container make use of it? Is it mounted to a path?</li>
<li>How <code>File Storage</code> is presented to a container? How does the container make use of it? Is it mounted to a path?</li>
<li>What are 3 example scenarios to use these 3 storage types?</li>
</ol>
| Iresh Dissanayaka | <p><strong>Block storage</strong> is backed by a block device. It can be a physical disk or a network-attached device (iSCSI, FC or an AWS EBS volume) or even Ceph RBD. In most cases pods don't need to work with raw block devices (with the exception of Kube-native storage like Ceph or Portworx), and Kubernetes instead creates a filesystem on top of it and mounts it into the pod. The main thing about block storage is that in most cases it's ReadWriteOnce (RWO), which means it can be mounted read-write by only a single pod (on a single node) at a time.</p>
<p><strong>File storage</strong> is backed by a filesystem. It can be a local filesystem, like hostPath, or a network share like NFS. In that case Kubernetes can directly mount it inside the pod without any additional preparation. The main thing about NFS is that it can be mounted ReadWriteMany (RWX), which means it can be mounted read-write by many pods. Also, filesystems on one node can be attached to many pods on that particular node.</p>
<p><strong>Object storage</strong> can be imagined like files-over-HTTP(S) (AWS S3, GCP GCS, Azure Blob Storage, Ceph RGW, Minio). There is no official Kubernetes-supported way to mount object storage inside pods, but there are some dirty workarounds like s3fs, Ganesha NFS and maybe others. In most cases you will work with object storage directly from your app using provider-specific libraries, which is how it's meant to work.</p>
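<p>To make the access-mode difference concrete on the Kubernetes side, here is a minimal sketch of two claims. Treat the storage class names as placeholders; which modes are actually honoured depends entirely on the backend your cluster provides:</p>
<pre><code># block-backed volume: typically ReadWriteOnce (one node at a time)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-block
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard        # placeholder
  resources:
    requests:
      storage: 10Gi
---
# file-backed volume (e.g. NFS): can be ReadWriteMany (many pods on many nodes)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs             # placeholder
  resources:
    requests:
      storage: 10Gi
</code></pre>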
| Vasili Angapov |
<p>I have a Kubernetes cluster on Azure and it has its own virtual network. My local network uses pfSense as a gateway and has a public IP. Can I define a static route between Azure and my local network so that the Kubernetes nodes and my local machines can communicate? If yes, how?</p>
<p>I know I can use a VPN gateway or a LoadBalancer, but I am wondering about plain static routing or some similar solution.</p>
| akuscu | <p>I found a solution like this:</p>
<ul>
<li>Bind a public IP to the node interface.</li>
<li>Allow only my premises' public IP for inbound and outbound.</li>
<li>Do the same on the on-premises firewall.</li>
<li>Create NAT rules on the on-premises firewall.</li>
</ul>
| akuscu |
<p>I have a <code>.pcap</code> file on the master node, which I want to view in Wireshark on the local machine. I access the Kubernetes cluster via the Kops server by ssh from the local machine. I checked <code>kubectl cp --help</code>, but it only provides a way to cp a file from a remote pod to the Kops server.</p>
<p>If anyone knows how to bring a file from Master Node -> Kops Server -> Local machine, please share your knowledge! Thanks!</p>
| Tech Girl | <p>Solution is simple - <code>scp</code>, thanks to @OleMarkusWith's quick response.</p>
<p>All I did was:</p>
<p>On Kops Server:</p>
<p><code>scp admin@<master-node's-external-ip>:/path/to/file /dest/path</code></p>
<p>On local machine:</p>
<p><code>scp <kops-server-ip>:/path/to/file /dest/path</code></p>
| Tech Girl |
<p>I am initializing a kubernetes cluster with <code>Terraform</code>, I created an <code>aws_autoscaling_group</code> for autoscaling, but I want to protect the master node from autoscaling. I haven't found a setting in the official documentation that could solve my problem.</p>
<p>Thank you for your help.</p>
| nalou | <p>In your Terraform <code>aws_autoscaling_group </code> just set <code>min_size</code>, <code>max_size</code> and <code>desired_capacity</code> all to the same value. If you have 3 master nodes set all of them to 3. This will effectively disable autoscaling, but will always keep you cluster with 3 master nodes.</p>
| Vasili Angapov |
<p>We upgraded our existing development cluster from 1.13.6-gke.13 to 1.14.6-gke.13 and our pods can no longer reach our in-house network over our Google Cloud VPN. Our production cluster (still on 1.13) shares the same VPC network and VPN tunnels and is still working fine. The only thing that changed was the upgrade of the admin node and node pool to 1.14 on the development cluster. </p>
<p>I have opened a shell into a pod on the development cluster and attempted to ping the IP address of an in-house server to which we need access. No response received. Doing the same on a pod in our production cluster works as expected.</p>
<p>I ssh'd into a node in the cluster and was able to ping the in-house network. so it's just pods that have networking issues.</p>
<p>Access to the publicly exposed services in the cluster is still working as expected. Health checks are OK.</p>
<p>UPDATE: </p>
<p>I created a new node pool using the latest 1.13 version, drained the pods from the 1.14 pool and all is well with the pods running on the 1.13 pool again. Something is definitely up with 1.14. It remains to be seen if this is an issue caused by some new configuration option or just a bug.</p>
<p>RESOLUTION: </p>
<p>IP masquerading is discussed here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent</a>. My resolution was to add the pod subnets for each of my clusters to the list of advertised networks in my VPN Cloud Routers on GCP. So now the pod networks can traverse the VPN. </p>
| jlar310 | <p>Until GKE 1.13.x, even if not necessary, GKE will masquerade pods trying to reach external IPs, even on the same VPC of the cluster, unless the destination is on the 10.0.0.0/8 range.</p>
<p>Since 1.14.x versions, this rule is no longer added by default on clusters. This means that pods trying to reach any endpoint will be seen with their Pod IP instead of the node IP as the masquerade rule was removed.</p>
<p>You could try recreating your Cloud VPN in order to include the POD IP range.</p>
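<p>If recreating the VPN is not an option, the masquerading behaviour itself can be tuned via the <code>ip-masq-agent</code> ConfigMap referenced in the question's resolution. A minimal sketch of its <code>config</code> file (the CIDRs are only examples; use your own pod and on-premises ranges):</p>
<pre><code>nonMasqueradeCIDRs:
  - 10.0.0.0/8          # example: cluster/pod ranges
  - 172.16.0.0/12       # example: on-prem range reachable over the VPN
resyncInterval: 60s
</code></pre>
<p>It is loaded as a ConfigMap named <code>ip-masq-agent</code> in <code>kube-system</code>, e.g. <code>kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system</code>.</p>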
| LukeTerro |
<p>we have an EKS cluster on 1.21.</p>
<p>There is an nginx-ingress-controller-default-ingress-controller deployed with a Classic Load Balancer.</p>
<p>Suddenly, its pods are crashing with following errors.</p>
<pre><code>I0815 04:40:04.970835 8 flags.go:204] Watching for Ingress class: nginx
W0815 04:40:04.980149 8 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0815 04:40:04.980218 8 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0815 04:40:04.980255 8 client_config.go:548] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
F0815 04:40:04.980417 8 main.go:272] Error while initiating a connection to the Kubernetes API server. This could mean the cluster is misconfigured (e.g. it has invalid API server certificates or Service Accounts configuration). Reason: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
</code></pre>
<p>Below are the securityContext and VolumeMount of the pod.</p>
<pre><code> securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-k7n9n
readOnly: true
</code></pre>
<p>I tried to change runAsUser to 0, but it throws back the message that the pods ".." is invalid.</p>
<p>Can you please give me some direction on what could be wrong here and any possible solution?</p>
| Nisarg | <p>Try adding <code>fsGroup</code>. This will make the serviceaccount directory readable by the non-root user:</p>
<pre><code>spec:
template:
spec:
securityContext:
fsGroup: 65534
</code></pre>
| Vasili Angapov |
<p>I'm trying to test some .NET Core microservices locally with Minikube.
I have 2 microservices that communicate with each other and with an MSSQL container via ClusterIP.
It all works fine, but I can't connect directly to MSSQL from SQL Management Studio.</p>
<p>Here the deployment of mssql:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-deployment
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2017-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-service
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433
targetPort: 1433
</code></pre>
<p>I've also tried with NodePort but I can't access it via "localhost, 1433".
Any idea how I can access it externally?</p>
<p>Thanks</p>
| user1477747 | <p>There is a different way to access your app from external world. If you use the <code>LoadBalancer</code> type service then you can do the following steps to access your app from external(for only minikube):</p>
<ol>
<li>Run the below command in different terminal:</li>
</ol>
<pre><code>minikube tunnel
</code></pre>
<ol start="2">
<li>Get the services</li>
</ol>
<pre><code>kubectl get svc
</code></pre>
<p>The output looks like:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
mssql-loadbalancer LoadBalancer 10.102.149.78 10.102.149.78 1433:30373/TCP 16s
</code></pre>
<ol start="3">
<li>connect to the service from your client (e.g. SQL Server Management Studio) using the external IP and port 1433 (make sure there is no proxy set)</li>
</ol>
<pre><code>REPLACE_WITH_EXTERNAL_IP,1433
</code></pre>
<p>You can also use a port-forwarding mechanism to access your app like:</p>
<pre><code>kubectl port-forward service/<your service> 1433:1433
</code></pre>
<p>Ref: <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/</a></p>
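<p>Once <code>minikube tunnel</code> is running and the LoadBalancer service shows an external IP, you can point SQL Server Management Studio (or <code>sqlcmd</code>) at that IP on port 1433, for example (the SA password is a placeholder):</p>
<pre><code>sqlcmd -S 10.102.149.78,1433 -U sa -P '<SA_PASSWORD>'
</code></pre>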
| Sayf Uddin Al Azad Sagor |
<p>I am spinning up a new Jupyter notebook instance from JupyterHub and wish to have Kubernetes API access from inside the spun-up container. According to the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/security.html#kubernetes-api-access" rel="nofollow noreferrer">docs</a>, I added the parameter for the service account in my Helm values and, as expected, I can see the service account token mounted.</p>
<pre><code>subu@jupyter-subu:~$ sudo ls /run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
</code></pre>
<p>When I try to run kubectl however, I get an access denied</p>
<pre><code>subu@jupyter-subu:~$ kubectl get pods
error: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
</code></pre>
<p>Fair enough, but run it as sudo and it simply ignores the service account token.</p>
<pre><code>subu@jupyter-subu:~$ sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>If I set up the kubectl config manually with the details of the token, it works fine; it's just the default settings that don't work. Any ideas on why this could be happening would be much appreciated!</p>
| Subramaniam Ramasubramanian | <p>In order to make kubectl use the projected token, the environment variables KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST must be set in your environment. These are automatically injected upon pod start, but likely only for your user, not for the sudo <code>root</code> user.</p>
<p>Make sure to pass these variables for the root environment (<code>sudo -E kubectl get pods</code>) or make sure the projected token is readable by your user (this should be achievable by setting the KubeSpawner's singleuser_uid to your UID <a href="https://github.com/jupyterhub/kubespawner/issues/140" rel="nofollow noreferrer">https://github.com/jupyterhub/kubespawner/issues/140</a>).</p>
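<p>Since you mentioned that a manually configured kubeconfig works, for reference here is a sketch of building one from the projected token (the paths are the standard in-cluster ones; the cluster/user/context names are arbitrary):</p>
<pre><code>SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
kubectl config set-cluster in-cluster \
  --server=https://kubernetes.default.svc \
  --certificate-authority=$SA_DIR/ca.crt
kubectl config set-credentials sa --token="$(cat $SA_DIR/token)"
kubectl config set-context in-cluster --cluster=in-cluster --user=sa
kubectl config use-context in-cluster
</code></pre>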
| Richard Nemeth |
<p>I'm trying to connect to a CloudSQL database from a container running in Kubernetes (not on Google). I can connect using, for instance, IntelliJ, but from Kubernetes the connection fails with:</p>
<blockquote>
<p>failed to connect to <code>host=<ip> user=user database=db</code>: failed to
write startup message (x509: cannot validate certificate for
because it doesn't contain any IP SANs)</p>
</blockquote>
<p>The message is correct in that the information is indeed missing from the certificate generated by Google.</p>
<p>Then how am I supposed to connect?</p>
| Martin01478 | <p>As per this <a href="https://github.com/GoogleCloudPlatform/gke-cloud-sql-postgres-demo" rel="nofollow noreferrer">GitHub example</a>, you can connect to the Cloud SQL (Postgres) instance using the Cloud SQL Proxy container as a sidecar container.</p>
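<p>For reference, a minimal sketch of what such a sidecar can look like inside the pod spec. This assumes the classic (v1) proxy image and a service-account key stored in a Secret; the instance connection name, secret name and port are placeholders you need to replace:</p>
<pre><code>  containers:
  - name: app
    image: my-app:latest                      # your application container
  - name: cloud-sql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.17
    command:
      - /cloud_sql_proxy
      - -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432
      - -credential_file=/secrets/service_account.json
    volumeMounts:
      - name: cloudsql-sa-key
        mountPath: /secrets
        readOnly: true
  volumes:
  - name: cloudsql-sa-key
    secret:
      secretName: cloudsql-sa-key
</code></pre>
<p>The application then connects to <code>127.0.0.1:5432</code> instead of the instance's public IP, so the certificate / IP SAN problem disappears because the proxy handles TLS towards Cloud SQL itself.</p>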
| Mahboob |
<p>I have an Azure pipeline that I am using to generate and push Helm charts. Within this pipeline I have a deployment stage that uses the <code>HelmDeploy@0</code> task. This task requires a <code>kubernetesServiceEndpoint</code>. Consequently, I have also opted to use Pipeline environments and have configured my dev Kubernetes cluster as my Dev environment for use in the pipeline. I am unsure of how this task is going to use this environment, as I have to assume that Azure DevOps, upon using the environment, must be authenticating with the cluster, and thus the Helm chart should simply install. Unfortunately the <code>HelmDeploy@0</code> task requires this service endpoint key. Any insight would be appreciated. Below is a snippet of the pipeline stage.</p>
<pre><code>- stage: Dev
displayName: Deploy to Dev
dependsOn: Build
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
jobs:
- deployment:
displayName: Deploy $(projName)
environment: 'Dev'
strategy:
runOnce:
preDeploy:
steps:
- task: HelmInstaller@1
inputs:
helmVersionToInstall: latest
deploy:
steps:
- task: HelmDeploy@0
displayName: Install Helm Chart
inputs:
command: install
chartName: $(REGISTRY_NAME)/helm/$(projName)
arguments: --version $(tag)
releaseName: $(projName)
</code></pre>
<p>NOTE: Yes I know variables cannot be used as the display name, it's there to protect any IP right now.</p>
| Derek Williams | <p>You probably need to explicitly specify your Kubernetes cluster resource name in the Environment section. See below:</p>
<pre><code>- deployment:
environment:
name: Dev # name of the environment to run this job on.
resourceName: cluster-resource-name # name of the resource in the environment to record the deployments against
resourceType: Kubernetes
strategy:
...
</code></pre>
<p>You can also try using the shorten syntax: <code>environment: environmentName.resourceName</code>. If the shorten syntax failed to find the resource, you need to use above syntax to provide the <code>resourceType</code>. See document <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#environment" rel="nofollow noreferrer">here</a>.</p>
<p>The steps of the deployment job automatically inherit the service connection details from resource targeted by the deployment job.</p>
<blockquote>
<p>You can scope the target of deployment to a particular resource within the environment. This allows you to record deployment history on a specific resource within the environment. The steps of the deployment job automatically inherit the service connection details from resource targeted by the deployment job.</p>
</blockquote>
<p>Check <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#environment" rel="nofollow noreferrer">here</a> for information.</p>
| Levi Lu-MSFT |
<p>I have had a problem with FTPS (FileZilla) and Kubernetes for weeks.</p>
<p><strong>CONTEXT</strong> :</p>
<p>I have a school project with Kubernetes and FTPS.
I need to create an FTPS server in Kubernetes on port 21, and it needs to run on Alpine Linux.
So I created an image of my ftps-alpine server using a Docker container.
I tested whether it works properly on its own,
using <code>docker run --name test-alpine -itp 21:21 test_alpine</code>,
and I get this output in FileZilla:</p>
<pre><code> Status: Connecting to 192.168.99.100:21…
Status: Connection established, waiting for welcome message…
Status: Initializing TLS…
Status: Verifying certificate…
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing…
Status: Calculating timezone offset of server…
Status: Timezone offset of server is 0 seconds.
Status: Directory listing of “/” successful
</code></pre>
<p>It works successfully: FileZilla sees the files within my FTPS directory.
I am good for now (this works in active mode).</p>
<p><strong>PROBLEM</strong> :</p>
<p>So what I wanted was to use my image in my Kubernetes cluster (I use Minikube).
When I connect my Docker image to an ingress/service/deployment in Kubernetes I get this:</p>
<pre><code> Status: Connecting to 192.168.99.100:30894…
Status: Connection established, waiting for welcome message…
Status: Initializing TLS…
Status: Verifying certificate…
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing…
Command: PWD
Response: 257 “/” is the current directory
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,99,1,227,247
Response: 500 Illegal PORT command.
Command: PASV
Response: 227 Entering Passive Mode (172,17,0,5,117,69).
Command: LIST
Error: The data connection could not be established: EHOSTUNREACH - No route to host
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
</code></pre>
<p><strong>SETUP</strong> :</p>
<pre><code>
ingress.yaml:

kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  namespace: default
  name: ingress-controller
spec:
  backend:
    serviceName: my-nginx
    servicePort: 80
  backend:
    serviceName: ftps-alpine
    servicePort: 21

ftps-alpine.yml:

apiVersion: v1
kind: Service
metadata:
  name: ftps-alpine
  labels:
    run: ftps-alpine
spec:
  type: NodePort
  ports:
  - port: 21
    targetPort: 21
    protocol: TCP
    name: ftp21
  - port: 20
    targetPort: 20
    protocol: TCP
    name: ftp20
  selector:
    run: ftps-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ftps-alpine
spec:
  selector:
    matchLabels:
      run: ftps-alpine
  replicas: 1
  template:
    metadata:
      labels:
        run: ftps-alpine
    spec:
      containers:
      - name: ftps-alpine
        image: test_alpine
        imagePullPolicy: Never
        ports:
        - containerPort: 21
        - containerPort: 20
</code></pre>
<p><strong>WHAT DID I TRY</strong> :</p>
<ul>
<li>When I saw the error message <code>Error: The data connection could not be established: EHOSTUNREACH - No route to host</code>, I googled it and found this post:
<a href="https://stackoverflow.com/questions/31001017/ftp-in-passive-mode-ehostunreach-no-route-to-host">FTP in passive mode : EHOSTUNREACH - No route to host</a>.
But I already run my FTPS server in active mode.</li>
<li>Changed the vsftpd.conf file and my service:</li>
</ul>
<pre><code>vsftpd.conf :
seccomp_sandbox=NO
pasv_promiscuous=NO
listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
#secure_chroot_dir=/vsftpd/empty
pam_service_name=vsftpd
pasv_enable=YES
pasv_min_port=30020
pasv_max_port=30021
user_sub_token=$USER
local_root=/home/$USER/ftp
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
allow_writeable_chroot=YES
#listen_port=21
</code></pre>
<p>I changed the NodePorts of my Kubernetes service to 30020 and 30021 and added them to the container ports.
I changed the pasv min and max ports.
I added the pasv_address of my Minikube IP.
Nothing worked.</p>
<p><strong>Question</strong> :</p>
<p>How can I get the same successful result as in the first log, but for my Kubernetes cluster?</p>
<p>If you have any questions to clarify, no problem.</p>
<p><strong>UPDATE</strong> : </p>
<p>Thanks to coderanger, I have made progress, and now there is this problem:</p>
<pre><code>Status: Connecting to 192.168.99.100:30894...
Status: Connection established, waiting for welcome message...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PASV
Response: 227 Entering Passive Mode (192,168,99,100,178,35).
Command: LIST
Error: The data connection could not be established: ECONNREFUSED - Connection refused by server
</code></pre>
| bsteve | <p>It works with the following change:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ftps-alpine
labels:
run: ftps-alpine
spec:
type: NodePort
ports:
- port: 21
targetPort: 21
nodePort: 30025
protocol: TCP
name: ftp21
- port: 20
targetPort: 20
protocol: TCP
nodePort: 30026
name: ftp20
- port: 30020
targetPort: 30020
nodePort: 30020
protocol: TCP
name: ftp30020
- port: 30021
targetPort: 30021
nodePort: 30021
protocol: TCP
name: ftp30021
selector:
run: ftps-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ftps-alpine
spec:
selector:
matchLabels:
run: ftps-alpine
replicas: 1
template:
metadata:
labels:
run: ftps-alpine
spec:
containers:
- name: ftps-alpine
image: test_alpine
imagePullPolicy: Never
ports:
- containerPort: 21
- containerPort: 20
- containerPort: 30020
- containerPort: 30021
</code></pre>
<p>and for the vsftpd.conf :</p>
<pre><code>seccomp_sandbox=NO
pasv_promiscuous=NO
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
#secure_chroot_dir=/vsftpd/empty
pam_service_name=vsftpd
pasv_enable=YES
pasv_min_port=30020
pasv_max_port=30021
user_sub_token=$USER
local_root=/home/$USER/ftp
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
allow_writeable_chroot=YES
#listen_port=21
pasv_address=#minikube_ip#
</code></pre>
| bsteve |
<p>I have a local image that runs fine this way:
<code>docker run -p 8080:8080 -v C:\Users\moritz\Downloads\1\imageService\examples1:/images -v C:\Users\moritz\entwicklung\projekte\imageCluster\logs:/logs imageservice</code></p>
<p>Now I want to run this as a Kubernetes deployment (using the Kubernetes built into Docker for Windows, v1.19.7):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: image-service
spec:
selector:
matchLabels:
app: image-service
template:
metadata:
labels:
app: image-service
spec:
containers:
- name: image-service
image: "imageservice"
resources:
limits:
cpu: "0.9"
memory: "1Gi"
ports:
- name: http
containerPort: 8080
volumeMounts:
- mountPath: /images
name: image-volume
- mountPath: /logs
name: log-volume
volumes:
- name: image-volume
hostPath:
path: "c:\\Users\\moritz\\Downloads\\1\\imageService\\examples1"
type: Directory
- name: log-volume
hostPath:
path: /mnt/c/Users/moritz/entwicklung/projekte/imageCluster/logs
type: Directory
</code></pre>
<p>As you can see, I tried different ways to set up my host path on the Windows machine, but I always get:</p>
<pre><code> Warning FailedMount 0s (x4 over 4s) kubelet MountVolume.SetUp failed for volume "log-volume" : hostPath type check failed: /mnt/c/Users/moritz/entwicklung/projekte/imageCluster/logs is not a directory
Warning FailedMount 0s (x4 over 4s) kubelet MountVolume.SetUp failed for volume "image-volume" : hostPath type check failed: c:\Users\moritz\Downloads\1\imageService\examples1 is not a directory
</code></pre>
<p>I also tried other variants (for both):</p>
<ul>
<li>C:\Users\moritz\entwicklung\projekte\imageCluster\logs</li>
<li>C:/Users/moritz/entwicklung/projekte/imageCluster/logs</li>
</ul>
<p>So how do I correctly set up these Windows host paths? (The next step would be to set them as environment variables.)</p>
<p><strong>Little update:</strong></p>
<p>Removing <code>type: Directory</code> helps to <strong>get rid of the error</strong> and the pod starts, but the mounts are not working. If I "look" into the container at <code>/images</code> I don't see the images I have on my host, and I don't see any logs in the log mount on the host, while <code>/logs</code> inside the container contains the expected files.</p>
<p>In the meantime I also tried (to no avail):</p>
<ul>
<li>/host_mnt/c/...</li>
<li>/C/Users/...</li>
<li>//C/Users/...</li>
</ul>
| dermoritz | <p>As mentioned <a href="https://stackoverflow.com/questions/62812948/volume-mounts-not-working-kubernetes-and-wsl-2-and-docker/63524931#63524931">here</a>, you can use below hostPath to make it work on wsl2.</p>
<pre><code>// C:\someDir\volumeDir
hostPath:
path: /run/desktop/mnt/host/c/someDir/volumeDir
type: DirectoryOrCreate
</code></pre>
<p>There is also an example you can use.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-localpc
spec:
containers:
- name: test-webserver
image: ubuntu:latest
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 600"]
volumeMounts:
- mountPath: /run/desktop/mnt/host/c/aaa
name: mydir
- mountPath: /run/desktop/mnt/host/c/aaa/1.txt
name: myfile
volumes:
- name: mydir
hostPath:
# Ensure the file directory is created.
path: /run/desktop/mnt/host/c/aaa
type: DirectoryOrCreate
- name: myfile
hostPath:
path: /run/desktop/mnt/host/c/aaa/1.txt
type: FileOrCreate
</code></pre>
| Jakub |
<p>Consider the following <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/frontend.yaml" rel="nofollow noreferrer">example</a> provided in this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">doc</a>.</p>
<p>What I'm trying to achieve is to see the 3 replicas' names from inside the container.
Following <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">this guide</a> I was able to get the current pod name, but I also need the pod names of the other replicas.</p>
<p>Ideally i would like to:</p>
<pre><code>print(k8s.get_my_replicaset_names())
</code></pre>
<p>or</p>
<pre><code>print(os.getenv("MY_REPLICASET"))
</code></pre>
<p>and have a result like:</p>
<pre><code>[frontend-b2zdv,frontend-vcmts,frontend-wtsmm]
</code></pre>
<p>that is, the pod names of all the replicas (including the current pod, of course), so that I can compare the current name against the list and determine my index in it.</p>
<p>Is there any way to achieve this?</p>
| JoulinRouge | <p>As you can read <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">here</a>, the <strong>Downward API</strong> is used to expose <code>Pod</code> and <code>Container</code> fields to a running Container:</p>
<blockquote>
<p>There are two ways to expose Pod and Container fields to a running
Container:</p>
<ul>
<li>Environment variables</li>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">Volume Files</a></li>
</ul>
<p>Together, these two ways of exposing Pod and Container fields are
called the <em>Downward API</em>.</p>
</blockquote>
<p>It is not meant to expose any information about other objects/resources such as <code>ReplicaSet</code> or <code>Deployment</code>, that manage such a <code>Pod</code>.</p>
<p>You can see exactly what fields contains the <code>yaml</code> manifest that describes a running <code>Pod</code> by executing:</p>
<pre><code>kubectl get pods <pod_name> -o yaml
</code></pre>
<p>The example fragment of its output may look as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
<some annotations here>
...
creationTimestamp: "2020-10-08T22:18:03Z"
generateName: nginx-deployment-7bffc778db-
labels:
app: nginx
pod-template-hash: 7bffc778db
name: nginx-deployment-7bffc778db-8fzrz
namespace: default
ownerReferences: 👈
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet 👈
name: nginx-deployment-7bffc778db 👈
...
</code></pre>
<p>As you can see, in <code>metadata</code> section it contains <code>ownerReferences</code> which in the above example contains one reference to a <code>ReplicaSet</code> object by which this <code>Pod</code> is managed. So you can get this particular <code>ReplicaSet</code> name pretty easily as it is part of a <code>Pod</code> yaml manifest.</p>
<p><strong>However, you cannot get information about other <code>Pods</code> managed by this <code>ReplicaSet</code> this way.</strong></p>
<p>Such information can only be obtained from the <strong>API server</strong>, e.g. by using the <strong>kubectl</strong> client or programmatically with direct calls to the API.</p>
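<p>For example, using the <code>pod-template-hash</code> label visible in the manifest above, all Pods that belong to the same ReplicaSet can be listed with (the hash value is of course specific to your own ReplicaSet):</p>
<pre><code>kubectl get pods -l pod-template-hash=7bffc778db -o jsonpath='{.items[*].metadata.name}'
</code></pre>
<p>Doing the equivalent from inside a container means calling the API (e.g. via a client library) under a ServiceAccount that is allowed to <code>list</code> Pods in the namespace.</p>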
| mario |
<p>According to the docs here <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-proto" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-proto</a>
Envoy proxy adds the header <code>X-Forwarded-Proto</code> to the request; for some reason the header value is wrong: it is set to <code>http</code> although the incoming request's scheme is <code>https</code>, which causes problems in my application code since it depends on the correct value of this header.</p>
<p>Is this a bug in envoy? Can I prevent envoy from doing this?</p>
| yakout | <p>As I mentioned in comments there is related <a href="https://github.com/istio/istio/issues/7964" rel="nofollow noreferrer">github issue</a> about that.</p>
<blockquote>
<p>Is there a way to prevent envoy from adding specific headers?</p>
</blockquote>
<p>There is istio dev @howardjohn <a href="https://github.com/istio/istio/issues/7964#issuecomment-679397836" rel="nofollow noreferrer">comment</a> about that</p>
<blockquote>
<p>We currently have two options:</p>
<ul>
<li><a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">EnvoyFilter</a></li>
<li><a href="https://preliminary.istio.io/latest/docs/ops/configuration/traffic-management/network-topologies/" rel="nofollow noreferrer">Alpha api</a></li>
</ul>
<p>There will not be a third; instead we will promote the alpha API.</p>
</blockquote>
<hr />
<p>So the first option would be envoy filter.</p>
<hr />
<p>There are 2 answers with that in above github issue.</p>
<p><a href="https://github.com/istio/istio/issues/7964#issuecomment-554241818" rel="nofollow noreferrer">Answer</a> provided by @jh-sz</p>
<blockquote>
<p>In general, use_remote_address should be set to true when Envoy is deployed as an edge node (aka a front proxy), whereas it may need to be set to false when Envoy is used as an internal service node in a mesh deployment.</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: xff-trust-hops
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: NETWORK_FILTER
match:
context: ANY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
patch:
operation: MERGE
value:
typed_config:
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
use_remote_address: true
xff_num_trusted_hops: 1
</code></pre>
<hr />
<p><strong>AND</strong></p>
<hr />
<p><a href="https://github.com/istio/istio/issues/7964#issuecomment-434466264" rel="nofollow noreferrer">Answer</a> provided by @vadimi</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: my-app-filter
spec:
workloadLabels:
app: my-app
filters:
- listenerMatch:
portNumber: 5120
listenerType: SIDECAR_INBOUND
filterName: envoy.lua
filterType: HTTP
filterConfig:
inlineCode: |
function envoy_on_request(request_handle)
request_handle:headers():replace("x-forwarded-proto", "https")
end
function envoy_on_response(response_handle)
end
</code></pre>
<hr />
<p>The second option would be Alpha api, this feature is actively in development and is considered pre-alpha.</p>
<hr />
<blockquote>
<p>Istio provides the ability to manage settings like <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for" rel="nofollow noreferrer">X-Forwarded-For</a> (XFF) and <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-client-cert" rel="nofollow noreferrer">X-Forwarded-Client-Cert</a> (XFCC), which are dependent on how the gateway workloads are deployed. This is currently an in-development feature. For more information on X-Forwarded-For, see the IETF’s <a href="https://www.rfc-editor.org/rfc/rfc7239" rel="nofollow noreferrer">RFC</a>.</p>
<p>You might choose to deploy Istio ingress gateways in various network topologies (e.g. behind Cloud Load Balancers, a self-managed Load Balancer or directly expose the Istio ingress gateway to the Internet). As such, these topologies require different ingress gateway configurations for transporting correct client attributes like IP addresses and certificates to the workloads running in the cluster.</p>
<p>Configuration of XFF and XFCC headers is managed via MeshConfig during Istio installation or by adding a pod annotation. Note that the Meshconfig configuration is a global setting for all gateway workloads, while pod annotations override the global setting on a per-workload basis.</p>
</blockquote>
| Jakub |
<p>I'm not able to execute kubectl(v1.16.3) commands in the ansible command module.</p>
<p>For e.g. Creation of Namespace using ansible.</p>
<pre>
tasks:
  - name: "Creating Directory"
    file:
      path: ~/ansible_ns/demo_namespaces
      state: directory
  - name: "Creating Namespaces(1/2)"
    copy:
      content: "apiVersion: v1 \nkind: Namespace \nmetadata: \n name: {{item}} "
      dest: "~/ansible_ns/demo_namespaces/{{item}}.yml"
    with_items:
      - "{{ namespace }}"
  - name: "Creating Namespaces(2/2)"
    command: "kubectl create -f {{item}}.yml --kubeconfig=/var/lib/kubernetes/kubeconfig.yaml"
    args:
      chdir: ~/ansible_ns/demo_namespaces/
    ignore_errors: true
    with_items:
      - "{{ namespace }}"
</pre>
<p>I'm ending up with the below error:</p>
<pre>
(item=ns) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "kubectl create -f ns.yml --kubeconfig=/var/lib/kubernetes/kubeconfig.yaml",
"invocation": {
"module_args": {
"_raw_params": "kubectl create -f ns.yml --kubeconfig=/var/lib/kubernetes/kubeconfig.yaml",
"_uses_shell": false,
"argv": null,
"chdir": "/root/ansible_ns/demo_namespaces/",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "ns",
"msg": "[Errno 2] No such file or directory",
"rc": 2
}
</pre>
<p>NOTE: But I'm able to run "kubectl create -f .." manually and it creates the resources.</p>
<p><b>My Ansible version:</b></p>
<pre><code>$ ansible --version
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/mdupaguntla/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
</code></pre>
<p>FYI, I also tried with Ansible - 2.4.2 as well. But No luck.</p>
<p><b>My System OS: CentOS 7</b></p>
<p><b>My queries:</b></p>
<ol>
<li><p>What is this error mean "[Errno 2] No such file or directory" in my context?</p></li>
<li><p>I came to know that Ansible introduced the kubectl & k8s modules. Is there anyone in the community using these? If yes, please let me know how to use them; if they have any prerequisites, please share them. For the kubectl module, I came to know that the prerequisite is the kubectl Go library. May I know where I can get this library?</p></li>
<li><p>When the kubectl version was 1.8 and the Ansible version 2.4.2, I was able to get the K8s resources created using "kubectl create -f ..." with the command module. But after I upgraded my cluster from v1.8 to v1.16.3, I'm not able to create the resources using "kubectl create -f ..." with the command module. Let me know if I missed doing something.</p></li>
</ol>
<p>Thanks in advance for the Community</p>
| manoj kumar | <p>You have to add the path for kubectl in the command module.</p>
<pre><code>command: "/the/path/kubectl create -f {{item}}.yml .........."
</code></pre>
<p>This happens because the $PATH seen by the Ansible task does not include the directory containing kubectl. Alternatively, you can extend $PATH for the task instead of hard-coding the path in the command module.</p>
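<p>For example, a minimal sketch (assuming kubectl lives in <code>/usr/local/bin</code>; adjust to wherever it is installed on your hosts) that extends $PATH for the task via Ansible's <code>environment</code> keyword:</p>
<pre><code>- name: "Creating Namespaces(2/2)"
  command: "kubectl create -f {{ item }}.yml --kubeconfig=/var/lib/kubernetes/kubeconfig.yaml"
  args:
    chdir: ~/ansible_ns/demo_namespaces/
  environment:
    # Prepend the assumed kubectl location; ansible_env requires gathered facts.
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"
  with_items:
    - "{{ namespace }}"
</code></pre>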
| Smily |
<p>As a learning project, I've currently got a honeypot running in Kubernetes, which works fine. (only sad thing is that I can't see actual SRC IP's, because everything from K8s perspective is coming from the loadbalancer).</p>
<p>I want to make a cluster of honeypots and eventually make an ELK backend to which all of the logs will be send and visualise some of it. Now I can't seem to figure out how to use 1 loadbalancer with different ports for different containers. Is there a better way to tackle this problem? I kind of get the 1 service 1 loadbalancer thing, but I'm sure I'm not the only one who face(d)(s) this problem?</p>
<p>Any help is appreciated. Thanks in advance.</p>
| chr0nk | <p>When it comes to preserving client's source IP when using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">external load balancer</a>, <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">this fragment</a> of the official kubernetes documentation should fully answer your question:</p>
<blockquote>
<h2>Preserving the client source IP<a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer"></a></h2>
<p>Due to the implementation of this feature, the source IP seen in the
target container is <em>not the original source IP</em> of the client. To
enable preservation of the client IP, the following fields can be
configured in the service spec (supported in GCE/Google Kubernetes
Engine environments):</p>
<ul>
<li><code>service.spec.externalTrafficPolicy</code> - denotes if this Service desires to route external traffic to node-local or cluster-wide
endpoints. There are two available options: Cluster (default) and
Local. Cluster obscures the client source IP and may cause a second
hop to another node, but should have good overall load-spreading.
Local preserves the client source IP and avoids a second hop for
LoadBalancer and NodePort type services, but risks potentially
imbalanced traffic spreading.</li>
<li><code>service.spec.healthCheckNodePort</code> - specifies the health check node port (numeric port number) for the service. If
<code>healthCheckNodePort</code> isn't specified, the service controller
allocates a port from your cluster's NodePort range. You can configure
that range by setting an API server command line option,
<code>--service-node-port-range</code>. It will use the user-specified
<code>healthCheckNodePort</code> value if specified by the client. It only has
an effect when <code>type</code> is set to LoadBalancer and
<code>externalTrafficPolicy</code> is set to Local.</li>
</ul>
<p>Setting <code>externalTrafficPolicy</code> to Local in the Service
configuration file activates this feature.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example
ports:
- port: 8765
targetPort: 9376
externalTrafficPolicy: Local ### 👈
type: LoadBalancer
</code></pre>
</blockquote>
<p>The key point is setting the <code>externalTrafficPolicy</code> to <code>Local</code> and it should entirely solve your problem with preserving the original source IP, but keep in mind that this setting has also some downsides. It could potentially lead to less equally balanced traffic. As you can read specifically in this fragment:</p>
<p><em>There are two available options: <code>Cluster</code> (default) and <code>Local</code>. <strong><code>Cluster</code> obscures the client source IP</strong> and may cause a second hop to another node, <strong>but should have good overall load-spreading</strong>. <strong><code>Local</code> preserves the client source IP</strong> and avoids a second hop for LoadBalancer and NodePort type services, <strong>but risks potentially imbalanced traffic spreading</strong>.</em></p>
| mario |
<p>I have an ingress and a service with an LB. When traffic comes from outside, it hits the ingress first; does it then go to the pods directly via the ingress LB, or does it go to the service, which resolves the pod IPs via selectors, and then to the pods? If it's the first way, what is the use of services? And which of the two, services or ingress, uses the readinessProbe defined in the deployment?</p>
<p>All the setup is in GCP</p>
<p>I am new to K8 networks.</p>
| Sid | <p>A service type <code>LoadBalancer</code> is a external source provided by your cloud and are NOT in Kubernetes cluster. They can work forwarding the request to your pods using node selector, but you can't for example make path rules or redirect, rewrites because this is provided by an Ingress.</p>
<blockquote>
<p><strong>Service</strong> is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).</p>
</blockquote>
<pre><code> Internet
|
[ LoadBalancer ]
--|-----|--
[ Services ]
--| |--
[ Pod1 ] [ Pod2 ]
</code></pre>
<p>When you use an <strong>Ingress</strong>, it is a resource handled by an ingress controller, which is basically a pod configured to apply the rules you define.
To use an ingress you need to configure a service for each path, and that service reaches the pods through its configured selectors. You can define rules based on path or hostname and then route to the service you want. Like this:</p>
<pre><code> Internet
|
[ Ingress ]
--|-----|--
[ Services ]
--| |--
[ Pod1 ] [ Pod2 ]
</code></pre>
<blockquote>
<p><strong>Ingress</strong> exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.</p>
</blockquote>
<p><a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">This article</a> has a good explanation between all ways to expose your service.</p>
<p>The <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer"><strong>readinessProbe</strong></a> is configured in your pod/deployment spec, and the <strong>kubelet</strong> is responsible for evaluating your container's health.</p>
<blockquote>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
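<p>For illustration, a minimal sketch of a readiness probe on a container spec (the image, port and path are placeholders for your app's health endpoint):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz   # placeholder health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
</code></pre>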
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#kube-proxy" rel="nofollow noreferrer">kube-proxy</a> is the responsible to foward the request for the pods. </p>
<p>For example, if you have 2 pods in different nodes, kube-proxy will handle the firewall rules (iptables) and distribute the traffic between your nodes. Each node in your cluster has a kube-proxy running.</p>
<p>kube-proxy can be configured in 3 ways: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer">userspace mode</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables mode</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">ipvs mode</a>.</p>
<blockquote>
<p>If kube-proxy is running in iptables mode and the first Pod that’s selected does not respond, the connection fails. This is different from userspace mode: in that scenario, kube-proxy would detect that the connection to the first Pod had failed and would automatically retry with a different backend Pod.</p>
</blockquote>
<p><strong>References:</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
| Mr.KoopaKiller |
<p>My application is running as a container on top of <code>kubernetes</code>.<br>
The application consume messages from <code>rabbitmq</code>. </p>
<p>I can't predict the exact amount of <code>cpu</code> so I don't want to use it as autoscale limit, though I did set the <code>prefetch</code> to something that looks normal.<br>
Is there a way to follow the number of messages in the queue,<br>
and once there are too much to tell <code>k8s</code> to autoscale?<br>
Or maybe set the autoscale to follow message rate?</p>
| natdev | <p>I wasn't able to find much content on this which didn't involve using an external source such as StackDriver.</p>
<p>I spent several days working through all the issues, and wrote up a demo app with code on how to do it. I hope it will help someone:</p>
<p><a href="https://ryanbaker.io/2019-10-07-scaling-rabbitmq-on-k8s/" rel="nofollow noreferrer">https://ryanbaker.io/2019-10-07-scaling-rabbitmq-on-k8s/</a></p>
| ryan-baker |
<p>Background:</p>
<p>I'm trying to use goreplay to mirror traffic to another destination.
I found that a k8s service does load balancing at layer 4, which means the traffic cannot be captured by goreplay, so I decided to add a reverse-proxy sidecar inside the pod, just like Istio does.</p>
<p>Here is my pod yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: default
labels:
app: nginx
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- image: nginx
imagePullPolicy: IfNotPresent
name: proxy
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 10m
memory: 40Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: default
initContainers:
- command:
- iptables
args:
- -t
- nat
- -A
- PREROUTING
- -p
- tcp
- --dport
- "80"
- -j
- REDIRECT
- --to-ports
- "15001"
image: soarinferret/iptablesproxy
imagePullPolicy: IfNotPresent
name: istio-init
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 256
name: default
optional: false
name: default
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: v1
data:
default.conf: |
server {
listen 15001;
server_name localhost;
access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
kind: ConfigMap
metadata:
name: default
namespace: default
</code></pre>
<p>I use <code>kubectl port-forward service/nginx 8080:80</code> and then <code>curl http://localhost:8080</code>,the traffic were sent directly to nginx not my proxy.</p>
<p>WHAT I WANT:</p>
<ol>
<li>A way to let goreplay to capture traffic that load balanced by k8s service.</li>
<li>Correct iptables rule to let traffic success route to my proxy sideCar.</li>
</ol>
<p>Thanks for any help!</p>
| Jonyhy96 | <p>As @Jonyhy96 mentioned in comments the only things which need to be changed here is to the privileged value to true in the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">securityContext</a> field of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a>.</p>
<blockquote>
<p><strong>Privileged</strong> - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices.</p>
</blockquote>
<hr />
<p>So the initContainer would look like this</p>
<pre><code>initContainers:
- command:
- iptables
args:
- -t
- nat
- -A
- PREROUTING
- -p
- tcp
- --dport
- "80"
- -j
- REDIRECT
- --to-ports
- "15001"
image: soarinferret/iptablesproxy
imagePullPolicy: IfNotPresent
name: istio-init
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: true <---- changed from false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
</code></pre>
<hr />
<p>There is very good <a href="https://venilnoronha.io/hand-crafting-a-sidecar-proxy-and-demystifying-istio" rel="nofollow noreferrer">tutorial</a> about that, not exactly on nginx, but explains how to actually build the proxy.</p>
| Jakub |
<p>I'm attempting to create a Python script to evict pods from nodes based on some criteria, and I'm having trouble getting <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_pod_eviction" rel="nofollow noreferrer">create_namespaced_pod_eviction</a>
to behave properly. From what I can tell from the API documentation, my syntax looks pretty correct. Any help is appreciated. I'll also mention that the Kubernetes cluster is 1.10 on AWS EKS.</p>
<pre><code> for i in pods.items:
print("Deleting pod: ", i.metadata.name, i.metadata.namespace, node)
body = kubernetes.client.V1beta1Eviction()
api_response = v1.create_namespaced_pod_eviction(i.metadata.name, i.metadata.namespace, body, dry_run='All', include_uninitialized='True', pretty='True')
</code></pre>
<p>This is the output:</p>
<pre><code>('Deleting pod: ', 'ambassador-5d86576878-4kv6w', 'istio-system', 'ip-10-72-20-161.ec2.internal')
Traceback (most recent call last):
File "src/update_workernodes.py", line 105, in <module>
main()
File "src/update_workernodes.py", line 99, in main
evict_pods(old_worker_dns)
File "src/update_workernodes.py", line 82, in evict_pods
api_response = v1.create_namespaced_pod_eviction(name=i.metadata.name, namespace=i.metadata.namespace, body=body, dry_run='All', include_uninitialized='True', pretty='True')
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 6353, in create_namespaced_pod_eviction
(data) = self.create_namespaced_pod_eviction_with_http_info(name, namespace, body, **kwargs)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 6450, in create_namespaced_pod_eviction_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 364, in request
body=body)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 266, in POST
body=body)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 13 Nov 2018 02:34:52 GMT', 'Audit-Id': '7a3725ac-5b1c-470b-a743-0af202a56f7c', 'Content-Length': '175', 'Content-Type': 'application/json'})
HTTP response body: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Name parameter required.",
"reason": "BadRequest",
"code": 400
}
</code></pre>
| joeldamata | <p>For those who stumble across this, I was able to get this working by doing the following:</p>
<pre><code>from kubernetes import client, config
config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

podName = 'insert-name-of-pod'
podNamespace = 'insert-namespace-of-pod'
body = client.V1beta1Eviction(metadata=client.V1ObjectMeta(name=podName, namespace=podNamespace))
api_response = v1.create_namespaced_pod_eviction(name=podName, namespace=podNamespace, body=body)
</code></pre>
| ryan-baker |
<p>I was wondering if the auto scaling feature in kubernetes is a reactive approach or proactive approach and if they are rule based only</p>
<p>please let me know</p>
<p>Thank you</p>
| Thivya Thogesan | <p>It entirely depends on how you define <strong>reactive</strong> and <strong>proactive</strong>. From one hand I would say <strong>reactive</strong> as the metrics, autoscaling decision is based upon, need to reach a certain value so the autoscaling process can take place. An action is <strong>proactive</strong> only when it is based on prediction or anticipation of certain events e.g. load increase e.g. you are expecting that due to promotion campaing you're launching next week, the load on your app will increase about 3 times.</p>
<p>I would encourage you to take a closer look at <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">autoscaling algorithm details</a>.</p>
<p>From the most basic perspective, the Horizontal Pod Autoscaler controller operates on the ratio between desired metric value and current metric value:</p>
<blockquote>
<pre><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
</code></pre>
<p>For example, if the current metric value is <code>200m</code>, and the desired
value is <code>100m</code>, the number of replicas will be doubled, since
<code>200.0 / 100.0 == 2.0</code> If the current value is instead <code>50m</code>, we'll
halve the number of replicas, since <code>50.0 / 100.0 == 0.5</code>. We'll skip
scaling if the ratio is sufficiently close to 1.0 (within a
globally-configurable tolerance, from the
<code>--horizontal-pod-autoscaler-tolerance</code> flag, which defaults to 0.1).</p>
</blockquote>
<p>As you can see the <strong>desired number of replicas</strong> is calculated based on a <strong>current metric value</strong> and only if this value reaches the certain critical point, the autoscaling process is triggered. So it's 100% reactive from this perspective.</p>
<p>Looking at the problem from a different point of view, the decision to use a <strong>horizontal pod autoscaler</strong> is kind of a <strong>proactive</strong> approach. But now I'm talking about the user's approach to managing their infrastructure, not about the mechanism of the <strong>hpa</strong> itself as I described above. Suppose you don't use a <strong>horizontal pod autoscaler</strong> at all, you run your application on a rigidly fixed set of pods, and occasional unexpected load increases often make your application unavailable.</p>
<p>If you administer such an environment manually, your <strong>reaction</strong> in such a situation is the decision to <strong>scale</strong> your <code>Deployment</code> out. You will probably agree with me that this is a totally reactive approach.</p>
<p>However, if you decide to use the <strong>hpa</strong>, you <strong>proactively anticipate</strong> the occurrence of such load increases. It gives you the possibility of always being one step ahead and reacting automatically before the situation occurs. So if you decide to scale out your <code>Deployment</code> when the CPU usage reaches a certain threshold, e.g. 50% (still safe for the application so it continues running), the <strong>hpa</strong> automatically handles the situation for you, based on your predictions.</p>
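<p>For illustration, the kind of proactive setup described above can be expressed as a simple HPA targeting 50% CPU (names and replica counts are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale out before the app becomes saturated
</code></pre>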
<p>I hope this has shed some light on your understanding of the problem.</p>
| mario |
<p>For a sample microservice based architecture deployed on Google kubernetes engine, I need help to validate my understanding :</p>
<ol>
<li>We know services are supposed to load balance traffic for pod replicaset.</li>
<li>When we create an nginx ingress controller and ingress definitions to route to each service, a loadbalancer is also setup automatically.</li>
<li>had read somewhere that creating nginx ingress controller means an nginx controller (deployment) and a loadbalancer type service getting created behind the scene. I am not sure if this is true.</li>
</ol>
<blockquote>
<p>It seems loadbalancing is being done by services. URL based routing is
being done by ingress controller.</p>
<p>Why do we need a loadbalancer? It is not meant to load balance across multiple instances. It will just
forward all the traffic to nginx reverse proxy created and it will
route requests based on URL.</p>
</blockquote>
<p>Please correct if I am wrong in my understanding.</p>
| inaitgaJ | <p>A Service type <code>LoadBalancer</code> and the <code>Ingress</code> is the way to reach your application externally, although they work in a different way.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Service:</a></p>
<blockquote>
<p>In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">selector</a> (see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="noreferrer">below</a> for why you might want a Service <em>without</em> a selector).</p>
</blockquote>
<p>There are several types of Services; one of them is the <code>LoadBalancer</code> type, which lets you expose your application externally by assigning an external IP to your service. Each <code>LoadBalancer</code> service gets its own new external IP.
The load balancing itself is handled by kube-proxy.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress:</a></p>
<blockquote>
<p>An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.</p>
</blockquote>
<p>When you set up an ingress controller (e.g. nginx-ingress), a Service of type <code>LoadBalancer</code> is created for the ingress-controller pods, a load balancer is automatically created in your cloud provider, and a public IP is assigned to the nginx-ingress service.</p>
<p>This load balancer/public IP is used for incoming connections to all your services, and nginx-ingress is responsible for handling those incoming connections.</p>
<p>For example:</p>
<p>Suppose you have 10 services of type <code>LoadBalancer</code>: this results in 10 new public IPs being created, and you need to use the corresponding IP for each service you want to reach.</p>
<p>But if you use an ingress, only 1 IP is created, and the ingress is responsible for routing each incoming connection to the correct service based on the path/URL you define in the ingress configuration (see the example after this list). With ingress you can:</p>
<ul>
<li>Use regex in <code>path</code> to define the service to redirect;</li>
<li>Use SSL/TLS </li>
<li>Inject custom headers;</li>
<li>Redirect requests for a default service if one of the service failed (default-backend);</li>
<li>Create whitelists based on IPs</li>
<li>Etc...</li>
</ul>
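<p>As an example, a minimal sketch of a single Ingress routing two paths to two different services (host and service names are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1-service   # placeholder service
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2-service   # placeholder service
          servicePort: 80
</code></pre>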
<p>An important <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips" rel="nofollow noreferrer">note</a> about load balancing with Ingress:</p>
<blockquote>
<p>GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.</p>
<p>With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).</p>
</blockquote>
| Mr.KoopaKiller |
<p>I've more that 10 CronJobs configured in Kubernetes and all working properly as expected.</p>
<p>Now I am planning add 20 more CronJobs. All jobs getting data from external resources, processing it and generating some kind of reports.</p>
<p>I want to configure schedule expression of CronJobs into ConfigMap. for example something like,</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: scheduler-config
namespace: test
data:
job1.schedule.expression: */1 * * * *
job2.schedule.expression: */5 * * * *
job3.schedule.expression: */30 * * * *
job4.schedule.expression: 0 1 * * *
job5.schedule.expression: 0 0 * * THU
</code></pre>
<p>I want to make it more flexible if possible, for example if I update the ConfigMap with new expression then CronJob should be updated with new schedule expression and next time it should run as per new expression value in ConfigMap.</p>
| Jignesh Dhua | <p>As I already mentioned in comments</p>
<p>As far as I know, a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMap</a> is used to set environment variables inside a container or is mounted as a volume. I don't think you can use a ConfigMap to set the schedule of a CronJob.</p>
<hr />
<p>As an alternative you could use <a href="https://github.com/bambash/helm-cronjobs" rel="nofollow noreferrer">helm cronjobs</a> for that and specify the schedules in <a href="https://github.com/bambash/helm-cronjobs/blob/master/values.yaml" rel="nofollow noreferrer">values.yaml</a>.</p>
<p>Take a look at below cronjobs created with above helm cronjobs.</p>
<pre><code>kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cold-fly-hello-env-var * * * * * False 0 23s 1m
cold-fly-hello-ubuntu */5 * * * * False 0 23s 1m
cold-fly-hello-world * * * * * False 0 23s 1m
</code></pre>
<p>And their schedule values are defined <a href="https://github.com/bambash/helm-cronjobs/blob/master/values.yaml#L10" rel="nofollow noreferrer">here</a>, <a href="https://github.com/bambash/helm-cronjobs/blob/master/values.yaml#L51" rel="nofollow noreferrer">here</a> and <a href="https://github.com/bambash/helm-cronjobs/blob/master/values.yaml#L21" rel="nofollow noreferrer">here</a>.</p>
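<p>The general pattern is to template the schedule from <code>values.yaml</code>. A minimal sketch of the idea (not the exact structure of the chart linked above):</p>
<pre><code># values.yaml
jobs:
  job1:
    schedule: "*/1 * * * *"
  job2:
    schedule: "*/5 * * * *"

# templates/cronjobs.yaml
{{- range $name, $job := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $name }}
spec:
  schedule: {{ $job.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: {{ $name }}
            image: busybox
            command: ["sh", "-c", "echo running {{ $name }}"]
          restartPolicy: OnFailure
{{- end }}
</code></pre>
<p>With this pattern, changing a schedule in <code>values.yaml</code> and running <code>helm upgrade</code> updates the corresponding CronJob, which gives the flexibility asked for in the question.</p>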
| Jakub |
<p>I have deployed my application in a baremetal kubernetes cluster using nginx as ingress controller. I have deployed several ui applications with <code><my-hostname>/<module-name></code> (I cannot change this due to client's requirements). I have written following ingress rules to access my APIs and modules.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.org/proxy-connect-timeout: 60s
nginx.org/proxy-read-timeout: 60s
name: test-ingress
namespace: csi-dev
spec:
rules:
- host: 'my.host.name'
http:
paths:
- backend:
serviceName: security-servicename
servicePort: 80
path: (/<api-pattern>/iam/)(.*)
- backend:
serviceName: api-gateway-servicename
servicePort: 80
path: (/<api-pattern>/)(.*)
- backend:
serviceName: ui-config-server-servicename
servicePort: 80
path: (/<ui-config-server-pattern>/)(.*)
- backend:
serviceName: ui-module1-servicename
servicePort: 80
path: /(ui-module1)/?(.*)
- backend:
serviceName: ui-module1-servicename
servicePort: 80
path: /(ui-module2)/?(.*)
- backend:
serviceName: ui-module1-servicename
servicePort: 80
path: /(ui-module3)/?(.*)
</code></pre>
<p>When I apply this Ingress, Kubernetes gives me the following errors.</p>
<pre><code>* spec.rules[0].http.paths[0].path: Invalid value: "(.*/<api-pattern>/iam/)(.*)": must be an absolute path
* spec.rules[0].http.paths[1].path: Invalid value: "(.*/<api-pattern>/)(.*)": must be an absolute path
* spec.rules[0].http.paths[2].path: Invalid value: "(.*/<ui-config-server>/)(.*)": must be an absolute path
</code></pre>
<p>But when I use <code>*.host.name</code> instead of <code>my.host.name</code> this works without error.
I need to restrict my hostname also.</p>
<p>Do anyone have a solution?</p>
| Tishan Harischandrai | <p>Found an answer on <a href="https://gitlab.cncf.ci/kubernetes/kubernetes/blob/60f4fbf4f25764dbd94b7f8146d927ddc684514d/pkg/apis/extensions/validation/validation.go#L417-418" rel="nofollow noreferrer">this</a></p>
<p>The Kubernetes API validates that <code>spec.rules[n].http.paths[m].path</code> begins with a <code>/</code> when the hostname does not contain a wildcard. Hence I rewrote the <code>path</code> as follows.</p>
<p><code>/(.*<api-pattern>/iam/)(.*)</code></p>
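<p>Applied to the original manifest, each path keeps its capture groups but gains a leading slash, for example:</p>
<pre><code>paths:
- backend:
    serviceName: security-servicename
    servicePort: 80
  path: /(.*<api-pattern>/iam/)(.*)
- backend:
    serviceName: api-gateway-servicename
    servicePort: 80
  path: /(.*<api-pattern>/)(.*)
</code></pre>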
| Tishan Harischandrai |
<p>We are using AKS version 1.19.11 and would like to know whether we can enable the configurable scaling behavior of the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> in AKS as well.</p>
<p>If yes: the current HPA is defined with <code>apiVersion: autoscaling/v1</code>. Is it possible to configure these HPA behavior properties with this API version?</p>
| Vowneee | <p>If you ask specifically about <code>behavior</code> field, the answer is: <strong>no, it's not available in <code>apiVersion: autoscaling/v1</code></strong> and if you want to leverage it, you need to use <code>autoscaling/v2beta2</code>. It's clearly stated <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Starting from v1.18 the v2beta2 API allows scaling behavior to be
configured through the HPA behavior field.</p>
</blockquote>
<p>If you have doubts, you can easily check it on your own by trying to apply a new <code>HorizontalPodAutoscaler</code> object definition containing this field, but instead of the required <code>autoscaling/v2beta2</code>, use <code>autoscaling/v1</code>. You should see an error message similar to the one below:</p>
<pre><code>error: error validating "nginx-multiple.yaml": error validating data: [ValidationError(HorizontalPodAutoscaler.spec): unknown field "behavior" in io.k8s.api.autoscaling.v1.HorizontalPodAutoscalerSpec, ValidationError(HorizontalPodAutoscaler.spec): unknown field "metrics" in io.k8s.api.autoscaling.v1.HorizontalPodAutoscalerSpec]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>As you can see both <code>metrics</code> and <code>behavior</code> fields in <code>spec</code> are not valid in <code>autoscaling/v1</code> however they are perfectly valid in <code>autoscaling/v2beta2</code> API.</p>
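<p>For reference, a minimal sketch of an HPA using <code>autoscaling/v2beta2</code> with a <code>behavior</code> section (target names and numbers are placeholders):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60               # remove at most 1 pod per minute
</code></pre>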
<p>To check whether your <strong>AKS</strong> cluster supports this API version, run:</p>
<pre><code>$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
</code></pre>
<p>If your result is similar to mine (i.e. you can see <code>autoscaling/v2beta2</code>), it means your <strong>AKS</strong> cluster supports this API version.</p>
| mario |
<p>I have scripts that are mounted to a shared persistent volume. Part of the main Deployment chart is to run some bash scripts in the <code>initContainers</code> that will clone the scripts repository and copy/mount it to the shared persistent volume. My issue is sometimes there will be no changes in the main app or no update to the values.yaml file, so no helm upgrade will actually happen. I think this is fine but what I want to happen is have a task that will still clone the scripts repository and copy/mount it to the persistent volume. </p>
<p>I am reading about k8s Job (post-install hook) but I am not sure if this will accomplish what I need. </p>
| alltej | <p>Since you have not changed anything on the Helm side, like values or spec/templates, Helm will not perform any change.
In this case your scripts repository is an external source, so from Helm's perspective this behaviour is correct.</p>
<p>I can propose some alternatives to achieve what you want:</p>
<ol>
<li><strong>Use HELM with FORCE flag</strong>
Use <code>helm upgrade --force</code> to upgrade your deployment.
By Helm <a href="https://helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer">docs</a>:
<blockquote>
<p><strong>--force</strong> - force resource updates through a replacement strategy</p>
</blockquote></li>
</ol>
<p>In this case Helm will recreate all resources of your chart, consequently the pods, and then re-run <code>initContainers</code> executing your script again.</p>
<ol start="2">
<li><strong>Use a Kubernetes CronJob</strong>
In this case you will spawn a pod that will mount your volume and run a script/command you want.</li>
</ol>
<p>Example of a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a>:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: nice-count
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: nice-job
image: alpine
command: ['sh', '-c', 'echo "HelloWorld" > /usr/share/nginx/html/index.html']
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
restartPolicy: Never
</code></pre>
<p>In this example, the CronJob will run <strong>every 2 minutes</strong> (schedule <code>*/2 * * * *</code>), mounting the volume <code>task-pv-storage</code> on <code>/usr/share/nginx/html</code> and executing the command <code>echo "HelloWorld" > /usr/share/nginx/html/index.html</code>.</p>
<p>You can trigger the CronJob manually by creating a Job from it with the command:</p>
<pre><code>kubectl create job --from=cronjob/<CRON_JOB_NAME> <JOB_NAME>
</code></pre>
<p>In the example above, the command looks like this:</p>
<p><code>kubectl create job --from=cronjob/nice-count nice-count-job</code></p>
<ol start="3">
<li><strong>Execute a Job manually or using CI/CD</strong>
You can execute the job directly, or if you have a CI/CD solution you can create a one-off Job instead of using a CronJob; in this case you should use this template:</li>
</ol>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: nice-count-job
spec:
template:
spec:
containers:
- image: alpine
name: my-job
volumeMounts:
- mountPath: /usr/share/nginx/html
name: task-pv-storage
command:
- sh
- -c
- echo "hello" > /usr/share/nginx/html/index.html
restartPolicy: Never
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
</code></pre>
<p>I've tested this examples and works in both cases.</p>
<p>Please let me know if that helped!</p>
| Mr.KoopaKiller |
<p>I have a problem with controller-manager and scheduler not responding, that is not related to github issues I've found (<a href="https://github.com/rancher/rancher/issues/11496" rel="nofollow noreferrer">rancher#11496</a>, <a href="https://github.com/Azure/AKS/issues/173" rel="nofollow noreferrer">azure#173</a>, …)</p>
<p>Two days ago we had a memory overflow by one POD on one Node in our 3-node HA cluster. After that rancher webapp was not accessible, we found the compromised pod and scaled it to 0 over kubectl. But that took some time, figuring everything out.</p>
<p>Since then the Rancher webapp has been working properly, but there are continuous alerts about controller-manager and scheduler not working. The alerts are not consistent: sometimes both components are reported healthy, sometimes their health check URLs refuse connections.</p>
<pre><code>NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
</code></pre>
<p>Restarting controller-manager and scheduler on compromised Node hasn’t been effective. Even reloading all of the components with</p>
<p><code>docker restart kube-apiserver kubelet kube-controller-manager kube-scheduler kube-proxy</code>
wasn’t effective either.</p>
<p><strong>Can someone please help me figure out the steps towards troubleshooting and fixing this issue without downtime on running containers?</strong></p>
<p>Nodes are hosted on DigitalOcean on servers with 4 Cores and 8GB of RAM each (Ubuntu 16, Docker 17.03.3).</p>
<p>Thanks in advance !</p>
| ralic | <p>The first area to look at would be your logs... Can you export the following logs and attach them?</p>
<pre><code>/var/log/kube-controller-manager.log
</code></pre>
<p>The controller manager is an endpoint, so you will need to do a "get endpoint". Can you run the following:</p>
<pre><code>kubectl -n kube-system get endpoints kube-controller-manager
</code></pre>
<p>and</p>
<pre><code>kubectl -n kube-system describe endpoints kube-controller-manager
</code></pre>
<p>and</p>
<pre><code>kubectl -n kube-system get endpoints kube-controller-manager -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
</code></pre>
| kellygriffin |
<p>I am facing same issue in my lab setup on my laptop.</p>
<p>Environment:</p>
<h1>Istio version installed 1.7. Pods are up and working</h1>
<p>vagrant@master-1:~$ kubectl get pods -n istio-system</p>
<pre><code>NAME READY STATUS RESTARTS AGE
grafana-75b5cddb4d-5t5lq 1/1 Running 1 16h
istio-egressgateway-695f5944d8-s7mbg 1/1 Running 1 16h
istio-ingressgateway-5c697d4cd7-vpd68 1/1 Running 1 16h
istiod-76fdcdd945-tscgc 1/1 Running 0 17m
kiali-6c49c7d566-8wbnw 1/1 Running 1 16h
prometheus-9d5676d95-zxbnk 2/2 Running 2 14h
</code></pre>
<h1>Kubernetes Cluster information:-</h1>
<p>The cluster is deployed the hard way:
1 LB in front of the master with IP 192.168.5.30 running HAProxy, 1 master node 192.168.5.11, and 2 worker nodes deployed on VirtualBox Ubuntu VMs. I am using weavenet as the CNI for my cluster.</p>
<h1>Worker Node in cluster:-</h1>
<p>vagrant@loadbalancer:~$ kubectl get node -o wide</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
worker-3 Ready <none> 62d v1.18.0 192.168.5.24 <none> Ubuntu 18.04.4 LTS 4.15.0-112-generic docker://19.3.12
worker-4 Ready <none> 61d v1.18.0 192.168.5.25 <none> Ubuntu 18.04.4 LTS 4.15.0-112-generic docker://19.3.12
vagrant@loadbalancer:~$
</code></pre>
<h1>Kube-Apisever config</h1>
<pre><code> --ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=192.168.5.11 \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-swagger-ui=true \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/ca.crt \\
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
--etcd-servers=https://192.168.5.11:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
--v=2
</code></pre>
<h1>istio pod svc</h1>
<pre><code>vagrant@master-1:~$ kubectl describe svc istiod -n istio-system
Name: istiod
Namespace: istio-system
Labels: app=istiod
install.operator.istio.io/owning-resource=installed-state
install.operator.istio.io/owning-resource-namespace=istio-system
istio=pilot
istio.io/rev=default
operator.istio.io/component=Pilot
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.7.0
release=istio
Annotations: Selector: app=istiod,istio=pilot
Type: ClusterIP
IP: 10.96.0.197
Port: grpc-xds 15010/TCP
TargetPort: 15010/TCP
Endpoints: 10.44.0.7:15010
Port: https-dns 15012/TCP
TargetPort: 15012/TCP
Endpoints: 10.44.0.7:15012
Port: https-webhook 443/TCP
TargetPort: 15017/TCP
Endpoints: 10.44.0.7:15017
Port: http-monitoring 15014/TCP
TargetPort: 15014/TCP
Endpoints: 10.44.0.7:15014
Port: dns-tls 853/TCP
TargetPort: 15053/TCP
Endpoints: 10.44.0.7:15053
Session Affinity: None
Events: <none>
</code></pre>
<h1>basic troubleshooting</h1>
<pre><code>vagrant@loadbalancer:~$ kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep policy:
policy: enabled
vagrant@loadbalancer:~$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep
> istio-injection: enabled
bjectSelector: {}
reinvocationPolicy:
> Never
</code></pre>
<h1>Error from Kube API server</h1>
<pre><code>Aug 31 02:48:22 master-1 kube-apiserver[1750]: I0831 02:48:22.521377 1750 trace.go:116] Trace[51800791]: “Call mutating webhook” configuration:istio-sidecar-injector,webhook:sidecar-injector.istio.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:9b96e1b2-3bbe-41d6-a727-0e19cdd9fbd1 (started: 2020-08-31 02:47:52.521061627 +0000 UTC m=+1080.518695497) (total time:30.000277923s):
Aug 31 02:48:22 master-1 kube-apiserver[1750]: Trace[51800791]: [30.000277923s] [30.000277923s] END
Aug 31 02:48:22 master-1 kube-apiserver[1750]: W0831 02:48:22.521529 1750 dispatcher.go:181] Failed calling webhook, failing closed sidecar-injector.istio.io: failed calling webhook “sidecar-injector.istio.io”: Post https://istiod.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Aug 31 02:48:22 master-1 kube-apiserver[1750]: I0831 02:48:22.521814 1750 trace.go:116] Trace[491776795]: “Create” url:/api/v1/namespaces/dev/pods,user-agent## ##:kubectl/v1.18.0 (linux/amd64) kubernetes/9e99141,client:192.168.5.30 (started: 2020-08-31 02:47:52.510910326 +0000 UTC m=+1080.508544152) (total time: 30.010883231s):
Aug 31 02:48:22 master-1 kube-apiserver[1750]: Trace[491776795]: [30.010883231s] [30.003030474s] END
</code></pre>
| Mohit Verma | <p>As I already mentioned in comments if you're using VM you should follow this <a href="https://istio.io/latest/docs/setup/install/virtual-machine/" rel="nofollow noreferrer">guide</a> to deploy Istio and connect a virtual machine to it.</p>
<p>Just note that VM support is still an alpha feature.</p>
<p>Quoted from <a href="https://istio.io/latest/news/releases/1.6.x/announcing-1.6/" rel="nofollow noreferrer">1.6 upgrade notes</a></p>
<blockquote>
<p><strong>Better Virtual Machine support</strong></p>
<p>Expanding our support for workloads not running in Kubernetes was one of the our major areas of investment for 2020, and we’re excited to announce some great progress here.</p>
<p>For those of you who are adding non-Kubernetes workloads to meshes (for example, workloads deployed on VMs), the new WorkloadEntry resource makes that easier than ever. We created this API to give non-Kubernetes workloads first-class representation in Istio. It elevates a VM or bare metal workload to the same level as a Kubernetes Pod, instead of just an endpoint with an IP address. You now even have the ability to define a Service that is backed by both Pods and VMs. Why is that useful? Well, you now have the ability to have a heterogeneous mix of deployments (VMs and Pods) for the same service, providing a great way to migrate VM workloads to a Kubernetes cluster without disrupting traffic to and from it.</p>
<p>VM-based workloads remain a high priority for us, and you can expect to see more in this area over the coming releases.</p>
</blockquote>
<p>There are the steps you should follow to install Istio and connect a virtual machine to it.</p>
<ul>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#prerequisites" rel="nofollow noreferrer">Prerequisites</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#prepare-the-guide-environment" rel="nofollow noreferrer">Prepare the guide environment</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#install-the-istio-control-plane" rel="nofollow noreferrer">Install the Istio control plane</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#configure-the-vm-namespace" rel="nofollow noreferrer">Configure the VM namespace</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#create-files-to-transfer-to-the-virtual-machine" rel="nofollow noreferrer">Create files to transfer to the virtual machine</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#configure-the-virtual-machine" rel="nofollow noreferrer">Configure the virtual machine</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#start-istio-within-the-virtual-machine" rel="nofollow noreferrer">Start Istio within the virtual machine</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/virtual-machine/#verify-istio-works-successfully" rel="nofollow noreferrer">Verify Istio Works Successfully</a></li>
</ul>
<p>There are videos on youtube about that.</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=W32duZtHh2w" rel="nofollow noreferrer">Istio 1.7 improved VM support part 1</a></li>
<li><a href="https://www.youtube.com/watch?v=OR7n1XePSRQ" rel="nofollow noreferrer">Istio 1.7 improved VM support part 2</a></li>
</ul>
<p>There are examples on istio documentation.</p>
<ul>
<li><a href="https://istio.io/latest/docs/examples/virtual-machines/single-network/" rel="nofollow noreferrer">https://istio.io/latest/docs/examples/virtual-machines/single-network/</a></li>
<li><a href="https://istio.io/latest/docs/examples/virtual-machines/multi-network/" rel="nofollow noreferrer">https://istio.io/latest/docs/examples/virtual-machines/multi-network/</a></li>
<li><a href="https://istio.io/latest/docs/examples/virtual-machines/bookinfo/" rel="nofollow noreferrer">https://istio.io/latest/docs/examples/virtual-machines/bookinfo/</a></li>
</ul>
| Jakub |
<h3>Background</h3>
<p>I have app running in kubernetes cluster using sharded mongodb and elasticsearch statefulsets. I setup horizontal pod autoscalers for deployment components in my app and everything works well.</p>
<h3>Problems</h3>
<p>Problems arise when the traffic goes up. My server deployment scales out just fine, but mongodb shards and elasticsearch nodes cannot handle this much traffic and throttle overall response time.</p>
<p>Simple solution is to configure those statefulset with more shards, more replicas. What bugs me is that traffic spike happens like 3-4 hours a day, thus it's kinda wasteful to let all those boys sitting idly for the rest of the day.</p>
<p>I did some research and it looks like databases in general are not supposed to scale out/in dynamically, as doing so consumes a lot of network and disk IO just for replication. There is also a potential for data loss and inconsistency while scaling up or down.</p>
<h3>Questions</h3>
<p>If possible, what is proper way to handle dynamic scaling in mongodb, elasticsearch... and database in general?<br />
If not, what can I do to save some cents off my cloud bill as we only need the maximum power from database pods for a short period per day.</p>
| Dang Duy Nam | <p>You should read about Kubernetes autoscaling - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a>.</p>
<blockquote>
<p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.</p>
<p>The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by user.</p>
</blockquote>
<p>With HPA you also have to take care of volume mounting and data latency.</p>
<hr />
<p>As @Serge mentioned in the comments, I would suggest checking the native cluster scaling options provided by MongoDB and Elasticsearch themselves.</p>
<p>Take a look at</p>
<ul>
<li>MongoDB operator <a href="https://docs.mongodb.com/kubernetes-operator/master/tutorial/install-k8s-operator/" rel="nofollow noreferrer">documentation</a></li>
<li>Elasticsearch operator <a href="https://www.elastic.co/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond" rel="nofollow noreferrer">documentation</a></li>
<li>Elasticsearch future release <a href="https://www.elastic.co/guide/en/elasticsearch/reference/master/xpack-autoscaling.html" rel="nofollow noreferrer">autoscaling</a></li>
</ul>
<p>I am not very familiar with MongoDB and Elasticsearch with Kubernetes, but maybe those tutorials help you:</p>
<ul>
<li><a href="https://medium.com/faun/scaling-mongodb-on-kubernetes-32e446c16b82" rel="nofollow noreferrer">https://medium.com/faun/scaling-mongodb-on-kubernetes-32e446c16b82</a></li>
<li><a href="https://www.youtube.com/watch?v=J7h0F34iBx0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=J7h0F34iBx0</a></li>
<li><a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/</a></li>
<li><a href="https://sematext.com/blog/elasticsearch-operator-on-kubernetes/#toc-what-is-the-elasticsearch-operator-1" rel="nofollow noreferrer">https://sematext.com/blog/elasticsearch-operator-on-kubernetes/#toc-what-is-the-elasticsearch-operator-1</a></li>
</ul>
<hr />
<p>If you use <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> take a look at banzaicloud <a href="https://github.com/banzaicloud/hpa-operator" rel="nofollow noreferrer">Horizontal Pod Autoscaler operator</a></p>
<blockquote>
<p>You may not want nor can edit a Helm chart just to add an autoscaling feature. Nearly all charts supports custom annotations so we believe that it would be a good idea to be able to setup autoscaling just by adding some simple annotations to your deployment.</p>
<p>We have open sourced a Horizontal Pod Autoscaler operator. This operator watches for your Deployment or StatefulSet and automatically creates an HorizontalPodAutoscaler resource, should you provide the correct autoscale annotations.</p>
</blockquote>
<hr />
<p>Hope you find this useful.</p>
| Jakub |
<p>First of all, what I want to build is right below.
<a href="https://i.stack.imgur.com/b5yKz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b5yKz.png" alt="enter image description here" /></a></p>
<p>As shown in the diagram above, I want the Ingress to distribute traffic to a service that lives in another namespace, <code>me</code>, in the same cluster (the Ingress is in the <code>main</code> namespace). But an Ingress doesn't allow pointing to a DNS name directly, so I created an ExternalName Service that points to the <code>me-service</code> DNS name <code>me-service.me.svc.cluster.local</code>, and the Ingress points to that.</p>
<p>Yaml of it is</p>
<p>main.k8s.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: main
---
apiVersion: v1
kind: Service
metadata:
name: me-service
namespace: main
spec:
externalName: me-service.me.svc.cluster.local
type: ExternalName
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
name: main-router
namespace: main
spec:
rules:
- host: some-domain.me
http:
paths:
- backend:
service:
name: me-service
port:
number: 80
path: /
pathType: ImplementationSpecific
</code></pre>
<p>me.k8s.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
labels:
stag: production
name: me
---
apiVersion: v1
kind: Service # <-- this is the service I want to point
metadata:
labels:
app: me
stag: production
name: me-service
namespace: me
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: me
stag: production
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: me
stag: production
name: me-deployment
namespace: me
spec:
replicas: 2
selector:
matchLabels:
app: me
stag: production
template:
metadata:
labels:
app: me
stag: production
spec:
containers:
- image: gcr.io/me:latest
name: me
ports:
- containerPort: 80
resources:
limits:
cpu: 300m
memory: 512M
requests:
cpu: 250m
memory: 512M
</code></pre>
<p>And I checked that the DNS address works, but the Ingress object isn't created, with the following error message:</p>
<pre><code>me-service:80 (<error: endpoints "me-service" not found>)
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 6m21s (x233 over 22h) loadbalancer-controller Translation failed: invalid ingress spec: could not find port "80" in service "main/me-service"
</code></pre>
<p>How can I make ingress work? If you need more information, please let me know. :pray:</p>
<p>GKE Engine: <strong>1.20.6-gke.1000</strong></p>
<p>HTTP Load Balancing: <strong>Enabled</strong></p>
<p>Network policy: <strong>Disabled</strong></p>
<p>Dataplane V2: <strong>Enabled</strong></p>
| HyeonJunOh | <p><em>I'm posting it as an answer for better visibility. As I already mentioned in my comments:</em></p>
<p>As far as I know you cannot use <strong>GKE ingress</strong> with <code>ExternalName</code> Service type. The two supported types are <code>LoadBalancer</code> and <code>NodePort</code>. If nothing changed recently, you shouldn't be able to create an ingress resource even with a simple <code>ClusterIP</code>, only two above mentioned svc types so I don't believe that <code>ExternalName</code> would work. Well, you can actually use <code>ClusterIP</code> but only if you use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/container-native-load-balancing" rel="nofollow noreferrer">container native load balancing</a> which requires your <strong>GKE</strong> cluster to be VPC-native.</p>
<p>You can still use <strong>GKE</strong> but you don't have to use <strong>GCE ingress</strong> as an ingress controller at the same time. But I would try first if it doesn't work with the mentioned container-native load balancing.</p>
<p>You can always deploy a <strong>different ingress controller on your GKE cluster, e.g. nginx-ingress</strong>. It can use <code>ClusterIP</code> services out of the box, but I'm not sure if it can handle <code>ExternalName</code>, so you would have to try this out.</p>
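<p>For illustration, here is a minimal sketch of how the Ingress from the question could look when switched to the <strong>nginx-ingress</strong> controller (this assumes the controller is already installed in the cluster and keeps the <code>ExternalName</code> Service unchanged; it is a sketch to try out, not a verified configuration):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-router
  namespace: main
  annotations:
    kubernetes.io/ingress.class: nginx   # use the nginx controller instead of gce
spec:
  rules:
  - host: some-domain.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: me-service             # the ExternalName Service in the main namespace
            port:
              number: 80
</code></pre>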
<p>OP confirmed that with <strong>nginx-ingress</strong> it was possible to distribute traffic to services located in different namespaces:</p>
<blockquote>
<p>@mario Thank you for your comment. I successfully distribute traffics
to other namespace svc using NGINX-ingress. – HyeonJunOh Jul 23 at
9:23</p>
</blockquote>
| mario |
<p>I'm trying to proxy_pass traffic based on the user-agent. I tried to use server-snippet / configuration-snippet for it, but the ingress doesn't allow me to (it forbids proxy_pass in server-snippet and complains about duplicates in configuration-snippet).</p>
<p>I can't just use the "backend", as I have to dynamically route the traffic myself based on the user-agent.
Is there any chance I could do this? A non-working configuration example is below (without the user-agent part yet):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
spec:
rules:
- host: m-rm-qa.yadayadayada
http:
paths:
- path: /
backend:
serviceName: frontend-svc
servicePort: 80
metadata:
name: rm-frontend-ingress
namespace: rm-qa
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/server-snippet: |
proxy_pass http://prerender-service:3000;
rewrite .* /$scheme://$host$request_uri? break;
</code></pre>
| Федор Дао | <p>I've tried to reproduce your scenario using Nginx Ingress, but without success using <code>server-snippet</code> and <code>configuration-snippet</code>.</p>
<p>I did some research and saw that <code>Nginx Plus</code> has a snippet called <code>location-snippet</code> that should work. See <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#snippets-and-custom-templates" rel="nofollow noreferrer">here.</a></p>
<p>Alternatively, I've created a custom Nginx deployment with a Service of type <code>LoadBalancer</code> and a <code>configMap</code> with a custom Nginx configuration, and it works!</p>
<p>If you want to try it, you need to create a <code>configMap</code> with your custom <code>default.conf</code> file; it looks like this:</p>
<blockquote>
<p>I'm using the namespace: <code>default</code> for this example, but you can create a custom namespace if you want.</p>
</blockquote>
<p><strong>nginx-custom-config.yaml:</strong></p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-nginx-config
namespace: default
data:
default.conf: |
upstream my-svc {
server echo-svc.default.svc.cluster.local;
}
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
if ($http_user_agent ~* "iPhone|iPad" ) {
add_header X-Vendor "Apple";
proxy_pass http://my-svc;
}
if ($http_user_agent ~ Chrome ) {
add_header X-Vendor "OpenSource";
proxy_pass http://my-svc;
}
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
</code></pre>
<p>Apply to Kubernetes:</p>
<pre><code>kubectl apply -f nginx-custom-config.yaml
</code></pre>
<ul>
<li>I've created an <code>upstream</code> called <code>my-svc</code> pointing to my destination service <code>echo-svc.default.svc.cluster.local</code>.</li>
<li>In my <code>location: /</code> there's a condition that matches the <code>User-agent</code>: if a request was made from an Apple device <code>"iPhone|iPad"</code>, a header named <code>X-Vendor</code> with the value <code>Apple</code> will be added and the request redirected to my destination service <code>my-svc</code>. The same will happen if the request was made from "Chrome", but the header will be <code>X-Vendor: "OpenSource"</code>.</li>
<li>If the request was made from another browser, like firefox, curl etc., then the Nginx default page will be displayed.</li>
</ul>
<p>After that you need to create a <code>deployment</code> of an Nginx image, mounting our <code>configMap</code> as a file inside the containers, like this:</p>
<p><strong>custom-nginx-deployment.yaml:</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-nginx
spec:
selector:
matchLabels:
app: custom-nginx
template:
metadata:
labels:
app: custom-nginx
spec:
containers:
- name: custom-nginx
image: nginx
volumeMounts:
- name: custom-nginx-config
mountPath: /etc/nginx/conf.d
ports:
- name: http
containerPort: 80
imagePullPolicy: IfNotPresent
volumes:
- name: custom-nginx-config
configMap:
name: custom-nginx-config
</code></pre>
<p><code>kubectl apply -f custom-nginx-deployment.yaml</code></p>
<p>And finally, create a <code>LoadBalancer</code> Service to receive the requests:</p>
<p><strong>custom-nginx-svc.yaml:</strong></p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: custom-nginx-svc
labels:
app: custom-nginx
spec:
selector:
app: custom-nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<p><code>kubectl apply -f custom-nginx-svc.yaml</code></p>
<p>You can check whether the container and service were successfully deployed using the commands:</p>
<pre><code>kubectl get pods -l app=custom-nginx
kubectl get svc -l app=custom-nginx
</code></pre>
<p>Hope that helps!</p>
| Mr.KoopaKiller |
<p>I want to create a custom 403 error page.
Currently I already have an Ingress created and in the annotations I have something like this:</p>
<pre><code>"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,88.100.01.01"
</code></pre>
<p>So any attempt to access my web app outside that IP range receives a 403 error.</p>
<p>In order to create a custom page I tried adding the following annotations:</p>
<pre><code>"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend"
</code></pre>
<p>where default-http-backend is the name of an app already deployed.
<a href="https://i.stack.imgur.com/6XixQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6XixQ.png" alt="Pod details"></a></p>
<p>the ingress has this:</p>
<pre><code>{
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "my-app-ingress",
"namespace": "my-app-test",
"selfLink": "/apis/extensions/v1beta1/namespaces/my-app-test/ingresses/my-app-ingress",
"uid": "8f31f2b4-428d-11ea-b15a-ee0dcf00d5a8",
"resourceVersion": "129105581",
"generation": 3,
"creationTimestamp": "2020-01-29T11:50:34Z",
"annotations": {
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend",
"nginx.ingress.kubernetes.io/rewrite-target": "/",
"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,90.108.01.012"
}
},
"spec": {
"tls": [
{
"hosts": [
"my-app-test.retail-azure.js-devops.co.uk"
],
"secretName": "ssl-secret"
}
],
"rules": [
{
"host": "my-app-test.retail-azure.js-devops.co.uk",
"http": {
"paths": [
{
"path": "/api",
"backend": {
"serviceName": "my-app-backend",
"servicePort": 80
}
},
{
"path": "/",
"backend": {
"serviceName": "my-app-frontend",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
</code></pre>
<p>Yet I always get the default 403.
What am I missing?</p>
| RagnaRock | <p>I've reproduced your scenario and it worked for me.
I will guide you through the steps I've followed.</p>
<p><strong>Cloud provider</strong>: GKE
<strong>Kubernetes Version</strong>: v1.15.3
<strong>Namespace</strong>: <code>default</code></p>
<p>I'm using 2 deployments of 2 images with a service for each one.</p>
<p><strong>Service 1</strong>: <code>default-http-backend</code> - with nginx image, it will be our default backend.</p>
<p><strong>Service 2</strong>: <code>custom-http-backend</code> - with the inanimate/echo-server image, this service will be displayed if the request comes from a whitelisted IP.</p>
<p><strong>Ingress</strong>: Nginx ingress with annotations.</p>
<p><strong>Expected behavior:</strong> The ingress will be configured to use the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#default-backend" rel="noreferrer">default-backend</a>, <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-http-errors" rel="noreferrer">custom-http-errors</a> and <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range" rel="noreferrer">whitelist-source-range</a> annotations. If the request was made from a whitelisted IP, the ingress will route to <strong>custom-http-backend</strong>; if not, it will be redirected to <strong>default-http-backend</strong>.</p>
<h3>Deployment 1: default-http-backend</h3>
<p>Create a file <code>default-http-backend.yaml</code> with this content:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
spec:
selector:
matchLabels:
app: default-http-backend
template:
metadata:
labels:
app: default-http-backend
spec:
containers:
- name: default-http-backend
image: nginx
ports:
- name: http
containerPort: 80
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
spec:
selector:
app: default-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Apply the yaml file: <code>k apply -f default-http-backend.yaml</code></p>
<h3>Deployment 2: custom-http-backend</h3>
<p>Create a file <code>custom-http-backend.yaml</code> with this content:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-http-backend
spec:
selector:
matchLabels:
app: custom-http-backend
template:
metadata:
labels:
app: custom-http-backend
spec:
containers:
- name: custom-http-backend
image: inanimate/echo-server
ports:
- name: http
containerPort: 8080
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: custom-http-backend
spec:
selector:
app: custom-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>Apply the yaml file: <code>k apply -f custom-http-backend.yaml</code></p>
<h3>Check if services is up and running</h3>
<p><em>I'm using the alias <code>k</code> for <code>kubectl</code></em></p>
<pre><code>➜ ~ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
custom-http-backend ClusterIP 10.125.5.227 <none> 80/TCP 73s
default-http-backend ClusterIP 10.125.9.218 <none> 80/TCP 5m41s
...
</code></pre>
<pre><code>➜ ~ k get pods
NAME READY STATUS RESTARTS AGE
custom-http-backend-67844fb65d-k2mwl 1/1 Running 0 2m10s
default-http-backend-5485f569bd-fkd6f 1/1 Running 0 6m39s
...
</code></pre>
<p>You could test the service using <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">port-forward</a>:</p>
<p><strong>default-http-backend</strong>
<code>k port-forward svc/default-http-backend 8080:80</code>
Try to access <a href="http://localhost:8080" rel="noreferrer">http://localhost:8080</a> in your browser to see the nginx default page.</p>
<p><strong>custom-http-backend</strong>
<code>k port-forward svc/custom-http-backend 8080:80</code>
Try to access <a href="http://localhost:8080" rel="noreferrer">http://localhost:8080</a> in your browser to see the custom page provided by the echo-server image.</p>
<h3>Ingress configuration</h3>
<p>At this point we have both services up and running, we need to install and configure the nginx ingress. You can follow the <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">official documentation</a>, this will not covered here.</p>
<p>After it is installed, let's deploy the ingress. Based on the code you posted I made some modifications: removed the tls section, used another domain, removed the path <code>/api</code> (for test purposes only) and added my home IP to the whitelist.</p>
<p>Create a file <code>my-app-ingress.yaml</code> with the content:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/custom-http-errors: '403'
nginx.ingress.kubernetes.io/default-backend: default-http-backend
nginx.ingress.kubernetes.io/whitelist-source-range: 207.34.xxx.xx/32
spec:
rules:
- host: myapp.rabello.me
http:
paths:
- path: "/"
backend:
serviceName: custom-http-backend
servicePort: 80
</code></pre>
<p>Apply the spec: <code>k apply -f my-app-ingress.yaml</code></p>
<p>Check the ingress with the command:</p>
<pre><code>➜ ~ k get ing
NAME HOSTS ADDRESS PORTS AGE
my-app-ingress myapp.rabello.me 146.148.xx.xxx 80 36m
</code></pre>
<p><strong>That's all!</strong></p>
<p>If I test from home with my whitelisted IP, the custom page is shown, but if I try to access it using my cellphone on a 4G network, the nginx default page is displayed.</p>
<p>Note that I'm using the ingress and services in the same namespace; if you need to work with a different namespace you need to use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="noreferrer">ExternalName</a>.</p>
<p>I hope that helps!</p>
<p><strong>References:</strong>
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">kubernetes deployments</a></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">kubernetes service</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx ingress</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="noreferrer">nginx annotations</a></p>
| Mr.KoopaKiller |
<p>I have set up a Google Cloud Platform kubernetes cluster (and Container Registry) with source code on GitHub. Source code is divided into folders with separate Dockerfiles for each microservice.</p>
<p>I want to set up CI/CD using GitHub actions.</p>
<p>As far as I understand, the <a href="https://github.com/actions/starter-workflows/blob/d9236ebe5585b1efd5732a29ea126807279ccd56/ci/google.yml" rel="nofollow noreferrer">default GKE workflow</a> will connect to gcloud using secrets, build the images and push them to Container Registry. And then perform an update.</p>
<h2>My questions</h2>
<ul>
<li>How is the deployment performed?</li>
<li>What is <strong>kustomize</strong> for?</li>
<li>Do I have to configure on gcloud anything else than <strong>GKE key / token</strong></li>
<li>Suppose I want to update multiple docker images. Will it suffice to build multiple images and push them? Like below (a little bit simplified for clarity), or do I have to also modify the <strong>Deploy</strong> job:</li>
</ul>
<pre><code> - name: Build
run: |-
docker build -t "gcr.io/$PROJECT_ID/$IMAGE_1:$GITHUB_SHA" service1/.
docker build -t "gcr.io/$PROJECT_ID/$IMAGE_2:$GITHUB_SHA" service2/.
docker build -t "gcr.io/$PROJECT_ID/$IMAGE_3:$GITHUB_SHA" service3/.
- name: Publish
run: |-
docker push "gcr.io/$PROJECT_ID/$IMAGE_1:$GITHUB_SHA"
docker push "gcr.io/$PROJECT_ID/$IMAGE_2:$GITHUB_SHA"
docker push "gcr.io/$PROJECT_ID/$IMAGE_3:$GITHUB_SHA"
</code></pre>
<hr />
<p>This is the deploy fragment from the GKE workflow:</p>
<pre><code> # Deploy the Docker image to the GKE cluster
- name: Deploy
run: |-
./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
./kustomize build . | kubectl apply -f -
kubectl rollout status deployment/$DEPLOYMENT_NAME
kubectl get services -o wide
</code></pre>
| Piotr Kolecki | <blockquote>
<p>How is the deployment performed?</p>
</blockquote>
<p>To learn how to deploy or run this workflow, please refer to this <a href="https://github.com/google-github-actions/setup-gcloud/tree/master/example-workflows/gke" rel="nofollow noreferrer">documentation</a>.</p>
<blockquote>
<p>What is kustomize for?</p>
</blockquote>
<p>kustomize is a <a href="https://kustomize.io/" rel="nofollow noreferrer">configuration management</a> tool for customizing Kubernetes application configuration.</p>
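<p>For illustration, a minimal sketch of a hypothetical <code>kustomization.yaml</code> that the <code>kustomize edit set image</code> step in the Deploy job rewrites (the resource file and image names here are assumptions, not taken from your repository):</p>
<pre class="lang-yaml prettyprint-override"><code># kustomization.yaml (hypothetical example)
# "kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=..." updates the
# entries under "images:", so the rendered Deployment gets the new tag.
resources:
- deployment.yaml                    # assumed manifest containing the Deployment
images:
- name: gcr.io/PROJECT_ID/IMAGE      # placeholder reference used inside deployment.yaml
  newName: gcr.io/my-project/service1
  newTag: abc123                     # replaced with $GITHUB_SHA by the workflow
</code></pre>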
<blockquote>
<p>Do I have to configure on gcloud anything else than GKE key / token</p>
</blockquote>
<p>You don't have to, unless you are adding an additional layer of security for authenticating the workflow.</p>
<blockquote>
<p>Suppose I want to update multiple docker images. Will it suffice to build multiple images and push them? Like below (a little bit simplified for clarity), or do I have to also modify the Deploy job</p>
</blockquote>
<p>I think there is no need to modify the deploy job; it is enough to build multiple images and push them to GCR.</p>
| Gautham |
<p>I am very new to Docker and Kubernetes. I have made a cluster of 3 nodes and now I am creating a YAML file for pod creation. I have taken the image from <a href="https://github.com/utkudarilmaz/docker-hping3" rel="nofollow noreferrer">https://github.com/utkudarilmaz/docker-hping3</a>; the image name is utkudarilmaz/hping3. Can someone help me set the command correctly or put the executable on the path? I cannot understand the problem; I want my pod to run successfully so that I can use it.
My YAML file looks like:</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: second
labels:
app: web
spec:
containers:
- name: hping3
image: utkudarilmaz/hping3
command: ["hping3 [IP_ADDRESS"]
ports:
- containerPort: 80
nodeSelector:
disktype: ssd
</code></pre>
<p>If I do not specify [command], my pod status is CrashLoopBackOff. I have searched and found <a href="https://stackoverflow.com/questions/41604499/my-kubernetes-pods-keep-crashing-with-crashloopbackoff-but-i-cant-find-any-lo">My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log</a>.
I need a command that keeps the container running, otherwise it goes into that cycle. If I specify a command in the YAML file like the one above, command: ["hping3 103.22.221.59"], and then run</p>
<pre><code>kubectl exec –it second – hping3 [IP_ADDRESS]
</code></pre>
<p>I get</p>
<pre><code>error: unable to upgrade connection: container not found ("hping3")
</code></pre>
<p>The output of <code>kubectl describe pod second</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m39s default-scheduler Successfully assigned default/second1 to netcs
Normal Pulled 3m35s kubelet Successfully pulled image "utkudarilmaz/hping3" in 2.714028668s
Normal Pulled 3m31s kubelet Successfully pulled image "utkudarilmaz/hping3" in 2.734426606s
Normal Pulled 3m15s kubelet Successfully pulled image "utkudarilmaz/hping3" in 2.61256593s
Normal Pulled 2m46s kubelet Successfully pulled image "utkudarilmaz/hping3" in 2.65727147s
Warning BackOff 2m11s (x5 over 3m4s) kubelet Back-off restarting failed container
Normal Pulling 2m4s (x5 over 3m38s) kubelet Pulling image "utkudarilmaz/hping3"
Normal Created 119s (x5 over 3m35s) kubelet Created container hping3
Warning Failed 119s (x5 over 3m35s) kubelet Error: failed to start container "hping3": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "hping3 103.22.221.59": executable file not found in $PATH: unknown
Normal Pulled 119s kubelet Successfully pulled image "utkudarilmaz/hping3" in 5.128803062s
</code></pre>
<p>Some output of <code>docker inspect utkudarilmaz/hping3</code>:</p>
<pre><code>
"Mounts": [],
"Config": {
"Hostname": "104e9920881b",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "utkudarilmaz/hping3",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"hping3"
],
"OnBuild": null,
"Labels": {
"desription": "hping3 tool building on Alpine:latest",
"version": "1.0"
</code></pre>
<p>My container will not keep running even when I try this command (from <a href="https://stackoverflow.com/questions/31870222/how-can-i-keep-a-container-running-on-kubernetes/40093356" rel="nofollow noreferrer">How can I keep a container running on Kubernetes?</a>):</p>
<pre><code>command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
</code></pre>
<p>I get the same error: executable file not found in $PATH.</p>
| goody | <p>First of all, you don't need to specify <code>containerPort</code> here as there is nothing listening on any tcp port in your <code>hping3</code> container:</p>
<pre><code>$ kubectl exec -ti second -- /bin/sh
/ # netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
</code></pre>
<p>In fact you don't even need to provide any <code>command</code> as <code>hping3</code> is already defined as an <code>ENTRYPOINT</code> in this docker image and you don't really need to overwrite it. All you need in order to run your <code>hping3</code> <code>Pod</code> is the following yaml manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: second
spec:
containers:
- name: hping3
image: utkudarilmaz/hping3
args: ["IP-address"]
</code></pre>
<p>Yes, providing some <code>args</code> is obligatory in this case, otherwise your container will fall into <code>CrashLoopBackOff</code> state.</p>
<p>As you can read in the very brief description of the image in its <a href="https://github.com/utkudarilmaz/docker-hping3#docker-hping3" rel="noreferrer">README.md</a>:</p>
<blockquote>
<p><strong>Usage:</strong></p>
<pre><code>docker pull utkudarilmaz/hping3:latest
docker run utkudarilmaz/hping3:latest [parameters] target_ip
</code></pre>
</blockquote>
<p>providing <code>target_ip</code> is obligatory, but you don't have to provide anything else.</p>
<p>Although the above usage description doesn't say anything about running this image on <strong>kubernetes</strong>, such short description should be totally enough for us and we should be able to translate it "from <strong>docker</strong> to <strong>kubernetes</strong> language".</p>
<p>Take a look at the following section, titled <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="noreferrer">Define a Command and Arguments for a Container</a>, in the official <strong>kubernetes</strong> docs, especially <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="noreferrer">this</a> fragment:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/Rv9Jw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Rv9Jw.png" alt="enter image description here" /></a></p>
<p>When you override the default Entrypoint and Cmd, these rules apply:</p>
<ul>
<li><p>If you do not supply <code>command</code> or <code>args</code> for a Container, the defaults defined in the Docker image are used.</p>
</li>
<li><p>If you supply a <code>command</code> but no <code>args</code> for a Container, only the supplied <code>command</code> is used. The default EntryPoint and the
default Cmd defined in the Docker image are ignored.</p>
</li>
<li><p>If you supply only <code>args</code> for a Container, the default Entrypoint defined in the Docker image is run with the <code>args</code> that
you supplied.</p>
</li>
<li><p>If you supply a <code>command</code> and <code>args</code>, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your
<code>command</code> is run with your <code>args</code>.</p>
</li>
</ul>
</blockquote>
<p>From the above we are particularly interested in the third point:</p>
<blockquote>
<ul>
<li>If you supply only <code>args</code> for a Container, the default Entrypoint defined in the Docker image is run with the <code>args</code> that
you supplied.</li>
</ul>
</blockquote>
<p>which means that in our <strong>kubernetes</strong> <code>Pod</code> definition we may supply only <code>args</code> and it's totally fine. As the <code>ENTRYPOINT</code> is already defined in the <code>utkudarilmaz/hping3</code> image, there is no need to overwrite it by defining a <code>command</code>.</p>
<p>I was able to reproduce the error message you get only when trying to connect to the <code>hping3</code> container in <code>CrashLoopBackOff</code> state:</p>
<pre><code>$ kubectl exec -ti second -- hping3 [IP-address]
error: unable to upgrade connection: container not found ("hping3")
</code></pre>
<p>But when it runs, <code>kubectl exec</code> works without any issues:</p>
<pre><code>$ kubectl exec -ti second -- hping3 [IP-address]
HPING [IP-address] (eth0 [IP-address]): NO FLAGS are set, 40 headers + 0 data bytes
</code></pre>
<p>Btw. the hyphens in your command look a bit strange: they are not exactly the same characters as <code>-</code>, so they are not interpreted correctly when copied from the code snippet in your question, leading to strange errors like the following:</p>
<pre><code>Error from server (NotFound): pods "–it" not found
</code></pre>
<p>So please mind the exact characters that you use in your commands.</p>
<p>As to the explanation of the error message you see when you <code>kubectl describe</code> your <code>Pod</code>:</p>
<pre><code>"hping3 [IP-address]": executable file not found in $PATH: unknown
</code></pre>
<p>it says clearly that an executable named "hping3 [IP-address]" (yes, the name of a single file!) cannot be found in your <code>$PATH</code>, and I'm sure you don't have an executable with such a name 😉</p>
<p>If you provide a <code>command</code> this way:</p>
<pre><code>command: ["hping3 [IP-address]"]
</code></pre>
<p>keep in mind that the whole string between the double quotes is interpreted as a single command / executable. That's why it was trying to look for an executable file named "hping3 [IP-address]", but for obvious reasons it couldn't find it.</p>
<p>As already mentioned in comments, the correct usage of the <code>command</code> field can be:</p>
<pre><code>command: ["hping3","[IP-address]"]
</code></pre>
<p>but in your case you don't really need it.</p>
<p>I hope the above explanation was helpful.</p>
| mario |
<p>In Azure pipeline I download kubernetes deployment.yml property file which contains following content.</p>
<pre><code>spec:
imagePullSecrets:
- name: some-secret
containers:
- name: container-name
image: pathtoimage/data-processor:$(releaseVersion)
imagePullPolicy: Always
ports:
- containerPort: 8088
env:
</code></pre>
<p>My intention is to get the value from the pipeline variable <code>$(releaseVersion)</code>, but it seems like the <code>kubernetes</code> task doesn't substitute this value from the pipeline variable.</p>
<p>I tried using the inline configuration type and it works. That means if I copy the same configuration as inline content into the <code>kubernetes</code> task configuration, it works.</p>
<p>Is there any way I can make it work for the configuration from a file?</p>
| Channa | <p>As I understand it, you want to replace the variable in the deployment.yml file content while the build executes.</p>
<p>You can use a task called <strong><a href="https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens&targetId=8b5aa46a-618f-4eee-b902-48c215242e3e&utm_source=vstsproduct&utm_medium=ExtHubManageList" rel="noreferrer">Replace Tokens task</a></strong> (note: the <strong>token</strong> in this task name is not the same as a PAT token). This task supports replacing values in project files with environment variables when setting up VSTS Build/Release processes.</p>
<p>Install <strong>Replace Tokens</strong> from the marketplace first, then add the <strong>Replace Tokens task</strong> to your pipeline.</p>
<p>Configure the .yml file path in the Root directory. For me, the target file is under the Drop folder. Then point out which file you want to operate on and replace values in.</p>
<p><a href="https://i.stack.imgur.com/pDoSy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pDoSy.png" alt="enter image description here"></a></p>
<p>For more arguments to configure, you can check this doc, which I have referred to before: <a href="https://github.com/qetza/vsts-replacetokens-task#readme" rel="noreferrer">https://github.com/qetza/vsts-replacetokens-task#readme</a></p>
<p><strong>Note</strong>: Please execute this task before the Deploy to Kubernetes task, so that the change can be applied to the Kubernetes cluster.</p>
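<p>For illustration, a hedged sketch of what the pipeline step might look like in YAML (this assumes the qetza <code>replacetokens</code> task, major version 3, and overrides the token prefix/suffix so it matches the <code>$(releaseVersion)</code>-style placeholders in your file; adjust paths and versions to your actual setup):</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical fragment - run this step before the Kubernetes deploy step.
steps:
- task: replacetokens@3                # assumed task/version from the marketplace extension
  displayName: 'Replace tokens in deployment.yml'
  inputs:
    rootDirectory: '$(System.DefaultWorkingDirectory)/drop'   # assumed artifact folder
    targetFiles: '**/deployment.yml'
    tokenPrefix: '$('                  # match $(releaseVersion)-style placeholders
    tokenSuffix: ')'
</code></pre>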
<p>Here is another <a href="https://medium.com/@marcodesanctis2/a-build-and-release-pipeline-in-vsts-for-docker-and-azure-kubernetes-service-aks-41efc9a0c5c4" rel="noreferrer">sample blog</a> you can refer to.</p>
| Mengdi Liang |
<p>Trying to install a kubernetes cluster with kubeadm, and I'm facing an issue with installing the kube packages.
I keep getting this error:</p>
<pre><code>https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes
</code></pre>
<p>Repo config:</p>
<pre><code> [kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
</code></pre>
<p>I'm using the CentOS 7 distro:</p>
<pre><code>Linux kube-master 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
| Mr. V.K. | <p>You can see very similar issue <a href="https://github.com/kubernetes/kubernetes/issues/60134" rel="noreferrer">here</a>.</p>
<p>As a <strong>quick workaround</strong>, you can simply disable <strong>GPG checking</strong> by setting <code>repo_gpgcheck=0</code> in <code>/etc/yum.repos.d/kubernetes.repo</code>, but this is obviously not recommended from a security perspective.</p>
<p>Additionally you may try the following:</p>
<ul>
<li><p>re-import the keys as suggested <a href="https://github.com/kubernetes/kubernetes/issues/60134#issuecomment-385942005" rel="noreferrer">here</a></p>
<pre><code>rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
</code></pre>
</li>
<li><p>install the following version of <strong>GNUPG</strong> as suggested <a href="https://github.com/kubernetes/kubernetes/issues/60134#issuecomment-573247871" rel="noreferrer">here</a></p>
<pre><code>sudo yum install -y http://mirror.centos.org/centos/7/os/x86_64/Packages/gnupg2-2.0.22-5.el7_5.x86_64.rpm
</code></pre>
</li>
</ul>
| mario |
<p>I am running Istio 1.5, where SDS is enabled by default apparently, and am trying to enable TLS on north-south traffic in my EKS cluster (v1.15) and I have done the following:</p>
<ul>
<li>Followed steps here to set up a sample application <a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/getting-started/</a></li>
<li>Installed cert manager 0.15.1</li>
<li>Created a cluster issuer</li>
<li>Configured the cluster issuer to attempt to solve the DNS challenge by integrating it with AWS Route53</li>
<li>Generate a certificate using the cluster issuer and letsencrypt</li>
<li>Followed the steps here to configure the gateway and virtual service with the certificate created above <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/</a></li>
<li>I copied the root certificate of letsencrypt to pass through the curl command</li>
<li>Tried to curl to the IP of the loadbalancer and I get this error</li>
</ul>
<p><a href="https://i.stack.imgur.com/GWqbY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GWqbY.jpg" alt="" /></a></p>
<p>Can anyone please guide me on how to resolve this?</p>
| YYashwanth | <p>There is related <a href="https://istio.io/latest/docs/ops/integrations/certmanager/" rel="nofollow noreferrer">documentation</a> about integrating cert-manager and Istio.</p>
<h2>cert-manager</h2>
<blockquote>
<p>Configuration</p>
<p>Consult the <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">cert-manager installation documentation</a> to get started. No special changes are needed to work with Istio.</p>
<p>Usage</p>
<p>Istio Gateway
cert-manager can be used to write a secret to Kubernetes, which can then be referenced by a Gateway. To get started, configure a Certificate resource, following the <a href="https://cert-manager.io/docs/usage/certificate/" rel="nofollow noreferrer">cert-manager documentation</a>. The Certificate should be created in the same namespace as the istio-ingressgateway deployment. For example, a Certificate may look like:</p>
</blockquote>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: ingress-cert
namespace: istio-system
spec:
secretName: ingress-cert
commonName: my.example.com
dnsNames:
- my.example.com
...
</code></pre>
<blockquote>
<p>Once we have the certificate created, we should see the secret created in the istio-system namespace. This can then be referenced in the tls config for a Gateway under credentialName:</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: ingress-cert # This should match the Certifcate secretName
hosts:
- my.example.com # This should match a DNS name in the Certificate
</code></pre>
<blockquote>
<p><a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a></p>
<p>cert-manager provides direct integration with Kubernetes Ingress by configuring an <a href="https://cert-manager.io/docs/usage/ingress/" rel="nofollow noreferrer">annotation on the Ingress object</a>. If this method is used, the Ingress must reside in the same namespace as the istio-ingressgateway deployment, as secrets will only be read within the same namespace.</p>
<p>Alternatively, a Certificate can be created as described in <a href="https://istio.io/latest/docs/ops/integrations/certmanager/#istio-gateway" rel="nofollow noreferrer">Istio Gateway</a>, then referenced in the Ingress object:</p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- host: my.example.com
http: ...
tls:
- hosts:
- my.example.com # This should match a DNS name in the Certificate
secretName: ingress-cert # This should match the Certifcate secretName
</code></pre>
<hr />
<p>Additionally, there is a full <a href="https://discuss.istio.io/t/problems-with-istio-ingress-and-cert-manager/5241/14" rel="nofollow noreferrer">reproduction</a> made by @chrisnyc with cert-manager and Let's Encrypt on istio discuss, which, as @YYashwanth mentioned in the comments, solved his problem. So if you have a similar issue, take a look at the above reproduction.</p>
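<p>For illustration, a hedged sketch of a <code>ClusterIssuer</code> using the Route53 DNS-01 solver, as described in your setup (the email, region, key ID and secret names are placeholders, and the apiVersion assumes cert-manager 0.15; double-check against the cert-manager docs for your exact version):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - dns01:
        route53:
          region: us-east-1         # placeholder
          accessKeyID: AKIAXXXXXXXX # placeholder; an IAM role can be used instead
          secretAccessKeySecretRef:
            name: route53-credentials
            key: secret-access-key
</code></pre>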
| Jakub |
<p>My Test Environment Cluster has the following configurations :</p>
<p>Global Mesh Policy (Installed as part of cluster setup by our org) : output of <code>kubectl describe MeshPolicy default</code></p>
<pre><code>Name: default
Namespace:
Labels: operator.istio.io/component=Pilot
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.5.6
release=istio
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version: authentication.istio.io/v1alpha1
Kind: MeshPolicy
Metadata:
Creation Timestamp: 2020-07-23T17:41:55Z
Generation: 1
Resource Version: 1088966
Self Link: /apis/authentication.istio.io/v1alpha1/meshpolicies/default
UID: d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
Peers:
Mtls:
Events: <none>
</code></pre>
<p>The above configuration I believe enables services to receive connections in mTls mode.</p>
<p>DestinationRule : Output of <code>kubectl describe DestinationRule commerce-mesh-port -n istio-system</code></p>
<pre><code>Name: commerce-mesh-port
Namespace: istio-system
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version: networking.istio.io/v1beta1
Kind: DestinationRule
Metadata:
Creation Timestamp: 2020-07-23T17:41:59Z
Generation: 1
Resource Version: 33879
Self Link: /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
UID: 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
Host: *
Ports:
Name: commerce-mesh-port
Number: 16443
Protocol: TLS
Traffic Policy:
Tls:
Mode: ISTIO_MUTUAL
Events: <none>
</code></pre>
<p>Istio Ingress-Gateway :</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: finrpt-gateway
namespace: finrpt
spec:
selector:
istio: ingressgateway # use Istio's default ingress gateway
servers:
- port:
name: https
number: 443
protocol: https
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "*"
- port:
name: http
number: 80
protocol: http
tls:
httpsRedirect: true
hosts:
- "*"
</code></pre>
<p>I created a secret to be used for TLS and using that to terminate the TLS traffic at the gateway (as configured in mode SIMPLE)</p>
<p>Next, I configured my VirtualService in the same namespace and did a URL match for HTTP :</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: finrpt-virtualservice
namespace: finrpt
spec:
hosts:
- "*"
gateways:
- finrpt-gateway
http:
- match:
- queryParams:
target:
exact: "commercialprocessor"
ignoreUriCase: true
route:
- destination:
host: finrpt-commercialprocessor
port:
number: 8118
</code></pre>
<p>The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.</p>
<p>With the above setting in place, when I browse to the External IP of my Ingress-Gateway, first I get a certificate error (expected as I am using self-signed for testing) and then on proceeding I get HTTP Error 503.</p>
<p>I am not able to find any useful logs in the gateway. I am wondering if the gateway is unable to pass the traffic on in plaintext after TLS termination because it expects https while I have configured it as http?
Any help is highly appreciated; I am very new to Istio and I think I might be missing something naive here.</p>
<p>My expectation is: I should be able to hit the Gateway with https, the gateway does the termination and forwards the unencrypted traffic to the destination configured in the VirtualService on the HTTP port, based on the URL match only (I have to keep the URL match part constant here).</p>
| Jim | <p>As 503 errors occur often and it's hard to find the cause, I've put together a little troubleshooting answer: other questions with 503 errors (with answers) that I have encountered over several months, useful information from the istio documentation, and things I would check.</p>
<p>Examples with 503 error:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/58509666/istio-503s-between-public-gateway-and-service">Istio 503:s between (Public) Gateway and Service</a></li>
<li><a href="https://stackoverflow.com/questions/59174478/istio-egress-gateway-gives-http-503-error">IstIO egress gateway gives HTTP 503 error</a></li>
<li><a href="https://stackoverflow.com/questions/60074732/istio-ingress-gateway-with-tls-termination-returning-503-service-unavailable">Istio Ingress Gateway with TLS termination returning 503 service unavailable</a></li>
<li><a href="https://stackoverflow.com/questions/59560394/how-to-terminate-ssl-at-ingress-gateway-in-istio">how to terminate ssl at ingress-gateway in istio?</a></li>
<li><a href="https://stackoverflow.com/questions/54160215/accessing-service-using-istio-ingress-gives-503-error-when-mtls-is-enabled?rq=1">Accessing service using istio ingress gives 503 error when mTLS is enabled</a></li>
</ul>
<p>Common cause of 503 errors from istio documentation:</p>
<ul>
<li><a href="https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes" rel="nofollow noreferrer">https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes</a></li>
<li><a href="https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule" rel="nofollow noreferrer">https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule</a></li>
<li><a href="https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications" rel="nofollow noreferrer">https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications</a></li>
</ul>
<p>Few things I would check first:</p>
<ul>
<li>Check the services' port names: Istio can route the traffic correctly only if it knows the protocol. The name should be <code><protocol>[-<suffix>]</code> as mentioned in the <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#manual-protocol-selection" rel="nofollow noreferrer">istio documentation</a> (see the example Service after this list).</li>
<li>Check mTLS; if there are any problems caused by mTLS, they usually result in a 503 error.</li>
<li>Check if istio works, I would recommend to apply <a href="https://istio.io/latest/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo application</a> example and check if it works as expected.</li>
<li>Check if your namespace is <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/" rel="nofollow noreferrer">injected</a> with <code>kubectl get namespace -L istio-injection</code></li>
<li>If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.</li>
</ul>
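<p>For illustration, a minimal sketch of a Service with a correctly named port for the protocol-selection check above (the selector here is an assumption; the service name and port 8118 are taken from your question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
  namespace: finrpt
spec:
  selector:
    app: commercialprocessor        # assumed label, adjust to your deployment
  ports:
  - name: http-commercialprocessor  # "<protocol>[-<suffix>]" so Istio treats it as plain HTTP
    protocol: TCP
    port: 8118
    targetPort: 8118
</code></pre>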
<hr />
<p>Hope you find this useful.</p>
| Jakub |
<p>First, let me show the kubernetes entities from a namespace called "kong":</p>
<pre><code>[projadmin@VOFDGSTP1 ~]$ kubectl get all -n kong
NAME READY STATUS RESTARTS AGE
pod/ingress-kong-5d997d864-wsmsw 2/2 Running 2 13d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-proxy LoadBalancer 10.100.200.3 <pending> 80:31180/TCP,443:31315/TCP 13d
service/kong-validation-webhook ClusterIP 10.100.200.175 <none> 443/TCP 13d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-kong 1/1 1 1 13d
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-kong-5d997d864 1 1 1 13d
</code></pre>
<p>When I try to reach the IPs above with curl, I get a connection timeout error.</p>
<pre><code>[projadmin@VOFDGSTP1 ~]$ curl -i 10.100.200.175
curl: (7) Failed connect to 10.100.200.175:80; Connection timed out
[projadmin@VOFDGSTP1 ~]$ curl -i 10.100.200.176
curl: (7) Failed connect to 10.100.200.176:80; Connection timed out
[projadmin@VOFDGSTP1 ~]$ curl -i 10.100.200.3
curl: (7) Failed connect to 10.100.200.3:80; Connection timed out
</code></pre>
| Ashish Jain | <p>From the information you shared, I suppose you are trying to run the command from outside the cluster.</p>
<p>If you are doing this, it will not work, because you can't reach <code>ClusterIP</code> services from outside the cluster.</p>
<blockquote>
<p><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</p>
</blockquote>
<p>To check if the server you are connected to is part of the cluster, type <code>kubectl get nodes -owide</code> and try to find its IP in the list.</p>
<p>I see your service <code>service/kong-proxy</code> has <code>EXTERNAL-IP: <pending></code>; this probably occurs because you are using a bare metal installation of Kubernetes. In this case you need to use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> to make your <code>LoadBalancer</code> configuration work.</p>
<p>An alternative way to test your service is to use <code>kubectl port-forward</code>; this will map your service to localhost and you can access it at <a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a>. Example:</p>
<p><code>kubectl port-forward svc/kong-proxy -n kong 8080:80</code></p>
<p>This command will map your service on port 8080 of your localhost.</p>
<p><strong>References:</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Services types</a></p>
<p><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a></p>
<p><a href="https://kubectl.docs.kubernetes.io/pages/container_debugging/port_forward_to_pods.html" rel="nofollow noreferrer">port-forward</a></p>
| Mr.KoopaKiller |
<p>I want to check whether the insecure port is enabled or not in my Azure Kubernetes cluster. How can I do this?</p>
| srinivasa mahendrakar | <p>As far as I checked <a href="https://github.com/Azure/AKS/issues/1724" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>AKS clusters have the API insecure port disabled by default (--insecure-port=0)</p>
</blockquote>
<p>You should be able to confirm that by checking the <code>--insecure-port</code> value with</p>
<p><code>sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml</code></p>
<p>Hope this answers your question; let me know if you have any further questions.</p>
| Jakub |
<p>I am trying to setup CI using Azure DevOps and CD using GitOps for my AKS cluster. When CI completes the image is pushed to Azure Container Registry. My issue is the name of the image in my yaml file is :latest. When I push the image to container registry, Flux CD is not able to determine if there are any changes to the image or not because the name of the image remains same. I tried to look up the issue in github and came up with the below link:
<a href="https://github.com/GoogleCloudPlatform/cloud-builders/issues/22#issuecomment-316181326" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/cloud-builders/issues/22#issuecomment-316181326</a>
But I dont know how to implement it. Can someone please help me?</p>
| Sormita Chakraborty | <p>We had a similar issue and fixed it by adding a checksum annotation to the deployment file with a unique value generator. It works like this for us:</p>
<p>Generate Helm Template -> Deployment manifest is created with a unique checksum -> Trigger deployment.</p>
<p>We had RollingUpdate enabled in our manifest, which eliminated downtime for the application. Below is our helm template config
(deployment.yaml):</p>
<pre class="lang-yaml prettyprint-override"><code> template:
metadata:
labels:
app: {{ .Values.appName }}
annotations:
checksum/commonconfig: {{ .Values.CommonConfig | toJson | sha256sum | trunc 63 }}
checksum/podconfig: {{ .Values.PodConfig | toJson | sha256sum | trunc 63 }}
</code></pre>
<p>We have this in the helm chart, which will generate a unique value in the deployment manifest. This makes the deployment happen every time, even when the :latest tag of the image is unchanged. Also, set the imagePullPolicy to Always.</p>
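<p>For illustration, a minimal sketch of the related deployment fragments mentioned above: a RollingUpdate strategy plus <code>imagePullPolicy: Always</code> (the container name and image are placeholders, not our actual chart values):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep old pods serving until the new ones are ready
      maxSurge: 1
  template:
    spec:
      containers:
      - name: app                                        # placeholder
        image: myregistry.azurecr.io/app:latest          # placeholder image
        imagePullPolicy: Always  # always re-pull :latest when the pod is recreated
</code></pre>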
| Vamshi Siddarth |
<p>I have configured nginx-ingress-controller in kubernetes and I am trying to achieve method based routing from kubernetes.</p>
<p>This is my ingress.yaml file below:</p>
<pre><code>kind: Ingress
metadata:
name: cafe-ingress-with-annotations
annotations:
kubernetes.io/ingress.class: "nginx"
#nginx.ingress.kubernetes.io/use-regex: "true"
#nginx.ingress.kubernetes.io/app-root: /
#nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
location /tea {
limit_except GET {
deny all;
}
}
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
<p>According to the annotation defined, it is supposed to block all other methods like POST/DELETE etc. except the GET method, but it is not denying any of them. Please help me: how can I achieve method-based routing using limit_except in nginx?
Thanks in advance.</p>
| bunny | <p>As you can read <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">here</a>, the <code>configuration-snippet</code> annotation is used for adding additional configuration to the NGINX location block.</p>
<p>If you want to add <strong>custom locations</strong> in the <code>server</code> block, you need to use <code>server-snippet</code> annotation. As you can read <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Using the annotation <code>nginx.ingress.kubernetes.io/server-snippet</code> it is
possible to add custom configuration in the server configuration
block.</p>
</blockquote>
<p>The following <code>Ingress</code> manifest should work:</p>
<pre><code>kind: Ingress
metadata:
name: cafe-ingress-with-annotations
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/server-snippet: |
location /tea {
limit_except GET {
deny all;
}
}
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
| mario |
<p>I am installing minikube again on my Windows machine (I did it a couple of years ago but hadn't used it in over a year) and the installation of the most recent kubectl and minikube went well. That is, up until I tried to start minikube with:</p>
<pre><code>minikube start --vm-driver=virtualbox
</code></pre>
<p>Which gives the error:</p>
<pre><code>C:\>minikube start --vm-driver=virtualbox
* minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [])
! Specified Kubernetes version 1.10.0 is less than the oldest supported version: v1.11.10
X Sorry, Kubernetes 1.10.0 is not supported by this release of minikube
</code></pre>
<p>Which doesn't make sense since my <code>kubectl version --client</code> gives back the version of v1.17.0:</p>
<p><code>C:\>kubectl version --client</code></p>
<p><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}</code></p>
<p>I did find that, for some reason, when I use the kubectl.exe that was downloaded to the kubectl folder in my <code>program files(x86)</code> (which my existing environment variable points to), it reports version v1.14.3. But when I copied that same file and pasted it into the root of the C drive, it reported version v1.17.0.</p>
<p>I am assuming that is just because having it at the root works like adding it to the environment variables, but that would mean something still has an old v1.14.3 kubectl file, even though there aren't any other kubectl files in there.</p>
<p>So basically, I am not sure if there is something that needs to be set in minikube (I haven't seen a reference to it in the documentation), but somehow minikube is detecting an old kubectl that I need to get rid of.</p>
| Jicaar | <p>Since you already had the minikube installed before and update the installation, the best thing to do is execute <code>minikube delete</code> to clean up all previous configuration.</p>
<blockquote>
<p>The <code>minikube delete</code> command can be used to delete your cluster. This command shuts down and deletes the Minikube Virtual Machine. No data or state is preserved.</p>
</blockquote>
<p>After that, execute <code>minikube start --vm-driver=virtualbox</code> and wait for the cluster to come up.</p>
<p><strong>References:</strong></p>
<p><a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#deleting-a-cluster" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#deleting-a-cluster</a></p>
| Mr.KoopaKiller |
<p>In my Kubernetes cluster I want to define a StatefulSet using a local persistence volume on each node. My Kubernetes cluster has worker nodes.</p>
<ul>
<li>worker-node-1</li>
<li>worker-node-2</li>
<li>worker-node-3</li>
</ul>
<p>My <code>StatefulSet</code> looks something like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myset
spec:
replicas: 3
...
template:
spec:
....
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myset
topologyKey: kubernetes.io/hostname
containers:
....
volumeMounts:
- name: datadir
mountPath: /data
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "local-storage"
resources:
requests:
storage: 10Gi
</code></pre>
<p>I want to achieve that each POD, running on a separate node, uses a local data volume.</p>
<p>I defined a <code>StorageClass</code> object:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>and the following <code>PersistentVolume</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: datadir
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /var/lib/my-data/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-node-1
</code></pre>
<p>But of course, this did not work, as I defined a <code>nodeAffinity</code> with only the hostname of my first worker-node-1. As a result I can see only one PV. The PVC and the POD on the corresponding node start as expected, but on the other two nodes I have no PVs. How can I define that a local <code>PersistentVolume</code> is created for each worker node?</p>
<p>I also tried to define a nodeAffinity with 3 values:</p>
<pre><code> nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-node-1
- worker-node-2
- worker-node-3
</code></pre>
<p>But this also did not work.</p>
| Ralph | <blockquote>
<p><em>I fear that the <code>PersitenceVolume</code> I define is the problem. This object
will create exactly one <code>PV</code> and so only one of my PODs finds the
corresponding <code>PV</code> and can be scheduled.</em></p>
</blockquote>
<p>Yes, you're right. By creating <code>PersistentVolume</code> object, you create exactly <strong>one</strong> <code>PersistentVolume</code>. No more, no less. If you define 3 separate <code>PVs</code> that can be available on each of your 3 nodes, you shouldn't experience any problem.</p>
<p>If you have, let's say, 3 worker nodes, you need to create 3 separate <code>PersistentVolumes</code>, each one with different <code>NodeAffinity</code>. <strong>You don't need to define any <code>NodeAffinity</code> in your <code>StatefulSet</code> as it is already handled on <code>PersistentVolume</code> level and should be defined only there.</strong></p>
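<p>For illustration, a sketch of one of the two additional <code>PVs</code> you would need, following exactly the same pattern as the <code>PV</code> from your question but pinned to <code>worker-node-2</code> (and analogously for <code>worker-node-3</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-worker-node-2       # each PV needs its own unique name
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/lib/my-data/         # local path on that particular node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-2           # this PV can only be used on worker-node-2
</code></pre>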
<p>As you can read in the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local volume</a> documentation:</p>
<blockquote>
<p>Compared to <code>hostPath</code> volumes, <code>local</code> volumes are used in a durable and
portable manner without manually scheduling pods to nodes. The system
is aware of the volume's node constraints by looking at the node
affinity on the PersistentVolume.</p>
</blockquote>
<p><strong>Remember: PVC -> PV mapping is always 1:1. You cannot bind 1 PVC to 3 different PVs or the other way.</strong></p>
<blockquote>
<p><em>So my only solution is to switch form local PV to hostPath volumes which is working fine.</em></p>
</blockquote>
<p>Yes, it can be done with <code>hostpath</code> but I wouldn't say it is the only and the best solution. Local volumes have several advantages over hostpath volumes and it is worth considering choosing them. But as I mentioned above, in your use case you need to create 3 separate <code>PVs</code> manually. You already created one <code>PV</code> so it shouldn't be a big deal to create another two. This is the way to go.</p>
<blockquote>
<p><em>I want each pod, running on a separate node, to use a local data volume.</em></p>
</blockquote>
<p>It can be achieved with local volumes but in such case instead of using a single PVC in your <code>StatefulSet</code> definition as in the below fragment from your configuration:</p>
<pre><code> volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
</code></pre>
<p>you need to use only <code>volumeClaimTemplates</code> as in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="nofollow noreferrer">this</a> example, which may look as follows:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi
</code></pre>
<p>As you can see, the <code>PVCs</code> won't "look" for a <code>PV</code> with any particular name so you can name them as you wish. They will "look" for a <code>PV</code> belonging to a particular <code>StorageClass</code> and in this particular case supporting <code>"ReadWriteOnce"</code> <code>accessMode</code>.</p>
<p>The scheduler will attempt to find the adequate node, on which your stateful pod can be scheduled. If another pod was already scheduled, let's say, on <code>worker-1</code> and the only <code>PV</code> belonging to our <code>local-storage</code> storage class isn't available any more, the scheduler will try to find another node that meets storage requirements. So again: no need for node affinity/ pod antiaffinity rules in your <code>StatefulSet</code> definition.</p>
<blockquote>
<p><em>But I need some mechanism that a PV is created for each node and
assigned with the PODs created by the StatefulSet. But this did not
work - I always have only one PV.</em></p>
</blockquote>
<p>In order to facilitate the management of volumes and automate the whole process to a certain extent, take a look at the <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner" rel="nofollow noreferrer">Local Persistent Volume Static Provisioner</a>. As its name may already suggest, it doesn't support dynamic provisioning (as we have e.g. on various cloud platforms), which means you are still responsible for creating the underlying storage, but the whole volume lifecycle can be handled automatically.</p>
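<p>For illustration only, its configuration is a ConfigMap that maps a storage class to a discovery directory on each node; the snippet below is a rough sketch (the directory <code>/mnt/local-disks</code> and the ConfigMap name are assumptions, and the exact format may differ between releases, so check the project's own examples):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config   # assumed name for this sketch
  namespace: kube-system
data:
  storageClassMap: |
    local-storage:                 # should match your StorageClass name
      hostDir: /mnt/local-disks    # assumed directory on each node containing the disks/mounts to expose
      mountDir: /mnt/local-disks   # where the provisioner pod sees hostDir
</code></pre>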
<p>To make this whole theoretical explanation somewhat more practical, I'm adding below a working example, which you can quickly test for yourself. Make sure the <code>/var/tmp/test</code> directory is created on every node, or adjust the below examples to your needs:</p>
<p><code>StatefulSet</code> components (slightly modified example from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="nofollow noreferrer">here</a>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 1Gi
</code></pre>
<p><code>StorageClass</code> definition:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>And finally a <code>PV</code>. You need to make 3 versions of the below yaml manifest by setting different names, e.g. <code>example-pv-1</code>, <code>example-pv-2</code> and <code>example-pv-3</code>, and the corresponding node names.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv-1 ### 👈 change it
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /var/tmp/test ### 👈 you can adjust shared directory on the node
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-node-1 ### 👈 change this value by setting your node name
</code></pre>
<p>So 3 different <code>PVs</code> for 3 worker nodes.</p>
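<p>If writing three nearly identical manifests feels tedious, a small shell loop over a template can generate them. This is only a sketch and assumes a hypothetical <code>pv-template.yaml</code> containing <code>PV_NAME</code> and <code>NODE_NAME</code> placeholders:</p>
<pre><code># generate and apply one PV per worker node from a template (pv-template.yaml is an assumed file)
for i in 1 2 3; do
  sed -e "s/PV_NAME/example-pv-$i/" \
      -e "s/NODE_NAME/worker-node-$i/" pv-template.yaml | kubectl apply -f -
done
</code></pre>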
| mario |
<p>I have several AWS EC2 instances, and on them I have a Rancher instance deployed. On Rancher, I've deployed a website using Kubernetes, and it is deployed using Istio to handle the networking, I am able to log in with <code>http://portal.website.com:31380</code>. I also have AWS Route 53 to get the URL working and nginx for a load balancer across the EC2 instances.</p>
<p>But I want to be able to login with just <code>http://portal.website.com</code>, so removing the port. Is there a way for me to do this?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: portal-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ingress
spec:
hosts:
- "*"
gateways:
- portal-gateway
http:
- match:
- uri:
prefix: "/"
rewrite:
uri: "/"
route:
- destination:
host: portal
port:
number: 80
websocketUpgrade: true
---
apiVersion: v1
kind: Service
metadata:
name: portal
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: portal
type: ClusterIP
</code></pre>
<p>Edit: I am accessing this on 31380, because it is setup to use a NodePort (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a>). The Istio docs say <code>If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.</code></p>
<p>Here is the output of <code>kubectl get svc istio-ingressgateway -n istio-system</code></p>
<pre><code>NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                      AGE
istio-ingressgateway   NodePort   10.43.200.101   <none>        15020:30051/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30419/TCP,15030:30306/TCP,15031:31130/TCP,15032:32720/TCP,15443:30361/TCP   3h27m
</code></pre>
| Mike K. | <p>As you mentioned, istio <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">documentation</a> say that</p>
<blockquote>
<p>If the EXTERNAL-IP value is <code><none></code> (or perpetually <code><pending></code>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.</p>
</blockquote>
<hr />
<p>If we take a look at kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">documentation</a> about NodePort</p>
<blockquote>
<p>If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.</p>
</blockquote>
<p>So if your ingress-gateway is NodePort, then you have to use <a href="http://portal.website.com:31380" rel="nofollow noreferrer">http://portal.website.com:31380</a>.</p>
<p>If you want to use <a href="http://portal.website.com" rel="nofollow noreferrer">http://portal.website.com</a>, you would have to change the service type to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>.</p>
<p>As @sachin mentioned, if you use a cloud like AWS, you can configure Istio with an AWS load balancer using the appropriate annotations.</p>
<blockquote>
<p>On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer</p>
</blockquote>
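<p>A quick way to try this is a generic <code>kubectl patch</code> on the existing gateway service (nothing here is specific to your setup; the AWS NLB annotation in the second command is optional):</p>
<pre><code># switch the ingress gateway service from NodePort to LoadBalancer
kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec": {"type": "LoadBalancer"}}'

# optionally, on AWS, request a Network Load Balancer via the well-known annotation
kubectl -n istio-system annotate svc istio-ingressgateway \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb
</code></pre>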
<p>I see you use AWS, so you can read more about it in the links below:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html</a></li>
<li><a href="https://istio.io/latest/blog/2018/aws-nlb/" rel="nofollow noreferrer">https://istio.io/latest/blog/2018/aws-nlb/</a></li>
</ul>
<hr />
<p>If it's on-premises, then you could take a look at <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p>
<blockquote>
<p>MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.</p>
</blockquote>
<blockquote>
<p>Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.</p>
</blockquote>
<blockquote>
<p>Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.</p>
</blockquote>
<blockquote>
<p>MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.</p>
</blockquote>
<p>You can read more about it in the link below:</p>
<ul>
<li><a href="https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b" rel="nofollow noreferrer">https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b</a></li>
</ul>
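<p>As a sketch only: older MetalLB releases are configured through a ConfigMap like the one below (the address range is an assumption and must be a free range on your own network; newer releases use CRDs instead, so check the MetalLB docs for your version):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed unused range on your LAN
</code></pre>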
| Jakub |
<p>I tried to kill one python process:</p>
<pre><code># ps aux | grep python
root 1 12.6 2.1 2234740 1332316 ? Ssl 20:04 19:36 /usr/bin/python3 /batch/run.py
root 490 0.0 0.0 11472 1012 pts/0 S+ 22:39 0:00 grep --color=auto python
# sudo kill -9 1
# ps aux | grep python
root 1 12.6 2.1 2234740 1333372 ? Ssl 20:04 19:38 /usr/bin/python3 /batch/run.py
root 494 0.0 0.0 11472 1088 pts/0 S+ 22:39 0:00 grep --color=auto python
</code></pre>
<p>Any idea why? Thanks.
Any other information needed to debug this?</p>
<p><strong>UPDATE</strong></p>
<p>In fact, I would not like to kill the container or pod. I modified the Python code in <code>/usr/local/lib/python3.6/dist-packages/</code> directly. If the pod restarts, my changes will be gone.</p>
<p><strong>I need to modify third-party code (not my own code) in the containers and see the results directly.</strong></p>
<p>Updating my own code and redeploying the Docker image is, in fact, not my first choice; otherwise, why would I be asking here?</p>
<p><strong>Also, I am curious why it cannot be killed?</strong></p>
<p>Thanks</p>
| BAE | <p>As mentioned by @coderanger, container has the concept to be immutable.
What you trying to do isn't a good practice in Kubernetes/container environments.</p>
<p><strong>But...</strong></p>
<p>Sometimes a kind of <em>magic</em> is required keep the airplane in flight... There are some options that could help you:</p>
<p><strong>1. Rebuild container image</strong></p>
<p>The best solution in this case is to rebuild your container image based on the currently running image. You could run this image separately from your main workload to test the changes.
This is the preferred approach, because the changes are persisted in the image and you keep a history for rolling updates.</p>
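<p>If you just need a quick, throwaway image that captures your live edits, <code>docker commit</code> can snapshot the running container; the container ID, image name and registry below are placeholders:</p>
<pre><code># snapshot the running container (with your edited files) as a new image
docker commit <container-id> my-registry/run-py:patched
# push it and point your Deployment/Pod spec at the new tag
docker push my-registry/run-py:patched
</code></pre>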
<p><strong>2. Workaround to kill the pid</strong></p>
<p><em>I've tested in a container running flask with supervisord.</em></p>
<p>You could use the <code>SIGHUP</code> signal to restart the process inside your container:</p>
<blockquote>
<p><strong>SIGHUP</strong> - The SIGHUP signal disconnects a process from the parent process. This can also be used to restart processes. For example, "killall -SIGHUP compiz" will restart Compiz. This is useful for daemons with memory leaks.
...
<strong>SIGHUP</strong> P1990 Term Hangup detected on controlling terminal
or death of controlling process</p>
</blockquote>
<p>Inside your container, run:</p>
<p><code>kill -SIGHUP <PID></code> or <code>kill -1 <PID></code></p>
<p><strong>Sources:</strong></p>
<ul>
<li><a href="http://man7.org/linux/man-pages/man7/signal.7.html" rel="nofollow noreferrer">http://man7.org/linux/man-pages/man7/signal.7.html</a></li>
<li><a href="https://www.linux.org/threads/kill-signals-and-commands-revised.11625/" rel="nofollow noreferrer">https://www.linux.org/threads/kill-signals-and-commands-revised.11625/</a></li>
</ul>
| Mr.KoopaKiller |
<p>I have two applications, nginx and redis, where nginx uses redis to cache some data so the redis address must be configured in nginx.</p>
<p>On the one hand, I could first apply the redis deployment and get its IP and then apply the nginx deployment to set up the two application in my <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">minikube</a>.</p>
<p>But on the other, to simplify <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-containerized-applications" rel="nofollow noreferrer">installation</a> in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Kubernetes Dashboard</a> for QA, I want to create a single Kubernetes YAML file (like <a href="https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/release/kubernetes-manifests.yaml" rel="nofollow noreferrer">GoogleCloudPlatform/microservices-demo/kubernetes-manifests.yaml</a>) to deploy these two applications on two diverse <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pod</a>s. However, if I do it by means of <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Environment Variables</a>, I cannot get the redis address.</p>
<p>So how do I achieve it?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-master
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master-c
image: docker.io/redis:alpine
ports:
- containerPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector: # Defines how the Deployment finds which Pods to manage.
matchLabels:
app: my-nginx
template:
metadata: # Defines what the newly created Pods are labeled.
labels:
app: my-nginx
tier: frontend
spec:
terminationGracePeriodSeconds: 5
containers:
- name: my-nginx # Defines container name
image: my-nginx:dev # docker image load -i my-nginx-docker_image.tar
imagePullPolicy: Never # Always, IfNotPresent (default), Never
ports:
env:
- name: NGINX_ERROR_LOG_SEVERITY_LEVEL
value: debug
- name: MY_APP_REDIS_HOST
# How to use the IP address of the POD with redis-master labeled that is created by the previous deployment?
value: 10.86.50.235
# https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
# valueFrom:
# fieldRef:
# fieldPath: status.podIP # this is the current POD IP
- name: MY_APP_CLIENT_ID
value: client_id
- name: MY_APP_CLIENT_SECRET
# https://kubernetes.io/docs/concepts/configuration/secret
value: client_secret
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
apiVersion: v1
kind: Service
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
# https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
# metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
metadata:
name: my-nginx
spec:
type: NodePort
selector:
# Defines a proper selector for your pods with corresponding `.metadata.labels` field.
# Verify it using: kubectl get pods --selector app=my-nginx || kubectl get pod -l app=my-nginx
# Make sure the service points to correct pod by, for example, `kubectl describe pod -l app=my-nginx`
app: my-nginx
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- name: http
port: 6080
targetPort: 80
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30080
- name: https
port: 6443
targetPort: 443
nodePort: 30443
</code></pre>
<p>Added some network output,</p>
<pre><code>
Microsoft Windows [Version 10.0.18362.900]
(c) 2019 Microsoft Corporation. All rights reserved.
PS C:\Users\ssfang> kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-pod 1/1 Running 9 5d14h
redis-master-7db899bccb-npl6s 1/1 Running 3 2d15h
redis-master-7db899bccb-rgx47 1/1 Running 3 2d15h
C:\Users\ssfang> kubectl exec redis-master-7db899bccb-npl6s -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
C:\Users\ssfang> kubectl exec my-nginx-pod -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
C:\Users\ssfang> kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.108.221.2 <none> 443/TCP 7d11h
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 7d17h
C:\Users\ssfang> kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.17.0.2:53,172.17.0.5:53,172.17.0.2:9153 + 3 more... 7d17h
C:\Users\ssfang> kubectl get ep kube-dns --namespace=kube-system -o=yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2020-07-09T02:08:35Z"
creationTimestamp: "2020-07-01T09:34:44Z"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: KubeDNS
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:endpoints.kubernetes.io/last-change-trigger-time: {}
f:labels:
.: {}
f:k8s-app: {}
f:kubernetes.io/cluster-service: {}
f:kubernetes.io/name: {}
f:subsets: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-09T02:08:35Z"
name: kube-dns
namespace: kube-system
resourceVersion: "523617"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-dns
subsets:
- addresses:
nodeName: minikube
targetRef:
kind: Pod
namespace: kube-system
resourceVersion: "523566"
uid: ed3a9f46-718a-477a-8804-e87511db16d1
- ip: 172.17.0.5
nodeName: minikube
targetRef:
kind: Pod
name: coredns-546565776c-hmm5s
namespace: kube-system
resourceVersion: "523616"
uid: ae21c65c-e937-4e3d-8a7a-636d4f780855
ports:
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
- name: dns
port: 53
protocol: UDP
C:\Users\ssfang> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d20h
my-nginx-service NodePort 10.98.82.96 <none> 6080:30080/TCP,6443:30443/TCP 7d13h
PS C:\Users\ssfang> kubectl describe pod/my-nginx-pod | findstr IP
IP: 172.17.0.8
IPs:
IP: 172.17.0.8
PS C:\Users\ssfang> kubectl describe service/my-nginx-service | findstr IP
IP: 10.98.82.96
C:\Users\ssfang> kubectl describe pod/my-nginx-65ffdfb5b5-dzgjk | findstr IP
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
</code></pre>
<p>Take two Pods with nginx for example to inspect network,</p>
<ol>
<li>C:\Users\ssfang> kubectl exec my-nginx-pod -it -- bash</li>
</ol>
<pre><code>
# How to install nslookup, dig, host commands in Linux
apt-get install dnsutils -y # In ubuntu
yum install bind-utils -y # In RHEL/Centos
root@my-nginx-pod:/etc# apt update && apt-get install -y dnsutils iputils-ping
root@my-nginx-pod:/etc# nslookup my-nginx-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-nginx-service.default.svc.cluster.local
Address: 10.98.82.96
root@my-nginx-pod:/etc# nslookup my-nginx-pod
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find my-nginx-pod: SERVFAIL
root@my-nginx-pod:/etc# ping -c3 -W60 my-nginx-pod
PING my-nginx-pod (172.17.0.8) 56(84) bytes of data.
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=1 ttl=64 time=0.011 ms
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=2 ttl=64 time=0.021 ms
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=3 ttl=64 time=0.020 ms
--- my-nginx-pod ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.011/0.017/0.021/0.005 ms
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-service
PING my-nginx-service.default.svc.cluster.local (10.98.82.96) 56(84) bytes of data.
--- my-nginx-service.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2060ms
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-pod.default.svc.cluster.local
ping: my-nginx-pod.default.svc.cluster.local: Name or service not known
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-service.default.svc.cluster.local
PING my-nginx-service.default.svc.cluster.local (10.98.82.96) 56(84) bytes of data.
--- my-nginx-service.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2051ms
</code></pre>
<ol start="2">
<li>C:\Users\ssfang> kubectl exec my-nginx-65ffdfb5b5-dzgjk -it -- bash</li>
</ol>
<pre><code>
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 my-nginx-pod.default.svc.cluster.local
ping: my-nginx-pod.default.svc.cluster.local: Name or service not known
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 my-nginx-service.default.svc.cluster.local
ping: my-nginx-service.default.svc.cluster.local: Name or service not known
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 172.17.0.8
PING 172.17.0.8 (172.17.0.8) 56(84) bytes of data.
64 bytes from 172.17.0.8: icmp_seq=1 ttl=64 time=0.195 ms
64 bytes from 172.17.0.8: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 172.17.0.8: icmp_seq=3 ttl=64 time=0.039 ms
--- 172.17.0.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.039/0.091/0.195/0.073 ms
</code></pre>
<ol start="3">
<li>C:\Users\ssfang> ssh -o StrictHostKeyChecking=no -i C:\Users\ssfang.minikube\machines\minikube\id_rsa [email protected] &:: minikube ssh</li>
</ol>
<pre><code>
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ping default.svc.cluster.local
ping: bad address 'default.svc.cluster.local'
$ ping my-nginx-pod.default.svc.cluster.local
ping: bad address 'my-nginx-pod.default.svc.cluster.local'
$ ping my-nginx-service.default.svc.cluster.local
ping: bad address 'my-nginx-service.default.svc.cluster.local'
$ nslookup whoami
Server: 10.86.50.1
Address: 10.86.50.1:53
** server can't find whoami: NXDOMAIN
** server can't find whoami: NXDOMAIN
$ ping -c3 -W20 172.17.0.8
PING 172.17.0.8 (172.17.0.8): 56 data bytes
64 bytes from 172.17.0.8: seq=0 ttl=64 time=0.053 ms
64 bytes from 172.17.0.8: seq=1 ttl=64 time=0.035 ms
64 bytes from 172.17.0.8: seq=2 ttl=64 time=0.040 ms
--- 172.17.0.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.035/0.042/0.053 ms
$ ping -c3 -W20 172.17.0.4
PING 172.17.0.4 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.17.0.4: seq=1 ttl=64 time=0.039 ms
64 bytes from 172.17.0.4: seq=2 ttl=64 time=0.038 ms
--- 172.17.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.038/0.049/0.070 ms
</code></pre>
| samm | <p>Hardcoding IP-address is not a good practice. Instead you can create a service for redis as well and configure the service dns name in your nginx deployment using the kubernetes dns config like this <code>my-svc.my-namespace.svc.cluster-domain.example</code>. Your nginx will then communicate to the redis container through this service.</p>
| Vamshi Siddarth |
<p>I have provisioned NFS over DigitalOcean block storage to get the ReadWriteMany access mode. I am now able to share a PV between deployments, but I am unable to share it within a deployment when I have multiple mount paths with the same claim name. Can someone kindly comment on why this is happening and whether this is the right way to use a PV? If NFS doesn't support this, what else can I use to share volumes between pods with multiple mount paths?</p>
<p><strong>Manifest</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 18Gi
storageClassName: nfs
</code></pre>
<hr />
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
spec:
replicas: 1
selector:
matchLabels:
app: web
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: web
spec:
containers:
- image: nginx:latest
name: nginx
resources: {}
volumeMounts:
- mountPath: /data
name: data
- mountPath: /beta
name: beta
volumes:
- name: data
persistentVolumeClaim:
claimName: nfs-data
- name: beta
persistentVolumeClaim:
claimName: nfs-data
</code></pre>
<blockquote>
<p>PV DESCRIPTION</p>
</blockquote>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-nfs-server-nfs-server-provisioner-0 Bound pvc-442af801-0b76-444d-afea-382a12380926 20Gi RWO do-block-storage 24h
nfs-data Bound pvc-0ae84fe2-025b-450d-8973-b74c80275cb7 18Gi RWX nfs 1h
Name: nfs-data
Namespace: default
StorageClass: nfs
Status: Bound
Volume: pvc-0ae84fe2-025b-450d-8973-b74c80275cb7
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-server-nfs-server-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 18Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 2m16s (x2 over 2m16s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "cluster.local/nfs-server-nfs-server-provisioner" or manually created by system administrator
Normal Provisioning 2m16s cluster.local/nfs-server-nfs-server-provisioner_nfs-server-nfs-server-provisioner-0_8dd7b303-b9a1-4a07-8c6b-906b81c07402 External provisioner is provisioning volume for claim "default/nfs-data"
Normal ProvisioningSucceeded 2m16s cluster.local/nfs-server-nfs-server-provisioner_nfs-server-nfs-server-provisioner-0_8dd7b303-b9a1-4a07-8c6b-906b81c07402 Successfully provisioned volume pvc-0ae84fe2-025b-450d-8973-b74c80275cb7
</code></pre>
<blockquote>
<p>ERROR</p>
</blockquote>
<pre><code>Name: web-85f9fbf54-hfcvn
Namespace: default
Priority: 0
Node: pool-db4v93z2h-3yg9e/10.132.113.175
Start Time: Thu, 25 Jun 2020 19:25:40 +0500
Labels: app=web
pod-template-hash=85f9fbf54
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/web-85f9fbf54
Containers:
nginx:
Container ID:
Image: nginx:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/beta from beta (rw)
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pdsgk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-data
ReadOnly: false
beta:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-data
ReadOnly: false
default-token-pdsgk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pdsgk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/web-85f9fbf54-hfcvn to pool-db4v93z2h-3yg9e
Warning FailedMount 22s kubelet, pool-db4v93z2h-3yg9e Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-pdsgk data beta]: timed out waiting for the condition
</code></pre>
| Talha Latif | <p>As I mentioned in comments you could try to use subPath, take a look at <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">kubernetes</a> and <a href="https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-volumes.html#nodes-containers-volumes-subpath_nodes-containers-volumes" rel="nofollow noreferrer">openshift</a> documentation about it.</p>
<blockquote>
<p>Sometimes, it is useful to share one volume for multiple uses in a single Pod. The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.</p>
<p>Here is an example of a Pod with a LAMP stack (Linux Apache Mysql PHP) using a single, shared volume. The HTML contents are mapped to its html folder, and the databases will be stored in its mysql folder:</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data
subPath: mysql
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
subPath: html
volumes:
- name: site-data
persistentVolumeClaim:
claimName: my-lamp-site-data
</code></pre>
<blockquote>
<p>Databases are stored in the <strong>mysql</strong> folder.</p>
<p>HTML content is stored in the <strong>html</strong> folder.</p>
</blockquote>
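<p>Applied to the deployment from your question, a sketch with a single <code>nfs-data</code> claim and two sub-directories might look like this:</p>
<pre><code>        volumeMounts:
        - mountPath: /data
          name: nfs
          subPath: data      # sub-directory "data" inside the NFS volume
        - mountPath: /beta
          name: nfs
          subPath: beta      # sub-directory "beta" inside the same volume
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-data
</code></pre>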
<hr />
<p>If that doesn't work for you, I would say you have to use a separate PVC for every mount path.</p>
<p>Like for example <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes#example" rel="nofollow noreferrer">here</a>.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nfs-web
spec:
volumes:
# List of volumes to use, i.e. *what* to mount
- name: myvolume
< volume details, see below >
- name: mysecondvolume
< volume details, see below >
containers:
- name: mycontainer
volumeMounts:
# List of mount directories, i.e. *where* to mount
# We want to mount 'myvolume' into /usr/share/nginx/html
- name: myvolume
mountPath: /usr/share/nginx/html/
# We want to mount 'mysecondvolume' into /var/log
- name: mysecondvolume
mountPath: /var/log/
</code></pre>
| Jakub |
<p>I'm currently running a k8s cluster; however, occasionally I get memory issues. The following error will pop up:</p>
<p><code>Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "<web app>": Error response from daemon: devmapper: Thin Pool has 6500 free data blocks which is less than minimum required 7781 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior</code></p>
<p>I can resolve this by manually running <code>docker ps -a -f status=exited -q | xargs -r docker rm -v</code></p>
<p>However, I want Kubernetes to do this work itself. Currently in my kubelet config I have:</p>
<pre><code>evictionHard:
imagefs.available: 15%
memory.available: "100Mi"
nodefs.available: 10%
nodefs.inodesFree: 5%
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
</code></pre>
<p>What am I doing wrong?</p>
| D_G | <p>Reading the error you've posted seems to me you are using "<a href="https://docs.docker.com/storage/storagedriver/device-mapper-driver/" rel="nofollow noreferrer">devicemapper</a>" as storage driver.</p>
<blockquote>
<p>The <code>devicemapper</code> storage driver is deprecated in Docker Engine 18.09, and will be removed in a future release. It is recommended that users of the <code>devicemapper</code> storage driver migrate to <code>overlay2</code>. </p>
</blockquote>
<p>I suggest you use "<strong>overlay2</strong>" as the storage driver, unless you are running an unsupported OS. See <a href="https://docs.docker.com/v17.09/engine/userguide/storagedriver/selectadriver/#docker-ce" rel="nofollow noreferrer">here</a> for the supported OS versions.</p>
<p>You can check your current storage driver using the <code>docker info</code> command; you will get output like this:</p>
<pre><code>Client:
Debug Mode: false
Server:
Containers: 21
Running: 18
Paused: 0
Stopped: 3
Images: 11
Server Version: 19.03.5
Storage Driver: devicemapper <<== See here
Pool Name: docker-8:1-7999625-pool
Pool Blocksize: 65.54kB
...
</code></pre>
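<p>If you only want the relevant lines, filtering the output may help (plain <code>docker info</code> plus <code>grep</code>):</p>
<pre><code># print the storage driver line plus the line after it (the backing filesystem)
docker info 2>/dev/null | grep -iA1 "storage driver"
</code></pre>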
<p>Supposing you want to change the storage driver from <code>devicemapper</code> to <code>overlay2</code>, you need to follow these steps:</p>
<p><strong><em>Changing the storage driver makes existing containers and images inaccessible on the local system. Use <code>docker save</code> to save any images you have built or push them to Docker Hub or a private registry before changing the storage driver, so that you do not need to re-create them later.</em></strong></p>
<p>Before following this procedure, you must first meet all the <a href="https://docs.docker.com/storage/storagedriver/overlayfs-driver/#prerequisites" rel="nofollow noreferrer">prerequisites</a>.</p>
<ol>
<li><p>Stop Docker.</p>
<pre><code>$ sudo systemctl stop docker
</code></pre></li>
<li><p>Copy the contents of <code>/var/lib/docker</code> to a temporary location.</p>
<pre><code>$ cp -au /var/lib/docker /var/lib/docker.bk
</code></pre></li>
<li><p>If you want to use a separate backing filesystem from the one used by <code>/var/lib/</code>, format the filesystem and mount it into <code>/var/lib/docker</code>. Make sure add this mount to <code>/etc/fstab</code> to make it permanent.</p></li>
<li><p>Edit <code>/etc/docker/daemon.json</code>. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.</p>
<pre><code>{
"storage-driver": "overlay2"
}
</code></pre>
<p>Docker does not start if the <code>daemon.json</code> file contains badly-formed JSON.</p></li>
<li><p>Start Docker.</p>
<pre><code>$ sudo systemctl start docker
</code></pre></li>
<li><p>Verify that the daemon is using the <code>overlay2</code> storage driver. Use the <code>docker info</code> command and look for <code>Storage Driver</code> and <code>Backing filesystem</code>.</p></li>
</ol>
<pre><code>Client:
Debug Mode: false
Server:
Containers: 35
Running: 15
Paused: 0
Stopped: 20
Images: 11
Server Version: 19.03.5
Storage Driver: overlay2 <=== HERE
Backing Filesystem: extfs <== HERE
Supports d_type: true
</code></pre>
<p>Extracted from <a href="https://docs.docker.com/storage/storagedriver/overlayfs-driver/" rel="nofollow noreferrer">Docker Documentation.</a></p>
| Mr.KoopaKiller |
<p>I have a stateful application deployed in a Kubernetes cluster. Now the challenge is how to scale down the cluster gracefully, so that each pod, while terminating (during scale-down), completes its pending tasks and then gracefully shuts down. The scenario is similar to what is explained below, but in my case the terminating pods will have a few in-flight tasks to be processed.</p>
<p><a href="https://medium.com/@marko.luksa/graceful-scaledown-of-stateful-apps-in-kubernetes-2205fc556ba9" rel="nofollow noreferrer">https://medium.com/@marko.luksa/graceful-scaledown-of-stateful-apps-in-kubernetes-2205fc556ba9</a> 1</p>
<p>Is there official feature support for this in the Kubernetes API?</p>
<pre><code>Kubernetes version: v1.11.0
Host OS: linux/amd64
CRI version: Docker 1.13.1
</code></pre>
<p><strong>UPDATE :</strong></p>
<p>Possible solution: while performing a StatefulSet scale-down, the preStop hook of the terminating pod(s) sends a notification to a queue with the metadata of the respective task(s) still to be completed. Afterwards, a Kubernetes Job picks up and completes those tasks. Please comment on whether this is a recommended approach from a Kubernetes perspective.</p>
<p>Thanks In Advance!</p>
<p>Regards,
Balu</p>
| Balu R | <p>Your pod will be scaled down only after the in-progress job is completed. You may additionally configure the lifecycle in the deployment manifest with <code>prestop</code> attribute which will gracefully stop your application. This is one of the best practices to follow. Please refer <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">this</a> for detailed explanation and syntax.</p>
<h2>Updated Answer</h2>
<p>This is the YAML I deployed locally; I then generated load to raise the CPU utilization and trigger the HPA.</p>
<p>Deployment.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: whoami
labels:
app: whoami
spec:
replicas: 1
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: containous/whoami
resources:
requests:
cpu: 30m
limits:
cpu: 40m
ports:
- name: web
containerPort: 80
lifecycle:
preStop:
exec:
command:
              - /bin/sh
              - -c
              - echo "Starting Sleep"; date; sleep 600; echo "Pod will be terminated now"
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: whoami
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: whoami
minReplicas: 1
maxReplicas: 3
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 40
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 10
---
apiVersion: v1
kind: Service
metadata:
name: whoami-service
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: whoami
</code></pre>
<p>Once the pod is deployed, execute the commands below to generate the load (the second command runs inside the busybox shell).</p>
<pre class="lang-sh prettyprint-override"><code># start an interactive busybox pod
kubectl run -i --tty load-generator --image=busybox /bin/sh
# inside the busybox shell, hammer the service to drive up CPU usage:
while true; do wget -q -O- http://whoami-service.default.svc.cluster.local; done
</code></pre>
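<p>While the load runs, you can watch the autoscaler and the pods react with standard kubectl commands:</p>
<pre class="lang-sh prettyprint-override"><code># in another terminal
kubectl get hpa whoami --watch
kubectl get pods -l app=whoami -w
</code></pre>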
<p>Once the replicas were created, I stopped the load, and the pods were terminated after 600 seconds. This scenario worked for me, and I believe it would be a similar case for a StatefulSet as well. Hope this helps.</p>
| Vamshi Siddarth |
<p>I have 3 VPSs based on Ubuntu 18.04 Server and would like to build a Kubernetes cluster.</p>
<p>I am going to use Kubespray to install the Kubernetes cluster, and my questions are:</p>
<ul>
<li>How do I prepare the servers so that they can talk to each other?</li>
<li>How do I set up the firewall so that communication between the VPSs is not blocked?</li>
<li>Is it possible to configure the firewall to open a particular port only for a particular IP address? For example, only the VPS with 1.11.111.110 is allowed to access port 10255 on VPS 1.11.111.111.</li>
<li>What else do I have to consider?</li>
</ul>
| softshipper | <blockquote>
<ul>
<li>How to prepare servers, that they can talk to each other</li>
</ul>
</blockquote>
<p>To prepare your Ubuntu instances for installing Kubernetes, you could check <a href="https://hostadvice.com/how-to/how-to-set-up-kubernetes-in-ubuntu/" rel="nofollow noreferrer">this</a> guide.
Use the kubespray <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/" rel="nofollow noreferrer">documentation</a> to create your cluster.</p>
<blockquote>
<ul>
<li>How to setup the firewall, that the communication between VPS do not block</li>
<li>Is it possible to configure the firewall, to open particular port for a particular IP address. For example, only a VPS with 1.11.111.110 is allowed to access the port 10255 on VPS 1.11.111.111.</li>
</ul>
</blockquote>
<p>You could use iptables to create specific rules on your nodes (see the sketch below), but I think it will be a challenge to manage these rules by hand... you can try, but it will be really hard to manage and maintain.</p>
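<p>For the concrete example from your question, rules of roughly this shape would do it (a sketch only; make them persistent with your distribution's tooling, e.g. iptables-persistent):</p>
<pre><code># on VPS 1.11.111.111: allow only 1.11.111.110 to reach port 10255, drop everyone else
iptables -A INPUT -p tcp -s 1.11.111.110 --dport 10255 -j ACCEPT
iptables -A INPUT -p tcp --dport 10255 -j DROP
</code></pre>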
<blockquote>
<p>What else do I have to consider?</p>
</blockquote>
<p>I recommend this setup for understanding how Kubernetes works and for test purposes, but it is not production-ready: the security of your nodes could be compromised, since you are exposing the Kubernetes ports over the internet.</p>
<p>You can also consider running Kubernetes on a single node; take a look at this <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">link.</a></p>
| Mr.KoopaKiller |