Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>How to set Node label to Pod environment variable? I need to know the label <code>topology.kubernetes.io/zone</code> value inside the pod.</p>
| Jonas | <p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Downward API</a> currently does not support exposing node labels to pods/containers. There is an <a href="https://github.com/kubernetes/kubernetes/issues/40610" rel="nofollow noreferrer">open issue</a> about that on GitHub, but it is unclear when, if at all, it will be implemented.</p>
<p>That leaves only one option: getting node labels from the Kubernetes API, just as <code>kubectl</code> does. It is not easy to implement, especially if you want the labels as environment variables. I'll give you an example of how it can be done with an <code>initContainer</code>, <code>curl</code>, and <code>jq</code>, but if possible I suggest you implement this in your application instead, as it will be easier and cleaner.</p>
<p>To make a request for labels you need permissions to do that. Therefore, the example below creates a service account with permissions to <code>get</code> (describe) nodes. Then, the script in the <code>initContainer</code> uses the service account to make a request and extract labels from <code>json</code>. The <code>test</code> container reads environment variables from the file and <code>echo</code>es one.</p>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code># Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: describe-nodes
namespace: <insert-namespace-name-where-the-app-is>
---
# Create a cluster role that allowed to perform describe ("get") over ["nodes"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: describe-nodes
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: describe-nodes
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: describe-nodes
subjects:
- kind: ServiceAccount
name: describe-nodes
namespace: <insert-namespace-name-where-the-app-is>
---
# Proof of concept pod
apiVersion: v1
kind: Pod
metadata:
name: get-node-labels
spec:
# Service account to get node labels from Kubernetes API
serviceAccountName: describe-nodes
# A volume to keep the extracted labels
volumes:
- name: node-info
emptyDir: {}
initContainers:
# The container that extracts the labels
- name: get-node-labels
# The image needs 'curl' and 'jq' apps in it
# I used curl image and run it as root to install 'jq'
# during runtime
# THIS IS A BAD PRACTICE UNSUITABLE FOR PRODUCTION
# Make an image where both present.
image: curlimages/curl
# Remove securityContext if you have an image with both curl and jq
securityContext:
runAsUser: 0
# It'll put labels here
volumeMounts:
- mountPath: /node
name: node-info
env:
# pass node name to the environment
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: APISERVER
value: https://kubernetes.default.svc
- name: SERVICEACCOUNT
value: /var/run/secrets/kubernetes.io/serviceaccount
- name: SCRIPT
value: |
set -eo pipefail
# install jq; you don't need this line if the image has it
apk add jq
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
# Get node labels into a json
curl --cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
-X GET ${APISERVER}/api/v1/nodes/${NODENAME} | jq .metadata.labels > /node/labels.json
# Extract 'topology.kubernetes.io/zone' from json
NODE_ZONE=$(jq '."topology.kubernetes.io/zone"' -r /node/labels.json)
# and save it into a file in the format suitable for sourcing
echo "export NODE_ZONE=${NODE_ZONE}" > /node/zone
command: ["/bin/ash", "-c"]
args:
- 'echo "$$SCRIPT" > /tmp/script && ash /tmp/script'
containers:
# A container that needs the label value
- name: test
image: debian:buster
command: ["/bin/bash", "-c"]
# source ENV variable from file, echo NODE_ZONE, and keep running doing nothing
args: ["source /node/zone && echo $$NODE_ZONE && cat /dev/stdout"]
volumeMounts:
- mountPath: /node
name: node-info
</code></pre>
| anemyte |
<p>I am new to working with Airflow and Kubernetes. I am trying to use Apache Airflow in Kubernetes.</p>
<p>To deploy it I used this chart: <a href="https://github.com/apache/airflow/tree/master/chart" rel="nofollow noreferrer">https://github.com/apache/airflow/tree/master/chart</a>.</p>
<p>I want to keep my DAGs in my GitHub repository, so:</p>
<pre class="lang-yml prettyprint-override"><code>gitSync:
enabled: true
# git repo clone url
# ssh examples ssh://git@github.com/apache/airflow.git
# git@github.com:apache/airflow.git
# https example: https://github.com/apache/airflow.git
repo: https://github.com/mygithubrepository.git
branch: master
rev: HEAD
root: "/git"
dest: "repo"
depth: 1
# the number of consecutive failures allowed before aborting
maxFailures: 0
# subpath within the repo where dags are located
# should be "" if dags are at repo root
subPath: ""
</code></pre>
<p>Then I see that to use a private GitHub repository I have to create a secret, as specified in the values.yaml file:</p>
<pre class="lang-yml prettyprint-override"><code># if your repo needs a user name password
# you can load them to a k8s secret like the one below
# ---
# apiVersion: v1
# kind: Secret
# metadata:
# name: git-credentials
# data:
# GIT_SYNC_USERNAME: <base64_encoded_git_username>
# GIT_SYNC_PASSWORD: <base64_encoded_git_password>
# and specify the name of the secret below
#credentialsSecret: git-credentials
</code></pre>
<p>I am creating the secret:</p>
<pre class="lang-yml prettyprint-override"><code>apiVersion: v1
data:
GIT_SYNC_USERNAME: bXluYW1l
GIT_SYNC_PASSWORD: bXl0b2tlbg==
kind: Secret
metadata:
name: git-credentials
namespace: default
</code></pre>
<p>Then I use the secret name in the values.yaml file:</p>
<pre class="lang-yml prettyprint-override"><code>repo: https://github.com/mygithubrepository.git
branch: master
rev: HEAD
root: "/git"
dest: "repo"
depth: 1
# the number of consecutive failures allowed before aborting
maxFailures: 0
# subpath within the repo where dags are located
# should be "" if dags are at repo root
subPath: ""
# if your repo needs a user name password
# you can load them to a k8s secret like the one below
# ---
# apiVersion: v1
# kind: Secret
# metadata:
# name: git-credentials
# data:
# GIT_SYNC_USERNAME: <base64_encoded_git_username>
# GIT_SYNC_PASSWORD: <base64_encoded_git_password>
# and specify the name of the secret below
credentialsSecret: git-credentials
</code></pre>
<p>but it does not seem to be working.</p>
| J.C Guzman | <p>I see that you are connecting to your github repo via <code>https</code>.</p>
<p>Try to use:</p>
<pre><code>ssh://[email protected]/mygithubrepository.git
</code></pre>
<p>or simply</p>
<pre><code>[email protected]/mygithubrepository.git
</code></pre>
<p>You can experience issues with connecting via <code>https</code> especially if you have two-factor authentication enabled on your github account. It's described more in detail in <a href="https://medium.com/@ginnyfahs/github-error-authentication-failed-from-command-line-3a545bfd0ca8" rel="nofollow noreferrer">this</a> article.</p>
<p>Also take a look at <a href="https://stackoverflow.com/users/6309/vonc">VonC's</a> <a href="https://stackoverflow.com/a/29297250/11714114">answer</a> where he mentions:</p>
<blockquote>
<p>As noted in <a href="https://stackoverflow.com/users/101662/oliver">Oliver</a>'s
<a href="https://stackoverflow.com/a/34919582/6309">answer</a>, an HTTPS URL
would not use username/password if <a href="https://help.github.com/en/github/authenticating-to-github/securing-your-account-with-two-factor-authentication-2fa" rel="nofollow noreferrer">two-factor authentication
(2FA)</a>
is activated.</p>
<p>In that case, the password should be a <a href="https://help.github.com/en/github/authenticating-to-github/accessing-github-using-two-factor-authentication#using-two-factor-authentication-with-the-command-line" rel="nofollow noreferrer">PAT (personal access
token)</a> as seen in "<a href="https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line#using-a-token-on-the-command-line" rel="nofollow noreferrer">Using a token on the command
line</a>".</p>
<p>That applies only for HTTPS URLS, SSH is not affected by this
limitation.</p>
</blockquote>
<p>As well as at <a href="https://stackoverflow.com/a/21027728/11714114">this one</a> provided by <a href="https://stackoverflow.com/users/3155236/rc0r">rc0r</a>.</p>
<p>But as I said, simply using <code>ssh</code> instead of <code>https</code> should resolve your problem easily.</p>
| mario |
<p>Few queries on GKE</p>
<ul>
<li>We have a few <code>GKE</code> clusters running on the <code>Default VPC</code>. Can we migrate these clusters to use a <code>SharedVPC</code>, or at least a <code>Custom VPC</code>? It seems existing clusters in default VPC mode cannot be changed to the <code>SharedVPC model</code> as per GCP documentation, but can we convert from the <code>default VPC</code> to a <code>Custom VPC</code>?</li>
<li>How to migrate from a <code>Custom VPC</code> to a <code>Shared VPC</code>? Is it done by creating a new cluster from the existing cluster, selecting <code>SharedVPC</code> in the networking section for the new cluster, and then copying the Kubernetes resources to the new cluster?</li>
<li>Also, it looks like we cannot convert a <code>public</code> GKE cluster to <code>private</code> mode. Does this too require creating a new cluster to migrate from a <code>Public</code> to a <code>Private</code> GKE cluster?</li>
</ul>
| Zama Ques | <p>Unfortunately you cannot change any of those settings on an existing <strong>GKE</strong> cluster. You can clone the existing one by using the <code>DUPLICATE</code> tab in the cluster details:</p>
<p><a href="https://i.stack.imgur.com/C88fI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C88fI.png" alt="enter image description here" /></a></p>
<p>During new cluster creation you can change it from <code>Public</code> to <code>Private</code> in <code>Cluster -> Networking</code> section:</p>
<p><a href="https://i.stack.imgur.com/v2hOz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v2hOz.png" alt="enter image description here" /></a></p>
<p>After choosing it you'll need to correct fields that are marked in red:</p>
<p><a href="https://i.stack.imgur.com/l9Vwo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l9Vwo.png" alt="enter image description here" /></a></p>
<p>You can also choose different <code>VPC</code> network.</p>
<p>When it comes to migrating a workload, that is a separate story. You can choose the approach that is most suitable for you, ranging from manually exporting all your yaml manifests (a pretty tedious and not very convenient process, I would say) to using dedicated tools like <a href="https://github.com/vmware-tanzu/velero" rel="nofollow noreferrer">velero</a>.</p>
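<p>For example, a migration with <code>velero</code> could look roughly like this (a minimal sketch; the backup name and namespace are placeholders, and the object storage / provider setup is omitted):</p>
<pre><code># On the old cluster: back up everything in the application namespace
velero backup create my-backup --include-namespaces my-namespace

# On the new cluster (velero configured against the same object storage bucket):
velero restore create --from-backup my-backup
</code></pre>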
| mario |
<p>I followed the steps from this guide to create a kafka pod:
<a href="https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber" rel="nofollow noreferrer">https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber</a></p>
<p>Though I used LoadBalancer type for kafka-service (as said in the guide), I don't get an external IP for kafka-service:
<a href="https://i.stack.imgur.com/xvOvB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xvOvB.png" alt="enter image description here" /></a></p>
<p>On kubernetes dashboards kafka-service is shown as running.</p>
| Viorel Casapu | <p>The <code>LoadBalancer</code> service type and <code>Ingress</code> are only available to Kubernetes if you are using a cloud provider like GCP, AWS, Azure, etc. <strong>They are not supported out of the box for bare-metal</strong> implementations.</p>
<p>But if you are running Kubernetes on bare metal, you can <strong>alternatively</strong> use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> to <em>enable</em> the <code>LoadBalancer</code> service type and <code>ingress</code>.</p>
<blockquote>
<p>Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.</p>
</blockquote>
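<p>For reference, a minimal MetalLB layer 2 configuration is just a <code>ConfigMap</code> with an address pool (the IP range below is an example and must match your own network; newer MetalLB releases use CRDs instead, so check the docs for your version):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>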
<p><strong>For minikube</strong></p>
<p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">On minikube</a>, the LoadBalancer type makes the Service accessible through the minikube service command.</p>
<p>Run the following command:</p>
<p><code>minikube service hello-node</code></p>
<p><strong>Or</strong> you can <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#enable-addons" rel="nofollow noreferrer">enable the nginx-ingress addon</a> if you want to create an ingress:</p>
<p><code>minikube addons enable ingress</code></p>
| Mr.KoopaKiller |
<p>I'm running this tutorial <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a> and found that the elasticsearch operator comes included with a pre-defined secret which is accessed through <code>kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'</code>. I was wondering how I can access it in a manifest file for a pod that will make use of this as an env var. The pod's manifest is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: user-depl
spec:
replicas: 1
selector:
matchLabels:
app: user
template:
metadata:
labels:
app: user
spec:
containers:
- name: user
image: reactor/user
env:
- name: PORT
value: "3000"
- name: ES_SECRET
valueFrom:
secretKeyRef:
name: quickstart-es-elastic-user
key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
name: user-svc
spec:
selector:
app: user
ports:
- name: user
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>When trying to define <code>ES_SECRET</code> as I did in this manifest, I get this error message: <code>invalid map key: map[interface {}]interface {}{\".data.elastic\":interface {}(nil)}\n</code>. Any help on resolving this would be much appreciated.</p>
| reactor | <p>The secret returned via the API (<code>kubectl get secret ...</code>) is a JSON structure like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"data": {
"elastic": "base64 encoded string"
}
}
</code></pre>
<p>So you just need to replace</p>
<pre class="lang-yaml prettyprint-override"><code>key: { { .data.elastic } }
</code></pre>
<p>with</p>
<pre class="lang-yaml prettyprint-override"><code>key: elastic
</code></pre>
<p>since it's a <code>secretKeyRef</code>erence (i.e. you refer to a value under some <code>key</code> in the <code>data</code> (= contents) of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you.</p>
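<p>So the corrected <code>env</code> entry from your manifest would look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: ES_SECRET
  valueFrom:
    secretKeyRef:
      name: quickstart-es-elastic-user
      key: elastic
</code></pre>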
| anemyte |
<p>Request: the limits of a pod may be set too low at the beginning; to make full use of the node's resources, we need to set the limits higher. However, when the node's resources are not enough, to keep the node working well, we need to set the limits lower. It is better not to kill the pod, because that may affect the cluster.</p>
<p>Background: I am currently a beginner with k8s and docker; my mentor gave me this request. Can this request be fulfilled normally? Or is there a better way to solve this kind of problem? Thanks for your help!
All I tried: I am trying to do this by editing the cgroups, but I can only do that inside a container, so maybe the container would have to run in privileged mode.</p>
<p>I expect a reasonable plan for this request.
Thanks...</p>
| scofieldmao | <p>The clue is you want to change limits <strong>without killing the pod</strong>. </p>
<p>This is not the way Kubernetes works, as <a href="https://stackoverflow.com/users/1296707/markus-w-mahlberg">Markus W Mahlberg</a> explained in his comment above. In Kubernetes there are no "hot plug CPU/memory" or "live migration" facilities like those that conventional hypervisors provide. Kubernetes treats pods as ephemeral instances and does not take care of keeping particular pod instances running. Whether you need to change resource limits for the application, change the app configuration, install app updates or repair a misbehaving application, the <strong>"kill-and-recreate"</strong> approach is applied to pods.</p>
<p>Unfortunately, the solutions suggested here will not work for you: </p>
<ul>
<li>Increasing limits for the running container within the pod (the <code>docker update</code> command) will lead to breaching the pod limits and Kubernetes killing the pod.</li>
<li>Vertical Pod Autoscaler is part of Kubernetes project and relies on the "kill-and-recreate" approach as well.</li>
</ul>
<p>If you really need to keep the containers running and manage their allocated resource limits "on the fly", perhaps Kubernetes is not a suitable solution in this particular case. You should probably consider using pure Docker or a VM-based solution.</p>
| mebius99 |
<p>I am trying to find the min/max/average memory consumed by a particular pod over a time interval.</p>
<p>Currently I am using</p>
<pre><code>sum(container_memory_working_set_bytes{namespace="test", pod="test1", container!="POD", container!=""}) by (container)
Output -> test1 = 9217675264
</code></pre>
<p>For reporting purposes, I need to find the min/peak memory used by the pod over a time interval (6h),
and the average too.</p>
| pythonhmmm | <p>You can do that with a range vector (add an <code>[interval]</code> to a metric name/selector) and an <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="nofollow noreferrer">aggregation-over-time function</a>:</p>
<pre><code>min_over_time(container_memory_usage_bytes{}[6h])
max_over_time(container_memory_usage_bytes{}[6h])
avg_over_time(container_memory_usage_bytes{}[6h])
</code></pre>
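<p>If you want the min/peak/average of the summed value from your original query (rather than of the raw per-container series), you can wrap it in a subquery (requires Prometheus 2.7+; the 1m resolution below is an arbitrary choice):</p>
<pre><code>max_over_time(
  (sum(container_memory_working_set_bytes{namespace="test", pod="test1", container!="POD", container!=""}) by (container))[6h:1m]
)
</code></pre>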
| anemyte |
<p>I have a fileserver with a folder <code>/mnt/kubernetes-volumes/</code>. Inside this folder, I have subdirectories for every user of my application:</p>
<ul>
<li><code>/mnt/kubernetes-volumes/customerA</code></li>
<li><code>/mnt/kubernetes-volumes/customerB</code></li>
<li><code>/mnt/kubernetes-volumes/customerC</code></li>
</ul>
<p>Inside each customer folder, I have two folders for the applications of my data stores:</p>
<ul>
<li><code>/mnt/kubernetes-volumes/customerA/elastic</code></li>
<li><code>/mnt/kubernetes-volumes/customerA/postgres</code></li>
</ul>
<p>I have a Kubernetes configuration file that specifies persistent volumes and claims. These mount to the <code>elastic</code> and <code>postgres</code> folders.</p>
<pre><code># Create a volume on the NFS disk that the postgres db can use.
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: customerA
name: postgres-pv
labels:
name: postgres-volume
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
server: 1.2.3.4 # ip addres of nfs server
path: "/mnt/kubernetes-volumes/customerA/postgres"
---
# And another persistent volume for Elastic
</code></pre>
<p>Currently, my <code>/etc/exports</code> file looks like this:</p>
<pre><code>/mnt/kubernetes-volumes/customerA/postgres 10.0.0.1(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerA/elastic 10.0.0.1(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerA/postgres 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerA/elastic 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerB/postgres 10.0.0.1(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerB/elastic 10.0.0.1(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerB/postgres 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/customerB/elastic 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)
</code></pre>
<p>For every customer, I explicitly export a <code>postgres</code> and <code>elastic</code> folder separately for every node in my Kubernetes cluster. This works as intended. However, I now have to manually add rows to the <code>/etc/exports</code> file for every new customer. Is it possible to have just two lines like this:</p>
<pre><code>/mnt/kubernetes-volumes/ 10.0.0.1(rw,sync,no_subtree_check,no_root_squash)
/mnt/kubernetes-volumes/ 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)
</code></pre>
<p>And automatically have Kubernetes create the correct sub-directories (customer, postgres and elastic) inside the <code>kubernetes-volumes</code> directory and mount them?</p>
| yesman | <p>Kubernetes can't perform system commands on nodes by itself; you need to use some external tool, such as a bash script.</p>
<p>You could use just the path <code>/mnt/kubernetes-volumes/</code> in your containers and pass the customer name using an environment variable, but that would make all data accessible to all other pods, which isn't a good idea.</p>
<p>Also, you could try to use <a href="https://helm.sh/docs/chart_template_guide/getting_started/" rel="nofollow noreferrer">Helm templates</a> to create your <code>PersistentVolumes</code> using the names of your customers as variables, as sketched below.</p>
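<p>A minimal sketch of such a template, reusing the <code>PersistentVolume</code> from your question (<code>.Values.customer</code> and <code>.Values.nfsServer</code> are made-up value names; note that this only creates the Kubernetes objects, the directories on the NFS server still have to exist):</p>
<pre><code># templates/postgres-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-{{ .Values.customer }}
  labels:
    name: postgres-volume
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: {{ .Values.nfsServer }}
    path: "/mnt/kubernetes-volumes/{{ .Values.customer }}/postgres"
</code></pre>
<p>Then something like <code>helm install customer-a ./mychart --set customer=customerA --set nfsServer=1.2.3.4</code> would render the per-customer volumes.</p>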
| Mr.KoopaKiller |
<p>Going through blogs/official sites, I installed kubectl and minikube. After successful installation of both, I executed the following command.</p>
<pre><code>minikube start --driver=hyperv
</code></pre>
<p>After executing the above command I am stuck and the process does not complete at all, as shown in the screenshot below.
<a href="https://i.stack.imgur.com/6N4IK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6N4IK.png" alt="enter image description here" /></a></p>
<p>The process has been running in step:4 (Updating the running hyperv "minikube" VM...) for more than 30 minutes.</p>
<p>Please help me to resolve this as I just started learning Kubernetes.</p>
<p>Thanks in advance.</p>
| akhil | <p>Maybe this can help (from here <a href="https://stackoverflow.com/questions/56327843/minikube-is-slow-and-unresponsive">Minikube is slow and unresponsive</a>):</p>
<p><strong>1)</strong> Debug issues with minikube by adding the <code>-v</code> flag and setting the verbosity level (0, 1, 2, 3, 7).</p>
<p>For example: <code>minikube start --v=1</code> sets the output to INFO level.<br/>
More detailed information <a href="https://github.com/kubernetes/minikube/blob/master/docs/debugging.md" rel="nofollow noreferrer">here</a></p>
<p><strong>2)</strong> Use logs command <code>minikube logs</code></p>
<p><strong>3)</strong> Because Minikube runs on a virtual machine, sometimes it is better to delete minikube and start it again (it helped in this case).</p>
<pre><code>minikube delete
minikube start
</code></pre>
<p><strong>4)</strong> It might get slow due to lack of resources.</p>
<p>Minikube by default uses 2048MB of memory and 2 CPUs. More details about this can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L97" rel="nofollow noreferrer">here</a>.
In addition, you can force Minikube to allocate more using the command <code>minikube start --cpus 4 --memory 8192</code></p>
| Bguess |
<p>I currently have a php-fpm container set up in Kubernetes to output error messages, exceptions,... to stderr. This way I can see my PHP errors, when using 'kubectl logs'.</p>
<p>I'm also working with sentry and I was wondering if there was a good way to collect my log output and send it to sentry, so I can use sentry to see my errors as well. I'm not looking to change the code though, so using php and some specific logger to send messages to sentry directly won't work for me.</p>
| patrick.barbosa.3979 | <p>You can use <a href="https://www.fluentd.org/" rel="nofollow noreferrer"><strong>Fluentd</strong></a> with <a href="https://www.fluentd.org/plugins/all" rel="nofollow noreferrer">an output plugin</a> that sends aggregated errors/exception events to <strong>Sentry</strong> e.g. <a href="https://github.com/y-ken/fluent-plugin-sentry" rel="nofollow noreferrer">this one</a>.</p>
<p><strong>Fluentd</strong> is deployed as a sidecar container in your app <code>Pod</code> so you don't have to change anything in your code.</p>
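<p>A minimal sketch of that sidecar pattern, assuming the PHP container also writes its error log to a file on a shared <code>emptyDir</code> volume (image names and paths are placeholders; the Fluentd image must have <code>fluent-plugin-sentry</code> installed, and its configuration with the <code>tail</code> source and the Sentry output would come from the referenced <code>ConfigMap</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: php-app
spec:
  volumes:
    - name: app-logs
      emptyDir: {}
    - name: fluentd-config
      configMap:
        name: fluentd-sentry-config   # hypothetical ConfigMap holding fluent.conf
  containers:
    - name: php-fpm
      image: my-php-fpm-image         # placeholder
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app     # app writes error.log here in addition to stderr
    - name: fluentd
      image: fluent/fluentd           # placeholder; build an image with fluent-plugin-sentry
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
        - name: fluentd-config
          mountPath: /fluentd/etc
</code></pre>
<p>If the logs only ever go to stderr, a node-level Fluentd (<code>DaemonSet</code>) tailing the container log files is the usual alternative.</p>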
| mario |
<p>I've encountered a rather rookie issue with k8s. I'm particularly new to k8s, and setup staging and production services / deployments for a Django - celery - redis application within a cluster. However. In my excitement that I actually managed to get something working, I didn't check to think if it was 100% correct.</p>
<p>Essentially, I've noticed that the pre-production Django application doesn't care which celery deployment it references when dispatching a periodic task. It might go to staging, it might try the pre-production deployment. <strong>THIS IS BAD</strong>.</p>
<p>So, I've been looking at labels and selectors, as well as namespaces.</p>
<p>However, I should probably slow down - my first question, how would I use something native to k8s to run different environments of deployments, such that they are all isolated from each other. So the pre-production Django application can only talk to the pre-production celery-worker or pre-production celery-beat deployments...</p>
<p>*My answer I feel is to use labels and selectors? But ... is it best to use namespaces?</p>
<p>Any pro-guidance around this subject would be amazing.</p>
| Micheal J. Roberts | <p>Aside from creating a new cluster per environment, you can separate deployments by namespace or just run different stacks in one namespace. The last option (the one you use now) is the easiest to shoot yourself in the foot with, since you have to change a lot of things to make it isolated. At the very least you need a different set of resource names (both in manifests and configuration) and matching labels.</p>
<p>Out of the three methods I think namespace separation is the easiest; it works on DNS-based service discovery. Suppose you have a copy of redis and your application in two different namespaces (<code>dev</code> and <code>prod</code> for example). Both instances of the app are configured to use redis at <code>redis:6379</code>. When they call DNS to resolve the hostname, <code>CoreDNS</code> (the internal DNS service) would respond with different answers depending on which namespace the request came from. And so your app in <code>dev</code> namespace will get an IP-address of redis in <code>dev</code> namespace, and the app from <code>prod</code> namespace will contact redis from <code>prod</code> namespace. This method does not apply any restriction, if you wish you can specifically make it so that both apps use the same copy of redis. For that, instead of <code>redis:6379</code> you have to use a full service DNS name, like this:</p>
<p><code>redis.<namespace>.svc.cluster.local:6379</code></p>
<p>Regardless of which method you choose to go with, I strongly recommend you get familiar with <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a>, <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, or both. These tools help you avoid duplicating resource manifests and thus spend less time spawning and managing instances. I will give you a minimal example for <code>Kustomize</code> because it is built into <code>kubectl</code>. Consider the following directory structure:</p>
<pre><code>.
├── bases
│ └── my-app
│ ├── deployment.yml # your normal deployment manifest
│ └── kustomization.yml
└── instances
├── prod
│ ├── kustomization.yml
│ └── namespace.yml # a manifest that creates 'prod' namespace
└── test
├── kustomization.yml
└── namespace.yml # a manifest that creates 'test' namespace
</code></pre>
<p><code>bases</code> is where you keep a non-specific skeleton of your application. This isn't meant to be deployed; like a class, it has to be instantiated. <code>instances</code> is where you describe various instances of your application. Instances are meant to be deployed.</p>
<p><strong>bases/my-app/kustomization.yml</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code># which manifests to pick up
resources:
- deployment.yml
</code></pre>
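<p>For completeness, <code>bases/my-app/deployment.yml</code> can be any ordinary manifest; a minimal sketch (nginx as a placeholder image, no namespace set because the instances override it):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx  # placeholder
          ports:
            - containerPort: 80
</code></pre>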
<p><strong>instances/prod/kustomization.yml</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code># refer what we deploy
bases:
- ../../bases/my-app
resources:
- namespace.yml
# and this overrides namespace attribute for all manifests referenced above
namespace: prod
</code></pre>
<p><strong>instances/test/kustomization.yml</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code># the same as above, only the namespace is different
bases:
- ../../bases/my-app
resources:
- namespace.yml
namespace: test
</code></pre>
<p>Now if you go into <code>instances</code> directory and use <code>kubectl apply -k prod</code> you will deploy <code>deployment.yml</code> to <code>prod</code> namespace. Similarly <code>kubectl apply -k test</code> will deploy it to the <code>test</code> namespace.</p>
<p>And this is how you can create several identical copies of your application stack in different namespaces. It should be fairly isolated unless some shared resources from other namespaces are involved. In other words, if you deploy each component (such as the database) per namespace and those components are not configured to access components from other namespaces - it will work as expected.</p>
<p>I encourage you to read more on <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> and <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, since namespace overriding is just a basic thing these can do. You can manage labels, configuration, names, stack components and more.</p>
| anemyte |
<p>I've deployed an app to k8s using <code>kubectl apply ...</code> a few weeks ago; the application is <strong>running in prod.</strong>
Now we have switched to Helm installation and built a chart that we want to update the application with.
The problem is that there are already deployed artifacts like secrets/config maps of the application
<strong>which I cannot delete</strong>.</p>
<p>And while running <code>helm upgrade --install app chart/myapp -n ns </code></p>
<p><strong>I got error like</strong></p>
<blockquote>
<p>Error: rendered manifests contain a resource that already exists.
Unable to continue with install: existing resource conflict:
namespace: ns, name: bts, existing_kind: /v1, Kind=Secret, new_kind:
/v1, Kind=Secret</p>
</blockquote>
<p>Is there any <strong>trick</strong> which I can use to <strong>overcome this without deleting</strong> the secret/configmap ?</p>
| PJEM | <p>After testing some options in my lab, I've realized that the way I told you in the comments works.</p>
<p>Helm uses <code>metadata</code> and <code>labels</code> injected into the resources to know which resources are managed by it. The <strong>workaround</strong> below shows how you can <em>import</em> a previously created secret, not managed by Helm, using <code>metadata</code> information from a new secret deployed with Helm.</p>
<p>Let's suppose <code>my-secret</code> is already deployed and you want to <em>"import"</em> that resource into Helm; you need to get the metadata information from the new resource. Let's dig into it:</p>
<p><strong>Scenario:</strong></p>
<ol>
<li>A <code>secret</code> named <code>my-secret</code> deployed in the <code>default</code> namespace (not managed by Helm).</li>
<li>A Helm chart with a secret template named <code>my-new-secret</code> with a different value.</li>
</ol>
<p><strong>Steps:</strong></p>
<ol>
<li>Create a normal secret for testing purposes using this spec:</li>
</ol>
<pre><code> apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
secret: S29vcGFLaWxsZXIK
</code></pre>
<ol start="2">
<li>Apply the Helm chart to create the <code>my-new-secret</code>. The real purpose of that is to get the <code>metadata</code> and <code>labels</code> information.</li>
</ol>
<p>After that you can see the secret file using the command:</p>
<p><code>kubectl get secrets my-secret -o yaml</code>:</p>
<pre><code>apiVersion: v1
data:
secret: VXB2b3RlSXQ=
kind: Secret
metadata:
annotations:
meta.helm.sh/release-name: myapp-1599472603
meta.helm.sh/release-namespace: default
creationTimestamp: "2020-09-07T10:03:05Z"
labels:
app.kubernetes.io/managed-by: Helm
name: my-secret
namespace: default
resourceVersion: "2064792"
selfLink: /api/v1/namespaces/default/secrets/my-secret
uid: 7cf66475-b26b-415b-8c11-8fb6974da495
type: Opaque
</code></pre>
<p>From this file we need to get the <code>annotations</code> and <code>labels</code> to apply in our old <code>my-secret</code>.</p>
<ol start="3">
<li>Edit the secret file created in step 1 to add that information. It will result in a file like this:</li>
</ol>
<pre><code>apiVersion: v1
data:
secret: S29vcGFLaWxsZXIK
kind: Secret
metadata:
annotations:
meta.helm.sh/release-name: myapp-1599472603
meta.helm.sh/release-namespace: default
name: my-secret
labels:
app.kubernetes.io/managed-by: Helm
namespace: default
</code></pre>
<ol start="4">
<li><p>Delete the <code>my-new-secret</code> created by Helm, since we no longer use it:
<code>kubectl delete secrets my-new-secret</code></p>
</li>
<li><p>In the Helm chart, edit the secret name to match with the old secret, in our case change the name from <code>my-new-secret</code> to <code>my-secret</code>.</p>
</li>
<li><p>Upgrade the Helm chart, in my case I have used a value from Values.yaml:</p>
</li>
</ol>
<pre><code>$ helm upgrade -f myapp/values.yaml myapp-1599472603 ./myapp/
Release "myapp-1599472603" has been upgraded. Happy Helming!
NAME: myapp-1599472603
LAST DEPLOYED: Mon Sep 7 10:28:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
</code></pre>
| Mr.KoopaKiller |
<p>This is an extension to the question <a href="https://stackoverflow.com/questions/39231880/kubernetes-api-gets-pods-on-specific-nodes">here</a> - how can I get the list pods running on nodes with a certain label?</p>
<p>I am trying to find the pods in a specific zone (failure-domain.beta.kubernetes.io/zone)</p>
| vrtx54234 | <p>You can get the names of all nodes with the label you want using a <code>for</code> loop and list the pods running on those nodes:</p>
<p>Example:</p>
<pre><code>for node in $(kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=us-central1-c -ojsonpath='{.items[*].metadata.name}'); do kubectl get pods -A -owide --field-selector spec.nodeName=$node; done
</code></pre>
<p>The command gets all nodes with the label <code>failure-domain.beta.kubernetes.io/zone=us-central1-c</code> and then lists the pods running on each of them.</p>
| Mr.KoopaKiller |
<p>On Minikube using KubeCtl, I run an image created by Docker using the following command:</p>
<pre><code>kubectl run my-service --image=my-service-image:latest --port=8080 --image-pull-policy Never
</code></pre>
<p>But on Minikube, a different configuration is to be applied to the application. I prepared some environment variables in a deployment file and want to apply them to the images on Minikube. Is there a way to tell KubeCtl to run those images using a given deployment file or even a different way to provide the images with those values?</p>
<p>I tried the <em>apply</em> verb of KubeCtl for example, but it tries to create the pod instead of applying the configuration on it.</p>
| AHH | <p>In minikube/Kubernetes you need to apply the environment variables in the yaml file of your pod/deployment.</p>
<p>Here is an example of how you can configure environment variables in a pod spec:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow"
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Here</a> you can find more information about environment variables.</p>
<p>In this case, if you want to change any value, you need to delete the pod and apply it again. But if you use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a>, all modifications can be done using the <code>kubectl apply</code> command, as shown below.</p>
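<p>For example, the same container and environment variables wrapped in a <code>Deployment</code> would look roughly like this; after changing a value, re-running <code>kubectl apply -f</code> rolls out new pods with the updated environment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: envar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      purpose: demonstrate-envars
  template:
    metadata:
      labels:
        purpose: demonstrate-envars
    spec:
      containers:
        - name: envar-demo-container
          image: gcr.io/google-samples/node-hello:1.0
          env:
            - name: DEMO_GREETING
              value: "Hello from the environment"
            - name: DEMO_FAREWELL
              value: "Such a sweet sorrow"
</code></pre>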
| Mr.KoopaKiller |
<p>My question is built on the question and answers from this question - <a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types">What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?</a></p>
<p>The question might not be well-formed for some of you.</p>
<p>I am trying to understand the differences between <code>clusterIP</code>, <code>nodePort</code> and <code>Loadbalancer</code> and when to use these with an example. I suppose that my understanding of the following concept is correct
K8s consists of the following components</p>
<ul>
<li>Node - A VM or physical machine. Runs kubectl and docker process</li>
<li>Pod - unit which encapsulates container(s) and volumes (storage). If a pod contains multiple containers then shared volume could be the way for process communication</li>
<li>Node can have one or multiple pods. Each pod will have its own IP</li>
<li>Cluster - replicas of a Node. Each node in a cluster will contain same pods (instances, type)</li>
</ul>
<p>Here is the scenario:</p>
<p>My application has a <code>web server</code> (always returning 200OK) and a <code>database</code> (always returning the same value) for simplicity. Also, say I am on <code>GCP</code> and I make images of <code>webserver</code> and of the <code>database</code>. Each of these will be run in their own respective <code>pods</code> and will have 2 replicas.</p>
<p>I suppose I'll have two clusters (<code>cluster-webserver (node1-web (containing pod1-web), node2-web (containing pod2-web))</code> and <code>cluster-database (node1-db (containing pod1-db), node2-db (containing pod2-db))</code>. Each node will have its own <code>ip</code> address (<code>node1-webip, node2-webip, node1-dbip, node2-dbip</code>)</p>
<p>A client application (browser) should be able to access the web application from outside <code>web</code> cluster but the <code>database</code> shouldn't be accessible from outside <code>database</code> cluster. However <code>web</code> nodes should be able to access <code>database</code> nodes)</p>
<ul>
<li>Question 1 - Am I correct that if I create a service for <code>web</code> (<code>webServiceName</code>) and a service for <code>database</code> then by default, I'll get only <code>clusterIP</code> and a <code>port</code> (or <code>targetPort</code>).</li>
<li>Question 1.2 - Am I correct that <code>clusterIP</code> is an <code>IP</code> assigned to a <code>pod</code>, not the <code>node</code> i.e. in my example, clusterIP gets assigned to <code>pod1-web</code>, not <code>node1-web</code> even though <code>node1</code> has only <code>pod1</code>.</li>
<li>Question 1.3 - Am I correct that as <code>cluster IP</code> is accessible from only within the cluster, <code>pod1-web</code> and <code>pod2-web</code> can talk to each other and <code>pod1-db</code> and <code>pod2-db</code> can talk to each other using <code>clusterIP/dns:port</code> or <code>clusterIP/dns:targetPort</code> but <code>web</code> can't talk to <code>database</code> (and vice versa) and external client can't talk to web? Also, the <code>nodes</code> are not accessible using the <code>cluster IP</code>.</li>
<li>Question 1.4 - Am I correct that <code>dns</code> i.e. <code>servicename.namespace.svc.cluster.local</code> would map the <code>clusterIP</code>?</li>
<li>Question 1.5 - For which type of applications I might use only <code>clusterIP</code>? Where multiple instances of an application need to communicate with each other (eg master-slave configuration)?</li>
</ul>
<p>If I use <code>nodePort</code> then <code>K8s</code> will open a <code>port</code> on each of the <code>node</code> and will forward <code>nodeIP/nodePort</code> to <code>cluster IP (on pod)/Cluster Port</code></p>
<ul>
<li>Question 2 - Can <code>web</code> nodes now access <code>database</code> nodes using <code>nodeIP:nodePort</code> which will route the traffic to <code>database's</code> <code>clusterIP (on pod):clusterport/targertPort</code>? ( I have read that clusterIP/dns:nodePort will not work).</li>
<li>Question 2.1 - How do I get a <code>node's</code> <code>IP</code>? Is <code>nodeIP</code> the <code>IP</code> I'll get when I run <code>describe pods</code> command?</li>
<li>Question 2.2 - Is there a <code>dns</code> equivalent for the <code>node IP</code> as <code>node IP</code> could change during failovers. Or does <code>dns</code> now resolve to the node's IP instead of <code>clusterIP</code>?</li>
<li>Question 2.3 - I read that <code>K8s</code> will create <code>endpoints</code> for each <code>service</code>. Is <code>endpoint</code> same as <code>node</code> or is it same as <code>pod</code>? If I run <code>kubectl describe pods</code> or <code>kubectl get endpoints</code>, would I get same <code>IPs</code>)?</li>
</ul>
<p>As I am on <code>GCP</code>, I can use <code>Loadbalancer</code> for <code>web</code> cluster to get an external IP. Using the external IP, the client application can access the <code>web</code> service</p>
<p>I saw this configuration for a <code>LoadBalancer</code></p>
<pre><code>spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
type: LoadBalancer
</code></pre>
<p>Questi</p>
<ul>
<li>Question 3 - Is it exposing an external <code>IP</code> and port <code>80</code> to outside world? What would be the value of <code>nodePort</code> in this case?</li>
</ul>
| Manu Chadha | <blockquote>
<p>My question is built on the question and answers from this question -
<a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types">What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?</a></p>
<p>The question might not be well-formed for some of you.</p>
</blockquote>
<p>It's ok but in my opinion it's a bit too extensive for a single question and it could be posted as a few separate questions as it touches quite a few different topics.</p>
<blockquote>
<p>I am trying to understand the differences between <code>clusterIP</code>,
<code>nodePort</code> and <code>Loadbalancer</code> and when to use these with an example. I
suppose that my understanding of the following concept is correct K8s
consists of the following components</p>
<ul>
<li>Node - A VM or physical machine. Runs kubectl and docker process</li>
</ul>
</blockquote>
<p>Not <code>kubectl</code> but <code>kubelet</code>. You can check it by <code>ssh-ing</code> into your node and running <code>systemctl status kubelet</code>. And yes, it also runs some sort of <strong>container runtime environment</strong>. It doesn't have to be exactly <strong>docker</strong>.</p>
<blockquote>
<ul>
<li>Pod - unit which encapsulates container(s) and volumes (storage). If a pod contains multiple containers then shared volume could be the way
for process communication</li>
<li>Node can have one or multiple pods. Each pod will have its own IP</li>
</ul>
</blockquote>
<p>That's correct.</p>
<blockquote>
<ul>
<li>Cluster - replicas of a Node. Each node in a cluster will contain same pods (instances, type)</li>
</ul>
</blockquote>
<p>Not really. Kubernetes <code>nodes</code> are not different replicas. They are part of the same kubernetes cluster but they are <strong>independent instances</strong>, which are capable of running your containerized apps. In <strong>kubernetes</strong> terminology this is called a <strong>workload</strong>. A workload isn't part of the kubernetes cluster, it's something that you run on it. Your <code>Pods</code> can be scheduled on different nodes and it doesn't always have to be an even distribution. Suppose you have a kubernetes cluster consisting of 3 worker nodes (nodes on which workload can be scheduled, as opposed to the master node, which usually runs only kubernetes control plane components). If you deploy your application as a <code>Deployment</code>, e.g. 5 different replicas of the same <code>Pod</code> are created. Usually they are scheduled on different nodes, but a situation where <strong>node1</strong> runs 2 replicas, <strong>node2</strong> 3 replicas and <strong>node3</strong> zero replicas is perfectly possible.</p>
<p>You need to keep in mind that there are different clustering levels. You have your <strong>kubernetes cluster</strong> which basically is an environment to run your containerized workload.</p>
<p>There are also clusters within this cluster i.e. it is perfectly possible that your workload forms clusters as well e.g. you can have a database deployed as a <code>StatefulSet</code> and it can run in a cluster. In such scenario, different stateful <code>Pods</code> will form members or nodes of such cluster.</p>
<p>Even if your <code>Pods</code> don't communicate with each other but e.g. serve exactly the same content, the <code>Deployment</code> resource makes sure that there is always a certain number of replicas of such a <code>Pod</code> up and running. If one kubernetes node for some reason becomes unavailable, such a <code>Pod</code> needs to be re-scheduled on one of the available nodes. So the replication of your workload isn't achieved by deploying it on different kubernetes nodes but by ensuring that a certain number of replicas of a <code>Pod</code> of a certain kind is always up and running, and they may be running on the same as well as on different kubernetes nodes.</p>
<blockquote>
<p>Here is the scenario:</p>
<p>My application has a <code>web server</code> (always returning 200OK) and a
<code>database</code> (always returning the same value) for simplicity. Also, say
I am on <code>GCP</code> and I make images of <code>webserver</code> and of the <code>database</code>.
Each of these will be run in their own respective <code>pods</code> and will have
2 replicas.</p>
<p>I suppose I'll have two clusters (<code>cluster-webserver (node1-web (containing pod1-web), node2-web (containing pod2-web))</code> and
<code>cluster-database (node1-db (containing pod1-db), node2-db (containing pod2-db))</code>. Each node will have its own <code>ip</code> address (<code>node1-webip, node2-webip, node1-dbip, node2-dbip</code>)</p>
</blockquote>
<p>See what I wrote above about different clustering levels. Clusters formed by your app have nothing to do with the kubernetes cluster or its nodes. And I would say you rather have 2 different <strong>microservices</strong> communicating with each other and in some way also dependent on one another. But yes, you may see your database as a separate db cluster deployed within the kubernetes cluster.</p>
<blockquote>
<p>A client application (browser) should be able to access the web
application from outside <code>web</code> cluster but the <code>database</code> shouldn't be
accessible from outside <code>database</code> cluster. However <code>web</code> nodes should
be able to access <code>database</code> nodes)</p>
<ul>
<li>Question 1 - Am I correct that if I create a service for <code>web</code> (<code>webServiceName</code>) and a service for <code>database</code> then by default, I'll
get only <code>clusterIP</code> and a <code>port</code> (or <code>targetPort</code>).</li>
</ul>
</blockquote>
<p>Yes, <code>ClusterIP</code> service type is often simply called a <code>Service</code> because it's the default <code>Service</code> type. If you don't specify <code>type</code> like in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">this example</a>, <code>ClusterIP</code> type is created. To understand the difference between <code>port</code> and <code>targetPort</code> you can take a look at <a href="https://stackoverflow.com/a/63472670/11714114">this answer</a> or <a href="https://stackoverflow.com/a/63472670/11714114">kubernetes official docs</a>.</p>
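<p>A minimal sketch of such a default (<code>ClusterIP</code>) <code>Service</code>, assuming a hypothetical database app labelled <code>app: my-db</code> and listening on 5432:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  # type: ClusterIP is the default, so it can be omitted
  selector:
    app: my-db
  ports:
    - protocol: TCP
      port: 5432        # port of the Service itself (reachable at its cluster IP)
      targetPort: 5432  # port the backing Pods listen on
</code></pre>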
<blockquote>
<ul>
<li>Question 1.2 - Am I correct that <code>clusterIP</code> is an <code>IP</code> assigned to a <code>pod</code>, not the <code>node</code> i.e. in my example, clusterIP gets assigned to
<code>pod1-web</code>, not <code>node1-web</code> even though <code>node1</code> has only <code>pod1</code>.</li>
</ul>
</blockquote>
<p>Basically yes. <code>ClusterIP</code> is one of the things that can be easily misunderstood as it is used to denote also a specific <code>Service</code> type, but in this context yes, it's an internal IP assigned within a kubernetes cluster to a specific resource, in this case to a <code>Pod</code>, but <code>Service</code> object has it's own Cluster IP assigned. <code>Pods</code> as part of kubernetes cluster get their own internal IPs (from kubernetes cluster perspective) - cluster IPs. Nodes can have completely different addressing scheme. They can also be private IPs but they are not cluster IPs, in other words they are not internal kubernetes cluster IPs from cluster perspective. Apart from those external IPs (from kubernetes cluster perspective), kubernetes nodes as legitimate API resources / objects have also their own Cluster IPs assigned.</p>
<p>You can check it by running:</p>
<pre><code>kubectl get nodes --output wide
</code></pre>
<p>It will show you both internal and external nodes IPs.</p>
<blockquote>
<ul>
<li>Question 1.3 - Am I correct that as <code>cluster IP</code> is accessible from only within the cluster, <code>pod1-web</code> and <code>pod2-web</code> can talk to each
other and <code>pod1-db</code> and <code>pod2-db</code> can talk to each other using
<code>clusterIP/dns:port</code> or <code>clusterIP/dns:targetPort</code> but <code>web</code> can't
talk to <code>database</code> (and vice versa) and external client can't talk to
web? Also, the <code>nodes</code> are not accessible using the <code>cluster IP</code>.</li>
</ul>
</blockquote>
<p>Yes, cluster IPs are only accessible from within the cluster. And yes, web pods and db pods can communicate with each other (typically the communication is initiated from <code>web</code> pods) provided you exposed them (in your case the db pods) via a <code>ClusterIP</code> <code>Service</code>. As already mentioned, this type of <code>Service</code> exposes some set of <code>Pods</code> forming one microservice to some other set of <code>Pods</code> which need to communicate with them, and it exposes them only internally, within the cluster, so no external client has access to them. You expose your <code>Pods</code> externally by using <code>LoadBalancer</code>, <code>NodePort</code> or in many scenarios via <code>ingress</code> (which under the hood also uses a loadbalancer).</p>
<p>this fragment is not very clear to me:</p>
<blockquote>
<p>but <code>web</code> can't
talk to <code>database</code> (and vice versa) and external client can't talk to
web? Also, the <code>nodes</code> are not accessible using the <code>cluster IP</code>.</p>
</blockquote>
<p>If you expose your db via <code>Service</code> to be accessible from <code>web</code> <code>Pods</code>, they will have access to it. And if your web <code>Pods</code> are exposed to the external world e.g. via <code>LoadBalancer</code> or <code>NodePort</code>, they will be accessible from outside. And yes, nodes won't be accessible from outside by their cluster IPs as they are private internal IPs of a kubernetes cluster.</p>
<blockquote>
<ul>
<li>Question 1.4 - Am I correct that <code>dns</code> i.e. <code>servicename.namespace.svc.cluster.local</code> would map the <code>clusterIP</code>?</li>
</ul>
</blockquote>
<p>Yes, specifically the <code>cluster IP</code> of this <code>Service</code>. More on that can be found <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service" rel="nofollow noreferrer">here</a>.</p>
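<p>You can verify that yourself from inside the cluster with a throwaway pod (the service and namespace names below are just examples):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup webservicename.default.svc.cluster.local
</code></pre>
<p>The lookup should return the cluster IP of the <code>Service</code>.</p>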
<blockquote>
<ul>
<li>Question 1.5 - For which type of applications I might use only <code>clusterIP</code>? Where multiple instances of an application need to
communicate with each other (eg master-slave configuration)?</li>
</ul>
</blockquote>
<p>For something that doesn't need to be exposed externally, like some backend services that are not accessible directly from outside but only through some frontend <code>Pods</code> which process external requests and pass them to the backend afterwards. It may also be used for database pods, which should practically never be accessed directly from outside.</p>
<blockquote>
<p>If I use <code>nodePort</code> then <code>K8s</code> will open a <code>port</code> on each of the
<code>node</code> and will forward <code>nodeIP/nodePort</code> to <code>cluster IP (on pod)/Cluster Port</code></p>
</blockquote>
<p>Yes, in a <code>NodePort</code> Service configuration this destination port exposed by a <code>Pod</code> is called <code>targetPort</code>. Somewhere in between there is also a <code>port</code>, which refers to a port of the <code>Service</code> itself. So the <code>Service</code> has its <code>ClusterIP</code> (different from the backend <code>Pods</code> IPs) and its port, which usually is the same as <code>targetPort</code> (targetPort defaults to the value set for <code>port</code>) but can be set to a different value.</p>
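<p>To make the three ports concrete, a minimal <code>NodePort</code> example (names and port numbers are arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: NodePort
  selector:
    app: my-web
  ports:
    - protocol: TCP
      port: 80          # port of the Service (at its cluster IP)
      targetPort: 8080  # port the backing Pods listen on
      nodePort: 30080   # port opened on every node; picked from 30000-32767 if omitted
</code></pre>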
<blockquote>
<ul>
<li>Question 2 - Can <code>web</code> nodes now access <code>database</code> nodes using <code>nodeIP:nodePort</code> which will route the traffic to <code>database's</code>
<code>clusterIP (on pod):clusterport/targertPort</code>?</li>
</ul>
</blockquote>
<p>I think you've mixed it up a bit. If <code>web</code> is something external to the kubernetes cluster, it might make sense to access <code>Pods</code> deployed on the kubernetes cluster via <code>nodeIP:nodePort</code>, but if it's part of the same kubernetes cluster, it can use a simple <code>ClusterIP</code> <code>Service</code>.</p>
<blockquote>
<p>( I have read that
clusterIP/dns:nodePort will not work).</p>
</blockquote>
<p>From the external world of course it won't work, as <code>Cluster IPs</code> are not accessible from outside; they are internal kubernetes IPs. But from within the cluster? It's perfectly possible. As I said in a different part of my answer, kubernetes nodes also have their cluster IPs and it's perfectly possible to access your app on the <code>nodePort</code> but from within the cluster, i.e. from some other <code>Pod</code>. So using the internal (cluster) IP addresses of the nodes in my example it is also perfectly possible to run:</p>
<pre><code>root@nginx-deployment-85ff79dd56-5lhsk:/# curl http://10.164.0.8:32641
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<blockquote>
<ul>
<li>Question 2.1 - How do I get a <code>node's</code> <code>IP</code>? Is <code>nodeIP</code> the <code>IP</code> I'll get when I run <code>describe pods</code> command?</li>
</ul>
</blockquote>
<p>To check IPs of your nodes run:</p>
<pre><code>kubectl get nodes --output wide
</code></pre>
<p>It will show you both their internal (yes, nodes also have their ClusterIPs!) and external IPs.</p>
<blockquote>
<ul>
<li>Question 2.2 - Is there a <code>dns</code> equivalent for the <code>node IP</code> as <code>node IP</code> could change during failovers. Or does <code>dns</code> now resolve to
the node's IP instead of <code>clusterIP</code>?</li>
</ul>
</blockquote>
<p>No, there isn't. Take a look at <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names" rel="nofollow noreferrer">What things get DNS names?</a></p>
<blockquote>
<ul>
<li>Question 2.3 - I read that <code>K8s</code> will create <code>endpoints</code> for each <code>service</code>. Is <code>endpoint</code> same as <code>node</code> or is it same as <code>pod</code>? If I
run <code>kubectl describe pods</code> or <code>kubectl get endpoints</code>, would I get
same <code>IPs</code>)?</li>
</ul>
</blockquote>
<p>No, <code>endpoints</code> is another type of kubernetes API object / resource.</p>
<pre><code>$ kubectl api-resources | grep endpoints
endpoints ep true Endpoints
</code></pre>
<p>If you run:</p>
<pre><code>kubectl explain endpoints
</code></pre>
<p>you will get its detailed description:</p>
<pre><code>KIND: Endpoints
VERSION: v1
DESCRIPTION:
Endpoints is a collection of endpoints that implement the actual service.
Example: Name: "mysvc", Subsets: [
{
Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}],
Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}]
},
{
Addresses: [{"ip": "10.10.3.3"}],
Ports: [{"name": "a", "port": 93}, {"name": "b", "port": 76}]
},
]
</code></pre>
<p>Usually you don't have to worry about creating the <code>endpoints</code> resource as it is created automatically. So to answer your question: <code>endpoints</code> stores information about <code>Pods</code>' IPs and keeps track of them, as <code>Pods</code> can be destroyed and recreated and their IPs are subject to change. For a <code>Service</code> to keep routing the traffic properly even though the <code>Pods</code>' IPs change, an object like <code>endpoints</code> must exist which keeps track of those IPs.</p>
<p>You can easily check it by yourself. Simply create a deployment consisting of 3 <code>Pods</code> and expose it as a simple <code>ClusterIP</code> <code>Service</code>. Check its <code>endpoints</code> object. Then delete one <code>Pod</code>, verify that its replacement got a different IP, and check the <code>endpoints</code> object again. You can do it by running:</p>
<pre><code>kubectl get ep <endpoints-object-name> -o yaml
</code></pre>
<p>or</p>
<pre><code>kubectl describe ep <endpoints-object-name>
</code></pre>
<p>So basically the different <em>endpoints</em> (as many as there are backend <code>Pods</code> exposed by a certain <code>Service</code>) are the internal (cluster) IP addresses of the <code>Pods</code> exposed by the <code>Service</code>, while the <code>endpoints</code> <strong>object / API resource</strong> is a single kubernetes <strong>resource</strong> that keeps track of those <em>endpoints</em>. I hope this is clear.</p>
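<p>If you want to try this out yourself, a quick sequence of commands (names are just examples) could look like this:</p>
<pre><code>kubectl create deployment nginx-test --image=nginx
kubectl scale deployment nginx-test --replicas=3
kubectl expose deployment nginx-test --port=80
kubectl get endpoints nginx-test   # note the 3 Pod IPs
kubectl delete pod <one-of-the-nginx-test-pods>
kubectl get endpoints nginx-test   # the list now contains the new Pod's IP
</code></pre>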
<blockquote>
<p>As I am on <code>GCP</code>, I can use <code>Loadbalancer</code> for <code>web</code> cluster to get an
external IP. Using the external IP, the client application can access
the <code>web</code> service</p>
<p>I saw this configuration for a <code>LoadBalancer</code></p>
<pre><code>spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
type: LoadBalancer
</code></pre>
<ul>
<li>Question 3 - Is it exposing an external <code>IP</code> and port <code>80</code> to outside world? What would be the value of <code>nodePort</code> in this case?</li>
</ul>
</blockquote>
<p>Yes, under the hood a call to GCP API is made so that external http/https loadbalancer with a public IP is created.</p>
<p>Suppose you have a <code>Deployment</code> called <code>nginx-deployment</code>. If you run:</p>
<pre><code>kubectl expose deployment nginx-deployment --type LoadBalancer
</code></pre>
<p>It will create a new <code>Service</code> of <code>LoadBalancer</code> type. If you then run:</p>
<pre><code>kubectl get svc
</code></pre>
<p>you will see your <code>LoadBalancer</code> <code>Service</code> has both external IP and cluster IP assigned.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment LoadBalancer 10.3.248.43 <some external ip> 80:32641/TCP 102s
</code></pre>
<p>If you run:</p>
<pre><code>$ kubectl get svc nginx-deployment
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment LoadBalancer 10.3.248.43 <some external ip> 80:32641/TCP 👈 16m
</code></pre>
<p>You'll notice that <code>nodePort</code> value for this <code>Service</code> has been also set, in this case to <code>32641</code>. If you want to dive into it even deeper, run:</p>
<pre><code>kubectl get svc nginx-deployment -o yaml
</code></pre>
<p>and you will see it in this section:</p>
<pre><code>...
spec:
clusterIP: 10.3.248.43
externalTrafficPolicy: Cluster
ports:
- nodePort: 32641 👈
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer 👈
...
</code></pre>
<p>As you can see, although the <code>Service</code> type is <code>LoadBalancer</code>, it also has its <code>nodePort</code> value set. And you can test that it works by accessing your <code>Deployment</code> using this port, not on the <code>IP</code> of the <code>LoadBalancer</code> but on the <code>IPs</code> of your nodes. I know it may seem pretty confusing as <code>LoadBalancer</code> and <code>NodePort</code> are two different <code>Service</code> types. The <code>LB</code> needs to distribute the incoming traffic to some backend <code>Pods</code> (e.g. managed by a <code>Deployment</code>) and needs this <code>nodePort</code> value set in its own specification to be able to route the traffic to <code>Pods</code> scheduled on different nodes. I hope this is a bit clearer now.</p>
| mario |
<p>I attempt to build a Pod that runs a service that requires:</p>
<ol>
<li>cluster-internal services to be resolved and accessed by their FQDN (<code>*.cluster.local</code>),</li>
<li>while also have an active OpenVPN connection to a remote cluster and have services from this remote cluster to be resolved and accessed by their FQDN (<code>*.cluster.remote</code>).</li>
</ol>
<p>The service container within the Pod without an OpenVPN sidecar can access all services provided an FQDN using the <code>*.cluster.local</code> namespace. Here is the <code>/etc/resolv.conf</code> in this case:</p>
<pre><code>nameserver 169.254.25.10
search default.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<h4>When OpenVPN sidecar manages <code>resolv.conf</code></h4>
<p>The OpenVPN sidecar is started in the following way:</p>
<pre><code> containers:
{{- if .Values.vpn.enabled }}
- name: vpn
image: "ghcr.io/wfg/openvpn-client"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
volumeMounts:
- name: vpn-working-directory
mountPath: /data/vpn
env:
- name: KILL_SWITCH
value: "off"
- name: VPN_CONFIG_FILE
value: connection.conf
securityContext:
privileged: true
capabilities:
add:
- "NET_ADMIN"
resources:
limits:
cpu: 100m
memory: 80Mi
requests:
cpu: 25m
memory: 20Mi
{{- end }}
</code></pre>
<p><em>and</em> the OpenVPN client configuration contains the following lines:</p>
<pre><code> script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
</code></pre>
<p>Then OpenVPN client will overwrite <code>resolv.conf</code> so that it contains the following:</p>
<pre><code>nameserver 192.168.255.1
options ndots:5
</code></pre>
<p>In this case, any service in <code>*.cluster.remote</code> is resolved, but no services from <code>*.cluster.local</code>. This is expected.</p>
<h4>When OpenVPN sidecar does not manage <code>resolv.conf</code>, but <code>spec.dnsConfig</code> is provided</h4>
<p>Remove the following lines from the OpenVPN client configuration:</p>
<pre><code> script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
</code></pre>
<p>The <code>spec.dnsConfig</code> is provided as:</p>
<pre><code>
dnsConfig:
nameservers:
- 192.168.255.1
searches:
- cluster.remote
</code></pre>
<p>Then, <code>resolv.conf</code> will be the following:</p>
<pre><code>nameserver 192.168.255.1
nameserver 169.254.25.10
search default.cluster.local svc.cluster.local cluster.local cluster.remote
options ndots:5
</code></pre>
<p>This would work for <code>*.cluster.remote</code>, but not for anything <code>*.cluster.local</code>, because the second nameserver is only tried when the first one times out. I noticed that some folks work around this limitation by setting up nameserver rotation and a timeout of 1 second, but this behavior looks very hectic to me; I would not consider it, not even as a workaround. Or maybe I'm missing something. <strong>My first question would be: could rotation and timeout work in this case?</strong></p>
<p>My second question would be: is there any way to make <code>*.cluster.local</code> and <code>*.cluster.remote</code> DNS resolves work reliably from the service container inside the Pod <em>and</em> without using something like <code>dnsmasq</code>?</p>
<p>My third question would be: if <code>dnsmasq</code> is required, how can I configure it, provide it, and overwrite <code>resolv.conf</code>, while also making sure that the Kubernetes-provided nameserver can be anything (<code>169.254.25.10</code> in this case)?</p>
<p>Best,
Zoltán</p>
| Dyin | <p>I solved the problem by running a sidecar DNS server instead, because:</p>
<ul>
<li>it is easier to implement, maintain and understand;</li>
<li>it works without surprises.</li>
</ul>
<p>Here is an example pod with <code>CoreDNS</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: default
spec:
volumes:
- name: config-volume
configMap:
name: foo-config
items:
- key: Corefile
path: Corefile
  dnsPolicy: None # Signals Kubernetes that you want to supply your own DNS - otherwise `/etc/resolv.conf` will be overwritten by Kubernetes and there is then no way to update it.
dnsConfig:
nameservers:
- 127.0.0.1 # This will set the local Core DNS as the DNS resolver. When `dnsPolicy` is set, `dnsConfig` must be provided.
containers:
- name: dns
image: coredns/coredns
env:
- name: LOCAL_DNS
value: 10.233.0.3 # insert local DNS IP address (see kube-dns service ClusterIp)
- name: REMOTE_DNS
value: 192.168.255.1 # insert remote DNS IP address
args:
- '-conf'
- /etc/coredns/Corefile
volumeMounts:
- name: config-volume
readOnly: true
mountPath: /etc/coredns
- name: test
image: debian:buster
command:
- bash
- -c
- apt update && apt install -y dnsutils && cat /dev/stdout
---
apiVersion: v1
kind: ConfigMap
metadata:
name: foo-config
namespace: default
data:
Corefile: |
cluster.local:53 {
errors
health
forward . {$LOCAL_DNS}
cache 30
}
cluster.remote:53 {
errors
health
rewrite stop {
# rewrite cluster.remote to cluster.local and back
name suffix cluster.remote cluster.local answer auto
}
forward . {$REMOTE_DNS}
cache 30
}
</code></pre>
<p>The <code>CoreDNS</code> config above simply forwards <code>cluster.local</code> queries to the local service and <code>cluster.remote</code> - to the remote one. Using it I was able to resolve <code>kubernetes</code> service IP of both clusters:</p>
<pre><code>❯ k exec -it -n default foo -c test -- bash
root@foo:/# dig @localhost kubernetes.default.svc.cluster.local +short
10.100.0.1
root@foo:/# dig @localhost kubernetes.default.svc.cluster.remote +short
10.43.0.1
</code></pre>
<p>Update:</p>
<p>Possibly, the following CoreDNS configuration is sufficient in case you also require access to the internet, since <code>cluster.local</code> resolution is provided by the Kubernetes DNS itself:</p>
<pre><code>.:53 {
errors
health
forward . {$LOCAL_DNS}
cache 30
}
cluster.remote:53 {
errors
health
forward . {$REMOTE_DNS}
cache 30
}
</code></pre>
| anemyte |
<p>When a pod is in CrashLoopBackOff state (for example), are the <strong>requested</strong> resources (CPU and MEMORY) still allocated to this pod?</p>
<p>If you have the answer, please explain how you checked whether they are still allocated to the pod <3</p>
| Bguess | <p>I got the answer, here is the test:</p>
<p>Whatever the number of nodes and the amount of resources they have, I will create a ResourceQuota for the namespace where I will perform the test:</p>
<p><code>resourceQuota.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: test-so
spec:
hard:
cpu: "10m"
memory: 10Mi
pods: "10"
</code></pre>
<p>Let's create a busybox deployment that will CrashLoopBackOff with the maximum resource in the ResourceQuota allocated to it:</p>
<p><code>deployment1.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-so
name: test-so
spec:
replicas: 1
selector:
matchLabels:
app: test-so
template:
metadata:
labels:
app: test-so
spec:
containers:
- image: busybox
name: busybox
resources:
requests:
cpu: 10m
memory: 10Mi
</code></pre>
<p>As expected it's in CrashLoopBackOff state, however it is deployed:</p>
<pre><code>> kubectl get pods -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/test-so-57f76ccb9b-2w5vk 0/1 CrashLoopBackOff 3 (63s ago) 2m23s 10.244.5.2 so-cluster-1-worker2 <none> <none>
</code></pre>
<p>Let's now create a second deployment with the same amount of resources:</p>
<p><code>deployment2.yaml</code>:</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-so2
name: test-so2
spec:
replicas: 1
selector:
matchLabels:
app: test-so2
template:
metadata:
labels:
app: test-so2
spec:
containers:
- image: busybox
name: busybox
resources:
requests:
cpu: 10m
memory: 10Mi
</code></pre>
<p>No pods created and here is the status of the replicaset:</p>
<pre><code>❯ k describe rs test-so2-7dd9c65cbd
Name: test-so2-7dd9c65cbd
Namespace: so-tests
Selector: app=test-so2,pod-template-hash=7dd9c65cbd
Labels: app=test-so2
pod-template-hash=7dd9c65cbd
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/test-so2
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=test-so2
pod-template-hash=7dd9c65cbd
Containers:
busybox:
Image: busybox
Port: <none>
Host Port: <none>
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
ReplicaFailure True FailedCreate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-7x8qm" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-kv9m4" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-7w7wz" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-8gcnp" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-vllqf" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-2jhnb" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-gjtvw" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-qdq44" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 30s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-69rn7" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 11s (x4 over 29s) replicaset-controller (combined from similar events): Error creating: pods "test-so2-7dd9c65cbd-jjjl4" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
</code></pre>
<p>So that means that, in fact, even if a pod is in CrashLoopBackOff state, it still reserves the requested amount of CPU and memory.</p>
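<p>You can also confirm it by looking at the quota usage itself (namespace and names as in my test):</p>
<pre><code>kubectl describe resourcequota test-so -n so-tests
</code></pre>
<p>The <code>Used</code> values show that cpu=10m and memory=10Mi are already consumed, even though the pod never runs successfully.</p>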
<p>We know it now ! hahaha</p>
<p>Have a nice day, bguess</p>
| Bguess |
<p>A few months back, I deployed the Elastic-Search (version - 8.0.1) on Kubernetes (GCP) as a service as External load balancer using this <a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">guide</a>.</p>
<p>Now, I am unable to perform any read or write operation on ElasticSearch. I checked the logs, in which I found that the disk of the node is almost full.</p>
<p><strong>Here are some logs which support this analysis:</strong></p>
<blockquote>
<p>flood stage disk watermark [95%] exceeded on [hulk-es-default-0][/usr/share/elasticsearch/data] free: 18.5mb[1.8%], all indices on this node will be marked read-only</p>
<p>Cluster health status changed from [YELLOW] to [RED] (reason: [shards failed [[1][0]]]).</p>
<p>This node is unhealthy: health check failed on [/usr/share/elasticsearch/data].`</p>
</blockquote>
<p><strong>Here are the errors that are coming when performing any read/write operation:</strong></p>
<blockquote>
<p>elasticsearch.exceptions.TransportError: TransportError(503, 'master_not_discovered_exception', None)</p>
<p>elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPSConnectionPool(host='...', port=****): Read timed out. (read timeout=30))</p>
</blockquote>
<p>I increased the capacity of my elasticsearch persistent volume claim(PVC) but was unable to create the pod with that new volume.</p>
<p>I followed the following steps -</p>
<ul>
<li><p>Set the allowVolumeExpansion field to true in their StorageClass object(s)</p>
</li>
<li><p>Scaled ElasticSearch Operator Deployment to 0 Replicas.</p>
</li>
<li><p>Deleted the statefulset Object without deleting the pods using</p>
<p><code>kubectl delete sts <statefulset-name> --cascade=orphan</code></p>
</li>
</ul>
<p>Before deleting I saved the yaml of the statefulset using</p>
<pre><code>kubectl get sts <statefulset-name> -o yaml
</code></pre>
<ul>
<li>Increased the storage in capacity in the yaml file of PVC.</li>
<li>Recreated the StatefulSet with the new storage request by the yaml file I saved using</li>
</ul>
<p><code>kubectl apply -f file-name.yml</code></p>
<ul>
<li>Scaled back the operator deployment to 1</li>
</ul>
<p>But, when I recreated the stateful set, the <code>CrashLoopBackOff</code> error is shown every time.</p>
<p>Following are some logs -</p>
<ul>
<li>readiness probe failed</li>
<li>Likely root cause: java.io.IOException: No space left on device</li>
<li>using data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb)]], net usable_space [0b], net total_space [975.8mb], types [ext4]</li>
</ul>
<p>The persistent disk that the ES pod is accessing has been increased, but the pod is still unable to start.
Can anyone guide me here? What is the problem?</p>
| Shobit Jain | <p>It seems that, for some reason, the Pod is not seeing the new volume size. Have you tried <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-expansion" rel="nofollow noreferrer">this</a> procedure for GKE volume expansion?</p>
<p>If the Pod is always in the <code>CrashLoopBackOff</code> state you can use e.g. <code>kubectl debug mypod -it --image=busybox</code>. This way you attach a debug container to your Pod and can check what is going on with the mounted volume.</p>
<p>Another thing you can do is create a snapshot/backup of your volume and restore it on a new, bigger volume to see if the issue still persists.</p>
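<p>For the snapshot option, a minimal <code>VolumeSnapshot</code> sketch could look like this (it assumes the CSI snapshot CRDs and a <code>VolumeSnapshotClass</code> are installed in your cluster; all names are placeholders):</p>
<pre><code>apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: es-data-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class
  source:
    persistentVolumeClaimName: <your-elasticsearch-data-pvc>
</code></pre>
<p>You can then restore it into a new, bigger PVC by referencing the snapshot in the PVC's <code>dataSource</code> field.</p>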
| dcardozo |
<p>I've a AKS cluster and I'm trying to resize the PVC used. Actually the PVC has a capacity of 5Gi and I already resized it to 25Gi:</p>
<pre><code>> kubectl describe pv
Name: mypv
Labels: failure-domain.beta.kubernetes.io/region=northeurope
Annotations: pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
Finalizers: [kubernetes.io/pv-protection]
StorageClass: default
Status: Bound
Claim: default/test-pvc
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 25Gi
...
> kubectl describe pvc
Name: test-pvc
Namespace: default
StorageClass: default
Status: Bound
Volume: mypv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 25Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mypod
Events: <none>
</code></pre>
<p>But when I call "df -h" in mypod, it still shows me 5Gi (see /dev/sdc):</p>
<pre><code>/ # df -h
Filesystem Size Used Available Use% Mounted on
overlay 123.9G 22.3G 101.6G 18% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sdb1 123.9G 22.3G 101.6G 18% /dev/termination-log
shm 64.0M 0 64.0M 0% /dev/shm
/dev/sdb1 123.9G 22.3G 101.6G 18% /etc/resolv.conf
/dev/sdb1 123.9G 22.3G 101.6G 18% /etc/hostname
/dev/sdb1 123.9G 22.3G 101.6G 18% /etc/hosts
/dev/sdc 4.9G 4.4G 448.1M 91% /var/lib/mydb
tmpfs 1.9G 12.0K 1.9G 0% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1.9G 0 1.9G 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 1.9G 0 1.9G 0% /proc/scsi
tmpfs 1.9G 0 1.9G 0% /sys/firmware
</code></pre>
<p>I already destroyed my pod and even my deployment but it still show 5Gi. Any idea how I can use the entire 25Gi in my pod?</p>
<p><strong>SOLUTION</strong></p>
<p>Thank you mario for the long response. Unfortunately the aks dasboard already showed me that the disk has 25GB. But calling the following returned 5GB:</p>
<p><code>az disk show --ids /subscriptions/<doesn't matter :-)>/resourceGroups/<doesn't matter :-)>/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-27ee71a5-<doesn't matter> --query "diskSizeGb"</code></p>
<p>So I finally called <code>az disk update --ids <disk-id> --size-gb 25</code>. Now the command above returned 25 and I started my pod again. Since my pod uses Alpine Linux, the filesystem is not resized automatically and I had to do it manually:</p>
<pre><code>/ # apk add e2fsprogs-extra
(1/6) Installing libblkid (2.34-r1)
(2/6) Installing libcom_err (1.45.5-r0)
(3/6) Installing e2fsprogs-libs (1.45.5-r0)
(4/6) Installing libuuid (2.34-r1)
(5/6) Installing e2fsprogs (1.45.5-r0)
(6/6) Installing e2fsprogs-extra (1.45.5-r0)
Executing busybox-1.31.1-r9.trigger
OK: 48 MiB in 31 packages
/ # resize2fs /dev/sdc
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/sdc is mounted on /var/lib/<something :-)>; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 4
The filesystem on /dev/sdc is now 6553600 (4k) blocks long.
</code></pre>
<p>Note: In my pod I temporarily set the privileged mode to true:</p>
<pre><code>...
spec:
containers:
- name: mypod
image: the-image:version
securityContext:
privileged: true
ports:
...
</code></pre>
<p>Otherwise resize2fs fails and says something like "no such device" (sorry, I don't remember the exact error message anymore; I forgot to copy it).</p>
| Nrgyzer | <p>I think <a href="https://github.com/kubernetes/kubernetes/issues/68427" rel="nofollow noreferrer">this</a> GitHub thread should answer your question.</p>
<p>As you can read there:</p>
<blockquote>
<p>... <em>I've tried resizing the persistent volume by adding allowVolumeExpansion: true for the storage class and editing the pvc
to the desired size.</em></p>
</blockquote>
<p>I assume that you've already done the above steps as well.</p>
<p>Reading on, the issue looks exactly as yours:</p>
<blockquote>
<p>After restarting the pod the size of the pvc has changed to the
desired size i.e from 2Ti -> 3Ti</p>
<pre><code>kubectl get pvc
mongo-0 Bound pvc-xxxx 3Ti RWO managed-premium 1h
</code></pre>
<p>but when i login to the pod and do a df -h the disk size still remains
at 2Ti.</p>
<pre><code>kubetl exec -it mongo-0 bash
root@mongo-0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc 2.0T 372M 2.0T 1% /mongodb
</code></pre>
</blockquote>
<p>Now let's take a look at <a href="https://github.com/kubernetes/kubernetes/issues/68427#issuecomment-422547344" rel="nofollow noreferrer">the possible solution</a>:</p>
<blockquote>
<p>I couldn't see any changes in the portal when i update the pvc. I had
to update the disk size in portal first - edit the pvc accordingly and
then deleting the pod made it to work. Thanks</p>
</blockquote>
<p>So please check the size of the disk in <strong>Azure portal</strong> and if you see its size unchanged, this might be the case.</p>
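<p>If you prefer the CLI over the portal, something like this should show the actual disk sizes (the resource group name is a placeholder):</p>
<pre><code>az disk list -g <node-resource-group> --query "[].{name:name, sizeGb:diskSizeGb}" -o table
</code></pre>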
<p>Otherwise make sure you followed the steps mentioned in this <a href="https://github.com/kubernetes/kubernetes/issues/68427#issuecomment-451742005" rel="nofollow noreferrer">comment</a>. However, since you don't get any error message like <code>VolumeResizeFailed</code> when describing your <code>PVC</code>, I believe this is not your case and the volume was properly <code>detached</code> from the node before resizing. So first of all make sure there is no discrepancy between the volume size in the portal and the information you can see by describing your <code>PVC</code>.</p>
| mario |
<p>On macOS there's Docker Desktop which comes with a kubectl, there's the Homebrew kubectl, then there's the gcloud kubectl.</p>
<p>I'm looking to use Minikube for local Kubernetes development and also GKE for production.</p>
<p>Which kubectl should I use? I'm thoroughly confused by all the various versions and how they differ from one another. Does it matter at all other than the version of the binary?</p>
| damd | <p>It doesn't really matter from where you get an executable as long as it is a trusted source. Although you have to use a supported version (<a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl" rel="nofollow noreferrer">documentation</a>):</p>
<blockquote>
<p>kubectl is supported within one minor version (older or newer) of kube-apiserver.</p>
<p>Example:</p>
<p>kube-apiserver is at 1.20
kubectl is supported at 1.21, 1.20, and 1.19</p>
</blockquote>
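<p>To quickly compare the client and server versions on your machine you can simply run:</p>
<pre><code>kubectl version
</code></pre>
<p>and check that the client version is within one minor version of the reported server version.</p>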
| anemyte |
<p>As discovered <a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">here</a>, there is a new kube-proxy mode for services, IPVS, which supports many load-balancing algorithms.</p>
<p>The only problem is I didn't find where those algorithms are specified.</p>
<p>My understanding:</p>
<ol>
<li><code>rr</code>: <strong>round-robin</strong>
<em>-> call backend pod one after another in a loop</em></li>
<li><code>lc</code>: <strong>least connection</strong>
<em>-> group all pod with the lowest number of connection, and send message to it. Which kind of connection? only the ones from this service ?</em></li>
<li><code>dh</code>: <strong>destination hashing</strong>
<em>-> ?something based on url?</em></li>
<li><code>sh:</code> <strong>source hashing</strong>
<em>-> ?something based on url?</em></li>
<li><code>sed</code>: <strong>shortest expected delay</strong>
<em>-> either the backend with less ping or some logic on the time a backend took to respond in the past</em></li>
<li><code>nq</code>: <strong>never queue</strong>
<em>-> same as least connection? but refusing messages at some points ?</em></li>
</ol>
<hr />
<p>If anyone has the documentation link (not provided on the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">official page</a>, which still says IPVS is beta whereas it has been stable since 1.11) or the real algorithm behind all of them, please help.</p>
<p>I tried: Google search with the terms + lookup in the official documentation.</p>
| charlescFR | <p>They are defined in the code
<a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/config/types.go#L193" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/config/types.go#L193</a></p>
<ul>
<li><code>rr</code> <strong>round robin</strong> : distributes jobs equally amongst the available real servers</li>
<li><code>lc</code> <strong>least connection</strong> : assigns more jobs to real servers with fewer active jobs</li>
<li><code>sh</code> <strong>source hashing</strong> : assigns jobs to servers through looking up a statically assigned hash table by their source IP addresses</li>
<li><code>dh</code> <strong>destination hashing</strong> : assigns jobs to servers through looking up a statically assigned hash table by their destination IP addresses</li>
<li><code>sed</code> <strong>shortest expected delay</strong> : assigns an incoming job to the server with the shortest expected delay. The expected delay that the job will experience is (Ci + 1) / Ui if sent to the ith server, in which Ci is the number of jobs on the ith server and Ui is the fixed service rate (weight) of the ith server.</li>
<li><code>nq</code> <strong>never queue</strong> : assigns an incoming job to an idle server if there is, instead of waiting for a fast one; if all the servers are busy, it adopts the ShortestExpectedDelay policy to assign the job.</li>
</ul>
<p>All those come from IPVS official documentation : <a href="http://www.linuxvirtualserver.org/docs/scheduling.html" rel="noreferrer">http://www.linuxvirtualserver.org/docs/scheduling.html</a></p>
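<p>For completeness, the algorithm is chosen in the kube-proxy configuration, not per Service; a minimal sketch of a <code>KubeProxyConfiguration</code> could look like this:</p>
<pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"   # one of: rr, lc, dh, sh, sed, nq (empty defaults to rr)
</code></pre>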
<p>Regards</p>
| charlescFR |
<p>I have a ConfigMap as following:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: health-ip
data:
ip.json: |-
[
1.1.1.1,
2.2.2.2
]
</code></pre>
<p>I want to modify/append or patch a small piece of this configuration by adding ip <code>3.3.3.3</code> to the ConfigMap so that it becomes:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: health-ip
data:
ip.json: |-
[
1.1.1.1,
2.2.2.2,
3.3.3.3
]
</code></pre>
<p>How can it do this using <code>kubectl patch</code> or equivalent?</p>
| maopuppets | <p>There is no way to add without replacing. As mentioned in the comment by <strong>zerkms</strong>, <code>ConfigMaps</code> do not understand structured data.</p>
<p>You have a couple of options to achieve what you want:</p>
<ol>
<li>Keep a "template" file of your configmap, update it and apply it when you need to;</li>
<li>Automate the first task with a script that reads the configmap value and appends the new value.</li>
<li>Use <code>kubectl patch</code> passing the whole ip list (see the example below).</li>
</ol>
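<p>For example, option 3 could look like this (you simply pass the whole new value, including the added IP):</p>
<pre><code>kubectl patch configmap health-ip --type merge -p '{"data":{"ip.json":"[\n  1.1.1.1,\n  2.2.2.2,\n  3.3.3.3\n]"}}'
</code></pre>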
| Mr.KoopaKiller |
<p>I have an application that runs health checks on pods. Given the health check, I am attempting to patch a pod's label selector from being active: true to active: false. The following is the code for the iteration of pods to change each pod's labels.</p>
<pre><code>CoreV1Api corev1Api = new CoreV1Api();
for (V1Pod pod : fetchPodsByNamespaceAndLabel.getItems()) {
String jsonPatchBody = "[{\"op\":\"replace\",\"path\":\"/spec/template/metadata/labels/active\",\"value\":\"true\"}]";
V1Patch patch = new V1Patch(jsonPatchBody);
  corev1Api.patchNamespacedPodCall(pod.getMetadata().getName(), namespace, patch, null, null, null, null, null);
}
</code></pre>
<p>I have adapted the jsonPatchBody from the <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-12/src/main/java/io/kubernetes/client/examples/PatchExample.java" rel="nofollow noreferrer">Patch Example</a> on the Kubernetes documentation section for examples.</p>
<p>The output of the run spits out no errors. The expected behavior is for the <code>active</code> label of these pods to be set to true, but these changes are not reflected. I believe the issue is caused by the syntax of the patch body. Is the above the correct syntax for accessing labels in a pod?</p>
| giande | <p>After researching more of the current implementation, the client provides the <a href="https://github.com/kubernetes-client/java/blob/master/util/src/main/java/io/kubernetes/client/util/PatchUtils.java" rel="nofollow noreferrer">PatchUtils</a> api that allows me to build a type of patch.</p>
<pre><code>CoreV1Api coreV1Api = new CoreV1Api();
String body = "{\"metadata\":{\"labels\":{\"active\":\"true\"}}}";
V1Pod patch =
PatchUtils.patch(
V1Pod.class,
() ->
coreV1Api.patchNamespacedPodCall(
Objects.requireNonNull(pod.getMetadata().getName()),
namespace,
new V1Patch(body),
null,
null,
null,
null,
null),
V1Patch.PATCH_FORMAT_STRATEGIC_MERGE_PATCH,
coreV1Api.getApiClient());
System.out.println("Pod name: " + Objects.requireNonNull(pod.getMetadata()).getName() + "Patched by json-patched: " + body);
</code></pre>
<p>I wanted to ensure that the patch updated the current value of a property in my labels, so I used <code>PATCH_FORMAT_STRATEGIC_MERGE_PATCH</code> from the <code>V1Patch</code> api. I referenced the Kubernetes <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/PatchExample.java" rel="nofollow noreferrer">Patch Example</a> to build the structure of the Patch.</p>
| giande |
<p>I am spinning up a container (pod/Job) from a GKE.</p>
<p>I have set up the appropriate Service Account on the cluster's VMs.</p>
<p>Therefore, when I <strong>manually</strong> perform a <code>curl</code> to a specific CloudRun service endpoint, I can perform the request (and get authorized and have <code>200</code> in my response)</p>
<p>However, when I try to automate this by setting an image to run in a <code>Job</code> as follows, I get <code>401</code></p>
<pre><code> - name: pre-upgrade-job
image: "google/cloud-sdk"
args:
- curl
- -s
- -X
- GET
- -H
- "Authorization: Bearer $(gcloud auth print-identity-token)"
- https://my-cloud-run-endpoint
</code></pre>
<p>Here are the logs on <code>Stackdriver</code></p>
<pre><code>{
httpRequest: {
latency: "0s"
protocol: "HTTP/1.1"
remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b"
requestMethod: "GET"
requestSize: "313"
requestUrl: "https://my-cloud-run-endpoint"
serverIp: "212.45.313.83"
status: 401
userAgent: "curl/7.59.0"
}
insertId: "29jdnc39dhfbfb"
logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests"
receiveTimestamp: "2019-09-26T16:27:30.681513204Z"
resource: {
labels: {
configuration_name: "my-cloud-run-service"
location: "us-east1"
project_id: "my-gcp-project"
revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb"
service_name: "my-cloud-run-service"
}
type: "cloud_run_revision"
}
severity: "WARNING"
textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating"
timestamp: "2019-09-26T16:27:30.673565Z"
}
</code></pre>
<p>My question is: how can I see whether an "Authorization" header actually reaches the endpoint (the logs do not enlighten me much) and, if it does, whether it is rendered correctly when the image command/args are invoked.</p>
| pkaramol | <p>In your job you use the <code>google/cloud-sdk</code> container, which is a from-scratch installation of the <code>gcloud</code> tooling. It's generic, without any customization.</p>
<p>When you call this <code>$(gcloud auth print-identity-token)</code> you ask for the identity token of the service account configured in the <code>gcloud</code> tool.</p>
<p>If we put these 2 paragraphs together: you are trying to generate an identity token from a generic/blank installation of the <code>gcloud</code> tool. In other words, you don't have a service account defined in your <code>gcloud</code>, so your token is empty (like @johnhanley said).</p>
<p>To solve this issue, add an environment variable like this:</p>
<pre><code>env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: <path to your credential.json>
</code></pre>
<p>I don't know where the <code>credential.json</code> of your running environment currently is. Try to <code>echo</code> this env var to find it and pass it correctly to your <code>gcloud</code> job.</p>
<p>If you are on compute engine or similar system compliant with metadata server, you can get a correct token with this command:</p>
<pre><code>curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<URL of your service>"
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Try to run your command through a shell so that the command substitution is actually evaluated. Here is the updated job:</p>
<pre><code> - name: pre-upgrade-job
image: "google/cloud-sdk"
    command: ["bash"]   # Kubernetes uses 'command' here; there is no 'entrypoint' field in a container spec
args:
- -c
- "curl -s -X GET -H \"Authorization: Bearer $(gcloud auth print-identity-token)\" https://my-cloud-run-endpoint"
</code></pre>
<p>Not sure that works. Let me know</p>
| guillaume blaquiere |
<p>I'm trying to customize the behavior of the <code>kube-scheduler</code> on an AKS cluster (kubernetes v1.19.3), as described in <a href="https://kubernetes.io/docs/reference/scheduling/config/" rel="nofollow noreferrer">Scheduler Configuration</a>.</p>
<p>My goal is to use the <code>NodeResourcesMostAllocated</code> plugin in order to schedule the pods using the least number of nodes possible.</p>
<p>Consider the following file - <code>most-allocated-scheduler.yaml</code></p>
<pre><code>apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: most-allocated-scheduler
plugins:
score:
disabled:
- name: NodeResourcesLeastAllocated
enabled:
- name: NodeResourcesMostAllocated
weight: 2
</code></pre>
<p>According to the documentation, I can specify scheduling profiles by running something like:</p>
<pre><code>kube-scheduler --config most-allocated-scheduler.yaml
</code></pre>
<p>But where exactly can I find the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">kube-scheduler</a> in order to run the above command? I'd ideally like to do this in a pipeline. Is it possible to do such a thing when using AKS?</p>
| Rui Jarimba | <p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler" rel="noreferrer">kube-scheduler</a> is a part of <a href="https://kubernetes.io/docs/concepts/overview/components/#control-plane-components" rel="noreferrer">kubernetes control plane</a>. It's components are scheduled on <strong>master node</strong>, to which on managed kubernetes solutions such as <strong>AKS</strong>, <strong>GKE</strong> or <strong>EKS</strong>, you have no access.</p>
<p>This means it's not possible to reconfigure your <code>kube-scheduler</code> on a running <strong>AKS</strong> cluster. Compare with <a href="https://github.com/Azure/AKS/issues/609#issuecomment-414100148" rel="noreferrer">this</a> answer on <strong>AKS's GitHub</strong> page.</p>
<p>However, it is possible to provide custom configuration for your kube-scheduler when creating a new cluster, using <a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/clusterdefinitions.md" rel="noreferrer">cluster definitions</a>, specifically in <a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/clusterdefinitions.md#schedulerconfig" rel="noreferrer">schedulerConfig</a> section:</p>
<blockquote>
<h3>schedulerConfig</h3>
<p><code>schedulerConfig</code> declares runtime configuration for the
kube-scheduler daemon running on all master nodes. Like
<code>kubeletConfig</code>, <code>controllerManagerConfig</code>, and <code>apiServerConfig</code>
it is a generic key/value object, and a child property of
<code>kubernetesConfig</code>. An example custom apiserver config:</p>
<pre><code>"kubernetesConfig": {
"schedulerConfig": {
"--v": "2"
}
}
</code></pre>
<p>See
<a href="https://kubernetes.io/docs/reference/generated/kube-scheduler/" rel="noreferrer">here</a>
for a reference of supported kube-scheduler options.</p>
<p>...</p>
</blockquote>
<p>Keep in mind, however, that not all options are supported. The docs say that e.g. <code>--kubeconfig</code> is not supported, but as you can read <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="noreferrer">here</a>, this flag is deprecated anyway. There is nothing about the <code>--config</code> flag, so you can simply try whether it works.</p>
<p>You can also achieve it by using <a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/clusterdefinitions.md#custom-yaml-for-kubernetes-component-manifests" rel="noreferrer">Custom YAML for Kubernetes component manifests</a>:</p>
<blockquote>
<p>Custom YAML specifications can be configured for kube-scheduler,
kube-controller-manager, cloud-controller-manager and kube-apiserver
in addition to the addons described
<a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/clusterdefinitions.md#addons" rel="noreferrer">above</a>.
You will need to pass in a <em>base64-encoded</em> string of the kubernetes
manifest YAML file to <em>KubernetesComponentConfig["data"]</em> . For
example, to pass a custom kube-scheduler config, do the following:</p>
<pre><code>"kubernetesConfig": {
"schedulerConfig": {
"data" : "<base64-encoded string of your k8s manifest YAML>"
}
}
</code></pre>
<blockquote>
<p><em><strong>NOTE</strong></em>: Custom YAML for addons is an experimental feature. Since <code>Addons.Data</code> allows you to provide your own scripts, you are
responsible for any undesirable consequences of their errors or
failures. Use at your own risk.</p>
</blockquote>
</blockquote>
<p>So as you can see, even in managed kubernetes solutions such as <strong>AKS</strong>, <code>kube-scheduler</code> can be customized to a certain extent, but only when you create a new cluster.</p>
| mario |
<p>I'm trying to secure java applications on kubernetes.</p>
<p>For a simple Springboot app with permitAll, I chose openresty (nginx) with lua-resty-openidc as a reverse proxy.</p>
<p>One example that illustrates mostly what I'm trying to do : <a href="https://medium.com/@lukas.eichler/securing-pods-with-sidecar-proxies-d84f8d34be3e" rel="nofollow noreferrer">https://medium.com/@lukas.eichler/securing-pods-with-sidecar-proxies-d84f8d34be3e</a></p>
<p>It "works" in localhost, but not on kubernetes.</p>
<p>Here's my nginx.conf :</p>
<pre><code>worker_processes 1;
error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log debug;
events {
worker_connections 1024;
}
http {
lua_package_path '~/lua/?.lua;;';
resolver ${dns.ip};
lua_ssl_trusted_certificate /ssl/certs/chain.pem;
lua_ssl_verify_depth 5;
lua_shared_dict discovery 1m;
lua_shared_dict jwks 1m;
lua_shared_dict introspection 10m;
lua_shared_dict sessions 10m;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
server_name localhost;
listen 80;
location /OAuth2Client {
access_by_lua_block {
local opts = {
discovery = "${openam-provider}/.well-known/openid-configuration",
redirect_uri = "http://localhost:8080/OAuth2Client/authorization-code/callback",
client_id = "myClientId",
client_secret = "myClientSecret",
scope = "openid profile email",
}
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 500
ngx.say(err)
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
ngx.req.set_header("Authorization", "Bearer " .. res.access_token)
ngx.req.set_header("X-USER", res.id_token.sub)
}
proxy_pass http://localhost:8080/OAuth2Client;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
</code></pre>
<p>So locally, as my nginx and my Springboot app are both running on localhost, the redirections work.</p>
<p>Now, when I deploy it on kubernetes with the manifest below, the browser doesn't map localhost to the internal container IP.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: oauth2-client-deployment
spec:
selector:
matchLabels:
app: OAuth2Client
replicas: 2
template:
metadata:
labels:
app: OAuth2Client
spec:
#hostAliases:
#- ip: "127.0.0.1"
# hostnames:
# - "oauth2-client.local"
containers:
- name: oauth2-client-container
image: repo/oauth2-client-springboot:latest
env:
- name: SPRING_PROFILES_ACTIVE
value: dev
envFrom:
- secretRef:
name: openam-client-secret
- secretRef:
name: keystore-java-opts
volumeMounts:
- name: oauth2-client-keystore
mountPath: "/projected"
readOnly: true
ports:
- containerPort: 8080
- name: oauth2-sidecar
image: repo/oauth2-sidecar:latest
ports:
- containerPort: 80
volumes:
- name: oauth2-client-keystore
projected:
sources:
- secret:
name: keystore-secret
items:
- key: keystore.jks
path: keystore.jks
- secret:
name: truststore-secret
items:
- key: truststore.jks
path: truststore.jks
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: oauth2-client-service-sidecar
spec:
selector:
app: OAuth2Client
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<p>So how could I map this localhost? I don't want the app container to be exposed as there's no security on it; that's why I used nginx as a sidecar and the service only targets it. How do I tell nginx which <code>redirect_uri</code> to use and where to <code>proxy_pass</code>, given the app container's IP?</p>
<p>And a subsidiary question: as nginx doesn't accept env variables, how can I make it generic, so that apps could provide their own redirect_uri to be used in nginx.conf?</p>
<p>Another subsidiary question: the command <code>ngx.req.set_header("Authorization", "Bearer " .. res.access_token)</code> doesn't seem to work, as I don't see any Authorization header in the request reaching my app...</p>
| Aramsham | <p>Configure your service with type <code>ClusterIP</code> to be reachable <strong><em>only</em></strong> internally, then use the FQDN to reach the service without depending on its IP.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: oauth2-client-service-sidecar
spec:
selector:
app: OAuth2Client
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
</code></pre>
<p>Then use <code>oauth2-client-service-sidecar.<namespace>.cluster.local</code> in your nginx configuration to reach the service:</p>
<pre><code>proxy_pass http://oauth2-client-service-sidecar.<namespace>.cluster.local/OAuth2Client;
</code></pre>
| Mr.KoopaKiller |
<p>I created configmap this way.</p>
<pre><code>kubectl create configmap some-config --from-literal=key4=value1
</code></pre>
<p>After that I created a pod which looks like this:</p>
<p>.<a href="https://i.stack.imgur.com/rhFYE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rhFYE.png" alt="enter image description here" /></a></p>
<p>I connect to this pod this way</p>
<pre><code>k exec -it nginx-configmap -- /bin/sh
</code></pre>
<p>I found the folder <code>/some/path</code> but I could not get the value of <code>key4</code>.</p>
| O.Man | <p>If you refer to your <code>ConfigMap</code> in your <code>Pod</code> this way:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: config-volume
volumes:
- name: config-volume
configMap:
name: some-config
</code></pre>
<p>it will be available in your <code>Pod</code> as a file <code>/var/www/html/key4</code> with the content of <code>value1</code>.</p>
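<p>You can quickly verify it without opening an interactive shell:</p>
<pre><code>kubectl exec mypod -- cat /var/www/html/key4
value1
</code></pre>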
<p>If you rather want it to be available as an <strong>environment variable</strong> you need to refer to it this way:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
envFrom:
- configMapRef:
name: some-config
</code></pre>
<p>As you can see you don't need for it any volumes and volume mounts.</p>
<p>Once you connect to such <code>Pod</code> by running:</p>
<pre><code>kubectl exec -ti mypod -- /bin/bash
</code></pre>
<p>You will see that your environment variable is defined:</p>
<pre><code>root@mypod:/# echo $key4
value1
</code></pre>
| mario |
<p>I am learning K8s and I have created an nginx pod using the yaml below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: apache-manual
spec:
containers:
- image: ewoutp/docker-nginx-curl
name: apache-manual
ports:
- containerPort: 8080
protocol: TCP
</code></pre>
<p>I run this pod using the command <strong>microk8s.kubectl create -f k8s-apache.yaml</strong>. Now if I describe my pod it looks like this:</p>
<pre><code>Name: apache-manual
Namespace: default
Priority: 0
Node: itsupport-thinkpad-l490/192.168.0.190
Start Time: Wed, 22 Sep 2021 15:30:15 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.1.7.69
IPs:
IP: 10.1.7.69
Containers:
apache-manual:
Container ID: containerd://b7f1e7c2779076b786c001de2743d53f8c44214a1f3f98a21a77321f036138bf
Image: ewoutp/docker-nginx-curl
Image ID: sha256:806865143d5c6177c9fad6f146cc2e01085f367043f78aa16f226da5c09217b2
// ##### PORT it's taking is 8080 here but when I am hitting curl on 8080 it's failing but working on port 80 #########
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 22 Sep 2021 15:30:20 +0530
Ready: True
Restart Count: 0
Environment: <none>
// other stuff
</code></pre>
<p>Now if I execute <strong>microk8s.kubectl exec -it apache-manual -- curl http://localhost:8080</strong> it gives me connection refused, while <strong>microk8s.kubectl exec -it apache-manual -- curl http://localhost:80</strong> works fine. I am not able to understand why nginx is running on the default port, i.e. 80, and not on the port I have specified in the above yaml, i.e. <strong>- containerPort: 8080</strong>.</p>
<p>Am I missing something?</p>
| Kumar-Sandeep | <p>The NGINX Docker image listens for HTTP connections on port 80 by default. <code>containerPort</code> is mostly informational: it documents the port the container exposes, but it does not change the port your application actually listens on.</p>
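<p>If you actually want the container to serve on 8080, you have to change the nginx configuration itself, not just <code>containerPort</code>. A minimal sketch, assuming the image loads server configs from <code>/etc/nginx/conf.d</code> like the official nginx image does (the image in the question may differ):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-8080-conf
data:
  default.conf: |
    server {
      listen 8080;
      location / {
        root /usr/share/nginx/html;
      }
    }
</code></pre>
<p>Mount it over <code>/etc/nginx/conf.d</code> with a <code>volumeMount</code> in your pod spec; then <code>containerPort: 8080</code> will match the port nginx actually listens on.</p>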
| coldbreathe |
<p>Lately I have a problem with Pods in k8s. The k8s node hangs because of too many TCP connections, but I don't know which container causes it. I want to know how to monitor the number of TCP connections per pod. Thanks.</p>
| Blue ocean | <p>In <strong>kubernetes</strong> there is no such built-in metric. But you can get it from kubernetes addons, namely Service Meshes: for example the <a href="https://istio.io/latest/docs/concepts/what-is-istio/" rel="nofollow noreferrer">Istio</a> <em>Client/Server Telemetry Reporting</em> feature provides <code>istio_tcp_connections_opened_total</code>, which can be exported to <strong>Prometheus</strong>.</p>
| mario |
<pre><code>kubectl set image deployment/$DEPLOYMENT_EXTENSION $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
</code></pre>
<p>I use this command to load a newly created image into my existing cluster (to update the version of my app). But when I do it and then go to the site, I don't see any changes.</p>
<pre><code>spec:
terminationGracePeriodSeconds: 30
containers:
- name: booknotes
image: gcr.io/my-image:latest
imagePullPolicy: Always
</code></pre>
<p>I've also added these 2 lines to the deployment.yaml file and applied it to my cluster:</p>
<pre><code>imagePullPolicy: Always
terminationGracePeriodSeconds: 30
</code></pre>
<p>But it still doesn't work. Can it be because I use the <code>:latest</code> tag? Or is it unrelated? If you have some ideas please let me know. Also, if you need additional info, I will attach it!</p>
<p><strong>gitlab-ci.yml</strong></p>
<pre><code> stages:
- build
- docker-push
- deploy
cache:
paths:
- node_modules/
build:
stage: build
image: node:latest
script:
- yarn install
- npm run build
artifacts:
paths:
- dist/
only:
- master
docker:
stage: docker-push
image: docker:18.09.7
services:
- docker:18.09.7-dind
- google/cloud-sdk:latest
script:
- echo $GCP_ACCESS_JSON > $CI_PIPELINE_ID.json
- cat $CI_PIPELINE_ID.json | docker login -u _json_key --password-stdin $GCP_REGION
- docker build -t gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest .
- docker push gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
only:
- master
test:
stage: deploy
image: google/cloud-sdk:latest
script:
- echo $GCP_ACCESS_JSON > $CI_PIPELINE_ID.json
- gcloud auth activate-service-account $GCP_CE_PROJECT_EMAIL --key-file $CI_PIPELINE_ID.json --project $GCP_PROJECT_ID
- gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT_NAME
- kubectl set image deployment/$DEPLOYMENT_EXTENSION $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
only:
- master
</code></pre>
| Andrey Radkevich | <p>This question, the symptoms and the reasons are very close to <a href="https://stackoverflow.com/questions/58531740/google-cloud-platform-creating-a-pipeline-with-kubernetes-and-replacing-the-same/58543316#58543316">this one</a> (Google cloud platform creating a pipeline with Kubernetes and replacing the same container).</p>
<p>Apply the same solution in your Gitlab-CI pipeline: use a variable to change the image tag so that a new, unique tag is deployed every time, which forces Kubernetes to pull the new image and roll out the update (see the sketch below).</p>
<p>I can help you with Gitlab-CI if you have difficulties doing this.</p>
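<p>A sketch of how that could look in the <code>docker</code> and <code>deploy</code> jobs of your <code>.gitlab-ci.yml</code>, using the predefined <code>CI_COMMIT_SHORT_SHA</code> variable as the image tag:</p>
<pre><code>- docker build -t gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:$CI_COMMIT_SHORT_SHA .
- docker push gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:$CI_COMMIT_SHORT_SHA
- kubectl set image deployment/$DEPLOYMENT_EXTENSION $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:$CI_COMMIT_SHORT_SHA
</code></pre>
<p>Because the tag changes on every commit, <code>kubectl set image</code> actually modifies the Deployment spec and triggers a rolling update.</p>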
| guillaume blaquiere |
<p>I ran <code>minikube start --vm=true</code> which output:</p>
<pre><code>😄 minikube v1.12.2 on Darwin 10.15.5
✨ Using the docker driver based on existing profile
❗ Your system has 16384MB memory but Docker has only 1991MB. For a better performance increase to at least 3GB.
Docker for Desktop > Settings > Resources > Memory
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: dashboard, default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
</code></pre>
<p>And then this <code>minikube addons enable ingress</code> which got me this error:</p>
<pre><code>💡 Due to docker networking limitations on darwin, ingress addon is not supported for this driver.
Alternatively to use this addon you can use a vm-based driver:
'minikube start --vm=true'
To track the update on this work in progress feature please check:
https://github.com/kubernetes/minikube/issues/7332
</code></pre>
<p>But I ran minikube with that specific flag - any suggestions?</p>
| Sticky | <p>It looks like your <strong>Minikube</strong> is not running as a VM. Actually it still uses <strong>Docker</strong> driver. Just take a closer look at the output, where <code>Docker</code> is mentioned a few times:</p>
<pre><code>✨ Using the docker driver based on existing profile
❗ Your system has 16384MB memory but Docker has only 1991MB. For a better performance increase to at least 3GB.
Docker for Desktop > Settings > Resources > Memory
</code></pre>
<p>Where the key point is <em>"based on existing profile"</em></p>
<p>and here:</p>
<pre><code>🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
</code></pre>
<p>Although you're trying to start your <strong>Minikube</strong> with <code>--vm=true</code> option, it's apparently ignored and your default settings are used.</p>
<p>Most probably it happens because you ran it the first time with the <code>--driver=docker</code> option (either explicitly or implicitly) and that choice has been saved in your <strong>Minikube</strong> profile. To fix this you will probably need to remove your <strong>Minikube</strong> instance and then start it again with the <code>--vm=true</code> option. You can be even more specific and choose the exact hypervisor by providing the <code>--driver=hyperkit</code> option.</p>
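<p>You can check which driver is currently saved in your profile with:</p>
<pre><code>minikube profile list
</code></pre>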
<p>So, simply try to start your <strong>Minikube</strong> this way:</p>
<pre><code>minikube start --vm=true --driver=hyperkit
</code></pre>
<p>If this doesn't help and you'll see again the same output, mentioning that it is using <code>docker</code> driver all the time, run:</p>
<pre><code>minikube stop && minikube delete && minikube start --vm=true --driver=hyperkit
</code></pre>
<p>This should resolve your issue. Once it starts using <strong>HyperKit</strong> hypervisor, you should be able to run <code>minikube addons enable ingress</code> without any errors.</p>
| mario |
<p>I am trying to create a multi-node Elasticsearch cluster. Which kind of service should I use to create the cluster on Kubernetes? I was able to do it within a single node using a headless service for the internal communication between the ES instances, but the same is not working in the multi-node case. Also, which IP and port do I have to put in <code>discovery.zen.ping.unicast.hosts</code> in the elasticsearch.yml file on the worker node so that it can reach the master node?</p>
<p>deployment.yml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch-deployment
spec:
selector:
matchLabels:
app: elasticsearch
replicas: 2
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: sandeepp163/elasticsearch:latest
volumeMounts:
- mountPath: /usr/share/elasticsearch/config/
name: config
- mountPath: /var/logs/elasticsearch/
name: logs
volumes:
- name: config
hostPath:
path: "/etc/elasticsearch/"
- name: logs
hostPath:
path: "/var/logs/elasticsearch"
</code></pre>
<p>internal-communication service config</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch-cluster
spec:
clusterIP: None
selector:
app: elasticsearch
ports:
- name: transport
port: 9300
targetPort: 9300
</code></pre>
<p>external service config</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: load-service
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
type: NodePort
ports:
- nodePort: 31234
port: 9200
targetPort: 9200
</code></pre>
<p>The error I'm getting on the worker node:</p>
<pre><code>[2020-02-26T05:29:02,297][WARN ][o.e.d.z.ZenDiscovery ] [worker] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
</code></pre>
<p>elasticsearch.yml file in worker</p>
<pre><code>cluster.name: xxx
node.name: worker
node.master: false
node.data: true
node.ingest: false
discovery.zen.ping.unicast.hosts: ["192.168.9.0"]
discovery.zen.minimum_master_nodes: 1
</code></pre>
<p>elasticsearch.yml in master</p>
<pre><code>cluster.name: xxx
node.name: master
node.master: true
node.data: false
node.ingest: false
</code></pre>
<p>Thanks</p>
| sandeep P | <p>You could simply deploy Elasticsearch in a statefulset using HELM.</p>
<p><strong>1. Installing HELM:</strong></p>
<p>If you are using Linux, type: <code>curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash</code></p>
<p>If not, see <a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">here</a> for the installation process for your OS.</p>
<p><strong>2. Add the stable repository to Helm, and upgrade them:</strong></p>
<pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
</code></pre>
<p><strong>3. Install Elasticsearch Helm chart</strong></p>
<p>Now you're able to install the <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch" rel="nofollow noreferrer">Elasticsearch chart</a>, type:</p>
<pre><code>helm install stable/elasticsearch --generate-name
</code></pre>
<p>Wait for the installation; you can check it using <code>kubectl get pods -l app=elasticsearch</code></p>
<p>To access it you can use <code>port-forward</code> on the service name:</p>
<pre><code>ES_SVC=$(kubectl get svc -owide -l "app=elasticsearch" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward svc/$ES_SVC 9200:9200
</code></pre>
<p><strong>4. Access the service:</strong></p>
<p>To access the service go to <a href="http://127.0.0.1:9200" rel="nofollow noreferrer">http://127.0.0.1:9200</a> from your browser.</p>
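<p>To confirm that the Elasticsearch cluster actually formed (and not just that the pods are running), you can also query the cluster health API through the same port-forward; this is just a sketch assuming the chart defaults:</p>
<pre><code>curl -s "http://127.0.0.1:9200/_cluster/health?pretty"
</code></pre>
<p>A <code>status</code> of <code>green</code> (or <code>yellow</code> for a single data node) and the expected <code>number_of_nodes</code> indicate that the nodes discovered each other.</p>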
<p>Hope that helps.</p>
| Mr.KoopaKiller |
<p>According to <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs" rel="nofollow noreferrer">this page</a>, it appears Google Kubernetes can make a Google managed SSL certificate if you're using LoadBalancer. That's what I want to use.</p>
<p>However, I used <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">this page</a> to set up an Ingress for my custom domain.</p>
<p>So right now, I have an Ingress and I can access my cluster using my custom domain just fine, but how do I add HTTPS to it? My suspicion is that Ingress also makes a LoadBalancer, but I can't figure out how to modify it according to the first link.</p>
| under_the_sea_salad | <blockquote>
<p>My suspicion is that Ingress also makes a LoadBalancer, but I can't
figure out how to modify it according to the first link.</p>
</blockquote>
<p>You're right. When you create an <strong>ingress</strong> object, a <strong>load balancer</strong> is created automatically behind the scenes. It's even mentioned <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>If you choose to expose your application using an
<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">Ingress</a>,
which creates an HTTP(S) Load Balancer, you must <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static" rel="nofollow noreferrer">reserve a global
static IP
address</a>.</p>
</blockquote>
<p>You can even list it in your <strong>Google Cloud Console</strong> by going to <code>Navigation menu</code> -> <code>Networking</code> -> <code>Network services</code> -> <code>Load balancing</code>.</p>
<p>The easiest way to edit it is by clicking 3 dots next to it and then <code>Edit</code>:</p>
<p><a href="https://i.stack.imgur.com/fhxtT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fhxtT.png" alt="enter image description here" /></a></p>
<p>But rather than editing it manually you need to modify your <code>Ingress</code> resource.</p>
<p>Suppose you have followed the steps outlined <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#step_2b_using_an_ingress" rel="nofollow noreferrer">here</a> and everything works as expected, but only via <strong>http</strong>, which is also expected as you have not configured <strong>SSL Certificate</strong> with your ingress so far and the <strong>Load Balancer</strong> it uses behind the scenes is also configured to work with http only.</p>
<p>If you followed <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs" rel="nofollow noreferrer">the guide you mentioned</a> and have already configured <strong>Google-managed SSL certificate</strong>, you only need to update your <strong>ingress</strong> resource configuration by adding <code>networking.gke.io/managed-certificates: certificate-name</code> annotation as @ldg suggested in his answer.</p>
<p>If you didn't configure your SSL certificate, you can do it from kubernetes level by applying the following yaml manifest as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">here</a>:</p>
<pre><code>apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
name: example-cert
spec:
domains:
- example.com
</code></pre>
<p>Save it as file <code>example-cert.yaml</code> and then run:</p>
<pre><code>kubectl apply -f example-cert.yaml
</code></pre>
<p>Once it is created you can re-apply your <strong>ingress</strong> configuration from the same yaml manifest as before with the mentioned annotation added.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helloweb
annotations:
kubernetes.io/ingress.global-static-ip-name: helloweb-ip
networking.gke.io/managed-certificates: example-cert ### 👈
labels:
app: hello
spec:
backend:
serviceName: helloweb-backend
servicePort: 8080
</code></pre>
<p>If for some reason you want to get the ingress you've deployed based on your running configuration, you can run:</p>
<pre><code>kubectl get ingress helloweb -o yaml > ingress.yaml
</code></pre>
<p>then you can edit the <code>ingress.yaml</code> file and re-apply it again.</p>
<p>After adding the annotation, go again to <code>Navigation menu</code> -> <code>Networking</code> -> <code>Network services</code> -> <code>Load balancing</code> in your <strong>Google Cloud Console</strong> and you'll notice that the <strong>protocol</strong> of the <strong>load balancer</strong> associated with the ingress has changed from <code>HTTP</code> to <code>HTTP(S)</code>. If the certificate is valid, you should be able to access your website using your custom domain via HTTPS.</p>
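<p>You can also watch the provisioning status of the certificate from <code>kubectl</code> (assuming the ManagedCertificate name used above and that the GKE CRD is available):</p>
<pre><code>kubectl describe managedcertificate example-cert
</code></pre>
<p>Once the certificate status reported there changes to <code>Active</code> (which can take a while), HTTPS should work on your domain.</p>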
| mario |
<p>I know that I can use the <code>kubectl get componentstatus</code>
command to check the health status of the k8s cluster, but somehow the output I am receiving does not show the health. Below is the output from the master server.</p>
<p><a href="https://i.stack.imgur.com/7Td5F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Td5F.png" alt="enter image description here"></a></p>
<p>I can do deployments, can create pods and services which means everything is working fine but not sure how to check the health status.</p>
| thinkingmonster | <p><strong>Solved</strong> in kube-apiserver v1.17.0; for older apiserver versions you can use the command below.</p>
<pre><code>kubectl get cs -o=go-template='{{printf "NAME\t\t\tHEALTH_STATUS\tMESSAGE\t\n"}}{{range .items}}{{$name := .metadata.name}}{{range .conditions}}{{printf "%-24s%-16s%-20s\n" $name .status .message}}{{end}}{{end}}'
</code></pre>
<p>enjoy</p>
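<p>On newer clusters (roughly v1.16+), you can also query the API server health endpoints directly instead of relying on <code>componentstatus</code>; a quick check could look like this:</p>
<pre><code>kubectl get --raw='/readyz?verbose'
</code></pre>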
| Mohammad Ravanbakhsh |
<p>I am struggling to replace an existing container with a container from my container registry on Google Cloud Platform.</p>
<p>This is my cloudbuild.yaml file.</p>
<p>steps:</p>
<pre><code> # This steps clone the repository into GCP
- name: gcr.io/cloud-builders/git
args: ['clone', 'https:///user/:[email protected]/PatrickVibild/scrappercontroller']
# This step runs the unit tests on the src
- name: 'docker.io/library/python:3.7'
id: Test
entrypoint: /bin/sh
args:
- -c
- 'pip install -r requirements.txt && python -m pytest src/tests/**'
#This step creates a container and leave it on CloudBuilds repository.
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller', '.']
#Adds the container to Google container registry as an artefact
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller']
#Uses the container and replaces the existing one in Kubernetes
- name: 'gcr.io/cloud-builders/kubectl'
args: ['set', 'image', 'deployment/scrappercontroller', 'scrappercontroller-sha256=gcr.io/abiding-robot-255320/scrappercontroller:latest']
env:
- 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
- 'CLOUDSDK_CONTAINER_CLUSTER=scrapper-admin'
</code></pre>
<p>I have no issues building my project and all the steps are green. I might be missing something in the last step, but I can't find a way to replace the container in my cluster with a newer version of my code.</p>
<p>I can create a new workload inside my existing cluster manually using the GUI and selecting a container from my container registry, but from there the step of replacing that workload's container with my new version from the cloud fails.</p>
| Patrick Vibild | <p>It's a common pitfall. According to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Note: A Deployment’s rollout is triggered if and only if the Deployment’s Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.</p>
</blockquote>
<p>Your issue comes from the fact that the tag of your image doesn't change: <code>:latest</code> is deployed and you ask to deploy <code>:latest</code> again. No image name change, no rollout.</p>
<p>To change this, I propose you use <a href="https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values#using_default_substitutions" rel="nofollow noreferrer">substitution variables</a>, especially <code>COMMIT_SHA</code> or <code>SHORT_SHA</code>. You can note this in the documentation:</p>
<blockquote>
<p>only available for triggered builds</p>
</blockquote>
<p>This means that this variable is only populated when the build is triggered automatically, not manually.</p>
<p>For a manual run, you have to specify your own variable, like this:</p>
<pre><code>gcloud builds submit --substitutions=COMMIT_SHA=<what you want>
</code></pre>
<p>And update your build script like this:</p>
<pre><code> # This steps clone the repository into GCP
- name: gcr.io/cloud-builders/git
args: ['clone', 'https:///user/:[email protected]/PatrickVibild/scrappercontroller']
# This step runs the unit tests on the src
- name: 'docker.io/library/python:3.7'
id: Test
entrypoint: /bin/sh
args:
- -c
- 'pip install -r requirements.txt && python -m pytest src/tests/**'
#This step creates a container and leave it on CloudBuilds repository.
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA', '.']
#Adds the container to Google container registry as an artefact
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA']
#Uses the container and replaces the existing one in Kubernetes
- name: 'gcr.io/cloud-builders/kubectl'
   args: ['set', 'image', 'deployment/scrappercontroller', 'scrappercontroller-sha256=gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA']
env:
- 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
- 'CLOUDSDK_CONTAINER_CLUSTER=scrapper-admin'
</code></pre>
<p>And during the deployment, you should see this line:</p>
<pre><code>Step #2: Running: kubectl set image deployment.apps/test-deploy go111=gcr.io/<projectID>/postip:<what you want>
Step #2: deployment.apps/test-deploy image updated
</code></pre>
<p>If you don't see it, it means that your rollout has not been taken into account.</p>
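<p>As a sanity check after the build (the deployment and container names below are taken from the example above), you can confirm that the new image was actually rolled out:</p>
<pre><code>kubectl rollout status deployment/scrappercontroller
kubectl get deployment scrappercontroller \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
</code></pre>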
| guillaume blaquiere |
<p>I have an Azure Kubernetes Cluster with 4 nodes (Linux boxes). I provisioned the AKS cluster using yaml manifests. I want to update the following kernel parameters: net.ipv4.tcp_fin_timeout=30, net.ipv4.ip_local_port_range=1024 65500. The yaml manifest is below. How do I update the yaml to include the kernel parameters that I have to change?</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: jmeter-slaves
labels:
jmeter_mode: slave
spec:
replicas: 1
selector:
matchLabels:
jmeter_mode: slave
template:
metadata:
labels:
jmeter_mode: slave
spec:
securityContext:
sysctls:
- name: net.ipv4.ip_local_port_range
value: "1024 65500"
containers:
- name: jmslave
image: prabhaharanv/jmeter-slave:latest
command: ["/jmeter/apache-jmeter-$(JMETERVERSION)/bin/jmeter-server"]
args: ["-Dserver.rmi.ssl.keystore.file /jmeter/apache-jmeter-$(JMETERVERSION)/bin/rmi_keystore.jks","-Djava.rmi.server.hostname=$(MY_POD_IP)", "-Dserver.rmi.localport=50000", "-Dserver_port=1099"]
resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
imagePullPolicy: Always
ports:
- containerPort: 1099
- containerPort: 50000
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: JMETERVERSION
value: "5.1.1"
---
apiVersion: v1
kind: Service
metadata:
name: jmeter-slaves-svc
labels:
jmeter_mode: slave
spec:
clusterIP: None
ports:
- port: 1099
name: first
targetPort: 1099
- port: 50000
name: second
targetPort: 50000
selector:
jmeter_mode: slave
---
</code></pre>
| prabhaharan | <p>Take a look at <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">this</a> official kubernetes documentation section. There is all the information you need to set the mentioned <strong>kernel parameters</strong> or using different terminology - <strong>sysctls</strong>, in your <code>Pod</code>.</p>
<p>Note that there are so called <strong>safe</strong> and <strong>unsafe</strong> sysctls.</p>
<p>As to setting <code>net.ipv4.ip_local_port_range=1024 65500</code>, it is considered a <em>safe one</em> and you can set it for your <code>Pod</code> using a <code>securityContext</code> like below, without the need to reconfigure the <strong>kubelet</strong> on your node:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: sysctl-example
spec:
securityContext:
sysctls:
- name: net.ipv4.ip_local_port_range
value: "1024 65500"
- name: net.ipv4.tcp_fin_timeout
value: "30"
...
</code></pre>
<p>However, if you also try to set <code>net.ipv4.tcp_fin_timeout</code> this way, you'll see plenty of failed attempts to create a <code>Pod</code> with the status <code>SysctlForbidden</code>:</p>
<pre><code>kubectl get pods
...
nginx-deployment-668d699fd8-zlvdm 0/1 SysctlForbidden 0 31s
nginx-deployment-668d699fd8-ztzpr 0/1 SysctlForbidden 0 58s
nginx-deployment-668d699fd8-zx4vq 0/1 SysctlForbidden 0 24s
...
</code></pre>
<p>It happens because <code>net.ipv4.tcp_fin_timeout</code> is an <em>unsafe sysctl</em> which needs to be explicitly allowed on the node level by reconfiguring your <strong>kubelet</strong>.</p>
<p>To allow it you need to edit your <strong>kubelet</strong> configuration, specifically add one more option to those with which it is already started. You will typically find those options in file <code>/etc/default/kubelet</code>. You simply need to add one more:</p>
<pre><code>--allowed-unsafe-sysctls 'net.ipv4.tcp_fin_timeout'
</code></pre>
<p>and restart your <strong>kubelet</strong>:</p>
<pre><code>systemctl restart kubelet.service
</code></pre>
<p>Once <code>net.ipv4.tcp_fin_timeout</code> is allowed on the node level, you can set it the same way as any <em>safe sysctl</em>, i.e. via <code>securityContext</code> in your <code>Pod</code> specification.</p>
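<p>To verify that both values were actually applied inside the container, you can read them back from <code>/proc</code> (using the example <code>Pod</code> name from above):</p>
<pre><code>kubectl exec sysctl-example -- cat /proc/sys/net/ipv4/tcp_fin_timeout
kubectl exec sysctl-example -- cat /proc/sys/net/ipv4/ip_local_port_range
</code></pre>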
| mario |
<p>In my deployment, I would like to use a Persistent Volume Claim in combination with a config map mount. For example, I'd like the following:</p>
<pre><code>volumeMounts:
- name: py-js-storage
mountPath: /home/python
- name: my-config
mountPath: /home/python/my-config.properties
subPath: my-config.properties
readOnly: true
...
volumes:
- name: py-storage
{{- if .Values.py.persistence.enabled }}
persistentVolumeClaim:
claimName: python-storage
{{- else }}
emptyDir: {}
{{- end }}
</code></pre>
<p>Is this a possible and viable way to go? Is there any better way to approach such situation? </p>
| LoreV | <p>Since you didn't give your use case, my answer will be based on whether it is possible or not. In fact: <strong>Yes, it is.</strong></p>
<p>I'm supposing you wish to mount a file from a <code>configMap</code> at a mount point that already contains other files, and your approach of using <code>subPath</code> is correct!</p>
<p>When you need to mount different volumes on the same path, you need to specify <code>subPath</code> or the content of the original dir will be <strong>hidden</strong>.</p>
<p>In other words, if you want to keep both files (from the mount point <strong>and</strong> from the configMap) you <strong>must</strong> use <code>subPath</code>.</p>
<p>To illustrate this, I've tested with the deployment code below. There I mount the hostPath <code>/mnt</code> that contains a file called <code>filesystem-file.txt</code> in my pod and the file <code>/mnt/configmap-file.txt</code> from my configmap <code>test-pd-plus-cfgmap</code>:</p>
<blockquote>
<p><strong>Note:</strong> I'm using Kubernetes 1.18.1</p>
</blockquote>
<p><strong>Configmap:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test-pd-plus-cfgmap
data:
file-from-cfgmap: file data
</code></pre>
<p><strong>Deployment:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-pv
spec:
replicas: 3
selector:
matchLabels:
app: test-pv
template:
metadata:
labels:
app: test-pv
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /mnt
name: task-pv-storage
- mountPath: /mnt/configmap-file.txt
subPath: configmap-file.txt
name: task-cm-file
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
- name: task-cm-file
configMap:
name: test-pd-plus-cfgmap
</code></pre>
<p>As a result of the deployment, you can see the following content in <code>/mnt</code> of the pod:</p>
<pre><code>$ kubectl exec test-pv-5bcb54bd46-q2xwm -- ls /mnt
configmap-file.txt
filesystem-file.txt
</code></pre>
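<p>Another way to see that these really are two independent mounts (the PV on <code>/mnt</code> and the configMap file on top of it) is to list the mount table inside the container; you should typically see two separate entries:</p>
<pre><code>$ kubectl exec test-pv-5bcb54bd46-q2xwm -- sh -c 'grep /mnt /proc/mounts'
</code></pre>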
<p>You can check this GitHub <a href="https://github.com/kubernetes/kubernetes/issues/23748" rel="noreferrer">issue</a> for the same discussion.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">Here</a> you can read a little more about the volume <code>subPath</code> feature.</p>
| Mr.KoopaKiller |
<p>I have a Kubernetes Cronjob that runs on GKE and runs Cucumber JVM tests. In case a Step fails due to assertion failure, some resource being unavailable, etc., Cucumber rightly throws an exception which leads the Cronjob job to fail and the Kubernetes pod's status changes to <code>ERROR</code>. This leads to creation of a new pod that tries to run the same Cucumber tests again, which fails again and retries again.</p>
<p>I don't want any of these retries to happen. If a Cronjob job fails, I want it to remain in the failed status and not retry at all. Based on <a href="https://stackoverflow.com/questions/51657105/how-to-ensure-kubernetes-cronjob-does-not-restart-on-failure">this</a>, I have already tried setting <code>backoffLimit: 0</code> in combination with <code>restartPolicy: Never</code> in combination with <code>concurrencyPolicy: Forbid</code>, but it still retries by creating new pods and running the tests again. </p>
<p>What am I missing? Here's my kube manifest for the Cronjob:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: quality-apatha
namespace: default
labels:
app: quality-apatha
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
backoffLimit: 0
template:
spec:
containers:
- name: quality-apatha
image: FOO-IMAGE-PATH
imagePullPolicy: "Always"
resources:
limits:
cpu: 500m
memory: 512Mi
env:
- name: FOO
value: BAR
volumeMounts:
- name: FOO
mountPath: BAR
args:
- java
- -cp
- qe_java.job.jar:qe_java-1.0-SNAPSHOT-tests.jar
- org.junit.runner.JUnitCore
- com.liveramp.qe_java.RunCucumberTest
restartPolicy: Never
volumes:
- name: FOO
secret:
secretName: BAR
</code></pre>
<p>Is there any other Kubernetes <code>Kind</code> I can use to stop the retrying?</p>
<p>Thank you!</p>
| Core_Dumped | <p>To make things as simple as possible I tested it using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#example" rel="noreferrer">this</a> example from the official kubernetes documentation, applying to it minor modifications to illustrate what really happens in different scenarios.</p>
<p>I can confirm that when <code>backoffLimit</code> is set to <code>0</code> and <code>restartPolicy</code> to <code>Never</code> <strong>everything works exactly as expected and there are no retries</strong>. Note that every single run of your <code>Job</code> which in your example is scheduled to run <strong>at intervals of 60 seconds</strong> (<code>schedule: "*/1 * * * *"</code>) <strong>IS NOT considerd a retry</strong>.</p>
<p>Let's take a closer look at the following example (base <code>yaml</code> available <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#example" rel="noreferrer">here</a>):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
backoffLimit: 0
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- non-existing-command
restartPolicy: Never
</code></pre>
<p>It spawns a new cron job <code>every 60 seconds</code> according to the <code>schedule</code>, no matter if it fails or runs successfully. In this particular example it is configured to fail as we are trying to run <code>non-existing-command</code>.</p>
<p>You can check what's happening by running:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1587558720-pgqq9 0/1 Error 0 61s
hello-1587558780-gpzxl 0/1 ContainerCreating 0 1s
</code></pre>
<p>As you can see there are <strong>no retries</strong>. Although the first <code>Pod</code> failed, the new one is spawned exactly 60 seconds later according to our specification. I'd like to emphasize it again. <strong>This is not a retry.</strong></p>
<p>On the other hand, when we modify the above example and set <code>backoffLimit: 3</code>, we can observe the <strong>retries</strong>. As you can see, now new <code>Pods</code> are created <strong>much more often than every 60 seconds</strong>. <strong>These are retries.</strong></p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-1587565260-7db6j 0/1 Error 0 106s
hello-1587565260-tcqhv 0/1 Error 0 104s
hello-1587565260-vnbcl 0/1 Error 0 94s
hello-1587565320-7nc6z 0/1 Error 0 44s
hello-1587565320-l4p8r 0/1 Error 0 14s
hello-1587565320-mjnb6 0/1 Error 0 46s
hello-1587565320-wqbm2 0/1 Error 0 34s
</code></pre>
<p>What we can see above are <strong>3 retries</strong> (<code>Pod</code> creation attempts) related to the <code>hello-1587565260</code> <strong>job</strong> and <strong>4 retries</strong> (including the original <strong>1st try</strong> not counted in <code>backoffLimit: 3</code>) related to the <code>hello-1587565320</code> <strong>job</strong>.</p>
<p>As you can see the <strong>jobs</strong> themselves are still run according to the schedule, <strong>at 60 second intervals</strong>:</p>
<pre><code>kubectl get jobs
NAME COMPLETIONS DURATION AGE
hello-1587565260 0/1 2m12s 2m12s
hello-1587565320 0/1 72s 72s
hello-1587565380 0/1 11s 11s
</code></pre>
<p>However, due to our <code>backoffLimit</code> being set to <code>3</code> this time, every time the <code>Pod</code> responsible for running the job fails, <strong>3 additional retries occur</strong>.</p>
<p>I hope this helped to dispel any possible confusion about running <code>cronJobs</code> in <strong>kubernetes</strong>.</p>
<p>If you are rather interested in running something just once, not at regular intervals, take a look at a simple <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="noreferrer">Job</a> instead of a <code>CronJob</code>; a minimal sketch follows.</p>
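<p>For reference, a minimal one-off <code>Job</code> with the same "no retries" behaviour could look like this (a sketch reusing the <code>busybox</code> example from above):</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  backoffLimit: 0
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        args: ["/bin/sh", "-c", "echo hello"]
      restartPolicy: Never
EOF
</code></pre>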
<p>Also consider changing your <a href="https://en.wikipedia.org/wiki/Cron" rel="noreferrer">Cron</a> configuration if you still want to run this particular job on a regular basis but, let's say, once every 24 h, not every minute.</p>
| mario |
<p>I am able to create a kubernetes cluster and I followed the steps below to pull a private image from a GCR repository.
<a href="https://cloud.google.com/container-registry/docs/advanced-authentication" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication</a>
<a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/access-control</a></p>
<p>I am unable to pull the image from GCR. I have used the commands below, e.g. <code>gcloud auth login</code>. I have authenticated the service accounts, and the connection between the local machine and GCR works as well.</p>
<p>Below is the error</p>
<pre><code>$ kubectl describe pod test-service-55cc8f947d-5frkl
Name: test-service-55cc8f947d-5frkl
Namespace: default
Priority: 0
Node: gke-test-gke-clus-test-node-poo-c97a8611-91g2/10.128.0.7
Start Time: Mon, 12 Oct 2020 10:01:55 +0530
Labels: app=test-service
pod-template-hash=55cc8f947d
tier=test-service
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container test-service
Status: Pending
IP: 10.48.0.33
IPs:
IP: 10.48.0.33
Controlled By: ReplicaSet/test-service-55cc8f947d
Containers:
test-service:
Container ID:
Image: gcr.io/test-256004/test-service:v2
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
test_SERVICE_BUCKET: test-pt-prod
COPY_FILES_DOCKER_IMAGE: gcr.io/test-256004/test-gcs-copy:latest
test_GCP_PROJECT: test-256004
PIXALATE_GCS_DATASET: test_pixalate
PIXALATE_BQ_TABLE: pixalate
APP_ADS_TXT_GCS_DATASET: test_appadstxt
APP_ADS_TXT_BQ_TABLE: appadstxt
Mounts:
/test/output from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6g7nl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
default-token-6g7nl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6g7nl
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42s default-scheduler Successfully assigned default/test-service-55cc8f947d-5frkl to gke-test-gke-clus-test-node-poo-c97a8611-91g2
Normal SuccessfulAttachVolume 38s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-25025b4c-2e89-4400-8e0e-335298632e74"
Normal SandboxChanged 31s kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Failed to pull image "gcr.io/test-256004/test-service:v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/test-256004/test-service, repository does not exist or may require 'docker login': denied: Permission denied for "v2" from request "/v2/test-256004/test-service/manifests/v2".
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ErrImagePull
Normal BackOff 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Back-off pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ImagePullBackOff
</code></pre>
| klee | <p>If you don't use workload identity, the default service account of your pod is the one of the nodes, and the nodes, by default, use the Compute Engine service account.</p>
<p>Make sure to grant it the correct permission to access GCR.</p>
<p>If you use another service account, grant it the Storage Object Viewer role (<code>roles/storage.objectViewer</code>); when you pull an image, you read a blob stored in Cloud Storage (at least it's the same permission).</p>
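<p>A hedged sketch of the corresponding grant (the project ID and service account e-mail are placeholders you need to replace):</p>
<pre><code>gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:NODE_SA_EMAIL" \
    --role="roles/storage.objectViewer"
</code></pre>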
<p><em>Note: even if it's the default service account, I don't recommend using the Compute Engine service account without any change in its roles. Indeed, it is project editor, which is a lot of responsibility.</em></p>
| guillaume blaquiere |
<p>I was trying to renew the expired certificates; I followed the steps below and the kubelet service started failing. I'm new to Kubernetes, please help me.</p>
<pre><code># kubeadm alpha certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
admin.conf Nov 11, 2020 12:52 UTC <invalid> no
apiserver Nov 11, 2020 12:52 UTC <invalid> no
apiserver-etcd-client Nov 11, 2020 12:52 UTC <invalid> no
apiserver-kubelet-client Nov 11, 2020 12:52 UTC <invalid> no
controller-manager.conf Nov 11, 2020 12:52 UTC <invalid> no
etcd-healthcheck-client Nov 11, 2020 12:52 UTC <invalid> no
etcd-peer Nov 11, 2020 12:52 UTC <invalid> no
etcd-server Nov 11, 2020 12:52 UTC <invalid> no
front-proxy-client Nov 11, 2020 12:52 UTC <invalid> no
scheduler.conf Nov 11, 2020 12:52 UTC <invalid> no
# kubeadm alpha certs renew all
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
# kubeadm alpha certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
admin.conf Nov 17, 2021 05:49 UTC 364d no
apiserver Nov 17, 2021 05:49 UTC 364d no
apiserver-etcd-client Nov 17, 2021 05:49 UTC 364d no
apiserver-kubelet-client Nov 17, 2021 05:49 UTC 364d no
controller-manager.conf Nov 17, 2021 05:49 UTC 364d no
etcd-healthcheck-client Nov 17, 2021 05:49 UTC 364d no
etcd-peer Nov 17, 2021 05:49 UTC 364d no
etcd-server Nov 17, 2021 05:49 UTC 364d no
front-proxy-client Nov 17, 2021 05:49 UTC 364d no
scheduler.conf Nov 17, 2021 05:49 UTC 364d no
:~> mkdir -p $HOME/.kube
:~> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
:~> sudo chown $(id -u):$(id -g) $HOME/.kube/config
:~> sudo systemctl daemon-reload
:~> sudo systemctl stop kubelet
:~> sudo systemctl start kubelet
:~> sudo systemctl enable kubelet
:~> sudo systemctl stop docker
:~> sudo systemctl start docker
:~> kubectl get pods
The connection to the server 10.xx.xx.74:6443 was refused - did you specify the right host or port?
# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean"GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 10.xx.xx.74:6443 was refused - did you specify the right host or port?
</code></pre>
<p>Kubelet status:</p>
<pre><code> # systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Tue 2020-11-17 08:18:20 UTC; 1s ago
Docs: https://kubernetes.io/docs/
Process: 1452 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, sta
Main PID: 1452 (code=exited, status=255)
Nov 17 08:18:20 c536gocrb systemd[1]: Unit kubelet.service entered failed state.
Nov 17 08:18:20 c536gocrb systemd[1]: kubelet.service failed.
</code></pre>
<p>I tried adding this environment variable to 10-kubeadm.conf:</p>
<p>Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"</p>
<p>kubeadm conf file:</p>
<pre><code> cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
| Santhosh reddy | <p>The issue has been resolved after replacing the certificate data in kubelet.conf as suggested in <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration</a>:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx ==
server: https://xx.x.x.x.:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:node:cmaster
name: system:node:cmaster@kubernetes
current-context: system:node:cmaster@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:cmaster
user:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
</code></pre>
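<p>After updating <code>kubelet.conf</code> (and, if needed, copying the renewed <code>admin.conf</code> to <code>$HOME/.kube/config</code> again), restarting the kubelet and checking the API server should confirm the fix, for example:</p>
<pre><code>sudo systemctl restart kubelet
kubectl get nodes
</code></pre>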
| Santhosh reddy |
<p>I am trying to setup a persistent volume for K8s that is running in Docker Desktop for Windows. The end goal being I want to run Jenkins and not lose any work if docker/K8s spins down.</p>
<p>I have tried a couple of things but I'm either misunderstanding the ability to do this or I am setting something up wrong. Currently I have the environment setup like so:</p>
<p>I have setup a volume in docker for Jenkins. All I did was create the volume, not sure if I need more configuration here.</p>
<pre><code>docker volume inspect jenkins-pv
[
{
"CreatedAt": "2020-05-20T16:02:42Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/jenkins-pv/_data",
"Name": "jenkins-pv",
"Options": {},
"Scope": "local"
}
]
</code></pre>
<p>I have also created a persistent volume in K8s pointing to the mount point in the Docker volume and deployed it.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv-volume
labels:
type: hostPath
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
hostPath:
path: "/var/lib/docker/volumes/jenkins-pv/_data"
</code></pre>
<p>I have also created a pv claim and deployed that.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Lastly I have created a deployment for Jenkins. I have confirmed it works and I am able to access the app.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-deployment
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-app
template:
metadata:
labels:
app: jenkins-app
spec:
containers:
- name: jenkins-pod
image: jenkins/jenkins:2.237-alpine
ports:
- containerPort: 50000
- containerPort: 8080
volumeMounts:
- name: jenkins-pv-volume
mountPath: /var/lib/docker/volumes/jenkins-pv/_data
volumes:
- name: jenkins-pv-volume
persistentVolumeClaim:
claimName: jenkins-pv-claim
</code></pre>
<p>However, the data does not persist after quitting Docker and I have to reconfigure Jenkins every time I start. Did I miss something, or is what I am trying to do not possible? Is there a better or easier way to do this?</p>
<p>Thanks!</p>
| Nick Orlowski | <p>I figured out my issue, it was two fold. </p>
<ol>
<li>I was trying to save data from the wrong location within the pod that was running Jenkins.</li>
<li>I was never writing the data back to docker shared folder.</li>
</ol>
<p>To get this working I created a shared folder in Docker (C:\DockerShare).
Then I updated the host path in my Persistent Volume.
The format is <em>/host_mnt/path_to_docker_shared_folder_location</em>
Since I used C:\DockerShare my path is: <em>/host_mnt/c/DockerShare</em></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins
labels:
type: hostPath
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: /host_mnt/c/DockerShare/jenkins
</code></pre>
<p>I also had to update the Jenkins deployment because I was not actually saving any of the config.
I should have been saving data from <em>/var/jenkins_home</em>.</p>
<p>Deployment looks like this: </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-app
template:
metadata:
labels:
app: jenkins-app
spec:
containers:
- name: jenkins-pod
image: jenkins/jenkins:2.237-alpine
ports:
- containerPort: 50000
- containerPort: 8080
volumeMounts:
- name: jenkins
mountPath: /var/jenkins_home
volumes:
- name: jenkins
persistentVolumeClaim:
claimName: jenkins
</code></pre>
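<p>A quick way to confirm the data is now persisted (the label comes from the deployment above; the exact pod name will differ):</p>
<pre><code>JENKINS_POD=$(kubectl get pod -l app=jenkins-app -o jsonpath='{.items[0].metadata.name}')
kubectl exec $JENKINS_POD -- ls /var/jenkins_home
</code></pre>
<p>The same files should also appear under <code>C:\DockerShare\jenkins</code> on the Windows side and survive a Docker Desktop restart.</p>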
<p>Anyway, it's working now and I hope this helps someone else when it comes to setting up a PV.</p>
| Nick Orlowski |
<p>I am trying to set the value of the ssl-session-cache in my configmap for the ingress controller;</p>
<p>the problem is that I can't find how to write it correctly.</p>
<p>I need the following changes in the nginx config:</p>
<p><code>ssl-session-cache builtin:3000 shared:SSL:100m</code></p>
<p><code>ssl-session-timeout: 3000</code></p>
<p>When I add
<code>ssl-session-timeout: "3000"</code> to the config map, it works correctly - I can see this in the nginx config a few seconds later.</p>
<p>But how should I write ssl-session-cache?</p>
<p><code>ssl-session-cache: builtin:"3000" shared:SSL:"100m"</code> goes well, but no changes in nginx</p>
<p><code>ssl-session-cache: "builtin:3000 shared:SSL:100m"</code> goes well, but no changes in nginx</p>
<p><code>ssl-session-cache "builtin:3000 shared:SSL:100m"</code> syntax error - can't change the configmap</p>
<p><code>ssl-session-cache builtin:"3000 shared:SSL:100m"</code> syntax error - can't change the configmap</p>
<p>Does someone have an idea how to set ssl-session-cache in the configmap correctly?</p>
<p>Thank you!</p>
| Alexander Kolin | <h2>TL;DR</h2>
<p>After digging around and testing the same scenario in my lab, I've found how to make it work.</p>
<p>As you can see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-session-cache-size" rel="nofollow noreferrer">here</a> the parameter <code>ssl-session-cache</code> requires a <strong>boolean</strong> value to specify if it will be enabled or not.</p>
<p>The change you need is handled by the parameter <code>ssl_session_cache_size</code>, which requires a string, so it is correct to suppose that it would work by changing the value to <code>builtin:3000 shared:SSL:100m</code>; but after reproducing it and diving into the nginx configuration, I've concluded that it will not work because the option <code>builtin:1000</code> is <strong><em>hardcoded</em></strong>.</p>
<p>In order to make it work as expected I've found a solution using an nginx template as a <code>configMap</code> mounted as a volume into the nginx-controller pod, and another <code>configMap</code> to make the changes to the parameter <code>ssl_session_cache_size</code>.</p>
<h2>Workaround</h2>
<p>Take a look at line <strong>343</strong> of the file <code>/etc/nginx/template</code> in the nginx-ingress-controller pod:</p>
<pre><code>bash-5.0$ grep -n 'builtin:' nginx.tmpl
343: ssl_session_cache builtin:1000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
</code></pre>
<p>As you can see, the option <code>builtin:1000</code> is <em>hardcoded</em> and cannot be changed using custom data with your approach.</p>
<p>However, there are some ways to make it work: you could directly change the template file in the pod, but these changes will be lost if the pod dies for some reason... or you could <strong>use a custom template mounted as a <code>configMap</code> into the nginx-controller pod.</strong></p>
<p>In this case, let's create a <code>configMap</code> with the nginx.tmpl content, changing the value of line 343 to the desired value.</p>
<ol>
<li>Get the template file from the nginx-ingress-controller pod; this will create a file called <code>nginx.tmpl</code> locally:</li>
</ol>
<blockquote>
<p>NOTE: Make sure the namespace is correct.</p>
</blockquote>
<pre><code>$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec $NGINX_POD -n ingress-nginx -- cat template/nginx.tmpl > nginx.tmpl
</code></pre>
<ol start="3">
<li>Change the value of the line 343 from <code>builtin:1000</code> to <code>builtin:3000</code>:</li>
</ol>
<pre><code>$ sed -i '343s/builtin:1000/builtin:3000/' nginx.tmpl
</code></pre>
<p>Checking if everything is ok:</p>
<pre><code>$ grep builtin nginx.tmpl
ssl_session_cache builtin:3000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
</code></pre>
<p>Ok, at this point we have a <code>nginx.tmpl</code> file with the desired parameter changed.</p>
<p>Let's move on and create a <code>configMap</code> with the custom nginx.tmpl file:</p>
<pre><code>$ kubectl create cm nginx.tmpl --from-file=nginx.tmpl
configmap/nginx.tmpl created
</code></pre>
<p>This will create a <code>configMap</code> called <code>nginx.tmpl</code> in the <code>ingress-nginx</code> namespace; if your ingress' namespace is different, make the proper changes before applying.</p>
<p>After that, we need to edit the nginx-ingress deployment and add a new <code>volume</code> and a <code>volumeMount</code> to the containers spec. In my case, the nginx-ingress deployment is named <code>ingress-nginx-controller</code> and lives in the <code>ingress-nginx</code> namespace.</p>
<p>Edit the deployment file:</p>
<pre><code>$ kubectl edit deployment -n ingress-nginx ingress-nginx-controller
</code></pre>
<p>And add the following configuration in the correct places:</p>
<pre><code>...
volumeMounts:
- mountPath: /etc/nginx/template
name: nginx-template-volume
readOnly: true
...
volumes:
- name: nginx-template-volume
configMap:
name: nginx.tmpl
items:
- key: nginx.tmpl
path: nginx.tmpl
...
</code></pre>
<p>After saving the file, the nginx controller pod will be recreated with the <code>configMap</code> mounted as a file into the pod.</p>
<p>Let's check if the changes were propagated:</p>
<pre><code>$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:10m;
</code></pre>
<p>Great, the first part is done!</p>
<p>Now for the <code>shared:SSL:10m</code> part we can use the same approach you already used: a <code>configMap</code> with the specific parameters, as mentioned in this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-session-cache-size" rel="nofollow noreferrer">doc</a>.</p>
<p>If you remember, in the nginx.tmpl for <code>shared:SSL</code> there is a variable called <strong>SSLSessionCacheSize</strong> (<code>{{ $cfg.SSLSessionCacheSize }}</code>); in the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go" rel="nofollow noreferrer">source code</a> it is possible to check that the variable is represented by the option <code>ssl-session-cache-size</code>:</p>
<pre class="lang-golang prettyprint-override"><code>340 // Size of the SSL shared cache between all worker processes.
341 // http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
342 SSLSessionCacheSize string `json:"ssl-session-cache-size,omitempty"`
</code></pre>
<p>So, all we need to do is create a <code>configMap</code> with this parameter and the desired value:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
ssl-session-cache-size: "100m"
</code></pre>
<blockquote>
<p>Note: Adjust the namespace and configMap name for the equivalent of your environment.</p>
</blockquote>
<p>After applying this <code>configMap</code>, NGINX will reload the configuration and make the changes in the configuration file.</p>
<p>Checking the results:</p>
<pre><code>$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:100m;
</code></pre>
<h2>Conclusion</h2>
<p>It works as expected; unfortunately, I can't find a way to add a variable for the <code>builtin:</code> part, so we will continue using it <em>hardcoded</em>, but this time it sits in the template <code>configMap</code> that you can easily change if needed.</p>
<h2>References:</h2>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/custom-template.md" rel="nofollow noreferrer">NGINX INgress Custom template</a></p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go" rel="nofollow noreferrer">NGINX Ingress Source Code</a></p>
| Mr.KoopaKiller |
<p>Assuming there is a Flask web server that has two routes, deployed as a CloudRun service over GKE.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/cpu_intensive', methods=['POST'], endpoint='cpu_intensive')
def cpu_intensive():
#TODO: some actions, cpu intensive
@app.route('/batch_request', methods=['POST'], endpoint='batch_request')
def batch_request():
#TODO: invoke cpu_intensive
</code></pre>
<p>A "batch_request" is a batch of many same structured requests - each one is highly CPU intensive and handled by the function "cpu_intensive". No reasonable machine can handle a large batch and thus it needs to be paralleled across multiple replicas.
The deployment is configured that every instance can handle only 1 request at a time, so when multiple requests arrive CloudRun will replicate the instance.
I would like to have a service with these two endpoints, one to accept "batch_requests" and only break them down to smaller requests and another endpoint to actually handle a single "cpu_intensive" request. What is the best way for "batch_request" break down the batch to smaller requests and invoke "cpu_intensive" so that CloudRun will scale the number of instances?</p>
<ul>
<li>make http request to localhost - doesn't work since the load balancer is not aware of these calls.</li>
<li>keep the deployment URL in a conf file and make a network call to it?</li>
</ul>
<p>Other suggestions?</p>
| gidutz | <p><em>With more detail, it's now clearer!!</em></p>
<p>You have 2 responsibilities</p>
<ul>
<li>One to split -> many requests can be handled in parallel, not compute intensive</li>
<li>One to process -> each request must be processed on a dedicated instance because of the compute-intensive work.</li>
</ul>
<p>If your split performs internal calls (with localhost for example) you will stay on the same instance, and you will parallelize nothing (you just multi-thread the same request on the same instance).</p>
<p>So, for this, you need 2 services:</p>
<ul>
<li>one to split, and it can accept several concurrent requests</li>
<li>the second to process; this time you need to set the concurrency param to 1 to be sure to accept only one request at a time (a minimal deploy command is sketched after this list).</li>
</ul>
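<p>A minimal sketch of deploying the processing service with a concurrency of 1 on Cloud Run managed (service name, image and region are placeholders; on Cloud Run for Anthos/GKE the platform flags differ):</p>
<pre><code>gcloud run deploy cpu-intensive \
    --image gcr.io/PROJECT_ID/cpu-intensive \
    --concurrency 1 \
    --platform managed \
    --region us-central1
</code></pre>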
<p>To improve your design, and if the batch processing can be asynchronous (I mean, the split process doesn't need to know when the batch process is over), you can add PubSub or Cloud Tasks in the middle to decouple the 2 parts.</p>
<p>And if the processing requires more than 4 CPUs / 4 GB of memory, or takes more than 1 hour, use Cloud Run on GKE and not Cloud Run managed.</p>
<p>Last word: if you don't use PubSub, the best way is to set the Batch Process URL in an env var of your Split Service so that it knows it.</p>
| guillaume blaquiere |
<p>I am a newbie with GitLab.
I don't understand how to create a pipeline to deploy an image to different Kubernetes clusters with agents.</p>
<p>For example I have:</p>
<p>dev cluster -> agent 1</p>
<p>test cluster -> agent 2</p>
<p>production cluster -> agent 3</p>
<p>Now, is it possible in a single pipeline to deploy to every cluster? How can I tell which agent to use to deploy to a specific environment?</p>
<p>Thanks in advance</p>
| Andrea Maestroni | <p>That will depend on your use case, but in your jobs you can use <code>kubectl</code> to specify the context you are deploying to.</p>
<p><code>kubectl config set-context $GITLAB_AGENT_URL:$AGENT_NAME</code></p>
<p>where <code>$GITLAB_AGENT_URL</code> is the name of the project where the config of your Kubernetes Agents is stored.</p>
| bhito |
<p>I am trying to deploy a Pod in my <code>v1.13.6-gke.6</code> k8s cluster.</p>
<p>The image that I'm using is pretty simple:</p>
<pre><code>FROM scratch
LABEL maintainer "Bitnami <[email protected]>"
COPY rootfs /
USER 1001
CMD [ "/chart-repo" ]
</code></pre>
<p>As you can see, the user is set to <code>1001</code>.</p>
<p>The cluster that I am deploying the Pod in has a PSP setup.</p>
<pre><code>spec:
allowPrivilegeEscalation: false
allowedCapabilities:
- IPC_LOCK
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
runAsUser:
rule: MustRunAsNonRoot
</code></pre>
<p>So basically as per the <code>rule: MustRunAsNonRoot</code> rule, the above image should run.</p>
<p>But when I ran the image, I randomly run into :</p>
<pre><code>Error: container has runAsNonRoot and image will run as root
</code></pre>
<p>So digging further, I got this pattern:</p>
<p>Every time I run the image with <code>imagePullPolicy: IfNotPresent</code>, I always run into the issue. Meaning every time I picked up a cached image, it gives the <code>container has runAsNonRoot</code> error.</p>
<pre><code> Normal Pulled 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Container image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1" already present on machine
Warning Failed 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Error: container has runAsNonRoot and image will run as root
</code></pre>
<p>BUT</p>
<p>Every time I run the image as <code>imagePullPolicy: Always</code>, the image SUCCESSFULLY runs:</p>
<pre><code> Normal Pulled 6s kubelet, test-1905-default-pool-1b8e4761-sh5g Successfully pulled image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1"
Normal Created 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Created container
Normal Started 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Started container
</code></pre>
<p>So I'm not really sure what all this is about. I mean just because the <code>ImagePullPolicy</code> is different, why does it wrongly setup a PSP rule? </p>
| Jason Stanley | <p>Found out the issue. It's a known issue with k8s for 2 specific versions, <code>v1.13.6</code> &amp; <code>v1.14.2</code>.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/78308" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/78308</a></p>
| Jason Stanley |
<p>I'm trying to capture the impact of enabling Horizontal Pod Autoscaler (HPA) resources on the performance of HPA, in Kubernetes. I have found a few metrics related to the HPA but find that they lack documentation. For example, the metric <code>horizontalpodautoscaler_queue_latency</code> is available but it is not clear what unit it is measured in - microseconds, milliseconds or anything else.</p>
<p>Can anyone point me to any documentation related to control plane metrics? It would be great if you can also point me to the code base of these metrics as well because I could not find any reference to control plan metrics (tried searching for <code>horizontalpodautoscaler_queue_latency</code>) in <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes</a>.</p>
<p>Thanks a ton.</p>
| Sasidhar Sekar | <p>There is no much information about this metric, but I found this metrics is mensured in <strong>quantile</strong> and not time (milisecs, microsecs etc.).</p>
<p><a href="https://cloud.google.com/monitoring/api/metrics_anthos?hl=en" rel="nofollow noreferrer">Here</a> you can find the following:</p>
<blockquote>
<p><strong>horizontalpodautoscaler_queue_latency_count</strong></p>
<p>(Deprecated) How long an item stays in workqueuehorizontalpodautoscaler before being requested.</p>
</blockquote>
<p>Also, it seems the metric is <strong>deprecated</strong>, which may be the reason you cannot find much information about it.</p>
| Mr.KoopaKiller |
<p>I have an application running in the tomcat path /app1, how should I access this from the ingress path? </p>
<p>When accessing "/", it gives the default tomcat 404 - not found page, and when accessing via /app1 it shows "default backend -404" </p>
<p>What I want to know is:
is there any way to configure the context path without using nginx's ingress controller? (Just using GKE's default ingress controller)</p>
<p>Here is a sample of my ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gke-my-ingress-1
annotations:
kubernetes.io/ingress.global-static-ip-name: gke-my-static-ip
networking.gke.io/managed-certificates: gke-my-certificate
spec:
rules:
- host: mydomain.web.com
http:
paths:
- path: /
backend:
serviceName: my-service
servicePort: my-port
</code></pre>
<p>Edit: service output</p>
<pre><code>kubectl get svc
my-service NodePort <IP_REDACTED> <none> 8080:30310/TCP 5d16h
</code></pre>
<pre><code>kubectl describe svc my-service
Name: my-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"name"...
Selector: app=my-deployment-1
Type: NodePort
IP: <IP_REDACTED>
Port: my-port 8080/TCP
TargetPort: 8080/TCP
NodePort: my-port 30310/TCP
Endpoints: <IP_REDACTED>:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>and this is my Node Port service yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- name: my-port
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: my-deployment-1
type: NodePort
</code></pre>
| user10518 | <p>Unfortunately, the current implementation of the default <strong>GKE Ingress Controller</strong> doesn't support rewrite targets. There is still an open GitHub issue that you can find <a href="https://github.com/kubernetes/ingress-gce/issues/109" rel="nofollow noreferrer">here</a>.</p>
<p>What you're trying to achieve is rewriting your ingress path to some specific path exposed by your application, in your case the <strong>Apache Tomcat web server</strong>.</p>
<p>Is there any possibility of reconfiguring your app to be served from the root path by <strong>Apache Tomcat</strong>? If so, you can make it available on <code><IngressLoadBalancerIP>/app1</code> by configuring the following path in your <strong>ingress resource</strong>, like in the example below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: default-backend
servicePort: 8080
- path: /app1
backend:
serviceName: my-service
servicePort: 8080
</code></pre>
<p>But unfortunately you cannot configure a rewrite so that when you go to <code><IngressLoadBalancerIP>/app1</code> it is rewritten to <code><my-service>/app1</code>.</p>
<p>It seems that for now the only solution is to install a different ingress controller, such as the mentioned <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">nginx ingress controller</a>.</p>
| mario |
<p>I started implementing Kubernetes for my simple app. I am facing an issue when I use <code>NodePort</code>. If I use <code>LoadBalancer</code> I can open my URL. If I use <code>NodePort</code> it takes a long time trying to load and I get a <code>connection refused</code> error. Below is my simple yaml file.</p>
<pre><code>> POD yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-webapp
labels:
app: webapp
spec:
containers:
- name: app
image: mydockerimage_xxx
</code></pre>
<pre><code>> service.yaml
kind: Service
apiVersion: v1
metadata:
# Unique key of the Service instance
name: service-webapp
spec:
ports:
# Accept traffic sent to port 80
- name: http
port: 80
nodePort: 30080
selector:
app: webapp
type: NodePort
</code></pre>
| nagaraj | <p>I found something on GitHub: <a href="https://github.com/kubernetes/minikube/issues/11193" rel="nofollow noreferrer">Kubernetes NodePort</a>, which solved my issue partially.</p>
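<p>For reference, since the linked issue is about minikube, a minimal sketch of reaching a NodePort service there (the service name and nodePort are taken from the question, the rest is an assumption about your environment):</p>
<pre><code># Let minikube print a reachable URL for the NodePort service
minikube service service-webapp --url

# Or hit the node IP and the nodePort directly
curl http://$(minikube ip):30080
</code></pre>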
| nagaraj |
<p>Can we create a credentials.json file using gcloud auth login? The requirement is: the user should use a personal account in minikube and use the cred.json file as a secret in their cluster.
Current setup: for QA we have a service account key which gets mounted on the GKE cluster as a secret.</p>
| pythonhmmm | <p>To access the Google Cloud APIs, you need an OAuth2 access_token, and you can generate it with a user credential (after a <code>gcloud auth login</code>)</p>
<p>So, you can call <a href="https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/keys/create" rel="nofollow noreferrer">this command</a> or call <a href="https://cloud.google.com/iam/docs/reference/rest/v1/projects.serviceAccounts.keys/create" rel="nofollow noreferrer">the API</a> directly with an access token as a parameter</p>
<p><em>Example with curl</em></p>
<pre><code>curl -H "Authorization: Bearer $(gcloud auth print-access-token)"
</code></pre>
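<p>If you end up staying with a service account key (as in your current QA setup), a minimal sketch of creating the key file and storing it as a cluster secret could look like this (the service account e-mail and secret name are placeholders):</p>
<pre><code># Create a JSON key for an existing service account
gcloud iam service-accounts keys create credentials.json \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com

# Store it as a secret in the cluster
kubectl create secret generic gcp-credentials \
    --from-file=credentials.json=credentials.json
</code></pre>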
<p><strong>EDIT</strong></p>
<p>As explained before, a user credential created with <code>gcloud auth login</code> can, today, only create an access_token.</p>
<p>Access tokens are required to access Google APIs (including Google Cloud products). If that's your use case, you can use them.</p>
<p>However, if you need an id_token, for example to access private Cloud Functions, private Cloud Run, or App Engine behind IAP, you can't (not directly; I have a fix for this if you want).</p>
| guillaume blaquiere |
<p>Is it possible to duplicate a complete Google Cloud project with minimal manual interaction?
The purpose of this duplication is to create a white label.
In case there is no "easy" way, can you tell me which tools I can use to duplicate my existing project?</p>
| Julio Antonio López Siu | <p>You can use <a href="https://github.com/GoogleCloudPlatform/terraformer" rel="nofollow noreferrer">terraformer</a> to reverse engineer your infrastructure. It generates TF files that you will be able to use with terraform in the other project.</p>
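<p>As an illustration, a hedged sketch of a terraformer export (the resource types, project and region below are assumptions; check terraformer's supported resource list for the google provider):</p>
<pre><code># Export selected resources from the source project as Terraform files
terraformer import google \
    --resources=networks,firewall,gke \
    --projects=my-source-project \
    --regions=us-central1
</code></pre>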
<p>However, that covers only the infrastructure. Your VM configuration, your app contents, and your data (files and database contents) aren't duplicated! There are no magic tools for this.</p>
| guillaume blaquiere |
<p>I am trying to create a pod with both phpmyadmin and adminer in it. I have the Dockerfile created but I am not sure of the entrypoint needed.</p>
<p>Has anyone accomplished this before? I have everything figured out but the entrypoint...</p>
<pre><code>FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
</code></pre>
<p>------UPDATE 1 ----------
After reading some comments I split my Dockerfiles and will create a yaml file for the kube pod</p>
<pre><code>FROM phpmyadmin/phpmyadmin
ENV MYSQL_DATABASE=${MYSQL_DATABASE}
ENV MYSQL_USER=${MYSQL_USERNAME}
ENV MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
ENV MYSQL_PORT=3381
ENV PMA_USER=${MYSQL_USER}
ENV PMA_PORT=3381
ENV PMA_PASSWORD=${MYSQL_PASSWORD}
ENV PMA_HOST=${MYSQL_HOST}
EXPOSE 8081
ENTRYPOINT [ "executable" ]
</code></pre>
<p>container 2</p>
<pre><code>FROM adminer:4
ENV POSTGRES_DB=${POSTGRES_DATABASE}
ENV POSTGRES_USER=${POSTGRES_USER}
ENV POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EXPOSE 8082
ENTRYPOINT [ "?" ]
</code></pre>
<p>I am still not sure what the entrypoint script should be</p>
| Mike3355 | <p>Since you are not modifying anything in the image, you don't need to create a custom docker image for this; you can simply run 2 deployments in Kubernetes, passing the environment variables using a Kubernetes Secret.</p>
<p><strong>See this example of how to deploy both application on Kubernetes:</strong></p>
<ol>
<li><h3>Create a Kubernetes secret with your connection details:</h3></li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>cat <<EOF >./kustomization.yaml
secretGenerator:
- name: database-conn
literals:
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_PORT=${MYSQL_PORT}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
</code></pre>
<p>Apply the generated file:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -k .
secret/database-conn-mm8ck2296m created
</code></pre>
<ol start="2">
<li><h3>Deploy phpMyAdmin and Adminer:</h3></li>
</ol>
<p>You need to create <strong>two</strong> deployments, the first for phpMyAdmin and the other for Adminer, using the secrets created above in the containers, for example:</p>
<p>Create a file called <code>phpmyadmin-deploy.yaml</code>:</p>
<blockquote>
<p>Note: Change the secret name from <code>database-conn-mm8ck2296m</code> to the generated name in the command above.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: phpmyadmin
spec:
selector:
matchLabels:
app: phpmyadmin
template:
metadata:
labels:
app: phpmyadmin
spec:
containers:
- name: phpmyadmin
image: phpmyadmin/phpmyadmin
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_USER
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_PORT
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_PORT
- name: PMA_HOST
value: mysql.host
- name: PMA_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_USER
- name: PMA_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_ROOT_PASSWORD
- name: PMA_PORT
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: MYSQL_PORT
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: phpmyadmin-svc
spec:
selector:
app: phpmyadmin
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p><strong>Adminer:</strong></p>
<p>Create another file named <code>adminer-deploy.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: adminer
spec:
selector:
matchLabels:
app: adminer
template:
metadata:
labels:
app: adminer
spec:
containers:
- name: adminer
image: adminer:4
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: database-conn-mm8ck2296m
key: POSTGRES_PASSWORD
ports:
- name: http
containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: adminer-svc
spec:
selector:
app: adminer
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>Deploy the yaml files with <code>kubectl apply -f *-deploy.yaml</code>; after a few seconds run <code>kubectl get pods && kubectl get svc</code> to verify that everything is OK.</p>
<blockquote>
<p><strong>Note:</strong> Both services will be created as <code>ClusterIP</code>, which means they will only be accessible internally. If you are using a cloud provider, you can use the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">service type</a> <code>LoadBalancer</code> to get an external IP. Or you can use the <code>kubectl port-forward</code> command (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod" rel="nofollow noreferrer">see here</a>) to access your service from your computer. </p>
</blockquote>
<p>Access application using port-forward:</p>
<p><strong>phpMyadmin:</strong></p>
<pre><code># This command will map the port 8080 from your localhost to phpMyadmin application:
kubectl port-forward svc/phpmyadmin-svc 8080:80
</code></pre>
<p><strong>Adminer</strong></p>
<pre><code># This command will map the port 8181 from your localhost to Adminer application:
kubectl port-forward svc/adminer-svc 8181:8080
</code></pre>
<p>And try to access: </p>
<p><a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a> <= phpMyAdmin
<a href="http://localhost:8181" rel="nofollow noreferrer">http://localhost:8181</a> <= Adminer</p>
<p><strong>References:</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes Secrets</a></p>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Kubernetes Environment variables</a></p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Kubernetes port forward</a></p>
| Mr.KoopaKiller |
<p>I have the below configuration in ingress.yaml, which forwards requests with URIs like /default/demoservice/health or /custom/demoservice/health to the backend demoservice. I would like to retrieve the first part of the URI (i.e. default or custom in the example above) and pass it as a custom header to the upstream.</p>
<p>I've deployed the ingress configmap with custom header </p>
<pre><code>X-MyVariable-Path: ${request_uri}
</code></pre>
<p>but this sends the full request URI. How can I split it?</p>
<pre><code>- path: "/(.*?)/(demoservice.*)$"
backend:
serviceName: demoservice
servicePort: 80
</code></pre>
| Shiva | <p>I have found a solution, tested it, and it works.<br>
All you need to do is add the following annotations to your Ingress object:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-MyVariable-Path $1;
</code></pre>
<p>Where <code>$1</code> references whatever is captured in the first group of the regexp expression in the <code>path:</code> field.</p>
<p>I've reproduced your scenario using the following yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-MyVariable-Path $1;
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: echo
servicePort: 80
path: /(.*?)/(demoservice.*)$
---
apiVersion: v1
kind: Service
metadata:
labels:
run: echo
name: echo
spec:
ports:
- port: 80
targetPort: 80
selector:
run: echo
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: echo
name: echo
spec:
containers:
- image: mendhak/http-https-echo
imagePullPolicy: Always
name: echo
</code></pre>
<p>You can test using curl:</p>
<p><code>curl -k https://<your_ip>/default/demoservice/healthz</code></p>
<p>Output:</p>
<pre><code> {
"path": "/default/demoservice/healthz",
"headers": {
"host": "192.168.39.129",
"x-request-id": "dfcc67a80f5b02e6fe6c647c8bf8cdf0",
"x-real-ip": "192.168.39.1",
"x-forwarded-for": "192.168.39.1",
"x-forwarded-host": "192.168.39.129",
"x-forwarded-port": "443",
"x-forwarded-proto": "https",
"x-scheme": "https",
"x-myvariable-path": "default", # your variable here
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "192.168.39.129",
"ip": "::ffff:172.17.0.4",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "echo"
}
}
</code></pre>
<p>I hope it helps =)</p>
| Mr.KoopaKiller |
<p>I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.</p>
<p>However, all of the documentation and Youtube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster - thus requiring more VMs. </p>
<p>I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly. </p>
<p><a href="https://youtu.be/6P26wg2rWgo?t=1050" rel="nofollow noreferrer">This video (should start at the right time)</a> mentions that Vault can be run as a Deployment so I don't see there being an issue with state. And since <a href="https://cloud.google.com/run/#all-features" rel="nofollow noreferrer">Google</a> mention that each Cloud Run service gets its own stable HTTPS endpoint, I believe that I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to using Kubernetes so I'm not sure if I'm entirely correct here.</p>
<p>Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.</p>
| mrstack999 | <p>In beta for 3 weeks now, and not yet officially announced (it should be in a couple of days), you can have a look at <a href="https://cloud.google.com/secret-manager/docs/" rel="nofollow noreferrer">Secret Manager</a>. It's a serverless secret manager with, I think, all the basic features that you need.</p>
<p><em>The main reason it has not yet been announced is that the client libraries in several languages aren't yet released/finished</em></p>
<p>The awesome guy on your video link, Seth Vargo, has been involved in this project. </p>
<p>He has also released <a href="https://github.com/GoogleCloudPlatform/berglas" rel="nofollow noreferrer">Berglas</a>. It's written in Go, uses KMS to encrypt the secrets and Google Cloud Storage to store them. I also recommend it. </p>
<p>I built a <a href="https://pypi.org/manage/project/berglas-python/releases/" rel="nofollow noreferrer">Python library to easily use Berglas secrets in Python</a>.</p>
<p>Hope that this secret management tool will meet your expectations. In any case, it's serverless and quite cheap!</p>
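<p>Even without the client libraries you can already try Secret Manager from the CLI; a minimal sketch (assuming a gcloud version that ships the secrets commands, possibly under the beta track):</p>
<pre><code># Create a secret and add a first version
gcloud secrets create my-secret --replication-policy="automatic"
echo -n "s3cr3t-value" | gcloud secrets versions add my-secret --data-file=-

# Read it back
gcloud secrets versions access latest --secret=my-secret
</code></pre>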
| guillaume blaquiere |
<p>I have created A GKE cluster for a POC, later on, I want to stop/hibernate the cluster to save cost, any best approach/practice for it?</p>
| Aadesh kale | <p>You can scale all your node pools down to 0 VMs, but be careful about data loss (depending on your node pool configuration, deleting all the VMs can lose data). However, you will continue to pay for the control plane.</p>
<p>Another approach is to back up your data and to use IaC (infrastructure as code, such as terraform) to destroy and rebuild your cluster as needed.</p>
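<p>A minimal sketch of scaling a node pool down to 0 (and back up later); the cluster, zone and pool names are placeholders:</p>
<pre><code># Scale the node pool down to 0 VMs
gcloud container clusters resize my-cluster \
    --node-pool default-pool --num-nodes 0 --zone us-central1-a

# Scale it back up when you need the cluster again
gcloud container clusters resize my-cluster \
    --node-pool default-pool --num-nodes 3 --zone us-central1-a
</code></pre>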
<hr />
<p>Both approaches are valid; they depend on your use case and how long you need to hibernate your cluster.</p>
<p>An alternative is to use GKE Autopilot if your workloads are compliant with this deployment mode.</p>
| guillaume blaquiere |
<p><strong>Problem</strong>:</p>
<p>Duplicate data when querying from prometheus for metrics from <em>kube-state-metrics</em>.</p>
<p>Sample query and result with 3 instances of <em>kube-state-metrics</em> running:</p>
<p>Query:</p>
<pre><code>kube_pod_container_resource_requests_cpu_cores{namespace="ns-dummy"}
</code></pre>
<p>Metrics</p>
<pre><code>kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.35.142:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-34-25.ec2.internal",pod="app1-appname-6bd9d8d978-gfk7f",service="prom-kube-state-metrics"}
1
kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.35.142:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-35-22.ec2.internal",pod="app2-appname-ccbdfc7c8-g9x6s",service="prom-kube-state-metrics"}
1
kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.35.17:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-34-25.ec2.internal",pod="app1-appname-6bd9d8d978-gfk7f",service="prom-kube-state-metrics"}
1
kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.35.17:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-35-22.ec2.internal",pod="app2-appname-ccbdfc7c8-g9x6s",service="prom-kube-state-metrics"}
1
kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.37.171:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-34-25.ec2.internal",pod="app1-appname-6bd9d8d978-gfk7f",service="prom-kube-state-metrics"}
1
kube_pod_container_resource_requests_cpu_cores{container="appname",endpoint="http",instance="172.232.37.171:8080",job="kube-state-metrics",namespace="ns-dummy",node="ip-172-232-35-22.ec2.internal",pod="app2-appname-ccbdfc7c8-g9x6s",service="prom-kube-state-metrics"}
</code></pre>
<p><strong>Observation</strong>:</p>
<p>Every metric is coming up Nx when N pods are running for <em>kube-state-metrics</em>. If it's a single pod running, we get the correct info.</p>
<p><strong>Possible solutions</strong>:</p>
<ol>
<li>Scale down to single instance of kube-state-metrics. (Reduced availability is a concern)</li>
<li>Enable sharding. (Solves duplication problem, still less available)</li>
</ol>
<p>According to the <a href="https://github.com/kubernetes/kube-state-metrics#horizontal-scaling-sharding" rel="nofollow noreferrer">docs</a>, for horizontal scaling we have to pass sharding arguments to the pods.</p>
<p>Shards are zero indexed. So we have to pass the index and total number of shards for each pod.</p>
<p>We are using <a href="https://github.com/helm/charts/tree/master/stable/kube-state-metrics" rel="nofollow noreferrer">Helm chart</a> and it is deployed as a deployment. </p>
<p><strong>Questions</strong>:</p>
<ol>
<li>How can we pass different arguments to different pods in this scenario, if its possible?</li>
<li>Should we be worried about availability of the <em>kube-state-metrics</em> considering the self-healing nature of k8s workloads? </li>
<li>When should we really scale it to multiple instances and how?</li>
</ol>
| Shinto C V | <p>You could use a 'self-healing' deployment with only a single replica of <code>kube-state-metrics</code>; if the container goes down, the deployment will start a new one. kube-state-metrics <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">is not focused on the health of the individual Kubernetes components</a>, so this will only affect you if your cluster is very big and generates many object changes per second.</p>
<blockquote>
<p>It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside, such as deployments, nodes and pods.</p>
</blockquote>
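<p>For example, if the Helm chart deployed it as a Deployment, scaling it back to a single replica is enough (the deployment and namespace names below are assumptions; adjust them to your release):</p>
<pre><code>kubectl scale deployment kube-state-metrics --replicas=1 -n monitoring
</code></pre>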
<p>For a small cluster there is no problem using it this way, but if you really need a highly available monitoring platform, I recommend you take a look at these two articles:
<a href="https://medium.com/nephely/creating-a-well-designed-and-highly-available-monitoring-stack-for-servers-kubernetes-cluster-and-47e810ec55af" rel="nofollow noreferrer">creating a well designed and highly available monitoring stack for kubernetes</a> and
<a href="https://logz.io/blog/kubernetes-monitoring/" rel="nofollow noreferrer">kubernetes monitoring</a></p>
| Mr.KoopaKiller |
<p>First of all, I'm pretty new to Kubernetes, and the amount of different architectures and solutions out there make it very difficult to find a good source to fit my specific needs.</p>
<p>So I have a cluster that runs many clones of the same application, which is a stateless heavy-load python application. I enable vertical auto-scaling to add more nodes in peak times, which should help dealing with larger traffic effectively. The thing I'm not sure about is the pod allocation strategy.</p>
<p>What I thought to do is to keep a maximum number of idle pods running in my node, waiting for requests before they start operating.
Is this approach even conceptually right/solid? Is it "in the spirit" of Kubernetes, or am I misusing it somehow?</p>
<p>The reason I think of avoiding pod auto-scaling is that it's hard to determine a rule by which to perform the scaling, and I don't see the benefits, since each pod has basically two states - idle or full-power on.</p>
| Itay Davidson | <p>You can use the cluster autoscaler to keep some resources idle if you want to avoid application errors at peak times, for example.</p>
<p>The cluster autoscaler will increase your cluster size based on your resource usage, but this scaling isn't very quick and can sometimes take a few minutes; you must keep this in mind when configuring the cluster autoscaler.</p>
<p>If you already know your peak times, you could schedule an increase of the number of nodes in the cluster ahead of the peak.</p>
<p>Autoscaling is always complex to set up in the beginning because you never know what will happen with your customers. There is no magical formula for this; my advice is to test all the options you have and try to find the approach that best fits your workload.</p>
<p>Here you can see how to configure the cluster autoscaler on the most common providers:</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/cluster-management/" rel="nofollow noreferrer">Auto scaler GCE</a></p>
<p><a href="https://cloud.google.com/container-engine/docs/cluster-autoscaler" rel="nofollow noreferrer">Autoscaler GKE</a></p>
<p><a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md" rel="nofollow noreferrer">Autoscaler AWS</a></p>
<p><a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/azure/README.md" rel="nofollow noreferrer">Autoscaler Azure</a></p>
<p><a href="https://medium.com/kubecost/understanding-kubernetes-cluster-autoscaling-675099a1db92" rel="nofollow noreferrer">Here</a> there's a that article that could help you.</p>
<p>About pod resource allocation, the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-do-not-specify-a-cpu-limit" rel="nofollow noreferrer">documentation</a> mentions:</p>
<blockquote>
<p>If you do not specify a CPU limit for a Container, then one of these situations applies:
- The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.
- The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#limitrange-v1-core/" rel="nofollow noreferrer">LimitRange</a> to specify a default value for the CPU limit.</p>
</blockquote>
<p>The containers will not allocate resources they don't need, but the moment they do request resources, all the resources available on your node can be allocated to the pod.</p>
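<p>To avoid the "no upper bound" situation from the quote above, you can set explicit requests and limits on each container; a minimal sketch (the values are just examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: python-app
spec:
  containers:
  - name: app
    image: my-python-app:latest   # placeholder image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
</code></pre>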
<p>You could create replicas of your container if you want to balance the workload in your application; this makes sense only if you limit the resources of your container or if you know that each container/application supports a limited number of requests.</p>
| Mr.KoopaKiller |
<p><strong>Rename an existing Kubernetes/Istio</strong></p>
<p>I am trying to rename an existing Kubernetes/Istio Google <code>regional</code> static IP address, attached to an Istio ingress, to a <code>Global Static ip address</code><strong>?</strong></p>
<p><strong>Confusion points - in connection with the question</strong></p>
<ol>
<li><p>Why use regions in static IP addresses?
DNS zones are about the subdomain level.
Resources are located geographically-physically somewhere, so having regions for resources makes sense, but why do we need to specify a region for a static IP address?</p></li>
<li><p>Why have "pools", and how do we manage them?</p></li>
<li><p>How it all fits together:</p>
<ul>
<li>Static ip address </li>
<li>Loadbalancer
-- DNS Zones</li>
<li>Pools</li>
</ul></li>
</ol>
<p><a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address</a>
<a href="https://cloud.google.com/compute/docs/regions-zones/" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/regions-zones/</a></p>
| Chris G. | <p>I will answer your questions the best way I can down below:</p>
<p>1 and 2 - <strong><em>Why use Regions in Static IP addresses? And Why do we need to specify a Region for a Static IP address?</em></strong></p>
<p><strong>Answer</strong>: As mentioned in the <a href="https://cloud.google.com/compute/docs/regions-zones/" rel="nofollow noreferrer">documentation</a> you have provided, Compute Engine resources are hosted in multiple locations worldwide. These locations are composed of regions and zones. </p>
<p>Resources that live in a zone, such as <em>virtual machine instances</em> or <em>zonal persistent disks</em>, are referred to as <strong>zonal resources</strong>. Other resources, like <em>static external IP addresses</em>, are <strong>regional</strong>. </p>
<p>Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone.</p>
<p>For example, to attach a zonal persistent disk to an instance, both resources must be in the same zone. </p>
<p>Similarly, if you want to assign a <strong>static IP address</strong> to an instance, the instance must be in the <strong>same region</strong> as the <strong>static IP address</strong>. </p>
<blockquote>
<p>The overall underlying point is that the region where the IP
has been assigned accounts for the latency between the
user's machine and the data center the IP is served
from. By specifying the region, you'll allow yourself to have the best
connection possible and reduce latency.</p>
</blockquote>
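<p>As far as I know, you cannot convert a regional address into a global one; you reserve a new address with the scope you need. A sketch with gcloud (the names and region are placeholders):</p>
<pre><code># Reserve a regional static external IP (usable by regional resources)
gcloud compute addresses create my-regional-ip --region us-central1

# Reserve a global static external IP (usable by global HTTP(S) load balancers / GKE Ingress)
gcloud compute addresses create my-global-ip --global
</code></pre>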
<p>3 - <strong><em>Why having "pools" and how to manage them?</em></strong></p>
<p><strong>Answer</strong>: Looking at our public <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools" rel="nofollow noreferrer">documentation</a> on Node pools, we can see that a node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification, and each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which has the node pool's name as its value. A node pool can contain only a single node or many nodes.</p>
<p>For example, you might create a node pool in your cluster with local SSDs, a minimum CPU platform, preemptible VMs, a specific node image, larger instance sizes, or different machine types. Custom node pools are useful when you need to schedule Pods that require more resources than others, such as more memory or more local disk space. If you need more control of where Pods are scheduled, you can use node taints.</p>
<p>You can learn more about managing node pools by looking into this documentation <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#top_of_page" rel="nofollow noreferrer">here</a>. </p>
<p>4 - <strong><em>How does all (Static IP addresses, Load Balancers -- DNS Zones and Pools) fit together?</em></strong></p>
<p><strong>Answer</strong>: As mentioned earlier, all of these things (static IP addresses, load balancers -- DNS zones and pools) need to be in close proximity in order to work together. However, depending on which regions you connect to in your load balancer setup, you can also have connected regions.</p>
<p>Moreover, I would like to ask you the following questions, just so I can have a better Idea of the situation:</p>
<p>1 - When you say that you are <em>trying to rename an existing Kubernetes/Istio Google regional static Ip address that is attached to an Istio ingress to a Global Static ip address</em>, can you explain in more detail? Are we talking about zones, clusters, etc?</p>
<p>2 - Can you please provide an example on what you are trying to accomplish? Just so that I can have a better idea on what you would like to be done.</p>
| Anthony Leo |
<p>I have the problem that I cannot mount volumes to pods in Kubernetes using the Azure File CSI in Azure cloud.</p>
<p>The error message I am receiving in the pod is</p>
<pre><code> Warning FailedMount 38s kubelet Unable to attach or mount volumes: unmounted volumes=[sensu-backend-etcd], unattached volumes=[default-token-42kfh sensu-backend-etcd sensu-asset-server-ca-cert]: timed out waiting for the condition
</code></pre>
<p>My storageclass looks like the following:</p>
<pre><code>items:
- allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"azure-csi-standard-lrs"},"mountOptions":["dir_mode=0640","file_mode=0640","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock"],"parameters":{"location":"eastus","resourceGroup":"kubernetes-resource-group","shareName":"kubernetes","skuName":"Standard_LRS","storageAccount":"kubernetesrf"},"provisioner":"kubernetes.io/azure-file","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
storageclass.kubernetes.io/is-default-class: "true"
creationTimestamp: "2020-12-21T19:16:19Z"
managedFields:
- apiVersion: storage.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:allowVolumeExpansion: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:storageclass.kubernetes.io/is-default-class: {}
f:mountOptions: {}
f:parameters:
.: {}
f:location: {}
f:resourceGroup: {}
f:shareName: {}
f:skuName: {}
f:storageAccount: {}
f:provisioner: {}
f:reclaimPolicy: {}
f:volumeBindingMode: {}
manager: kubectl-client-side-apply
operation: Update
time: "2020-12-21T19:16:19Z"
name: azure-csi-standard-lrs
resourceVersion: "15914"
selfLink: /apis/storage.k8s.io/v1/storageclasses/azure-csi-standard-lrs
uid: 3de65d08-14e7-4d0b-a6fe-39ab9a714191
mountOptions:
- dir_mode=0640
- file_mode=0640
- uid=0
- gid=0
- mfsymlinks
- cache=strict
- nosharesock
parameters:
location: eastus
resourceGroup: kubernetes-resource-group
shareName: kubernetes
skuName: Standard_LRS
storageAccount: kubernetesrf
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>My PV and PVC are bound:</p>
<pre><code>sensu-backend-etcd 10Gi RWX Retain Bound sensu-system/sensu-backend-etcd azure-csi-standard-lrs 4m31s
</code></pre>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sensu-backend-etcd Bound sensu-backend-etcd 10Gi RWX azure-csi-standard-lrs 4m47s
</code></pre>
<p>In the kubelet log I get the following:</p>
<pre><code>Dez 21 19:26:37 kubernetes-3 kubelet[34828]: E1221 19:26:37.766476 34828 pod_workers.go:191] Error syncing pod bab5a69a-f8af-43f1-a3ae-642de8daa05d ("sensu-backend-0_sensu-system(bab5a69a-f8af-43f1-a3ae-642de8daa05d)"), skipping: unmounted volumes=[sensu-backend-etcd], unattached volumes=[sensu-backend-etcd sensu-asset-server-ca-cert default-token-42kfh]: timed out waiting for the condition
Dez 21 19:26:58 kubernetes-3 kubelet[34828]: I1221 19:26:58.002474 34828 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sensu-backend-etcd" (UniqueName: "kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd") pod "sensu-backend-0" (UID: "bab5a69a-f8af-43f1-a3ae-642de8daa05d")
Dez 21 19:26:58 kubernetes-3 kubelet[34828]: E1221 19:26:58.006699 34828 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd podName: nodeName:}" failed. No retries permitted until 2020-12-21 19:29:00.006639988 +0000 UTC m=+3608.682310977 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"sensu-backend-etcd\" (UniqueName: \"kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd\") pod \"sensu-backend-0\" (UID: \"bab5a69a-f8af-43f1-a3ae-642de8daa05d\") "
Dez 21 19:28:51 kubernetes-3 kubelet[34828]: E1221 19:28:51.768309 34828 kubelet.go:1594] Unable to attach or mount volumes for pod "sensu-backend-0_sensu-system(bab5a69a-f8af-43f1-a3ae-642de8daa05d)": unmounted volumes=[sensu-backend-etcd], unattached volumes=[sensu-backend-etcd sensu-asset-server-ca-cert default-token-42kfh]: timed out waiting for the condition; skipping pod
Dez 21 19:28:51 kubernetes-3 kubelet[34828]: E1221 19:28:51.768335 34828 pod_workers.go:191] Error syncing pod bab5a69a-f8af-43f1-a3ae-642de8daa05d ("sensu-backend-0_sensu-system(bab5a69a-f8af-43f1-a3ae-642de8daa05d)"), skipping: unmounted volumes=[sensu-backend-etcd], unattached volumes=[sensu-backend-etcd sensu-asset-server-ca-cert default-token-42kfh]: timed out waiting for the condition
Dez 21 19:29:00 kubernetes-3 kubelet[34828]: I1221 19:29:00.103881 34828 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sensu-backend-etcd" (UniqueName: "kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd") pod "sensu-backend-0" (UID: "bab5a69a-f8af-43f1-a3ae-642de8daa05d")
Dez 21 19:29:00 kubernetes-3 kubelet[34828]: E1221 19:29:00.108069 34828 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd podName: nodeName:}" failed. No retries permitted until 2020-12-21 19:31:02.108044076 +0000 UTC m=+3730.783715065 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"sensu-backend-etcd\" (UniqueName: \"kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd\") pod \"sensu-backend-0\" (UID: \"bab5a69a-f8af-43f1-a3ae-642de8daa05d\") "
Dez 21 19:31:02 kubernetes-3 kubelet[34828]: I1221 19:31:02.169246 34828 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sensu-backend-etcd" (UniqueName: "kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd") pod "sensu-backend-0" (UID: "bab5a69a-f8af-43f1-a3ae-642de8daa05d")
Dez 21 19:31:02 kubernetes-3 kubelet[34828]: E1221 19:31:02.172474 34828 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd podName: nodeName:}" failed. No retries permitted until 2020-12-21 19:33:04.172432877 +0000 UTC m=+3852.848103766 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"sensu-backend-etcd\" (UniqueName: \"kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd\") pod \"sensu-backend-0\" (UID: \"bab5a69a-f8af-43f1-a3ae-642de8daa05d\") "
Dez 21 19:31:09 kubernetes-3 kubelet[34828]: E1221 19:31:09.766084 34828 kubelet.go:1594] Unable to attach or mount volumes for pod "sensu-backend-0_sensu-system(bab5a69a-f8af-43f1-a3ae-642de8daa05d)": unmounted volumes=[sensu-backend-etcd], unattached volumes=[default-token-42kfh sensu-backend-etcd sensu-asset-server-ca-cert]: timed out waiting for the condition; skipping pod
</code></pre>
<p>In the kube-controller-manager pod I get:</p>
<pre><code>E1221 20:21:34.069309 1 csi_attacher.go:500] kubernetes.io/csi: attachdetacher.WaitForDetach timeout after 2m0s [volume=sensu-backend-etcd; attachment.ID=csi-9a83de4bef35f5d01e10e3a7d598204c459cac705371256e818e3a35b4b29e4e]
E1221 20:21:34.069453 1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd podName: nodeName:kubernetes-3}" failed. No retries permitted until 2020-12-21 20:21:34.569430175 +0000 UTC m=+6862.322990347 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume \"sensu-backend-etcd\" (UniqueName: \"kubernetes.io/csi/file.csi.azure.com^sensu-backend-etcd\") from node \"kubernetes-3\" : attachdetachment timeout for volume sensu-backend-etcd"
I1221 20:21:34.069757 1 event.go:291] "Event occurred" object="sensu-system/sensu-backend-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"sensu-backend-etcd\" : attachdetachment timeout for volume sensu-backend-etcd"
</code></pre>
<p>Anyone who knows this error and how to mitigate it?</p>
<p>Thanks in advance.</p>
<p>Best regards,
rforberger</p>
| Ronny Forberger | <p>I fixed it.</p>
<p>I switched to the disk.csi.azure.com provisioner and I had to use a volume name as a resource link to Azure like</p>
<pre><code> volumeHandle: /subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/kubernetes-resource-group/providers/Microsoft.Compute/disks/sensu-backend-etcd
</code></pre>
<p>in the PV.</p>
<p>Also, I had some mount options in the PV, which did not work with the Azure disk provisioner.</p>
| Ronny Forberger |
<p>Is it possible to use Workload Identity to access, from a GKE pod, a GCP service of another project? That is, a project different from the one in which the GKE cluster is created.</p>
<p>Thanks</p>
| Fares | <p>Yes, you can. If the service account bound to your K8S service account is authorized to access resources in the other project, there is no issue. It's the same thing as with your user account or other service accounts: grant the account access to the resources and that's enough!</p>
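<p>For example, granting the Google service account bound to your K8S service account access to the other project is a normal IAM binding; a sketch with placeholder project, account and role:</p>
<pre><code>gcloud projects add-iam-policy-binding other-project-id \
    --member="serviceAccount:my-gsa@my-gke-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
</code></pre>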
| guillaume blaquiere |
<p>I'm trying to launch a SNMP query from a pod uploaded in an Azure cloud to an internal host on my company's network. The snmpget queries work well from the pod to, say, a public SNMP server, but the query to my target host results in:</p>
<pre><code>root@status-tanner-api-86557c6786-wpvdx:/home/status-tanner-api/poller# snmpget -c public -v 2c 192.168.118.23 1.3.6.1.2.1.1.1.0
Timeout: No Response from 192.168.118.23.
</code></pre>
<p>an NMAP shows that the SNMP port is open|filtered:</p>
<pre><code>Nmap scan report for 192.168.118.23
Host is up (0.16s latency).
PORT STATE SERVICE
161/udp open|filtered snmp
</code></pre>
<p>I requested a new rule to allow 161UDP from my pod, but I'm suspecting that I requested the rule to be made for the wrong IP address.</p>
<p>My theory is that I should be able to determine the IP address my pod uses to access this target host if <em>I could get inside the target host, open a connection from the pod and see using <code>netstat</code> which is the IP address my pod is using</em>. The problem is that I currently have no access to this host.
So, my question is <strong>How can I see from which address my pod is reaching the target host?</strong> Some sort of public address is obviously being used, but I can't tell which one is it without entering the target host.</p>
<p>I'm pretty sure I'm missing an important network tool that should help me in this situation. Any suggestion would be profoundly appreciated.</p>
| diego92sigma6 | <p>By default Kubernetes will use your node IP to reach the other servers, so you need to make a firewall rule using your node IP.</p>
<p>I've tested using a busybox pod to reach another server in my network.</p>
<p>Here is my lab-1 node IP with ip <code>10.128.0.62</code>:</p>
<pre><code>$rabello@lab-1:~ ip ad | grep ens4 | grep inet
inet 10.128.0.62/32 scope global dynamic ens4
</code></pre>
<p>In this node I have a busybox pod with the ip <code>192.168.251.219</code>:</p>
<pre><code>$ kubectl exec -it busybox sh
/ # ip ad | grep eth0 | grep inet
inet 192.168.251.219/32 scope global eth0
</code></pre>
<p>When performing a ping test to another server in the network (server-1) we have:</p>
<pre><code>/ # ping 10.128.0.61
PING 10.128.0.61 (10.128.0.61): 56 data bytes
64 bytes from 10.128.0.61: seq=0 ttl=63 time=1.478 ms
64 bytes from 10.128.0.61: seq=1 ttl=63 time=0.337 ms
^C
--- 10.128.0.61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.337/0.907/1.478 ms
</code></pre>
<p>Using tcpdump on server-1, we can see the ping requests from my pod using the node IP of lab-1:</p>
<pre><code>rabello@server-1:~$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:16:09.291714 IP 10.128.0.62 > 10.128.0.61: ICMP echo request, id 6230, seq 0, length 64
10:16:09.291775 IP 10.128.0.61 > 10.128.0.62: ICMP echo reply, id 6230, seq 0, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
</code></pre>
<p>Make sure you have an appropriate firewall rule to allow your node (or your VPC range) to reach your destination, and check whether your VPN is up (if you have one).</p>
<p>I hope it helps! =)</p>
| Mr.KoopaKiller |
<p>I can get all the things of a list of objects, such as <code>Secrets</code> and <code>ConfigMaps</code>.</p>
<pre><code>{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kube-system/secrets",
"resourceVersion": "499638"
},
"items": [{
"metadata": {
"name": "aaa",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "96b0fbee-f14c-423d-9734-53fed20ae9f9",
"resourceVersion": "1354",
"creationTimestamp": "2020-02-24T11:20:23Z"
},
"data": "aaa"
}]
}
</code></pre>
<p>but I only want the name list, for this example: <code>"aaa"</code>. Is there any way?</p>
| qinghai5060 | <p>Yes, you can achieve it by using <code>jsonpath</code> output. Note that the specification you posted will look quite different once applied. It will create one <code>Secret</code> object in your <code>kube-system</code> namespace, and when you run:</p>
<pre><code>$ kubectl get secret -n kube-system aaa -o json
</code></pre>
<p>the output will look similar to the following:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"creationTimestamp": "2020-02-25T11:08:21Z",
"name": "aaa",
"namespace": "kube-system",
"resourceVersion": "34488887",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "229edeb3-57bf-11ea-b366-42010a9c0093"
},
"type": "Opaque"
}
</code></pre>
<p>To get only the <code>name</code> of your <code>Secret</code> you need to run:</p>
<pre><code>kubectl get secret aaa -n kube-system -o jsonpath='{.metadata.name}'
</code></pre>
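<p>And if you want the whole name list from the <code>SecretList</code> rather than a single object, you can iterate over <code>items</code>, for example:</p>
<pre><code># Space-separated list of all secret names in the namespace
kubectl get secrets -n kube-system -o jsonpath='{.items[*].metadata.name}'

# One name per line
kubectl get secrets -n kube-system -o custom-columns=NAME:.metadata.name --no-headers
</code></pre>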
| mario |
<h1>What I have</h1>
<p>I have a Kubernetes cluster as follow:</p>
<ul>
<li>Single control plane (but plan to extend to 3 control plane for HA)</li>
<li>2 worker nodes</li>
</ul>
<p><br><br>
On this cluster I deployed (following this doc from traefik <a href="https://docs.traefik.io/user-guides/crd-acme/" rel="nofollow noreferrer">https://docs.traefik.io/user-guides/crd-acme/</a>):</p>
<ul>
<li><p>A deployment that create two pods :</p>
<ul>
<li>traefik itself: which will be in charge of routing, with exposed ports 80 and 8080</li>
<li>whoami: a simple HTTP server that responds to HTTP requests</li>
</ul>
</li>
<li><p>two services</p>
<ul>
<li>traefik service: <a href="https://i.stack.imgur.com/U1Zub.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U1Zub.png" alt="" /></a></li>
<li>whoami service: <a href="https://i.stack.imgur.com/hoIQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hoIQt.png" alt="" /></a></li>
</ul>
</li>
<li><p>One traefik IngressRoute:
<a href="https://i.stack.imgur.com/x5OIW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x5OIW.png" alt="" /></a></p>
</li>
</ul>
<h1>What I want</h1>
<p>I have multiple services running in the cluster and I want to expose them to the outside using Ingress.
More precisely, I want to use the new <strong>Traefik 2.x</strong> CRD ingress methods.</p>
<p>My ultimate goal is to use the new traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using <code>IngressRoute</code> Custom Resource Definitions</p>
<h1>What's the problem</h1>
<p>If I understand well, classic Ingress controllers allow exposing any ports we want to the outside world (including 80, 8080 and 443).</p>
<p>But the new traefik CRD ingress approach on its own does not export anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy to be able to expose ports 80 and 443...</p>
<p>Also I've seen from the doc of the new ingress CRD (<a href="https://docs.traefik.io/user-guides/crd-acme/" rel="nofollow noreferrer">https://docs.traefik.io/user-guides/crd-acme/</a>) that:</p>
<p><code>kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default</code></p>
<p>is required, and I understand that now. You need to map the host port to the service port.
But mapping the ports that way feels clunky and counter-intuitive. I don't want to have part of the service description in a yaml file and at the same time have to remember that I need to map ports with <code>kubectl</code>.</p>
<p>I'm pretty sure there is a neat and simple solution to this problem, but I can't figure out how to keep things simple. Do you guys have experience in Kubernetes with the new traefik 2.x CRD config?</p>
| Anthony Raymond | <p>You can try using the LoadBalancer service type to expose the Traefik service on ports 80, 443 and 8080. I've tested the yaml from the link you provided in GKE, and it works.</p>
<p>You need to change the ports on the 'traefik' service and add 'LoadBalancer' as the service type:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
ports:
- protocol: TCP
name: web
port: 80 <== Port to receive HTTP connections
- protocol: TCP
name: admin
port: 8080 <== Administration port
- protocol: TCP
name: websecure
port: 443 <== Port to receive HTTPS connections
selector:
app: traefik
type: LoadBalancer <== Define the type load balancer
</code></pre>
<p>Kubernetes will create a LoadBalancer for your service and you can access your application using ports 80 and 443.</p>
<pre><code>$ curl https://35.111.XXX.XX/tls -k
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /tls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
$ curl http://35.111.XXX.XX/notls
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /notls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
</code></pre>
| Mr.KoopaKiller |
<p>I have a k8s cluster with pods, deployments, etc.
I am using helm to deploy my app. I want to delete all deployments and am using the command below</p>
<pre><code>helm delete myNamespace --purge
</code></pre>
<p>If I look at the status of my pods, I see that they are in a terminating state; the problem is that it takes time. Is there any way to remove them instantly, with some force flag or something?</p>
| liotur | <p>You can try the following command:</p>
<pre><code>helm delete myNamespace --purge --no-hooks
</code></pre>
<p>Also, you can use kubectl to forcefully delete the pods, instead of waiting for termination.</p>
<p>Here's what I got from this link.
<a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/</a></p>
<p>If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:</p>
<pre><code>kubectl delete pods <pod> --grace-period=0 --force
</code></pre>
<p>If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:</p>
<pre><code>kubectl delete pods <pod> --grace-period=0
</code></pre>
<p>If even after these commands the pod is stuck on Unknown state, use the following command to remove the pod from the cluster:</p>
<pre><code>kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
</code></pre>
<p>Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.</p>
| Muhammad Abdul Raheem |
<p>We have two k8s clusters:</p>
<p>Cluster details:</p>
<pre><code>Cluster 1:
Pod Network: 172.16.1.0/24
Node Network: 172.16.2.0/24
Cluster 2:
Pod Network: 172.16.3.0/24
Node Network: 172.16.4.0/24
</code></pre>
<p>All these networks are connectivity to one another.</p>
<p>Suppose we have 3 pods in each clusters </p>
<pre><code>Cluster1-Pod1: IP: 172.16.1.1
Cluster1-Pod2: IP: 172.16.1.2
Cluster1-Pod3: IP: 172.16.1.3
Cluster2-Pod1: IP: 172.16.3.1
Cluster2-Pod2: IP: 172.16.3.2
Cluster2-Pod3: IP: 172.16.3.3
</code></pre>
<p>How can one access the apps in pods of <code>cluster2</code> from <code>cluster1</code> without creating a <code>k8s</code> service, using the Pod IP or <code>hostname</code>?
Is there any solution available to publish/advertise pod <code>IP/hostname</code> from one cluster to another?
If creating a service is mandatory, are there any options to achieve it without Type: LoadBalancer or Ingress?</p>
<p>Appreciate any inputs.</p>
| user3133062 | <p>Are you sure you are not confusing nodes with clusters? Because if these pods exist on different nodes, within the same cluster, then you can simply access the pod with its IP.</p>
<p>Otherwise, it is not possible to connect to your pod from outside the cluster without making your pod public via a service (NodePort, LoadBalancer) or Ingress.</p>
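<p>If a service is acceptable, a NodePort is the lightest option (no LoadBalancer or Ingress needed); a minimal sketch with placeholder names and ports:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app          # must match your pod labels
  ports:
  - port: 8080           # service port
    targetPort: 8080     # container port
    nodePort: 30080      # reachable on every node IP of the cluster
</code></pre>
<p>Pods in the other cluster could then reach the app at <code>http://<node-ip>:30080</code>, since the node networks are routable to each other.</p>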
| Muhammad Abdul Raheem |
<p>In my next.config.js, I have a part that looks like this:</p>
<pre><code>module.exports = {
serverRuntimeConfig: { // Will only be available on the server side
mySecret: 'secret'
},
publicRuntimeConfig: { // Will be available on both server and client
PORT: process.env.PORT,
GOOGLE_CLIENT_ID: process.env.GOOGLE_CLIENT_ID,
BACKEND_URL: process.env.BACKEND_URL
}
</code></pre>
<p>I have a .env file and when run locally, the Next.js application succesfully fetches the environment variables from the .env file.</p>
<p>I refer to the env variables like this for example:</p>
<pre><code>axios.get(publicRuntimeConfig.BACKOFFICE_BACKEND_URL)
</code></pre>
<p>However, when I have this application deployed onto my Kubernetes cluster, the environment variables set in the deploy file are not being collected. So they return as undefined. </p>
<p>I read that .env files cannot be read due to the differences between frontend (browser based) and backend (Node based), but there must be some way to make this work. </p>
<p>Does anyone know how to use environment variables saved in your pods/containers deploy file on your frontend (browser based) application? </p>
<p>Thanks.</p>
<p><strong>EDIT 1:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "38"
creationTimestamp: xx
generation: 40
labels:
app: appname
name: appname
namespace: development
resourceVersion: xx
selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
uid: xxx
spec:
progressDeadlineSeconds: xx
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: appname
tier: sometier
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
- name: SOME_VAR
value: xxxx
image: someimage
imagePullPolicy: Always
name: appname
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: xxx
lastUpdateTime: xxxx
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 40
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
| BURGERFLIPPER101 | <p>You can create a config-map and then mount it as a file in your deployment with your custom environment variables.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "38"
creationTimestamp: xx
generation: 40
labels:
app: appname
name: appname
namespace: development
resourceVersion: xx
selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
uid: xxx
spec:
progressDeadlineSeconds: xx
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: appname
tier: sometier
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
- name: SOME_VAR
value: xxxx
volumeMounts:
- name: environment-variables
mountPath: "your/path/to/store/the/file"
readOnly: true
image: someimage
imagePullPolicy: Always
name: appname
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: environment-variables
configMap:
name: environment-variables
items:
- key: .env
path: .env
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: xxx
lastUpdateTime: xxxx
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 40
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p>I added the following configuration in your deployment file:</p>
<pre><code> volumeMounts:
- name: environment-variables
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: environment-variables
configMap:
name: environment-variables
items:
- key: .env
path: .env
</code></pre>
<p>You can then create a config map with key ".env" with your environment variables on kubernetes.</p>
<p>Configmap like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: environment-variables
namespace: your-namespace
data:
.env: |
    variable1=value1
    variable2=value2
</code></pre>
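<p>Alternatively, as a minimal sketch (assuming you already keep these values in a local <code>.env</code> file), you could create the ConfigMap directly from that file instead of writing the manifest by hand:</p>
<pre><code># creates a ConfigMap whose single key is ".env", matching the volume items above
kubectl create configmap environment-variables \
  --from-file=.env \
  --namespace=your-namespace
</code></pre>
<p>Either way, your frontend build can then read the mounted <code>.env</code> file from the path you chose in <code>mountPath</code>.</p>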
| Muhammad Abdul Raheem |
<p>My pod needs to access <code>/dev/kvm</code> but it cannot run as privileged for security reasons.</p>
<p>How do I do this in Kubernetes?</p>
| Nick | <p>There is a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/" rel="nofollow noreferrer">device-plugin</a> called <a href="https://github.com/kubevirt/kubernetes-device-plugins/blob/master/docs/README.kvm.md" rel="nofollow noreferrer">KVM Device Plugin</a> that serves exactly for this purpose.</p>
<blockquote>
<p>This software is a kubernetes device plugin that exposes /dev/kvm from
the system.</p>
</blockquote>
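<p>Once the plugin is deployed (typically as a DaemonSet), a pod can request the device through its resources section instead of running privileged. A minimal sketch; the resource name <code>devices.kubevirt.io/kvm</code> is the one advertised by that plugin, but double-check it against the plugin's README for your version, and the image name is a placeholder:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kvm-consumer
spec:
  containers:
  - name: app
    image: your-image                    # placeholder image
    resources:
      limits:
        devices.kubevirt.io/kvm: "1"     # /dev/kvm is injected by the device plugin
</code></pre>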
| mario |
<p>I have an application running inside in the kubernetes cluster where I am making an API call to an endpoint lets say <code>www.example.com/api</code> which another team maintains, but the request is timing out.
I discovered that the IPs need to be whitelisted in order to make a successful request to that endpoint, and we whitelisted the cluster IP.
Also at this point, we did not whitelist the node IPs that I got by running
<code> kubectl get nodes -o wide</code> . Any pointers will be very helpful.</p>
| Jainam Shah | <p>If you whitelisted the control plane IP, it's useless: it's not the control plane that performs the API call, but your code running in the Pods.</p>
<p>And the pods run on your nodes. The problem is: if your cluster can automatically scale the number of nodes, you don't know in advance which IPs you will have.</p>
<p>(It's also for that reason that Google says not to trust the network (the IP) but the identity (the authentication that you can provide with your API Call)).</p>
<hr />
<p>Anyway, one recommended and secure way to solve your issue is to create a cluster with private nodes (no public IPs) and to add a Cloud NAT that NATs the external calls through one or more static (and owned) public IPs.</p>
<p>Because these are YOUR IPs, you can trust and allow them (no risk of another Google Cloud customer reusing an IP from the Google pool that you used before).</p>
<p>You can find a sample <a href="https://cloud.google.com/nat/docs/gke-example" rel="nofollow noreferrer">here</a></p>
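<p>As a rough sketch of the commands involved (the names and <code>REGION</code>/<code>VPC_NAME</code> are placeholders; see the linked sample for the authoritative steps):</p>
<pre><code># reserve a static external IP that you own and can safely whitelist
gcloud compute addresses create k8s-egress-ip --region=REGION

# Cloud NAT requires a Cloud Router in the same VPC/region
gcloud compute routers create k8s-router --network=VPC_NAME --region=REGION

# NAT the private nodes' egress traffic through the reserved IP
gcloud compute routers nats create k8s-nat \
  --router=k8s-router --region=REGION \
  --nat-external-ip-pool=k8s-egress-ip \
  --nat-all-subnet-ip-ranges
</code></pre>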
| guillaume blaquiere |
<p>I am running a django application in a Kubernetes cluster on gcloud. I implemented the database migration as a helm pre-install hook that launches my app container and does the database migration. I use cloud-sql-proxy in a sidecar pattern as recommended in the official tutorial: <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a></p>
<p>Basically this launches my app and a cloud-sql-proxy container within the pod described by the job. The problem is that cloud-sql-proxy never terminates after my app has completed the migration, causing the pre-install job to time out and cancel my deployment. How do I gracefully exit the cloud-sql-proxy container after my app container completes so that the job can complete?</p>
<p>Here is my helm pre-install hook template definition:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: database-migration-job
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
activeDeadlineSeconds: 230
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: db-migrate
image: {{ .Values.my-project.docker_repo }}{{ .Values.backend.image }}:{{ .Values.my-project.image.tag}}
imagePullPolicy: {{ .Values.my-project.image.pullPolicy }}
env:
- name: DJANGO_SETTINGS_MODULE
value: "{{ .Values.backend.django_settings_module }}"
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
name: sendgrid-api-key
key: sendgrid-api-key
- name: DJANGO_SECRET_KEY
valueFrom:
secretKeyRef:
name: django-secret-key
key: django-secret-key
- name: DB_USER
value: {{ .Values.postgresql.postgresqlUsername }}
- name: DB_PASSWORD
{{- if .Values.postgresql.enabled }}
value: {{ .Values.postgresql.postgresqlPassword }}
{{- else }}
valueFrom:
secretKeyRef:
name: database-password
key: database-pwd
{{- end }}
- name: DB_NAME
value: {{ .Values.postgresql.postgresqlDatabase }}
- name: DB_HOST
{{- if .Values.postgresql.enabled }}
value: "postgresql"
{{- else }}
value: "127.0.0.1"
{{- end }}
workingDir: /app-root
command: ["/bin/sh"]
args: ["-c", "python manage.py migrate --no-input"]
{{- if eq .Values.postgresql.enabled false }}
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.17
command:
- "/cloud_sql_proxy"
- "-instances=<INSTANCE_CONNECTION_NAME>=tcp:<DB_PORT>"
- "-credential_file=/secrets/service_account.json"
securityContext:
#fsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
volumeMounts:
- name: db-con-mnt
mountPath: /secrets/
readOnly: true
volumes:
- name: db-con-mnt
secret:
secretName: db-service-account-credentials
{{- end }}
</code></pre>
<p>Funny enough, if I kill the job with "kubectl delete jobs database-migration-job" after the migration is done the helm upgrade completes and my new app version gets installed.</p>
| Vess Perfanov | <p>Well, I have a solution which will work but might be hacky. First of all, this is a feature Kubernetes is lacking, which is in discussion in this <a href="https://github.com/kubernetes/kubernetes/issues/40908" rel="nofollow noreferrer">issue</a>.</p>
<p>Since Kubernetes v1.17, containers in the same Pod can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer">share a process namespace</a>. This enables us to kill the proxy container's process from the app container. Since this is a Kubernetes Job, there shouldn't be any problem with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">using lifecycle handlers</a> for the app container.</p>
<p>With this solution, when your app finishes and exits normally (or abnormally), Kubernetes will run one last command from your dying container, which in this case is <code>kill the other process</code>. This should result in job completion with success or failure depending on how you kill the process. The process exit code becomes the container exit code, which then basically becomes the job exit code.</p>
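<p>A minimal sketch of how the Job spec above could be adapted. Assumptions: the cluster runs v1.17+, the app image contains <code>pkill</code>, the app container's user is allowed to signal the proxy process, and the proxy binary is named <code>cloud_sql_proxy</code> (as in the command in the question):</p>
<pre><code>spec:
  template:
    spec:
      shareProcessNamespace: true        # processes of both containers see each other
      restartPolicy: Never
      containers:
      - name: db-migrate
        # ...same image/env as above...
        command: ["/bin/sh", "-c"]
        args:
          - |
            python manage.py migrate --no-input
            status=$?
            # stop the sidecar so the Job can complete
            pkill -INT cloud_sql_proxy || true
            exit $status
      - name: cloud-sql-proxy
        # ...unchanged...
</code></pre>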
| Akin Ozer |
<p>I am using kubectl in order to retrieve a list of pods:</p>
<pre><code> kubectl get pods --selector=artifact=boot-example -n my-sandbox
</code></pre>
<p>The results which I am getting are:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
boot-example-757c4c6d9c-kk7mg 0/1 Running 0 77m
boot-example-7dd6cd8d49-d46xs 1/1 Running 0 84m
boot-example-7dd6cd8d49-sktf8 1/1 Running 0 88m
</code></pre>
<p>I would like to get only those pods which are "<strong>ready</strong>" (passed readinessProbe). Is there any kubectl command which returns only "<strong>ready</strong>" pods? If not kubectl command, then maybe some other way?</p>
| fascynacja | <p>You can use this command:</p>
<pre><code>kubectl -n your-namespace get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY-true:status.containerStatuses[*].ready | grep true
</code></pre>
<p>This will return you the pods with containers that are "<strong>ready</strong>".</p>
<p>To do this without grep, you can use a go-template; a jsonpath query can print each pod name together with its containers' readiness flags (jsonpath cannot easily filter on the boolean itself, so that variant still needs a final filter such as <code>grep true</code>):</p>
<pre><code>kubectl -n your-namespace get pods -o go-template='{{range $index, $element := .items}}{{range .status.containerStatuses}}{{if .ready}}{{$element.metadata.name}}{{"\n"}}{{end}}{{end}}{{end}}'
kubectl -n your-namespace get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].ready}{"\n"}{end}'
</code></pre>
<p>The go-template version will return the names of the pods whose containers are "<strong>ready</strong>".</p>
| Muhammad Abdul Raheem |
<p>I am trying to deploy a small <code>Node.js</code> server using <code>Kubernetes</code>. And I have exposed this app internally as well as externally using <code>ClusterIP</code> type service and <code>NodePort</code> type service respectively. </p>
<p>I can, without any problem connect internally to the app using <code>ClusterIP</code> service.</p>
<p><strong>Problem is I can't use <code>NodePort</code> service to connect to app</strong></p>
<p>I am running the <code>curl</code> cmd against the ClusterIP and NodePort from my <code>master</code> node. As mentioned, only the ClusterIP is working.</p>
<p>Here is my <code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-test
spec:
replicas: 2
selector:
matchLabels:
name: deployment-test
template:
metadata:
labels:
# you can specify any labels you want here
name: deployment-test
spec:
containers:
- name: deployment-test
# image must be the same as you built before (name:tag)
image: banukajananathjayarathna/bitesizetroubleshooter:v1
ports:
- name: http
containerPort: 8080
protocol: TCP
imagePullPolicy: Always
terminationGracePeriodSeconds: 60
</code></pre>
<p>And here is my 'clusterip.yaml`</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
labels:
# these labels can be anything
name: deployment-test-clusterip
name: deployment-test-clusterip
spec:
selector:
name: deployment-test
ports:
- protocol: TCP
port: 80
# target is the port exposed by your containers (in our example 8080)
targetPort: 8080
</code></pre>
<p>and here is <code>nodeport.yaml</code></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
labels:
name: deployment-test-nodeport
name: deployment-test-nodeport
spec:
# this will make the service a NodePort service
type: NodePort
selector:
name: deployment-test
ports:
- protocol: TCP
# new -> this will be the port used to reach it from outside
# if not specified, a random port will be used from a specific range (default: 30000-32767)
# nodePort: 32556
port: 80
targetPort: 8080
</code></pre>
<p>And here are my services:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get svc -n test49
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-test-clusterip ClusterIP 172.31.118.67 <none> 80/TCP 3d8h
deployment-test-nodeport NodePort 172.31.11.65 <none> 80:30400/TCP 3d8h
</code></pre>
<p>When I try (from master) <code>$ curl 172.31.118.67</code>, then it gives <code>Hello world</code> as the output from the app.</p>
<p><strong>But</strong>, when I run <code>$ curl 172.31.11.65</code>, I get the following error:</p>
<pre class="lang-sh prettyprint-override"><code>$ curl 172.31.11.65
curl: (7) Failed to connect to 172.31.11.65 port 80: Connection refused
</code></pre>
<p>I even tried <code>$ curl 172.31.11.65:80</code> and <code>$ curl 172.31.11.65:30400</code>, it still gives the error.</p>
<p>Can someone please tell me what I have done wrong here?</p>
| Jananath Banuka | <blockquote>
<p>When I try (from master) <code>$ curl 172.31.118.67</code>, then it gives <code>Hello
world</code> as the output from the app.</p>
</blockquote>
<p>It works because you exposed your deployment within the cluster using <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="nofollow noreferrer">ClusterIP Service</a> which "listens" on port 80 on its <code>ClusterIP</code> (172.31.118.67). As the name says it is an IP available only within your cluster. If you want to expose your <code>Deployment</code> to so called external world you cannot do it using this <code>Service</code> type.</p>
<p>Generally you use <code>ClusterIP</code> to make your application component (e.g. set of backend <code>Pods</code>) available for other application components, in other words to expose it within your cluster. Good use case for <code>ClusterIP</code> service is exposing your database (which can be additionally clustered and run as a set of <code>Pods</code>) for backend <code>Pods</code> which need to connect to it, as a single endpoint.</p>
<blockquote>
<p><strong>But</strong>, when I run <code>$ curl 172.31.11.65</code>, I get the following error:</p>
<p><code>bash $ curl 172.31.11.65 curl: (7) Failed to connect to
172.31.11.65 port 80: Connection refused</code></p>
</blockquote>
<p>Where are you trying to connect from ? It should be accessible from other <code>Pods</code> in your cluster as well as from your nodes. As @suren already mentioned in his answer, <code>NodePort</code> Service has all features of <code>ClusterIP</code> Service, but it additionally exposes your Deployment on some random port in range <code>30000-32767</code> on your Node's IP address (if another port within this range wasn't specified explicitly in <code>spec.ports.nodePort</code> in your <code>Service</code> definition). So basically any <code>Service</code> like <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> has also its <code>ClusterIP</code>.</p>
<blockquote>
<p>I even tried <code>$ curl 172.31.11.65:80</code> and <code>$ curl 172.31.11.65:30400</code>,
it still gives the error.</p>
</blockquote>
<p><code>curl 172.31.11.65:80</code> should have exactly the same effect as <code>curl 172.31.11.65</code>, as <code>80</code> is the default http port. Doing <code>curl 172.31.11.65:30400</code> is pointless as nothing is "listening" on this port on its <code>ClusterIP</code> (a Service is actually nothing more than a set of iptables port-forwarding rules, so in fact there is nothing really listening on this port). This port is used only to expose your <code>Pods</code> on your worker nodes' IP addresses. Btw. you can check them simply by running <code>ip -4 a</code> and searching through the available network interfaces (this applies to an on-premise kubernetes installation). If you are using some cloud environment instead, you will not see your nodes' external IPs in your system, e.g. in GCP you can see them on your Compute Engine VMs list in the GCP console. Additionally you need to set appropriate firewall rules, as by default traffic to such random ports is blocked. So after allowing TCP ingress traffic to port <code>30400</code>, you should be able to access your application on <code>http://<any-of-your-k8s-cluster-worker-nodes-external-IP-address>:30400</code></p>
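<p>For example, on GCP opening the NodePort from the question would look roughly like this (the rule name is arbitrary and you would normally restrict <code>--source-ranges</code> instead of opening it to everyone):</p>
<pre><code>gcloud compute firewall-rules create allow-nodeport-30400 \
  --allow=tcp:30400 \
  --source-ranges=0.0.0.0/0   # better: restrict to the IPs that should reach it
</code></pre>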
<p><code>ip -4 a</code> will still show you the internal IP address of your node and you should be able to connect using <code>curl <this-internal-ip>:30400</code> as well as using <code>curl 127.0.0.1:30400</code> (from node) as by default <code>kube-proxy</code> considers all available network interfaces for <code>NodePort</code> Service:</p>
<blockquote>
<p>The default for --nodeport-addresses is an empty list. This means that
kube-proxy should consider all available network interfaces for
NodePort.</p>
</blockquote>
<p>If you primarily want to expose your Deployment to the external world and you want it to be available on a standard port, I would recommend using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> rather than <code>NodePort</code>. If you use some cloud environment it is available out of the box and you can easily define it like any other Service type without any additional configuration. If you have an on-prem k8s installation you'll need to resort to some additional solution such as <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p>
<p>Let me know if it clarified a bit using <code>Service</code> in kubernetes. It's still not completely clear to me what you want to achieve so it would be nice if you explain it in more detail.</p>
| mario |
<p>We just started to create our cluster on Kubernetes.</p>
<p>Now we are trying to deploy tiller but we get an error:</p>
<blockquote>
<p>NetworkPlugin cni failed to set up pod
"tiller-deploy-64c9d747bd-br9j7_kube-system" network: open
/run/flannel/subnet.env: no such file or directory</p>
</blockquote>
<p>After that I call:</p>
<pre><code>kubectl get pods --all-namespaces -o wide
</code></pre>
<p>And got response:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-system coredns-78fcdf6894-ksdvt 1/1 Running 2 7d 192.168.0.4 kube-master <none>
kube-system coredns-78fcdf6894-p4l9q 1/1 Running 2 7d 192.168.0.5 kube-master <none>
kube-system etcd-kube-master 1/1 Running 2 7d 10.168.209.20 kube-master <none>
kube-system kube-apiserver-kube-master 1/1 Running 2 7d 10.168.209.20 kube-master <none>
kube-system kube-controller-manager-kube-master 1/1 Running 2 7d 10.168.209.20 kube-master <none>
kube-system kube-flannel-ds-amd64-42rl7 0/1 CrashLoopBackOff 2135 7d 10.168.209.17 node5 <none>
kube-system kube-flannel-ds-amd64-5fx2p 0/1 CrashLoopBackOff 2164 7d 10.168.209.14 node2 <none>
kube-system kube-flannel-ds-amd64-6bw5g 0/1 CrashLoopBackOff 2166 7d 10.168.209.15 node3 <none>
kube-system kube-flannel-ds-amd64-hm826 1/1 Running 1 7d 10.168.209.20 kube-master <none>
kube-system kube-flannel-ds-amd64-thjps 0/1 CrashLoopBackOff 2160 7d 10.168.209.16 node4 <none>
kube-system kube-flannel-ds-amd64-w99ch 0/1 CrashLoopBackOff 2166 7d 10.168.209.13 node1 <none>
kube-system kube-proxy-d6v2n 1/1 Running 0 7d 10.168.209.13 node1 <none>
kube-system kube-proxy-lcckg 1/1 Running 0 7d 10.168.209.16 node4 <none>
kube-system kube-proxy-pgblx 1/1 Running 1 7d 10.168.209.20 kube-master <none>
kube-system kube-proxy-rnqq5 1/1 Running 0 7d 10.168.209.14 node2 <none>
kube-system kube-proxy-wc959 1/1 Running 0 7d 10.168.209.15 node3 <none>
kube-system kube-proxy-wfqqs 1/1 Running 0 7d 10.168.209.17 node5 <none>
kube-system kube-scheduler-kube-master 1/1 Running 2 7d 10.168.209.20 kube-master <none>
kube-system kubernetes-dashboard-6948bdb78-97qcq 0/1 ContainerCreating 0 7d <none> node5 <none>
kube-system tiller-deploy-64c9d747bd-br9j7 0/1 ContainerCreating 0 45m <none> node4 <none>
</code></pre>
<p>We have some flannel pods in CrashLoopBackOff status. For example <code>kube-flannel-ds-amd64-42rl7</code>.</p>
<p>When I call:</p>
<pre><code>kubectl describe pod -n kube-system kube-flannel-ds-amd64-42rl7
</code></pre>
<p>I've got status <code>Running</code>:</p>
<pre><code>Name: kube-flannel-ds-amd64-42rl7
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: node5/10.168.209.17
Start Time: Wed, 22 Aug 2018 16:47:10 +0300
Labels: app=flannel
controller-revision-hash=911701653
pod-template-generation=1
tier=node
Annotations: <none>
Status: Running
IP: 10.168.209.17
Controlled By: DaemonSet/kube-flannel-ds-amd64
Init Containers:
install-cni:
Container ID: docker://eb7ee47459a54d401969b1770ff45b39dc5768b0627eec79e189249790270169
Image: quay.io/coreos/flannel:v0.10.0-amd64
Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/etc/kube-flannel/cni-conf.json
/etc/cni/net.d/10-flannel.conflist
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 22 Aug 2018 16:47:24 +0300
Finished: Wed, 22 Aug 2018 16:47:24 +0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/cni/net.d from cni (rw)
/etc/kube-flannel/ from flannel-cfg (rw)
/var/run/secrets/kubernetes.io/serviceaccount from flannel-token-9wmch (ro)
Containers:
kube-flannel:
Container ID: docker://521b457c648baf10f01e26dd867b8628c0f0a0cc0ea416731de658e67628d54e
Image: quay.io/coreos/flannel:v0.10.0-amd64
Image ID: docker-pullable://quay.io/coreos/flannel@sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
Port: <none>
Host Port: <none>
Command:
/opt/bin/flanneld
Args:
--ip-masq
--kube-subnet-mgr
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 30 Aug 2018 10:15:04 +0300
Finished: Thu, 30 Aug 2018 10:15:08 +0300
Ready: False
Restart Count: 2136
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_NAME: kube-flannel-ds-amd64-42rl7 (v1:metadata.name)
POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/etc/kube-flannel/ from flannel-cfg (rw)
/run from run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from flannel-token-9wmch (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
run:
Type: HostPath (bare host directory volume)
Path: /run
HostPathType:
cni:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
flannel-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-flannel-cfg
Optional: false
flannel-token-9wmch:
Type: Secret (a volume populated by a Secret)
SecretName: flannel-token-9wmch
Optional: false
QoS Class: Guaranteed
Node-Selectors: beta.kubernetes.io/arch=amd64
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 51m (x2128 over 7d) kubelet, node5 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
Warning BackOff 1m (x48936 over 7d) kubelet, node5 Back-off restarting failed container
</code></pre>
<p>here <code>kube-controller-manager.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --address=127.0.0.1
- --allocate-node-cidrs=true
- --cluster-cidr=192.168.0.0/24
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
status: {}
</code></pre>
<p>OS is CentOS Linux release 7.5.1804</p>
<p>logs from one of pods:</p>
<pre><code># kubectl logs --namespace kube-system kube-flannel-ds-amd64-5fx2p
main.go:475] Determining IP address of default interface
main.go:488] Using interface with name eth0 and address 10.168.209.14
main.go:505] Defaulting external address to interface address (10.168.209.14)
kube.go:131] Waiting 10m0s for node controller to sync
kube.go:294] Starting kube subnet manager
kube.go:138] Node controller sync successful
main.go:235] Created subnet manager: Kubernetes Subnet Manager - node2
main.go:238] Installing signal handlers
main.go:353] Found network config - Backend type: vxlan
vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
main.go:280] Error registering network: failed to acquire lease: node "node2" pod cidr not assigned
main.go:333] Stopping shutdownHandler...
</code></pre>
<p>Where is the error?</p>
| Alexey Vashchenkov | <p>For <code>flannel</code> to work correctly, you must pass <code>--pod-network-cidr=10.244.0.0/16</code> to <code>kubeadm init</code>.</p>
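<p>For reference, a minimal sketch of what that looks like (this assumes you can re-initialise the cluster; note also that the <code>kube-controller-manager</code> manifest in the question uses <code>--cluster-cidr=192.168.0.0/24</code> with <code>--node-cidr-mask-size=24</code>, which leaves room for only a single node's pod CIDR and matches the "pod cidr not assigned" error in the flannel logs):</p>
<pre><code># on every node, if the cluster was already initialised without the flag
kubeadm reset

# on the master
kubeadm init --pod-network-cidr=10.244.0.0/16

# then re-apply the flannel manifest you used before
kubectl apply -f kube-flannel.yml
</code></pre>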
| abdelkhaliq bouharaoua |
<p>So I’m using Traefik 2.2 and I run a bare metal Kubernetes cluster with a single master node. I don’t have a physical or virtual load balancer, so the Traefik pod takes in all requests on ports 80 and 443. I have an example WordPress site installed with Helm. As you can see here, exactly every other request is a 500 error: <a href="http://wp-example.cryptexlabs.com/feed/" rel="noreferrer">http://wp-example.cryptexlabs.com/feed/</a>. I can confirm that the request that is a 500 error never reaches the WordPress container, so I know this has something to do with Traefik. In the Traefik logs it just shows there was a 500 error. So I have 1 pod in the traefik namespace, a service in the default namespace, and an ExternalName service in the default namespace that points to the example WordPress site, which is in the wp-example namespace.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: traefik
chart: traefik-0.2.0
heritage: Tiller
release: traefik
name: traefik
namespace: traefik
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: traefik
release: traefik
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: traefik
chart: traefik-0.2.0
heritage: Tiller
release: traefik
spec:
containers:
- args:
- --api.insecure
- --accesslog
- --entrypoints.web.Address=:80
- --entrypoints.websecure.Address=:443
- --providers.kubernetescrd
- --certificatesresolvers.default.acme.tlschallenge
- [email protected]
- --certificatesresolvers.default.acme.storage=acme.json
- --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
image: traefik:2.2
imagePullPolicy: IfNotPresent
name: traefik
ports:
- containerPort: 80
hostPort: 80
name: web
protocol: TCP
- containerPort: 443
hostPort: 443
name: websecure
protocol: TCP
- containerPort: 8088
name: admin
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: traefik-service-account
serviceAccountName: traefik-service-account
terminationGracePeriodSeconds: 60
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: wp-example.cryptexlabs.com
namespace: wp-example
spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host(`wp-example.cryptexlabs.com`)
services:
- name: wp-example
port: 80
- name: wp-example
port: 443
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: wp-example
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: wordpress
helm.sh/chart: wordpress-9.3.14
name: wp-example-wordpress
namespace: wp-example
spec:
clusterIP: 10.101.142.74
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 31862
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 32473
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/instance: wp-example
app.kubernetes.io/name: wordpress
sessionAffinity: None
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: wp-example
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: wordpress
helm.sh/chart: wordpress-9.3.14
name: wp-example-wordpress
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: wp-example
app.kubernetes.io/name: wordpress
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: wp-example
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: wordpress
helm.sh/chart: wordpress-9.3.14
spec:
containers:
- env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MARIADB_HOST
value: wp-example-mariadb
- name: MARIADB_PORT_NUMBER
value: "3306"
- name: WORDPRESS_DATABASE_NAME
value: bitnami_wordpress
- name: WORDPRESS_DATABASE_USER
value: bn_wordpress
- name: WORDPRESS_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-password
name: wp-example-mariadb
- name: WORDPRESS_USERNAME
value: user
- name: WORDPRESS_PASSWORD
valueFrom:
secretKeyRef:
key: wordpress-password
name: wp-example-wordpress
- name: WORDPRESS_EMAIL
value: [email protected]
- name: WORDPRESS_FIRST_NAME
value: FirstName
- name: WORDPRESS_LAST_NAME
value: LastName
- name: WORDPRESS_HTACCESS_OVERRIDE_NONE
value: "no"
- name: WORDPRESS_HTACCESS_PERSISTENCE_ENABLED
value: "no"
- name: WORDPRESS_BLOG_NAME
value: "User's Blog!"
- name: WORDPRESS_SKIP_INSTALL
value: "no"
- name: WORDPRESS_TABLE_PREFIX
value: wp_
- name: WORDPRESS_SCHEME
value: http
image: docker.io/bitnami/wordpress:5.4.2-debian-10-r6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 6
httpGet:
path: /wp-login.php
port: http
scheme: HTTP
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: wordpress
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 6
httpGet:
path: /wp-login.php
port: http
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 300m
memory: 512Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/wordpress
name: wordpress-data
subPath: wordpress
dnsPolicy: ClusterFirst
hostAliases:
- hostnames:
- status.localhost
ip: 127.0.0.1
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
terminationGracePeriodSeconds: 30
volumes:
- name: wordpress-data
persistentVolumeClaim:
claimName: wp-example-wordpress
</code></pre>
<p>Output of <code>kubectl describe svc wp-example-wordpress -n wp-example</code></p>
<pre class="lang-yaml prettyprint-override"><code>Name: wp-example-wordpress
Namespace: wp-example
Labels: app.kubernetes.io/instance=wp-example
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=wordpress
helm.sh/chart=wordpress-9.3.14
Annotations: <none>
Selector: app.kubernetes.io/instance=wp-example,app.kubernetes.io/name=wordpress
Type: LoadBalancer
IP: 10.101.142.74
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31862/TCP
Endpoints: 10.32.0.17:8080
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32473/TCP
Endpoints: 10.32.0.17:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<pre><code>josh@Joshs-MacBook-Pro-2:$ ab -n 10000 -c 10 http://wp-example.cryptexlabs.com/
This is ApacheBench, Version 2.3 <$Revision: 1874286 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking wp-example.cryptexlabs.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: Apache/2.4.43
Server Hostname: wp-example.cryptexlabs.com
Server Port: 80
Document Path: /
Document Length: 26225 bytes
Concurrency Level: 10
Time taken for tests: 37.791 seconds
Complete requests: 10000
Failed requests: 5000
(Connect: 0, Receive: 0, Length: 5000, Exceptions: 0)
Non-2xx responses: 5000
Total transferred: 133295000 bytes
HTML transferred: 131230000 bytes
Requests per second: 264.61 [#/sec] (mean)
Time per request: 37.791 [ms] (mean)
Time per request: 3.779 [ms] (mean, across all concurrent requests)
Transfer rate: 3444.50 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 2 6 8.1 5 239
Processing: 4 32 29.2 39 315
Waiting: 4 29 26.0 34 307
Total: 7 38 31.6 43 458
Percentage of the requests served within a certain time (ms)
50% 43
66% 49
75% 51
80% 52
90% 56
95% 60
98% 97
99% 180
100% 458 (longest request)
</code></pre>
<p>Traefik Debug Logs: <a href="https://pastebin.com/QUaAR6G0" rel="noreferrer">https://pastebin.com/QUaAR6G0</a> are showing something about SSL and x509 certs though I'm making the request via http not https.</p>
<p>I did a test with an nginx container that uses the same pattern and I did not have any issues. So this has something to do specifically with the relationship between wordpress and traefik.</p>
<p>I also saw a reference on Traefik regarding the fact that Keep-Alive was not enabled on the downstream server while Traefik has Keep-Alive enabled by default. I have also tried enabling Keep-Alive by extending the WordPress image and enabling Keep-Alive in WordPress. When I access the WordPress container through <code>kubectl port-forward</code> I can see that the Keep-Alive headers are being sent, so I know it's enabled, but I am still seeing 50% of the requests failing.</p>
| Josh Woodcock | <p>I saw in the Traefik logs that HTTP connections are fine, but when HTTPS redirections happen (for the favicon etc.) you get "x509 certificate not valid". That's because the WordPress pod has an SSL certificate that's not valid.</p>
<p>You can use <code>--serversTransport.insecureSkipVerify=true</code> safely inside your cluster since traffic will be encrypted and outside traffic is HTTP.</p>
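<p>For reference, a sketch of where that flag would go in the Traefik Deployment shown in the question (it is just another entry in the container args):</p>
<pre><code>      containers:
      - args:
        - --api.insecure
        - --accesslog
        - --entrypoints.web.Address=:80
        - --entrypoints.websecure.Address=:443
        - --providers.kubernetescrd
        - --serversTransport.insecureSkipVerify=true   # skip backend certificate verification
        # ...the remaining args stay as they are...
        image: traefik:2.2
</code></pre>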
<p>If you need to use a trusted certificate in the future, deploy it with the WordPress app and use Traefik with SSL passthrough so traffic is decrypted at the pod level. Then you can remove the insecure option on Traefik.</p>
| Akin Ozer |
<p>While trying to install Kubernetes on nixos, using the following stanza:</p>
<pre><code>services.kubernetes.masterAddress = "XXXXXX";
users.users.XXXXXX.extraGroups = [ "kubernetes" ];
services.kubernetes = {
roles = ["master" "node"];
};
</code></pre>
<p>I hit the following issue:</p>
<pre><code>open /var/lib/kubernetes/secrets/etcd.pem: no such file or directory
</code></pre>
<p>I recognize this as a TLS/SSL certificate, but how should I go about generating that file?</p>
| user3416536 | <p>The article you used is really old. It was published <code>2017-07-21</code> so almost 2,5 years ago. You can be pretty sure it's outdated in one way or another however major <strong><em>NixOS</em></strong> <em>approach to setting up kubernetes cluster</em> from end user perspective may have not changed a lot during this time.</p>
<p>So, after familiarizing with it a bit more... I see that this is actually yet another approach to <strong>installing kubernetes cluster</strong> and it has nothing to do with "the hard way" I mentioned in my previous comment. On the contrary, it's the easiest kubernetes cluster setup I've ever seen. Actually you don't have to do anything but add a single entry in your <code>configuration.nix</code> and then run <code>nixos-rebuild switch</code> and you can expect everything to be up and running. But there is really a lot, not just a few things that NixOS takes care about "under the hood". Generating proper certificates is just one of many steps involved in kubernetes cluster setup. Keep in mind that Kubernetes installation from scratch is pretty complex task. Take a brief look at <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">this</a> article and you'll see what I mean. This is really amazing thing for educational purposes as there is probably no better way to understand something in-deep, than to build it from scratch, in the possibly most manual way.</p>
<p>On the other hand, if you just need to set up relatively quickly a working kubernetes cluster, Kubernetes the Hard Way won't be your choice. Fortunatelly there are a few solutions that give you possibility to set up your kubernetes cluster relatively quickly and simply.</p>
<p>One of them is <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">Minikube</a>.
The other one which gives you possibility to set-up multi-node kubernetes cluster is <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">kubeadm</a>.</p>
<p>Going back to <strong>NixOS</strong>, I'm really impressed by how simple it is to set up your kubernetes cluster on this system, provided everything works as expected. But what if it doesn't ( and this is mainly what your question was about ) ? You may try to debug it on your own and try to look for a workaround of your issue or simply create an issue on <strong>NixOS project</strong> <a href="https://github.com/NixOS" rel="nofollow noreferrer">github page</a> like <a href="https://github.com/NixOS/nixpkgs/issues/59364" rel="nofollow noreferrer">this one</a>. As you can see someone already reported exactly the same problem as yours. They say that on the 18.09 release it works properly so probably you're using newer version like 19.03. You can further read that there were some major changes like moving to mandatory pki in 19.03.</p>
<p>Take a closer look at this issue if you're particularly interested in <strong>running kubernetes on NixOS</strong>, as there are a few pieces of advice and workarounds described there:</p>
<p><a href="https://github.com/NixOS/nixpkgs/issues/59364#issuecomment-485122860" rel="nofollow noreferrer">https://github.com/NixOS/nixpkgs/issues/59364#issuecomment-485122860</a>
<a href="https://github.com/NixOS/nixpkgs/issues/59364#issuecomment-485249797" rel="nofollow noreferrer">https://github.com/NixOS/nixpkgs/issues/59364#issuecomment-485249797</a></p>
<p>First of all make sure that your <code>masterAddress</code> is set properly, i.e. as a hostname, not an IP address. As you only put <code>"XXXXXX"</code> there, I can't guess what is currently set. It's quite likely that when you set it e.g. to <code>localhost</code>, the appropriate certificate will be generated properly:</p>
<pre><code>services.kubernetes = {
roles = ["master"];
masterAddress = "localhost";
};
</code></pre>
<p>You may also want to familiarize with <a href="https://nixos.org/nixos/manual/index.html#sec-kubernetes" rel="nofollow noreferrer">this</a> info in <strong>NixOS</strong> docs related with <strong>Kubernetes</strong>.</p>
<p>Let me know if it helped.</p>
| mario |
<p>We run 8 microservices on k8s and I'd like to learn how to define what resource limits to use for my pods. This isn't that big of an issue when running on our cloud platform, but I do like to test it locally as well.</p>
<p>Currently, I'll just guess and allocate; sometimes I'll check how much they're consuming once they're running using <code>kubectl describe nodes</code>.</p>
<p>Is there a better way to determine the resources a particular application will need?</p>
| Panda | <ol>
<li>You may try <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Vertical Pod Autoscaler</a>. It was created to address the same issue you have (see the minimal sketch after this list).</li>
<li>You should also have some fancy graphs using Prometheus + Grafana. Graphs describe pods resource requests and limits as well as how much resources you have free in your cluster. Looking at "kubectl describe nodes" is not cool at all. You may find very good starting point here: <a href="https://github.com/coreos/kube-prometheus" rel="nofollow noreferrer">https://github.com/coreos/kube-prometheus</a></li>
</ol>
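<p>A minimal VPA sketch in recommendation-only mode (assuming the VPA components are installed in the cluster and your workload is a Deployment named <code>my-service</code>, which is a placeholder; the recommendations then show up under <code>kubectl describe vpa my-service-vpa</code>):</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-service-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # placeholder: your deployment name
  updatePolicy:
    updateMode: "Off"         # only recommend, never evict/resize pods automatically
</code></pre>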
| Vasili Angapov |
<p>In what order does kubectl execute things if I just run <code>kubectl -f test/</code>, which has all the files and folders below? Would it determine that <code>mysite-db-namespace.yml</code> should run first and then the other <code>services.yml</code> files, etc.?</p>
<p>Or should I be naming the files with numbers so it it executes exactly how I want?</p>
<pre><code>test/
├── database
│ ├── database-clusterip-service.yml
│ ├── database-deployment.yml
│ ├── persistent-Volume-Claim.yml
│ ├── storage-class.yml
│ └── mysite-db-namespace.yml
├── httpd
│ ├── httpd-clusterip-service.yml
│ ├── httpd-deployment.yml
│ ├── ingress-rules.yml.orig
│ ├── nginx-ingress-controller.yml.orig
│ └── nginx-ingress-lb-service.yml.orig
└── tomcat
├── tomcat-clusterip-service.yml
├── tomcat-deployment.yml
└── mysite-web-namespace.yml
</code></pre>
| user630702 | <p>You can use <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">helm hooks</a> to order your yamls. Other than that, kubectl will load the yamls in the same folder in alphabetical order. This is not guaranteed to always work though: there can be race-condition issues where you apply one yaml first, but whether it has taken effect by the time the next one is applied depends on the Kubernetes side.</p>
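<p>If you don't want to rename files, a simple workaround (a sketch based on the layout in the question) is to apply the namespace manifests explicitly first and then the rest of the tree recursively:</p>
<pre><code># create the namespaces first
kubectl apply -f test/database/mysite-db-namespace.yml \
              -f test/tomcat/mysite-web-namespace.yml

# then apply everything else, walking sub-folders
kubectl apply -R -f test/
</code></pre>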
| Akin Ozer |
<p>I am trying to create a pipeline job for <code>Angular</code> code to deploy the application into a k8s cluster. Below is the code for the pipeline container <code>podTemplate</code>; during the build I get the following error.</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
cloud: 'kubernetes',
namespace: 'test',
imagePullSecrets: ['regcred'],
label: label,
containers: [
containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
],
volumes: [
hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
]
) {
node(label) {
def scmInfo = checkout scm
def image_tag
def image_name
sh 'pwd'
def gitCommit = scmInfo.GIT_COMMIT
def gitBranch = scmInfo.GIT_BRANCH
def commitId
commitId= scmInfo.GIT_COMMIT[0..7]
image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
stage('NPM Install') {
container ('nodejs') {
withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
sh 'npm install'
}
}
}
}
}
</code></pre>
<p>Error from Jenkins:</p>
<pre><code>[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
Finished: FAILURE
</code></pre>
<p>Do I need to mention a <code>spec</code> value in my <code>Jenkins</code> file?</p>
| tp.palanisamy thangavel | <p>The error message you get:</p>
<pre><code>ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
</code></pre>
<p>points out quite precisely what can be wrong with your <code>Pod</code> template. As you can see in the <a href="https://kubernetes.io/docs/concepts/overv" rel="nofollow noreferrer">link</a> to the kubernetes documentation given in the <code>ERROR</code> message, you need to follow certain rules when defining a <code>Pod</code>. The <code>labels</code> element is a <code>dictionary</code>/<code>map</code> field that requires you to provide at least one valid key-value pair, so you cannot just write <code>label: label</code> in your specification.</p>
<p>You can try to define your <code>PodTemplate</code> in <code>yaml</code> format (which is mostly used in <strong>kubernetes</strong>) like in <a href="https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates" rel="nofollow noreferrer">this</a> example:</p>
<pre><code>podTemplate(yaml: """
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: busybox
image: busybox
command:
- cat
tty: true
"""
) {
node(POD_LABEL) {
container('busybox') {
sh "hostname"
}
}
}
</code></pre>
<p>As you can read <a href="https://github.com/jenkinsci/kubernetes-plugin#pod-and-container-template-configuration" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>label The label of the pod. Can be set to a unique value to avoid
conflicts across builds, or omitted and POD_LABEL will be defined
inside the step.</p>
</blockquote>
<p>The <code>label</code> field can be omitted altogether, so first try without it and you shouldn't get any error message.</p>
| mario |
<p>I have a k8s deployment - I often deploy a new version to the docker repo - change the image tag - and try to replace the deployment using <code>kubectl replace -f file.yaml</code>. My replicas are set to 1 - I have only 1 pod of the deployment running at a time.</p>
<p>When I change the image tag (e.g changing v1 to v2) and try to replace it - it creates a new pod, but it remains in the 'pending' state indefinitely, while the old pod stays in 'Running' state.</p>
<p>I think the new pod waits for the old pod to be terminated - but it won't terminate by itself. I need it to be deleted by k8s so the new pod can take its place.</p>
<p>Using <code>replace --force</code> fixes this issue - but I'd like it to work using just <code>replace -f</code>. Any ideas how to achieve this?</p>
| Ali | <p>The issue you see has nothing to do with kubectl replace/apply. The real reason is that deployments by default use the RollingUpdate strategy, which by default waits for the new pod to be Running and only then kills the old pod. The reason why the new pod is in the Pending state is unclear from your question, but in most cases this indicates a lack of compute resources for the new pod.</p>
<p>You may do two different things: </p>
<p>Use RollingUpdate strategy with maxUnavailable=1. This will do what you want - it will kill old pod and then create a new one.</p>
<pre><code>spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
</code></pre>
<p>OR you can specify Recreate strategy which effectively does the same:</p>
<pre><code>spec:
strategy:
type: Recreate
</code></pre>
<p>Read more about deployment strategies here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy</a></p>
| Vasili Angapov |
<p>I am currently trying to deal with a deployment to a kubernetes cluster. The deployment keeps failing with the response </p>
<pre><code> Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
</code></pre>
<p>I have tried to change the permissions on the file which seem to succeed as if I ls -l I get -rwxr-xr-x as the permissions for the file.</p>
<p>I have tried placing the chmod command both in the dockerfile itself and prior to the image being built and uploaded but neither seems to make any difference.
Any ideas why I am still getting the error?</p>
<p>dockerfile below </p>
<pre><code>FROM node:10.15.0
CMD []
ENV NODE_PATH /opt/node_modules
# Add kraken files
RUN mkdir -p /opt/kraken
ADD . /opt/kraken/
# RUN chown -R node /opt/
WORKDIR /opt/kraken
RUN npm install && \
npm run build && \
npm prune --production
# Add the entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER node
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
| tacoofdoomk | <p>This error is not about the entrypoint file itself but about how it is executed. Always start scripts with "sh script.sh", either in ENTRYPOINT or CMD. In this case it would be: ENTRYPOINT ["sh", "/entrypoint.sh"] (note the absolute path, since the WORKDIR is /opt/kraken while the script was copied to /entrypoint.sh).</p>
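<p>Applied to the Dockerfile in the question, the relevant lines would look roughly like this (keeping the <code>chmod</code> doesn't hurt, but running the script through <code>sh</code> no longer depends on the executable bit or the shebang line):</p>
<pre><code># Add the entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER node
ENTRYPOINT ["sh", "/entrypoint.sh"]
</code></pre>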
| Akin Ozer |
<p>I am new to Kubernetes and I'm trying to deploy an application to kubernetes via microk8s. The application contains python flask backend, angular frontend, redis and mysql database. I deployed the images in multiple pods and the status is showing "running" but the pods are not communicating with each other. </p>
<p>The app is completely dockerized and it's functioning at the Docker level.
Before deploying into Kubernetes my Flask host was 0.0.0.0 and the MySQL host was the "service name" in docker-compose.yaml, but currently I have replaced them with the service names from the Kubernetes yml files.</p>
<p>Also, in the Angular frontend I have changed the URL used to connect to the backend from <a href="http://localhost:5000" rel="nofollow noreferrer">http://localhost:5000</a> to <a href="http://backend-service" rel="nofollow noreferrer">http://backend-service</a>, where backend-service is the name (DNS) given in the backend-service.yml file. But this also didn't make any change. Can someone tell me how I can make these pods communicate?</p>
<p>After deploying I am able to access only the frontend; the rest is not connected.</p>
<p>Listing down the service and deployment files of angular, backend.</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: angular-service
spec:
type: NodePort
selector:
name: angular
ports:
- protocol: TCP
nodePort: 30042
targetPort: 4200
port: 4200
</code></pre>
<hr>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
type: ClusterIP
selector:
name: backend
ports:
- protocol: TCP
targetPort: 5000
port: 5000
</code></pre>
<p>Thanks in advance!</p>
<p>(<strong>Modified service files</strong>)</p>
| anju | <p><em>For internal communication between different microservices in <strong>Kubernetes</strong></em> you should use a <code>Service</code> of type <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="noreferrer">ClusterIP</a>. It is actually the <strong>default type</strong>, so even if you don't specify it in your <code>Service</code> yaml definition file, <strong>Kubernetes</strong> assumes you want to create a <code>ClusterIP</code>.
It creates a virtual internal IP (accessible within your Kubernetes cluster) and exposes your cluster component (microservice) as a <em>single entry point</em> even if it is backed by many pods. </p>
<p>Let's assume you have front-end app which needs to communicate with back-end component which is run in 3 different pods. <code>ClusterIP</code> service provides single entry point and handles load-balancing between different pods, distributing requests evenly among them.</p>
<p>You can access your <code>ClusterIP</code> service by providing its IP address and port that your application component is exposed on. Note that you may define a different port (referred to as <code>port</code> in <code>Service</code> definition) for the <code>Service</code> to listen on than the actual port used by your application (referred to as <code>targetPort</code> in your <code>Service</code> definition). Although it is possible to access the <code>Service</code> using its <code>ClusterIP</code> address, all components that communicate with pods internally exposed by it <strong>should use its DNS name</strong>. It is simply a <code>Service</code> name that you created if all application components are placed in the same namespace. If some components are in a different namespaces you need to use fully qualified domain name so they can communicate across the namespaces.</p>
<p>Your <code>Service</code> definition files may look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: angular-service
spec:
  type: ClusterIP ### may be omitted as it is the default type
selector:
name: angular ### should match your labels defined for your angular pods
ports:
- protocol: TCP
targetPort: 4200 ### port your angular app listens on
  port: 4200 ### port on which you want to expose it within your cluster
---
apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
  type: ClusterIP ### may be omitted as it is the default type
selector:
name: backend ### should match your labels defined for your backend pods
ports:
- protocol: TCP
targetPort: 5000 ### port your backend app listens on
port: 5000 ### port on which you want to expose it within your cluster
</code></pre>
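<p>With the definitions above, other pods in the same namespace reach the backend at <code>http://backend-service:5000</code> (the Service <code>port</code>, not plain <code>http://backend-service</code>). A quick way to verify this from inside the cluster, assuming the backend answers HTTP on <code>/</code>:</p>
<pre><code># start a throwaway pod and curl the backend Service by its DNS name and port
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
  curl http://backend-service:5000/
</code></pre>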
<p>You can find a detailed description of this topic in official <strong>Kubernetes</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">documentation</a>.</p>
<hr>
<p><code>NodePort</code> has a totally different function. It may be used e.g. to expose your front-end app on a specific port on your node's IP. Note that if you have a Kubernetes cluster consisting of many nodes and your front-end pods are placed on different nodes, in order to access your application you need to use 3 different IP addresses. In such a case you need an additional load balancer. If you use some cloud platform solution and you want to expose the front-end part of your application to the external world, a Service of type <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">LoadBalancer</a> is the way to go (instead of using <code>NodePort</code>).</p>
| mario |
<p>I have existing Azure virtual machines with 30 Docker containers deployed on them.
So I have decided to use a Kubernetes service/cluster to manage and deploy Docker containers on those existing Azure virtual machines.
I have also deployed an Azure registry to store Docker images.</p>
<p>Is this a feasible approach?
Please give me your opinion.</p>
| English learner | <p>If you are familiar with Ansible then the best way is probably <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a>. It is capable of creating clusters almost of any complexity and also contains many features that other cluster management tools like kubeadm don't have.</p>
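<p>Roughly, the workflow looks like this (a sketch only; check the Kubespray README for the exact requirements and inventory format of the release you pick):</p>
<pre><code>git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# copy the sample inventory and describe your existing Azure VMs in it
cp -r inventory/sample inventory/mycluster
# edit/generate inventory/mycluster/hosts.yaml with the IPs/hostnames of your VMs

ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
</code></pre>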
| Vasili Angapov |
<p>I want to setup a kubernetes cluster locally where I would like to have 1 master node and 2 worker nodes. I have managed to do that but I am not able to access pods or see any logs of a specific pod because Internal IP address is the same for all nodes.</p>
<pre><code>vagrant@k8s-head:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-head Ready master 5m53s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-1 Ready <none> 4m7s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-2 Ready <none> 2m28s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
</code></pre>
<p>In order to resolve this problem I have found out that following things should be done:<br/>
- add <code>KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS></code> in <em>/etc/default/kubelet</em> file <br/>
- restart kubelet by running:<br/><code>sudo systemctl daemon-reload && sudo systemctl restart kubelet</code></p>
<p>The problem is that the <em>/etc/default/kubelet</em> file is missing at this location, so I am not able to add this additional parameter. I tried creating the file manually, but it does not seem to work: when I restart kubelet, the IP address is still the same.</p>
<p>Has anyone faced this issue with the missing /etc/default/kubelet file, or is there another, easier way to set up different internal IP addresses?</p>
| Boban Djordjevic | <p><strong>It is normal to have the same IP on every node for a Kubernetes cluster running in VirtualBox</strong>. The reason is that it is a <code>NAT network</code>, not intended for communication between virtual machines; the 10.0.2.15 IP is NATed when accessing the outside world.</p>
<p>The following diagram shows the networks that are created in a Kubernetes cluster on top of VirtualBox. As you can see, every node has the same IP in the <code>NAT network</code> but different IPs on the other networks:</p>
<p><a href="https://i.stack.imgur.com/xmAHo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xmAHo.png" alt="Kubernetes Cluster Networks"></a></p>
<p>In order to access the PODs you can use a NodePort and the <code>HOST ONLY</code> network.</p>
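<p>For illustration, a minimal <code>NodePort</code> Service sketch could look like the one below (the app name, ports and the <code>nodePort</code> value are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 8080  # container port of the pods
    nodePort: 30080   # port exposed on every node's HOST ONLY IP
</code></pre>
<p>The application is then reachable at http://&lt;host-only-IP-of-any-node&gt;:30080.</p>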
<p>See a full example and download the code at <a href="https://www.itwonderlab.com/ansible-kubernetes-vagrant-tutorial/" rel="noreferrer">Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube)</a>. It is a tutorial that explains how to launch a Kubernetes cluster using Ansible playbooks, Vagrant and VirtualBox. </p>
<p>It uses Calico for networking and it includes another tutorial for installing <strong>Istio</strong> if you need a micro service mesh.</p>
| Javier Ruiz |
<p>I am trying to allow Access Control Origin due to the following error in an Android Cordova app:</p>
<pre><code>http://localhost:8080/#/: Line 0 : Access to XMLHttpRequest at 'https://api.v2.domain.com/api/v1/users/me/favorites?lat=42.5467&lng=-83.2113&radius=10.0&limit=5&search=' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
</code></pre>
<p>I am trying to figure out where in Kubernetes to add it - I assume it's somewhere in the Service or the Deployment.</p>
<p>Here's both:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: domain-server
annotations:
dns.alpha.kubernetes.io/external: "api.v2.domain.com"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:152660121739:certificate/8efe41c4-9a53-4cf6-b056-5279df82bc5e
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
selector:
app: domain-server
ports:
- port: 443
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: domain-server
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 3
revisionHistoryLimit: 10
template:
metadata:
labels:
app: domain-server
spec:
containers:
- name: domain-server
image: "152660121739.dkr.ecr.us-east-2.amazonaws.com/domain-server"
imagePullPolicy: Always
resources:
limits:
memory: "1200Mi"
requests:
memory: "900Mi"
cpu: "200m"
ports:
- name: http
containerPort: 8080
...
</code></pre>
<p>Is this the correct place to put the header? If so, how would one add CORS to Kubernetes here? I am noticing some suggestions like Nginx ingresses, but the application I am using does not use Nginx.</p>
| Steven Matthews | <p>This problem is not about Kubernetes. Browsers enforce CORS, check reference here: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS</a> . You can also use <strong>curl</strong> or <strong>postman</strong> and see content without CORS error. </p>
<p>Normally nginx servers can fix that, and the Kubernetes NGINX ingress controller is not really different. It basically uses a reverse proxy in front of your services. Check this reference to get started with fixing the CORS error via an Ingress: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors</a></p>
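<p>As a sketch of what that could look like (this assumes the ingress-nginx controller is installed in the cluster and reuses the <code>domain-server</code> Service from the question; host, methods and headers are assumptions to adjust):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: domain-server
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:8080"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Authorization, Content-Type"
spec:
  rules:
  - host: api.v2.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: domain-server
          servicePort: 443
</code></pre>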
| Akin Ozer |
<p>It is my understanding that you're gonna have an NLB or ALB in front of your Istio Gateway anyway?</p>
<p>But I am confused because it seems like Istio Gateway does a lot of things ALB does for Layer 7 and even more?</p>
<p>So I read ALB -> Istio Gateway is ok, but isn't that redundant? What about NLB -> ALB -> Istio Gateway, which seems like too much?</p>
<p>It seems like it is best to have NLB -> Istio Gateway to let them handle Layer 4 and Layer 7 respectively like they do best, can anyone enlighten and confirm?</p>
| atkayla | <p>If you are using Istio then yes: Istio was originally created with an ingress controller in mind. A Gateway plus a VirtualService basically enables what you want. Some ingress controllers are easier to use and have different strengths, but if Istio handles everything you need, then go for it.</p>
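<p>For reference, the Gateway + VirtualService pair that replaces most of what an ALB would do looks roughly like this (host names, service name and ports are assumptions):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway   # the default Istio ingress gateway (fronted by the NLB)
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "example.com"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-service      # Kubernetes Service name
        port:
          number: 8080
</code></pre>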
| Akin Ozer |
<p>I'd like to diff a Kubernetes YAML template against the actual deployed ressources. This should be possible using <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#diff" rel="nofollow noreferrer">kubectl diff</a>. However, on my Kubernetes cluster in Azure, I get the following error:</p>
<pre><code>Error from server (InternalError): Internal error occurred: admission webhook "aks-webhook-admission-controller.azmk8s.io" does not support dry run
</code></pre>
<p>Is there something I can enable on AKS to let this work or is there some other way of achieving the diff?</p>
| dploeger | <p>As a workaround you can use standard GNU/Linux <code>diff</code> command in the following way:</p>
<pre><code>diff -uN <(kubectl get pods nginx-pod -o yaml) example_pod.yaml
</code></pre>
<hr>
<p>I know this is not a solution but just a workaround, but I think it can still be considered a reasonably full-fledged replacement.</p>
<blockquote>
<p>Thanks, but that doesn't work for me, because it's not just one pod
I'm interested in, it's a whole Helm release with deployment,
services, jobs, etc. – dploeger</p>
</blockquote>
<p>But anyway you won't compare everything at once, will you?</p>
<p>You can use it for any resource you like, not only for <code>Pods</code>. Just substitute <code>Pod</code> by any other resource you like.</p>
<p>Anyway, under the hood <code>kubectl diff</code> uses the <code>diff</code> command.</p>
<p>In <code>kubectl diff --help</code> you can read:</p>
<blockquote>
<p>KUBECTL_EXTERNAL_DIFF environment variable can be used to select your
own diff command. By default, the "diff" command available in your
path will be run with "-u" (unified diff) and "-N" (treat absent files
as empty) options.</p>
</blockquote>
<hr>
<p>The real problem in your case is that for some reason you cannot use <code>--dry-run</code> on your AKS cluster, which is a question for AKS users/experts. Maybe it can be enabled somehow but unfortunately I have no idea how.</p>
<p>Basically <code>kubectl diff</code> compares already deployed resource, which we can get by:</p>
<pre><code>kubectl get resource-type resource-name -o yaml
</code></pre>
<p>with the result of:</p>
<pre><code>kubectl apply -f nginx.yaml --dry-run --output yaml
</code></pre>
<p>and not with actual content of your yaml file (simple <code>cat nginx.yaml</code> would be ok for that purpose).</p>
<hr>
<p>You can additionally use:</p>
<pre><code>kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml
</code></pre>
<p>to get <code>yamls</code> of all resources belonging to specific <strong>helm release</strong>. </p>
<p>As you can read in <code>man diff</code> it has following options:</p>
<pre><code> --from-file=FILE1
compare FILE1 to all operands; FILE1 can be a directory
--to-file=FILE2
compare all operands to FILE2; FILE2 can be a directory
</code></pre>
<p>so we are not limited to comparing single files but also files located in specific directory. Only we can't use these two options together.</p>
<p>So the full <code>diff</code> command for comparing all resources belonging to specific <strong>helm release</strong> currently deployed on our <strong>kubernetes cluster</strong> with <strong><code>yaml</code> files from a specific directory</strong> may look like this:</p>
<pre><code>diff -uN <(kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml) --to-file=directory_containing_yamls/
</code></pre>
| mario |
<p>I am setting up a minikube which contains an activeMQ message queue together with InfluxDB and Grafana.</p>
<p>For Grafana, I was able to set the admin password via the deployment:</p>
<pre><code> containers:
- env:
- name: GF_INSTALL_PLUGINS
value: grafana-piechart-panel, blackmirror1-singlestat-math-panel
- name: GF_SECURITY_ADMIN_USER
value: <grafanaadminusername>
- name: GF_SECURITY_ADMIN_PASSWORD
value: <grafanaadminpassword>
image: grafana/grafana:6.6.0
name: grafana
volumeMounts:
- mountPath: /etc/grafana/provisioning
name: grafana-volume
subPath: provisioning/
- mountPath: /var/lib/grafana/dashboards
name: grafana-volume
subPath: dashboards/
- mountPath: /etc/grafana/grafana.ini
name: grafana-volume
subPath: grafana.ini
readOnly: true
restartPolicy: Always
volumes:
- name: grafana-volume
hostPath:
path: /grafana
</code></pre>
<p>For influxdb I set the user/passwd via a secret:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: influxdb
namespace: default
type: Opaque
stringData:
INFLUXDB_CONFIG_PATH: /etc/influxdb/influxdb.conf
INFLUXDB_ADMIN_USER: <influxdbadminuser>
INFLUXDB_ADMIN_PASSWORD: <influxdbbadminpassword>
INFLUXDB_DB: <mydb>
</code></pre>
<p>Currently, my ActiveMQ deployment looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: activemq
spec:
replicas: 1
selector:
matchLabels:
app: activemq
template:
metadata:
labels:
app: activemq
spec:
containers:
- name: web
image: rmohr/activemq:5.15.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
- containerPort: 8161
resources:
limits:
memory: 512Mi
</code></pre>
<p>How do I achieve the similar result (password and admin user via config file) for ActiveMQ? Even better if this is achieved via encrypted secret, which I didn't manage yet in case of influxDB and Grafana</p>
| WolfiG | <p>I would do this the following way:</p>
<p><a href="https://activemq.apache.org/encrypted-passwords" rel="nofollow noreferrer">Here</a> you have nicely described encrypted passwords in <strong>ActiveMQ</strong>.</p>
<p>First you need to prepare such encrypted password. <strong>ActiveMQ</strong> has a built-in utility for that:</p>
<blockquote>
<p>As of ActiveMQ 5.4.1 you can encrypt your passwords and safely store
them in configuration files. To encrypt the password, you can use the
newly added encrypt command like:</p>
<pre><code>$ bin/activemq encrypt --password activemq --input mypassword
...
Encrypted text: eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
</code></pre>
<p>Where the password you want to encrypt is passed with the input argument, while the password argument is a secret used by the encryptor. In a similar fashion you can test-out your passwords like:</p>
<pre><code>$ bin/activemq decrypt --password activemq --input eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
...
Decrypted text: mypassword
</code></pre>
<p>Note: It is recommended that you use only alphanumeric characters for
the password. Special characters, such as $/^&, are not supported.</p>
<p>The next step is to add the password to the appropriate configuration
file, $ACTIVEMQ_HOME/conf/credentials-enc.properties by default.</p>
<pre><code>activemq.username=system
activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
guest.password=ENC(Cf3Jf3tM+UrSOoaKU50od5CuBa8rxjoL)
...
jdbc.password=ENC(eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp)
</code></pre>
</blockquote>
<p>You probably don't even have to rebuild your image so that it contains the appropriate configuration file with the encrypted password. You can add it as <code>ConfigMap</code> data to a <code>volume</code>. You can read how to do that <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">here</a> so I'd rather avoid another copy-paste from the documentation. Alternatively you may want to use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#secret" rel="nofollow noreferrer">secret volume</a>. It's not the most important point here, as it is just a way of substituting the original <strong>ActiveMQ</strong> configuration file in your <code>Pod</code> with your custom configuration file, and you probably already know how to do that.</p>
<p>There is one more step to configure on the <strong>ActiveMQ</strong> side. This config file can also be passed via <code>ConfigMap</code> like in the previous example.</p>
<blockquote>
<p>Finally, you need to instruct your property loader to encrypt
variables when it loads properties to the memory. Instead of standard
property loader we’ll use the special one (see
$ACTIVEMQ_HOME/conf/activemq-security.xml) to achieve this.</p>
<pre><code><bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
<property name="algorithm" value="PBEWithMD5AndDES" />
<property name="passwordEnvName" value="ACTIVEMQ\_ENCRYPTION\_PASSWORD" />
</bean>
<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
<property name="config" ref="environmentVariablesConfiguration" />
</bean>
<bean id="propertyConfigurer" class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
<constructor-arg ref="configurationEncryptor" />
<property name="location" value="file:${activemq.base}/conf/credentials-enc.properties"/>
</bean>
</code></pre>
</blockquote>
<p>This way we instructed our <strong>ActiveMQ</strong> to load our <code>encryptor password</code> from the <code>ACTIVEMQ_ENCRYPTION_PASSWORD</code> environment variable and then use it to decrypt passwords from the <code>credentials-enc.properties</code> file.</p>
<p>Now let's take care about <code>ACTIVEMQ_ENCRYPTION_PASSWORD</code> env var content.</p>
<p>We can set such environment variable in our <code>Pod</code> via <code>Secret</code>. First we need to <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="nofollow noreferrer">create one</a>. Then we need to <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">use it as environment variable</a>.</p>
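<p>As a minimal sketch (names and the placeholder value are assumptions), the <code>Secret</code> and the corresponding environment variable in your ActiveMQ container could look like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: activemq-encryption
type: Opaque
stringData:
  encryption-password: <your-encryptor-secret>
---
# fragment of the ActiveMQ Deployment's container spec
containers:
- name: web
  image: rmohr/activemq:5.15.9
  env:
  - name: ACTIVEMQ_ENCRYPTION_PASSWORD
    valueFrom:
      secretKeyRef:
        name: activemq-encryption
        key: encryption-password
</code></pre>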
<p>I hope it helps.</p>
| mario |
<p>I have installed a RabbitMQ cluster using a Helm chart. RabbitMQ uses StatefulSets, so is there any way to autoscale this?</p>
<p>Also, one more question: how do I autoscale (HPA) a Deployment that has a PVC?</p>
| Harsh Manvar | <p>StatefulSets can be autoscaled with HPA:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
annotations:
name: some-service
spec:
maxReplicas: 4
metrics:
- resource:
name: memory
targetAverageUtilization: 80
type: Resource
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: some-service
</code></pre>
<p>Regarding PVC and StatefulSets and HPA - I'm not sure but I think that depends on reclaimPolicy of StorageClass of your PVC. Just make sure you have <code>reclaimPolicy: Retain</code> in your StorageClass definition. Having that you should preserve data on scaling events.</p>
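<p>A minimal StorageClass sketch with that setting might look like this (the provisioner is an assumption, use whatever your cluster provides):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain
provisioner: kubernetes.io/gce-pd   # assumption: replace with your provisioner
reclaimPolicy: Retain
</code></pre>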
<p>If you mean Deployments with HPA and PVC - it should work, but always remember that if you have multiple replicas with one shared PVC, all replicas will try to mount it. If the PVC is ReadWriteMany, there should be no issues. If it is ReadWriteOnce, then all replicas will be scheduled on one node. If there are not enough resources on that node to fit all replicas, you will get some pods stuck in Pending state forever.</p>
| Vasili Angapov |
<p>I'm trying to figure out how to verify if a pod is running with security context privileged enabled (set to true).</p>
<p>I assumed that '<code>kubectl describe pod [name]</code>' would contain this information but it does not appear to.</p>
<p>I quickly created a pod using the following definition to test:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: priv-demo
spec:
volumes:
- name: priv-vol
emptyDir: {}
containers:
- name: priv-demo
image: gcr.io/google-samples/node-hello:1.0
volumeMounts:
- name: priv-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: true
privileged: true
</code></pre>
<p>Any ideas how to retrieve the security context? It must be an easy thing to do and I've just overlooked something.</p>
| Jon Kent | <pre><code>kubectl get pod POD_NAME -o json | jq -r '.spec.containers[].securityContext.privileged'
</code></pre>
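<p>If <code>jq</code> is not available, a plain jsonpath query should work similarly (the pod name is a placeholder):</p>
<pre><code>kubectl get pod POD_NAME -o jsonpath='{.spec.containers[*].securityContext.privileged}'
</code></pre>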
| Vasili Angapov |
<p>I am logged into a Kubernetes (v1.15.2) pod to check the current Redis cluster IP, but the problem is that the Redis cluster (redis:5.0.1-alpine) IP is not the current pod's IP:</p>
<pre><code>~ ⌚ 23:45:03
$ kubectl exec -it redis-app-0 /bin/ash
/data # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:1E:E0:1E
inet addr:172.30.224.30 Bcast:172.30.231.255 Mask:255.255.248.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:78447 errors:0 dropped:0 overruns:0 frame:0
TX packets:64255 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:74638073 (71.1 MiB) TX bytes:74257972 (70.8 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:3922 errors:0 dropped:0 overruns:0 frame:0
TX packets:3922 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:297128 (290.1 KiB) TX bytes:297128 (290.1 KiB)
/data # /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster nodes
a1ecebe5c9dc2f9edbe3c239c402881da10da6de 172.30.224.22:6379@16379 myself,master - 0 1582644797000 0 connected
127.0.0.1:6379>
</code></pre>
<p>The pod's IP is 172.30.224.30 and the Redis IP is 172.30.224.22. What is the problem? Is it possible to fix it?</p>
| Dolphin | <p>How is this IP defined in your Redis config? Redis must have taken it from somewhere. Keep in mind that the <code>Pod</code> IP is subject to change when the <code>Pod</code> is recreated, so you have no guarantee that an IP address defined statically in your <strong>Redis node</strong> configuration will remain valid. I would even say that you can be almost 100% sure that it will change.</p>
<p>You can check IP address of a particular interface in a few ways. One of them is by running:</p>
<pre><code>hostname -I | awk '{print $1}'
</code></pre>
<p>You can try to add an <strong>init container</strong> to your <code>Pod</code> running a simple bash script which would check the current host IP address, e.g. using the above command, and then populate the redis config accordingly. But it seems like overkill to me and I'm almost sure it can be done more easily. If your <strong>redis node</strong> by default binds to <strong>all IP addresses</strong> (<code>0.0.0.0</code>), the <code>cluster nodes</code> output should show you the current IP address of your <code>Pod</code>.</p>
<p>Are you providing <strong>Redis</strong> configuration to your Pod via some <code>ConfigMap</code> ?</p>
<p>Please share more details related to your <code>Deployment</code> so you can get a more accurate answer that resolves your particular issue.</p>
| mario |
<p>Recently I've updated Apache Ignite running in my .Net Core 3.1 application from 2.7.5 to 2.8.1 and today I noticed thousands of warnings like this in the log</p>
<pre><code>Jun 03 18:26:54 quote-service-us-deployment-5d874d8546-psbcs org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:57941]
Jun 03 18:26:59 quote-service-uk-deployment-d644cbc86-7xcvw org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:57982]
Jun 03 18:26:59 quote-service-us-deployment-5d874d8546-psbcs org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:57985]
Jun 03 18:27:04 quote-service-uk-deployment-d644cbc86-7xcvw org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:58050]
Jun 03 18:27:04 quote-service-us-deployment-5d874d8546-psbcs org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:58051]
Jun 03 18:27:09 quote-service-uk-deployment-d644cbc86-7xcvw org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:58114]
Jun 03 18:27:09 quote-service-us-deployment-5d874d8546-psbcs org.apache.ignite.internal.processors.odbc.ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout [timeout=10000, remoteAddr=/10.250.0.4:58118]
</code></pre>
<p>I don't use ODBC or JDBC directly in my app and the app is running in a Kubernetes cluster in a virtual network. <em>Interestingly, in all cases the IP on the other end of connection (10.250.0.4 in this case) belongs to the kube-proxy pod.</em> I am a bit perplexed by this. </p>
<p>UPD:
The same IP address is reported to belong also to the following pods:
azure-ip-masq-agent and azure-cni-networkmonitor
(I guess those belong to Azure Kubernetes Services that I use to run the K8s cluster)</p>
<p>So it is possible that the network monitor is attempting to reach the ODBC port (just guessing). Is there any opportunity to suppress that warning or disable ODBC connections at all? I don't use ODBC but I'd like to keep the JDBC connections enabled as I occasionally connect to the Ignite instances using DBeaver. Thank you!</p>
| Alex Avrutin | <p>If you've defined a service and opened port 10800 then K8s will perform a health check through kube-proxy. This causes Ignite to receive an incomplete handshake on that port and log the "unable to perform handshake" message.</p>
<p>ClientListenerNioListener: Site: WARN - Unable to perform handshake within timeout
[timeout=10000, remoteAddr=/10.250.0.4:58050]</p>
<p>Here the client connector listener (ClientListenerNioListener) is saying that it was not able to establish a successful handshake within 10 seconds to remoteAddr=/10.250.0.4:58050.</p>
<p>config client connector: <a href="https://apacheignite.readme.io/docs/binary-client-protocol#connectivity" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/binary-client-protocol#connectivity</a><br>
client connector handshake: <a href="https://apacheignite.readme.io/docs/binary-client-protocol#connection-handshake" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/binary-client-protocol#connection-handshake</a><br>
<br>
</p>
<p>example of service w/port 10800 opened:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
# The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName
name: ignite
# The name must be equal to TcpDiscoveryKubernetesIpFinder.namespaceName
namespace: ignite
spec:
type: LoadBalancer
ports:
- name: rest
port: 8080
targetPort: 8080
- name: sql
port: 10800
targetPort: 10800
</code></pre>
<p>You can redefine the service to not open the port or update the service definition to
use different ports for the healthcheck:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip</a></p>
<p>from the doc:<br>
service.spec.healthCheckNodePort - specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified, the service controller allocates a port from your cluster’s NodePort range. You can configure that range by setting an API server command line option, --service-node-port-range. It will use the user-specified healthCheckNodePort value if specified by the client. It only has an effect when type is set to LoadBalancer and externalTrafficPolicy is set to Local.</p>
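<p>If you keep port 10800 open but want the health check moved off it, the relevant Service fields look roughly like this (the port number is an arbitrary assumption from the NodePort range):</p>
<pre><code>spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  healthCheckNodePort: 32000   # assumption: any free port in your NodePort range
</code></pre>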
| Alex K |
<p>I've set up single node kubernetes according to <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">official tutorial</a>. </p>
<p>In addition to official documentation I've set-up single node cluster:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>Disabled eviction limit:</p>
<pre><code>cat << EOF >> /var/lib/kubelet/config.yaml
evictionHard:
imagefs.available: 1%
memory.available: 100Mi
nodefs.available: 1%
nodefs.inodesFree: 1%
EOF
systemctl daemon-reload
systemctl restart kubelet
</code></pre>
<p>And set systemd driver for Docker:</p>
<pre><code>cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
systemctl daemon-reload
systemctl restart docker
</code></pre>
<p>I've tried following:</p>
<pre><code>docker build -t localhost:5000/my-image .
kubectl run -it --rm --restart=Always --image=localhost:5000/my-image my-image
</code></pre>
<p>But in pod logs I see <code>ImagePullBackOff</code>. If I setup local repository and I do <code>docker push localhost:5000/my-image</code> after I build image, then everything is working.</p>
<p>Is it is possible to use local images (which are already available after issuing <code>docker images</code>) without needing to setting up local repository, pushing to this repository and then pulling from it?</p>
| Wakan Tanka | <p>You simply need to set the <code>imagePullPolicy</code> in your <code>Pod</code> template in the <code>container</code> specification to <code>Never</code>. Otherwise <strong>the kubelet</strong> will try to pull the image. The example <code>Pod</code> definition may look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: uses-local-image
image: local-image-name
imagePullPolicy: Never
</code></pre>
<p>More on that you can find <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="noreferrer">here</a>. </p>
<blockquote>
<p>By default, the kubelet will try to pull each image from the specified
registry. However, if the <code>imagePullPolicy</code> property of the
container is set to <code>IfNotPresent</code> or <code>Never</code>, then a local image
is used (preferentially or exclusively, respectively).</p>
<p>If you want to rely on pre-pulled images as a substitute for registry
authentication, you must ensure all nodes in the cluster have the same
pre-pulled images.</p>
<p>This can be used to preload certain images for speed or as an
alternative to authenticating to a private registry.</p>
<p>All pods will have read access to any pre-pulled images.</p>
</blockquote>
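<p>Since the question uses <code>kubectl run</code>, the same policy can presumably be passed on the command line as well (a sketch reusing the image name from the question; the image must already exist on the node's Docker daemon):</p>
<pre><code>kubectl run -it --rm --restart=Always --image-pull-policy=Never --image=localhost:5000/my-image my-image
</code></pre>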
| mario |
<p>For example in the following example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: exmaple-pvc
spec:
accessModes:
- ReadOnlyMany
- ReadWriteMany
storageClassName: standard
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
</code></pre>
<p>Why is this allowed? What is the actual behavior of the volume in this case? Read only? Read and write?</p>
| Chris Stryczynski | <p>To be able to fully understand why a certain structure is used in a specific field of <code>yaml</code> definition, first we need to understand the purpose of this particular field. We need to ask what it is for, what is its function in this particular <strong>kubernetes api-resource</strong>.</p>
<p>I struggled a bit to find the proper explanation of <code>accessModes</code> in <code>PersistentVolumeClaim</code> and I must admit that <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">what I found</a> in the <strong>official kubernetes docs</strong> did not satisfy me:</p>
<blockquote>
<p>A <code>PersistentVolume</code> can be mounted on a host in any way supported by
the resource provider. As shown in the table below, providers will
have different capabilities and each PV’s access modes are set to the
specific modes supported by that particular volume. For example, NFS
can support multiple read/write clients, but a specific NFS PV might
be exported on the server as read-only. Each PV gets its own set of
access modes describing that specific PV’s capabilities.</p>
</blockquote>
<p>Fortunately this time I managed to find really great explanation of this topic in <a href="https://docs.openshift.com/dedicated/storage/understanding-persistent-storage.html#pv-access-modes_understanding-persistent-storage" rel="nofollow noreferrer">OpenShift documentation</a>. We can read there:</p>
<blockquote>
<p>Claims are matched to volumes with similar access modes. The only two
matching criteria are access modes and size. A claim’s access modes
represent a request. Therefore, you might be granted more, but never
less. For example, if a claim requests RWO, but the only volume
available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS
because it supports RWO.</p>
<p>Direct matches are always attempted first. The volume’s modes must
match or contain more modes than you requested. The size must be
greater than or equal to what is expected. If two types of volumes,
such as NFS and iSCSI, have the same set of access modes, either of
them can match a claim with those modes. There is no ordering between
types of volumes and no way to choose one type over another.</p>
<p>All volumes with the same modes are grouped, and then sorted by size,
smallest to largest. The binder gets the group with matching modes and
iterates over each, in size order, until one size matches.</p>
</blockquote>
<p><strong>And now probably the most important part:</strong></p>
<blockquote>
<p><strong>A volume’s <code>AccessModes</code> are descriptors of the volume’s
capabilities. They are not enforced constraints.</strong> The storage provider
is responsible for runtime errors resulting from invalid use of the
resource.</p>
</blockquote>
<p>I emphasized this part as <code>AccessModes</code> can be very easily misunderstood. Let's look at the example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-2
spec:
accessModes:
- ReadOnlyMany
storageClassName: standard
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
</code></pre>
<p>The fact that we specified only the <code>ReadOnlyMany</code> access mode in our <code>PersistentVolumeClaim</code> definition doesn't mean it cannot be used in other <code>accessModes</code> supported by our storage provider. It's important to understand that we cannot put here any constraint on how the requested storage can be used by our <code>Pods</code>. If our storage provider, hidden behind our <code>standard</code> storage class, also supports <code>ReadWriteOnce</code>, it will also be available for use.</p>
<p><strong>Answering your particular question...</strong></p>
<blockquote>
<p>Why is this allowed? What is the actual behavior of the volume in this
case? Read only? Read and write?</p>
</blockquote>
<p>It doesn't define behavior of the volume at all. The volume will behave according to its <strong>capabilities</strong> (we don't define them, they are imposed in advance, being part of the storage specification). In other words we will be able to use it in our <code>Pods</code> in all possible ways, in which it is allowed to be used.</p>
<p>Let's say our <code>standard</code> storage provisioner, which in case of <strong>GKE</strong> happens to be <strong>Google Compute Engine Persistent Disk</strong>:</p>
<pre><code>$ kubectl get storageclass
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 10d
</code></pre>
<p>currently supports two <code>AccessModes</code>:</p>
<ul>
<li><code>ReadWriteOnce</code></li>
<li><code>ReadOnlyMany</code></li>
</ul>
<p>So we can use all of them, no matter what we specified in our claim e.g. this way:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: debian
template:
metadata:
labels:
app: debian
spec:
containers:
- name: debian
image: debian
command: ['sh', '-c', 'sleep 3600']
volumeMounts:
- mountPath: "/mnt"
name: my-volume
readOnly: true
volumes:
- name: my-volume
persistentVolumeClaim:
claimName: example-pvc-2
initContainers:
- name: init-myservice
image: busybox
command: ['sh', '-c', 'echo "Content of my file" > /mnt/my_file']
volumeMounts:
- mountPath: "/mnt"
name: my-volume
</code></pre>
<p>In the above example <strong>both capabilities are used</strong>. First our volume is mounted in <code>rw</code> mode by the <code>init container</code>, which saves some file to it, and after that it is mounted to the <code>main container</code> as a read-only file system. We are still able to do it even though we specified only one access mode in our <code>PersistentVolumeClaim</code>:</p>
<pre><code>spec:
accessModes:
- ReadOnlyMany
</code></pre>
<p>Going back to the question you asked in the title:</p>
<blockquote>
<p>Why can you set multiple accessModes on a persistent volume?</p>
</blockquote>
<p><strong>the answer is:</strong> You cannot set them at all as they are already set by the storage provider, you can only request this way what storage you want, what requirements it must meet and one of these requirements are access modes it supports.</p>
<p>Basically by typing:</p>
<pre><code>spec:
accessModes:
- ReadOnlyMany
- ReadWriteOnce
</code></pre>
<p>in our <code>PersistentVolumeClaim</code> definition we say:</p>
<p><em>"Hey! Storage provider! Give me a volume that supports this set of <code>accessModes</code>. I don't care if it supports any others, like <code>ReadWriteMany</code>, as I don't need them. Give me something that meets my requirements!"</em></p>
<p>I believe that further explanation why <em>an array</em> is used here is not needed.</p>
| mario |
<p>I was able to bootstrap the master node for a kubernetes deployment using <code>kubeadm</code>, but I'm getting errors in the <code>kubeadm join phase kubelet-start phase</code>:</p>
<pre><code>kubeadm --v=5 join phase kubelet-start 192.168.1.198:6443 --token x4drpl.ie61lm4vrqyig5vg --discovery-token-ca-cert-hash sha256:hjksdhjsakdhjsakdhajdka --node-name media-server
W0118 23:53:28.414247 22327 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0118 23:53:28.414383 22327 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
I0118 23:53:28.414476 22327 join.go:441] [preflight] Discovering cluster-info
I0118 23:53:28.414744 22327 token.go:188] [discovery] Trying to connect to API Server "192.168.1.198:6443"
I0118 23:53:28.416434 22327 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.198:6443"
I0118 23:53:28.433749 22327 token.go:134] [discovery] Requesting info from "https://192.168.1.198:6443" again to validate TLS against the pinned public key
I0118 23:53:28.446096 22327 token.go:152] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.198:6443"
I0118 23:53:28.446130 22327 token.go:194] [discovery] Successfully established connection with API Server "192.168.1.198:6443"
I0118 23:53:28.446163 22327 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0118 23:53:28.446186 22327 join.go:455] [preflight] Fetching init configuration
I0118 23:53:28.446197 22327 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0118 23:53:28.461658 22327 interface.go:400] Looking for default routes with IPv4 addresses
I0118 23:53:28.461682 22327 interface.go:405] Default route transits interface "eno2"
I0118 23:53:28.462107 22327 interface.go:208] Interface eno2 is up
I0118 23:53:28.462180 22327 interface.go:256] Interface "eno2" has 2 addresses :[192.168.1.113/24 fe80::225:90ff:febe:5aaf/64].
I0118 23:53:28.462205 22327 interface.go:223] Checking addr 192.168.1.113/24.
I0118 23:53:28.462217 22327 interface.go:230] IP found 192.168.1.113
I0118 23:53:28.462228 22327 interface.go:262] Found valid IPv4 address 192.168.1.113 for interface "eno2".
I0118 23:53:28.462238 22327 interface.go:411] Found active IP 192.168.1.113
I0118 23:53:28.462284 22327 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0118 23:53:28.463384 22327 kubelet.go:115] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0118 23:53:28.465766 22327 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.Unfortunately, an error has occurred:
timed out waiting for the conditionThis error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
timed out waiting for the condition
error execution phase kubelet-start
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).BindToCommand.func1.1
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:348
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.17.1-beta.0.42+d224476cd0730b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
</code></pre>
<p>Now, looking at the kubelet logs with <code>journalctl -xeu kubelet</code>:</p>
<pre><code>Jan 19 00:04:38 media-server systemd[23817]: kubelet.service: Executing: /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --cgroup-driver=cgroupfs
Jan 19 00:04:38 media-server kubelet[23817]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 19 00:04:38 media-server kubelet[23817]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 19 00:04:38 media-server kubelet[23817]: I0119 00:04:38.706834 23817 server.go:416] Version: v1.17.1
Jan 19 00:04:38 media-server kubelet[23817]: I0119 00:04:38.707261 23817 plugins.go:100] No cloud provider specified.
Jan 19 00:04:38 media-server kubelet[23817]: I0119 00:04:38.707304 23817 server.go:821] Client rotation is on, will bootstrap in background
Jan 19 00:04:38 media-server kubelet[23817]: E0119 00:04:38.709106 23817 bootstrap.go:240] unable to read existing bootstrap client config: invalid configuration: [unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory, unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory]
Jan 19 00:04:38 media-server kubelet[23817]: F0119 00:04:38.709153 23817 server.go:273] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 19 00:04:38 media-server systemd[1]: kubelet.service: Child 23817 belongs to kubelet.service.
Jan 19 00:04:38 media-server systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
</code></pre>
<p>Interestingly, no <code>kubelet-client-current.pem</code> is found on the worker trying to join, in fact the only file inside <code>/var/lib/kubelet/pki</code> are <code>kubelet.{crt,key}</code></p>
<p>If I run the following command on the node trying to join I get that all certificates are missing:</p>
<pre><code># kubeadm alpha certs check-expiration
W0119 00:06:35.088034 24017 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0119 00:06:35.088082 24017 validation.go:28] Cannot validate kubelet config - no validator is available
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
!MISSING! admin.conf
!MISSING! apiserver
!MISSING! apiserver-etcd-client
!MISSING! apiserver-kubelet-client
!MISSING! controller-manager.conf
!MISSING! etcd-healthcheck-client
!MISSING! etcd-peer
!MISSING! etcd-server
!MISSING! front-proxy-client
!MISSING! scheduler.conf Error checking external CA condition for ca certificate authority: failure loading certificate for API server: failed to load certificate: couldn't load the certificate file /etc/kubernetes/pki/apiserver.crt: open /etc/kubernetes/pki/apiserver.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>The only file in <code>/etc/kubernetes/pki</code> is <code>ca.crt</code></p>
<p>Both master and worker have kubeadm and kubelet versions 1.17.1, so a version mismatch doesn't look likely</p>
<p>something possibly unrelated but also prone to cause errors is that both worker and master nodes have docker setup with <code>Cgroup Driver: systemd</code> , but for some reason kubelet is being passed <code>--cgroup-driver=cgroupfs</code></p>
<p>What could be causing this issue? and more importantly, how do I fix it so I can successfully join nodes to the master?</p>
<h2>Edit: more information</h2>
<p>On the worker, the systemd files are:</p>
<pre><code>~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
<p>the unit service for <code>kubelet</code>:</p>
<pre><code>~# cat /etc/systemd/system/multi-user.target.wants/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>and the kubelet <code>config.yaml</code>:</p>
<pre><code>~# cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
</code></pre>
<p>contents of <code>/var/lib/kubelet/kubeadm-flags.env</code> on worker node versus master node:</p>
<p><em>worker:</em></p>
<p><code>KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"</code></p>
<p><em>master:</em></p>
<p><code>KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf"</code></p>
<p>both master and worker have the same docker version 18.09, and their config files are identical:</p>
<pre><code>~$ cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"data-root": "/opt/var/docker/"
}
</code></pre>
| lurscher | <p>I believe the kubelet service on the worker node failed to authenticate to the API server due to an expired bootstrap token. Can you regenerate the token on the master node and run the kubeadm join command on the worker node again?</p>
<pre><code>CMD: kubeadm token create --print-join-command
</code></pre>
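<p>A sketch of the full flow (the join command in the comment only shows the general shape of what gets printed):</p>
<pre><code># on the control-plane (master) node
kubeadm token create --print-join-command

# on the worker node, run the printed command, which looks like:
# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>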
| Subramanian Manickam |
<p>According to the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/image.md" rel="noreferrer">official documentation</a> I should be able to easily override the tags and name of my docker images using some nifty <code>kustomization</code> syntax. I have tried to reproduce this.</p>
<p><strong>In my <code>deployment.yaml</code> I have the following:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
service: my-app
name: my-app
spec:
strategy:
type: Recreate
template:
metadata:
labels:
service: my-app
spec:
imagePullSecrets:
- name: my-docker-secret
containers:
- name: my-app
image: the-my-app
imagePullPolicy: Always
ports:
- containerPort: 1234
restartPolicy: Always
</code></pre>
<p><strong>In my <code>kustomization.yaml</code> I have the following:</strong></p>
<pre><code>bases:
- ../base
resources:
- deployment.yaml
namespace: my-namespace
images:
- name: the-my-app
- newName: my.docker.registry.com/my-project/my-app
newTag: test
</code></pre>
<p>However, when I do this:</p>
<pre><code>kubectl apply -k my-kustomization-dir
</code></pre>
<p>and wait for the deployment to spin up, and later do</p>
<pre><code>kubectl describe pod/my-app-xxxxxxxxx-xxxxx
</code></pre>
<p>The events looks like this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Successfully assigned my-namespace/my-app-xxxxxxxxxx-xxxxx to default-pool-xxxxxxxxxx-xxxxx
Normal Pulling 2s kubelet, default-pool-xxxxxxxxxx-xxxxx pulling image "the-my-app"
Warning Failed 0s kubelet, default-pool-xxxxxxxxxx-xxxxx Failed to pull image "the-my-app": rpc error: code = Unknown desc = Error response from daemon: pull access denied for the-my-app, repository does not exist or may require 'docker login'
Warning Failed 0s kubelet, default-pool-xxxxxxxxxx-xxxxx Error: ErrImagePull
Normal BackOff 0s kubelet, default-pool-xxxxxxxxxx-xxxxx Back-off pulling image "the-my-app"
Warning Failed 0s kubelet, default-pool-xxxxxxxxxx-xxxxx Error: ImagePullBackOff
</code></pre>
<p>Indicating that this did not work as expected (it tries to pull the original name specified in <code>deployment.yaml</code>).</p>
<p>So my question is, what am I doing wrong here?</p>
| Mr. Developerdude | <p>You have to remove the "-" before the newName line under the images section. It should look like this; this worked for me:</p>
<pre><code> images:
- name: the-my-app
newName: my.docker.registry.com/my-project/my-app
newTag: test
</code></pre>
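<p>You can presumably verify the rendered image reference before applying, for example:</p>
<pre><code>kubectl kustomize my-kustomization-dir | grep "image:"
</code></pre>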
| Subramanian Manickam |
<p>I have a helm chart that is creating a config map for which I am passing content as a value from terraform using helm_release.</p>
<p>values.yml: default is empty</p>
<pre><code>sql_queries_file: ""
</code></pre>
<p>helm template for configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: sql-queries
data:
{{ .Values.sql_queries_file }}
</code></pre>
<p>terraform file:</p>
<pre><code>resource "helm_release" "example" {
............
..................
set {
name = "sql_queries_file"
value = file(./sql_queries.sql)
}
}
</code></pre>
<p>I have a sql_queris.sql fine inside terraform folder with sample data below.</p>
<pre><code>-- From http://docs.confluent.io/current/ksql/docs/tutorials/basics-docker.html#create-a-stream-and-table
-- Create a stream pageviews_original from the Kafka topic pageviews, specifying the value_format of DELIMITED
CREATE STREAM pageviews_original (viewtime bigint, userid varchar, pageid varchar) WITH (kafka_topic='pageviews', value_format='DELIMITED');
</code></pre>
<p>Error:</p>
<pre><code>Failed parsing key sql_queries_file with value <entire content here>
</code></pre>
<p>Is this the right way? or is there a better way?</p>
| SNR | <p>I would use <code>filebase64</code> to read the file with Terraform to avoid templating issues. You can decode it in Helm like this: <code>{{ b64dec .Values.sql_queries_file }}</code>. By the way, you should put the content under a key in the ConfigMap's <code>data</code> field like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: sql-queries
data:
sql_queries.sql: |-
{{ .Values.sql_queries_file | nindent 4 }}
# {{ b64dec .Values.sql_queries_file | nindent 4 }} if you want to unmarshal
</code></pre>
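<p>On the Terraform side, a minimal sketch of the matching <code>set</code> block (assuming the same file layout as in the question) could be:</p>
<pre><code>set {
  name  = "sql_queries_file"
  value = filebase64("./sql_queries.sql")
}
</code></pre>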
<p>Edit: fixed typo in answer.</p>
| Akin Ozer |
<p>I use <code>.kube/config</code> to access Kubernetes api on a server. I am wondering does the token in config file ever get expired? How to prevent it from expire?</p>
| cometta | <p>Yes, it will expire after one year. The automatic certificate renewal feature is the default on Kubernetes 1.15, unless you have explicitly disabled it during the kubeadm init phase with the --certificate-renewal=false option.</p>
<p><strong>Check expiration:</strong></p>
<pre><code> kubeadm alpha certs check-expiration
</code></pre>
<p>E.g.</p>
<pre><code>CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf                 Sep 06, 2020 04:34 UTC   361d            no
apiserver                  Sep 06, 2020 04:34 UTC   361d            no
apiserver-etcd-client      Sep 06, 2020 04:34 UTC   361d            no
apiserver-kubelet-client   Sep 06, 2020 04:34 UTC   361d            no
controller-manager.conf    Sep 06, 2020 04:34 UTC   361d            no
etcd-healthcheck-client    Sep 06, 2020 04:34 UTC   361d            no
etcd-peer                  Sep 06, 2020 04:34 UTC   361d            no
etcd-server                Sep 06, 2020 04:34 UTC   361d            no
front-proxy-client         Sep 06, 2020 04:34 UTC   361d            no
scheduler.conf             Sep 06, 2020 04:34 UTC   361d            no
</code></pre>
<p><strong>Renew all certifications:</strong></p>
<pre><code> kubeadm alpha certs renew all
</code></pre>
<p><strong>Renew only admin.conf:</strong></p>
<pre><code> kubeadm alpha certs renew admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</code></pre>
| Subramanian Manickam |
<p>Is there a way that I can get release logs for a particular K8s release within my K8s cluster as the replica-sets related to that deployment is no longer serving pods?</p>
<p>For example, <code>kubectl rollout history deployment/pod1-dep</code> would show:</p>
<p>1</p>
<p>2 <- failed deploy</p>
<p>3 <- Latest deployment successful</p>
<p>If I want to pick the logs related to the events in <code>2</code>, would that be possible, or is there a way to achieve such functionality?</p>
| Pasan Chamikara | <p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p>
<p>As David Maze rightly suggested in his comment above:</p>
<blockquote>
<p>Once a pod is deleted, its logs are gone with it. If you have some
sort of external log collector that will generally keep historical
logs for you, but you needed to have set that up before you attempted
the update.</p>
</blockquote>
<p>So the answer to your particular question is: <strong>no, you can't get such logs once those particular pods are deleted.</strong></p>
| mario |
<p>Why does the following error occur when I install <code>Linkerd 2.x</code> on a private cluster in GKE?</p>
<pre><code>Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1: the server is currently unable to handle the request
</code></pre>
| cpretzer | <p><strong>Solution:</strong></p>
<p>The steps I followed are:</p>
<ol>
<li><p><code>kubectl get apiservices</code>: If the linkerd apiservice is down with the error CrashLoopBackOff, try to follow step 2; otherwise just try to restart the linkerd service using kubectl delete apiservice/"service_name". For me it was v1alpha1.tap.linkerd.io.</p>
</li>
<li><p><code>kubectl get pods -n kube-system</code>: I found out that pods like metrics-server, linkerd, kubernetes-dashboard were down because the main CoreDNS pod was down.</p>
</li>
</ol>
<p>For me it was:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
</code></pre>
<ol start="3">
<li>Use kubectl describe pod/"pod_name" to check the error in coreDNS pod and if it is down because of <code>/etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy</code>, then we need to use forward instead of proxy in the yaml file where coreDNS config is there. Because CoreDNS version 1.5x used by the image does not support the proxy keyword anymore.</li>
</ol>
| Sanket Singh |
<p>I have a deployment which requires to read a license file from a host. The license file is a text file (not a yaml config). I know we can mount a ConfigMap in a deployment but afaik ConfigMap is only in yaml format.</p>
<p>What is the best way to mount this single file into a deployment?</p>
| Kintarō | <p>You can create a configmap from any file:</p>
<pre><code>kubectl create configmap <map-name> --from-file=file.cfg
</code></pre>
<p>Then you can mount the configmap to your pod:</p>
<pre><code>volumes:
- name: config
configMap:
      name: <map-name>
</code></pre>
<pre><code>volumeMounts:
- name: config
mountPath: /dir/file.cfg
subPath: file.cfg
</code></pre>
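<p>Putting it together, a minimal Pod sketch with placeholder names (the ConfigMap is called <code>license-config</code> here and is created with <code>--from-file=license.txt</code>, a plain-text file, not YAML):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: license-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /opt/app/license.txt && sleep 3600"]
    volumeMounts:
    - name: license
      mountPath: /opt/app/license.txt
      subPath: license.txt
  volumes:
  - name: license
    configMap:
      name: license-config
</code></pre>
<p>Using <code>subPath</code> means only <code>license.txt</code> appears at that path, instead of the whole ConfigMap replacing the target directory.</p>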
| Burak Serdar |
<p>Suppose, I just installed one of the Kubernetes CNI plugins, for example <code>weave-net</code>:</p>
<pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
</code></pre>
<p>How can I view or list the installed CNI plugins?</p>
<p>After installing how do I know that it is running? Or if I <code>kubectl delete</code> the plugin, how do I know it was deleted?</p>
<p>After installing the plugin, I sort of expected to see some objects created for this plugin. So that if I want to delete it, then I don't have to remember the exact URL I used to install, I could just lookup the object name and delete it.</p>
| jersey bean | <p>If you list the pods in the kube-system namespace, you can see the plugin's pods. Their names start with weave-net-xxxxx. Since it is a DaemonSet object, the pod count is based on your k8s nodes: one pod is created per node.</p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
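<p>The manifest installs the plugin as regular Kubernetes objects (a DaemonSet plus a ServiceAccount and RBAC resources), so you can inspect or delete them by name instead of re-using the install URL. A sketch, assuming the default object name <code>weave-net</code>:</p>
<pre><code>kubectl get daemonset -n kube-system
kubectl describe daemonset weave-net -n kube-system

# remove the DaemonSet (the ServiceAccount/RBAC objects created by the
# same manifest would still need to be deleted separately)
kubectl delete daemonset weave-net -n kube-system
</code></pre>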
| Subramanian Manickam |
<p>We are trying to update a document using <code>updateOne()</code>. The filter identifies the document and we update an attribute of the document using <code>$set</code>. This update job is triggered by a cron job every minute.</p>
<p>Say the original document is <code>{_id: "USER1001", status: "INACTIVE"}</code>. We update it by calling <code>updateOne()</code> with the filter <code>{_id: "USER1001", status: "INACTIVE"}</code> and setting the status field with <code>{"$set":{status:"ACTIVE"}}</code>. We look at the resulting value of <code>modifiedCount</code> and expect it to be 1 to confirm that the <code>updateOne()</code> operation was successful. This then triggers downstream jobs.</p>
<p>The application runs in Kubernetes and is scaled out. When we test the system under load, two simultaneous <code>updateOne()</code> calls are made to the same document with the same filter from two different pods, and both return <code>modifiedCount</code> as 1.</p>
<p>We expect <code>modifiedCount</code> to be 1 for one pod and 0 for the other, but for some documents both calls report 1.</p>
<p><em>Sample code for reference</em></p>
<pre><code>// cron job that calls update() every minute
func update() {
    filter := bson.D{{"_id", "USER1001"}, {"status", "INACTIVE"}}
    result, err := collection.UpdateOne(context.Background(), filter, bson.M{"$set": bson.M{"status": "ACTIVE"}})
    if err != nil {
        // log the error and bail out
        return
    }
    if result.ModifiedCount != 1 {
        // no update done
    } else {
        // call downstream jobs
    }
}
</code></pre>
<p><em>Sample log lines that we have from the application pods</em></p>
<ul>
<li>POD-1</li>
</ul>
<blockquote>
<p>[2020-11-20 17:30:58.610518875 +0000 UTC] [DEBUG] [myJob-7dc8b78bcf-c4677] update() :: Update result :: USER1001 / Matched=1 / Modified=1</p>
</blockquote>
<ul>
<li>POD-2</li>
</ul>
<blockquote>
<p>[2020-11-20 17:30:58.409843674 +0000 UTC] [DEBUG] [myJob-7dc8b78bcf-jd7m8] update() :: Update result :: USER1001 / Matched=1 / Modified=1</p>
</blockquote>
<p><strong>Question here is -</strong></p>
<ol>
<li>Has any one else seen this behaviour?</li>
<li>What could be causing this issue?</li>
</ol>
<p><strong>Additional info,</strong></p>
<ul>
<li>The application is in Go</li>
<li>Mongo is 4.4 and</li>
<li>We are using the latest mongo-driver for go.</li>
<li>The cron job runs every minute, but not on the stroke of the minute, which means that:
<ul>
<li>POD-1 might run it at,
<ul>
<li>10:00:23</li>
<li>10:01:23</li>
<li>10:02:23 etc</li>
</ul>
</li>
<li>POD-2 might run it at,
<ul>
<li>10:00:35</li>
<li>10:01:35</li>
<li>10:02:35 etc</li>
</ul>
</li>
</ul>
</li>
</ul>
| jerrymannel | <p>If you read the document first, find out that it is inactive, and then decide to activate it, that is a common race condition you have to deal with: another process does the same thing, and then both update the document.</p>
<p>There are ways to prevent this. MongoDB document-level operations are atomic, so the simplest solution for your case is to include the status in the filter, <code>{_id:"USER1001","status":"INACTIVE"}</code>, to make sure the document is still inactive at the moment you update it. Then only one node will successfully update the document, even though multiple nodes might attempt the update.</p>
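<p>A minimal sketch of the same idea with the official Go driver, using <code>FindOneAndUpdate</code> so each pod can tell whether it was the one that actually flipped the status (the package name, collection handle, and id are placeholders, not your exact code):</p>
<pre><code>package jobs

import (
    "context"
    "errors"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
)

// tryActivate atomically flips status from INACTIVE to ACTIVE.
// Only the caller whose filter still matches the INACTIVE document
// gets true; a concurrent caller sees ErrNoDocuments and gets false.
func tryActivate(ctx context.Context, coll *mongo.Collection, id string) (bool, error) {
    filter := bson.D{{Key: "_id", Value: id}, {Key: "status", Value: "INACTIVE"}}
    update := bson.M{"$set": bson.M{"status": "ACTIVE"}}

    err := coll.FindOneAndUpdate(ctx, filter, update).Err()
    if errors.Is(err, mongo.ErrNoDocuments) {
        return false, nil // another pod already activated it
    }
    if err != nil {
        return false, err
    }
    return true, nil // this pod won the race; trigger downstream jobs
}
</code></pre>
<p>Either way, the important part is that the status check and the update happen in a single atomic operation on the server, so only one pod should observe a modification and trigger the downstream jobs.</p>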
| Burak Serdar |