<p>How do you get around waiting on resources not yet created?</p> <p>In my script I get:</p> <pre><code>kubectl wait --for=condition=ready --timeout=60s -n &lt;some namespace&gt; --all pods error: no matching resources found </code></pre>
Chris G.
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p> <p>As documented:</p> <blockquote> <p>Experimental: Wait for a specific condition on one or many resources.</p> <p>The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource.</p> <p>Alternatively, the command can wait for the given set of resources to be deleted by providing the &quot;delete&quot; keyword as the value to the --for flag.</p> <p>A successful message will be printed to stdout indicating when the specified condition has been met. One can use -o option to change to output destination.</p> </blockquote> <p>This command will not work for resources that haven't been created yet. @EmruzHossain has posted two valid points:</p> <ul> <li><p>Make sure you have provided a valid namespace.</p> </li> <li><p>First wait for the resource to get created, for example with a loop that runs <code>kubectl get</code> periodically. When the desired resource is found, break the loop. Then, run <code>kubectl wait</code> to wait for the resource to be ready (see the sketch below).</p> </li> </ul> <p>Also, there is this open thread: <a href="https://github.com/kubernetes/kubernetes/issues/83242" rel="noreferrer">kubectl wait for un-existed resource. #83242</a> which is still waiting (no pun intended) to be implemented.</p>
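<p>A minimal sketch of such a wait-then-check loop (the namespace and timings below are placeholders):</p> <pre><code># poll until at least one pod exists in the namespace, then wait for readiness
while [ -z &quot;$(kubectl get pods -n my-namespace -o name 2&gt;/dev/null)&quot; ]; do
  echo &quot;waiting for pods to be created...&quot;
  sleep 5
done
kubectl wait --for=condition=ready --timeout=60s -n my-namespace --all pods
</code></pre>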
Wytrzymały Wiktor
<p>I installed minikube in a remote computer. The service is up and the configuration looks ok:</p> <pre><code>$ kubectl cluster-info Kubernetes master is running at https://192.168.49.2:8443 KubeDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <p>I am connected to this computer through SSH and am able to launch all graphic applications. Sanity check, confirming X11 forwarding is allowed:</p> <pre><code>$ grep X11Forwarding /etc/ssh/sshd_config X11Forwarding yes </code></pre> <p>But when I try to start the dashboard I get an X11 error:</p> <pre><code>$ minikube dashboard * Verifying dashboard health ... * Launching proxy ... * Verifying proxy health ... * Opening http://127.0.0.1:39571/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser... X11 connection rejected because of wrong authentication. [61559:61559:1119/114641.640444:ERROR:browser_main_loop.cc(1434)] Unable to open X display. </code></pre> <p>What could be the cause?</p> <p><strong>Update</strong>: replying to <a href="https://stackoverflow.com/questions/64910125/start-minikube-dashboard-in-remote-computer-through-ssh?noredirect=1#comment114761931_64910125">larks below</a>, the <code>ForwardX11Trusted</code> parameter is set to <code>yes</code> on the SSH client:</p> <pre><code>$ cat /etc/ssh/ssh_config | grep ForwardX11Trusted ForwardX11Trusted yes </code></pre>
Luís de Sousa
<p>There could be plenty of reasons for that. You may just need to:</p> <pre><code>export XAUTHORITY=$HOME/.Xauthority </code></pre> <p>Also make sure <code>~/.Xauthority</code> is owned by you.</p> <p>In order to verify it, run:</p> <pre><code>ls -l ~/.Xauthority </code></pre> <p>And then, depending on the result, you may need to fix ownership and permissions on that file with:</p> <pre><code>chown user:group ~/.Xauthority </code></pre> <p>and</p> <pre><code>chmod 0600 ~/.Xauthority </code></pre>
mario
<p>I have deployed 5 apps using Azure Container Instances. These are working fine; the issue I have is that currently all containers are running all the time, which gets expensive.</p> <p>What I want to do is start/stop instances when required, using for this purpose a master container or VM that will be running all the time.</p> <p>E.g.:</p> <p>This master service gets a request to spin up service number 3 for 2 hours, then shut it down, and all other containers will be off until they receive a similar request.</p> <p>For my use case, each service will be used for less than 5 hours a day most of the time.</p> <p>Now, I know Kubernetes is an engine made to manage containers, but all examples I have found are for high-scale services, not for 5 services with only one container each. I'm also not sure whether Kubernetes allows having all the containers off most of the time.</p> <p>What I was thinking of is to handle all this through some API, but I'm not finding any service in Azure that allows something similar to this; I have only found options to create new containers, not to spin them up and shut them down.</p> <p>EDIT:</p> <p>Also, these apps run processes that are too heavy to have them on a serverless platform.</p>
Luis Ramon Ramirez Rodriguez
<p>The solution is to define a Horizontal Pod Autoscaler for your deployment.</p> <p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example, DaemonSets.</p> <p>The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user.</p> <p>The configuration file should look like this:</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hpa-images-service spec: scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: example-deployment minReplicas: 2 maxReplicas: 100 targetCPUUtilizationPercentage: 75 </code></pre> <p><code>scaleTargetRef</code> should refer to your deployment definition. You can set <code>minReplicas</code> and <code>targetCPUUtilizationPercentage</code> according to your preferences (note that <code>minReplicas</code> cannot normally be set to 0; scaling to zero is only possible with the alpha <code>HPAScaleToZero</code> feature gate). This approach should help you save money, because replicas are removed when the observed CPU utilization drops below the target.</p> <p>Kubernetes official documentation: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">kubernetes-hpa</a>.</p> <p>GKE autoscaler documentation: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">gke-autoscaler</a>.</p> <p>Useful blog about saving cash using GCP: <a href="https://medium.com/pixboost/save-cash-by-running-kubernetes-services-on-preemptible-vms-in-google-cloud-cca02809ae09" rel="nofollow noreferrer">kubernetes-google-cloud</a>.</p>
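<p>For reference, an equivalent autoscaler can also be created imperatively; a minimal sketch using the deployment name from the manifest above:</p> <pre><code>kubectl autoscale deployment example-deployment --cpu-percent=75 --min=2 --max=100
</code></pre>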
Malgorzata
<h3>Version</h3> <ol> <li>k8s version: v1.19.0</li> <li>metrics server: v0.3.6</li> </ol> <p>I set up a k8s cluster and the metrics server. It can show metrics for the nodes and for pods on the master node, but it cannot see the worker node's pods; for those it returns unknown.</p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% u-29 1160m 14% 37307Mi 58% u-31 2755m 22% 51647Mi 80% u-32 4661m 38% 32208Mi 50% u-34 1514m 12% 41083Mi 63% u-36 1570m 13% 40400Mi 62% </code></pre> <p>When the pod is running on the worker node, it returns <code>unable to fetch pod metrics for pod default/nginx-7764dc5cf4-c2sbq: no metrics known for pod</code>. When the pod is running on the master node, it can return CPU and memory:</p> <pre><code>NAME CPU(cores) MEMORY(bytes) nginx-7cdd6c99b8-6pfg2 0m 2Mi </code></pre>
benq
<p>This is a community wiki answer based on OP's comment posted for better visibility. Feel free to expand it.</p> <p>The issue was caused by using different versions of docker on different nodes. After upgrading docker to v19.3 on both nodes and executing <code>kubeadm reset</code> the issue was resolved.</p>
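<p>A quick way to compare the container runtime versions reported by each node is shown below; the <code>CONTAINER-RUNTIME</code> column should be identical on every node:</p> <pre><code>kubectl get nodes -o wide
</code></pre>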
Wytrzymały Wiktor
<p>I want to configure the following settings in my <code>nginx</code> ingress controller deployment</p> <pre><code>proxy_socket_keepalive -&gt; on proxy_read_timeout -&gt; 3600 proxy_write_timeout -&gt;3600 </code></pre> <p>However I am unable to find them as <code>annotations</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">here</a>, although they appear in the list of available <code>nginx</code> <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_socket_keepalive" rel="nofollow noreferrer">directives</a>.</p> <p>Why is that?</p>
pkaramol
<p>There is no <code>proxy_write_timeout</code>. I assume you meant the <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout" rel="nofollow noreferrer">proxy_send_timeout</a>.</p> <p>Both:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-send-timeout </code></pre> <p>and:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-read-timeout </code></pre> <p>can be found <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts" rel="nofollow noreferrer">here</a> and <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/#general-customization" rel="nofollow noreferrer">here</a>.</p> <p>As for <code>proxy_socket_keepalive</code>, unfortunately, this option cannot be set via annotations. You may want to nest it in the Nginx config instead, for example:</p> <pre><code>location / { client_max_body_size 128M; proxy_buffer_size 256k; proxy_buffers 4 512k; proxy_busy_buffers_size 512k; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_socket_keepalive on; } </code></pre>
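<p>A minimal sketch of how the two supported timeouts could be set on an Ingress resource (the ingress and service names below are placeholders; values are in seconds):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3600&quot;
    nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;3600&quot;
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>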
Wytrzymały Wiktor
<p>When I try to deploy a new image to Kubernetes, I get this error:</p> <blockquote> <p><code>unable to decode &quot;K8sDeploy.yaml&quot;: no kind &quot;Deployment&quot; is registered for version &quot;apps/v1&quot;</code></p> </blockquote> <p>The error began when I updated the Kubernetes version; here is my version info:</p> <pre><code>Client Version: v1.19.2 Server Version: v1.16.13 </code></pre> <p><a href="https://i.stack.imgur.com/2uRGJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2uRGJ.jpg" alt="enter image description here" /></a></p> <p>I also tried to deploy from my localhost and it works, but from Jenkins it doesn't.</p> <p>Does somebody know how to solve this?</p>
Gabriel Aguiar
<p>To check which <code>apiVersion</code> supports a <code>Deployment</code> resource in your <strong>kubernetes</strong> cluster you may run:</p> <pre><code>$ kubectl explain deployment | head -2 </code></pre> <p>and you can be almost sure that the result will be as follows:</p> <pre><code>KIND: Deployment VERSION: apps/v1 </code></pre> <p>All modern <strong>kubernetes</strong> versions use <code>apps/v1</code>, which has been available since <code>v1.9</code>, so for quite a long time already. As you may see <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">here</a>, older versions which were still available in <strong>kubernetes</strong> <code>1.15</code> have been deprecated in <code>1.16</code>.</p> <blockquote> <p>Client Version: v1.19.2 Server Version: v1.16.13</p> </blockquote> <p>As stated above, in <strong>kubernetes</strong> <code>1.16</code>, <code>Deployment</code> <strong>must use</strong> <code>apps/v1</code> and there is no possibility to use older api versions like <code>extensions/v1beta1</code>, <code>apps/v1beta1</code> or <code>apps/v1beta2</code> which were still available in <code>1.15</code>.</p> <p>Your issue seems to me to be rather an error from <strong>Jenkins</strong> (possibly an old version of <strong>Jenkins</strong> itself or <strong>some of its plugins</strong>, or perhaps something with its configuration) which is not able to recognize/parse the correct (and currently required) <code>apiVersion</code> for the <code>Deployment</code> resource.</p> <p>For troubleshooting purposes you can try to change the <code>apiVersion</code> to one of those listed above. This should give you a different error (this time from the kubernetes API server) as in <code>1.16</code> it won't be able to recognize it.</p> <p>But at least it should give you a clue. If with an older <code>apiVersion</code> your <strong>Jenkins</strong> doesn't complain any more, it would mean that it is set to work with older API versions and an update may help.</p> <p>I see you filed an <a href="https://github.com/kubernetes/website/issues/24888" rel="nofollow noreferrer">issue</a> on <strong>kubernetes GitHub</strong> so let's wait and see what they say, but as I said before, to me it doesn't look like an issue with <strong>kubernetes</strong> but rather with <strong>Jenkins</strong>' ability to parse a legitimate <code>Deployment</code> <code>yaml</code>.</p>
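<p>As an additional quick check, you can also list the API group versions the server actually serves (assuming <code>kubectl</code> is pointed at the affected cluster):</p> <pre><code>kubectl api-versions | grep ^apps/
</code></pre>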
mario
<p>I want to set AllowPrivilegeEscalation to false in a non-privileged container that runs with the CAP_SYS_ADMIN capability. As per the docs, &quot;AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has CAP_SYS_ADMIN.&quot; In this case, will it be set to true or false?</p>
sacboy
<p>As you already found in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">the docs</a>:</p> <blockquote> <p><code>AllowPrivilegeEscalation</code>: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the <code>no_new_privs</code> flag gets set on the container process. <strong><code>AllowPrivilegeEscalation</code> is true always when the container is: 1) run as Privileged OR 2) has <code>CAP_SYS_ADMIN</code>.</strong></p> </blockquote> <p>In your case the container has <code>CAP_SYS_ADMIN</code> so it would have the <code>AllowPrivilegeEscalation</code> set to <code>true</code>.</p> <p>This behavior is also explained in more detail in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/no-new-privs.md" rel="nofollow noreferrer">AllowPrivilegeEscalation design document</a>.</p>
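<p>For reference, this is roughly what the combination discussed above looks like in a container-level <code>securityContext</code> (the pod and image names are placeholders); according to the documentation quoted above, the added <code>SYS_ADMIN</code> capability means <code>AllowPrivilegeEscalation</code> ends up being <code>true</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cap-sys-admin-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 3600&quot;]
    securityContext:
      allowPrivilegeEscalation: false  # requested, but see the note above
      capabilities:
        add: [&quot;SYS_ADMIN&quot;]
</code></pre>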
Wytrzymały Wiktor
<p>I am trying to install a Kubernetes cluster with one master node and two worker nodes.</p> <p>I acquired 3 VMs for this purpose running on Ubuntu 21.10. In the master node, I installed <code>kubeadm:1.21.4</code>, <code>kubectl:1.21.4</code>, <code>kubelet:1.21.4</code> and <code>docker-ce:20.4</code>.</p> <p>I followed <a href="https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/" rel="nofollow noreferrer">this guide</a> to install the cluster. The only difference was in my init command where I did not mention the <code>--control-plane-endpoint</code>. I used calico CNI <code>v3.19.1</code> and docker for the CRI runtime.</p> <p>After I installed the cluster, I deployed a minio pod and exposed it as a NodePort. The pod got deployed on the worker node (<code>10.72.12.52</code>; my master node IP is <code>10.72.12.51</code>). For the first two hours, I was able to access the login page via all three IPs (<code>10.72.12.51:30981</code>, <code>10.72.12.52:30981</code>, <code>10.72.13.53:30981</code>). However, after two hours, I lost access to the service via <code>10.72.12.51:30981</code> and <code>10.72.13.53:30981</code>. Now I am only able to access the service from the node on which it is running (<code>10.72.12.52</code>).</p> <p>I have disabled the firewall and added a <code>calico.conf</code> file inside <code>/etc/NetworkManager/conf.d</code> with the following content:</p> <pre><code>[keyfile] unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico </code></pre> <p>What am I missing in the setup that might cause this issue?</p>
Abhinav Sharma
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p> <p>As mentioned by @AbhinavSharma the problem was solved by switching from Calico to Flannel CNI.</p> <p>More information regarding Flannel itself can be found <a href="https://github.com/flannel-io/flannel" rel="nofollow noreferrer">here</a>.</p>
Wytrzymały Wiktor
<p>I'm trying to run Spark in a Kubernetes cluster as described here: <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a></p> <p>It works fine for some basic scripts like the provided examples.</p> <p>I noticed that the config folder, despite being added to the image build by "docker-image-tool.sh", is overwritten by a mount of a config map volume.</p> <p>I have two questions:</p> <ol> <li>What sources does Spark use to generate that config map, and how do you edit it? As far as I understand, the volume gets deleted when the last pod is deleted and regenerated when a new pod is created.</li> <li>How are you supposed to handle the spark-env.sh script, which can't be added to a simple config map?</li> </ol>
Itsmedenise
<p>One initially non-obvious thing about Kubernetes is that changing a ConfigMap (a set of configuration values) is not detected as a change to Deployments (how a Pod, or set of Pods, should be deployed onto the cluster) or Pods that reference that configuration. That expectation can result in unintentionally stale configuration persisting until a change to the Pod spec. This could include freshly created Pods due to an autoscaling event, or even restarts after a crash, resulting in misconfiguration and unexpected behaviour across the cluster.</p> <p><strong>Note: This doesn’t impact ConfigMaps mounted as volumes, which are periodically synced by the kubelet running on each node.</strong></p> <p>To update the ConfigMap, execute:</p> <pre><code>$ kubectl replace -f file.yaml </code></pre> <p>You must create a ConfigMap before you can use it, so I recommend first modifying the ConfigMap and then redeploying the pod.</p> <p>Note that a container using a ConfigMap as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a> volume mount will not receive ConfigMap updates.</p> <p>The <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">configMap</a> resource provides a way to inject configuration data into Pods. The data stored in a ConfigMap object can be referenced in a volume of type configMap and then consumed by containerized applications running in a Pod.</p> <p>When referencing a configMap object, you can simply provide its name in the volume to reference it. You can also customize the path to use for a specific entry in the ConfigMap.</p> <p>When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as the kubelet sync period (1 minute by default) plus the TTL of the ConfigMaps cache in the kubelet (1 minute by default).</p> <p>But what I strongly recommend is to use the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md" rel="nofollow noreferrer">Kubernetes Operator for Spark</a>. It supports mounting volumes and ConfigMaps in Spark pods to customize them, a feature that is not available in Apache Spark as of version 2.4.</p> <p>A SparkApplication can specify a Kubernetes ConfigMap storing Spark configuration files such as spark-env.sh or spark-defaults.conf using the optional field .spec.sparkConfigMap, whose value is the name of the ConfigMap (a minimal example is sketched at the end of this answer). The ConfigMap is assumed to be in the same namespace as that of the SparkApplication. Spark on K8s provides configuration options that allow for mounting certain volume types into the driver and executor pods. Volumes are "delivered" from the Kubernetes side, but they can also be used as local storage in Spark. If no volume is set as local storage, Spark uses temporary scratch space to spill data to disk during shuffles and other operations. When using Kubernetes as the resource manager, the pods will be created with an emptyDir volume mounted for each directory listed in spark.local.dir or the environment variable SPARK_LOCAL_DIRS. If no directories are explicitly specified, then a default directory is created and configured appropriately.</p> <p>Useful blog: <a href="https://www.lightbend.com/blog/how-to-manage-monitor-spark-on-kubernetes-introduction-spark-submit-kubernetes-operator" rel="nofollow noreferrer">spark-kubernetes-operator</a>.</p>
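<p>A minimal sketch of a <code>SparkApplication</code> referencing a ConfigMap through <code>.spec.sparkConfigMap</code> (all names and the image below are placeholders, and the rest of the spec is abbreviated to the commonly required fields):</p> <pre><code>apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-spark-app
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: my-spark-image:latest
  mainClass: org.example.Main
  mainApplicationFile: local:///opt/spark/jars/my-app.jar
  sparkVersion: &quot;2.4.5&quot;
  # ConfigMap holding spark-defaults.conf / spark-env.sh; it must live in the
  # same namespace as the SparkApplication
  sparkConfigMap: my-spark-config
  driver:
    cores: 1
    memory: &quot;512m&quot;
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: &quot;512m&quot;
</code></pre>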
Malgorzata
<p>I have a simple issue with StatefulSet updates in my dev environment and CI.</p> <p>I want to replace all StatefulSet replicas instantly without using <code>kubectl delete</code> first. Is it possible to change the manifest to <code>strategy: Replace</code>, as in Deployments, and continue using <code>kubectl apply</code> ...?</p>
mkasepuu
<p>Currently the <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">StatefulSets</a> support only two kinds of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies" rel="nofollow noreferrer">update strategies</a>:</p> <ul> <li><p><code>RollingUpdate</code>: The <code>RollingUpdate</code> update strategy implements automated, rolling update for the Pods in a StatefulSet. It is the default strategy when <code>.spec.updateStrategy</code> is left unspecified. When a StatefulSet's <code>.spec.updateStrategy.type</code> is set to <code>RollingUpdate</code>, the StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed in the same order as Pod termination (from the largest ordinal to the smallest), updating each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to updating its predecessor.</p> </li> <li><p><code>OnDelete</code>: The <code>OnDelete</code> update strategy implements the legacy (1.6 and prior) behavior. When a StatefulSet's <code>.spec.updateStrategy.type</code> is set to <code>OnDelete</code>, the StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet's <code>.spec.template</code>.</p> </li> </ul> <p>However, there is a plan to implement a <a href="https://github.com/kubernetes/kubernetes/issues/68397" rel="nofollow noreferrer">MaxUnavailable Rolling Update to StatefulSet</a>. It would allow you to update X number of replicas together based on a <code>maxUnavailble</code> strategy. It led to this <a href="https://github.com/kubernetes/enhancements/pull/1010" rel="nofollow noreferrer">update proposal</a> but it is not done yet and judging from the latest comments it should be set as a milestone for Kubernetes 1.20.</p>
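<p>For reference, a minimal sketch of how the update strategy is selected in a StatefulSet manifest (names and image are placeholders); with <code>OnDelete</code> you keep full control over when each Pod gets replaced:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: my-service
  replicas: 3
  updateStrategy:
    type: OnDelete   # or RollingUpdate (the default)
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.14.2
</code></pre>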
Wytrzymały Wiktor
<p>I would like to run a shell script inside Kubernetes using a CronJob; here is my CronJob.yaml file:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: hello image: busybox imagePullPolicy: IfNotPresent command: - /bin/sh - -c - /home/admin_/test.sh restartPolicy: OnFailure </code></pre> <p>The CronJob has been created (<code>kubectl apply -f CronJob.yaml</code>). When I get the list of cronjobs I can see it (<code>kubectl get cj</code>), and when I run <code>kubectl get pods</code> I can see the pod being created, but then the pod crashes. Can anyone help me learn how to create a working CronJob inside Kubernetes, please?</p> <p><a href="https://i.stack.imgur.com/JvQQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JvQQ3.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/lVneg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lVneg.png" alt="enter image description here" /></a></p>
Amin Pashna
<p>As correctly pointed out in the comments, you need to provide the script file in order to execute it via your <code>CronJob</code>. You can do that by mounting the file within a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volume</a>. For example, your <code>CronJob</code> could look like this:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: hello image: busybox imagePullPolicy: IfNotPresent command: - /bin/sh - -c - /myscript/test.sh volumeMounts: - name: script-dir mountPath: /myscript restartPolicy: OnFailure volumes: - name: script-dir hostPath: path: /path/to/my/script/dir type: Directory </code></pre> <p>Example above shows how to use the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> type of volume in order to mount the script file.</p>
Wytrzymały Wiktor
<p>I'm looking for a generic way to expose multiple GKE TCP services to the outside world. I want SSL that's terminated at cluster edge. I would also prefer client certificate based auth, if possible.</p> <p>My current use case is to access PostgreSQL services deployed in GKE from private data centers (and only from there). But basically I'm interested in a solution that works for any TCP based service without builtin SSL and auth.</p> <p>One option would be to deploy an nginx as a reverse proxy for the TCP service, expose the nginx with a service of type LoadBalancer (L4, network load balancer), and configure the nginx with SSL and client certificate validation.</p> <p>Is there a better, more GKE native way to achieve it ?</p>
Laurentiu Soica
<p>To the best of my knowledge, <strong>there is no GKE-native way to achieve exactly what you need</strong>.</p> <p>If this was only dealing with HTTP-based traffic, you could simply use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE Ingress for HTTP(S) Load Balancing</a> but taking into consideration:</p> <blockquote> <p>But basically I'm interested in a solution that works for any TCP based service without builtin SSL and auth.</p> </blockquote> <p>this is not your use case.</p> <p>So you can either <strong>stay with what you've already set up</strong> as it seems to work well or as an alternative you can use:</p> <ol> <li><p>✅ <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">nginx ingress</a>, which unlike <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE ingress</a> is able to expose to the external world <strong>not only HTTP/HTTPS-based traffic</strong>, but also <strong>can proxy TCP connections</strong> coming to arbitrary ports.</p> </li> <li><p>✅ You can use <strong>TLS termination proxy</strong> as a <strong>sidecar</strong> (something like <a href="https://hub.docker.com/r/mnuessler/tls-termination-proxy" rel="nofollow noreferrer">this one</a> or <a href="https://github.com/flaccid/docker-tls-proxy" rel="nofollow noreferrer">this one</a>) behind <a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">External TCP/UDP Network Load Balancer</a>. As it is <strong>not a proxy</strong> but a <strong>pass-through LB</strong>, it cannot provide SSL termination and will only be able to pass the encrypted TCP traffic to the backend <code>Pod</code> where it needs be handled by the above mentioned <strong>sidecar</strong>.</p> </li> <li><p>❌ From the GCP-native load balancing solutions presented in <a href="https://cloud.google.com/load-balancing/docs/choosing-load-balancer#summary-of-google-cloud-load-balancers" rel="nofollow noreferrer">this table</a> only <a href="https://cloud.google.com/load-balancing/docs/ssl#firewall_rules" rel="nofollow noreferrer">SSL Proxy</a> may seem useful at first glance as <strong>it can handle TCP traffic with SSL offload</strong>, however ❗<strong>it supports only limited set of well-known TCP ports</strong> and as far as I understand, you need to be able to <strong>expose arbitrary TCP ports</strong> so this won't help you much:</p> </li> </ol> <blockquote> <p><strong>SSL Proxy Load Balancing support for the following ports:</strong> 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300. When you use Google- managed SSL certificates with SSL Proxy Load Balancing, the frontend port for traffic must be 443 to enable the Google-managed SSL certificates to be provisioned and renewed.</p> </blockquote>
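<p>For point 1, a minimal sketch of how nginx ingress exposes a raw TCP service: it reads a ConfigMap (conventionally called <code>tcp-services</code>) mapping an external port to a <code>namespace/service:port</code> backend. The names and namespace below are placeholders and depend on how the controller was installed:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # expose PostgreSQL behind the "postgres" Service in the "default" namespace
  # on port 5432 of the ingress controller
  &quot;5432&quot;: &quot;default/postgres:5432&quot;
</code></pre> <p>The corresponding port also has to be opened on the ingress controller's own <code>Service</code> of type <code>LoadBalancer</code>, as described in the linked exposing-tcp-udp-services guide.</p>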
mario
<p>When defining a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">ServiceAccount</a>, you tell Kubernetes which apiGroups, resources, and verbs you want to give access to:</p> <pre><code>apiVersion: v1 kind: ServiceAccount ... kind: Role rules: - apiGroups: [""] resources: ["pods", "pods/exec", "persistentvolumeclaims", "services"] verbs: ["get", "watch", "list", "create", "update", "patch", "delete", "deletecollection"] </code></pre> <p><em>Where can you find the full list of options?</em></p> <p>Running <code>kubectl api-resources -o wide</code> gives many of them, but does not return subresources like <code>pods/exec</code> or <code>pods/log</code>.</p>
Jethro
<p>Simply execute:</p> <pre><code>kubectl api-resources --verbs=list --namespaced -o name \ | xargs -n 1 kubectl get --show-kind --ignore-not-found -l &lt;label&gt;=&lt;value&gt; -n &lt;namespace&gt; </code></pre> <p>The <a href="https://shapeshed.com/unix-xargs/" rel="nofollow noreferrer">xargs</a> command in UNIX is a command line utility for building an execution pipeline from standard input. Whilst tools like grep can accept standard input as a parameter, many other tools cannot. Using xargs allows tools like echo and rm and mkdir to accept standard input as arguments.</p> <p>To fetch the logs, use the kubectl logs command, as follows:</p> <pre><code>kubectl logs your-pod-name -n namespace-name </code></pre> <p>The sub-resources and verbs that you need in order to define <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> roles are not documented anywhere in a static list. They are available in the discovery documentation, i.e. via the API, e.g. <code>/apis/apps/v1</code>.</p> <p>The following bash script will list all the resources, sub-resources and verbs in the following format:</p> <pre><code>api_version resource: [verb] </code></pre> <p>where <code>api_version</code> is <code>core</code> for the core resources and should be replaced by <code>""</code> (an empty quoted string) in your role definition.</p> <p>For example, <code>core pods/status: get patch update</code>.</p> <p>The script requires <code>jq</code>.</p> <pre><code>#!/bin/bash SERVER="localhost:8080" APIS=$(curl -s $SERVER/apis | jq -r '[.groups | .[].name] | join(" ")') # do core resources first, which are at a separate api location api="core" curl -s $SERVER/api/v1 | jq -r --arg api "$api" '.resources | .[] | "\($api) \(.name): \(.verbs | join(" "))"' # now do non-core resources for api in $APIS; do version=$(curl -s $SERVER/apis/$api | jq -r '.preferredVersion.version') curl -s $SERVER/apis/$api/$version | jq -r --arg api "$api" '.resources | .[]? | "\($api) \(.name): \(.verbs | join(" "))"' done </code></pre> <p>Note that where no verbs are listed via the API, the output will just show the API version and the resource, e.g.:</p> <pre><code>core pods/exec: </code></pre> <p>For the following resources, unfortunately, no verbs are shown via the API:</p> <pre><code>nodes/proxy pods/attach pods/exec pods/portforward pods/proxy services/proxy </code></pre> <p>The supported verbs for these resources are as follows:</p> <pre><code>nodes/proxy: create delete get patch update pods/attach: create get pods/exec: create get pods/portforward: create get pods/proxy: create delete get patch update services/proxy: create delete get patch update </code></pre> <p>Documentation about logging: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">kubernetes-logging</a>.</p> <p>More information can be found here: <a href="http://matthieure.me/2019/06/18/kubernetes-api-resources.html" rel="nofollow noreferrer">api-resources</a>.</p> <p>Useful blog: <a href="https://medium.com/faun/kubectl-commands-cheatsheet-43ce8f13adfb" rel="nofollow noreferrer">kubectl-cheat-sheet</a>.</p>
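<p>If your API server is not exposed on an unauthenticated local port such as <code>localhost:8080</code>, the same discovery documents can be fetched through <code>kubectl</code> itself; a small sketch for a single API group (requires <code>jq</code>):</p> <pre><code># query the discovery document of the apps/v1 group directly through kubectl
# (alternatively, run "kubectl proxy" in another terminal and keep using curl)
kubectl get --raw /apis/apps/v1 \
  | jq -r '.resources[] | "\(.name): \(.verbs | join(" "))"'
</code></pre>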
Malgorzata
<p>I have a Chart.yaml with the following:</p> <p><em>Chart.yaml</em></p> <pre class="lang-yaml prettyprint-override"><code>dependencies: - name: my-app version: &quot;0.1.0&quot; repository: &quot;@my-chartmuseum-repo&quot; </code></pre> <p>And I added the repo to helm:</p> <pre class="lang-sh prettyprint-override"><code># helm repo list NAME URL my-chartmuseum-repo http://127.0.0.1:8080/ stable https://charts.helm.sh/stable </code></pre> <p>When I run <code>helm dependency update my-owning-app</code> I get the successful message:</p> <pre class="lang-sh prettyprint-override"><code>helm dependency update my-owning-app Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the &quot;my-chartmuseum-repo&quot; chart repository ...Successfully got an update from the &quot;stable&quot; chart repository Update Complete. ⎈Happy Helming!⎈ Saving 1 charts Downloading my-app from repo http://127.0.0.1:8080/ Deleting outdated charts </code></pre> <p><strong>However</strong>, when I try to do this via <code>helm push my-owning-app/ my-chartmuseum-repo --dependency-update</code> I get the error:</p> <pre class="lang-sh prettyprint-override"><code>Error: no repository definition for @my-chartmuseum-repo. Please add them via 'helm repo add' Usage: helm push [flags] Flags: # ...elided... </code></pre> <p>Why would it work in the first command but not the second one to find the repository by name?</p>
Don Rhummy
<p>This is a community wiki answer. Feel free to expand on it.</p> <p>The <code>--dependency-update</code> flag for the <code>helm push</code> plugin is currently not working properly due to the fact that it does not omit the <code>@</code> symbol when checking the name of the repository.</p> <p>As a workaround, you could use the <a href="https://helm.sh/docs/helm/helm_dependency_update/" rel="nofollow noreferrer">Helm Dependency Update</a> with a <code>--repository-config string</code> flag:</p> <blockquote> <p>path to the file containing repository names and URLs (default &quot;~/.config/helm/repositories.yaml&quot;)</p> </blockquote>
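<p>A possible workaround sketch (the chart path and repository name are the ones from the question; the repository config path is the default one and may differ on your system): run the dependency update step separately, pointing it explicitly at the repository config that contains <code>my-chartmuseum-repo</code>, and then push without <code>--dependency-update</code>:</p> <pre><code>helm dependency update my-owning-app/ \
  --repository-config ~/.config/helm/repositories.yaml
helm push my-owning-app/ my-chartmuseum-repo
</code></pre>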
Wytrzymały Wiktor
<p>How is <code>containerPort</code> different from <code>targetPort</code> in a container in Kubernetes? Are they used interchangeably, and if so, why?</p> <p>I came across the below code snippet where <code>containerPort</code> is used to denote the <code>port</code> on a pod in Kubernetes.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment labels: app: demo-voting-app spec: replicas: 1 selector: matchLabels: name: postgres-pod app: demo-voting-app template: metadata: name: postgres-pod labels: name: postgres-pod app: demo-voting-app spec: containers: - name: postgres image: postgres:9.4 ports: - containerPort: 5432 </code></pre> <p>In the above code snippet, they have given 5432 for the <code>containerPort</code> parameter (in the last line). So, how is this <code>containerPort</code> different from <code>targetPort</code>?</p> <p>As far as I know, the term <code>port</code> in general refers to the <code>port</code> on the <code>service</code> (Kubernetes). Correct me if I'm wrong.</p>
Purushothaman Srikanth
<p><strong>In a nutshell:</strong> <code>targetPort</code> and <code>containerPort</code> basically refer to the same port (so if both are used they are expected to have the same value) but they are used in two different contexts and have entirely different purposes.</p> <p>They cannot be used interchangeably as both are part of the specification of two distinct kubernetes resources/objects: <code>Service</code> and <code>Pod</code> respectively. While the purpose of <code>containerPort</code> can be treated as purely informational, <code>targetPort</code> is required by the <code>Service</code> which exposes a set of <code>Pods</code>.</p> <p>It's important to understand that by declaring <code>containerPort</code> with the specific value in your <code>Pod</code>/<code>Deployment</code> specification you cannot make your <code>Pod</code> to expose this specific port e.g. if you declare in <code>containerPort</code> field that your nginx <code>Pod</code> exposes port <code>8080</code> instead of default <code>80</code>, you still need to configure your nginx server in your container to listen on this port.</p> <p>Declaring <code>containerPort</code> in <code>Pod</code> specification is optional. Even without it your <code>Service</code> will know where to direct the request based on the info it has declared in its <code>targetPort</code>.</p> <p>It's good to remember that it's not required to declare <code>targetPort</code> in the <code>Service</code> definition. If you omit it, it defaults to the value you declared for <code>port</code> (which is the port of the <code>Service</code> itself).</p>
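<p>A small sketch tying the three values together, reusing the postgres example from the question (the Service name is a placeholder):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    name: postgres-pod
  ports:
  - port: 5432        # port the Service itself listens on
    targetPort: 5432  # port on the Pod the traffic is forwarded to;
                      # matches the (informational) containerPort above
</code></pre>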
mario
<p>I know a scenario of kubernetes headless service <strong>with</strong> selector. But what’s the usage scenario of kubernetes headless service <strong>without</strong> selector?</p>
Cain
<p>Services without selectors are used, for example, when you want to have an external database cluster in production while in your test environment you use your own databases, when you want to point your Service to a Service in a different <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces" rel="nofollow noreferrer">Namespace</a> or on another cluster, or when you are migrating a workload to Kubernetes. Services without selectors are often used to alias external services into the cluster DNS.</p> <p>Here is an example of a Service without a selector:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: example-service spec: ports: - protocol: TCP port: 80 targetPort: 9376 </code></pre> <p><strong>Because this Service has no selector, the corresponding Endpoints object is <em>not</em> created automatically. You can map the Service to the network address and port where it's running by adding an Endpoints object manually:</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Endpoints metadata: name: example-service subsets: - addresses: - ip: 192.0.2.42 ports: - port: 9376 </code></pre> <p>If you have more than one IP address for redundancy, you can repeat them in the addresses array. Once the endpoints are populated, the load balancer will start redirecting traffic from your Kubernetes service to the IP addresses.</p> <blockquote> <p><strong>Note:</strong> The endpoint IPs <em>must not</em> be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a> doesn’t support virtual IPs as a destination.</p> </blockquote> <p>You can access a Service without a selector the same as if it had a selector.</p> <p>Take a look: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">services-without-selector</a>, <a href="https://stackoverflow.com/questions/61822431/what-s-the-usage-scenario-of-kubernetes-headless-service-without-selector#comment109421978_61822628">example-service-without-selector</a>.</p>
Malgorzata
<p>When we run <code>kubectl apply -f</code>, we create a new pod in Kubernetes. But it takes about 5 seconds to reach the <code>Running</code> status, even though the image has already been pulled on the node. Before that, the pod is in the <code>ContainerCreating</code> status. I ran <code>kubectl describe</code> to see the events and found that the scheduling is very fast, but the gap between scheduling and image pulling is about 3 seconds, and the container starting time is about 2 seconds. I wonder if I can reduce the container creation time. Thank you!</p>
Rui
<p>The target latency from Creation to Running is ~5 sec (if the image is pre-pulled), so your Pods' creation times are meeting both the scheduling time goal and the API latency goal. There was a <a href="https://github.com/kubernetes/kubernetes/issues/3954" rel="nofollow noreferrer">discussion</a> regarding that topic which resulted in the current SLA, and further <a href="https://github.com/kubernetes/enhancements/issues/1446" rel="nofollow noreferrer">Enhancement Proposals</a> (see the linked example) have been rejected.</p> <p>However, you may want to review <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/" rel="nofollow noreferrer">Scheduler Performance Tuning</a>, but bear in mind that it is mainly relevant for large Kubernetes clusters.</p>
Wytrzymały Wiktor
<p>I am trying to set up access to Pods on multiple Nodes with a single Service yaml. The Pods all have the same label (say, <code>label:app</code>), but are distributed across several Nodes, instead of on a single Node.</p> <p>As far as I know, I can set up a Service to forward access to a Pod through a NodePort, like:</p> <pre><code>spec: type: NodePort selector: label: app ports: targetPort: 5000 nodePort: 30000 </code></pre> <p>where accessing port 30000 on a node forwards to port 5000 on the pod.</p> <p>If I have pods on multiple nodes, is there a way a client can access a single endpoint, e.g. the Service itself, to get any pod in round-robin? Or does a client need to access a set of pods on a specific node, using that node's IP, as in <code>xx.xx.xx.xx:30000</code>?</p>
asuprem
<p>Although <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="noreferrer">LoadBalancer</a> is an undeniably recommended solution (especially in cloud environment), it's worth mentioning that <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a> also has <strong>load balancing capabilities</strong>.</p> <p>The fact that you're accessing your <code>NodePort</code> Service on a particular node doesn't mean that you are able to access this way only <code>Pods</code> that have been scheduled on that particular node.</p> <p>As you can read in <code>NodePort</code> Service <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">specification</a>:</p> <blockquote> <p>Each node proxies that port (the same port number on every Node) into your <code>Service</code>.</p> </blockquote> <p>So by accessing port <code>30080</code> on one particular node your request is not going directly to some random <code>Pod</code>, scheduled on that node. It is proxied to the <code>Service</code> object which is an abstraction that spans across all nodes. And this is probably the key point here as your <code>NodePort</code> Service isn't tied in any way to the node, IP of which you use to access your pods.</p> <p>Therefore <code>NodePort</code> Service is able to route client requests to all pods across the cluster using simple <strong>round robin algorithm</strong>.</p> <p>You can verify it easily using the following <code>Deployment</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: initContainers: - name: init-myservice image: nginx:1.14.2 command: ['sh', '-c', &quot;echo $MY_NODE_NAME &gt; /usr/share/nginx/html/index.html&quot;] env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume volumes: - name: cache-volume emptyDir: {} </code></pre> <p>This will allow you to test to which node your http request is going to. You may additionally need to scale a bit this <code>Deployment</code> to make sure that all nodes are used:</p> <pre><code>kubectl scale deployment nginx-deployment --replicas=9 </code></pre> <p>Then verify that your pods are scheduled on different nodes:</p> <pre><code>kubectl get pods -o wide </code></pre> <p>List all your nodes:</p> <pre><code>kubectl get nodes -o wide </code></pre> <p>and pick the IP address of a node that you want to use to access your pods.</p> <p>Now you can expose the <code>Deployment</code> by running:</p> <pre><code>kubectl expose deployment nginx-deployment --type NodePort --port 80 --target-port 80 </code></pre> <p>or if you want to specify the port number by yourself e.g. as <code>30080</code>, apply the following <code>NodePort</code> Service definition as <code>kubectl expose</code> doesn't allow you to specify the exact <code>nodePort</code> value:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-deployment spec: type: NodePort selector: app: nginx ports: - port: 80 targetPort: 80 nodePort: 30080 </code></pre> <p>Then try to access your pods exposed via <code>NodePort</code> Service using IP of the previously chosen node. 
You may need to try both normal and private/incognito modes or even a different browser (a simple refresh may not work), but eventually you will see that different requests land on pods scheduled on different nodes.</p> <p>Keep in mind that if you decide to use <code>NodePort</code> you won't be able to use <strong>well-known ports</strong>. It might actually be feasible, as you may change the default port range (<code>30000-32767</code>) to something like <code>1-1024</code> in the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">kube-apiserver</a> configuration by using the <code>--service-node-port-range</code> option, but it's not recommended as it might lead to some unexpected issues.</p>
mario
<p>I have a CoreDNS running in our cluster that uses the Kube DNS service. I want to disable the AutoScaler and the Kube-DNS deployment or scale it to 0.</p> <p>As soon as I do this, however, it is always automatically scaled up to 2. What can I do?</p>
alexohneander
<p>The scenario you are going through is described by the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns" rel="nofollow noreferrer">official documentation</a>.</p> <ul> <li><p>Make sure that you created your custom CoreDNS as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns#creating_a_custom_deployment" rel="nofollow noreferrer">here</a>.</p> </li> <li><p>Disable the kube-dns managed by GKE by scaling the kube-dns Deployment and autoscaler to zero using the following commands:</p> </li> </ul> <hr /> <pre><code>kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system kubectl scale deployment --replicas=0 kube-dns --namespace=kube-system </code></pre> <hr /> <ul> <li>If the above commands still do not work, then try the following ones:</li> </ul> <hr /> <pre><code>kubectl scale --replicas=0 deployment/kube-dns-autoscaler --namespace=kube-system kubectl scale --replicas=0 deployment/kube-dns --namespace=kube-system </code></pre> <hr /> <p>Remember to specify the <code>namespace</code>.</p>
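<p>Afterwards you can verify that both deployments were actually scaled down (both should report <code>0/0</code> ready replicas):</p> <pre><code>kubectl get deployment kube-dns kube-dns-autoscaler -n kube-system
</code></pre>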
Wytrzymały Wiktor
<p>In a GKE cluster with version 1.16.6-gke.12 from the rapid channel the <code>kubedns</code> container of the <code>kube-dns-...</code> pods of the <code>kube-dns</code> service fail permanently due to</p> <pre><code>kubedns 15 Mar 2020, 21:43:54 F0315 20:43:54.029575 1 server.go:61] Failed to create a kubernetes client: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029539 1 dns.go:48] version: 1.15.8 kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029524 1 flags.go:52] FLAG: --vmodule="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029517 1 flags.go:52] FLAG: --version="false" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029512 1 flags.go:52] FLAG: --v="2" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029506 1 flags.go:52] FLAG: --stderrthreshold="2" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029500 1 flags.go:52] FLAG: --profiling="false" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029495 1 flags.go:52] FLAG: --nameservers="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029490 1 flags.go:52] FLAG: --logtostderr="true" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029485 1 flags.go:52] FLAG: --log-flush-frequency="5s" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029479 1 flags.go:52] FLAG: --log-dir="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029466 1 flags.go:52] FLAG: --log-backtrace-at=":0" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029460 1 flags.go:52] FLAG: --kubecfg-file="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029439 1 flags.go:52] FLAG: --kube-master-url="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029432 1 flags.go:52] FLAG: --initial-sync-timeout="1m0s" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029426 1 flags.go:52] FLAG: --healthz-port="8081" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029419 1 flags.go:52] FLAG: --federations="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029412 1 flags.go:52] FLAG: --domain="cluster.local." kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029404 1 flags.go:52] FLAG: --dns-port="10053" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029398 1 flags.go:52] FLAG: --dns-bind-address="0.0.0.0" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029390 1 flags.go:52] FLAG: --config-period="10s" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029382 1 flags.go:52] FLAG: --config-map-namespace="kube-system" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029373 1 flags.go:52] FLAG: --config-map="" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029363 1 flags.go:52] FLAG: --config-dir="/kube-dns-config" kubedns 15 Mar 2020, 21:43:54 I0315 20:43:54.029288 1 flags.go:52] FLAG: --alsologtostderr="false" </code></pre> <p>Is there a workaround this. Where should I report this?</p> <p>Version information:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-gke.12", GitCommit:"74e2d6182ba7947983ec6d59776c38c53b086a37", GitTreeState:"clean", BuildDate:"2020-02-27T18:38:03Z", GoVersion:"go1.13.4b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
Kalle Richter
<p>New GKE clusters now use Kubernetes version <strong>1.14</strong> by default. GKE now offers Kubernetes <strong>1.17</strong> in preview, which requires requesting access from Google Cloud to use. Similarly, once there is a GKE release based on Kubernetes <strong>1.18</strong> - which solves the service account problem ("Fixes service account token admission error in clusters that do not run the service account token controller", see the release notes and the linked <a href="https://github.com/kubernetes/kubernetes/pull/87029" rel="nofollow noreferrer">admission</a> fix) - that GKE version will at the same time solve your problem.</p> <p>See: <a href="https://kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">kubernetes-1.18</a>, <a href="https://www.stackrox.com/post/2020/03/what-is-new-in-kubernetes-1.18/" rel="nofollow noreferrer">new-kubernetes-release</a>.</p>
Malgorzata
<p>I'm attempting to run several Docker apps in a GKE instance, with a load balancer setup exposing them. Each app comprises a simple node.js app with nginx to serve the site; a simple nginx config exposes the apps with a location block responding to <code>/</code>. This works well locally when developing since I can run each pod on a separate port, and access them simply at 127.0.0.1:8080 or similar.</p> <p>The problem I'm encountering is that when using the GCP load balancer, whilst I can easily route traffic to the Kubernetes services such that <a href="https://example.com/" rel="nofollow noreferrer">https://example.com/</a> maps to my <code>foo</code> service/pod and <a href="https://example.com/bar" rel="nofollow noreferrer">https://example.com/bar</a> goes to my <code>bar</code> service, the <code>bar</code> pod responds with a 404 since the path, <code>/bar</code> doesn't match the path specified in the location block.</p> <p>The number of these pods will scale a lot so I do not wish to manually know ahead of time what path each pod will be under, nor do I wish to embody this in my git repo.</p> <p>Is there a way I can dynamically define the path the location block matches, for example via an environment variable, such that I could define it as part of the Helm charts I use to deploy these services? Alternatively is it possible to match all paths? Is that a viable solution, or just asking for problems?</p> <p>Thanks for your help.</p>
aodj
<p>Simply use <strong>ingress</strong>. It will allow you to map different paths to different backend <code>Services</code>. It is very well explained both in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GCP docs</a> as well as in the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">official kubernetes documentation</a>.</p> <p>Typical ingress object definition may look as follows:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: my-ingress spec: backend: serviceName: my-products servicePort: 60001 rules: - http: paths: - path: / backend: serviceName: my-products servicePort: 60000 - path: /discounted backend: serviceName: my-discounted-products servicePort: 80 - path: /special backend: serviceName: special-offers servicePort: 80 - path: /news backend: serviceName: news servicePort: 80 </code></pre> <p>When you apply your ingress definition on <strong>GKE</strong>, <strong>load balancer</strong> is created automatically. Note that all <code>Services</code> may use same, standard http port and you don't have to use any custom ports.</p> <p>You may want to specify <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#default_backend" rel="nofollow noreferrer">a default backend</a>, present in the above example (<code>backend</code> section right under <code>spec</code>), but it's optional. It will ensure that:</p> <blockquote> <p>Any requests that don't match the paths in the rules field are sent to the Service and port specified in the backend field. For example, in the following Ingress, any requests that don't match / or /discounted are sent to a Service named my-products on port 60001.</p> </blockquote> <p>The only problem that you may encounter when using default <strong>ingress controller</strong> available on <strong>GKE</strong> is that for the time being <strong>it doesn't support rewrites</strong>.</p> <p>If your nginx pods expose app content only on <code>&quot;/&quot;</code> path, no support for rewrites shouldn't be a limitation at all and as far as I understand, this applies in your case:</p> <blockquote> <p>Each app comprises a simple node.js app with nginx to serve the site; a simple nginx config exposes the apps with a location block responding to /</p> </blockquote> <p>However if you decide at some point that you need mentioned rewrites because e.g. one of your apps isn't exposed under <code>/</code> but rather <code>/bar</code> within the <code>Pod</code> you may decide to deploy <a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="nofollow noreferrer">nginx ingress controller</a> which can be also done pretty easily on <strong>GKE</strong>.</p> <p>So you will only need it in the following scenario: <em>user accesses the ingress IP followed by <code>/foo</code></em> -&gt; <em>request is not only redirected to the specific backend <code>Service</code> that exposes your <strong>nginx</strong> <code>Pod</code>, but also the original path (<code>/foo</code>) needs to be rewritten to the new path (<code>/bar</code>) under which the application is exposed within the <code>Pod</code></em></p> <h1>UPDATE:</h1> <blockquote> <p>Thank you for your reply. The above ingress configuration is very similar to what I've already configured forwarding /foo and /bar to different pods. 
The issue is that the path gets forwarded, and (after doing some more research on the issue) I believe I need to rewrite the URL that's sent to the pod, since the location / { ... } block in my nginx config won't match against the received path of /foo or /bar. – aodj Aug 14 at 9:17</p> </blockquote> <p>Well, you're right. The original access path e.g. <code>/foo</code> indeed gets forwarded to the target <code>Pod</code>. So choosing the <code>/foo</code> path, apart from leading you to the respective <code>backend</code> defined in the <strong>ingress resource</strong>, implies that the target <strong>nginx server</strong> running in a <code>Pod</code> must serve its content also under the <code>/foo</code> path.</p> <p>I verified it with <strong>GKE ingress</strong> and can confirm by checking <code>Pod</code> logs that an http request sent to the <strong>nginx</strong> <code>Pod</code> through the <code>/foo</code> path indeed comes to the <code>Pod</code> as a request for <code>/usr/share/nginx/html/foo</code>, while it serves its content under <code>/</code>, not /foo, from <code>/usr/share/nginx/html</code>. So requesting something that doesn't exist on the target server inevitably leads to a <code>404 Error</code>.</p> <p>As I mentioned before, the default ingress controller available on <strong>GKE</strong> doesn't support rewrites, so if you want to use it for some reason, reconfiguring your target <strong>nginx servers</strong> seems the only solution to make it work.</p> <p>Fortunately, we have another option, which is the <strong>nginx ingress controller</strong>. It supports rewrites, so it can easily solve our problem. We can deploy it on our <strong>GKE cluster</strong> by running the two following commands:</p> <pre><code>kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole cluster-admin \ --user $(gcloud config get-value account) kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml </code></pre> <p>Yes, it's really that simple! You can take a closer look at the installation process in the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="nofollow noreferrer">official docs</a>.</p> <p>Then we can apply the following <code>ingress</code> resource definition:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - http: paths: - backend: serviceName: nginx-deployment-1 servicePort: 80 path: /foo(/|$)(.*) - backend: serviceName: nginx-deployment-2 servicePort: 80 path: /bar(/|$)(.*) </code></pre> <p>Note that we used the <code>kubernetes.io/ingress.class: &quot;nginx&quot;</code> annotation to select our newly deployed <strong>nginx-ingress controller</strong> to handle this <strong>ingress resource</strong> rather than the default <strong>GKE-ingress controller</strong>.</p> <p>The rewrites used here will make sure that the original access path gets rewritten before reaching the target <strong>nginx <code>Pod</code></strong>. 
So it's perfectly fine that both sets of <code>Pods</code> exposed by <code>nginx-deployment-1</code> and <code>nginx-deployment-2</code> <code>Services</code> serve their contents under <code>&quot;/&quot;</code>.</p> <p>If you want to quickly check how it works on your own, you can use the following <code>Deployments</code>:</p> <p><strong>nginx-deployment-1.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment-1 labels: app: nginx-1 spec: replicas: 3 selector: matchLabels: app: nginx-1 template: metadata: labels: app: nginx-1 spec: initContainers: - name: init-myservice image: nginx:1.14.2 command: ['sh', '-c', &quot;echo DEPLOYMENT-1 &gt; /usr/share/nginx/html/index.html&quot;] volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume volumes: - name: cache-volume emptyDir: {} </code></pre> <p><strong>nginx-deployment-2.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment-2 labels: app: nginx-2 spec: replicas: 3 selector: matchLabels: app: nginx-2 template: metadata: labels: app: nginx-2 spec: initContainers: - name: init-myservice image: nginx:1.14.2 command: ['sh', '-c', &quot;echo DEPLOYMENT-2 &gt; /usr/share/nginx/html/index.html&quot;] volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: /usr/share/nginx/html name: cache-volume volumes: - name: cache-volume emptyDir: {} </code></pre> <p>And expose them via <code>Services</code> by running:</p> <pre><code>kubectl expose deployment nginx-deployment-1 --type NodePort --target-port 80 --port 80 kubectl expose deployment nginx-deployment-2 --type NodePort --target-port 80 --port 80 </code></pre> <p>You may even omit <code>--type NodePort</code> as <strong>nginx-ingress</strong> controller accepts also <code>ClusterIP</code> <code>Services</code>.</p>
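<p>A quick way to verify the rewrites (a sketch, assuming the controller was installed from the manifest above, which creates a <code>LoadBalancer</code> <code>Service</code> named <code>ingress-nginx-controller</code> in the <code>ingress-nginx</code> namespace; the name may differ for other controller versions):</p> <pre><code>EXTERNAL_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# each path should reach its own backend even though both serve from &quot;/&quot;
curl http://$EXTERNAL_IP/foo/   # expected output: DEPLOYMENT-1
curl http://$EXTERNAL_IP/bar/   # expected output: DEPLOYMENT-2
</code></pre>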
mario
<p>I am trying to using the kubernetes pod operator in airflow, and there is a directory that I wish to share with kubernetes pod on my airflow worker, is there is a way to mount airflow worker's directory to kubernetes pod?</p> <p>I tried with the code below, and the volumn seems not mounted successfully.</p> <pre><code>import datetime import unittest from unittest import TestCase from airflow.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow.kubernetes.volume import Volume from airflow.kubernetes.volume_mount import VolumeMount class TestMailAlarm(TestCase): def setUp(self): self.namespace = "test-namespace" self.image = "ubuntu:16.04" self.name = "default" self.cluster_context = "default" self.dag_id = "test_dag" self.task_id = "root_test_dag" self.execution_date = datetime.datetime.now() self.context = {"dag_id": self.dag_id, "task_id": self.task_id, "execution_date": self.execution_date} self.cmds = ["sleep"] self.arguments = ["100"] self.volume_mount = VolumeMount('test', mount_path='/tmp', sub_path=None, read_only=False) volume_config = { 'persistentVolumeClaim': { 'claimName': 'test' } } self.volume = Volume(name='test', configs=volume_config) self.operator = KubernetesPodOperator( namespace=self.namespace, image=self.image, name=self.name, cmds=self.cmds, arguments=self.arguments, startup_timeout_seconds=600, is_delete_operator_pod=True, # the operator could run successfully but the directory /tmp is not mounted to kubernetes operator volume=[self.volume], volume_mount=[self.volume_mount], **self.context) def test_execute(self): self.operator.execute(self.context) </code></pre>
buxizhizhoum
<p>The example in the docs seems pretty similar to your code, only the parameter names are plural: <strong><code>volume_mounts</code></strong> and <strong><code>volumes</code></strong>. For your code it would look like this:</p> <pre><code>self.operator = KubernetesPodOperator( namespace=self.namespace, image=self.image, name=self.name, cmds=self.cmds, arguments=self.arguments, startup_timeout_seconds=600, is_delete_operator_pod=True, # note the plural parameter names below volumes=[self.volume], volume_mounts=[self.volume_mount], **self.context) </code></pre>
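<p>As a side note, newer versions of the <code>cncf-kubernetes</code> provider expect objects from the kubernetes client models rather than the old <code>airflow.kubernetes</code> wrappers. A rough sketch of the equivalent definitions (the claim name <code>test</code> and the <code>/tmp</code> mount path are taken from the question; double-check against the provider version you actually run):</p> <pre><code>from kubernetes.client import models as k8s

# hypothetical equivalents of the Volume/VolumeMount objects from the question
volume = k8s.V1Volume(
    name=&quot;test&quot;,
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name=&quot;test&quot;),
)
volume_mount = k8s.V1VolumeMount(name=&quot;test&quot;, mount_path=&quot;/tmp&quot;, read_only=False)

operator = KubernetesPodOperator(
    # ...the same arguments as above...
    volumes=[volume],
    volume_mounts=[volume_mount],
)
</code></pre>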
ECris
<p>I am trying to deploy ingress-controller in GKE - K8S cluster where RBAC is enabled, but I am getting below error.</p> <p><a href="https://i.stack.imgur.com/2pw4n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2pw4n.png" alt="W"></a></p> <p>This is the command I ran ...</p> <p>helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true </p> <p>it gave me below error<br> Error: validation failed: [serviceaccounts "nginx-ingress" not found, serviceaccounts "nginx-ingress-backend" not found, clusterroles.rbac.authorization.k8s.io "nginx-ingress" not found, clusterrolebindings.rbac.authorization.k8s.io "nginx-ingress" not found, roles.rbac.authorization.k8s.io "nginx-ingress" not found, rolebindings.rbac.authorization.k8s.io "nginx-ingress" not found, services "nginx-ingress-controller" not found, services "nginx-ingress-default-backend" not found, deployments.apps "nginx-ingress-controller" not found, deployments.apps "nginx-ingress-default-backend" not found]</p> <p>I am following this link : <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a></p> <p>Could you please share your thoughts to debug this issue and also to fix. Thanks in advance.</p>
args
<p>There is a simple workaround: <strong>downgrade helm and tiller versions</strong>. </p> <p>Here are the steps how to do it: <a href="https://medium.com/@jyotirbhandari/how-to-downgrade-helm-version-on-server-and-client-4838ca100dbf" rel="nofollow noreferrer">downgrade-helm-tiller</a>.</p> <p>Remember that helm version on the server and client should be same to communicate.</p> <p>Similar problems: <a href="https://github.com/helm/helm/issues/7797" rel="nofollow noreferrer">helm-validation-failed</a>, <a href="https://stackoverflow.com/questions/60836127/error-validation-failed-serviceaccounts-nginx-ingress-not-found-serviceacc/60846646#60846646">validation-helm-install</a>.</p> <p>Useful documentation: <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">gke-nginx-ingress</a>.</p>
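<p>For reference, the downgrade itself usually boils down to installing the matching Helm 2 client binary and re-initializing Tiller, roughly like this (the version number is only an example; pick whichever Helm 2 release you need):</p> <pre><code># install a specific Helm 2 client version
curl -L https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/helm

# force Tiller in the cluster to the same version as the client
helm init --upgrade --force-upgrade

# client and server versions should now match
helm version
</code></pre>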
Malgorzata
<p>I have a docker-compose file for a .NET Core app. I'm new to k8s, and my app has quite a long environment variable, &quot;ConnectionStrings__DefaultConnection&quot;, as shown below:</p> <pre><code> productstudio: volumes: - ${USERPROFILE}/.aws:/root/.aws environment: - ASPNETCORE_ENVIRONMENT=Development - ConnectionStrings__DefaultConnection=Username=someuser;Password=somepassword;Server=postgres;Port=5432;Database=somedb;Search Path=some - EventBus__Enable=true - EventBus__HostUri=rabbitmq://eventbus/ </code></pre> <p>and I wrote this ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: be-productstudio-configmap labels: app: product-builder-be tier: backend data: ASPNETCORE_ENVIRONMENT: Development EventBus__Enable: true EventBus__HostUri: rabbitmq://eventbus/ ConnectionStrings__DefaultConnection: |- Username=someuser; Password=somepassword; Server=postgres; Port=5432; Database=somedb; Search Path=some </code></pre> <p>But I got an error:</p> <pre><code>Error from server (BadRequest): error when creating &quot;manifect-be.yml&quot;: ConfigMap in version &quot;v1&quot; cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadString: expects &quot; or n, but found t, error found in #10 byte of ...|_Enable&quot;:true,&quot;Event|..., bigger context ...|nSearch Path=productstudio\&quot;&quot;,&quot;EventBus__Enable&quot;:true,&quot;EventBus__HostUri&quot;:&quot;rabbitmq://eventbus/&quot;},&quot;k|... </code></pre> <p>Can anyone help me? Thanks.</p>
Quân nguyễn
<p>I see two issues here:</p> <ol> <li>The error you see means that the value <code>true</code> for <code>EventBus__Enable</code> is not quoted and it gets treated as a keyword that means a boolean true. Environment variables are strings and must be quoted in your yaml definition. You need to make it look more like this:</li> </ol> <hr /> <pre><code> EventBus__Enable: &quot;true&quot; </code></pre> <ol start="2"> <li>You should not use spaces in your key definitions of your <code>ConfigMap</code>:</li> </ol> <hr /> <pre><code>Search Path=productstudio </code></pre> <p>as:</p> <blockquote> <p>Each key under the <code>data</code> or the <code>binaryData</code> field must consist of alphanumeric characters, <code>-</code>, <code>_</code> or <code>.</code>.</p> </blockquote> <p>You can use the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">official docs</a> for a reference of a correctly configured ConfigMaps, for example:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: game-demo data: # property-like keys; each key maps to a simple value player_initial_lives: &quot;3&quot; ui_properties_file_name: &quot;user-interface.properties&quot; # file-like keys game.properties: | enemy.types=aliens,monsters player.maximum-lives=5 user-interface.properties: | color.good=purple color.bad=yellow allow.textmode=true </code></pre>
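<p>Putting both points together, a corrected version of the ConfigMap from the question could look roughly like this (keeping the connection string as a single quoted scalar avoids both the unquoted boolean and any issues with spaces inside the value):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: be-productstudio-configmap
  labels:
    app: product-builder-be
    tier: backend
data:
  ASPNETCORE_ENVIRONMENT: &quot;Development&quot;
  EventBus__Enable: &quot;true&quot;
  EventBus__HostUri: &quot;rabbitmq://eventbus/&quot;
  ConnectionStrings__DefaultConnection: &quot;Username=someuser;Password=somepassword;Server=postgres;Port=5432;Database=somedb;Search Path=some&quot;
</code></pre>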
Wytrzymały Wiktor
<p>I have a k8s cluster that runs just fine. It has several standalone mongodb statefulsets connected via NFS. The problem is, whenever there is a power outage, the mongodb databases get corrupted:</p> <pre><code>{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-10-15T13:10:06.446+00:00&quot;},&quot;s&quot;:&quot;W&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22271, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Detected unclean shutdown - Lock file is not empty&quot;,&quot;attr&quot;:{&quot;lockFile&quot;:&quot;/data/db/mongod.lock&quot;}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-10-15T13:10:07.182+00:00&quot;},&quot;s&quot;:&quot;E&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22435, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;WiredTiger error&quot;,&quot;attr&quot;:{&quot;error&quot;:0,&quot;message&quot;:&quot;[1634303407:182673][1:0x7f9515eb7a80], file:WiredTiger.wt, connection: __wt_block_read_off, 283: WiredTiger.wt: read checksum error for 4096B block at offset 12288: block header checksum of 0xc663f362 doesn't match expected checksum of 0xb8e27418&quot;}} </code></pre> <p>The pods' status remains at CrashLoopBackOff, so I cannot do <code>kubectl exec -it usersdb-0 -- mongod --repair</code> because it is not running.</p> <p>I have tried deleting wiredTiger.lock and mongod.lock but nothing seems to work. How can I repair these databases?</p>
Denn
<p>Well, after several attempts I think I have finally made a breakthrough, so I wanted to leave this here for someone else.</p> <p>Since mongod is not running, add the following command override</p> <pre><code>command: [&quot;sleep&quot;] args: [&quot;infinity&quot;] </code></pre> <p>in the resource file (assuming it is a StatefulSet). This keeps the container alive without starting mongod, so you can exec into it. Then repair the database using the command</p> <pre><code>kubectl exec -it &lt;NAME-OF-MONGODB-POD&gt; -- mongod --dbpath /data/db --repair </code></pre> <p>This will repair the standalone mongodb pod. Now remove the command override, apply the resource yaml file, then kill the pod to recreate it afresh.</p> <p>Now the mongodb pod should be working fine.</p>
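<p>For clarity, here is roughly where that temporary override sits in the StatefulSet spec (the container name and image are placeholders; keep whatever your StatefulSet already uses):</p> <pre><code>spec:
  template:
    spec:
      containers:
        - name: mongodb            # placeholder
          image: mongo:4.4         # placeholder - keep your existing image
          # temporary override: keeps the container running without starting mongod
          command: [&quot;sleep&quot;]
          args: [&quot;infinity&quot;]
</code></pre>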
Denn
<p>Consider the RBAC role below. Is it possible to write a more sophisticated regex for <code>resources:</code> that prevents access to service accounts and namespaces but allows everything else? </p> <pre><code>- apiGroups: "*" resources: "*" verbs: "*" </code></pre>
kgunjikar
<p>A simple workaround is to restrict the role so that it does not cover the namespaced resources you want to protect, and instead grant access to resources explicitly. To see which resources are cluster-scoped, execute the command:</p> <pre><code>$ kubectl api-resources --namespaced=false </code></pre> <p>This returns the non-namespaced (cluster-scoped) resources; run it with <code>--namespaced=true</code> to list the namespaced ones instead.</p> <p>Also, note what you are granting while using:</p> <ul> <li><p>apiGroups: "*" - this means that you want to grant access to all groups within the Kubernetes API (both the core API group and named groups)</p></li> <li><p>resources: "*" - this means that you want to grant access to all resources (pods, services, endpoints etc.)</p></li> <li>verbs: "*" - this means that you want to allow all operations on the specified objects (get, list, edit etc.).</li> </ul> <p>So in your case, as defined, you don't prevent access but rather grant it to every object.</p> <p>Take a look at: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#api-resources" rel="nofollow noreferrer">api-resources</a>.</p>
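<p>Since RBAC rules are purely additive (there is no &quot;everything except X&quot; pattern), the practical approach is to enumerate what you do want to allow. A small illustrative sketch (the resource list is just an example, not an exhaustive one):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: almost-everything
rules:
  - apiGroups: [&quot;&quot;]
    # note: serviceaccounts and namespaces are deliberately left out
    resources: [&quot;pods&quot;, &quot;services&quot;, &quot;endpoints&quot;, &quot;configmaps&quot;, &quot;secrets&quot;]
    verbs: [&quot;*&quot;]
  - apiGroups: [&quot;apps&quot;, &quot;batch&quot;]
    resources: [&quot;*&quot;]
    verbs: [&quot;*&quot;]
</code></pre>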
Malgorzata
<p>I'm trying to install and configure Velero for kubernetes backup. I have followed the <a href="https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup" rel="noreferrer">link</a> to configure it in my GKE cluster. The installation went fine, but velero is not working.</p> <p>I am using google cloud shell for running all my commands (I have installed and configured velero client in my google cloud shell)</p> <p>On further inspection on velero deployment and velero pods, I found out that it is not able to pull the image from the docker repository.</p> <pre><code>kubectl get pods -n velero NAME READY STATUS RESTARTS AGE velero-5489b955f6-kqb7z 0/1 Init:ErrImagePull 0 20s </code></pre> <p>Error from velero pod (kubectl describe pod) (output redacted for readability - only relevant info shown below)</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 38s default-scheduler Successfully assigned velero/velero-5489b955f6-kqb7z to gke-gke-cluster1-default-pool-a354fba3-8674 Warning Failed 22s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Failed to pull image &quot;velero/velero-plugin-for-gcp:v1.1.0&quot;: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Warning Failed 22s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Error: ErrImagePull Normal BackOff 21s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Back-off pulling image &quot;velero/velero-plugin-for-gcp:v1.1.0&quot; Warning Failed 21s kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Error: ImagePullBackOff Normal Pulling 8s (x2 over 37s) kubelet, gke-gke-cluster1-default-pool-a354fba3-8674 Pulling image &quot;velero/velero-plugin-for-gcp:v1.1.0&quot; </code></pre> <p>Command used to install velero: (some of the values are given as variables)</p> <pre><code>velero install \ --provider gcp \ --plugins velero/velero-plugin-for-gcp:v1.1.0 \ --bucket $storagebucket \ --secret-file ~/velero-backup-storage-sa-key.json </code></pre> <p>Velero Version</p> <pre><code>velero version Client: Version: v1.4.2 Git commit: 56a08a4d695d893f0863f697c2f926e27d70c0c5 &lt;error getting server version: timed out waiting for server status request to be processed&gt; </code></pre> <p>GKE version</p> <pre><code>v1.15.12-gke.2 </code></pre>
srsn
<blockquote> <p><em>Isn't this a Private Cluster ? – mario 31 mins ago</em></p> <p><em>@mario this is a private cluster but I can deploy other services without any issues (for eg: I have deployed nginx successfully) – Sreesan 15 mins ago</em></p> </blockquote> <p>Well, this is a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#docker_hub" rel="noreferrer">know limitation</a> of <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="noreferrer"><strong>GKE Private Clusters</strong></a>. As you can read in the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#docker_hub" rel="noreferrer">documentation</a>:</p> <blockquote> <h3>Can't pull image from public Docker Hub</h3> <p><strong>Symptoms</strong></p> <p>A Pod running in your cluster displays a warning in <code>kubectl describe</code> such as <code>Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</code></p> <p><strong>Potential causes</strong></p> <p>Nodes in a private cluster do not have outbound access to the public internet. They have limited access to Google APIs and services, including Container Registry.</p> <p><strong>Resolution</strong></p> <p>You cannot fetch images directly from Docker Hub. Instead, use images hosted on Container Registry. Note that while Container Registry's <a href="https://cloud.google.com/container-registry/docs/using-dockerhub-mirroring" rel="noreferrer">Docker Hub mirror</a> is accessible from a private cluster, it should not be exclusively relied upon. The mirror is only a cache, so images are periodically removed, and a private cluster is not able to fall back to Docker Hub.</p> </blockquote> <p>You can also compare it with <a href="https://stackoverflow.com/questions/62382321/gke-image-pull-errors-for-specific-public-docker-hub-images/62475193#62475193">this</a> answer.</p> <p>It can be easily verified on your own by making a simple experiment. Try to run two different nginx deployments. First based on image <code>nginx</code> (which equals to <code>nginx:latest</code>) and the second one based on <code>nginx:1.14.2</code>.</p> <p>While the first scenario is perfectly feasible because the <code>nginx:latest</code> image can be pulled from <strong>Container Registry's Docker Hub mirror</strong> which is accessible from a private cluster, any attempt of pulling <code>nginx:1.14.2</code> will fail which you'll see in <code>Pod</code> events. It happens because the <strong>kubelet</strong> is not able to find this version of the image in <strong>GCR</strong> and it tries to pull it from public docker registry (<code>https://registry-1.docker.io/v2/</code>), which in <strong>Private Clusters</strong> is not possible. <em>&quot;The mirror is only a cache, so images are periodically removed, and a private cluster is not able to fall back to Docker Hub.&quot;</em> - as you can read in docs.</p> <p>If you still have doubts, just <code>ssh</code> into your node and try to run following commands:</p> <pre><code>curl https://cloud.google.com/container-registry/ curl https://registry-1.docker.io/v2/ </code></pre> <p>While the first one works perfectly, the second one will eventually fail:</p> <pre><code>curl: (7) Failed to connect to registry-1.docker.io port 443: Connection timed out </code></pre> <p>Reason ? 
- <em>&quot;Nodes in a private cluster do not have outbound access to the public internet.&quot;</em></p> <h3>Solution ?</h3> <p>You can search what is currently available in <strong>GCR</strong> <a href="https://console.cloud.google.com/gcr/images/google-containers" rel="noreferrer">here</a>.</p> <p>In many cases you should be able to get the required image if you don't specify its exact version (by default the <code>latest</code> tag is used). While that can help with <code>nginx</code>, unfortunately no version of <a href="https://hub.docker.com/r/velero/velero-plugin-for-gcp" rel="noreferrer">velero/velero-plugin-for-gcp</a> is currently available in Google Container Registry's Docker Hub mirror.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private-nodes-outbound" rel="noreferrer">Granting private nodes outbound internet access</a> by using <a href="https://cloud.google.com/nat/docs/overview#NATwithGKE" rel="noreferrer">Cloud NAT</a> seems to be the only reasonable solution that can be applied in your case.</p>
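<p>For reference, setting up <strong>Cloud NAT</strong> for the cluster's network usually comes down to two commands along these lines (router name, NAT config name, network and region are placeholders):</p> <pre><code>gcloud compute routers create nat-router \
    --network my-vpc \
    --region us-central1

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
</code></pre>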
mario
<p>I am working to configure Istio in my on prem Kubernetes cluster. As part of this I have to coordinate with my System Admins to setup DNS and load balancer resources.</p> <p>I have found with my work learing and setting up Istio, that I need to fully uninstall it and re-install it. <em>When I do that Istio will pick a new port for the Ingress Gateway.</em> This then necessitates me coordinating updates with the System Admins.</p> <p>It would be convenient if I could force Istio to just keep using the same port.</p> <p>I am using the Istio Operator to manage Istio. <strong>Is there a way to set an Ingress Gateway's NodePort with the Istio Operator?</strong></p>
Vaccano
<p>In your Istio operator yaml you can define/override ingressgateway settings (k8s section of an ingressgateway definition)</p> <p><a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec</a></p> <p>for example :</p> <pre><code>components: ingressGateways: - name: istio-ingressgateway enabled: true k8s: service: ports: - name: status-port port: 15021 - name: tls-istiod port: 15012 - name: tls port: 15443 nodePort: 31371 - name: http2 port: 80 nodePort: 31381 targetPort: 8280 - name: https port: 443 nodePort: 31391 targetPort: 8243 </code></pre>
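<p>For completeness, that snippet lives under the <code>spec</code> of the <code>IstioOperator</code> resource. A minimal sketch wrapping the same values (trimmed to the http2 port as an example):</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: http2
                port: 80
                targetPort: 8280
                nodePort: 31381
</code></pre>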
Peter Claes
<p>I want to run one cron at different times.</p> <p>Is it possible to do something like this in my YML file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cronjob spec: schedule: - &quot;*/10 00-08 * * *&quot; - &quot;*/5 09-18 * * *&quot; - &quot;*/10 19-23 * * *&quot; concurrencyPolicy: Forbid ... </code></pre> <p>or do I have to create separate YML files for every schedule time?</p>
Tomas Lukac
<p>The short answer is: no, you cannot create one <code>CronJob</code> YML with several crontab times schedules.</p> <p>The easy solution would be to use separate <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">CronJob</a> resource for each crontab line from your example. You can use the same image for each of your <code>CronJobs</code>.</p>
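<p>A minimal sketch of what that could look like: separate CronJobs sharing the same image, one per schedule (the image and names are placeholders; the third, 19-23 schedule follows the same pattern):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob-offpeak
spec:
  schedule: &quot;*/10 00-08 * * *&quot;
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: my-image:latest   # placeholder
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob-peak
spec:
  schedule: &quot;*/5 09-18 * * *&quot;
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: my-image:latest   # placeholder
</code></pre>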
Wytrzymały Wiktor
<p>There is a k8s single master node, I need to back it up and restore on a different server with different ip addresses. I googled this topic and found a solution - <a href="https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/" rel="nofollow noreferrer">https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/</a></p> <p>Everything looked easy; so, I followed the instruction and got a copy of the certificates and a snapshot of the etcd database. Then I used the second script to restore the node on a different server. It did not go well this time. It gave me a bunch of errors related to mismatching the certificates and server's local ip addresses.</p> <p>As far as I understood, when a kubernetes cluster is initializing, it creates a set of certificates assigned to the original server's ip addresses and I cannot just back it up and restore somewhere else. </p> <p>So, how to backup a k8s master node and restore it?</p>
Samuel
<p>Make sure that you added an extra flag to the kubeadm init command (<code>--ignore-preflight-errors=DirAvailable--var-lib-etcd</code>) to acknowledge that we want to use the pre-existing data.</p> <p>Do the following steps:</p> <ul> <li>replace the IP address in all config files in <code>/etc/kubernetes</code></li> <li>back up <code>/etc/kubernetes/pki</code></li> <li>identify certs in <code>/etc/kubernetes/pki</code> that have the old IP address as an alt name - <strong>1st</strong> step</li> <li>delete both the cert and key for each of them (for me it was just apiserver and etcd/peer)</li> <li>regenerate the certs using kubeadm alpha phase certs - <strong>2nd</strong> step</li> <li>identify configmaps in the kube-system namespace that reference the old IP - <strong>3rd</strong> step</li> <li>manually edit those configmaps</li> <li>restart kubelet and docker (to force all containers to be recreated)</li> </ul> <p><strong>1.</strong></p> <pre><code>/etc/kubernetes/pki# for f in $(find -name "*.crt"); do openssl x509 -in $f -text -noout &gt; $f.txt; done /etc/kubernetes/pki# grep -Rl 12\\.34\\.56\\.78 . ./apiserver.crt.txt ./etcd/peer.crt.txt /etc/kubernetes/pki# for f in $(find -name "*.crt"); do rm $f.txt; done </code></pre> <p><strong>2.</strong></p> <pre><code>/etc/kubernetes/pki# rm apiserver.crt apiserver.key /etc/kubernetes/pki# kubeadm alpha phase certs apiserver ... /etc/kubernetes/pki# rm etcd/peer.crt etcd/peer.key /etc/kubernetes/pki# kubeadm alpha phase certs etcd-peer ... </code></pre> <p><strong>3.</strong></p> <pre><code>$ kubectl -n kube-system get cm -o yaml | less ... $ kubectl -n kube-system edit cm ... </code></pre> <p>Take a look here: <a href="https://github.com/kubernetes/kubeadm/issues/338" rel="nofollow noreferrer">master-backup</a>.</p> <p><strong>UPDATE:</strong></p> <p>While replacing the master node and changing its IP you cannot contact the api-server to change the configmaps from step 3 above. Moreover, if you have a single-master k8s cluster, the connection between the worker nodes and the master will be interrupted until the new master is up.</p> <p>To ensure connection between master and worker nodes during master replacement you have to create an <a href="https://searchdisasterrecovery.techtarget.com/definition/high-availability-cluster-HA-cluster" rel="nofollow noreferrer">HA cluster</a>.</p> <p>The certificate is signed for {your-old-IP-here} and secure communication can't then happen to {your-new-ip-here}.</p> <p>You can add more IPs to the certificate beforehand though...</p> <p>The api-server certificate is also signed for the hostname kubernetes, so you can add that as an alias to the new IP address in <code>/etc/hosts</code> and then run <code>kubectl --server=https://kubernetes:6443 ...</code>.</p>
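<p>To tie this back to the restore itself, the flag mentioned at the top is typically used together with a previously taken etcd snapshot, roughly like this (the snapshot file name and data directory are placeholders):</p> <pre><code># restore the etcd snapshot into the directory kubeadm expects
# (the target data dir must not already contain data)
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd

# re-init the control plane, telling kubeadm to keep the pre-existing etcd data
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
</code></pre>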
Malgorzata
<p>Attempting to connect to a Jupyter Lab container (ultimately other applications as well) running on a cloud managed Kubernetes service using Kong as the ingress controller. Receiving <code>&quot;no Route matched with those values&quot;</code> on the http response to Kong's public IP and the ingress-controller logs indicate:</p> <pre><code>service kong/rjup2 does not have any active endpoints no configuration change, skipping sync to Kong </code></pre> <p>Deployment Config:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: rjup2 namespace: kong spec: selector: matchLabels: run: rjup2 replicas: 1 template: metadata: labels: run: rjup2 spec: restartPolicy: Always containers: - name: rjup2 image: jupyter/minimal-notebook imagePullPolicy: Always ports: - containerPort: 8888 protocol: TCP </code></pre> <p>Service Config:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rjup2 namespace: kong spec: selector: app: rjup2 type: ClusterIP ports: - name: http port: 80 targetPort: 8888 protocol: TCP </code></pre> <p>Ingress Resource Config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: rjup2 namespace: kong spec: tls: - hosts: - &lt;AKS API server address&gt; rules: - host: &lt;AKS API server address&gt; http: paths: - path: / backend: serviceName: rjup2 servicePort: 80 </code></pre> <p>The <code>API Server Address</code> is properly populated in the deployed YAML. I have tried different namespaces before consolidating them under Kong's default namespace and also tried making the service ports 8888 in addition to the containers target port.</p> <p>Thanks for any assistance in debugging this.</p>
jpjenk
<p><strong>Your <code>rjup2</code> <code>Service</code> doesn't have a valid selector. Note that the <code>Pods</code> you are trying to expose are labelled with <code>run: rjup2</code> label and your <code>Service</code> has <code>app: rjup2</code> selector.</strong></p> <p>Btw. you get very clear error message that indicates where the problem could be:</p> <pre><code>service kong/rjup2 does not have any active endpoints </code></pre> <p>If your <code>rjup2</code> service in <code>kong</code> namespace doesn't have any active endpoints, it means it doesn't expose your <code>Pods</code> properly which may indicate a possible mismatch in your configuration.</p> <p>You can check it by running:</p> <pre><code>kubectl get ep -n kong </code></pre> <p>Normally you should see the matching <code>Endpoints</code> object. In your case you won't see it as your <code>Service</code> cannot expose any pods untill it has a valid selector.</p> <p>If you fix your <code>Service</code> definition, everything should work just fine:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rjup2 namespace: kong spec: selector: run: rjup2 type: ClusterIP ports: - name: http port: 80 targetPort: 8888 protocol: TCP </code></pre>
mario
<p>I am using airflow 2.4.3 and running KubernetesPodOperator</p> <p>Below is the code and error:-</p> <p>Please help me with creating a KubernetesPosOperator in python. I have tried on both GCP and Azure.</p> <p>Also adding the kubernetes documentation for reference:-</p> <p><a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator</a></p> <p>I can also share any other info if required.</p> <pre><code> from kubernetes.client import models as k8s from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator #custom modules from spark_API.spark_submit import SparkSubmit #import modules import json import datetime import logging import airflow_API import time airflow_dag_object = AirflowDagUtilities(&quot;aaa_test_airflow_api&quot;) def def_func1(**kwargs): print(&quot;In func1&quot;) namespace = &quot;segmentation-pipeline&quot; docker_image = &quot;****:v6&quot; # name commented out is_delete_operator_pod = True docker_image_creds = [k8s.V1LocalObjectReference(&quot;****&quot;)] # harbor name commented out submit_command = [&quot;/bin/bash&quot;,&quot;-c&quot;] max_cores = 60 driver_memory = &quot;4g&quot; executor_memory = &quot;4g&quot; submit_args = &quot;/usr/local/spark/bin/spark-submit --master local[&quot; + str(max_cores) + &quot;] --driver-memory &quot; + \ driver_memory + &quot; --executor-memory &quot; + executor_memory + &quot; &quot; submit_spark_pipeline_config_conf = &quot;--conf &quot; + '\'' + 'spark.pipelineConfig' + &quot;=&quot; + json.dumps(_infra_config.get_infra_config(),separators=(',',':')) + '\'' + &quot; &quot; submit_spark_broadcast_timeout = &quot;--conf &quot; + '\&quot;' + &quot;spark.sql.broadcastTimeout&quot; + &quot;=&quot; + str(&quot;36000&quot;) + '\&quot;' + &quot; &quot; submit_spark_max_result_size = &quot;--conf &quot; + '\&quot;' + &quot;spark.driver.maxResultSize&quot; + &quot;=&quot; + str(&quot;0&quot;) + '\&quot;' + &quot; &quot; final_dependency_jars = [&quot;./resources/mysql_connector_java_5.1.45.jar&quot;,\ &quot;./resources/commons_httpclient_3.0.1.jar&quot;] dependency_jars_string = ','.join(list(set(final_dependency_jars))) submit_spark_dependency_jars = &quot;--conf &quot; + '\&quot;' + &quot;spark.jars&quot; + &quot;=&quot; + dependency_jars_string + '\&quot;' + &quot; &quot; extra_conf = [] extra_conf_final = [] for conf in extra_conf: conf_appended_string = &quot;--conf &quot; + '\&quot;' + conf + '\'' + &quot; &quot; extra_conf_final.append(conf_appended_string) extra_conf = &quot; &quot;.join(extra_conf_final) + &quot; &quot; airflow_task_settings = airflow_API.extract_airflow_task_details(kwargs['task_instance']) submit_spark_airflow_task_details = &quot;--conf &quot; + '\&quot;' + &quot;spark.airflowTaskDetails&quot; + &quot;=&quot; + json.dumps(airflow_task_settings) + '\'' + &quot; &quot; common_submit_args_beginning = submit_args + submit_spark_broadcast_timeout + submit_spark_max_result_size + submit_spark_dependency_jars + extra_conf + submit_spark_airflow_task_details application_resource = &quot;/update_scores.py&quot; application_arguments = 
[&quot;test_args&quot;] string_application_arguments = &quot; &quot; for i in range(0,len(application_arguments)): string_application_arguments = string_application_arguments + &quot; &quot; + json.dumps(application_arguments[i]) common_submit_args_end = application_resource + string_application_arguments platform_utilities = PlatformUtilities(_infra_config) print(&quot;platform_utilities.get_python_modules_path() -&gt; &quot;,str(platform_utilities.get_python_modules_path())) submit_spark_python_module_path = &quot;--conf &quot; + '\&quot;' + &quot;spark.modulePath&quot; + &quot;=&quot; + str(platform_utilities.get_python_modules_path()) + '\&quot;' + &quot; &quot; submit_spark_args = [common_submit_args_beginning + submit_spark_pipeline_config_conf + submit_spark_python_module_path + common_submit_args_end] print(&quot;submit_spark_args -&gt; &quot;,submit_spark_args) submit_in_cluster = True submit_spark_pod_affinity = k8s.V1Affinity( node_affinity=k8s.V1NodeAffinity(k8s.V1NodeSelectorTerm( match_expressions=[ k8s.V1NodeSelectorRequirement(key=&quot;****&quot;, operator=&quot;In&quot;, values=[&quot;n2-highmem-8&quot;]), k8s.V1NodeSelectorRequirement(key=&quot;deployment&quot;, operator=&quot;In&quot;, values=[&quot;dynamic&quot;]), ] ) ) ) submit_spark_pod_tolerations = [k8s.V1Toleration(key=&quot;deployment&quot;, operator=&quot;Equal&quot;, value=&quot;dynamic&quot;, effect=&quot;NoSchedule&quot;)] application_name = &quot;test_airflow_api_test_task_id&quot; container_resources = k8s.V1ResourceRequirements( requests={ 'memory': str(&quot;10Gi&quot;), 'cpu': str(&quot;2&quot;) }, limits={ 'memory': str(&quot;50Gi&quot;), 'cpu': str(&quot;5&quot;) } ) submit_startup_timeout_seconds = 600 submit_get_logs = True kube_submssion = KubernetesPodOperator(namespace = namespace, image = docker_image, is_delete_operator_pod = is_delete_operator_pod, image_pull_secrets = docker_image_creds, cmds = submit_command, arguments = submit_spark_args, in_cluster = submit_in_cluster, affinity = submit_spark_pod_affinity, tolerations = submit_spark_pod_tolerations, container_resources = container_resources, name = application_name, task_id = application_name, startup_timeout_seconds = submit_startup_timeout_seconds, get_logs = submit_get_logs ) kube_submssion.execute(context = None) def def_func2(**kwargs): print(&quot;In func2&quot;) dag_base = airflow_dag_object.get_dag_object() func1=PythonOperator( task_id='func1', provide_context=True, python_callable=def_func1, dag=dag_base ) func2=PythonOperator( task_id='func2', provide_context=True, python_callable=def_func2, dag=dag_base ) func1 &gt;&gt; func2 </code></pre> <p>OUTPUT ERROR:-</p> <pre><code>Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py&quot;, line 419, in execute context=context, File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py&quot;, line 387, in get_or_create_pod pod = self.find_pod(self.namespace or pod_request_obj.metadata.namespace, context=context) File &quot;/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py&quot;, line 371, in find_pod label_selector=label_selector, File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py&quot;, line 15697, in list_namespaced_pod return self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501 File 
&quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py&quot;, line 15826, in list_namespaced_pod_with_http_info collection_formats=collection_formats) File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py&quot;, line 353, in call_api _preload_content, _request_timeout, _host) File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py&quot;, line 184, in __call_api _request_timeout=_request_timeout) File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py&quot;, line 377, in request headers=headers) File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/rest.py&quot;, line 244, in GET query_params=query_params) File &quot;/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/rest.py&quot;, line 234, in request raise ApiException(http_resp=r) kubernetes.client.exceptions.ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'Audit-Id': '6ab39ea1-f955-4481-b3eb-7b3abe747a7c', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '8e487991-120d-49d0-940a-ace0b0e64421', 'X-Kubernetes-Pf-Prioritylevel-Uid': '8f6ab0b3-abdf-4782-994c-2f0f247592d2', 'Date': 'Thu, 12 Jan 2023 13:13:20 GMT', 'Content-Length': '169'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;found ',', expected: !, identifier, or 'end of string'&quot;,&quot;reason&quot;:&quot;BadRequest&quot;,&quot;code&quot;:400} </code></pre>
Himanshu Malhotra
<p>In previous versions of <strong>airflow (&lt; 2.3)</strong>, the <strong>KubernetesPodOperator</strong> used to work with a <strong>None</strong> context.</p> <p>As mentioned in your question:</p> <pre><code>kube_submssion = KubernetesPodOperator(namespace = namespace, image = docker_image, is_delete_operator_pod = is_delete_operator_pod, image_pull_secrets = docker_image_creds, cmds = submit_command, arguments = submit_spark_args, in_cluster = submit_in_cluster, affinity = submit_spark_pod_affinity, tolerations = submit_spark_pod_tolerations, container_resources = container_resources, name = application_name, task_id = application_name, startup_timeout_seconds = submit_startup_timeout_seconds, get_logs = submit_get_logs ) kube_submssion.execute(context = None) </code></pre> <p>The execute method expects a real context, as described in the documentation at the following link:</p> <p><a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_modules/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.html#KubernetesPodOperator.execute" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_modules/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.html#KubernetesPodOperator.execute</a></p> <p>You can pass the context from <strong>**kwargs</strong> to the execute method. Try passing <code>kwargs</code> to the <code>execute</code> method:</p> <pre><code>kube_submssion.execute(context = kwargs) </code></pre>
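<p>Alternatively, you can sidestep the context handling entirely by letting Airflow call <code>execute()</code> for you: register the <strong>KubernetesPodOperator</strong> as a regular task of the DAG instead of instantiating it inside a PythonOperator. A rough sketch reusing the variables from your question (they would need to be defined at module level rather than inside <code>def_func1</code>):</p> <pre><code>kube_submission = KubernetesPodOperator(
    namespace=namespace,
    image=docker_image,
    cmds=submit_command,
    arguments=submit_spark_args,
    name=application_name,
    task_id=application_name,
    get_logs=True,
    dag=dag_base,              # registered as a normal task
)

func2 = PythonOperator(task_id=&quot;func2&quot;, python_callable=def_func2, dag=dag_base)

# Airflow now supplies the runtime context itself when the task runs
kube_submission &gt;&gt; func2
</code></pre>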
Prashant Pal
<p>What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development? </p> <p>When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?</p>
Marty Young
<blockquote> <p>When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?</p> </blockquote> <p>You didn't mention what kind of <strong>kubernetes cluster</strong> you have deployed locally, but if you use <strong>Minikube</strong> it can actually be achieved in a very similar way.</p> <p>Supposing you have already initialized your <strong>Cloud SDK</strong> locally by running:</p> <pre><code>gcloud auth login gcloud container clusters get-credentials &lt;cluster name&gt; --zone &lt;zone&gt; --project &lt;project name&gt; gcloud config set project &lt;project name&gt; </code></pre> <p>so that you can run your <code>gcloud</code> commands on the local machine on which <strong>Minikube</strong> is installed, you can easily delegate this access to your <code>Pods</code> created either by <strong>Skaffold</strong> or manually on <strong>Minikube</strong>.</p> <p>You just need to start your <strong>Minikube</strong> as follows:</p> <pre><code>minikube start --mount=true --mount-string=&quot;$HOME/.config/gcloud/:/home/docker/.config/gcloud/&quot; </code></pre> <p>To keep things simple I'm mounting the local <strong>Cloud SDK</strong> config directory into the <strong>Minikube</strong> host, using <code>/home/docker/.config/gcloud/</code> as the mount point.</p> <p>Once it is available on the <strong>Minikube host VM</strong>, it can be easily mounted into any <code>Pod</code>. We can use one of the <strong>Cloud SDK</strong> docker images available <a href="https://hub.docker.com/r/google/cloud-sdk/" rel="nofollow noreferrer">here</a> or any other image that comes with the <strong>Cloud SDK</strong> preinstalled.</p> <p>A sample <code>Pod</code> to test this out may look like the one below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: cloud-sdk-pod spec: containers: - image: google/cloud-sdk:alpine command: ['sh', '-c', 'sleep 3600'] name: cloud-sdk-container volumeMounts: - mountPath: /root/.config/gcloud name: gcloud-volume volumes: - name: gcloud-volume hostPath: # directory location on host path: /home/docker/.config/gcloud # this field is optional type: Directory </code></pre> <p>After connecting to the <code>Pod</code> by running:</p> <pre><code>kubectl exec -ti cloud-sdk-pod -- /bin/bash </code></pre> <p>we'll be able to execute any <code>gcloud</code> commands just as we can on our local machine.</p>
mario
<p>I’m trying to create a cluster in GKE project-1 with shared network of project-2.</p> <p>Roles given to Service account:<br /> project-1: Kubernetes Engine Cluster Admin, Compute Network Admin, Kubernetes Engine Host Service Agent User<br /> project-2: Kubernetes Engine Service Agent, Compute Network User, Kubernetes Engine Host Service Agent User</p> <p>Service Account is created under project-1. API &amp; Services are enabled in both Projects.</p> <p>But I am getting this error persistently. Error: googleapi: Error 403: Kubernetes Engine Service Agent is missing required permissions on this project. See Troubleshooting | Kubernetes Engine Documentation | Google Cloud for more info: required “container.hostServiceAgent.use” permission(s) for “projects/project-2”., forbidden</p> <pre><code>data &quot;google_compute_network&quot; &quot;shared_vpc&quot; { name = &quot;network-name-in-project-2&quot; project = &quot;project-2&quot; } data &quot;google_compute_subnetwork&quot; &quot;shared_subnet&quot; { name = &quot;subnet-name-in-project-2&quot; project = &quot;project-2&quot; region = &quot;us-east1&quot; } # cluster creation under project 1 # project 1 specified in Provider resource &quot;google_container_cluster&quot; &quot;mowx_cluster&quot; { name = var.cluster_name location = &quot;us-east1&quot; initial_node_count = 1 master_auth { username = &quot;&quot; password = &quot;&quot; client_certificate_config { issue_client_certificate = false } } remove_default_node_pool = true cluster_autoscaling { enabled = false } # cluster_ipv4_cidr = var.cluster_pod_cidr ip_allocation_policy { cluster_secondary_range_name = &quot;pods&quot; services_secondary_range_name = &quot;svc&quot; } network = data.google_compute_network.shared_vpc.id subnetwork = data.google_compute_subnetwork.shared_subnet.id } </code></pre>
xyphan
<p>This is a community wiki answer based on the discussion in the comments and posted for better visibility. Feel free to expand it.</p> <p>The error you encountered:</p> <pre><code>Error: googleapi: Error 403: Kubernetes Engine Service Agent is missing required permissions on this project. See Troubleshooting | Kubernetes Engine Documentation | Google Cloud for more info: required “container.hostServiceAgent.use” permission(s) for “projects/project-2”., forbidden </code></pre> <p>means that the necessary service agent was not created:</p> <p><code>roles/container.serviceAgent</code> - Kubernetes Engine Service Agent:</p> <blockquote> <p>Gives Kubernetes Engine account access to manage cluster resources. Includes access to service accounts.</p> </blockquote> <p>The official <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted" rel="nofollow noreferrer">troubleshooting docs</a> describe a solution for such problems:</p> <blockquote> <p>To resolve the issue, if you have removed the <code>Kubernetes Engine Service Agent</code> role from your Google Kubernetes Engine service account, add it back. Otherwise, you must re-enable the Kubernetes Engine API, which will correctly restore your service accounts and permissions. You can do this in the gcloud tool or the Cloud Console.</p> </blockquote> <p>The solution above works as in your use case the account was missing so it had to be (re)created.</p>
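<p>If the role has to be restored by hand, it is usually a matter of re-enabling the API or re-adding the binding for the GKE service agent (a sketch; <code>PROJECT_NUMBER</code> is the service project's numeric ID and the member string is worth verifying in your IAM console):</p> <pre><code># re-enable the Kubernetes Engine API so the service agent is recreated
gcloud services enable container.googleapis.com --project project-1

# or add the role back explicitly for the GKE service agent
gcloud projects add-iam-policy-binding project-1 \
    --member &quot;serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com&quot; \
    --role roles/container.serviceAgent
</code></pre>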
Wytrzymały Wiktor
<p>Im trying to upgrade kube cluster from Ubuntu 16 to 18. After the upgrade kube-dns pod is constantly crashing. The problem appears only on U18 if i'm rolling back to U16 everything works fine.</p> <p>Kube version "v1.10.11"</p> <p>kube-dns pod events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 28m default-scheduler Successfully assigned kube-dns-75966d58fb-pqxz4 to Normal SuccessfulMountVolume 28m kubelet, MountVolume.SetUp succeeded for volume "kube-dns-config" Normal SuccessfulMountVolume 28m kubelet, MountVolume.SetUp succeeded for volume "kube-dns-token-h4q66" Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10" Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10" Normal Started 28m kubelet, Started container Normal Created 28m kubelet, Created container Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10" Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10" Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10" Normal Created 28m kubelet, Created container Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10" Normal Started 28m kubelet, Started container Normal Created 25m (x2 over 28m) kubelet, Created container Normal Started 25m (x2 over 28m) kubelet, Started container Normal Killing 25m kubelet, Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated. Normal Pulled 25m kubelet, Container image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10" already present on machine Warning Unhealthy 4m (x26 over 27m) kubelet, Liveness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>kube-dns sidecar container logs:</p> <pre><code>kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c sidecar I0809 16:31:26.768964 1 main.go:51] Version v1.14.8.3 I0809 16:31:26.769049 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns}) I0809 16:31:26.769079 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} I0809 16:31:26.769117 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. 
Interval:5s Type:1} W0809 16:31:33.770594 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49305-&gt;127.0.0.1:53: i/o timeout W0809 16:31:40.771166 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49655-&gt;127.0.0.1:53: i/o timeout W0809 16:31:47.771773 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:53322-&gt;127.0.0.1:53: i/o timeout W0809 16:31:54.772386 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58999-&gt;127.0.0.1:53: i/o timeout W0809 16:32:01.772972 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:35034-&gt;127.0.0.1:53: i/o timeout W0809 16:32:08.773540 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:33250-&gt;127.0.0.1:53: i/o timeout </code></pre> <p>kube-dns dnsmasq container logs:</p> <pre><code>kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c dnsmasq I0809 16:29:51.596517 1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I0809 16:29:51.596679 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053] I0809 16:29:52.135179 1 nanny.go:119] W0809 16:29:52.135211 1 nanny.go:120] Got EOF from stdout I0809 16:29:52.135277 1 nanny.go:116] dnsmasq[20]: started, version 2.78 cachesize 1000 I0809 16:29:52.135293 1 nanny.go:116] dnsmasq[20]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify I0809 16:29:52.135303 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa I0809 16:29:52.135314 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0809 16:29:52.135323 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local I0809 16:29:52.135329 1 nanny.go:116] dnsmasq[20]: reading /etc/resolv.conf I0809 16:29:52.135334 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa I0809 16:29:52.135343 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0809 16:29:52.135348 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local I0809 16:29:52.135353 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.53#53 I0809 16:29:52.135397 1 nanny.go:116] dnsmasq[20]: read /etc/hosts - 7 addresses I0809 16:31:28.728897 1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150) I0809 16:31:38.746899 1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150) </code></pre> <p>I have deleted the existing pods but newly created getting same error after some time. Not sure why this is happening only on Ubuntu 18. Any ideas how to fix this?</p>
DenisTs
<p>In my case I found that in Ubuntu 18 /etc/resolv.conf was pointing to: <code>/etc/resolv.conf -&gt; ../run/systemd/resolve/stub-resolv.conf</code> and it had a <code>nameserver 127.0.0.53</code> entry. At the same time, under /run/systemd/resolve you should have another resolv.conf:</p> <pre><code>/run/systemd/resolve$ ll total 8 drwxr-xr-x 2 systemd-resolve systemd-resolve 80 Aug 12 13:24 ./ drwxr-xr-x 23 root root 520 Aug 12 11:54 ../ -rw-r--r-- 1 systemd-resolve systemd-resolve 607 Aug 12 13:24 resolv.conf -rw-r--r-- 1 systemd-resolve systemd-resolve 735 Aug 12 13:24 stub-resolv.conf </code></pre> <p>In my case that resolv.conf contains the private nameserver IP 172.27.0.2. Just relink /etc/resolv.conf to ../run/systemd/resolve/resolv.conf on all cluster machines and restart the kube-dns pods.</p>
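<p>In practice the relink plus restart looks something like this on each node (a sketch; double-check the label your kube-dns pods actually carry):</p> <pre><code>sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# recreate the kube-dns pods so they pick up the new upstream resolver
kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>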
Terence Bor
<p>I have a storage class : </p> <pre><code> kubectl describe storageclass my-local-storage Name: my-local-storage IsDefaultClass: No Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"my-local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"} Provisioner: kubernetes.io/no-provisioner Parameters: &lt;none&gt; AllowVolumeExpansion: &lt;unset&gt; MountOptions: &lt;none&gt; ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: &lt;none&gt; </code></pre> <p><strong>Peristent Volume</strong></p> <pre><code>kubectl describe pv my-local-pv Name: my-local-pv Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bound-by-controller: yes Finalizers: [kubernetes.io/pv-protection] StorageClass: my-local-storage Status: Bound Claim: default/my-claim Reclaim Policy: Retain Access Modes: RWO VolumeMode: Filesystem Capacity: 1Mi Node Affinity: Required Terms: Term 0: kubernetes.io/hostname in [kubenode2] Message: Source: Type: LocalVolume (a persistent volume backed by local storage on a node) Path: /home/node/serviceLogsNew Events: &lt;none&gt; </code></pre> <p><strong>Persistent Volume Claim</strong></p> <pre><code>node@kubemaster:~/Desktop$ kubectl describe pvc my-claim Name: my-claim Namespace: default StorageClass: my-local-storage Status: Bound Volume: my-local-pv Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes Finalizers: [kubernetes.io/pvc-protection] Capacity: 1Mi Access Modes: RWO VolumeMode: Filesystem Mounted By: podname-deployment-897d6947b-hnvvq podname-deployment-897d6947b-q4f79 Events: &lt;none&gt; </code></pre> <p>Now, I have created a persistent Volume with capacity: 1Mi.</p> <p>I am running 2 pods attached to PV using PVC. Pods are creating log files. The size of files inside the folder (/home/node/serviceLogsNew) used for PV grows to 5 MB. Still everything is working fine.</p> <p>So, capacity is ignored while using Local PV / PVC? Is it configurable? </p>
Ankit Bansal
<p>Please take a look at <a href="https://github.com/kubernetes/kubernetes/issues/48701" rel="noreferrer">this</a> github issue. I believe that <a href="https://github.com/kubernetes/kubernetes/issues/48701#issuecomment-314929576" rel="noreferrer">this</a> comment also answers your question:</p> <blockquote> <p>this is working as intended, kube can't/won't enforce the capacity of PVs, the capacity field on PVs is just a label. It's up to the "administrator" i.e. the creator of the PV to label it accurately so that when users create PVCs that needs >= X Gi, they get what they want.</p> </blockquote> <p><a href="https://github.com/kubernetes/kubernetes/issues/48701#issuecomment-320375825" rel="noreferrer">This</a> advice may also be useful in your case:</p> <blockquote> <p>... If you want hard capacity boundaries with hostpath, then you should create a partition with the size you need, or use filesystem quota.</p> <p>If this is just ephemeral data, then you can consider using emptyDir volumes. Starting in 1.7, you can specify a limit on the capacity, and kubelet will evict your pod if you exceed the limit.</p> </blockquote> <p>The person who reported the issue actually uses the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a> volume type but <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a> works pretty much the same and the same rules apply here when it comes to setting <code>capacity</code> in the <code>PV</code> definition. <strong>Kubernetes</strong> doesn't have any mechanism which could enforce a specific disk quota on the directory you mount into your <code>Pod</code> from the <code>node</code>.</p> <p>Note that in your <code>PV</code> definition you can set a <code>capacity</code> which is much higher than the actual capacity of the underlying disk. Such a <code>PV</code> will be created without any errors and will be usable, allowing you to write data up to its actual maximum capacity.</p> <p>While <code>capacity</code> in a <code>PV</code> definition is just a mere label, with a <code>PVC</code> it's a bit of a different story. In this context <code>capacity</code> can be interpreted as a <strong>request for a specific minimal capacity</strong>. If your storage provisioner is able to satisfy your request, the storage will be provisioned. If it's unable to give you the storage with the minimal capacity defined in your claim, it won't be provisioned.</p> <p>Let's assume you have defined a <code>PV</code> based on a specific directory on your host/node with the capacity of <code>150Gi</code>. If you define a <code>PVC</code> in which you claim <code>151Gi</code>, the storage won't be provisioned as a <code>PV</code> with the declared <code>capacity</code> (no matter if it is a real or some made up value) won't be able to satisfy the request set in our <code>PVC</code>. So in case of a <code>PVC</code>, the <code>capacity</code> can be interpreted as a kind of constraint but it still can't enforce/limit the use of actually available underlying storage.</p> <p>Don't forget that <em>a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a> volume represents a mounted local storage device such as <strong>a disk, partition or directory</em></strong> so it's not only the directory that you can use. It can be e.g. your <code>/dev/sdb</code> disk or <code>/dev/sda5</code> partition. You can also decide to use an <strong>LVM partition with strictly defined capacity</strong>.</p>
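<p>To illustrate the emptyDir suggestion from the quoted comment, a minimal sketch of an ephemeral volume with an enforced size limit might look like this (the pod, volume and mount names are made up for the example):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: logs-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: service-logs
      mountPath: /home/node/serviceLogsNew
  volumes:
  - name: service-logs
    emptyDir:
      sizeLimit: 1Mi   # kubelet evicts the Pod if usage exceeds this limit
</code></pre>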
mario
<p>I am writing a command line tool in Go which will perform an action based on the existence of a particular pod on a <code>k8s</code> cluster, in a specific namespace.</p> <p>I could do via command line (shell) invocations within my <code>go</code> program something like </p> <pre><code>kubectl get pods -n mynapespace l app=myapp </code></pre> <p>or in case I am not certain about the labels, something even less elegant as:</p> <pre><code>kubectl get pods -n mynapespace | grep -i somepatternIamcertainabout </code></pre> <p>However, given that I am using the k8s native language (Go) I was wondering whether there might be a more Go native/specific way of making such an inquiry to the k8s api server, without resorting to shell invocations from within my cli tool.</p>
pkaramol
<p>The kubectl utility is just a convenience wrapper that talks to the Kubernetes API using bog standard HTTP. The Go standard library has a great <a href="https://golang.org/pkg/net/http/" rel="nofollow noreferrer">http package</a>. The perfect fit for what you're trying to accomplish.</p> <p>In fact, you could just use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">this official client package</a> from the Kubernetes project itself.</p>
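<p>For illustration, a minimal sketch using client-go (the namespace, label selector and kubeconfig path are assumptions taken from the question):</p> <pre><code>package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (in-cluster config is also possible).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List pods in the target namespace filtered by label, instead of shelling out to kubectl.
	pods, err := clientset.CoreV1().Pods("mynamespace").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=myapp"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching pods\n", len(pods.Items))
}
</code></pre>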
cpk
<p>I'm trying to create a bare metal multimaster kubernetes cluster. The version of kubernetes I'm working with is 1.15.12. The issue I'm running into is with the command:</p> <pre><code>kubeadm init --control-plane-endpoint &quot;LOAD_BALANCER_DNS:LOAD_BALANCER_PORT&quot; --upload-certs --pod-network-cidr=192.168.0.0/16 </code></pre> <p>The error is that --control-plane-endpoint is unknown.</p> <p>I believe in version 1.15.12 this kubeadm flag doesn't exist. Am I using the correct flag or is there a substitute that I can use for the version that I'm using (v1.15.12)?</p>
CodeZer
<p>You are right, that flag was implemented in <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md" rel="nofollow noreferrer">Kubernetes v1.16</a>:</p> <blockquote> <p>kubeadm: provide <code>--control-plane-endpoint</code> flag for <code>controlPlaneEndpoint</code> (<a href="https://github.com/kubernetes/kubernetes/pull/79270" rel="nofollow noreferrer">#79270</a>)</p> </blockquote> <p>The version you are trying to use is pretty old and so it is highly recommended that you either:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">Upgrade your cluster</a></p> </li> <li><p>Create a new cluster from scratch using a more recent version of Kubernetes (preferably v1.20). The <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">kubeadm init</a> docs can help you with it, especially the <code>--kubernetes-version</code> flag: Choose a specific Kubernetes version for the control plane.</p> </li> </ul> <p>Remember that keeping your cluster up to date can save you a lot of trouble in the future.</p>
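<p>As an illustration, on a kubeadm version that supports the flag (v1.16+), the same init command from the question should be accepted; the version value below is just an example:</p> <pre><code>kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
  --upload-certs \
  --pod-network-cidr=192.168.0.0/16 \
  --kubernetes-version v1.20.2
</code></pre>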
Wytrzymały Wiktor
<p>I have installed cert-manager 0.12.0 for SSL certificate. </p> <p>My Issuer file is</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-prod http01: {} </code></pre> <p>My certificate file</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: tls-secret spec: secretName: tls-secret-prod dnsNames: - mydomain.com acme: config: - http01: ingressClass: nginx domains: - mydomain.com issuerRef: name: letsencrypt-prod kind: ClusterIssuer </code></pre> <p>Ingress configuration is</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cms annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/tls-acme: "true" spec: tls: - hosts: - mydomain.com secretName: tls-secret-prod rules: - host: mydomain.com http: paths: - backend: serviceName: apostrophe servicePort: 80 path: / </code></pre> <p>But still, SSL certificated is not valid. And Common name is “Kubernetes Ingress Controller Fake Certificate”.</p> <p>The following result to show orders and challenges</p> <pre><code>kubectl get orders, challenges -o wide NAME STATE DOMAIN REASON AGE challenge.certmanager.k8s.io/tls-secret-155743219-0 pending mydomain.com pods "cm-acme-http-solver-gk2zx" is forbidden: minimum cpu usage per Container is 100m, but request is 10m. 26m </code></pre> <p>I have updated the resources limit the range and reinstalled cert-manager with helm. I am still getting this error. I am not sure what goes wrong or show how to fix this.</p> <p>Please let me know if you need anything. Thanks in advance!</p>
Ramesh Murugesan
<p>The problem lies in the CPU constraints (a <code>LimitRange</code>) defined for the namespace the solver pod is created in. As the error shows, the <strong>cm-acme-http-solver</strong> pod requests only <strong>10m</strong> CPU, while the minimum CPU request allowed per container in that namespace is <strong>100m</strong>. So either lower the minimum in the <code>LimitRange</code> from <strong>100m</strong> to <strong>10m</strong> or less, or increase the CPU request of the solver pod to at least <strong>100m</strong>.</p> <p>Take a look here: <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/" rel="nofollow noreferrer">cert-manager-kubernetes</a>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/#attempt-to-create-a-pod-that-does-not-meet-the-minimum-cpu-request" rel="nofollow noreferrer">pod-min-cpu-request</a>.</p> <p>Useful article: <a href="https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b" rel="nofollow noreferrer">resources-limits-kubernetes</a>.</p>
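<p>For illustration only, a LimitRange with a lower minimum might look like the sketch below (the object name is made up, and the namespace must be the one where cert-manager creates the cm-acme-http-solver pod):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-constraint
  namespace: default        # namespace where the solver pod is created (assumption)
spec:
  limits:
  - type: Container
    min:
      cpu: 10m              # allow containers that request as little as 10m
</code></pre>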
Malgorzata
<p>I'm checking on a scaling issue and we are suspecting it has something to do with the memory, but after running a load testing on local machine it doesn't seems to have memory leak. We are hosting the .net core application in Kubernetes, with resources setting 800mi request memory without limit. And as per describe from this <a href="https://www.c-sharpcorner.com/article/garbage-collection-in-dot-net/#:%7E:text=The%20trigger%20for%20Garbage%20collection,GC." rel="nofollow noreferrer">Article</a></p> <blockquote> <p>The trigger for Garbage collection occurs when, The system has low physical memory and gets notification from OS.</p> </blockquote> <p>So does that mean GC is unlikely to kick in until my nodes are low on memory if we did not setup memory limit, and it will eventually occupied most of memory in node?</p>
ragk
<p>@Martin is right but I would like to provide some more insight on this topic.</p> <p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Kubernetes best practices: Resource requests and limits</a> is a very good guide explaining the idea behind these mechanisms with a detailed explanation and examples.</p> <p>Also, <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Managing Resources for Containers</a> will provide you with the official docs regarding:</p> <ul> <li><p>Requests and limits</p> </li> <li><p>Resource types</p> </li> <li><p>Resource requests and limits of Pod and Container</p> </li> <li><p>Resource units in Kubernetes</p> </li> <li><p>How Pods with resource requests are scheduled</p> </li> <li><p>How Pods with resource limits are run, etc</p> </li> </ul> <p>Bear in mind that it is very important to have a good strategy when calculating how many resources you would need for each container. Optimally, your pods should be using exactly the amount of resources you requested but that's almost impossible to achieve. If the usage is lower than your request, you are wasting resources. If it's higher, you are risking performance issues. Consider a 25% margin up and down the request value as a good starting point. Regarding limits, achieving a good setting would depend on trying and adjusting. There is no optimal value that would fit everyone as it depends on many factors related to the application itself, the demand model, the tolerance to errors etc.</p> <p>And finally, you can use the <a href="https://github.com/kubernetes-sigs/metrics-server#kubernetes-metrics-server" rel="nofollow noreferrer">metrics-server</a> to get the CPU and memory usage of the pods.</p>
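<p>As a purely illustrative sketch (the values are assumptions, not a recommendation), setting both a request and a limit on the container could look like this; once metrics-server is installed, <code>kubectl top</code> can then be used to compare actual usage against them:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dotnet-app
spec:
  containers:
  - name: app
    image: myregistry/dotnet-app:latest   # hypothetical image
    resources:
      requests:
        memory: "800Mi"
        cpu: "250m"
      limits:
        memory: "1Gi"    # with a cgroup limit in place, the GC has a hard boundary to react to
        cpu: "500m"
</code></pre> <pre><code>kubectl top pods
kubectl top nodes
</code></pre>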
Wytrzymały Wiktor
<p>I'm trying to connect to a postgres container running in docker on my mac, from my minikube setup in virtualbox. But I'm running into dns resolve issues.</p> <p>I'm running postgres as a container on docker</p> <pre><code>&gt; docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a794aca3a6dc postgres "docker-entrypoint.s…" 3 days ago Up 3 days 0.0.0.0:5432-&gt;5432/tcp postgres </code></pre> <p>On my Mac / VirtualBox / Minikube setup I create a service</p> <pre><code>kind: Service apiVersion: v1 metadata: name: postgres-svc spec: type: ExternalName externalName: 10.0.2.2 ports: - port: 5432 </code></pre> <p><code>10.0.2.2</code> is alias to host interface (found this information <a href="https://stackoverflow.com/questions/9808560/why-do-we-use-10-0-2-2-to-connect-to-local-web-server-instead-of-using-computer/34732276#34732276">here</a>)</p> <pre><code>&gt; kubectl get service --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 21d hazelnut postgres-svc ExternalName &lt;none&gt; 10.0.2.2 5432/TCP 27m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 21d kube-system kubernetes-dashboard ClusterIP 10.108.181.235 &lt;none&gt; 80/TCP 19d kube-system tiller-deploy ClusterIP 10.101.218.56 &lt;none&gt; 44134/TCP 20d </code></pre> <p>(our namespace is <code>hazelnut</code>, don't ask:-)</p> <p>In my deployment, if I connect to 10.0.2.2 directly, it connects to the postgres without issue, but if I try to resolve the hostname of the kubernetes service it doesnt' work. So it's not a firewall or routing issue, pure dns.</p> <p>I've tried <code>postgres-svc.hazelnut.cluster.local</code>, <code>postgres-svc</code>, <code>postgres-svc.hazelnut.svc.cluster.local</code>, <code>postgres-svc.hazelnut</code> all resulting in NXDOMAIN</p> <p><code>kubernetes.default</code> works though.</p> <pre><code>&gt; nslookup kubernetes.default Server: 10.96.0.10 Address: 10.96.0.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.96.0.1 </code></pre> <p>In this <a href="https://stackoverflow.com/questions/52356455/kubernetes-externalname-service-not-visible-in-dns">post</a> they mention that using kube-dns should solve it, but I'm using it and to no avail</p> <pre><code>&gt; kubectl get svc --namespace=kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 21d ... </code></pre> <p>Any idea how I can get this to work properly?</p>
Tom Lous
<p>For the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName service type</a> the <code>externalName</code> should be an FQDN, not an IP address, e.g. </p> <pre><code>kind: Service ... metadata: name: postgres-svc spec: type: ExternalName externalName: mydb.mytestdomain </code></pre> <p>The host machine should be able to resolve that FQDN. You might add a record into the <code>/etc/hosts</code> on the Mac host to achieve that (10.0.2.2 being the host alias visible from the Minikube VM, as in the question): </p> <pre><code>10.0.2.2 mydb.mytestdomain </code></pre> <p>Actually, coredns uses the name resolver configured in the <code>/etc/resolv.conf</code> in the Minikube VM. It points to the name resolver in the VirtualBox NAT Network (10.0.2.3). In turn, VirtualBox relies on the host name resolving mechanism that looks through the local <code>/etc/hosts</code> file. </p> <p>Tested for: MacOS 10.14.3, VBox 6.0.10, kubernetes 1.15.0, minikube 1.2.0, coredns</p>
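<p>To verify resolution from inside the cluster, a quick check could be (busybox 1.28 is commonly used here because its nslookup behaves well for this kind of test):</p> <pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup postgres-svc.hazelnut.svc.cluster.local
</code></pre>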
mebius99
<p>I did a small deployment in K8s using Docker image but it is not showing in deployment but only showing in pods. Reason: It is not creating any default namespace in deployments.</p> <p>Please suggest:</p> <p>Following are the commands I used.</p> <pre><code>$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default pod/hello-node created $ kubectl get pods NAME READY STATUS RESTARTS AGE hello-node 1/1 Running 0 12s $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default hello-node 1/1 Running 0 9m9s kube-system event-exporter-v0.2.5-599d65f456-4dnqw 2/2 Running 0 23m kube-system kube-proxy-gke-hello-world-default-pool-c09f603f-3hq6 1/1 Running 0 23m $ kubectl get deployments **No resources found in default namespace.** $ kubectl get deployments --all-namespaces NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system event-exporter-v0.2.5 1/1 1 1 170m kube-system fluentd-gcp-scaler 1/1 1 1 170m kube-system heapster-gke 1/1 1 1 170m kube-system kube-dns 2/2 2 2 170m kube-system kube-dns-autoscaler 1/1 1 1 170m kube-system l7-default-backend 1/1 1 1 170m kube-system metrics-server-v0.3.1 1/1 1 1 170m </code></pre>
NewLearner
<p>Arghya Sadhu's answer is correct. In the past the <code>kubectl run</code> command indeed created a <code>Deployment</code> by default instead of a <code>Pod</code>. Actually in the past you could use it with so-called <a href="https://v1-17.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators" rel="nofollow noreferrer">generators</a> and you were able to specify exactly what kind of resource you want to create by providing the <code>--generator</code> flag followed by the corresponding value. Currently the <code>--generator</code> flag is deprecated and has no effect. </p> <p>Note that you've got quite a clear message after running your <code>kubectl run</code> command:</p> <pre><code>$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default pod/hello-node created </code></pre> <p>It clearly says that the <code>Pod</code> <code>hello-node</code> was created. It doesn't mention a <code>Deployment</code> anywhere.</p> <p>As an alternative to using <strong>imperative commands</strong> for creating either <code>Deployments</code> or <code>Pods</code> you can use the <strong>declarative approach</strong>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello-node namespace: default labels: app: hello-node spec: replicas: 3 selector: matchLabels: app: hello-node template: metadata: labels: app: hello-node spec: containers: - name: hello-node-container image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 ports: - containerPort: 8080 </code></pre> <p>Declaration of <code>namespace</code> can be omitted in this case as by default all resources are deployed into the <code>default</code> namespace.</p> <p>After saving the file e.g. as <code>deployment.yaml</code> you just need to run:</p> <pre><code>kubectl apply -f deployment.yaml </code></pre> <h3>Update:</h3> <p>Expansion of the environment variables within the yaml manifest actually doesn't work so the following line from the above deployment example cannot be used:</p> <pre><code>image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 </code></pre> <p>The simplest workaround is a fairly simple <code>sed</code> "trick".</p> <p>First we need to change a bit our project id's placeholder in our deployment definition yaml. It may look like this:</p> <pre><code>image: gcr.io/{{DEVSHELL_PROJECT_ID}}/hello-node:1.0 </code></pre> <p>Then when applying the deployment definition instead of a simple <code>kubectl apply -f deployment.yaml</code> run this one-liner:</p> <pre><code>sed "s/{{DEVSHELL_PROJECT_ID}}/$DEVSHELL_PROJECT_ID/g" deployment.yaml | kubectl apply -f - </code></pre> <p>The above command tells <code>sed</code> to search through the <code>deployment.yaml</code> document for the <code>{{DEVSHELL_PROJECT_ID}}</code> string and each time this string occurs, to substitute it with the actual value of the <code>$DEVSHELL_PROJECT_ID</code> environment variable.</p>
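<p>If you prefer to stay with an imperative one-liner and only need a Deployment (not a bare Pod), current kubectl versions also offer <code>kubectl create deployment</code>; a minimal sketch (the environment variable is expanded by your shell before kubectl sees it, so the caveat above does not apply here):</p> <pre><code>kubectl create deployment hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
kubectl expose deployment hello-node --port=8080 --target-port=8080
</code></pre>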
mario
<p>This question is regarding kubernetes storage. I am using a local kubernetes cluster where while some applications to be deployed need to be backed by pvcs. The PVC are provisioned dynamically . However, sometimes when there is no storage left on the cluster the pvc request just gets stuck in forever pending state. </p> <p>Is there any way that the available storage on the kubernetes cluster be checked? Checked extensively in the docs and it is just not clear how to check remaining storage capacity on a kubernetes cluster. </p> <p>Also, as per kubernetes docs the capacity of a node is different and the pvc allocation is bound to the pv which are a completely separate cluster resource just like nodes.</p> <p>In that case what storage do I need to check to find if there's any space available for say an x gb dynamic pvc? Also, how do i check it?</p>
sri3
<p>You can use <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/" rel="nofollow noreferrer">tools for monitoring resources</a>.</p> <p>One of them is <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>; you can combine it with <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> to visualize the collected metrics.</p> <p>Also take a look at <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#monitoring-ephemeral-storage-consumption" rel="nofollow noreferrer">compute-resources-consumption-monitoring</a>.</p> <p>When local ephemeral storage is used, it is monitored on an ongoing basis by the kubelet. The monitoring is performed by scanning each emptyDir volume, log directories, and writable layers on a periodic basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log directories or writable layers) may, at the cluster operator’s option, be managed by use of project quotas. Project quotas were originally implemented in XFS, and have more recently been ported to ext4fs. Project quotas can be used for both monitoring and enforcement; as of Kubernetes 1.16, they are available as alpha functionality for monitoring only.</p> <p>Quotas are faster and more accurate than directory scanning. When a directory is assigned to a project, all files created under a directory are created in that project, and the kernel merely has to keep track of how many blocks are in use by files in that project. If a file is created and deleted, but with an open file descriptor, it continues to consume space. This space will be tracked by the quota, but will not be seen by a directory scan.</p> <p>To enable use of project quotas, the cluster operator must do the following:</p> <ul> <li>enable the <code>LocalStorageCapacityIsolationFSQuotaMonitoring=true</code> feature gate in the kubelet configuration. This defaults to false in Kubernetes 1.16, so must be explicitly set to true</li> <li>make sure that the root partition (or optional runtime partition) is built with project quotas enabled. Note that all XFS filesystems support project quotas, but ext4 filesystems must be specially built.</li> </ul> <p>Make sure that the root partition (or optional runtime partition) is mounted with project quotas enabled.</p>
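<p>For a quick look without a full monitoring stack, the commands below can help; the second one queries the kubelet summary API through the API server proxy and assumes <code>jq</code> is available (replace the node name):</p> <pre><code># Capacity and status of the existing PVs/PVCs
kubectl get pv
kubectl get pvc --all-namespaces

# Free and total filesystem bytes as reported by a node's kubelet
kubectl get --raw "/api/v1/nodes/&lt;node-name&gt;/proxy/stats/summary" | jq '.node.fs'
</code></pre>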
Malgorzata
<p>I recently installed <code>microk8s</code>, and enabled helm3 and dns addons on microk8s. Deployment from <code>stable/chart</code> works fine but any deployment from <code>bitnami/chart</code> fails.</p> <p><strong>OS:</strong> Ubuntu 20.04.1 LTS -- microk8s: 1.19/stable</p> <pre><code>microk8s.helm3 install my-release bitnami/jenkins =&gt; Error: parse error at (jenkins/charts/common/templates/_secrets.tpl:84): function &quot;lookup&quot; not defined microk8s.helm3 install my-release bitnami/magento =&gt; Error: parse error at (magento/charts/elasticsearch/charts/common/templates/_secrets.tpl:84): function &quot;lookup&quot; not defined </code></pre>
gharbi.bdr
<p>There was a bug reported <a href="https://github.com/jenkinsci/helm-charts/issues/193" rel="noreferrer">here</a> and <a href="https://github.com/helm/helm/issues/7955" rel="noreferrer">here</a> which was caused by the conditional inclusion of <code>lookup</code> into the function map.</p> <p>A fix for it was merged <a href="https://github.com/helm/helm/pull/7969" rel="noreferrer">here</a> and is now available from Helm version 3.2.0.</p> <p>So, in order to fix this issue you should update your Helm to version 3.2.0 or newer.</p>
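<p>A quick way to confirm which Helm client is too old and to get a newer one (the snap channel below is just one possible way; back up any existing kubeconfig before overwriting it):</p> <pre><code># Check the version bundled with the microk8s helm3 addon
microk8s.helm3 version --short

# A Helm 3.2.0-or-newer client pointed at the microk8s kubeconfig will also work, e.g.
sudo snap install helm --classic
microk8s config &gt; ~/.kube/config
helm version --short
</code></pre>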
Wytrzymały Wiktor
<p>I need to connect to windows remote server(shared drive) from GO API hosted in the alpine linux. I tried using tcp,ssh and ftp none of them didn't work. Any suggestions or ideas to tackle this?</p>
Srikanth Reddy
<p>Before proceeding with debugging the GO code, it is worth doing some "unskilled labour" within the container to ensure the prerequisites are met: </p> <ol> <li>the Samba client is installed and daemons are running; </li> <li>the target name gets resolved; </li> <li>there are no connectivity issues (routing, firewall rules, etc); </li> <li>there are share access permissions; </li> <li>mounting a remote volume is allowed for the container.</li> </ol> <p>Connect to the container: </p> <pre><code>$ docker ps $ docker exec -it container_id /bin/bash </code></pre> <p>Samba daemons are running: </p> <pre><code>$ smbd status $ nmbd status </code></pre> <p>You use the right name format in your code and command lines: </p> <pre><code>UNC notation =&gt; \\server_name\share_name URL notation =&gt; smb://server_name/share_name </code></pre> <p>Target name is resolvable</p> <pre><code>$ nslookup server_name.domain_name $ nmblookup netbios_name $ ping server_name </code></pre> <p>Samba shares are visible</p> <pre><code>$ smbclient -L //server [-U user] # list of shares </code></pre> <p>and accessible (<code>ls</code>, <code>get</code>, <code>put</code> commands provide expected output here)</p> <pre><code>$ smbclient //server/share &gt; ls </code></pre> <p>Try to mount the remote share as suggested by <a href="https://stackoverflow.com/users/7053644/cwadley">@cwadley</a> (mount could be prohibited by default in a Docker container): </p> <pre><code>$ sudo mount -t cifs -o username=geeko,password=pass //server/share /mnt/smbshare </code></pre> <p>For investigation purposes you might use the Samba docker container available at <a href="https://github.com/dperson/samba" rel="nofollow noreferrer">GitHub</a>, or even deploy your application in it since it contains the Samba client and helpful command line tools: </p> <pre><code>$ sudo docker run -it -p 139:139 -p 445:445 -d dperson/samba </code></pre> <p>After you get this working at the Docker level, you could easily reproduce this in Kubernetes. </p> <p>You might do the checks from within the running Pod in Kubernetes: </p> <pre><code>$ kubectl get deployments --show-labels $ LABEL=label_value; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name $ kubectl exec pod_name -c container_name -- ping -c1 server_name </code></pre> <p>Once it works from the command line in Docker and Kubernetes, you should be able to get your program code working as well. </p> <p>Also, there is a really thoughtful discussion on StackOverflow regarding the Samba topic:<br> <a href="https://stackoverflow.com/questions/27989751/mount-smb-cifs-share-within-a-docker-container">Mount SMB/CIFS share within a Docker container</a></p>
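<p>On the Go side, once the share is reachable, a minimal sketch could use a pure-Go SMB client such as the third-party <code>github.com/hirochachacha/go-smb2</code> package (the server address, credentials, share and file names below are placeholders, and the exact API should be checked against the library's documentation):</p> <pre><code>package main

import (
	"fmt"
	"net"

	smb2 "github.com/hirochachacha/go-smb2"
)

func main() {
	// Plain TCP connection to the SMB port of the Windows server.
	conn, err := net.Dial("tcp", "winserver.example.com:445")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Authenticate with NTLM credentials (placeholders).
	d := &amp;smb2.Dialer{
		Initiator: &amp;smb2.NTLMInitiator{
			User:     "svc_user",
			Password: "secret",
			Domain:   "EXAMPLE",
		},
	}
	s, err := d.Dial(conn)
	if err != nil {
		panic(err)
	}
	defer s.Logoff()

	// Mount the shared drive and read a file from it.
	fs, err := s.Mount("SharedDrive")
	if err != nil {
		panic(err)
	}
	defer fs.Umount()

	data, err := fs.ReadFile("reports/latest.csv")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(data), "bytes read")
}
</code></pre>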
mebius99
<p>I have a three node GCE cluster and a single-pod GKE deployment with three replicas. I created the PV and PVC like so:</p> <pre><code># Create a persistent volume for web content apiVersion: v1 kind: PersistentVolume metadata: name: nginx-content labels: type: local spec: capacity: storage: 5Gi accessModes: - ReadOnlyMany hostPath: path: "/usr/share/nginx/html" -- # Request a persistent volume for web content kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nginx-content-claim annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadOnlyMany] resources: requests: storage: 5Gi </code></pre> <p>They are referenced in the container spec like so:</p> <pre><code> spec: containers: - image: launcher.gcr.io/google/nginx1 name: nginx-container volumeMounts: - name: nginx-content mountPath: /usr/share/nginx/html ports: - containerPort: 80 volumes: - name: nginx-content persistentVolumeClaim: claimName: nginx-content-claim </code></pre> <p>Even though I created the volumes as ReadOnlyMany, only one pod can mount the volume at any given time. The rest give "Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE". How can I make it so all three replicas read the same web content from the same volume?</p>
asdfaewefgav
<p>First I'd like to point out one fundamental discrepancy in your configuration. Note that when you use your <code>PersistentVolumeClaim</code> defined as in your example, you don't use your <code>nginx-content</code> <code>PersistentVolume</code> at all. You can easily verify it by running:</p> <pre><code>kubectl get pv </code></pre> <p>on your <strong>GKE cluster</strong>. You'll notice that apart from your manually created <code>nginx-content</code> <code>PV</code>, there is another one, which was automatically provisioned based on the <code>PVC</code> that you applied.</p> <p>Note that in your <code>PersistentVolumeClaim</code> definition you're explicitly referring to the <code>default</code> storage class which has nothing to do with your manually created <code>PV</code>. Actually even if you completely omit the annotation:</p> <pre><code>annotations: volume.alpha.kubernetes.io/storage-class: default </code></pre> <p>it will work exactly the same way, namely the <code>default</code> storage class will be used anyway. Using the default storage class on <strong>GKE</strong> means that <strong>GCE Persistent Disk</strong> will be used as your volume provisioner. You can read more about it <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#storageclasses" rel="noreferrer">here</a>:</p> <blockquote> <p>Volume implementations such as gcePersistentDisk are configured through StorageClass resources. GKE creates a default StorageClass for you which uses the standard persistent disk type (ext4). The default StorageClass is used when a PersistentVolumeClaim doesn't specify a StorageClassName. You can replace the provided default StorageClass with your own.</p> </blockquote> <p>But let's move on to the solution of the problem you're facing.</p> <h3>Solution:</h3> <p>First, I'd like to emphasize <strong>you don't have to use any NFS-like filesystems to achieve your goal</strong>.</p> <p>If you need your <code>PersistentVolume</code> to be available in <code>ReadOnlyMany</code> mode, <strong>GCE Persistent Disk</strong> is a perfect solution that entirely meets your requirements.</p> <p>It can be mounted in <code>ro</code> mode by many <code>Pods</code> at the same time and, what is even more important, by many <code>Pods</code> scheduled on different <strong>GKE</strong> <code>nodes</code>. Furthermore it's really simple to configure and it works on <strong>GKE</strong> out of the box.</p> <p>In case you want to use your storage in <code>ReadWriteMany</code> mode, I agree that something like NFS may be the only solution as <strong>GCE Persistent Disk</strong> doesn't provide such capability.</p> <p>Let's take a closer look at how we can configure it.</p> <p>We need to start by defining our <code>PVC</code>. This step was actually already done by yourself but you got lost a bit in further steps. Let me explain how it works.</p> <p>The following configuration is correct (as I mentioned, the <code>annotations</code> section can be omitted):</p> <pre><code># Request a persistent volume for web content kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nginx-content-claim spec: accessModes: [ReadOnlyMany] resources: requests: storage: 5Gi </code></pre> <p>However I'd like to add one important comment to this. You said:</p> <blockquote> <p>Even though I created the volumes as ReadOnlyMany, only one pod can mount the volume at any given time.</p> </blockquote> <p>Well, actually <strong>you didn't</strong>. 
I know it may seem a bit tricky and somewhat surprising but this is not how defining <code>accessModes</code> really works. In fact it's a widely misunderstood concept. First of all <strong>you cannot define access modes in <code>PVC</code></strong> in a sense of putting there the constraints you want. Supported <strong>access modes</strong> are an inherent feature of a particular storage type. They are already defined by the storage provider.</p> <p>What you actually do in a <code>PVC</code> definition is requesting a <code>PV</code> that supports the particular access mode or access modes. Note that it's in a form of <strong>a list</strong> which means you may provide many different access modes that you want your <code>PV</code> to support.</p> <p>Basically it's like saying: <em>&quot;Hey! Storage provider! Give me a volume that supports <code>ReadOnlyMany</code> mode.&quot;</em> You're asking this way for a storage that will satisfy your requirements. Keep in mind however that you can be given more than you ask. And this is also our scenario when asking for a <code>PV</code> that supports <code>ReadOnlyMany</code> mode in <strong>GCP</strong>. It creates for us a <code>PersistentVolume</code> which meets the requirements we listed in the <code>accessModes</code> section but it also supports <code>ReadWriteOnce</code> mode. Although we didn't ask for something that also supports <code>ReadWriteOnce</code> you will probably agree with me that storage which has a built-in support for those two modes fully satisfies our request for something that supports <code>ReadOnlyMany</code>. So basically this is the way it works.</p> <p>Your <code>PV</code> that was automatically provisioned by GCP in response to your <code>PVC</code> supports those two <code>accessModes</code> and if you don't specify explicitly in the <code>Pod</code> or <code>Deployment</code> definition that you want to mount it in <strong>read-only</strong> mode, by default it is mounted in <strong>read-write</strong> mode.</p> <p>You can easily verify it by attaching to the <code>Pod</code> that was able to successfully mount the <code>PersistentVolume</code>:</p> <pre><code>kubectl exec -ti pod-name -- /bin/bash </code></pre> <p>and trying to write something on the mounted filesystem.</p> <p>The error message you get:</p> <pre><code>&quot;Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE&quot; </code></pre> <p>concerns specifically <strong>GCE Persistent Disk</strong> that is already mounted by one <strong>GKE</strong> <code>node</code> in <code>ReadWriteOnce</code> mode and it cannot be mounted by another <code>node</code> on which the rest of your <code>Pods</code> were scheduled.</p> <p>If you want it to be mounted in <code>ReadOnlyMany</code> mode, you need to specify it explicitly in your <code>Deployment</code> definition by adding a <code>readOnly: true</code> statement in the <code>volumes</code> section under <code>Pod's</code> template specification like below:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: &quot;/usr/share/nginx/html&quot; name: nginx-content volumes: - name: nginx-content persistentVolumeClaim: claimName: nginx-content-claim readOnly: true </code></pre> <p>Keep in mind however that to be able to mount it in <code>readOnly</code> mode, first we need to 
pre-populate such volume with data. Otherwise you'll see another error message, saying that unformatted volume cannot be mounted in read only mode.</p> <p>The easiest way to do it is by creating a single <code>Pod</code> which will serve only for copying data which was already uploaded to one of our <strong>GKE nodes</strong> to our destination <code>PV</code>.</p> <p>Note that pre-populating <code>PersistentVolume</code> with data can be done in many different ways. You can mount in such <code>Pod</code> only your <code>PersistentVolume</code> that you will be using in your <code>Deployment</code> and get your data using <code>curl</code> or <code>wget</code> from some external location saving it directly on your destination <code>PV</code>. It's up to you.</p> <p>In my example I'm showing how to do it using additional <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a> volume that allows us to mount into our <code>Pod</code> a <code>directory</code>, <code>partition</code> or <code>disk</code> (in my example I use a directory <code>/var/tmp/test</code> located on one of my GKE nodes) available on one of our kubernetes nodes. It's much more flexible solution than <code>hostPath</code> as we don't have to care about scheduling such <code>Pod</code> to particular node, that contains the data. Specific <strong>node affinity</strong> rule is already defined in <code>PersistentVolume</code> and <code>Pod</code> is automatically scheduled on specific node.</p> <p>To create it we need 3 things:</p> <p><code>StorageClass</code>:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p><code>PersistentVolume</code> definition:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-pv spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /var/tmp/test nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - &lt;gke-node-name&gt; </code></pre> <p>and finally <code>PersistentVolumeClaim</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi storageClassName: local-storage </code></pre> <p>Then we can create our temporary <code>Pod</code> which will serve only for copying data from our <strong>GKE node</strong> to our <strong>GCE Persistent Disk</strong>.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: myfrontend image: nginx volumeMounts: - mountPath: &quot;/mnt/source&quot; name: mypd - mountPath: &quot;/mnt/destination&quot; name: nginx-content volumes: - name: mypd persistentVolumeClaim: claimName: myclaim - name: nginx-content persistentVolumeClaim: claimName: nginx-content-claim </code></pre> <p>Paths you can see above are not really important. The task of this <code>Pod</code> is only to allow us to copy our data to the destination <code>PV</code>. 
Eventually our <code>PV</code> will be mounted in a completely different path.</p> <p>Once the <code>Pod</code> is created and both volumes are successfully mounted, we can attach to it by running:</p> <pre><code>kubectl exec -ti mypod -- /bin/bash </code></pre> <p>Within the <code>Pod</code> simply run:</p> <pre><code>cp /mnt/source/* /mnt/destination/ </code></pre> <p>That's all. Now we can <code>exit</code> and delete our temporary <code>Pod</code>:</p> <pre><code>kubectl delete pod mypod </code></pre> <p>Once it is gone, we can apply our <code>Deployment</code> and our <code>PersistentVolume</code> can finally be mounted in <code>readOnly</code> mode by all the <code>Pods</code> located on various <strong>GKE nodes</strong>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: &quot;/usr/share/nginx/html&quot; name: nginx-content volumes: - name: nginx-content persistentVolumeClaim: claimName: nginx-content-claim readOnly: true </code></pre> <p>Btw. if you are ok with the fact that your <code>Pods</code> will be scheduled only on one particular node, you can give up on using <strong>GCE Persistent Disk</strong> at all and switch to the above mentioned <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a> volume. This way all your <code>Pods</code> will be able not only to read from it but also to write to it at the same time. The only caveat is that all those <code>Pods</code> will be running on a single node.</p>
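<p>If you want to double-check that the volume really is mounted read-only in the final Deployment, a quick test could be (the pod name is an example; pick any replica):</p> <pre><code>kubectl get pods -l app=nginx -o wide
kubectl exec -ti &lt;one-of-the-nginx-pods&gt; -- touch /usr/share/nginx/html/test
# expected: touch: cannot touch '/usr/share/nginx/html/test': Read-only file system
</code></pre>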
mario
<p>I would like to list pods created within 24 hours. I didn't find any kubectl commands or anything to get those. Could anyone please help me with the kubectl command to get only the pods created in last 24 hours.</p>
Vivek Subramani
<p>In order to list all Pods created within the last 24h you can use the below command:</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | awk 'match($5,/^[0-9]h|^[0-9][0-9]h|^[0-9]m|^[0-9][0-9]m|^[0-9]s|^[0-9][0-9]s/) {print $0}' </code></pre> <p>If you also want to get Pods with errors only, then you can use:</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | awk 'match($5,/^[0-9]h|^[0-9][0-9]h|^[0-9]m|^[0-9][0-9]m|^[0-9]s|^[0-9][0-9]s/) {print $0}' | grep -i Error </code></pre> <p>Or alternatively to only list Pods with the <code>Pending</code> status:</p> <pre><code>kubectl get pods --field-selector=status.phase=Pending --sort-by=.metadata.creationTimestamp | awk 'match($5,/^[0-9]h|^[0-9][0-9]h|^[0-9]m|^[0-9][0-9]m|^[0-9]s|^[0-9][0-9]s/) {print $0}' </code></pre>
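<p>A sketch of a more precise alternative that compares the actual <code>creationTimestamp</code> instead of parsing the AGE column (assumes GNU <code>date</code> and <code>jq</code> are available):</p> <pre><code>kubectl get pods -o json | jq -r \
  --arg cutoff "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  '.items[] | select(.metadata.creationTimestamp &gt; $cutoff) | .metadata.name'
</code></pre>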
Wytrzymały Wiktor
<p>I am trying to deploy a docker image which is in public repository. I am trying to create a loadbalancer service, and trying to expose the service in my system ip address, and not 127.0.0.1. I am using a windows 10 , and my docker has WSL2 instead of hyper-v.</p> <p>Below is my .yaml file. So, the service inside will run in port 4200, so to avoid any kind of confusion I was keeping all the ports in 4200.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: hoopla spec: selector: app: hoopla ports: - protocol: TCP port: 4200 targetPort: 4200 clusterIP: 10.96.1.3 type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.0.144 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: hoopla name: hoopla spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: hoopla template: metadata: labels: app.kubernetes.io/name: hoopla spec: containers: - image: pubrepo/myimg:latest name: hoopla ports: - containerPort: 4200 </code></pre> <p>Can anybody help me here to understand what mistake I am making. I basically want to expose this on my system IP address.</p>
Wan Street
<p>The <code>LoadBalancer</code> service type requires a cloud provider's load balancer ( <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a> )</p> <pre><code>LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created </code></pre> <p>If you want to expose your service on your machine's IP address, use the <code>NodePort</code> service type for example, and if you just want to test your webapp, you can use the <code>ClusterIP</code> service type and do a port-forward, for example with your ClusterIP service:</p> <pre><code>kubectl port-forward svc/hoopla 4200:4200 </code></pre>
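<p>A minimal sketch of the NodePort variant (note that the selector must match the pod labels from your Deployment, which are <code>app.kubernetes.io/name: hoopla</code> rather than <code>app: hoopla</code>; the nodePort value is an arbitrary example from the default 30000-32767 range):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: hoopla
  ports:
  - protocol: TCP
    port: 4200
    targetPort: 4200
    nodePort: 30420   # reach the app at http://&lt;your-machine-ip&gt;:30420
</code></pre>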
Bguess
<p>I am trying to create a deployment in GKE that uses multiple replicas. I have some static data which I want to have available in every pod. This data will not be updated, no write is required.</p> <p>I decided to use a PV with a corresponding PVC with the ReadOnlyMany storage class. The thing is, I do not know how to actually transfer my data to the volume - since it is read-only. I tried using </p> <pre><code>gcloud compute scp /local/path instance:/remote/path </code></pre> <p>but of course, I get a permission error. I then tried creating a new PV via the console. I attached it to a VM with</p> <pre><code>gcloud compute instances attach disk </code></pre> <p>mounted and formatted the disk, transfered my data, unmounted the disk, detached it from the VM and finally created a PVC following <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="noreferrer">the documentation</a>. I changed the storage class to ReadOnlyMany, the only difference.</p> <p>But still, when I'm trying to scale my deployment to more than one replicas I get an error saying the disk is already attached to another node.</p> <p>So, how can I create a volume that is to be used in ReadOnlyMany and populate the disk with data? Or is there a better approach since no write is required?</p> <p>Thanks in advance</p>
Nikolaos Paschos
<p>We can simplify the whole process a bit. On <strong>GKE</strong> you don't actually need to manually create a <code>PV</code> based on <strong>GCE Persistent Disk</strong>. All you need is to define a proper <code>PVC</code> which can look as follows:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: webserver-content-claim spec: accessModes: [ReadOnlyMany] resources: requests: storage: 5Gi </code></pre> <p>Keep in mind that you cannot define access modes in a <code>PVC</code> in a sense of putting there any specific constraints. What you basically do is simply requesting storage that supports this particular access mode. Note that it's in a form of a list which means you may provide many different access modes that you want your <code>PV</code> to support. I explained it more in detail in <a href="https://stackoverflow.com/a/62545427/11714114">this</a> answer. But the key point here is that <strong>by setting <code>ReadOnlyMany</code> access mode in the <code>PVC</code> definition you only request a volume which supports this type of access but it doesn't mean it doesn't support other modes.</strong></p> <p>If you don't specify <code>readOnly: true</code> in the <code>volumes</code> section of your <code>Pod</code> template as @Ievgen Goichuk suggested in his answer, by default it is mounted in <code>rw</code> mode. Since <strong>GCE Persistent Disk</strong> doesn't support the <code>ReadWriteMany</code> access mode, such a volume cannot be mounted by other <code>Pods</code>, scheduled on different <code>nodes</code>, once it is already mounted in <code>rw</code> mode by one <code>Pod</code>, scheduled on one particular <code>node</code>. Mounting it in <code>rw</code> mode by this <code>Pod</code> is possible because <strong>GCE Persistent Disk</strong> also supports the <code>ReadWriteOnce</code> access mode, which according to <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">the official docs</a> means <em>&quot;the volume can be mounted as read-write by a single node&quot;</em>. That's why <code>Pods</code> scheduled on other nodes are unable to mount it.</p> <p><strong>But let's move on to the actual solution.</strong></p> <p>Once you create the above mentioned <code>PVC</code>, you'll see that the corresponding <code>PV</code> has also been created (<code>kubectl get pv</code>) and its <code>STATUS</code> is <code>Bound</code>.</p> <p>Now we only need to pre-populate it somehow before we start using it in <code>ReadOnlyMany</code> access mode. I will share what works best for me.</p> <p>If you've already uploaded your data on one of your <strong>Compute Engine</strong> instances, forming the <strong>node-pool</strong> of your worker nodes, you can skip the next step.</p> <p>I assume you have <strong>gcloud</strong> installed on your local machine.</p> <pre><code>gcloud compute scp /local/path instance:/remote/path </code></pre> <p>is the correct way to achieve that. @Nikolaos Paschos, if you get the <code>permission denied</code> error, it probably means the <code>/remote/path</code> you defined is some restricted directory that you don't have access to as a non-root user. You'll see this error if you try to copy something from your local filesystem e.g. to the <code>/etc</code> directory on the remote machine. 
The safest way is to copy your files to your home directory, to which you have access:</p> <pre><code>gcloud compute scp --recurse /home/&lt;username&gt;/data/* &lt;instance-name&gt;:~ --zone &lt;zone-name&gt; </code></pre> <p>Use the <code>--recurse</code> option if you want to copy all the files and directories with their content from the source directory.</p> <p>Once our data is uploaded to one of our worker nodes, we need to copy it to our newly created <code>PersistentVolume</code>. It can be done in a few different ways.</p> <p>I decided to use a temporary <code>Pod</code> with a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local</a> volume for it.</p> <p>To make our data, already present on one of the <strong>GKE worker nodes</strong>, also available to our temporary <code>Pod</code>, let's create the following:</p> <p><code>storage-class-local.yaml</code>:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p><code>pv-local.yaml</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: local-pv spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /home/&lt;username&gt; nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - &lt;gke-node-name&gt; </code></pre> <p>and <code>pvc-local.yaml</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi storageClassName: local-storage </code></pre> <p>In the next step let's create our temporary <code>Pod</code> which will enable us to copy our data from the <code>node</code>, mounted into the <code>Pod</code> as a local volume, to the <code>PV</code> based on <strong>GCE Persistent Disk</strong>. Its definition may look as follows:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: myfrontend image: nginx volumeMounts: - mountPath: &quot;/mnt/source&quot; name: local-volume - mountPath: &quot;/mnt/destination&quot; name: gce-pd-volume volumes: - name: local-volume persistentVolumeClaim: claimName: myclaim - name: gce-pd-volume persistentVolumeClaim: claimName: webserver-content-claim </code></pre> <p>When the <code>Pod</code> is up and running, we can attach to it by:</p> <pre><code>kubectl exec -ti mypod -- /bin/bash </code></pre> <p>And copy our files:</p> <pre><code>cp -a /mnt/source/* /mnt/destination/ </code></pre> <p>Now we can delete our temporary pod, local pv and pvc. Our <code>PersistentVolume</code> is already pre-populated with data and can be mounted in <code>ro</code> mode.</p> <p>In order to test it we can run the following <code>Deployment</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - mountPath: &quot;/usr/share/nginx/html&quot; name: webserver-content volumes: - name: webserver-content persistentVolumeClaim: claimName: webserver-content-claim readOnly: true ### don't forget about setting this option </code></pre>
mario
<p>I am getting an error "No nodes are available that match all of the predicates: MatchNodeSelector (7), PodToleratesNodeTaints (1)" for kube-state-metrics. Please guide me how to troubleshoot this issue </p> <p>admin@ip-172-20-58-79:~/kubernetes-prometheus$ kubectl describe po -n kube-system kube-state-metrics-747bcc4d7d-kfn7t</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 3s (x20 over 4m) default-scheduler No nodes are available that match all of the predicates: MatchNodeSelector (7), PodToleratesNodeTaints (1). </code></pre> <p>is this issue related to memory on a node? If yes how do I confirm it? I checked all nodes only one node seems to be above 80%, remaining are between 45% to 70% memory usage </p> <p>Node with 44% memory usage: <a href="https://i.stack.imgur.com/MsdHm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MsdHm.png" alt="Node with 44% memoery usage:"></a></p> <p>Total cluster memory usage:<br> <a href="https://i.stack.imgur.com/5ISa0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ISa0.png" alt="Total cluster memory usage: "></a></p> <p>following screenshot shows kube-state-metrics (0/1 up) : </p> <p><a href="https://i.stack.imgur.com/oje5X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oje5X.png" alt="enter image description here"></a></p> <p>Furthermore, Prometheus showing kubernetes-pods (0/0 up) is it due to kube-state-metrics not working or any other reason? and kubernetes-apiservers (0/1 up) seen in the above screenshot why is not up? How to troubleshoot it?</p> <p><a href="https://i.stack.imgur.com/imlBw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/imlBw.png" alt="enter image description here"></a></p> <p>admin@ip-172-20-58-79:~/kubernetes-prometheus$ sudo tail -f /var/log/kube-apiserver.log | grep error</p> <pre><code>I0110 10:15:37.153827 7 logs.go:41] http: TLS handshake error from 172.20.44.75:60828: remote error: tls: bad certificate I0110 10:15:42.153543 7 logs.go:41] http: TLS handshake error from 172.20.44.75:60854: remote error: tls: bad certificate I0110 10:15:47.153699 7 logs.go:41] http: TLS handshake error from 172.20.44.75:60898: remote error: tls: bad certificate I0110 10:15:52.153788 7 logs.go:41] http: TLS handshake error from 172.20.44.75:60936: remote error: tls: bad certificate I0110 10:15:57.154014 7 logs.go:41] http: TLS handshake error from 172.20.44.75:60992: remote error: tls: bad certificate E0110 10:15:58.929167 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.42.187:58104: write: connection reset by peer E0110 10:15:58.931574 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.42.187:58098: write: connection reset by peer E0110 10:15:58.933864 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.42.187:58088: write: connection reset by peer E0110 10:16:00.842018 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.42.187:58064: write: connection reset by peer E0110 10:16:00.844301 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.42.187:58058: write: connection reset by peer E0110 10:18:17.275590 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 
172.20.58.79:443-&gt;172.20.44.75:37402: write: connection reset by peer E0110 10:18:17.275705 7 runtime.go:66] Observed a panic: &amp;errors.errorString{s:"kill connection/stream"} (kill connection/stream) E0110 10:18:17.276401 7 runtime.go:66] Observed a panic: &amp;errors.errorString{s:"kill connection/stream"} (kill connection/stream) E0110 10:18:17.277808 7 status.go:62] apiserver received an error that is not an metav1.Status: write tcp 172.20.58.79:443-&gt;172.20.44.75:37392: write: connection reset by peer </code></pre> <p>Update after MaggieO's reply:</p> <pre><code>admin@ip-172-20-58-79:~/kubernetes-prometheus/kube-state-metrics-configs$ cat deployment.yaml apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app.kubernetes.io/name: kube-state-metrics app.kubernetes.io/version: v1.8.0 name: kube-state-metrics namespace: kube-system spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: kube-state-metrics template: metadata: labels: app.kubernetes.io/name: kube-state-metrics app.kubernetes.io/version: v1.8.0 spec: containers: - image: quay.io/coreos/kube-state-metrics:v1.8.0 livenessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 5 timeoutSeconds: 5 name: kube-state-metrics ports: - containerPort: 8080 name: http-metrics - containerPort: 8081 name: telemetry readinessProbe: httpGet: path: / port: 8081 initialDelaySeconds: 5 timeoutSeconds: 5 nodeSelector: kubernetes.io/os: linux serviceAccountName: kube-state-metrics </code></pre> <p>Furthermore, I want to add this command to above deployment.yaml but getting indentation error. show please help me where should I add it exactly. </p> <pre><code>command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP </code></pre> <p>Update 2: @MaggieO even after adding commands/args it is showing same error and pod is in pending state : </p> <p>Update deployment.yaml :</p> <pre><code># Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. 
# apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "3" kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"v1.8.0"},"name":"kube-state-metrics","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/name":"kube-state-metrics"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"v1.8.0"}},"spec":{"containers":[{"args":["--kubelet-insecure-tls","--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"],"image":"quay.io/coreos/kube-state-metrics:v1.8.0","imagePullPolicy":"Always","livenessProbe":{"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"kube-state-metrics","ports":[{"containerPort":8080,"name":"http-metrics"},{"containerPort":8081,"name":"telemetry"}],"readinessProbe":{"httpGet":{"path":"/","port":8081},"initialDelaySeconds":5,"timeoutSeconds":5}}],"nodeSelector":{"kubernetes.io/os":"linux"},"serviceAccountName":"kube-state-metrics"}}}} creationTimestamp: 2020-01-10T05:33:13Z generation: 4 labels: app.kubernetes.io/name: kube-state-metrics app.kubernetes.io/version: v1.8.0 name: kube-state-metrics namespace: kube-system resourceVersion: "178851301" selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-state-metrics uid: b20aa645-336a-11ea-9618-0607d7cb72ed spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 2 selector: matchLabels: app.kubernetes.io/name: kube-state-metrics strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: app.kubernetes.io/name: kube-state-metrics app.kubernetes.io/version: v1.8.0 spec: containers: - args: - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP image: quay.io/coreos/kube-state-metrics:v1.8.0 imagePullPolicy: Always livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: kube-state-metrics ports: - containerPort: 8080 name: http-metrics protocol: TCP - containerPort: 8081 name: telemetry protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: / port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst nodeSelector: kubernetes.io/os: linux restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: kube-state-metrics serviceAccountName: kube-state-metrics terminationGracePeriodSeconds: 30 status: conditions: - lastTransitionTime: 2020-01-10T05:33:13Z lastUpdateTime: 2020-01-10T05:33:13Z message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: 2020-01-15T07:24:27Z lastUpdateTime: 2020-01-15T07:29:12Z message: ReplicaSet "kube-state-metrics-7f8c9c6c8d" is progressing. 
reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 4 replicas: 2 unavailableReplicas: 2 updatedReplicas: 1 </code></pre> <p>Update 3: It is not able to get a node as shown in the following screenshot, let me know how to troubleshoot this issue </p> <p><a href="https://i.stack.imgur.com/vy2ID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vy2ID.png" alt="enter image description here"></a></p>
Ashish Karpe
<p>The error on kubernetes-apiservers <code>Get https:// ...: x509: certificate is valid for 100.64.0.1, 127.0.0.1, not 172.20.58.79</code> means that the control-plane nodes are targeted randomly and the apiEndpoint only changes when a node is deleted from the cluster. It is not immediately noticeable, as it only shows up after changes to the nodes in the cluster.</p> <p><strong>Workaround fix:</strong> manually synchronize kube-apiserver.pem between the master nodes and restart the kube-apiserver container.</p> <p>You can also remove the <code>apiserver.*</code> and <code>apiserver-kubelet-client.*</code> certificates and recreate them with these commands:</p> <pre><code>$ kubeadm init phase certs apiserver --config=/etc/kubernetes/kubeadm-config.yaml $ kubeadm init phase certs apiserver-kubelet-client --config=/etc/kubernetes/kubeadm-config.yaml $ systemctl stop kubelet # delete the docker container running the kubelet, then: $ systemctl restart kubelet </code></pre> <p>Similar problems: <a href="https://github.com/rancher/rancher/issues/16151" rel="nofollow noreferrer">x509 certificate</a>, <a href="https://stackoverflow.com/questions/54303469/kubelet-x509-certificate-is-valid-for-10-233-0-1-not-for-ip">kubelet-x509</a>.</p> <p><strong>Then solve the problem with the metrics server.</strong></p> <p>Change the metrics-server-deployment.yaml file and set the following args:</p> <pre><code>command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP </code></pre> <p>The metrics-server is now able to talk to the node (it was failing before because it could not resolve the hostname of the node).</p> <p>You can find more information here: <a href="https://github.com/kubernetes-sigs/metrics-server/issues/131" rel="nofollow noreferrer">metrics-server-issue</a>.</p>
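<p>As a side note, since the scheduler message in your question mentions <code>MatchNodeSelector (7)</code> and <code>PodToleratesNodeTaints (1)</code>, it can also help to compare the pod's <code>nodeSelector</code> with the labels and taints actually present on your nodes. A minimal sketch (plain kubectl, nothing specific to your cluster is assumed):</p> <pre><code># show the labels on every node, to compare with the pod's nodeSelector (kubernetes.io/os: linux)
kubectl get nodes --show-labels

# show the taints on every node, to see which ones the pod would need to tolerate
kubectl describe nodes | grep -A3 Taints

# show the nodeSelector and tolerations the pod was scheduled with
kubectl get pod -n kube-system -l app.kubernetes.io/name=kube-state-metrics -o yaml | grep -A5 -E 'nodeSelector|tolerations'
</code></pre> <p>On older clusters the nodes may only carry the <code>beta.kubernetes.io/os=linux</code> label instead of <code>kubernetes.io/os=linux</code>, which would make the <code>MatchNodeSelector</code> predicate fail exactly as in the error above.</p>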
Malgorzata
<p>I have an EKS cluster where applications are deployed.<br /> I would like to create a directory in this cluster that users could upload files to, like FTP.<br /> Other pods or services within this EKS cluster need access to this directory to modify those files. How do I tackle this in the Kubernetes world? I found <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local</a> volumes, as I need the data to persist.</p>
user63898
<p>Actually, you have some options when it comes to the topic of AWS storage. There is a very good guide explaining <a href="https://www.missioncloud.com/blog/resource-amazon-ebs-vs-efs-vs-s3-picking-the-best-aws-storage-option-for-your-business" rel="nofollow noreferrer">What’s The Difference Between Amazon EBS Vs EFS Vs S3?</a> which says:</p> <blockquote> <p>Amazon EFS, Amazon EBS, and Amazon S3 are AWS’ three different storage types that can be applicable for different types of workload needs. Let’s take a closer look at the key features of each option, as well as the similarities and differences.</p> </blockquote> <p>It will give you the general idea of each one of them and would help you choose the most fitting solution for your needs.</p> <p>After that you can go to the proper guide explaining how to use the chosen type with your EKS cluster:</p> <ul> <li><p><a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="nofollow noreferrer">Amazon EBS CSI driver</a></p> </li> <li><p><a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">Amazon EFS CSI driver</a></p> </li> <li><p><a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3.html" rel="nofollow noreferrer">Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3</a></p> </li> </ul> <p>This should be a proper starting point for you to tackle this particular topic.</p>
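<p>For the shared, persistent upload directory you describe (one directory written by users and read/modified by several pods), EFS is usually the closest match because it supports the <code>ReadWriteMany</code> access mode. A minimal sketch of what that could look like once the EFS CSI driver from the guide above is installed; the file system ID is a placeholder you would replace with your own:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap           # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef  # placeholder: your EFS file system ID
  directoryPerms: &quot;700&quot;
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads
spec:
  accessModes:
    - ReadWriteMany                  # several pods can mount it at the same time
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                   # ignored by EFS, but required by the API
</code></pre> <p>Each pod that needs the files would then mount the <code>uploads</code> claim as a volume. EBS, by contrast, is <code>ReadWriteOnce</code>, so it only fits if a single node accesses the directory.</p>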
Wytrzymały Wiktor
<p>I am reading the tekton <a href="https://tekton.dev/vault/pipelines-v0.14.3/auth/" rel="noreferrer">docs</a> on authentication it explains that two things are needed to do authentication</p> <p>Create a secret docs give example below</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: basic-user-pass annotations: tekton.dev/git-0: https://github.com # Described below type: kubernetes.io/basic-auth stringData: username: &lt;username&gt; password: &lt;password&gt; </code></pre> <p>Add secret object to the service account</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: build-bot secrets: - name: basic-user-pass </code></pre> <p>My mental model of a service account in k8s is that it is a JWT used to access the k8s API server. I don't understand what's point of adding a secret to the ServiceAccount, how and why it is used.</p> <p>Questions:</p> <ul> <li>what does it mean for a service account to have secrets added to it?</li> <li>Why is it useful to add a secret to a service account?</li> <li>Why does tekton require the secret to be added to the service account?</li> <li>who/what looks at the service account secrets list?</li> </ul>
ams
<p>Hope you are enjoying your Kubernetes journey!</p> <p><em><strong>1) what does it mean for a service account to have secrets added to it? / Why is it useful to add a secret to a service account?</strong></em></p> <p>First of all, a little reminder:</p> <p>As you may know, you have to see the serviceAccount as a user for a machine/an application/a script (and not only in Kubernetes); in short, everything that is not human. Like a human, a service account needs credentials (username + password) in order to authenticate to things that require authentication (a Git repository, a Docker registry, an API, etc.).</p> <p>In Kubernetes these credentials, and especially the password, are stored in &quot;secrets&quot;.</p> <p>Now, you should be aware that each namespace in Kubernetes has a native service account named &quot;default&quot; that is associated with <strong>every</strong> running pod, and that service account is linked to a native &quot;default&quot; Kubernetes secret that is also present in all namespaces. This &quot;default&quot; secret contains the ca.crt and a token that let the pod make calls to the internal Kubernetes API server endpoint, among other things.</p> <p>Since the secret that contains the &quot;credentials&quot; is linked to a service account that is mounted into a pod, that pod is then able to authenticate to things that require authentication.</p> <p>For example, if someday you have to use a private Docker registry to pull your images, you can do this in two ways. In each of them you first have to create a secret that will contain your sensitive data (credentials):</p> <ul> <li>The first way consists of adding the name of your secret, the one that contains the registry credentials, directly to the default serviceAccount (which, as a reminder, is mounted by default in the pod) or to a newly created serviceAccount (like Tekton is doing in your case) that is then referenced in the Kubernetes deployment manifest in the field <code>serviceAccountName:</code>.</li> <li>The second way consists of adding the field <code>imagePullSecrets</code> directly to your Kubernetes deployment manifest.</li> </ul> <p>This way, when Kubernetes comes to pull your private Docker image, it will check whether the credentials in the serviceAccount secrets work; if not, it will check the secret you have added in the <code>imagePullSecrets</code> field (or the opposite), and it will be able to connect to the registry and pull the image to run it as a container in a pod!</p> <p><em><strong>2) Who/what looks at the service account secrets list?</strong></em></p> <p>For example, in a brand new namespace:</p> <pre><code>❯ k get sa NAME SECRETS AGE default 1 30m </code></pre> <p>This default serviceAccount is linked to a secret named &quot;default-token-r4vrb&quot;:</p> <pre><code>❯ k get sa default -o yaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: &quot;2022-05-06T08:48:38Z&quot; name: default namespace: so-tests resourceVersion: &quot;1771&quot; uid: c3598708-ad14-4806-af31-5c54d60e29b7 secrets: - name: default-token-r4vrb </code></pre> <p>This default-token secret contains what is needed to authenticate against the Kubernetes API endpoint (certificate + token):</p> <pre><code>❯ k get secret default-token-r4vrb -o yaml apiVersion: v1 data: ca.crt: base64encodedCaCertificate namespace: base64encodedNamespace token: base64encodedToken kind: Secret metadata: annotations: kubernetes.io/service-account.name: default kubernetes.io/service-account.uid: c3598708-ad14-4806-af31-5c54d60e29b7 creationTimestamp: &quot;2022-05-06T08:48:38Z&quot; name: default-token-r4vrb namespace: so-tests resourceVersion: &quot;1770&quot; uid: d342a372-66d1-4c92-b520-23c23babc798 type: kubernetes.io/service-account-token </code></pre> <p><em><strong>3) Why does tekton require the secret to be added to the service account? Who/what looks at the service account secrets list?</strong></em></p> <p>Now I hope you know why. They chose to use a serviceAccount to do this, but they could also have just mounted the secret into the pod directly :)</p> <p>Hope this has helped you. Here are some docs to get more familiar with K8s service accounts: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p> <p>bguess/</p>
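<p>To make the two ways above concrete, here is a minimal sketch for the private-registry case (the registry URL, username and secret/pod names are made up for the example):</p> <pre><code># 1. create the secret holding the credentials
kubectl create secret docker-registry my-registry-creds \
  --docker-server=registry.example.com \
  --docker-username=builder \
  --docker-password=changeme
</code></pre> <pre><code># 2a. first way: attach the secret to a serviceAccount used by the pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
imagePullSecrets:
  - name: my-registry-creds
---
# the pod only references the serviceAccount
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: build-bot
  # 2b. second way: skip the serviceAccount and list the secret here instead
  # imagePullSecrets:
  #   - name: my-registry-creds
  containers:
    - name: app
      image: registry.example.com/app:1.0
</code></pre> <p>In the Tekton case it is the same mechanism: the <code>basic-user-pass</code> secret is listed under the serviceAccount's <code>secrets:</code>, and Tekton's credential initialization step reads the secrets attached to the serviceAccount of the run to set up the git/docker credentials inside the step containers.</p>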
Bguess
<p>I’ve dependency in <code>priority class</code> inside my <code>k8s</code> yaml configs files and I need to install <strong>before</strong> any of my yaml inside the template folder the prio class</p> <pre><code>apiVersion: scheduling.k8s.io/v1beta1 kind: PriorityClass metadata: name: ocritical value: 1000 globalDefault: false </code></pre> <p>After reading the helm docs it seems that I can use the <a href="http://%20https://helm.sh/docs/topics/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">pre-install hook</a></p> <p>I’ve changed my yaml and add anotiations section with pre-hook, and still it doesnt works, any idea what I miss here? </p> <pre><code>apiVersion: scheduling.k8s.io/v1beta1 kind: PriorityClass metadata: name: ocritical annotations: "helm.sh/hook": pre-install value: 1000 globalDefault: false </code></pre> <p>The yaml is located inisde the <code>template</code> folder</p>
Jon lib
<p>You put quotation marks around the <code>helm.sh/hook</code> annotation key, which is not needed; quotation marks are normally only added around the annotation values. You can also add a <code>description</code> field to your configuration file; remember that this field is an arbitrary string, meant to tell users of the cluster when they should use this PriorityClass.</p> <p>Your PriorityClass should look like this:</p> <pre><code>apiVersion: scheduling.k8s.io/v1beta1 kind: PriorityClass metadata: name: ocritical annotations: helm.sh/hook: pre-install,pre-upgrade helm.sh/hook-delete-policy: before-hook-creation value: 1000 globalDefault: false description: "This priority class should be used for XYZ service pods only." </code></pre> <p>More information about the proper configuration of a PriorityClass can be found here: <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">PriorityClass</a>. More information about installing hooks can be found here: <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">helm-hooks</a>.</p> <p>I hope it helps.</p>
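<p>For completeness, a minimal sketch of how the other manifests in the chart would then reference the class once the hook has created it (the pod name and image are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: critical-app              # placeholder name
spec:
  priorityClassName: ocritical    # must exist before this pod is created, hence the pre-install hook
  containers:
    - name: app
      image: nginx:1.17           # placeholder image
</code></pre>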
Malgorzata
<p>I tried to create the POD using command <code> kubectl run --generator=run-pod/v1 mypod--image=myimage:1 -it bash</code> and after successful pod creation it prompts for bash command in side container.</p> <p>Is there anyway to achieve above command using YML file? I tried below YML but it does not go to bash directly after successful creation of POD. I had to manually write command <code>kubectl exec -it POD_NAME bash</code>. But want to avoid using exec command to bash my container. I want my YML to take me to my container after creation of POD. is there anyway to achieve this?</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod namespace: mynamespcae labels: app: mypod spec: containers: - args: - bash name: mypod image: myimage:1 stdin: true stdinOnce: true tty: true </code></pre>
user1591156
<p>This is a community wiki answer. Feel free to expand it.</p> <p>As already mentioned by David, it is not possible to go to bash directly after a Pod is created by only using the YAML syntax. You have to use a proper <code>kubectl</code> command like <code>kubectl exec</code> in order to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">Get a Shell to a Running Container</a>.</p>
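<p>In practice you can get very close to the one-liner behaviour by chaining the commands, for example (a sketch using the pod name and namespace from your manifest):</p> <pre><code># create the pod from the manifest, wait until it is ready, then open a shell in it
kubectl apply -f mypod.yaml
kubectl wait --for=condition=Ready pod/mypod -n mynamespcae --timeout=120s
kubectl exec -it mypod -n mynamespcae -- bash
</code></pre> <p>The <code>stdin</code>, <code>stdinOnce</code> and <code>tty</code> fields in your YAML are still useful: they keep the container's bash process alive so that the later <code>exec</code> (or <code>kubectl attach -it mypod</code>) has something to attach to.</p>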
Wytrzymały Wiktor
<p>i'm working with Minikube to make a full stack K8s application using React as a frontend and ASP NET Core as a backend. Here there are my configuration</p> <p>Deployments and Services</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ServiceAccount metadata: name: web-frontend --- apiVersion: apps/v1 kind: Deployment metadata: name: frontend-deployment labels: app: frontend spec: replicas: 1 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: serviceAccountName: web-frontend containers: - name: frontend image: frontend ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: frontend-svc spec: selector: app: frontend ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: v1 kind: ServiceAccount metadata: name: backend --- apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment labels: app: backend spec: replicas: 1 selector: matchLabels: app: backend template: metadata: labels: app: backend spec: serviceAccountName: backend containers: - name: backend image: backend ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: backend spec: selector: app: backend ports: - protocol: TCP port: 5000 targetPort: 5000 </code></pre> <p>Dockerfiles for the frontend</p> <pre><code> FROM node:alpine as build-image WORKDIR /app COPY package.json ./ COPY package-lock.json ./ RUN npm i COPY . . CMD [&quot;npm&quot;, &quot;run&quot;, &quot;start&quot;] </code></pre> <p>This is instead my Ingress</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend-backend-ingress annotations: # nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: ingressClassName: nginx rules: - http: paths: - path: /?(.*) pathType: Prefix backend: service: name: frontend-svc port: number: 80 - path: /api/?(.*) pathType: Prefix backend: service: name: backend port: number: 5000 </code></pre> <p>However, when I type <code>minikube tunnel</code> to expose the ingress IP locally I can reach the frontend, but when the frontend tries to get a <code>fetch</code> request to <code>/api/something</code> in the browser console I get <code>GET http://localhost/api/patients/ 404 (Not Found)</code> and an error <code>SyntaxError: Unexpected token &lt; in JSON at position 0</code>.</p> <p>Moreover, If I change the Ingress in this way</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend-backend-ingress annotations: # nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: name: frontend-svc port: number: 80 - path: /api/ pathType: Prefix backend: service: name: backend port: number: 5000 </code></pre> <p>Then I can issue <code>curl localhost/api/something</code> and I get the JSON result, but when the frontend tries to contact the backend I get</p> <pre><code>GET http://localhost/api/patients/ 500 (Internal Server Error) SyntaxError: Unexpected end of JSON input at main.358f50ad.js:2:473736 at s (main.358f50ad.js:2:298197) at Generator._invoke (main.358f50ad.js:2:297985) at Generator.next (main.358f50ad.js:2:298626) at Al (main.358f50ad.js:2:439869) at a (main.358f50ad.js:2:440073) </code></pre> <p>This looks strange because if I try the frontend and the backend outside kubernetes everything works fine and from the React application the result from the backend is correctly fetched (of course using the 
<code>proxy</code> inside the <code>package.json</code>)</p>
alex
<p>To contact or make links between apps you can use their Kubernetes-native FQDN (try to ping or telnet it if you want to test the connection). Here is how it works: the default FQDN of any service is:</p> <pre><code> &lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local. </code></pre> <p>In your example above, you should be able to contact your backend service from your frontend one with:</p> <pre><code>backend.YOURNAMESPACENAME.svc.cluster.local:5000 </code></pre> <p>For services in the same namespace, you don't need the FQDN; just the service name is enough:</p> <pre><code>backend:5000 </code></pre> <p>I don't know where exactly you configure the link between the frontend and backend, but you should turn this link into a variable and add the variable definition to the Kubernetes manifest (see the sketch below).</p>
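<p>A minimal sketch of that &quot;variabilized&quot; link, assuming the frontend reads the backend address from an environment variable (the variable name is an example, and note that a value like this is only resolvable for calls made from inside the cluster, not from the user's browser):</p> <pre><code># fragment of the frontend Deployment's pod template
spec:
  containers:
    - name: frontend
      image: frontend
      ports:
        - containerPort: 80
      env:
        - name: BACKEND_URL                    # example variable name
          value: &quot;http://backend:5000&quot;         # in-cluster service name and port
</code></pre>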
Bguess
<p>I have an issue with version 3.1 of Docker Desktop: on enabling Kubernetes it is always stuck at &quot;Starting&quot;. Looking at the logs (AppData/Local/Docker/log.txt) I can see the following entries repeating:</p> <pre><code>\"https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop\": net/http: TLS handshake timeout" [16:15:55.267][GoBackendProcess ][Info ] msg="external: POST /events 200 \"DockerDesktopGo\" \"\"" [16:16:06.268][ApiProxy ][Info ] msg="cannot get lease for master node: Get \"https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop\": net/http: TLS handshake timeout" [16:16:06.268][GoBackendProcess ][Info ] msg="external: POST /events 200 \"DockerDesktopGo\" \"\""</code></pre> <p>I have tried deleting the pki folder inside AppData/Local/Docker, but without any success.</p>
Калоян Ников
<p>This looks like a common issue reported <a href="https://github.com/docker/for-win/issues/9324" rel="nofollow noreferrer">here</a> or <a href="https://github.com/docker/for-mac/issues/5027" rel="nofollow noreferrer">here</a>, which occurs also on <strong>MacOS</strong>. As per <a href="https://github.com/docker/for-mac/issues/5027#issuecomment-718076014" rel="nofollow noreferrer">this comment</a>, apart from deleting <code>pki</code>, you should also remove <code>.kube</code> directory and restart <code>Docker</code>:</p> <blockquote> <p>I have workarounded as:</p> <pre><code>rm -rf ~/Library/Group\ Containers/group.com.docker/pki/ rm -rf ~/.kube </code></pre> <p>And restarting docker</p> </blockquote> <p>As mentioned in <a href="https://github.com/docker/for-win/issues/9324#issuecomment-740631055" rel="nofollow noreferrer">this comment</a>, the respective directory on Windows can be found in:</p> <pre><code>C:\Users\&lt;USER&gt;\AppData\Local\Docker </code></pre> <p>If none of the above helps, as the last resort solution you may try to completely re-install your <strong>Docker Desktop</strong> as there might be some remnants of the previous installation causing the issue. Compare with <a href="https://github.com/docker/for-mac/issues/5027#issuecomment-776260603" rel="nofollow noreferrer">this comment</a>.</p>
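<p>On Windows the equivalent cleanup would look something like this (a sketch; run it from PowerShell with Docker Desktop stopped, and adjust the paths if your installation differs):</p> <pre><code># remove the generated Kubernetes certificates and the local kubeconfig cache
Remove-Item -Recurse -Force &quot;$env:LOCALAPPDATA\Docker\pki&quot;
Remove-Item -Recurse -Force &quot;$env:USERPROFILE\.kube&quot;
# then start Docker Desktop again and re-enable Kubernetes
</code></pre>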
mario
<p>I've noticed some logs in my Zabbix telling me that some random IP from my private subnet is trying to log in as the <code>guest</code> user. I know the IP is <code>10.190.0.1</code>, but there are currently no pods with that IP. Does anyone have any idea how to see which pod had it?</p> <p>The first thing I thought of is looking at GCP Log Exporter, but we're not adding labels to the logs saying which pod they come from. I'm sure I should be able to see it from the terminal level, so any suggestion would be nice.</p> <p>Also, I know it won't be reserved, but I took a look either way:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute addresses list | grep '10.190.0.1' &lt;empty line&gt; </code></pre> <p>and</p> <pre class="lang-sh prettyprint-override"><code>kubectl get all -o wide -A | grep 10.190.0.1 &lt;empty line&gt; </code></pre>
CptDolphin
<p>Hi, you are doing it the right way. I mean:</p> <pre><code>kubectl get pods,svc -o wide </code></pre> <p>will effectively show you the pods and services and their IPs. If the output is empty, though, it is because there is no such IP among the services or pods in your cluster workloads. Two things to check:</p> <ul> <li>maybe the IP has changed</li> <li>maybe these logs come from an IP on the master node? Something from the k8s control plane?</li> </ul> <p>bguess</p>
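<p>To make that check easy to run, here are two commands you could use (nothing cluster-specific is assumed):</p> <pre><code># every pod in every namespace with its current IP and the node it runs on
kubectl get pods -A -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName | grep 10.190.0.1

# node addresses, in case 10.190.0.1 belongs to a node (or its pod-range gateway) rather than a pod
kubectl get nodes -o wide
</code></pre> <p>Since pod IPs are reused after pods are deleted, only something that records history (audit logs, flow logs, or your own log labels) can tell you which pod held the address in the past.</p>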
Bguess
<p>I have a Kubernetes cluster setup (on-premise), that has an NFS share (my-nfs.internal.tld) mounted to <code>/exports/backup</code> on each node to create backups there.</p> <p>Now I'm setting up my logging stack and I wanted to make the data persistent. So I figured I could start by storing the indices on the NFS.</p> <p>Now I found three different ways to achieve this:</p> <h2>NFS-PV</h2> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: logging-data spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce nfs: server: my-nfs.internal.tld path: /path/to/exports/backup/logging-data/ </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: logging-data-pvc spec: accessModes: - ReadWriteOnce storageClassName: logging-data resources: requests: storage: 10Gi </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... spec: ... template: ... spec: ... volumes: - name: logging-data-volume persistentVolumeClaim: claimName: logging-data-pvc </code></pre> <p>This would, of course, require, that my cluster gets access to the NFS (instead of only the nodes as it is currently setup).</p> <h2>hostPath-PV</h2> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: logging-data spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: /exports/backup/logging-data/ </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: logging-data-pvc spec: accessModes: - ReadWriteOnce storageClassName: logging-data resources: requests: storage: 10Gi </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... spec: ... template: ... spec: ... volumes: - name: logging-data-volume persistentVolumeClaim: claimName: logging-data-pvc </code></pre> <h2>hostPath mount in deployment</h2> <p>As the nfs is mounted to all my nodes, I could also just use the host path directly in the deployment without pinning anything.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... spec: ... template: ... spec: ... volumes: - name: logging-data-volume hostPath: path: /exports/backup/logging-data type: DirectoryOrCreate </code></pre> <hr /> <p>So my question is: Is there really any difference between these three? I'm pretty sure all three work. I tested the second and third already. I was not yet able to test the first though (in this specific setup at least). Especially the second and third solutions seem very similar to me. The second makes it easier to re-use deployment files on multiple clusters, I think, as you can use persistent volumes of different types without changing the <code>volumes</code> part of the deployment. But is there any difference beyond that? Performance maybe? Or is one of them deprecated and will be removed soon?</p> <p>I found a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">tutorial</a> mentioning, that the hostPath-PV only works on single-node clusters. But I'm sure it does also works in my case here. 
Maybe the comment was about: &quot;On multi-node clusters the data changes when deployed to different nodes.&quot;</p> <p>From reading to a lot of documentation and How-To's I understand, that the first one is the preferred solution. I would probably also go for it as it is the one easiest <em>replicated</em> to a cloud setup. But I do not really understand why this is preferred to the other two.</p> <p>Thanks in advance for your input on the matter!</p>
Max N.
<p>The <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">NFS</a> is indeed the preferred solution:</p> <blockquote> <p>An <code>nfs</code> volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike <code>emptyDir</code>, which is erased when a Pod is removed, the contents of an <code>nfs</code> volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously.</p> </blockquote> <p>So, an NFS is useful for two reasons:</p> <ul> <li><p>Data is persistent.</p> </li> <li><p>It can be accessed from multiple pods at the same time and the data can be shared between pods.</p> </li> </ul> <p>See the NFS <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">example</a> for more details.</p> <p>While the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a>:</p> <blockquote> <p>A <code>hostPath</code> volume mounts a file or directory from the host node's filesystem into your Pod.</p> <p>Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes</p> <p>The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</p> </blockquote> <p><code>hostPath</code> is not recommended due to several reasons:</p> <ul> <li><p>You don't directly control which node your pods will run on, so you're not guaranteed that the pod will actually be scheduled on the node that has the data volume.</p> </li> <li><p>You expose your cluster to security threats.</p> </li> <li><p>If a node goes down you need the pod to be scheduled on other node where your locally provisioned volume will not be available.</p> </li> </ul> <p>the <code>hostPath</code> would be good if for example you would like to use it for log collector running in a <code>DaemonSet</code>. Other than that, it would be better to use the NFS.</p>
Wytrzymały Wiktor
<p>I have a Azure Kubernetes Cluster with all pods and services in running state. The issue I have is when I do a curl from pod1 to the service url of pod2, it fails intermittently with a Unable to resolve host error. </p> <p>To illustrate, I have 3 pods - pod1, pod2, pod3 When I get into pod1 using </p> <blockquote> <p>kubectl exec -it pod1</p> </blockquote> <p>and I run curl using service url of pod2 :</p> <blockquote> <p>curl <a href="http://api-batchprocessing:3000/result" rel="nofollow noreferrer">http://api-batchprocessing:3000/result</a></p> </blockquote> <p>the command succeeds about every 6/10 times, the remaining 4/10 it fails with error "<code>curl: (6) Could not resolve host:api-batchprocessing</code>". </p> <p>When I tried calling another service running on pod3 using curl, I get the same issue.</p> <p>I have tried below approaches without any success: - delete coredns pods in kube-system - delete and recreate azure kubernetes cluster. above seem to resolve it temporarily, but in few tries I get the same intermittent 'could not resolve host:' issue.</p> <p>Any help/pointers on this issue will be much appreciated.</p>
jack
<p>The problem may lie in the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/" rel="nofollow noreferrer">DNS</a> configuration. It looks like coredns uses the DNS server list in a different way than kube-dns did. If you have to resolve both public and private hostnames, always check that only private DNS servers are on the list, or find the right configuration to route private DNS queries to your private premises.</p> <p>Possible steps to find and get rid of the problem:</p> <ol> <li>Turn coredns logs on.</li> </ol> <p>The only thing you need is this <a href="https://learn.getgrav.org/16/advanced/yaml" rel="nofollow noreferrer">YAML</a> file:</p> <pre><code>apiVersion: v1 data: log.override: | log kind: ConfigMap metadata: labels: addonmanager.kubernetes.io/mode: EnsureExists k8s-app: kube-dns kubernetes.io/cluster-service: "true" name: coredns-custom namespace: kube-system </code></pre> <ol start="2"> <li><p>Use <code>kubectl logs</code> or the VSCode Kubernetes extension to open/review your coredns logs.</p></li> <li><p>Attach to one of your pods and execute some DNS resolution actions, including nslookup and curl. Some of the executions should be loop queries, to put pressure on the DNS and networking components (a small sketch is at the end of this answer).</p></li> <li><p>Review the coredns logs.</p></li> </ol> <p>You will see that curl first tries to resolve both A and AAAA entries for all the search domains defined in your pods. In other words, to resolve "api-batchprocessing", curl is making DNS queries against coredns. However, if coredns is responding properly with an "NXDOMAIN" (does not exist) or "NOERROR" (record found), the problem is elsewhere.</p> <ol start="5"> <li>Review the DNS servers configured at the VNET level.</li> </ol> <p>A possible explanation for this random DNS resolution error is that, under high load, coredns uses all the DNS servers defined at the VNET level. Some queries were probably going to the on-prem servers; others to Google. Google doesn't know how to resolve your private hostnames.</p> <ol start="6"> <li>Remove the Google DNS servers from the VNET, restart the cluster and run checks.</li> </ol> <p>Here you can find more information: <a href="https://www.sesispla.net/en/azure-aks-1-11-random-dns-resolution-error-after-cluster-upgrade/" rel="nofollow noreferrer">random-dns-error</a>.</p> <p>I hope it helps.</p>
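<p>For step 3, a small loop like this, run from a shell inside one of your application pods, is usually enough to make an intermittent resolver problem visible (if nslookup is not available in the image, a curl against the service URL from your question works as well):</p> <pre><code># fire a burst of lookups and report the failures
for i in $(seq 1 50); do
  nslookup api-batchprocessing &gt; /dev/null 2&gt;&amp;1 || echo &quot;lookup $i failed&quot;
done
</code></pre>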
Malgorzata
<p>I have deployed a statefulset in AKS - My goal is to load balance traffic to my statefulset.</p> <p>From my understanding I can define a LoadBalancer Service that can route traffic based on Selectors, something like this.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: type: LoadBalancer ports: - port: 80 name: web selector: app: nginx </code></pre> <p>However I don't want to necessarily go down the LoadBalance route and I would prefer Ingress doing this work for me, My question is can any of the ingress controller support routing rules which can do Path based routing to endpoints based on selectors? Instead of routing to another service.</p> <p><strong>Update</strong> To elaborate more on the scenario - Each pod in my statefulset is a stateless node doing data processing of a HTTP feed. I want my ingress service to be able to load balance traffic across these statefulset pods ( honoring keep-alives etc), however given the nature of statefulsets in k8s they are currently exposed through a headless service. I am not sure if a headless service can load balance traffic to my statefulsets?</p> <p><strong>Update 2</strong> Quick search reveals <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">headless</a> service does <em><strong>not</strong></em> loadbalance</p> <p><code>Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed &quot;headless&quot; Services, by explicitly specifying &quot;None&quot; for the cluster IP (.spec.clusterIP).</code></p>
Riddle
<p>+1 to <a href="https://stackoverflow.com/users/5525824/harsh-manvar">Harsh Manvar's</a> answer but let me add also my 3 cents.</p> <blockquote> <p>My question is can any of the ingress controller support routing rules which can do Path based routing to endpoints based on selectors? Instead of routing to another service.</p> </blockquote> <p>To the best of my knowledge, the answer to your question is <strong>no, it can't</strong> as it doesn't even depend on a particular ingress controller implementation. Note that various ingress controllers, no matter how different they may be when it comes to implementation, must conform to the general specification of the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress resource</a>, described in the official kubernetes documentation. You don't have different kinds of ingresses, depending on what controller is used.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer"><code>Service</code></a> work on a different layer of abstraction. While <code>Service</code> exposes a set of pods using a selector e.g.:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app: MyApp 👈 </code></pre> <p>path-based routing performed by <code>Ingress</code> is always done between <code>Services</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /testpath pathType: Prefix backend: service: name: test 👈 port: number: 80 </code></pre>
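<p>In practice that means you can simply put a second, non-headless Service in front of the same StatefulSet pods and point the Ingress at it; the headless Service can stay in place for the StatefulSet's <code>serviceName</code>. A minimal sketch (names are examples):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-lb              # regular ClusterIP service, used only by the Ingress
spec:
  selector:
    app: nginx                # same selector as the StatefulSet pods
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-lb   # route to the ClusterIP service, not the headless one
                port:
                  number: 80
</code></pre> <p>kube-proxy then load-balances across the StatefulSet pods behind <code>nginx-lb</code>, and most ingress controllers actually resolve the Service's endpoints themselves and balance across the pods directly (honoring keep-alives according to their own upstream settings).</p>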
mario
<p>I ran docker system prune to delete unused images, but it deleted everything. Is there a way to undo this? Is there any way to fix this?</p>
Chenman
<p>Hello, sorry for my answer but it's a nope...</p> <p>When you use the prune command you are prompted to confirm whether you are sure or not; sadly, that is the last warning before the drama :D</p> <p>Hopefully you still have the Dockerfiles to rebuild your own images; if they came from the internet, just pull them again from where you got them :D (try your browser history if you do not remember).</p> <p>Keep smiling! :) <a href="https://docs.docker.com/engine/reference/commandline/system_prune/" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/commandline/system_prune/</a></p>
Bguess
<p>Have the problem with my HAProxy Ingress on Kubernetes. It works well but stopped to implement any ingress changes. I have tried to restart the pod with ingress but receive the next error. The replace of via default configuration is the same result. What can be wrong? Maybe somehow force replace with all details ingress?</p> <p>Log trace :</p> <pre><code>&gt; 2020/05/05 12:36:01 Running on Kubernetes version: v1.16.6 linux/amd64 &gt; [NOTICE] 125/123601 (22) : New worker #1 (23) forked E0505 &gt; 12:36:02.004135 8 runtime.go:73] Observed a panic: "invalid &gt; memory address or nil pointer dereference" (runtime error: invalid &gt; memory address or nil pointer dereference) goroutine 31 [running]: &gt; k8s.io/apimachinery/pkg/util/runtime.logPanic(0x125db80, 0x1e600b0) &gt; /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:69 &gt; +0x7b k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51 &gt; +0x82 panic(0x125db80, 0x1e600b0) /usr/local/go/src/runtime/panic.go:969 +0x166 &gt; github.com/haproxytech/kubernetes-ingress/controller.ConvertIngressRules(0xc0005f6f00, &gt; 0x2, 0x2, 0x0) /src/controller/types.go:152 +0x302 &gt; github.com/haproxytech/kubernetes-ingress/controller.(*K8s).EventsIngresses.func1(0x13a62a0, &gt; 0xc0000ec450) /src/controller/kubernetes.go:275 +0xc8 &gt; k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...) &gt; /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:195 &gt; k8s.io/client-go/tools/cache.newInformer.func1(0x1272920, &gt; 0xc0003b8060, 0x1, 0xc0003b8060) &gt; /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:367 &gt; +0x18a k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000276630, 0xc0003985d0, 0x0, 0x0, 0x0, 0x0) &gt; /go/pkg/mod/k8s.io/[email protected]/tools/cache/delta_fifo.go:436 &gt; +0x235 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0003f3480) &gt; /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:150 &gt; +0x40 k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000017f80) &gt; /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 &gt; +0x5f k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000017f80, 0x3b9aca00, 0x0, 0xc0004ff001, 0xc00007e0c0) &gt; /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 &gt; +0xf8 k8s.io/apimachinery/pkg/util/wait.Until(...) /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 &gt; k8s.io/client-go/tools/cache.(*controller).Run(0xc0003f3480, &gt; 0xc00007e0c0) &gt; /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:124 &gt; +0x2c1 created by github.com/haproxytech/kubernetes-ingress/controller.(*K8s).EventsIngresses &gt; /src/controller/kubernetes.go:334 +0x291 2020/05/05 12:36:07 &gt; Confiugring default_backend ingress-default-backend from ingress &gt; DefaultService 2020/05/05 12:36:07 HAProxy reloaded </code></pre>
Manish Iarhovich
<p>Usually this error occurs when there is a version mismatch between the kubelet and the kube-apiserver. You need to make sure that the kubelet version is equal to or lower than the kube-apiserver version. If it is not, you'll need to perform an upgrade to make it work.</p>
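<p>A quick way to check those versions (nothing specific to your setup is assumed):</p> <pre><code># kubelet version reported by every node
kubectl get nodes -o wide

# client and API server versions
kubectl version --short
</code></pre> <p>It is also worth comparing the HAProxy ingress controller image version against the Kubernetes version shown in its first log line (v1.16.6 here), since the controller itself has to support the API version it is talking to.</p>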
Wytrzymały Wiktor
<p>I have a K8S cluster running in Azure AKS service.</p> <p>I want to enforce <strong>MustRunAsNonRoot</strong> policy. How to do it?</p> <p>The following policy is created:</p> <pre><code>apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restrict-root spec: privileged: false allowPrivilegeEscalation: false runAsUser: rule: MustRunAsNonRoot seLinux: rule: RunAsAny fsGroup: rule: RunAsAny supplementalGroups: rule: RunAsAny volumes: - '*' </code></pre> <p>It is deployed in the cluster:</p> <pre><code>$ kubectl get psp NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES restrict-root false RunAsAny MustRunAsNonRoot RunAsAny RunAsAny false * </code></pre> <p>Admission controller is running in the cluster:</p> <pre><code>$ kubectl get pods -n gatekeeper-system NAME READY STATUS RESTARTS AGE gatekeeper-audit-7b4bc6f977-lvvfl 1/1 Running 0 32d gatekeeper-controller-5948ddcd54-5mgsm 1/1 Running 0 32d gatekeeper-controller-5948ddcd54-b59wg 1/1 Running 0 32d </code></pre> <p>Anyway it is possible to run a simple pod running under root:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: busybox args: [&quot;sleep&quot;, &quot;10000&quot;] securityContext: runAsUser: 0 </code></pre> <p>Pod is running:</p> <pre><code>$ kubectl describe po mypod Name: mypod Namespace: default Priority: 0 Node: aks-default-31327534-vmss000001/10.240.0.5 Start Time: Mon, 08 Feb 2021 23:10:46 +0100 Labels: &lt;none&gt; Annotations: &lt;none&gt; Status: Running </code></pre> <p>Why <strong>MustRunAsNonRoot</strong> is not applied? How to enforce it?</p> <p>EDIT: It looks like AKS engine does not support PodSecurityPolicy (<a href="https://learn.microsoft.com/en-us/azure/aks/faq#what-kubernetes-admission-controllers-does-aks-support-can-admission-controllers-be-added-or-removed" rel="noreferrer">list of supported policies</a>). Then the question is still the same: how to enforce MustRunAsNonRoot rule on workloads?</p>
Michael Chudinov
<p>You shouldn't use <code>PodSecurityPolicy</code> on an <strong>Azure AKS cluster</strong> as it has been set for deprecation as of May 31st, 2021 in favor of <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-on-azure-policy" rel="nofollow noreferrer">Azure Policy for AKS</a>. Check <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-policies" rel="nofollow noreferrer">the official docs</a> for further details:</p> <blockquote> <p>Warning</p> <p><strong>The feature described in this document, pod security policy (preview), is set for deprecation and will no longer be available after May 31st, 2021</strong> in favor of <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-on-azure-policy" rel="nofollow noreferrer">Azure Policy for AKS</a>. The deprecation date has been extended from the previous date of October 15th, 2020.</p> </blockquote> <p>So currently you should rather use <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-on-azure-policy" rel="nofollow noreferrer">Azure Policy for AKS</a>. Among its built-in policies, grouped into initiatives (an initiative in Azure Policy is a collection of policy definitions tailored towards achieving a singular overarching goal; Azure Policy for Kubernetes offers two built-in initiatives which secure pods, baseline and restricted), you can find a policy whose goal is to <a href="https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4" rel="nofollow noreferrer">disallow running of privileged containers</a> on your <strong>AKS cluster</strong>.</p> <p>As to <code>PodSecurityPolicy</code>, for the time being it should still work. Please check <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-policies#create-a-custom-pod-security-policy" rel="nofollow noreferrer">here</a> if you didn't forget about anything, e.g. make sure you <a href="https://learn.microsoft.com/en-us/azure/aks/use-pod-security-policies#allow-user-account-to-use-the-custom-pod-security-policy" rel="nofollow noreferrer">set up the corresponding <code>ClusterRole</code> and <code>ClusterRoleBinding</code></a> to allow the policy to be used.</p>
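<p>If you go the Azure Policy route, the add-on has to be enabled on the cluster before the built-in initiatives can be assigned. A sketch of the usual steps (cluster and resource group names are placeholders):</p> <pre><code># enable the Azure Policy add-on on the existing AKS cluster
az aks enable-addons --addons azure-policy --name myAKSCluster --resource-group myResourceGroup

# verify the add-on pods are running
kubectl get pods -n kube-system | grep azure-policy
kubectl get pods -n gatekeeper-system
</code></pre> <p>After that, assigning the baseline or restricted pod security initiative (or an individual policy such as the one linked above that disallows privileged containers) is what actually enforces the restrictions; the deny happens at admission time via Gatekeeper.</p>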
mario
<p>I am deploying my application in a read only kubernetes cluster, so I am using volumes and volumeMounts for tmp folder for apache server. Upon start of apache server within read only pod, I am getting this error:</p> <pre><code>chown: changing ownership of '/var/lock/apache2.fm2cgWmnxk': Operation not permitted </code></pre> <p>I came across this issue <a href="https://stackoverflow.com/questions/43544370/kubernetes-how-to-set-volumemount-user-group-and-file-permissions">Kubernetes: how to set VolumeMount user group and file permissions</a> and tried using SecurityContext.fsGroup but still getting same issue.</p> <p>Here is my deployment.yaml for reference:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: &amp;DeploymentName abc spec: replicas: 1 selector: matchLabels: &amp;appName app: *DeploymentName template: metadata: name: main labels: &lt;&lt;: *appName spec: securityContext: fsGroup: 2000 runAsNonRoot: true runAsUser: 1000 runAsGroup: 3000 fsGroupChangePolicy: &quot;OnRootMismatch&quot; volumes: - name: var-lock emptyDir: {} containers: - name: *DeploymentName image: abc-image ports: - containerPort: 80 volumeMounts: - mountPath: /var/lock name: var-lock readinessProbe: tcpSocket: port: 80 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 80 initialDelaySeconds: 300 periodSeconds: 180 imagePullPolicy: Always tty: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name envFrom: - configMapRef: name: *DeploymentName resources: limits: cpu: 1 memory: 2Gi requests: cpu: 1 memory: 2Gi --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: &amp;hpaName abc spec: maxReplicas: 1 minReplicas: 1 scaleTargetRef: apiVersion: extensions/v1beta1 kind: Deployment name: *hpaName targetCPUUtilizationPercentage: 60 </code></pre> <p>Any help is appreciated.</p>
T Ravi Theja
<p>Hello, hope you are envoying your Kubernetes journey !</p> <p>I wanted to try this on my kind (Kubernetes in docker) cluster locally. So this is what I've done:</p> <p>First I have setup a kind cluster locally with this configuration (info here: <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>):</p> <pre><code>kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 name: so-cluster-1 nodes: - role: control-plane image: kindest/node:v1.23.5 - role: control-plane image: kindest/node:v1.23.5 - role: control-plane image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 </code></pre> <p>after this I created my cluster with this command:</p> <pre><code>kind create cluster --config=config.yaml </code></pre> <p>Next, i have created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: so-tests </code></pre> <p>From there, i got my environment setted up, so I used your deployment config and replaced the deploymentName, appName and hpaName occurences by &quot;so-71823613&quot; (stack-overflow and you question id), but for the test, I decided to not use the hpa config.</p> <p>next, since you did not provide the image you are using for apache, I used the dockerhub image httpd:2.4.53 (<a href="https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore" rel="nofollow noreferrer">https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore</a>)</p> <p>again, since i dont have your configmap config, i decided to comment out the part where you get env variables from the configmap.</p> <p>since the default user in httpd image is &quot;www-data&quot;, I first deployed the pod without any securityContext just to get the id of that user:</p> <pre><code>❯ k exec -it pod/so-71823613-555d8b454-z5ks5 -- id www-data uid=33(www-data) gid=33(www-data) groups=33(www-data) </code></pre> <p>Once that i knew what was the id of the www-data user, I modified the securityContext. I kept the rest of the configuration (probes, volume etc.) as you configured them, here is the manifest now:</p> <p>In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 33(www-data). The runAsGroup field specifies the primary group ID of 33 for all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers will be root(0). Any files created will also be owned by user 33 and group 33 when runAsGroup is specified. Since fsGroup field is specified, all processes of the container are also part of the supplementary group ID 33. The owner for volume &quot;/var/lock&quot; and any files created in that volume will be Group ID 33. ... fsGroupChangePolicy - fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a Pod. This field only applies to volume types that support fsGroup controlled ownership and permissions. This field has two possible values:</p> <p>OnRootMismatch: Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. 
This could help shorten the time it takes to change ownership and permission of a volume. Always: Always change permission and ownership of the volume when volume is mounted.</p> <p>( description from here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>)</p> <p>So, once i deployed my configuration using:</p> <pre><code>kubectl apply -f deployment.yaml deployment.apps/so-71823613 created </code></pre> <p>I got this error:</p> <pre><code> k logs -f pod/so-71823613-7c5b65df4d-6scg5 AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message (13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down AH00015: Unable to open logs </code></pre> <p>So, first to fix the first line error, I reconnected into the pod to fetch the httpd.conf file with:</p> <pre><code>k exec -it pod/so-71823613-555d8b454-fgjcs -- cat /usr/local/apache2/conf/httpd.conf &gt; httpd.conf </code></pre> <p>once i get the http.conf file, I modified it, by adding:</p> <pre><code>ServerName localhost:8080 </code></pre> <p>(cf <a href="https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html" rel="nofollow noreferrer">https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html</a>)</p> <p>Then I put the new httpd.conf file into a configmap named &quot;httpconf&quot;, and modified the deployment to mount the configmap into the right place, to replace the first one (here -&gt; &quot;/usr/local/apache2/conf/httpd.conf&quot;) with:</p> <pre><code> ... volumeMounts: ... - name: &quot;config&quot; mountPath: &quot;/usr/local/apache2/conf/httpd.conf&quot; subPath: &quot;httpd.conf&quot; volumes: ... - name: &quot;config&quot; configMap: name: &quot;httpconf&quot; ... ❯ kubectl apply -f configmap.yaml -f deployment.yaml configmap/httpconf created deployment.apps/so-71823613 created </code></pre> <p>Then i got this error remaining:</p> <pre><code>(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 </code></pre> <p>So, to fix it, I changed to listening port of apache directly into the configmap http.conf file (according to this: <a href="https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when">https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when</a>)</p> <pre><code>Listen 8080 ServerName localhost:8080 </code></pre> <p>since I am now exposing the 8080 port, I also modified the probes and the port in consequence:</p> <pre><code>... ports: - containerPort: 8080 readinessProbe: tcpSocket: port: 8080 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 8080 ... 
</code></pre> <p>After reapplying my config I got this new error:</p> <pre><code>❯ k logs -f pod/so-71823613-7dd7bdb66d-qtf9t [Wed Apr 20 05:50:57.863971 2022] [core:error] [pid 1:tid 139771999915328] (13)Permission denied: AH00099: could not create /usr/local/apache2/logs/httpd.pid.KYUI5g [Wed Apr 20 05:50:57.864061 2022] [core:error] [pid 1:tid 139771999915328] AH00100: httpd: could not log pid to file /usr/local/apache2/logs/httpd.pid </code></pre> <p>To fix that issue, i used your workaround with the emptyDir and added this:</p> <pre><code> volumeMounts: ... - mountPath: /usr/local/apache2/logs/ name: apache2-logs volumes: ... - name: apache2-logs emptyDir: {} </code></pre> <p>here are the manifests:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: so-71823613 spec: replicas: 1 selector: matchLabels: app: so-71823613 template: metadata: name: main labels: app: so-71823613 spec: securityContext: fsGroup: 33 runAsNonRoot: true runAsUser: 33 runAsGroup: 33 fsGroupChangePolicy: &quot;OnRootMismatch&quot; containers: - name: so-71823613 image: httpd:2.4.53 ports: - containerPort: 8080 readinessProbe: tcpSocket: port: 8080 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 8080 initialDelaySeconds: 300 periodSeconds: 180 imagePullPolicy: Always tty: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # envFrom: # - configMapRef: # name: so-71823613 resources: limits: cpu: 1 memory: 2Gi requests: cpu: 1 memory: 2Gi volumeMounts: - mountPath: /var/lock name: var-lock - mountPath: /usr/local/apache2/logs/ name: apache2-logs - name: &quot;config&quot; mountPath: &quot;/usr/local/apache2/conf/httpd.conf&quot; subPath: &quot;httpd.conf&quot; volumes: - name: var-lock emptyDir: {} - name: apache2-logs emptyDir: {} - name: &quot;config&quot; configMap: name: &quot;httpconf&quot; --- apiVersion: v1 kind: ConfigMap metadata: name: httpconf data: httpd.conf: | ServerRoot &quot;/usr/local/apache2&quot; Listen 8080 LoadModule mpm_event_module modules/mod_mpm_event.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule filter_module modules/mod_filter.so LoadModule mime_module modules/mod_mime.so LoadModule log_config_module modules/mod_log_config.so LoadModule env_module modules/mod_env.so LoadModule headers_module modules/mod_headers.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule unixd_module modules/mod_unixd.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so &lt;IfModule !mpm_prefork_module&gt; &lt;/IfModule&gt; &lt;IfModule mpm_prefork_module&gt; &lt;/IfModule&gt; LoadModule dir_module modules/mod_dir.so LoadModule alias_module modules/mod_alias.so &lt;IfModule unixd_module&gt; User www-data Group www-data &lt;/IfModule&gt; ServerAdmin [email protected] ServerName localhost:8080 &lt;Directory /&gt; AllowOverride none Require all denied &lt;/Directory&gt; DocumentRoot &quot;/usr/local/apache2/htdocs&quot; &lt;Directory &quot;/usr/local/apache2/htdocs&quot;&gt; 
Options Indexes FollowSymLinks AllowOverride None Require all granted &lt;/Directory&gt; &lt;IfModule dir_module&gt; DirectoryIndex index.html &lt;/IfModule&gt; &lt;Files &quot;.ht*&quot;&gt; Require all denied &lt;/Files&gt; ErrorLog /proc/self/fd/2 LogLevel warn &lt;IfModule log_config_module&gt; LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot;&quot; combined LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b&quot; common &lt;IfModule logio_module&gt; LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot; %I %O&quot; combinedio &lt;/IfModule&gt; CustomLog /proc/self/fd/1 common &lt;/IfModule&gt; &lt;IfModule alias_module&gt; ScriptAlias /cgi-bin/ &quot;/usr/local/apache2/cgi-bin/&quot; &lt;/IfModule&gt; &lt;IfModule cgid_module&gt; &lt;/IfModule&gt; &lt;Directory &quot;/usr/local/apache2/cgi-bin&quot;&gt; AllowOverride None Options None Require all granted &lt;/Directory&gt; &lt;IfModule headers_module&gt; RequestHeader unset Proxy early &lt;/IfModule&gt; &lt;IfModule mime_module&gt; TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz &lt;/IfModule&gt; &lt;IfModule proxy_html_module&gt; Include conf/extra/proxy-html.conf &lt;/IfModule&gt; &lt;IfModule ssl_module&gt; SSLRandomSeed startup builtin SSLRandomSeed connect builtin &lt;/IfModule&gt; # --- # apiVersion: autoscaling/v1 # kind: HorizontalPodAutoscaler # metadata: # name: so-71823613 # spec: # maxReplicas: 1 # minReplicas: 1 # scaleTargetRef: # apiVersion: extensions/v1beta1 # kind: Deployment # name: so-71823613 # targetCPUUtilizationPercentage: 60 </code></pre> <p>after waiting the initialDelaySeconds of the probes, I finally get my pod up and running correctly:</p> <pre><code>Every 1.0s: kubectl get po,svc,cm -o wide DESKTOP-6PBJAOK: Wed Apr 20 03:15:02 2022 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/so-71823613-897768549-mcmb4 1/1 Running 0 4m13s 10.244.4.4 so-cluster-1-worker3 &lt;none&gt; &lt;none&gt; NAME DATA AGE configmap/httpconf 1 4m14s </code></pre> <p>Bonus:</p> <p>I then decided to expose the http deployment with a service, here is the manifest (obtained from &quot; k expose deployment so-71823613 --port 80 --target-port 8080 --dry-run=client -o yaml&quot;:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: so-71823613 spec: ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: so-71823613 </code></pre> <p>as you can see, I port-forwarded the 8080 pod port to 80 in the service (you can also use an ingress controller to expose the service outside of the cluster )</p> <p>tried this on my machine:</p> <pre><code>❯ k port-forward service/so-71823613 8080:80 Forwarding from 127.0.0.1:8080 -&gt; 8080 Forwarding from [::1]:8080 -&gt; 8080 Handling connection for 8080 </code></pre> <p>and here is the result:</p> <p><a href="https://i.stack.imgur.com/HWIXE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWIXE.png" alt="enter image description here" /></a></p> <p>TADA !</p> <p>To conclude, I tried to reproduce the best i could with your provided information (It was kinda cool), so if this does not work for you, it means that i need more information. Thank you for your lecture. bguess.</p>
Bguess
<p>I have progressDeadlineSeconds set to 120 seconds.</p> <p>I deploy and run <code>kubectl rollout status deployment mydeployment</code>.</p> <p>The deployment failed with <code>0 of 1 updated replicas available - CrashLoopBackOff</code>.</p> <p>But kubectl still hangs forever with the message: <code>Waiting for deployment "mydeployment" rollout to finish: 0 of 1 updated replicas are available...</code></p> <p>Why is this happening? progressDeadlineSeconds is supposed to force it to fail and cause <code>kubectl rollout status deployment</code> to exit with a non-zero return code, right?</p>
red888
<p>You are correct, <code>kubectl rollout status</code> returns a non-zero exit code if the Deployment has exceeded the progression deadline. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="noreferrer">Progress Deadline Seconds</a>:</p> <blockquote> <p><code>.spec.progressDeadlineSeconds</code> is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing - surfaced as a condition with <code>Type=Progressing</code>, <code>Status=False</code>. and <code>Reason=ProgressDeadlineExceeded</code> in the status of the resource. The Deployment controller will keep retrying the Deployment. This defaults to 600. In the future, once automatic rollback will be implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.</p> <p>If specified, this field needs to be greater than <code>.spec.minReadySeconds</code>.</p> </blockquote> <p>Which brings us to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#min-ready-seconds" rel="noreferrer">Min Ready Seconds</a>:</p> <blockquote> <p><code>.spec.minReadySeconds</code> is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready).</p> </blockquote> <p>Without your exact deployment configs it is hard to tell where the problem is, but there are a few things to check out:</p> <ul> <li><p>Try setting both <code>progressDeadlineSeconds</code> and <code>minReadySeconds</code>, remembering that the latter must have the smaller value.</p></li> <li><p><code>progressDeadlineSeconds</code> might not be respected when used with <code>Replicas: 1</code>. Check your <code>maxUnavailable</code> param in order to see if it allows full deployment unavailability. </p></li> <li><p>As a workaround you can specify a timeout period for your <code>kubectl rollout status</code> command. For example: <code>kubectl rollout status deployment mydeployment --timeout=120s</code></p></li> <li><p>If you don't want to wait for the rollout to finish then you can use <code>--watch=false</code>. You will then have to check the status manually by running the <code>kubectl describe deployment</code> and <code>kubectl get deployment</code> commands, which might not be ideal.</p></li> </ul> <p>Please let me know if that helps.</p>
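<p>For illustration, here is a minimal sketch (names and values are placeholders) showing where both fields live in the Deployment spec, plus a client-side timeout for the status command:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  progressDeadlineSeconds: 120   # report ProgressDeadlineExceeded after 2 minutes
  minReadySeconds: 10            # must be smaller than progressDeadlineSeconds
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:1.0   # placeholder image
</code></pre> <pre><code>kubectl rollout status deployment mydeployment --timeout=150s
</code></pre> <p>Even if the server-side deadline is not surfaced as you expect, the <code>--timeout</code> flag guarantees the command itself exits with a non-zero code after the given period.</p>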
Wytrzymały Wiktor
<p>I am trying to set up using my GoDaddy certificate as a listener for Kafka. Using this article <a href="https://strimzi.io/docs/operators/in-development/using.html#kafka-listener-certificates-str" rel="nofollow noreferrer">https://strimzi.io/docs/operators/in-development/using.html#kafka-listener-certificates-str</a>.</p> <pre><code>apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: es-kafka-cluster spec: kafka: version: 2.7.0 replicas: 2 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: es-tls-certificate certificate: certificate.crt key: certificate.key authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 log.message.format.version: &quot;2.7&quot; storage: type: ephemeral zookeeper: replicas: 2 storage: type: ephemeral entityOperator: userOperator: {} topicOperator: {} </code></pre> <p>I am sending a check for a certificate openssl s_client -connect MY_IP:9094 -servername MY_IP</p> <p>Returns my correct certificate. But when I try to execute the command I get an error</p> <pre><code> kafkacat -C -b MY_IP:9094 -X security.protocol=ssl -t schedules % ERROR: Failed to query metadata for topic schedules: Local: Timed out </code></pre> <p>Log from Kafka</p> <pre><code>(SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-0-ListenerName(EXTERNAL-9094)-SSL-12 </code></pre>
Vitalii Fedorenko
<pre><code> authentication: type: tls </code></pre> <p>The <code>authentication: type: tls</code> setting on the external listener means your clients need to authenticate with mTLS, so kafkacat has to present a client certificate and key as well. Try the following command and post an update, please:</p> <pre><code>kafkacat -b MY_IP:9094 \
  -X security.protocol=SSL \
  -X ssl.key.location=private_key.pem \
  -X ssl.key.password=my_key_password \
  -X ssl.certificate.location=signed_cert.pem.txt \
  -X ssl.ca.location=ca_cert.pem -L
</code></pre> <p>Or remove the <code>authentication</code> clause from the listener and only add the CA location to kafkacat:</p> <pre><code>-X ssl.ca.location=ca_cert.pem
</code></pre>
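<p>Since this is a Strimzi cluster with <code>authorization: type: simple</code>, the usual way to get a client certificate is to create a <code>KafkaUser</code> with TLS authentication and let the User Operator generate the secret. The sketch below is an assumption based on the manifests in the question (user name, topic and ACL syntax may need adjusting for your Strimzi version):</p> <pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: kafkacat-client
  labels:
    strimzi.io/cluster: es-kafka-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: schedules
        operation: All
</code></pre> <p>The operator then creates a secret named after the user; you can extract the client cert/key and the cluster CA and feed them to kafkacat (secret key names may differ slightly between Strimzi versions):</p> <pre><code>kubectl get secret kafkacat-client -o jsonpath='{.data.user\.crt}' | base64 -d &gt; user.crt
kubectl get secret kafkacat-client -o jsonpath='{.data.user\.key}' | base64 -d &gt; user.key
kubectl get secret es-kafka-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d &gt; ca.crt

kafkacat -C -b MY_IP:9094 -t schedules \
  -X security.protocol=SSL \
  -X ssl.ca.location=ca.crt \
  -X ssl.certificate.location=user.crt \
  -X ssl.key.location=user.key
</code></pre>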
Ran Lupovich
<p>I'm trying to use the ingress of kubernetes on an AWS cluster built with Kops. <br/> I'm following this documentation: <a href="https://github.com/kubernetes/kops/tree/master/addons/kube-ingress-aws-controller" rel="nofollow noreferrer">https://github.com/kubernetes/kops/tree/master/addons/kube-ingress-aws-controller</a>. <br/> As you can see, I'm using the <strong>kube-ingress-aws-controller</strong> with the <strong>skipper ingress</strong>.<br/></p> <p>For the <strong>kube-ingress-aws-controller</strong> I have the following script:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: kube-ingress-aws-controller namespace: kube-system labels: application: kube-ingress-aws-controller component: ingress spec: replicas: 1 selector: matchLabels: application: kube-ingress-aws-controller component: ingress template: metadata: labels: application: kube-ingress-aws-controller component: ingress spec: serviceAccountName: kube-ingress-aws containers: - name: controller image: registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest env: - name: AWS_REGION value: eu-central-1 </code></pre> <p>For the <strong>skipper ingress</strong>, the script is this one:</p> <pre><code>apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: skipper-ingress namespace: kube-system labels: component: ingress spec: selector: matchLabels: component: ingress updateStrategy: type: RollingUpdate template: metadata: name: skipper-ingress labels: component: ingress application: skipper spec: hostNetwork: true serviceAccountName: skipper-ingress containers: - name: skipper-ingress image: registry.opensource.zalan.do/pathfinder/skipper:latest ports: - name: ingress-port containerPort: 9999 hostPort: 9999 - name: metrics-port containerPort: 9911 args: - "skipper" - "-kubernetes" - "-kubernetes-in-cluster" - "-address=:9999" - "-proxy-preserve-host" - "-serve-host-metrics" - "-enable-ratelimits" - "-experimental-upgrade" - "-metrics-exp-decay-sample" - "-lb-healthcheck-interval=3s" - "-metrics-flavour=codahale,prometheus" - "-enable-connection-metrics" resources: requests: cpu: 200m memory: 200Mi readinessProbe: httpGet: path: /kube-system/healthz port: 9999 initialDelaySeconds: 5 timeoutSeconds: 5 </code></pre> <p>After that I applied a few more scripts to have a functional prove of concept and everything is working.</p> <h1>QUESTION</h1> <p>what's the point of having both ingresses? What's doing the <strong>skipper</strong> one?</p> <p>Shouldn't the kube-ingress-aws-controller be enough?</p>
RuiSMagalhaes
<p>Ordinary AWS loadbalancers enable TLS termination, automated certificate rotation, possibly WAF, and Security Groups, but their HTTP routing capabilities are very limited. </p> <p>Skipper's main advantages compared to other HTTP routers are:</p> <ul> <li>rich HTTP matching and request/response manipulation</li> <li>defaults that, combined with kube-ingress-aws-controller, just work as you would expect.</li> </ul> <p>HAproxy and Nginx are well understood and good TCP/HTTP proxies that were built before Kubernetes. But they have drawbacks like:</p> <ul> <li>reliance on static configuration files, which comes from a time when routes and their configurations were relatively static</li> <li>the list of annotations needed to implement even basic features is already quite long for users</li> </ul> <p><a href="https://github.com/zalando/skipper" rel="nofollow noreferrer">Skipper</a> was built to:</p> <ul> <li>support dynamically changing route configurations, which happens quite often in Kubernetes </li> <li>make it easy to implement automated canary deployments, automated blue-green deployments or shadow traffic</li> </ul> <p>However there are some features that have better support in aws-alb-ingress-controller, HAproxy and nginx. For instance the sendfile() operation. If you need to stream a large file or a large amount of files, then you may want to go for one of these options.</p> <p><a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller" rel="nofollow noreferrer">Aws-alb-ingress-controller</a> directly routes traffic to your Kubernetes services, which is both good and bad, because it can reduce latency, but comes with the risk of depending on kube-proxy routing. kube-proxy routing can take up to 30 seconds (ETCD ttl) for finding pods from dead nodes. Skipper, on the other hand, passively observes errors from endpoints and is able to drop these from the loadbalancer members, and it actively checks the member pool, re-enabling endpoints once they are healthy again from Skipper's point of view. </p> <p>Additionally the aws-alb-ingress-controller does not support features like ALB sharing or Server Name Indication, which can reduce costs. Features like path rewriting are also not currently supported.</p> <p><a href="https://traefik.io/" rel="nofollow noreferrer">Traefik</a> has a good community and support for Kubernetes. Skipper originates from Project Mosaic, which was started in 2015. Back then Traefik was not yet a mature project and still had time to go before the v1.0.0 release. Traefik also does not currently support Skipper's Opentracing provider, and it did not support traffic splitting when Zalando started the stackset-controller for automated traffic switching. Zalando has also recently done significant work on running Skipper as an API gateway within Kubernetes, which could potentially help many teams that run many small services on Kubernetes. Skipper predicates and filters are a powerful abstraction which can enhance the system easily.</p> <p>So as you can see, <strong>kube-ingress-aws-controller</strong> with the <strong>skipper ingress</strong> has many more advantages and possibilities compared to other similar solutions.</p> <p>You can find more information here: <a href="https://github.com/zalando/skipper/blob/master/docs/kubernetes/ingress-controller.md" rel="nofollow noreferrer">skipper-ingress-controller</a>.</p>
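<p>To make the division of labour concrete: kube-ingress-aws-controller watches your Ingress resources and provisions/updates the ALB, while skipper (the DaemonSet from your manifest) receives the traffic on port 9999 and does the actual HTTP routing to pods. A plain Ingress is enough for both to pick it up; the sketch below uses placeholder names and a placeholder hostname that you would point at the ALB created by the controller:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: myapp.example.org
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80
</code></pre>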
Malgorzata
<p>I tried deploying on EKS, and my config.yaml follows this suggested format:</p> <pre><code>botfront: app: # The complete external host of the Botfront application (eg. botfront.yoursite.com). It must be set even if running on a private or local DNS (it populates the ROOT_URL). host: botfront.yoursite.com mongodb: enabled: true # disable to use an external mongoDB host # Username of the MongoDB user that will have read-write access to the Botfront database. This is not the root user mongodbUsername: username # Password of the MongoDB user that will have read-write access to the Botfront database. This is not the root user mongodbPassword: password # MongoDB root password mongodbRootPassword: rootpassword </code></pre> <p>And I ran this command:</p> <pre><code>helm install -f config.yaml -n botfront --namespace botfront botfront/botfront </code></pre> <p>and the deployment appeared successful with all pods listed as running.</p> <p>But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external ip addresses or anything. I don't know how to actually access my botfront site once deployed on kubernetes.</p> <p>What am I missing?</p> <p>EDIT:</p> <p>With nginx lb installed <code>kubectl get ingresses -n botfront</code> now returns:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE botfront-app-ingress &lt;none&gt; botfront.cream.com a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com 80 4d1h </code></pre> <p>and</p> <p><code>kubectl get svc -n botfront</code> returns:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE botfront-api-service NodePort 10.100.207.27 &lt;none&gt; 80:31723/TCP 4d1h botfront-app-service NodePort 10.100.26.173 &lt;none&gt; 80:30873/TCP 4d1h botfront-duckling-service NodePort 10.100.75.248 &lt;none&gt; 80:31989/TCP 4d1h botfront-mongodb-service NodePort 10.100.155.11 &lt;none&gt; 27017:30358/TCP 4d1h </code></pre>
Stephan
<p>If you run <code>kubectl get svc -n botfront</code>, it will show you all the <code>Services</code> that expose your <code>botfront</code></p> <pre><code>$ kubectl get svc -n botfront NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE botfront-api-service NodePort 10.3.252.32 &lt;none&gt; 80:32077/TCP 63s botfront-app-service NodePort 10.3.249.247 &lt;none&gt; 80:31201/TCP 63s botfront-duckling-service NodePort 10.3.248.75 &lt;none&gt; 80:31209/TCP 63s botfront-mongodb-service NodePort 10.3.252.26 &lt;none&gt; 27017:31939/TCP 64s </code></pre> <p>Each of them is of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>, which means it exposes your app on the external IP address of each of your <strong>EKS</strong> cluster nodes on a specific port.</p> <p>So if you your <strong>node1</strong> ip happens to be <code>1.2.3.4</code> you can acess <code>botfront-api-service</code> on <code>1.2.3.4:32077</code>. Don't forget to allow access to this port on <strong>firewall/security groups</strong>. If you have any registered domain e.g. <code>yoursite.com</code> you can configure for it a subdomain <code>botfront.yoursite.com</code> and point it to one of your <strong>EKS</strong> nodes. Then you'll be able to access it using your domain. This is the simplest way.</p> <p>To be able to access it in a more effective way than by using specific node's IP and non-standard port, you may want to expose it via <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a> which will create an external load balancer, making your <code>NodePort</code> services available under one external IP adress and standard http port.</p> <p><strong>Update:</strong> I see that this chart already comes with <code>ingress</code> that exposes your app:</p> <pre><code>$ kubectl get ingresses -n botfront NAME HOSTS ADDRESS PORTS AGE botfront-app-ingress botfront.yoursite.com 80 70m </code></pre> <p>If you retrieve its yaml definition by:</p> <pre><code>$ kubectl get ingresses -n botfront -o yaml </code></pre> <p>you'll see that it uses the following annotation:</p> <pre><code>kubernetes.io/ingress.class: nginx </code></pre> <p>which means you need <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">nginx-ingress controller</a> installed on your <strong>EKS</strong> cluster. This might be one reason why it fails. As you can see in my example, this ingress doesn't get any external IP. That's because <strong>nginx-ingress</strong> wasn't installed on my <strong>GKE</strong> cluster. Not sure about <strong>EKS</strong> but as far as I know it doesn't come with <strong>nginx-ingress</strong> preinstalled.</p> <p>One more thing: I assume that in your <code>config.yaml</code> you put some real domain name that you have registered instead of <code>botfront.yoursite.com</code>. Suppose your domain is <code>yoursite.com</code> and you successfully created subdomain <code>botfront.yoursite.com</code>, you should redirected it to the IP of your load balancer (the one used by your <code>ingress</code>).</p> <p>If you run <code>kubectl get ingresses -n botfront</code> but the <code>ADDRESS</code> is empty, you probably don't have <strong>nginx-ingress</strong> installed and the underlying load balancer cannot be created. If you have here some external IP address, then redirect your registered domain to this address.</p>
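<p>For completeness, a hedged sketch of getting the missing nginx-ingress controller onto the EKS cluster with Helm (release and namespace names are up to you; the resulting controller Service of type LoadBalancer gets the AWS ELB hostname you then point <code>botfront.yoursite.com</code> at):</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

# note the EXTERNAL-IP / ELB hostname of the controller service
kubectl get svc ingress-nginx-controller
</code></pre>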
mario
<p>I'm trying to do a straight up thing that I would think is simple. I need to have <a href="https://localhost:44301" rel="nofollow noreferrer">https://localhost:44301</a>, <a href="https://localhost:5002" rel="nofollow noreferrer">https://localhost:5002</a>, <a href="https://localhost:5003" rel="nofollow noreferrer">https://localhost:5003</a> to be listened to in my k8s environment in docker desktop, and be proxied using a pfx file/password that I specify and have it forward by the port to pods listening on specific addresses (could be port 80, doesn't matter)</p> <p>The documentation is mind numbingly complex for what looks like it should be straight forward. I can get the pods running, I can use kubectl port-forward and they work fine, but I can't figure out how to get ingress working with ha-proxy or nginx or anything else in a way that makes any sense.</p> <p>Can someone do an ELI5 telling me how to turn this on? I'm on Windows 10 2004 with WSL2 and Docker experimental so I should have access to the ingress stuff they reference in the docs and make clear as mud.</p> <p>Thanks!</p>
James Hancock
<p>As discussed in the comments this is a community wiki answer:</p> <hr> <p>I have managed to create Ingress resource in Kubernetes on Docker in Windows. </p> <p><strong>Steps to reproduce</strong>: </p> <ul> <li>Enable Hyper-V </li> <li>Install Docker for Windows and enable Kubernetes </li> <li>Connect kubectl </li> <li>Enable Ingress </li> <li>Create deployment</li> <li>Create service</li> <li>Create ingress resource </li> <li>Add host into local hosts file </li> <li>Test </li> </ul> <h3>Enable <a href="https://learn.microsoft.com/pl-pl/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v" rel="noreferrer">Hyper-V</a></h3> <p>From Powershell with administrator access run below command: </p> <p><code>Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All</code></p> <p>System could ask you to reboot your machine.</p> <h3>Install Docker for Windows and enable Kubernetes</h3> <p>Install Docker application with all the default options and enable Kubernetes </p> <h3>Connect kubectl</h3> <p>Install <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows" rel="noreferrer">kubectl </a>. </p> <h3>Enable Ingress</h3> <p>Run this commands: </p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml </code></pre> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml </code></pre> <h3><strong>Edit:</strong> Make sure no other service is using port 80</h3> <p>Restart your machine. From a <code>cmd</code> prompt running as admin, do: <code>net stop http</code> Stop the listed services using <code>services.msc</code></p> <p>Use: <code>netstat -a -n -o -b</code> and check for other processes listening on port 80.</p> <h3>Create deployment</h3> <p>Below is simple deployment with pods that will reply to requests: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: selector: matchLabels: app: hello version: 2.0.0 replicas: 3 template: metadata: labels: app: hello version: 2.0.0 spec: containers: - name: hello image: "gcr.io/google-samples/hello-app:2.0" env: - name: "PORT" value: "50001" </code></pre> <p>Apply it by running command: </p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Create service</h3> <p>For pods to be able for you to communicate with them you need to create a service. </p> <p>Example below: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: hello-service spec: type: NodePort selector: app: hello version: 2.0.0 ports: - name: http protocol: TCP port: 80 targetPort: 50001 </code></pre> <p>Apply this service definition by running command: </p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Create Ingress resource</h3> <p>Below is simple Ingress resource using service created above: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hello-ingress spec: rules: - host: kubernetes.docker.internal http: paths: - path: / backend: serviceName: hello-service servicePort: http </code></pre> <p>Take a look at:</p> <pre class="lang-yaml prettyprint-override"><code>spec: rules: - host: hello-test.internal </code></pre> <p><code>hello-test.internal</code> will be used as the <code>hostname</code> to connect to your pods. 
</p> <p>Apply your Ingress resource by invoking the command: </p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Add host into local hosts file</h3> <p>I found this <a href="https://github.com/docker/for-win/issues/1901" rel="noreferrer">Github link </a> that will allow you to connect to your Ingress resource by <code>hostname</code>.</p> <p>To achieve that add a line <code>127.0.0.1 hello-test.internal</code> to your <code>C:\Windows\System32\drivers\etc\hosts</code> file and save it. You will need Administrator privileges to do that.</p> <p><strong>Edit:</strong> The newest version of Docker Desktop for Windows already adds a hosts file entry: <code>127.0.0.1 kubernetes.docker.internal</code></p> <h3>Test</h3> <p>Display the information about Ingress resources by invoking the command: <code>kubectl get ingress</code></p> <p>It should show: </p> <pre><code>NAME HOSTS ADDRESS PORTS AGE hello-ingress hello-test.internal localhost 80 6m2s </code></pre> <p>Now you can access your Ingress resource by opening your web browser and typing </p> <p><code>http://kubernetes.docker.internal/</code> </p> <p>The browser should output: </p> <pre><code>Hello, world! Version: 2.0.0 Hostname: hello-84d554cbdf-2lr76 </code></pre> <p><code>Hostname: hello-84d554cbdf-2lr76</code> is the name of the pod that replied. </p> <p>If this solution is not working, please check (<strong>with Administrator privileges</strong>) whether something else is using port 80, using the command <code>netstat -a -n -o</code>. </p> <hr>
Wytrzymały Wiktor
<p>I would like to block <code>/public/configs</code> in my k8s ingress.</p> <p>My current settings don't work.</p> <pre><code> - host: example.com http: paths: - path: /* pathType: ImplementationSpecific backend: service: name: service-myapp port: number: 80 - path: /public/configs pathType: ImplementationSpecific backend: service: name: service-myapp port: number: 88 // fake port </code></pre> <p>Is there any better (easy) way?</p>
teteyi3241
<p>1- Create a dummy service and route the path to it:</p> <pre><code> - path: /public/configs pathType: ImplementationSpecific backend: service: name: dummy-service port: number: 80 </code></pre> <p>2- Use <code>server-snippets</code> as below to return 403 or any other error you want:</p> <p>a) for the k8s nginx ingress:</p> <pre><code> annotations: nginx.ingress.kubernetes.io/server-snippet: | location ~* &quot;^/public/configs&quot; { deny all; return 403; } </code></pre> <p>b) for the nginx ingress:</p> <pre><code> annotations: nginx.org/server-snippet: | location ~* &quot;^/public/configs&quot; { deny all; return 403; } </code></pre>
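<p>With either variant the request is rejected at the ingress controller before it ever reaches your backend. A quick way to verify (hostname is a placeholder) is to compare a normal request with one to the blocked path; the second one should come back with a 403:</p> <pre><code># still served by service-myapp
curl -i http://example.com/

# should now be rejected by the ingress controller with 403
curl -i http://example.com/public/configs
</code></pre>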
RoohAllah Godazgar
<p>I'm trying to figure out the pieces and how to fit them together for having a pod be able to control aspects of a deployment, like scaling. I'm thinking I need to set up a service account for it, but I'm not finding the information on how to link it all together, and then how to get the pod to use the service account. I'll be writing this in python, which might add to the complexity of how to use the service account</p>
Brett
<p>Try to set up the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-horizontal-pod-autoscaler-in-kubectl" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>.</p> <p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example, DaemonSets.</p> <p>The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user.</p> <p>Documentation: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">hpa-setup</a>, <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object" rel="nofollow noreferrer">autoscaling</a>.</p>
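<p>If CPU-based scaling is enough for your case (it requires the metrics server to be running), a minimal sketch with placeholder names looks like this. Either the one-liner:</p> <pre><code>kubectl autoscale deployment my-deployment --cpu-percent=80 --min=1 --max=5
</code></pre> <p>or the equivalent manifest:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
</code></pre>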
Malgorzata
<p>This is my Service YAML. After I create the Service on GKE, I don't know how to reach it. I can't find an external IP for the Service. How can I reach this Service in the standard way? Do I need to create an Ingress?</p> <pre><code>apiVersion: v1 kind: Service metadata: namespace: dev name: ui-svc labels: targetEnv: dev app: ui-svc spec: selector: app: ui targetEnv: dev ports: - name: ui port: 8080 targetPort: 8080 nodePort: 30080 type: NodePort </code></pre> <p><a href="https://i.stack.imgur.com/XcQiN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XcQiN.png" alt="enter image description here" /></a></p>
Pengbo Wu
<p>If you don't use a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept" rel="noreferrer">private cluster</a> where nodes don't have public IP addresses, you can access your <code>NodePort</code> services using any node's public IP address.</p> <p>What you can see in <code>Services &amp; Ingresses</code> section in the <code>Endpoints</code> column, it's an internal, cluster ip address of your <code>NodePort</code> service.</p> <p>If you want to know what are public IP addresses of your <strong>GKE nodes</strong>, please go to <strong>Compute Engine</strong> &gt; <strong>VM instances</strong>:</p> <p><a href="https://i.stack.imgur.com/9jfIQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9jfIQ.png" alt="enter image description here" /></a></p> <p>You will see the list of all your <strong>Compute Engine VMs</strong> which also includes your <strong>GKE nodes</strong>. Note the IP address in <code>External IP</code> column. You should use it along with port number which you may check in your <code>NodePort</code> service details. Simply click on it's name <code>&quot;ui-svc&quot;</code> to see the details. At the very bottom of the page you should see <code>ports</code> section which may look as follows:</p> <p><a href="https://i.stack.imgur.com/axLbm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/axLbm.png" alt="enter image description here" /></a></p> <p>So in my case I should use <code>&lt;any_node's_public_ip_address&gt;:31251</code>.</p> <p>One more important thing. Don't forget to allow traffic to this port on <strong>Firewall</strong> as by default it is blocked. So you need to explicitly allow traffic to your nodes e.g. on <code>31251</code> port to be able to access it from public internet. Simply go to <strong>VPC Network</strong> &gt; <strong>Firewall</strong> and set the apropriate rule:</p> <p><a href="https://i.stack.imgur.com/dPw60.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dPw60.png" alt="enter image description here" /></a></p> <h3 id="update-md7s">UPDATE:</h3> <p>If you created an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview" rel="noreferrer">Autopilot Cluster</a>, by default it is a public one, which means its nodes have public IP addresses:</p> <p><a href="https://i.stack.imgur.com/BDdAk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BDdAk.png" alt="enter image description here" /></a></p> <p>If during the cluster creation you've selected a second option i.e. <code>&quot;Private cluster&quot;</code>, your nodes won't have public IPs by design and you won't be able to access your <code>NodePort</code> service on any public IP. So the only option that remains in such scenario is exposing your workload via <code>LoadBalancer</code> service or <code>Ingress</code>, where a single public IP endpoint is created for you, so you can access your workload externally.</p> <p>However if you've chosen the default option i.e. 
<code>&quot;Public cluster&quot;</code>, you can use your node's public IP's to access your <code>NodePort</code> service in the very same way as if you used a Standard (non-autopilot) cluster.</p> <p>Of course in autopilot mode you won't see your nodes as compute engine VMs in your GCP console, but you can still get their public IPs by running:</p> <pre><code>kubectl get nodes -o wide </code></pre> <p>They will be shown in <code>EXTERNAL-IP</code> column.</p> <p>To connect to your cluster simply click on 3 dots you can see to the right of the cluster name (<code>&quot;Kubernetes Engine&quot;</code> &gt; <code>&quot;Clusters&quot;</code>) &gt; click <code>&quot;Connect&quot;</code> &gt; click <code>&quot;RUN IN CLOUD SHELL&quot;</code>.</p> <p>Since you don't know what network tags have been assigned to your GKE auto-pilot nodes (if any) as you don't manage them and they are not shown in your GCP console, you won't be able to use specified network tags when defining a firewall rule to allow access to your <code>NodePort</code> service port e.g. <code>30543</code> and you would have to choose the option <code>&quot;All instances in the network&quot;</code> instead:</p> <p><a href="https://i.stack.imgur.com/NAvdd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NAvdd.png" alt="enter image description here" /></a></p>
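<p>As a sketch of the firewall step from the command line (port and rule name are placeholders; use the actual NodePort of your service, and narrow <code>--source-ranges</code> if you don't want it open to the whole internet):</p> <pre><code>gcloud compute firewall-rules create allow-nodeport-31251 \
    --allow=tcp:31251 \
    --source-ranges=0.0.0.0/0
</code></pre>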
mario
<p>I have created one EC2 instance and added it to a load balancer. When I try to create the master node with the <code>kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs</code> command, I get the following logs from the kubelet status:</p> <blockquote> <p>kubelet[11586]: E0305 06:48:26.280438 11586 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready cni config uninitialized</p> </blockquote> <p>When I tried to install a CNI plugin it shows:</p> <blockquote> <p>Are you using correct host or port?</p> </blockquote> <p>Can someone help me resolve this?</p>
HARINI NATHAN
<p><code>NetworkPluginNotReady message:docker: network plugin is not ready cni config uninitialized</code> means that your CNI is misconfigured or missing. </p> <p>In order to make it work properly you need to specify <code>--pod-network-cidr</code> while executing the <code>kubeadm init</code> command.</p> <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">Here</a> you can find the official documentation with a list of most popular Pod network plugins to choose from like Calico or Flannel.</p>
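<p>As a sketch of what that looks like (the CIDR and the manifest URL depend on the CNI plugin and version you pick, so treat them as placeholders and check the plugin's docs):</p> <pre><code>kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
    --upload-certs --pod-network-cidr=192.168.0.0/16

# then install the CNI plugin, e.g. Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre> <p>Once the network plugin's pods are up, the kubelet's <code>NetworkPluginNotReady</code> message should disappear and the node should become <code>Ready</code>.</p>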
Wytrzymały Wiktor
<p>I am using minikube.</p> <p>My deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: phpdeployment spec: replicas: 3 selector: matchLabels: app: phpapp template: metadata: labels: app: phpapp spec: containers: - image: rajendar38/myhtmlapp:latest name: php ports: - containerPort: 80 </code></pre> <p>My ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress spec: backend: serviceName: php-service servicePort: 80 </code></pre> <p>This is my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: php-service spec: selector: app: phpapp ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 31000 type: NodePort </code></pre> <p>It is a simple PHP application. I built the Docker image and I am able to access it in both ways:</p> <ul> <li><a href="http://192.168.99.100/test.html" rel="nofollow noreferrer">http://192.168.99.100/test.html</a></li> <li><a href="http://192.168.99.100:31000/test.html" rel="nofollow noreferrer">http://192.168.99.100:31000/test.html</a></li> </ul> <p>After that I:</p> <ul> <li>updated my PHP application </li> <li>created the image again and pushed it to Docker Hub</li> <li>deleted all resources with <code>kubectl delete all --all</code></li> <li>then force-applied the deployment and service again</li> </ul> <p>But via the NodePort I can still access only the old application, while via the Ingress the changes are picked up.</p>
Rajendar Talatam
<p>Please take a look at this similar <a href="https://stackoverflow.com/questions/52522570/how-to-expose-a-kubernetes-service-on-a-specific-nodeport">problem</a>. </p> <p>You have to know that the container port is the port the container listens on. The service port is the port where the kubernetes service is exposed on the cluster-internal IP and mapped to the container port. The NodePort is the port exposed on the host and mapped to the kubernetes service.</p> <p>NodePort lets you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service. You declare, create and destroy it separately from your services. Thanks to the service type NodePort you are able to expose both ports (31000, 80).</p> <p>Your configuration files should look similar to this (note that the Service selector has to match the pod labels, <code>app: phpapp</code>, and the node port goes into the <code>nodePort</code> field):</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: phpdeployment spec: replicas: 3 selector: matchLabels: app: phpapp template: metadata: labels: app: phpapp spec: containers: - image: rajendar38/myhtmlapp:latest name: php ports: - containerPort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: example-ingress annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /example backend: serviceName: php servicePort: 80 --- apiVersion: v1 kind: Service metadata: name: php spec: selector: app: phpapp ports: - port: 80 targetPort: 80 nodePort: 31000 protocol: TCP name: http type: NodePort </code></pre> <p>Alternatively, instead of writing the Service yourself, you can expose the deployment:</p> <pre><code>$ kubectl expose deployment phpdeployment --type=NodePort </code></pre> <p>Official documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">kubernetes-service-nodeport</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes-ingress</a>, <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">kubernetes-deployment-exposing</a>.</p>
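<p>To understand why the NodePort still serves the old version while the Ingress shows the new one, it helps to verify what the Service actually points at and which image the pods run. A hedged sketch using the names from the manifests above (adjust to your actual names):</p> <pre><code># the Service should list the pod IPs of the freshly created pods
kubectl get endpoints php

# confirm which image the running pods were started from
kubectl get pods -l app=phpapp -o jsonpath='{.items[*].spec.containers[*].image}'

# with a :latest tag, restarting the rollout forces new pods to pull the image again (kubectl 1.15+)
kubectl rollout restart deployment phpdeployment
</code></pre> <p>If the endpoints or the image do not match what you expect, the NodePort is simply still routing to pods created from the old image.</p>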
Malgorzata
<p>We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.</p> <p>What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.</p> <p>We can accept some down time in the middle of the night, so we don't need to have high availability.</p> <p>We're just not sure what would be the best way to update software on all machines. And we are pretty new to Docker and have no experience with Kubernetes or Ansible, Chef, Puppet, etc. But we are quick to pick things up.</p> <p>We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.</p> <p>So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?</p> <p>Some things we have considered:</p> <ul> <li>Docker Hub (concerns about rate limiting)</li> <li>Deploying our own Docker repo</li> <li>Kubernetes</li> <li>Ansible</li> <li><a href="https://containrrr.dev/watchtower/" rel="nofollow noreferrer">https://containrrr.dev/watchtower/</a></li> </ul> <p>Other things that we probably need include GitHub actions to build and update the Docker images.</p> <p>We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!</p>
greymatter
<p>In your case I advice you to use <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> combination with CD tools. One of it is <a href="https://buddy.works/" rel="nofollow noreferrer">Buddy</a>. I think it is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or other CD tools you will make it faster and easier. In my answer I am describing Buddy but there are a lot of popular CD tools for automating workflows in Kubernetes like for example: <a href="https://docs.gitlab.com/ee/topics/autodevops/" rel="nofollow noreferrer">GitLab</a> or <a href="https://codefresh.io/kubernetes-deploy/" rel="nofollow noreferrer">CodeFresh.io</a> - you should pick which one is actually best for you. Take a look: <a href="https://medium.com/@OPTASY.com/what-are-the-best-continuous-deployment-tools-for-kubernetes-and-why-top-5-c5f42e44983d" rel="nofollow noreferrer">CD-automation-tools-Kubernetes</a>.</p> <p>With Buddy you can avoid most of these steps while automating updates - (executing <code>kubectl apply,</code> <code>kubectl set image</code> commands ) by doing a simple push to Git.</p> <p>Every time you updates your application code or Kubernetes configuration, you have two possibilities to update your cluster: <code>kubectl apply</code> or <code>kubectl set image</code>.</p> <p>Such workflow most often looks like:</p> <p><strong>1.</strong> Edit application code or configuration .YML file</p> <p><strong>2.</strong> Push changes to your Git repository</p> <p><strong>3.</strong> Build an new Docker image</p> <p><strong>4.</strong> Push the Docker image</p> <p><strong>5.</strong> Log in to your K8s cluster</p> <p><strong>6.</strong> Run <code>kubectl apply</code> or <code>kubectl set image</code> commands to apply changes into K8s cluster</p> <p>Buddy is a CD tool that you can use to automate your whole K8s release workflows like:</p> <ul> <li>managing Dockerfile updates</li> <li>building Docker images and pushing them to the Docker registry</li> <li>applying new images on your K8s cluster</li> <li>managing configuration changes of a K8s Deployment etc.</li> </ul> <p>With Buddy you will have to configure just one pipeline.</p> <p>With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.</p> Pipeline configuration for running Kubernetes pods or jobs <p>Assume that we have application on a K8s cluster and the its repository contains:</p> <ul> <li>source code of our application</li> <li>a Dockerfile with instructions on creating an image of your app</li> <li>DB migration scripts</li> <li>a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)</li> </ul> <p>In this case, we can configure a pipeline that will:</p> <p><strong>1.</strong> Build application and migrate images</p> <p><strong>2.</strong> Push them to the Docker Hub</p> <p><strong>3.</strong> Trigger the DB migration using the previously built image. We can define the image, commands and deployment and use YAML file.</p> <p><strong>4.</strong> Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application.</p> <p>You can adjust above workflow properly to your environment/applications properties.</p> <p>Buddy supports GitLab as a Git provider. Integration of these two tools is easy and only requires authorizing GitLab in your profile. 
Thanks to this integration you can create pipelines that will build, test and deploy your app code to the server. But of course, if you are using GitLab there is no need to set up Buddy as an extra tool, because GitLab is also a CD tool for automating workflows in Kubernetes. You can find more information here: <a href="https://buddy.works/guides/how-optimize-kubernetes-workflow" rel="nofollow noreferrer">buddy-workflow-kubernetes</a>.</p> <p>Read also: <a href="https://enterprisersproject.com/article/2020/7/kubernetes-workflows-and-processes-can-automate" rel="nofollow noreferrer">automating-workflows-kubernetes</a>.</p>
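<p>Whichever CD tool you pick, what it automates ultimately boils down to a handful of commands. A minimal sketch with placeholder registry, image and resource names:</p> <pre><code># build and push a new image
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# roll the new image out to the cluster
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2.3

# or re-apply the full manifest if you keep it versioned in Git
kubectl apply -f k8s/deployment.yaml
</code></pre> <p>Running this loop once per customer environment (with the right kubeconfig context and the per-user environment variables you already provision) is exactly what these pipelines execute for you.</p>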
Malgorzata
<p>Let me describe the situation what I faced today:</p> <ol> <li>I received NodeJS application image from Developers and published it to AKS(Azure Kubernetes Services)</li> </ol> <p>There is nothing specific in manifest of this application, this is simple deployment with service on 80 port.</p> <ol start="2"> <li><p>I have configured Ingress using helm package and installed common one from helm repo: stable/nginx-ingress . When it was installed - I have started to configure Ingress.</p></li> <li><p>Below my yaml:</p></li> </ol> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - backend: serviceName: ui-service servicePort: 80 path: /(.*) - backend: serviceName: ui-service servicePort: 80 path: /test-service/ui(/|$)(.*) </code></pre> <p>Okay, I'm trying to open page: <a href="http://1.2.3.4/" rel="nofollow noreferrer">http://1.2.3.4/</a> - everything works fine, I see web page which redirects me to:</p> <p><a href="http://1.2.3.4/page1" rel="nofollow noreferrer">http://1.2.3.4/page1</a> , in case of I clicked something - <a href="http://1.2.3.4/page2" rel="nofollow noreferrer">http://1.2.3.4/page2</a> etc.</p> <p>However, when I'm trying to open the same web application using:</p> <p><a href="http://1.2.3.4/test-service/ui" rel="nofollow noreferrer">http://1.2.3.4/test-service/ui</a> , I got blank page and errors in console:</p> <pre><code>Resource interpreted as Stylesheet but transferred with MIME type text/html: "http://1.2.3.4/test-services/ui/static/css/test.css". </code></pre> <p>What a difference that I found:</p> <p>In the second case all JS and CSS files has content-type: text/html.</p>
Darii Nurgaleev
<p>Let me describe how I managed to resolve the issue:</p> <p>In the YAML settings, as you can see, I had <code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code>.</p> <p>So, for applications served from the root path it should be:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>For example.com/example/ it should be:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>etc.</p> <p>Now the web page is shown correctly.</p>
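<p>Since a single <code>rewrite-target</code> annotation applies to every path in an Ingress resource, one way to combine both behaviours is to split them into two Ingress objects; below is a rough sketch based on the names from the question (treat it as an assumption, not the exact original setup):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress-root
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: ui-service
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress-subpath
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /test-service/ui(/|$)(.*)
            backend:
              serviceName: ui-service
              servicePort: 80
</code></pre> <p>That way the root application is served without rewriting, while asset requests under the sub-path are rewritten to the real file path instead of falling back to the index page.</p>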
Darii Nurgaleev
<p><strong>Description</strong></p> <p>I have a kubernetes pod with <code>initContainer</code> that requires to run as privileged (<code>privileged: true</code>). The second container (not init) does not require such privileges.</p> <p>I want to enable <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">PodSecurityPolicy</a> admission plugin and I was searching for possibility to have different privileges/capabilities for different containers inside the pod.</p> <p>I'm quite sure that it doesn't follow any best practice, but I'm just wondering if this is any close to possible. Maybe I'm misunderstanding some concepts?</p> <p><strong>Question</strong></p> <p>Do you know if there is a way to define PodSecurityPolicy for specific container inside a pod?</p>
Rafał Potempa
<p>First and foremost, take note of the well-known Kubernetes pod security policy best practices:</p> <ul> <li>Do not run privileged containers.</li> <li>Do not run containers as root.</li> <li>Do not allow access to the host namespace.</li> <li>Restrict Linux capabilities.</li> </ul> <p>In a Kubernetes pod, containers can optionally run in “privileged” mode, which means that the processes inside the container have almost unrestricted access to resources on the host system. While there are certain use cases where this level of access is necessary, in general, it’s a security risk to let your containers do this. Please take a look: <a href="https://containerjournal.com/topics/container-security/establishing-a-kubernetes-pod-security-policy/" rel="nofollow noreferrer">pod-sp</a>, <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged" rel="nofollow noreferrer">pod-psp-privileged</a>.</p> <p>There are some proposals for refactoring <code>SecurityContext</code> to have pod-level and container-level attributes in order to correctly model pod- and container-level security concerns.</p> <p>For example, you may want to have a pod with two containers, one of which runs as root with the privileged setting, and one that runs as a non-root UID:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: example spec: containers: - name: container-1 securityContext: privileged: true - name: container-2 securityContext: runAsUser: 1002 </code></pre> <p>See more: <a href="https://stupefied-goodall-e282f7.netlify.app/contributors/design-proposals/auth/pod-security-context/" rel="nofollow noreferrer">security-context-example</a>.</p> <p>To specify security settings for a Container, include the <code>securityContext</code> field in the Container manifest. The <code>securityContext</code> field is a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#securitycontext-v1-core" rel="nofollow noreferrer">SecurityContext</a> object. Security settings that you specify for a Container apply only to the individual Container, and they override settings made at the Pod level when there is overlap. Container settings do not affect the Pod's Volumes.</p> <p>But for now it is not possible to create one PodSecurityPolicy that applies only to a specific container's <code>securityContext</code>; the policy covers the whole pod and all of its containers.</p> <p>Instead of giving the pod/container full root privileges, users should grant the specified containers only the specific Linux capabilities they need, which improves container isolation. Take a look: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container" rel="nofollow noreferrer">set-capabilities-for-a-container</a>.</p> <p>In contrast to privileged processes that bypass all kernel permission checks, unprivileged processes have to pass full permission checking based on the process’s credentials such as effective UID, GID, and supplementary group list. Starting with kernel 2.2, Linux has divided privileged processes’ privileges into distinct units, known as capabilities.
These distinct units/privileges can be independently assigned and enabled for unprivileged processes introducing root privileges to them. Kubernetes users can use Linux capabilities to grant certain privileges to a process without giving it all privileges of the root user. This is helpful for improving container isolation from the host since containers no longer need to write as root — you can just grant certain root privileges to them and that’s it.</p> <p>To add or remove Linux capabilities for a container, you can include the capabilities field in the <code>securityContext</code> section of the container manifest. Let’s see an example:</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: v1 kind: Pod metadata: name: linux-capabilities-example spec: securityContext: runAsUser: 3000 containers: - name: linux-capabilities image: supergiantkir/k8s-liveliness securityContext: capabilities: add: [&quot;NET_ADMIN&quot;] </code></pre> <p>In this example, <code>CAP_NET_ADMIN</code> is assigned capability to the container. This Linux capability allows a process to perform various network-related operations such as interface configuration, administration of IP firewall, modifying routing tables, enabling multicasting, etc. For the full list of available capabilities, see the official <a href="http://man7.org/linux/man-pages/man7/capabilities.7.html" rel="nofollow noreferrer">Linux documentation</a>.</p> <p>Read more: <a href="https://medium.com/kubernetes-tutorials/defining-privileges-and-access-control-settings-for-pods-and-containers-in-kubernetes-2cef08fc62b7" rel="nofollow noreferrer">linux-capabilities</a>.</p>
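<p>Applied to the original question (a privileged init container next to an unprivileged main container), the per-container <code>securityContext</code> could look roughly like the sketch below (images and commands are placeholders). Note that an admission-time PodSecurityPolicy which forbids privileged containers would still reject the whole pod, because the policy is checked against every container in it, init containers included:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-privileged-example
spec:
  initContainers:
    - name: setup                        # the step that really needs privileges
      image: busybox:1.32
      command: ["sh", "-c", "echo doing privileged setup work"]
      securityContext:
        privileged: true
  containers:
    - name: app                          # the long-running container stays unprivileged
      image: gcr.io/google-samples/hello-app:2.0
      ports:
        - containerPort: 8080
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
</code></pre>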
Malgorzata
<p>What kind of load balancing is the HAProxy ingress controller capable of? Can it do load balancing on a Pod level, or does it do the load balancing on a Node level?</p> <p>Thanks, Yaniv</p>
Yaniv Hakim
<p>As mentioned in the <a href="https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/#haproxy-ingress-controller-features" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p>The ingress controller gives you the ability to:</p> <ul> <li><p>Use only one IP address and port and direct requests to the correct pod based on the Host header and request path</p></li> <li><p>Secure communication with built-in SSL termination</p></li> <li><p>Apply rate limits for clients while optionally whitelisting IP addresses</p></li> <li><p><strong>Select from among any of HAProxy's load-balancing algorithms</strong></p></li> <li><p>Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics</p></li> <li><p>Set maximum connection limits to backend servers to prevent overloading services</p></li> </ul> </blockquote> <p>Also I recommend the following resources:</p> <ul> <li><a href="https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/" rel="nofollow noreferrer">HAProxy Kubernetes Ingress Controller</a></li> </ul> <blockquote> <p>L7 routing is one of the core features of Ingress, allowing incoming requests to be routed to the exact pods that can serve them based on HTTP characteristics such as the requested URL path. Other features include terminating TLS, using multiple domains, and, most importantly, load balancing traffic.</p> </blockquote> <ul> <li><a href="https://github.com/haproxytech/kubernetes-ingress" rel="nofollow noreferrer">GitHub</a></li> </ul> <p>I hope it helps. </p>
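<p>As a sketch of the algorithm selection mentioned above: the HAProxy Technologies controller lets you choose the balancing algorithm per service through an annotation (the exact annotation name and accepted values depend on the controller flavour and version you run, so verify it against its documentation; treat the one below as an assumption):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    haproxy.org/load-balance: "leastconn"   # assumed annotation; check your controller's docs
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>Routing itself is done on the Pod (endpoint) level: the controller watches the service's endpoints and balances across the individual pod IPs rather than across nodes.</p>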
Wytrzymały Wiktor
<p>I am developing a simple java app that show the pods of the cluster.</p> <p>This is the app:</p> <pre><code>import io.kubernetes.client.openapi.ApiClient; import io.kubernetes.client.openapi.ApiException; import io.kubernetes.client.openapi.Configuration; import io.kubernetes.client.openapi.apis.CoreV1Api; import io.kubernetes.client.openapi.models.V1Pod; import io.kubernetes.client.openapi.models.V1PodList; import io.kubernetes.client.util.ClientBuilder; import io.kubernetes.client.util.KubeConfig; import java.io.FileReader; import java.io.IOException; /** * A simple example of how to use the Java API from an application outside a kubernetes cluster * * &lt;p&gt;Easiest way to run this: mvn exec:java * -Dexec.mainClass=&quot;io.kubernetes.client.examples.KubeConfigFileClientExample&quot; * */ public class untitled4 { public static void main(String[] args) throws IOException, ApiException { // file path to your KubeConfig String kubeConfigPath = &quot;/home/robin/.kube/config&quot;; // loading the out-of-cluster config, a kubeconfig from file-system ApiClient client = ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build(); // set the global default api-client to the in-cluster one from above Configuration.setDefaultApiClient(client); // the CoreV1Api loads default api-client from global configuration. CoreV1Api api = new CoreV1Api(); // invokes the CoreV1Api client V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null); System.out.println(&quot;Listing all pods: &quot;); for (V1Pod item : list.getItems()) { System.out.println(item.getMetadata().getName()); } } } </code></pre> <p>But I get this error:</p> <pre><code>Exception in thread &quot;main&quot; java.lang.IllegalStateException: Unimplemented at io.kubernetes.client.util.authenticators.GCPAuthenticator.refresh(GCPAuthenticator.java:61) at io.kubernetes.client.util.KubeConfig.getAccessToken(KubeConfig.java:215) at io.kubernetes.client.util.credentials.KubeconfigAuthentication.&lt;init&gt;(KubeconfigAuthentication.java:46) at io.kubernetes.client.util.ClientBuilder.kubeconfig(ClientBuilder.java:276) at untitled4.main(untitled4.java:28) Process finished with exit code 1 </code></pre>
xRobot
<p>There is <a href="https://github.com/kubernetes-client/java/issues/290" rel="nofollow noreferrer">an open issue</a> on <strong>GitHub</strong> related with this problem. For now you can use workarounds like the one, proposed by <a href="https://github.com/jhbae200" rel="nofollow noreferrer">jhbae200</a> in <a href="https://github.com/kubernetes-client/java/issues/290#issuecomment-480205118" rel="nofollow noreferrer">this comment</a>:</p> <blockquote> <p>I am using it like this.</p> <pre><code>package kubernetes.gcp; import com.google.auth.oauth2.AccessToken; import com.google.auth.oauth2.GoogleCredentials; import io.kubernetes.client.util.KubeConfig; import io.kubernetes.client.util.authenticators.Authenticator; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.IOException; import java.time.Instant; import java.util.Date; import java.util.Map; public class ReplacedGCPAuthenticator implements Authenticator { private static final Logger log; private static final String ACCESS_TOKEN = &quot;access-token&quot;; private static final String EXPIRY = &quot;expiry&quot;; static { log = LoggerFactory.getLogger(io.kubernetes.client.util.authenticators.GCPAuthenticator.class); } private final GoogleCredentials credentials; public ReplacedGCPAuthenticator(GoogleCredentials credentials) { this.credentials = credentials; } public String getName() { return &quot;gcp&quot;; } public String getToken(Map&lt;String, Object&gt; config) { return (String) config.get(&quot;access-token&quot;); } public boolean isExpired(Map&lt;String, Object&gt; config) { Object expiryObj = config.get(&quot;expiry&quot;); Instant expiry = null; if (expiryObj instanceof Date) { expiry = ((Date) expiryObj).toInstant(); } else if (expiryObj instanceof Instant) { expiry = (Instant) expiryObj; } else { if (!(expiryObj instanceof String)) { throw new RuntimeException(&quot;Unexpected object type: &quot; + expiryObj.getClass()); } expiry = Instant.parse((String) expiryObj); } return expiry != null &amp;&amp; expiry.compareTo(Instant.now()) &lt;= 0; } public Map&lt;String, Object&gt; refresh(Map&lt;String, Object&gt; config) { try { AccessToken accessToken = this.credentials.refreshAccessToken(); config.put(ACCESS_TOKEN, accessToken.getTokenValue()); config.put(EXPIRY, accessToken.getExpirationTime()); } catch (IOException e) { throw new RuntimeException(e); } return config; } } </code></pre> <p>Running in.</p> <pre><code>//GoogleCredentials.fromStream(--something credential.json filestream--) KubeConfig.registerAuthenticator(new ReplacedGCPAuthenticator(GoogleCredentials.getApplicationDefault())); ApiClient client = Config.defaultClient(); Configuration.setDefaultApiClient(client); CoreV1Api api = new CoreV1Api(); V1PodList list = api.listNamespacedPod(&quot;default&quot;, null, null, null, null, null, null, null, 30, Boolean.FALSE); for (V1Pod item : list.getItems()) { System.out.println(item.getMetadata().getName()); } </code></pre> </blockquote>
mario
<p>I enabled TLS in my backend, so all traffic needs to go through &quot;https://.....&quot;. I am able to access it locally or with port-forwarding in Kubernetes, but I cannot access it through the DNS (e.g. <a href="https://hostname.net/backend/...." rel="nofollow noreferrer">https://hostname.net/backend/....</a>).</p> <p>I get this answer:</p> <pre><code>Bad Request This combination of host and port requires TLS. </code></pre> <p>I read that the certificates could be wrong, but with port-forwarding everything works, so I don't think that is the problem. The certificates are self-signed and exist only on my server.</p> <p>Before I added TLS, everything worked fine.</p> <p>Here are my service and my ingress:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-Core namespace: namespace spec: clusterIP: xxx.xxx.xxx.xxx ports: - name: http port: 8080 protocol: TCP targetPort: 8080 selector: app.kubernetes.io/instance: core app.kubernetes.io/name: app sessionAffinity: None type: ClusterIP status: loadBalancer: {} ---------------------------------- apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: core-app-core namespace: namespace spec: rules: - host: hostname http: paths: - backend: serviceName: app-Core servicePort: 8080 path: /backend(/|$)(.*) - backend: serviceName: app-Core servicePort: 8080 path: /camunda(/|$)(.*) status: loadBalancer: ingress: - ip: xxx.xxx.xxx.xxx </code></pre>
goku736
<p>Try to add the <code>nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;</code> annotation to your ingress definition.</p> <p>Using the <code>backend-protocol</code> annotation it is possible to indicate how NGINX should communicate with the backend service. By default NGINX uses <code>HTTP</code>.</p> <p>Take a look: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">ingress-tls</a>, <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">backend-protocol</a>.</p>
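<p>For reference, a minimal sketch of how that annotation could be added to the ingress metadata from the question (everything except the new annotation line is taken from the question as-is):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # tell the NGINX ingress controller to talk to the backend pods over HTTPS instead of the default HTTP
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
  name: core-app-core
  namespace: namespace
</code></pre>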
Malgorzata
<p>I read a lot of documentation. I set up Jenkins on GCP using the default Kubernetes creation. When I try to log in, Jenkins asks me for a password to unlock it. I'm unable to find that password.</p> <p>Thanks</p>
Arturo
<p>Access the Jenkins container via cloud shell.</p> <p>First, get the pod id:</p> <pre><code>kubectl get pods --namespace=yourNamespace jenkins-867df9fcb8-ctfq5 1/1 Running 0 16m </code></pre> <p>Then execute a bash shell on that pod id:</p> <pre><code>kubectl exec -it --namespace=yourNamespace jenkins-867df9fcb8-ctfq5 -- bash </code></pre> <p>Then just cd to the directory where the initialAdminPassword is saved and use the "cat" command to print its value.</p>
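<p>On a standard Jenkins image that file usually lives under the Jenkins home directory, so inside the pod something like the following typically works (the exact path is an assumption and may differ depending on the image or Helm chart used):</p> <pre><code>cat /var/jenkins_home/secrets/initialAdminPassword
</code></pre>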
Maarten Dekker
<p>I want to execute a function written in Node.js, let's assume on an image called <strong>helloworld</strong>, every minute on Kubernetes using a CronJob.</p> <pre><code>function helloWorld() { console.log('hello world!'); } </code></pre> <p>I don't understand how I can call it in the YAML file.</p> <p><strong>Config.yaml</strong></p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: hello image: helloworld restartPolicy: OnFailure </code></pre>
maopuppets
<p>I think you should use <a href="https://fnproject.io/tutorials/ContainerAsFunction/" rel="nofollow noreferrer">fn</a>. One of the most powerful features of <strong>Fn</strong> is the ability to use custom defined Docker container images as functions. This feature makes it possible to customize your function’s runtime environment including letting you install any Linux libraries or utilities that your function might need. And thanks to the <strong>Fn</strong> CLI’s support for Dockerfiles it’s the same user experience as when developing any function. Deploying your function is how you publish your function and make it accessible to other users and systems. To see the details of what is happening during a function deploy, use the <code>--verbose</code> switch. The first time you build a function of a particular language it takes longer as <strong>Fn</strong> downloads the necessary Docker images. The <code>--verbose</code> option allows you to see this process.</p> <p>A new image will be created, for example <code>node-app-hello</code>. Then you can configure the CronJob.</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello-fn-example spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: hello image: node-app-hello args: - ... restartPolicy: OnFailure </code></pre> <p>You can also add an extra command to run in the hello container.</p> <p>Then simply execute the command:</p> <pre><code>$ kubectl create -f your-cronjob-file.yaml </code></pre> <p>Take a look: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">cron-jobs</a>.</p>
Malgorzata
<p>My system: Ubuntu using microk8s kubectl</p> <p>I'm taking an online course and have run into an issue I can't find a solution to. I can't access the following URL internally in my application</p> <p><a href="http://ingress-nginx-controller.ingress-nginx.svc.cluster.local" rel="nofollow noreferrer">http://ingress-nginx-controller.ingress-nginx.svc.cluster.local</a></p> <p>I get the following error in my web browser</p> <pre><code>&quot;page&quot;: &quot;/&quot;, &quot;query&quot;: {}, &quot;buildId&quot;: &quot;development&quot;, &quot;isFallback&quot;: false, &quot;err&quot;: {&quot;name&quot;: &quot;Error&quot;,&quot;message&quot;: &quot;socket hang up&quot;,&quot;stack&quot;: &quot;Error: socket hang up at connResetException (internal/errors.js:613:14) at Socket.socketOnEnd (_http_client.js:493:23) at Socket.emit (events.js:326:22) at endReadableNT (_stream_readable.js:1226:12) at processTicksAndRejections (internal/process/task_queues.js:80:21)&quot;}, &quot;gip&quot;: true </code></pre> <p>and I get the following dump on node.</p> <pre><code>[client] Error: socket hang up [client] at connResetException (internal/errors.js:613:14) [client] at Socket.socketOnEnd (_http_client.js:493:23) [client] at Socket.emit (events.js:326:22) [client] at endReadableNT (_stream_readable.js:1226:12) [client] at processTicksAndRejections (internal/process/task_queues.js:80:21) { [client] code: 'ECONNRESET', [client] config: { [client] url: 'http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser', [client] method: 'get', [client] headers: { [client] Accept: 'application/json, text/plain, */*', [client] Host: 'tickets.dev', [client] 'User-Agent': 'axios/0.19.2' [client] }, [client] transformRequest: [ [Function: transformRequest] ], [client] transformResponse: [ [Function: transformResponse] ], [client] timeout: 0, [client] adapter: [Function: httpAdapter], [client] xsrfCookieName: 'XSRF-TOKEN', [client] xsrfHeaderName: 'X-XSRF-TOKEN', [client] maxContentLength: -1, [client] validateStatus: [Function: validateStatus], [client] data: undefined [client] }, [client] request: &lt;ref *1&gt; Writable { [client] _writableState: WritableState { [client] objectMode: false, [client] highWaterMark: 16384, [client] finalCalled: false, [client] needDrain: false, [client] ending: false, [client] ended: false, [client] finished: false, [client] destroyed: false, [client] decodeStrings: true, [client] defaultEncoding: 'utf8', [client] length: 0, [client] writing: false, [client] corked: 0, [client] sync: true, [client] bufferProcessing: false, [client] onwrite: [Function: bound onwrite], [client] writecb: null, [client] writelen: 0, [client] afterWriteTickInfo: null, [client] buffered: [], [client] bufferedIndex: 0, [client] allBuffers: true, [client] allNoop: true, [client] pendingcb: 0, [client] prefinished: false, [client] errorEmitted: false, [client] emitClose: true, [client] autoDestroy: true, [client] errored: false, [client] closed: false [client] }, [client] _events: [Object: null prototype] { [client] response: [Function: handleResponse], [client] error: [Function: handleRequestError] [client] }, [client] _eventsCount: 2, [client] _maxListeners: undefined, [client] _options: { [client] protocol: 'http:', [client] maxRedirects: 21, [client] maxBodyLength: 10485760, [client] path: '/api/users/currentuser', [client] method: 'GET', [client] headers: [Object], [client] agent: undefined, [client] agents: [Object], [client] auth: undefined, [client] hostname: 
'ingress-nginx-controller.ingress-nginx.svc.cluster.local', [client] port: null, [client] nativeProtocols: [Object], [client] pathname: '/api/users/currentuser' [client] }, [client] _redirectCount: 0, [client] _redirects: [], [client] _requestBodyLength: 0, [client] _requestBodyBuffers: [], [client] _onNativeResponse: [Function (anonymous)], [client] _currentRequest: ClientRequest { [client] _events: [Object: null prototype], [client] _eventsCount: 6, [client] _maxListeners: undefined, [client] outputData: [], [client] outputSize: 0, [client] writable: true, [client] destroyed: false, [client] _last: true, [client] chunkedEncoding: false, [client] shouldKeepAlive: false, [client] useChunkedEncodingByDefault: false, [client] sendDate: false, [client] _removedConnection: false, [client] _removedContLen: false, [client] _removedTE: false, [client] _contentLength: 0, [client] _hasBody: true, [client] _trailer: '', [client] finished: true, [client] _headerSent: true, [client] socket: [Socket], [client] _header: 'GET /api/users/currentuser HTTP/1.1\r\n' + [client] 'Accept: application/json, text/plain, */*\r\n' + [client] 'Host: tickets.dev\r\n' + [client] 'User-Agent: axios/0.19.2\r\n' + [client] 'Connection: close\r\n' + [client] '\r\n', [client] _onPendingData: [Function: noopPendingOutput], [client] agent: [Agent], [client] socketPath: undefined, [client] method: 'GET', [client] maxHeaderSize: undefined, [client] insecureHTTPParser: undefined, [client] path: '/api/users/currentuser', [client] _ended: false, [client] res: null, [client] aborted: false, [client] timeoutCb: null, [client] upgradeOrConnect: false, [client] parser: null, [client] maxHeadersCount: null, [client] reusedSocket: false, [client] host: 'ingress-nginx-controller.ingress-nginx.svc.cluster.local', [client] protocol: 'http:', [client] _redirectable: [Circular *1], [client] [Symbol(kCapture)]: false, [client] [Symbol(kNeedDrain)]: false, [client] [Symbol(corked)]: 0, [client] [Symbol(kOutHeaders)]: [Object: null prototype] [client] }, [client] _currentUrl: 'http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser', [client] [Symbol(kCapture)]: false [client] }, [client] response: undefined, [client] isAxiosError: true, [client] toJSON: [Function (anonymous)] [client] } </code></pre> <p>my ingress-nginx service</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.152.183.138 &lt;pending&gt; 80:32160/TCP,443:30735/TCP 32d ingress-nginx-controller-admission ClusterIP 10.152.183.198 &lt;none&gt; 443/TCP 32d </code></pre> <p>Project source code</p> <p><a href="https://gitlab.com/emendoza1986/ticketingapp_microservicecourse" rel="nofollow noreferrer">https://gitlab.com/emendoza1986/ticketingapp_microservicecourse</a></p> <p>Here is the source page <a href="https://gitlab.com/emendoza1986/ticketingapp_microservicecourse/-/blob/master/client/api/build-client.js" rel="nofollow noreferrer">https://gitlab.com/emendoza1986/ticketingapp_microservicecourse/-/blob/master/client/api/build-client.js</a> that called the link <a href="http://ingress-nginx-controller.ingress-nginx.svc.cluster.local" rel="nofollow noreferrer">http://ingress-nginx-controller.ingress-nginx.svc.cluster.local</a>, as a temporary* patch, I'm routing directly to the service I needed http://auth-srv:3000 to continue the course.</p>
Emmanuel Mendoza
<p>A <code>socket hang up</code> error almost always indicates that the server closed the connection for various reasons (not being able to process the request in time, running into some error while processing the request, etc.).</p> <p>Check if the library you are using isn't sending requests asynchronously as <code>pm.sendRequest()</code> does.</p> <p>Try to set the <code>Connection: keep-alive</code> header. On the server side disable <code>keepAliveTimeout</code> by setting it equal to <code>0</code>. Another solution is to close free sockets in less than the <code>keepAliveTimeout</code> value (by default it is 5 seconds). The default http agent does not support such a capability (the <code>timeout</code> setting does not do it), so use the <a href="https://github.com/node-modules/agentkeepalive#readme" rel="nofollow noreferrer">agentkeepalive</a> lib:</p> <pre><code>const HttpsAgent = require('agentkeepalive').HttpsAgent; const agent = new HttpsAgent({ freeSocketTimeout: 5000 }); </code></pre> <p>More info: <a href="https://github.com/request/request/issues/2047" rel="nofollow noreferrer">connection-hang-up</a>.</p> <p>Also, if you used <strong><code>require('http')</code></strong> to consume an <strong>https</strong> service, it can show &quot;<strong><code>socket hang up</code></strong>&quot;. Try to change <strong><code>require('http')</code></strong> to <strong><code>require('https')</code></strong> instead.</p> <p>See more: <a href="https://stackoverflow.com/questions/16995184/nodejs-what-does-socket-hang-up-actually-mean">socket-hang-up</a>. Useful blog: <a href="https://medium.com/@ehzevin/hang-in-there-a-solution-to-socket-hang-up-5e04c600fa89" rel="nofollow noreferrer">solutions-to-socket-hang-up</a>.</p> <p>Overall, it's not common to use the ingress service internally within the cluster. The Ingress resource is designed to manage external access to internal services.</p> <blockquote> <p>Note this is also a security concern as you are exposing the auth service (which is a backend service used by your UI layer) externally.</p> </blockquote> <p>Your network load balancer has a pending <code>External IP</code>.</p> <p>MicroK8s comes with MetalLB; you can enable it like this:</p> <pre><code>microk8s enable metallb </code></pre> <p>The pending external IP should turn into an actual IP address then.</p> <p>Take a look: <a href="https://stackoverflow.com/questions/63142877/kubernetes-ingress-nginx-routing-error-cannot-connect-frontend-to-backend">kubernetes-ingress-nginx-routing-error</a>, <a href="http://nginx.org/en/docs/http/request_processing.html" rel="nofollow noreferrer">http-request_processing</a>.</p>
Malgorzata
<p>I'm new to kubernetes. Recently, I successfully managed kubernetes with an online server. But when I moved to an isolated area (offline server) I can't deploy an image with kubectl, even though all of my environment is running well, and I got stuck on this. The only difference is the internet connection.</p> <p>Currently, I can't deploy the kubernetes dashboard and some images on the offline server. This is an example of my kubectl commands on the offline server (I downloaded the tar file on the online server):</p> <pre><code># docker load &lt; nginx.tar # kubectl create deployment test-nginx --image=nginx # kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default test-nginx-7d97ffc85d-2s4lh 0/1 ImagePullBackOff 0 50s kube-system coredns-6955765f44-2s54f 1/1 Running 1 26h kube-system coredns-6955765f44-wmtq9 1/1 Running 1 26h kube-system etcd-devkubeapp01 1/1 Running 1 26h kube-system kube-apiserver-devkubeapp01 1/1 Running 1 26h kube-system kube-controller-manager-devkubeapp01 1/1 Running 1 26h kube-system kube-flannel-ds-amd64-czn8z 1/1 Running 0 26h kube-system kube-flannel-ds-amd64-d58x4 1/1 Running 0 26h kube-system kube-flannel-ds-amd64-z9w9x 1/1 Running 0 26h kube-system kube-proxy-9wxj2 1/1 Running 0 26h kube-system kube-proxy-mr76b 1/1 Running 1 26h kube-system kube-proxy-w5pvm 1/1 Running 0 26h kube-system kube-scheduler-devkubeapp01 1/1 Running 1 26h # kubectl get nodes NAME STATUS ROLES AGE VERSION devkubeapp01 Ready master 26h v1.17.2 devkubeapp02 Ready minion1 26h v1.17.2 devkubeapp03 Ready minion2 25h v1.17.2 # docker images REPOSITORY TAG IMAGE ID CREATED SIZE nginx latest 5ad3bd0e67a9 6 days ago 127MB k8s.gcr.io/kube-proxy v1.17.2 cba2a99699bd 10 days ago 116MB k8s.gcr.io/kube-apiserver v1.17.2 41ef50a5f06a 10 days ago 171MB k8s.gcr.io/kube-controller-manager v1.17.2 da5fd66c4068 10 days ago 161MB k8s.gcr.io/kube-scheduler v1.17.2 f52d4c527ef2 10 days ago 94.4MB k8s.gcr.io/coredns 1.6.5 70f311871ae1 2 months ago 41.6MB k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 3 months ago 288MB quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 12 months ago 52.6MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB </code></pre> <p>My Pod can't run well, so the status ContainerCreating turns into ImagePullBackOff (I tried it on the online server: when I disconnected the Internet the status was the same =&gt; ImagePullBackOff). Can anyone help to solve this? Does kubernetes support deploying images in an offline environment?</p> <p>Thanks.</p>
amsalmaestro
<p>As already stated in my previous comment:</p> <blockquote> <p>I suspect that your <code>imagePullPolicy</code> might be misconfigured.</p> </blockquote> <p>and further proven by the logs you have provided:</p> <blockquote> <p>Error from server (BadRequest): container "nginx" in pod "test-nginx-7d97ffc85d-2s4lh" is waiting to start: trying and failing to pull image</p> </blockquote> <p>the problem lays within the <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="nofollow noreferrer"><code>imagePullPolicy</code> configuration</a>.</p> <p>As stated in the <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p><strong>Pre-pulled Images</strong></p> <p>By default, the kubelet will try to pull each image from the specified registry. However, if the <code>imagePullPolicy</code> property of the container is set to <code>IfNotPresent</code> or <code>Never</code>, then a local image is used (preferentially or exclusively, respectively).</p> <p>If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.</p> </blockquote> <p>So basically as already mentioned by @Eduardo you need to make sure that you have the same images on all nodes and your <code>imagePullPolicy</code> is correctly configured. </p> <p>However, make sure the container always uses the same version of the image, you can specify its <a href="https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier" rel="nofollow noreferrer">digest</a>, for example <code>sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2</code>. The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.</p> <p>This way you would similar avoid issues in the future as keeping the exact same version of the image cluster wide is the biggest trap in this scenario.</p> <p>I hope this helps and expands on the previous answer (which is correct) as well as proves my point from the very beginning. </p>
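<p>As an illustration only (not part of the original answer), the deployment from the question could be patched so the kubelet uses the image loaded with <code>docker load</code> instead of trying to pull it. The container name <code>nginx</code> is what <code>kubectl create deployment test-nginx --image=nginx</code> normally generates, but verify it in your own cluster first:</p> <pre><code>kubectl patch deployment test-nginx -p \
  '{&quot;spec&quot;:{&quot;template&quot;:{&quot;spec&quot;:{&quot;containers&quot;:[{&quot;name&quot;:&quot;nginx&quot;,&quot;imagePullPolicy&quot;:&quot;IfNotPresent&quot;}]}}}}'
</code></pre> <p>Setting the policy to <code>Never</code> also works if you prefer the pod to fail fast whenever the image is missing locally.</p>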
Wytrzymały Wiktor
<p>After remove Kubernetes and re-install it on both master and node, I can't no longer install NGINX Ingress Controller to work correctly.</p> <p>First, To remove Kubernetes I have done:</p> <pre><code># On Master k delete namespace,service,job,ingress,serviceaccounts,pods,deployment,services --all k delete node k8s-node-0 sudo kubeadm reset sudo systemctl stop kubelet sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube* -y sudo apt-get autoremove -y sudo rm -rf ~/.kube /etc/cni # On Node sudo kubeadm reset sudo systemctl stop kubelet sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube* -y sudo apt-get autoremove -y sudo rm -rf ~/.kube </code></pre> <p>Then, to re-install everything back, I have done:</p> <pre><code># On Master sudo apt install -y kubelet kubeadm kubectl sudo kubeadm init mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml # On Node sudo apt install -y kubelet kubeadm kubectl sudo kubeadm join 10.0.8.135:6443 --token 31xags.h9mr5dz6ncn632uv --discovery-token-ca-cert-hash sha256:c6b479e2130799a4e4d41c4a02dab54eedc431806171b92f4bbc1978d84bd91d </code></pre> <p>Then to install NGINX Ingress Controller:</p> <pre><code>git clone https://github.com/nginxinc/kubernetes-ingress.git cd kubernetes-ingress/deployments k apply -f common/ns-and-sa.yaml k apply -f rbac/rbac.yaml k apply -f common/default-server-secret.yaml k apply -f common/nginx-config.yaml k apply -f deployment/nginx-ingress.yaml k apply -f daemon-set/nginx-ingress.yaml </code></pre> <p>Then when I execute <code>k get all -n nginx-ingress</code> i got:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/nginx-ingress-2thp4 0/1 CrashLoopBackOff 7 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/nginx-ingress 1 1 0 1 0 &lt;none&gt; 14m </code></pre> <p>and the detail <code>k describe pods nginx-ingress-2thp4 -n nginx-ingress</code>:</p> <pre><code>Name: nginx-ingress-2thp4 Namespace: nginx-ingress Priority: 0 Node: k8s-node-0/10.0.8.66 Start Time: Mon, 28 Sep 2020 15:22:01 +0700 Labels: app=nginx-ingress controller-revision-hash=646bf8d696 pod-template-generation=1 Annotations: cni.projectcalico.org/podIP: 192.168.11.198/32 cni.projectcalico.org/podIPs: 192.168.11.198/32 Status: Running IP: 192.168.11.198 IPs: IP: 192.168.11.198 Controlled By: DaemonSet/nginx-ingress Containers: nginx-ingress: Container ID: docker://175d13f95564d98c06af5514b0519a035e5ee95872bb428fa94c9c2bfc6776a5 Image: nginx/nginx-ingress:edge Image ID: docker-pullable://nginx/nginx-ingress@sha256:fdb07d0a639d0f2c761b4c5a93f6d5063b972b8ae33252bb7755bb5fb6da4fda Ports: 80/TCP, 443/TCP, 8081/TCP Host Ports: 80/TCP, 443/TCP, 0/TCP Args: -nginx-configmaps=$(POD_NAMESPACE)/nginx-config -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 255 Started: Mon, 28 Sep 2020 15:33:09 +0700 Finished: Mon, 28 Sep 2020 15:33:09 +0700 Ready: False Restart Count: 7 Readiness: http-get http://:readiness-port/nginx-ready delay=0s timeout=1s period=1s #success=1 #failure=3 Environment: POD_NAMESPACE: nginx-ingress (v1:metadata.namespace) POD_NAME: nginx-ingress-2thp4 (v1:metadata.name) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-j9hjm (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True 
Volumes: nginx-ingress-token-j9hjm: Type: Secret (a volume populated by a Secret) SecretName: nginx-ingress-token-j9hjm Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 15m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-2thp4 to k8s-node-0 Normal Pulled 15m kubelet Successfully pulled image &quot;nginx/nginx-ingress:edge&quot; in 9.106863831s Normal Pulled 15m kubelet Successfully pulled image &quot;nginx/nginx-ingress:edge&quot; in 3.626387366s Normal Pulled 14m kubelet Successfully pulled image &quot;nginx/nginx-ingress:edge&quot; in 3.839665529s Normal Created 14m (x4 over 15m) kubelet Created container nginx-ingress Normal Started 14m (x4 over 15m) kubelet Started container nginx-ingress Normal Pulled 14m kubelet Successfully pulled image &quot;nginx/nginx-ingress:edge&quot; in 3.846965585s Normal Pulling 13m (x5 over 15m) kubelet Pulling image &quot;nginx/nginx-ingress:edge&quot; Warning BackOff 14s (x70 over 15m) kubelet Back-off restarting failed container </code></pre> <p>And logs <code>k logs nginx-ingress -n nginx-ingress</code>:</p> <pre><code>I0928 08:38:16.776841 1 main.go:245] Starting NGINX Ingress controller Version= GitCommit= W0928 08:38:16.797787 1 main.go:284] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes &gt;= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class. F0928 08:38:16.802335 1 main.go:288] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io &quot;nginx&quot; not found </code></pre> <p>Here is the kubectl version:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.2&quot;, GitCommit:&quot;f5743093fd1c663cb0cbc89748f730662345d44d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-09-16T13:41:02Z&quot;, GoVersion:&quot;go1.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.2&quot;, GitCommit:&quot;f5743093fd1c663cb0cbc89748f730662345d44d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-09-16T13:32:58Z&quot;, GoVersion:&quot;go1.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>I have done a lot of searching but still couldn't find a way to fix this problem. Also have tried to re-install many time already but it never works.</p>
jujuzi
<p>I think this may happen because you want to use the nginx ingress controller in limited namespaces. Please try this patch applied to your <code>ClusterRole</code> definition <code>nginx-ingress-clusterrole</code>:</p> <pre><code>@@ -157,7 +160,7 @@ rules: - list - watch - apiGroups: - - &quot;extensions&quot; + - &quot;networking.k8s.io&quot; resources: - ingresses verbs: </code></pre> <p>Take a look: <a href="https://github.com/kubernetes/ingress-nginx/issues/4296#issuecomment-509565573" rel="nofollow noreferrer">cluster-role-nginx-controller</a>.</p>
Malgorzata
<p>I am trying to install ceph and configure it on a mounted disk. I have the disk location; however, I face a problem when I use the --data parameter.</p> <p><strong>command:</strong> ceph-deploy osd create --data /home/ceph-admin/ceph-data/vda node-ip-address</p> <p><strong>error:</strong> ceph-deploy: error: unrecognized arguments: --data</p> <p><strong>ceph version:</strong> ceph version 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)</p> <p><strong>ceph-deploy version:</strong> 1.5.38</p> <p>All the documentation I found uses the --data parameter. Is there any workaround?</p> <p>Please help!</p> <p>Thanks in advance.</p>
Abdullah Alsowaygh
<p>You have to upgrade your <strong>ceph-deploy</strong> to version <strong>2.x</strong> to be able to deploy nautilus to your cluster. Here are sample commands to update ceph-deploy on Ubuntu:</p> <pre><code>$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add - $ echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list $ sudo apt update $ sudo apt install ceph-deploy </code></pre> <p>Hope it can help.</p>
doubles
<p>I launched minikube with the docker driver on a remote machine and I have used a NodePort service for a particular pod. I believe NodePort exposes the port on the minikube docker container. Running <code>minikube ip</code> gave me the IP of the docker container in which minikube runs. How can I map the port from the minikube container to the host port so that I can access it remotely? A different approach, other than using driver=none or restarting minikube, is appreciated as I do not want to restart my Spinnaker cluster.</p>
aayush.ag21
<p>There is a <code>minikube service &lt;SERVICE_NAME&gt; --url</code> command which will give you a url where you can access the service. In order to open the exposed service, the <code>minikube service &lt;SERVICE_NAME&gt;</code> command can be used:</p> <pre><code>$ minikube service example-minikube Opening kubernetes service default/hello-minikube in default browser... </code></pre> <p>This command will open the specified service in your default browser.</p> <p>There is also a <code>--url</code> option for printing the url of the service which is what gets opened in the browser:</p> <pre><code>$ minikube service example-minikube --url http://192.168.99.100:31167 </code></pre> <p>You can run <code>minikube service list</code> to get a list of all available services with their corresponding URLs. Also make sure the service points to the correct pod by using the correct <code>selector</code>.</p> <p>You can also try to execute this command, which forwards the NodePort (30000 here) from the minikube container to the host:</p> <pre><code>ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L *:30000:0.0.0.0:30000 </code></pre> <p>Take a look: <a href="https://github.com/kubernetes/minikube/issues/877" rel="noreferrer">minikube-service-port-forward</a>, <a href="https://stackoverflow.com/questions/40767164/expose-port-in-minikube/40774861#40774861">expose-port-minikube</a>, <a href="https://github.com/kubernetes/minikube/blob/master/docs/minikube_service.md" rel="noreferrer">minikube-service-documentation</a>.</p>
Malgorzata
<p>I created a namespace to get logs with filebeats and save to elasticsearch. Why not save on elasticsearch the fields about Kubernetes how to example follow?</p> <p><a href="https://www.elastic.co/guide/en/beats/filebeat/master/add-kubernetes-metadata.html" rel="nofollow noreferrer">Kubernetes fields</a></p> <pre><code> "kubernetes" : { "labels" : { "app" : "MY-APP", "pod-template-hash" : "959f54cd", "serving" : "true", "version" : "1.0", "visualize" : "true" }, "pod" : { "uid" : "e20173cb-3c5f-11ea-836e-02c1ee65b375", "name" : "MY-APP-959f54cd-lhd5p" }, "node" : { "name" : "ip-xxx-xx-xx-xxx.ec2.internal" }, "container" : { "name" : "istio" }, "namespace" : "production", "replicaset" : { "name" : "MY-APP-959f54cd" } } </code></pre> <p>Currently is being saved like this: </p> <pre><code> "_source" : { "@timestamp" : "2020-01-23T12:33:14.235Z", "ecs" : { "version" : "1.0.0" }, "host" : { "name" : "worker-node1" }, "agent" : { "hostname" : "worker-node1", "id" : "xxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxxxx", "version" : "7.1.1", "type" : "filebeat", "ephemeral_id" : "xxxx-xxxx-xxxx-xxxxxxxxxxxxx" }, "log" : { "offset" : xxxxxxxx, "file" : { "path" : "/var/lib/docker/containers/xxxx96ec2bfd9a3e4f4ac83581ad90/7fd55e1249aa009df3f8e3250c967bbe541c9596xxxxxac83581ad90-json.log" } }, "stream" : "stdout", "message" : "xxxxxxxx", "input" : { "type" : "docker" } } </code></pre> <p>To follow my filebeat.config: </p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.config: inputs: # Mounted `filebeat-inputs` configmap: path: ${path.config}/inputs.d/*.yml # Reload inputs configs as they change: reload.enabled: false multiline.pattern: '^[[:space:]]' multiline.negate: false multiline.match: after modules: path: ${path.config}/modules.d/*.yml # Reload module configs as they change: reload.enabled: false # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this: #filebeat.autodiscover: # providers: # - type: kubernetes # hints.enabled: true processors: - add_cloud_metadata: - add_kubernetes_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}'] protocol: "http" setup.ilm.enabled: false ilm.enabled: false xpack.monitoring: enabled: true </code></pre> <p>DamemonSet is shown below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat hostNetwork: true terminationGracePeriodSeconds: 30 containers: - name: filebeat image: docker.elastic.co/beats/filebeat-oss:7.1.1 args: [ "-c", "/etc/filebeat.yml", "-e", ] env: - name: ELASTICSEARCH_HOST value: xxxxxxxxxxxxx - name: ELASTICSEARCH_PORT value: "9200" securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: inputs mountPath: /usr/share/filebeat/inputs.d readOnly: true - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true volumes: - name: config configMap: defaultMode: 0600 name: filebeat-config - name: varlibdockercontainers hostPath: path: 
/var/lib/docker/containers - name: inputs configMap: defaultMode: 0600 name: filebeat-inputs # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: path: /var/lib/filebeat-data type: DirectoryOrCreate </code></pre> <p>Before applying the config to kubernetes, I removed every Filebeat registry from elasticsearch.</p>
Matheus Warmeling Matias
<p>As already stated in my comment. It looks like your <code>ConfigMap</code> is missing the <code>paths:</code> to containers' logs. It should be something like this:</p> <pre><code> type: container paths: - /var/log/containers/*${data.kubernetes.container.id}.log </code></pre> <p>Compare your config file with <a href="https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml" rel="nofollow noreferrer">this one</a>.</p> <p>I hope it helps.</p>
Wytrzymały Wiktor
<p>What is the difference between selecting the user to run as in the <code>securityContext.runAsUser</code> section of my k8s deployment, vs specifying the user using <code>USER myuser</code> in the Dockerfile? </p> <p>I'm particularly interested in if there are security concerns associated with <code>USER myuser</code> that don't exist under <code>securityContext</code></p>
Mike S
<h3>MustRunAsNonRoot</h3> <p><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups" rel="nofollow noreferrer">Users and groups</a></p> <blockquote> <p>Requires that the pod be submitted with a <code>non-zero runAsUser</code> or have the <code>USER directive defined</code> (using a numeric UID) in the image. Pods which have specified neither runAsNonRoot nor runAsUser settings will be mutated to set <code>runAsNonRoot=true</code>, thus requiring a defined <code>non-zero numeric USER directive</code> in the container. No default provided. Setting allowPrivilegeEscalation=false is strongly recommended with this strategy.</p> </blockquote> <p>So the <code>USER</code> directive is important when you want the container to be started as non-root.</p>
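<p>To illustrate where each setting lives, here is a minimal sketch (not a complete hardening policy; the image name and UID 1000 are placeholders). In the image:</p> <pre><code># Dockerfile: the user is baked into the image as a default
FROM alpine:3.12
RUN adduser -D -u 1000 appuser
USER 1000
</code></pre> <p>And in the pod spec, where it is enforced by the cluster and overrides whatever <code>USER</code> the image declares:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: runas-demo
spec:
  securityContext:
    runAsUser: 1000      # takes precedence over the Dockerfile USER
    runAsNonRoot: true   # the kubelet refuses to start the container as root
  containers:
  - name: app
    image: myimage:latest
</code></pre> <p>The practical difference is that the Dockerfile <code>USER</code> is a default chosen by the image author, while <code>securityContext.runAsUser</code> is enforced by the cluster and can be mandated by policy, so it cannot be silently undone by swapping the image.</p>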
Prakash Krishna
<p>I have problems with connecting volume per iSCSI from Kubernetes. When I try with iscisiadm from worker node, it works. This is what I get from kubectl description pod.</p> <pre><code>Normal Scheduled &lt;unknown&gt; default-scheduler Successfully assigned default/iscsipd to k8s-worker-2 Normal SuccessfulAttachVolume 4m2s attachdetach-controller AttachVolume.Attach succeeded for volume &quot;iscsipd-rw&quot; Warning FailedMount 119s kubelet, k8s-worker-2 Unable to attach or mount volumes: unmounted volumes=[iscsipd-rw], unattached volumes=[iscsipd-rw default-token-d5glz]: timed out waiting for the condition Warning FailedMount 105s (x9 over 3m54s) kubelet, k8s-worker-2 MountVolume.WaitForAttach failed for volume &quot;iscsipd-rw&quot; : failed to get any path for iscsi disk, last err seen:iscsi: failed to attach disk: Error: iscsiadm: No records found(exit status 21) </code></pre> <p>I'm just using <code>iscsi.yaml</code> file from kubernetes.io!</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: iscsipd spec: containers: - name: iscsipd-rw image: kubernetes/pause volumeMounts: - mountPath: &quot;/mnt/iscsipd&quot; name: iscsipd-rw volumes: - name: iscsipd-rw iscsi: targetPortal: 192.168.34.32:3260 iqn: iqn.2020-07.int.example:sql lun: 0 fsType: ext4 readOnly: true </code></pre> <p>Open-iscsi is installed on all worker nodes(just two of them).</p> <pre><code>● iscsid.service - iSCSI initiator daemon (iscsid) Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: e Active: active (running) since Fri 2020-07-03 10:24:26 UTC; 4 days ago Docs: man:iscsid(8) Process: 20507 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS) Process: 20497 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, st Main PID: 20514 (iscsid) Tasks: 2 (limit: 4660) CGroup: /system.slice/iscsid.service ├─20509 /sbin/iscsid └─20514 /sbin/iscsid </code></pre> <p>ISCSI Target is created on the IBM Storwize V7000. Without CHAP.</p> <p>I tried to connect with iscsiadm from worker node and it works.</p> <pre><code>sudo iscsiadm -m discovery -t sendtargets -p 192.168.34.32 192.168.34.32:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1 192.168.34.34:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1 sudo iscsiadm -m node --login Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] (multiple) Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] (multiple) Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] successful. Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] successful. Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 32768 bytes / 32768 bytes Disklabel type: dos Disk identifier: 0x5b3d0a3a Device Boot Start End Sectors Size Id Type /dev/sdb1 2048 209715199 209713152 100G 83 Linux </code></pre> <p>Is anyone facing the same problem?</p>
markoc
<p><strong>Remember to not use a hostname for the target.</strong> Use the IP. For some reason, if the target is a hostname, it barfs with the error about requesting a duplicate session. If the target is an IP, it works fine. I now have multiple iSCSI targets mounted in various pods, and I am absolutely ecstatic.</p> <p>You may also have an authentication issue with your iSCSI target.</p> <p>If you don't use CHAP authentication yet, you still have to disable authentication. For example, if you use <code>targetcli</code>, you can run the commands below to disable it.</p> <pre><code>$ sudo targetcli /&gt; /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # will disable auth /&gt; /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # will force to use tpg1 auth mode by default </code></pre> <p>If this doesn't help you, please share your iSCSI target configuration or the guide that you followed.</p> <p><strong>It is important to check that all of your nodes have the open-iscsi package installed.</strong></p> <p>Take a look: <a href="https://github.com/rancher/rancher/issues/12433" rel="nofollow noreferrer">kubernetes-iSCSI</a>, <a href="https://stackoverflow.com/questions/55920173/kubernetes-pod-cannot-mount-iscsi-volume-failed-to-get-any-path-for-iscsi-disk">volume-failed-iscsi-disk</a>, <a href="https://discuss.kubernetes.io/t/solved-iscsi-into-container-fails/1034" rel="nofollow noreferrer">iscsi-into-container-fails</a>.</p>
Malgorzata
<p>I've got a specific problem. When the container is running in a Pod, the application contained in it works correctly, while in Kubernetes the status displayed is Error. In Events there are the following problems:</p> <pre><code>Readiness probe failed: HTTP Probe failed with statuscode: 404 Liveness probe failed: HTTP Probe failed with statuscode: 404 </code></pre> <p>The same errors are displayed in four containers with applications (Spring Boot). On the other hand, when I launch a container with a simple application (writing numbers from 0 to 10) in Kubernetes, the status is success. I'm just learning Kubernetes, so I'd like to ask for help: what may be the cause of the problems?</p>
xampo
<p>Check the liveness probe and readiness probe in your deployment YAML and make sure the path is correct (the 404 means nothing is served at the configured probe path). If it is still not resolved, increase <code>initialDelaySeconds</code>.</p>
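<p>A minimal sketch of what the probes could look like for a Spring Boot app exposing the Actuator health endpoint (these fields go under the container in the deployment spec; the path and port are assumptions, so adjust them to whatever your application actually serves):</p> <pre><code>livenessProbe:
  httpGet:
    path: /actuator/health   # must return 2xx/3xx, otherwise the kubelet marks the probe as failed
    port: 8080
  initialDelaySeconds: 60    # give Spring Boot time to start before the first probe
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
</code></pre>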
sudhanshu pati
<p>I have deployed a stateless Go web app with Redis on Kubernetes. The Redis pod is running fine, but the main issue is with the application pod, which gets the error <strong>dial tcp: i/o timeout</strong> in the log. Thank you!!</p> <p><a href="https://i.stack.imgur.com/e9hgM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e9hgM.png" alt="enter image description here"></a></p>
shivam
<p>Please take a look: <a href="https://stackoverflow.com/questions/57866030/azure-kubernetes-how-can-i-deal-with-error-dial-tcp-10-240-0-410250-i-o-time">aks-vm-timeout</a>.</p> <blockquote> <p>Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server. Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command. If it isn't, force deletion of the pod and it will restart.</p> </blockquote> <p>Also make sure the Redis port is open.</p> <p>More info about troubleshooting: <a href="https://learn.microsoft.com/en-us/azure/aks/troubleshooting#i-cant-get-logs-by-using-kubectl-logs-or-i-cant-connect-to-the-api-server-im-getting-error-from-server-error-dialing-backend-dial-tcp-what-should-i-do" rel="nofollow noreferrer">dial-backend-troubleshooting</a>.</p> <p><strong>EDIT:</strong></p> <p>Answering your question about tunnelfront:</p> <p><code>tunnelfront</code> is an AKS system component that's installed on every cluster and helps to facilitate secure communication between your hosted Kubernetes control plane and your nodes. It's needed for certain operations like kubectl exec, and will be redeployed to your cluster on version upgrades.</p> <p>Speaking about the VM:</p> <p>I would SSH into it and start watching the disk IO latency using bpf / bcc tools and the docker / kubelet logs.</p>
Malgorzata
<p>I am getting the following error when I am trying to run <code>rake db:migrate</code> on my ec2 instance. I have a RDS postgres instance.</p> <p><code>Errno::EACCES: Permission denied @ rb_sysopen - /app/db/schema.rb</code>**</p> <p>below are the relevant contents of my Dockerfile</p> <pre><code>FROM ubuntu:18.04 RUN apt-get update RUN useradd -m deploy WORKDIR /app RUN mkdir -p vendor COPY vendor/cache vendor/cache RUN bundle install --deployment --local --without test development COPY . . RUN SECRET_KEY_BASE=111 RAILS_ENV=production bin/rake assets:precompile RUN mkdir -p tmp/pids RUN chown -R deploy tmp log USER deploy ENV RAILS_LOG_TO_STDOUT 1 EXPOSE 3000 CMD bin/rake db:migrate &amp;&amp; bundle exec passenger start --address 0.0.0.0 --port 3000 --auto --disable-anonymous-telemetry -e production </code></pre> <p>here is my deployment yaml file</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: eks-learning-backend spec: template: metadata: labels: name: eks-learning-backend spec: containers: - name: rails-app image: zzz.us-east-1.amazonaws.com/eks:16 env: - name: EKS_DATABASE_NAME valueFrom: secretKeyRef: name: database-config key: database_name - name: EKS_DATABASE_HOST value: zzz.us-east-1.rds.amazonaws.com - name: EKS_DATABASE_USERNAME valueFrom: secretKeyRef: name: database-config key: username - name: EKS_DATABASE_PASSWORD valueFrom: secretKeyRef: name: database-config key: password - name: RAILS_MASTER_KEY value: zzxx - name: RAILS_ENV valueFrom: fieldRef: fieldPath: metadata.namespace </code></pre> <p>Any help in this would be really great! Thanks.</p>
opensource-developer
<p>The problem is that the user has insufficient permissions. You only included <code>RUN chown -R deploy tmp log</code>, while you also need to give the <code>deploy</code> user write access to the <code>/app/db/</code> directory, because <code>rake db:migrate</code> writes <code>db/schema.rb</code>. Adding an additional <code>chown</code> for the db directory will solve the issue.</p>
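<p>A sketch of what the change in the Dockerfile from the question could look like (only the <code>db</code> argument on the <code>chown</code> line is new):</p> <pre><code>RUN mkdir -p tmp/pids
# give the deploy user write access to db/ as well, so `rake db:migrate` can write db/schema.rb
RUN chown -R deploy tmp log db
USER deploy
</code></pre>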
Wytrzymały Wiktor
<p>When running the skaffold dev command, I get this error:</p> <pre><code>- for: &quot;STDIN&quot;: admission webhook &quot;validate.nginx.ingress.kubernetes.io&quot; denied the request: host &quot;ticketing.dev&quot; and path &quot;/api/users/?(.*)&quot; is already defined in ingress default/ingress-service time=&quot;2021-06-20T19:55:11+03:00&quot; level=warning msg=&quot;Skipping deploy due to error: kubectl apply: exit status 1&quot; </code></pre> <p>When I change the path &quot;/api/users/?(.*)&quot; to something like &quot;/api/usersssss/?(.*)&quot;, the error disappears.</p> <p>Restarting my machine doesn't help.</p> <p>Any ideas?</p> <p>ingress-srv.yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: ticketing.dev http: paths: - path: /api/users/?(.*) pathType: Prefix backend: service: name: auth-srv port: number: 3000 </code></pre> <p>skaffold.yaml:</p> <pre><code>apiVersion: skaffold/v2beta17 kind: Config metadata: name: tickets build: artifacts: - image: natankamusher/auth context: auth docker: dockerfile: Dockerfile deploy: kubectl: manifests: - infra/k8s/auth-depl.yaml - infra/k8s/ingress-srv.yaml </code></pre>
Nati Kamusher
<p>Run the command below to delete the conflicting Ingress resource:</p> <pre><code>kubectl delete Ingress ingress-srv </code></pre> <p>ingress-srv is the name of the Ingress resource.</p>
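<p>Note that the admission webhook error in the question names the conflicting resource as <code>default/ingress-service</code>, so if the error persists it may be an older ingress left over from a previous deploy. A hedged way to find and remove it:</p> <pre><code>kubectl get ingress --all-namespaces
kubectl delete ingress ingress-service -n default
</code></pre>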
sadab khan
<p>I have a Spring Boot application hosted on k8s with mTLS enforced on the app itself. I am able to do the mTLS on the connectivity by doing SSL termination at the Ingress level and then forwarding the certificates to the Spring Boot pod as well.</p> <p>The problem is with the liveness and readiness probes, as I am currently not sure how to send the certificates in the readiness/liveness probes.</p> <p>Any help would be appreciated.</p>
nischay goyal
<p>From the official documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">configuring probes</a>:</p> <blockquote> <p>If scheme field is set to HTTPS, the kubelet sends an HTTPS request skipping the certificate verification.</p> </blockquote> <p>This is what the manifest would look like:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: run: nginx name: alive-n-ready-https spec: containers: - name: nginx image: viejo/nginx-mockit livenessProbe: httpGet: path: / port: 443 scheme: HTTPS readinessProbe: httpGet: path: / port: 443 scheme: HTTPS </code></pre> <p>And while without scheme, the probes would fail with <code>400</code> (bad request), as you are sending a http packet to an endpoint that expects https:</p> <pre><code>10.132.15.199 - - [27/May/2020:18:10:36 +0000] "GET / HTTP/1.1" 400 271 "-" "kube-probe/1.17" </code></pre> <p>With <code>scheme: HTTPS</code>, it would succeed:</p> <pre><code>10.132.15.199 - - [27/May/2020:18:26:28 +0000] "GET / HTTP/2.0" 200 370 "-" "kube-probe/1.17" </code></pre>
Malgorzata
<p>The big picture is: I'm trying to install WordPress with plugins in Kubernetes, for development in Minikube.</p> <p>I want to use the official wp-cli Docker image to install the plugins. I am trying to use a write-enabled persistence volume. In Minikube, I turn on the mount to minikube cluster with command:</p> <pre><code>minikube mount ./src/plugins:/data/plugins </code></pre> <p>Now, the PV definition looks like this:</p> <pre><code>--- apiVersion: v1 kind: PersistentVolume metadata: name: wordpress-install-plugins-pv labels: app: wordpress env: dev spec: capacity: storage: 5Gi storageClassName: "" volumeMode: Filesystem accessModes: - ReadWriteOnce hostPath: path: /data/plugins </code></pre> <p>The PVC looks like this:</p> <pre><code>--- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wordpress-install-plugins-pvc labels: app: wordpress spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: "" volumeName: wordpress-install-plugins-pv </code></pre> <p>Both the creation and the binding are succesful. The Job definition for plugin installation looks like this:</p> <pre><code>--- apiVersion: batch/v1 kind: Job metadata: name: install-plugins labels: env: dev app: wordpress spec: template: spec: securityContext: fsGroup: 82 # www-data volumes: - name: plugins-volume persistentVolumeClaim: claimName: wordpress-install-plugins-pvc - name: config-volume configMap: name: wordpress-plugins containers: - name: wpcli image: wordpress:cli volumeMounts: - mountPath: "/configmap" name: config-volume - mountPath: "/var/www/html/wp-content/plugins" name: plugins-volume command: ["sh", "-c", "id; \ touch /var/www/html/wp-content/plugins/test; \ ls -al /var/www/html/wp-content; \ wp core download --skip-content --force &amp;&amp; \ wp config create --dbhost=mysql \ --dbname=$MYSQL_DATABASE \ --dbuser=$MYSQL_USER \ --dbpass=$MYSQL_PASSWORD &amp;&amp; \ cat /configmap/wp-plugins.txt | xargs -I % wp plugin install % --activate" ] env: - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secrets key: username - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: mysql-secrets key: password - name: MYSQL_DATABASE valueFrom: secretKeyRef: name: mysql-secrets key: dbname restartPolicy: Never backoffLimit: 3 </code></pre> <p>Again, the creation looks fine and all the steps look fine. The problem I have is that apparently the permissions to the mounted volume do not allow the current user to write to the folder. Here's the log contents:</p> <pre><code>uid=82(www-data) gid=82(www-data) groups=82(www-data) touch: /var/www/html/wp-content/plugins/test: Permission denied total 9 drwxr-xr-x 3 root root 4096 Mar 1 20:15 . drwxrwxrwx 3 www-data www-data 4096 Mar 1 20:15 .. drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins Downloading WordPress 5.3.2 (en_US)... md5 hash verified: 380d41ad22c97bd4fc08b19a4eb97403 Success: WordPress downloaded. Success: Generated 'wp-config.php' file. Installing WooCommerce (3.9.2) Downloading installation package from https://downloads.wordpress.org/plugin/woocommerce.3.9.2.zip... Unpacking the package... Warning: Could not create directory. Warning: The 'woocommerce' plugin could not be found. Error: No plugins installed. </code></pre> <p>Am I doing something wrong? I tried different <code>minikube mount</code> options, but nothing really helped! Did anyone run into this issue with minikube?</p>
mhaligowski
<p>This is a long-standing issue that prevents a non-root user from writing to a container volume when mounting a <code>hostPath</code> PersistentVolume in Minikube.</p> <p>There are two common workarounds (a sketch of the second one is shown below):</p> <ol> <li><p>Simply use the root user.</p></li> <li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">Configure a Security Context for a Pod or Container</a> using <code>runAsUser</code>, <code>runAsGroup</code> and <code>fsGroup</code>. You can find detailed info with an example in the link provided.</p></li> </ol> <p>Please let me know if that helped.</p>
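<p>As an illustration of the second workaround, the Job spec from the question could declare the full security context (the UID/GID 82 is simply the www-data id already used there; depending on the Minikube driver this may still not be sufficient for hostPath volumes):</p> <pre><code>spec:
  template:
    spec:
      securityContext:
        runAsUser: 82    # www-data
        runAsGroup: 82
        fsGroup: 82
</code></pre>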
Wytrzymały Wiktor
<p>I have set up a kubernetes cluster locally using minikube and installed Jenkins X on it, but while creating a project with <strong>jx create spring</strong> I am getting the error <code>error: Failed to create repository /demo5 due to: POST https://api.github.com/user/repos: 404 Not Found []</code>. I have also tried <strong>jx create spring --git-username=user_name --git-api-token=token</strong>.</p>
Sarika Jamdade
<p>I ran into this issue while running through the getting started guide for <a href="https://toolkit.fluxcd.io/get-started/" rel="nofollow noreferrer">https://toolkit.fluxcd.io/get-started/</a>, while trying to create a repo from a token that I had created. It turns out I had not given the token that I was using for the guide enough permissions to be able to create a repo.</p> <p>Try checking the permissions of the token. That immediately resolved the issue for me.</p> <p>I think GitHub responds with a poor response code choice in this case. It should be an unauthorised response.</p>
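<p>If it helps, a quick way to see the scopes of a classic personal access token is to look at the <code>X-OAuth-Scopes</code> header GitHub returns (replace <code>&lt;TOKEN&gt;</code> with your own token; creating repositories typically requires the <code>repo</code> scope):</p> <pre><code>curl -sS -I -H &quot;Authorization: token &lt;TOKEN&gt;&quot; https://api.github.com/user | grep -i x-oauth-scopes
</code></pre>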
Jared Rieger
<p>I am interacting with a GKE cluster and trying to understand what are my permissions</p> <pre><code>➢ kubectl get roles --all-namespaces NAMESPACE NAME AGE istio-system istio-ingressgateway-sds 38d kube-public system:controller:bootstrap-signer 38d kube-system cloud-provider 38d kube-system extension-apiserver-authentication-reader 38d kube-system gce:cloud-provider 38d kube-system sealed-secrets-key-admin 38d kube-system system::leader-locking-kube-controller-manager 38d kube-system system::leader-locking-kube-scheduler 38d kube-system system:controller:bootstrap-signer 38d kube-system system:controller:cloud-provider 38d kube-system system:controller:token-cleaner 38d kube-system system:fluentd-gcp-scaler 38d kube-system system:pod-nanny 38d </code></pre> <p>However I do not see any role associated with me.</p> <p>How am I interacting with the <code>k8s</code> cluster?</p> <p>How can I see whoami and what are my permissions?</p>
pkaramol
<p>The command and output you are sharing refer to Kubernetes RBAC Authorization (not exclusive to GKE). You can find the definition for each role <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles" rel="nofollow noreferrer">HERE</a></p> <p>If you want to be specific to GKE you can use both Cloud Identity and Access Management and Kubernetes RBAC to control access to your GKE cluster.</p> <p>Cloud IAM is not specific to Kubernetes; it provides identity management for multiple Google Cloud Platform products, and operates primarily at the level of the GCP project.</p> <p>Kubernetes RBAC is a core component of Kubernetes and allows you to create and grant roles (sets of permissions) for any object or type of object within the cluster. You can find more information on how RBAC integrates with GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">HERE</a></p> <p>You don’t see any roles associated with you because you are querying the roles for all the namespaces and most likely you haven’t defined a single one.</p> <p>You are interacting with your cluster from the cloud shell. Before connecting to your cluster you must have run the following command.</p> <pre><code>gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID </code></pre> <p>You authenticate to the cluster using the same user you authenticate with to log in to GCP. More information on authentication for kubectl <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#authentication" rel="nofollow noreferrer">HERE</a></p> <p>You can get role bindings and cluster roles per namespace or resource, as seen in my example commands.</p> <pre><code>kubectl get rolebinding ROLEBINDING_NAME -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"pod-reader-binding","namespace":"default"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"pod-reader"},"subjects":[{"kind":"User","name":"[email protected]"},{"kind":"ServiceAccount","name":"johndoe"},{"kind":"User","name":"[email protected]"},{"kind":"Group","name":"[email protected]"}]} creationTimestamp: xxxx-xx-xxxx:xx:xxZ name: pod-reader-binding namespace: default resourceVersion: "1502640" selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/pod-reader-binding uid: de1775dc-cd85-11e9-a07d-42010aa800c2 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: pod-reader subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: [email protected] </code></pre> <p>In the above example my user [[email protected]] is a member of the APIGroup [rbac.authorization.k8s.io], so his actions on the pod will be limited by the permissions he is given with RBAC. For example, if you want to give this user read access, you need to specify the following line in the Role's YAML:</p> <pre><code>verbs: ["get", "watch", "list"] </code></pre> <p>Finally, there are many predefined GKE roles that grant different permissions to GCP users or service accounts. You can find each role and its permissions <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam#predefined" rel="nofollow noreferrer">HERE</a></p>
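<p>To answer the &quot;whoami / what are my permissions&quot; part directly, these commands are usually enough (GKE authenticates kubectl with your gcloud account, so the first one shows who you are acting as):</p> <pre><code># the GCP identity your kubeconfig credentials are based on
gcloud config get-value account

# list everything the current credentials are allowed to do in a namespace
kubectl auth can-i --list --namespace default

# spot-check a single verb/resource
kubectl auth can-i create deployments --namespace default
</code></pre>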
Ernesto U
<p>I want to create an ingress for the kubernetes dashboard, but it loads forever in the browser (I am on a Mac).</p> <pre><code>minikube start --driver=docker </code></pre> <pre><code>minikube addons enable ingress </code></pre> <pre><code>minikube addons enable ingress-dns </code></pre> <pre><code>minikube addons enable dashboard </code></pre> <pre><code>minikube addons enable metrics-server </code></pre> <pre><code>❯ kubectl get ns NAME STATUS AGE default Active 4m13s ingress-nginx Active 109s kube-node-lease Active 4m14s kube-public Active 4m14s kube-system Active 4m14s kubernetes-dashboard Active 51s </code></pre> <pre><code> ❯ kubectl get service -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dashboard-metrics-scraper ClusterIP 10.105.2.220 &lt;none&gt; 8000/TCP 82s kubernetes-dashboard ClusterIP 10.106.101.254 &lt;none&gt; 80/TCP 82s </code></pre> <pre class="lang-yaml prettyprint-override"><code>// dashboard-ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard annotations: kubernetes.io/ingress.class: &quot;nginx&quot; spec: rules: - host: dashboard.com http: paths: - path: / pathType: Exact backend: service: name: kubernetes-dashboard port: number: 80 </code></pre> <pre><code>❯ kubectl apply -f dashboard-ingress.yaml ingress.networking.k8s.io/dashboard-ingress created </code></pre> <pre><code>❯ kubectl get ingress -n kubernetes-dashboard NAME CLASS HOSTS ADDRESS PORTS AGE dashboard-ingress &lt;none&gt; dashboard.com 192.168.49.2 80 67s </code></pre> <pre><code>// etc/hosts ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost 192.168.49.2 dashboard.com # Added by Docker Desktop # To allow the same kube context to work on the host and the container: 127.0.0.1 kubernetes.docker.internal # End of section ~ </code></pre> <p>When I now try to load dashboard.com or 192.168.49.2 in the browser, it just loads forever and does nothing. When I try to load <a href="http://127.0.0.1/" rel="nofollow noreferrer">http://127.0.0.1/</a> I get an nginx 404.</p> <p>Am I missing something?</p>
Don
<p>The ingress addon is currently not fully supported with the docker driver on macOS (due to a limitation of the docker bridge on Mac), so you need to use the <code>minikube tunnel</code> command. See <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">Minikube docs - Known issues</a> and this <a href="https://github.com/kubernetes/minikube/issues/13795" rel="nofollow noreferrer">GitHub issue</a>.</p> <p>Enabling the ingress addon on Mac shows that the ingress will be available on 127.0.0.1. <a href="https://github.com/kubernetes/minikube/pull/12089" rel="nofollow noreferrer">Support Ingress on MacOS, driver docker</a></p> <p>So you only need to add the following line to your /etc/hosts file:</p> <pre><code>127.0.0.1 dashboard.com </code></pre> <p>Create the tunnel (it will ask for your sudo password):</p> <pre><code>minikube tunnel </code></pre> <p>Then you can verify that the Ingress controller is directing traffic:</p> <pre><code>curl dashboard.com </code></pre> <p>(I also used this Ingress; note that it uses <code>pathType: Prefix</code> instead of <code>Exact</code> and drops the <code>kubernetes.io/ingress.class</code> annotation.)</p> <pre class="lang-bash prettyprint-override"><code>kubectl apply -f - &lt;&lt; EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard spec: rules: - host: dashboard.com http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: kubernetes-dashboard port: number: 80 EOF </code></pre>
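<p>If you want to test the routing before (or instead of) editing /etc/hosts, a quick sketch is to hit the tunnel on 127.0.0.1 and set the Host header yourself; the hostname must match the one in your Ingress rule:</p> <pre><code># terminal 1: keep the tunnel running
minikube tunnel

# terminal 2: send a request to localhost while pretending to be dashboard.com
curl -H &quot;Host: dashboard.com&quot; http://127.0.0.1/
</code></pre> <p>A response with the dashboard HTML means the Ingress is routing correctly, while an nginx 404 (like the one you saw on plain http://127.0.0.1/) means the Host header did not match any Ingress rule.</p>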
tanx
<p>I use a Kubernetes cluster with 1 master and 2 workers. The master and the 1st worker are at location a; the 2nd worker is at location b.</p> <p>Locations a and b are very far from each other.</p> <p>I want to run the pods at location a, but if location a goes down, they should be created at location b.</p> <p>In other words, a pod should be created at location b only in the worst-case scenario.</p> <p>How can I do this in Kubernetes?</p>
public_html
<p>This is a community wiki answer.</p> <p>As @Burak mentioned in his comment:</p> <p>What you're looking for is <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity" rel="nofollow noreferrer">node affinity</a>:</p> <blockquote> <p>– it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.</p> <p>Node affinity is specified as field nodeAffinity of field affinity in the PodSpec.</p> <p>Here’s an example of a pod that uses node affinity:</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/e2e-az-name operator: In values: - e2e-az1 - e2e-az2 preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: another-node-label-key operator: In values: - another-node-label-value containers: - name: with-node-affinity image: k8s.gcr.io/pause:2.0 </code></pre> <p>You can find all the necessary details in the linked documentation.</p> <p>Please let me know if that helped. </p>
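<p>Applied to your scenario, a rough sketch could look like the following. The <code>location</code> label name and its values are just assumptions for illustration; label your nodes first, e.g. <code>kubectl label node &lt;node-name&gt; location=a</code> for the node(s) at location a and <code>location=b</code> for the one at location b:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: prefer-location-a
spec:
  affinity:
    nodeAffinity:
      # hard requirement: only schedule on nodes carrying the (assumed) location label
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: location
            operator: In
            values:
            - a
            - b
      # soft preference: prefer location a whenever a node there is schedulable
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: location
            operator: In
            values:
            - a
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
</code></pre> <p>With the preferred rule, the scheduler places the pod at location a while a node there is available and falls back to location b otherwise. Keep in mind this only affects the scheduling of new pods; for a replacement pod to appear at location b after location a goes down, the pod should be managed by a Deployment (or similar controller) so it gets recreated automatically.</p>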
Wytrzymały Wiktor