Columns: Question (string, lengths 65 to 39.6k) | QuestionAuthor (string, lengths 3 to 30) | Answer (string, lengths 38 to 29.1k) | AnswerAuthor (string, lengths 3 to 30)
<p>Is it possible to connect a GKE cluster to a VPC within AWS? For this specific use case, I want the GKE cluster to be able to talk with the EKS cluster behind a VPC in AWS. </p> <ul> <li>I have the CIDR block for my GKE cluster <code>gcloud container clusters describe _cluster_name_ | grep clusterIpv4Cidr</code></li> <li>I've already created a VPC and cluster in AWS (i.e. I have a VPC ID for my aws VPC)</li> </ul> <p>Do I need to create a VPC for my GKE cluster in addition to the VPC for my EKS cluster, or do I just need the CIDR range for the GKE cluster for AWS? </p> <p>Google searching renders very few results for connecting clusters from different providers. </p>
Baily
<p>In my opinion, it's possible with a VPN connection. First, have a look at the Kubernetes Engine Communication Through VPN <a href="https://github.com/GoogleCloudPlatform/gke-networking-demos/tree/master/gke-to-gke-vpn" rel="nofollow noreferrer">demo</a>. Then move on to an example closer to your case - <a href="https://medium.com/@oleg.pershin/site-to-site-vpn-between-gcp-and-aws-with-dynamic-bgp-routing-7d7e0366036d" rel="nofollow noreferrer">site-to-site VPN between GCP and AWS</a>. In addition, check the Google Cloud Router <a href="https://cloud.google.com/router/docs/concepts/overview" rel="nofollow noreferrer">documentation</a> and this <a href="https://cloud.google.com/nat/docs/gke-example" rel="nofollow noreferrer">example</a> for some extra information about networking on GKE.</p>
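<p>For reference, a rough sketch of the GCP side of such a site-to-site setup using classic Cloud VPN (the names, the region, the AWS VPN endpoint IP, the shared secret and the AWS VPC CIDR below are placeholders to replace with values from your own environment):</p> <pre><code># Reserve a public IP for the Cloud VPN gateway
gcloud compute addresses create vpn-ip --region us-central1

# Classic VPN gateway plus the forwarding rules it needs (ESP, UDP 500, UDP 4500)
gcloud compute target-vpn-gateways create gcp-to-aws-gw --region us-central1 --network default
gcloud compute forwarding-rules create fr-esp --region us-central1 \
    --ip-protocol ESP --address vpn-ip --target-vpn-gateway gcp-to-aws-gw
gcloud compute forwarding-rules create fr-udp500 --region us-central1 \
    --ip-protocol UDP --ports 500 --address vpn-ip --target-vpn-gateway gcp-to-aws-gw
gcloud compute forwarding-rules create fr-udp4500 --region us-central1 \
    --ip-protocol UDP --ports 4500 --address vpn-ip --target-vpn-gateway gcp-to-aws-gw

# Tunnel towards the AWS VPN endpoint (static routing variant)
gcloud compute vpn-tunnels create tunnel-to-aws --region us-central1 \
    --peer-address AWS_VPN_PUBLIC_IP --shared-secret MY_SHARED_SECRET \
    --target-vpn-gateway gcp-to-aws-gw --ike-version 2 \
    --local-traffic-selector 0.0.0.0/0 --remote-traffic-selector 0.0.0.0/0

# Route the AWS VPC CIDR through the tunnel so GKE nodes/pods can reach EKS
gcloud compute routes create route-to-aws --network default \
    --destination-range AWS_VPC_CIDR --next-hop-vpn-tunnel tunnel-to-aws \
    --next-hop-vpn-tunnel-region us-central1
</code></pre> <p>The AWS side (customer gateway, VPN connection and route table entries for the GKE cluster CIDR) is configured separately, as described in the linked article, and firewall rules/security groups must allow the traffic in both directions.</p>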
Serhii Rohoza
<p>I am planning to install Istio on my AKS Cluster using the following configuration. What are all the components this would install? Would it install both the Ingress &amp; Egress Gateways?</p> <pre><code>istioctl operator init kubectl create ns istio-system cat &lt;&lt; EOF | kubectl apply -f - apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: istio-control-plane spec: # Use the default profile as the base # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/ profile: default # Enable the addons that we will want to use addonComponents: grafana: enabled: true prometheus: enabled: true tracing: enabled: true kiali: enabled: true values: global: # Ensure that the Istio pods are only scheduled to run on Linux nodes defaultNodeSelector: beta.kubernetes.io/os: linux kiali: dashboard: auth: strategy: anonymous EOF </code></pre>
One Developer
<p>The istio operator manifest in your question will not install egress gateway. It is based on default profile which according to istio documentation can be inspected by using <a href="https://istio.io/latest/docs/setup/install/istioctl/#display-the-configuration-of-a-profile" rel="nofollow noreferrer"><code>istioctl profile dump</code></a>:</p> <blockquote> <p><strong>default</strong>: enables components according to the default settings of the <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/" rel="nofollow noreferrer"><code>IstioOperator</code> API</a>. This profile is recommended for production deployments and for primary clusters in a <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#multiple-clusters" rel="nofollow noreferrer">multicluster mesh</a>. You can display the default setting by running the command <code>istioctl profile dump</code>.</p> </blockquote> <p>In order to install egress gateway using <code>IstioOperator</code> follow these steps from istio <a href="https://istio.io/latest/docs/setup/install/istioctl/#configure-gateways" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <h3>Configure gateways<a href="https://istio.io/latest/docs/setup/install/istioctl/#configure-gateways" rel="nofollow noreferrer"></a></h3> <p>Gateways are a special type of component, since multiple ingress and egress gateways can be defined. In the <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/" rel="nofollow noreferrer"><code>IstioOperator</code> API</a>, gateways are defined as a list type. <strong>The <code>default</code> profile installs one ingress gateway, called <code>istio-ingressgateway</code>.</strong> You can inspect the default values for this gateway:</p> <pre><code>istioctl profile dump --config-path components.ingressGateways istioctl profile dump --config-path values.gateways.istio-ingressgateway </code></pre> <p>These commands show both the <code>IstioOperator</code> and Helm settings for the gateway, which are used together to define the generated gateway resources. The built-in gateways can be customized just like any other component.</p> <p><em>From 1.7 onward, the gateway name must always be specified when overlaying. Not specifying any name no longer defaults to <code>istio-ingressgateway</code> or <code>istio-egressgateway</code>.</em></p> <p>A new user gateway can be created by adding a new list entry:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: components: ingressGateways: - name: istio-ingressgateway enabled: true - namespace: user-ingressgateway-ns name: ilb-gateway enabled: true k8s: resources: requests: cpu: 200m serviceAnnotations: cloud.google.com/load-balancer-type: &quot;internal&quot; service: ports: - port: 8060 targetPort: 8060 name: tcp-citadel-grpc-tls - port: 5353 name: tcp-dns </code></pre> <p>Note that Helm values (<code>spec.values.gateways.istio-ingressgateway/egressgateway</code>) are shared by all ingress/egress gateways. If these must be customized per gateway, it is recommended to use a separate IstioOperator CR to generate a manifest for the user gateways, separate from the main Istio installation:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: profile: empty components: ingressGateways: - name: ilb-gateway namespace: user-ingressgateway-ns enabled: true # Copy settings from istio-ingressgateway as needed. 
values: gateways: istio-ingressgateway: debug: error </code></pre> </blockquote> <p>More information about installing istio on AKS can be found <a href="https://learn.microsoft.com/en-us/azure/aks/servicemesh-istio-install?pivots=client-operating-system-linux" rel="nofollow noreferrer">here</a>.</p>
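<p>For completeness, a minimal <code>IstioOperator</code> overlay that keeps the <code>default</code> profile from your manifest but additionally enables the egress gateway (using the standard <code>istio-egressgateway</code> component name) would look roughly like this:</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  profile: default
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
</code></pre>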
Piotr Malec
<p>I'm using Azure Kubernetes to a host Angular web app using Docker Container. I have created SSL certificate using OpenSSL commands. Now, I need to configure HTTPS and SSL certificate to my web app.</p> <p>Please help me how to set up these?</p> <p>Here's my <strong>Docker</strong> File</p> <pre><code>FROM nginx:latest as nginx RUN rm -rf /usr/share/nginx/html/* COPY /dist/merch-insight/nginx.conf /etc/nginx/conf.d/nginx.conf COPY /dist/merch-insight /usr/share/nginx/html EXPOSE 80 CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;] </code></pre> <p>Here's my <strong>nginx.conf</strong> file</p> <pre><code>server { server_name my-app; charset utf-8; sendfile on; root /usr/share/nginx/html; #Caches static assets location ~ ^/(assets|bower_components|scripts|styles|views) { expires 31d; add_header Cache-Control public; } #Caches Bundles created by angular cli location ~* \.(?:bundle.js|bundle.css)$ { expires 1M; access_log off; add_header Cache-Control &quot;public&quot;; } ## # Main file index.html sending not found locations to the main ## location / { expires -1; add_header Pragma &quot;no-cache&quot;; add_header Cache-Control &quot;no-store, no-cache, must-revalidate, post-check=0, pre-check=0&quot;; index index.html index.htm; try_files $uri $uri/ /index.html; } } </code></pre> <p>And here is my <strong>yaml</strong> file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: apweb version: v1 name: apweb namespace: default spec: selector: matchLabels: run: apweb replicas: 2 template: metadata: labels: run: apweb spec: containers: - name: apweb image: mycontainerregistry.azurecr.io/apweb:dev imagePullPolicy: Always ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: labels: app: apweb version: v1 name: apweb-service namespace: default spec: type: LoadBalancer ports: - port: 80 protocol: TCP targetPort: 80 name: http-web selector: run: apweb </code></pre>
Haavali
<p>In k8s you can use the nginx ingress controller (or other ingress options) to configure <a href="https://en.wikipedia.org/wiki/TLS_termination_proxy" rel="nofollow noreferrer">TLS termination</a>.</p> <p>Please take a look at this example from the nginx ingress <a href="https://kubernetes.github.io/ingress-nginx/examples/tls-termination/" rel="nofollow noreferrer">documentation</a>.</p> <p><a href="https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/#tls-certificates" rel="nofollow noreferrer">Here</a> it is explained how to generate a k8s secret for your TLS certificate.</p> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">Here</a> you can find the full installation guide.</p>
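<p>As a minimal sketch (assuming you already have <code>tls.crt</code> and <code>tls.key</code> from your OpenSSL run, an nginx ingress controller installed in the cluster, and using a placeholder hostname), you would create a TLS secret and reference it from an <code>Ingress</code> that routes to the <code>apweb-service</code> from your manifests:</p> <pre><code># create the TLS secret from the certificate and key
kubectl create secret tls apweb-tls --cert tls.crt --key tls.key
</code></pre> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apweb-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com        # placeholder hostname
    secretName: apweb-tls      # TLS is terminated at the ingress controller
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apweb-service
            port:
              number: 80
</code></pre> <p>With this in place the pod keeps serving plain HTTP on port 80, and the Service behind the ingress no longer needs to be of type <code>LoadBalancer</code>.</p>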
Piotr Malec
<p>I am a newbie to kubernetes. Trying to understand what happens when I try to access <code>google.com</code> from inside a kubernetes pod. <br/> Will the request directly reach google.com (of course not), or does some dns lookup happen in the <code>/etc/hosts.allow</code> file first before the call goes outside the pod? What is the flow or journey of the egress call? <br/> <strong>PS:</strong> I already have the default coredns pod running.</p>
gaurav sinha
<p>I think this question could be divided on 2 different topics:</p> <ul> <li><code>DNS</code> resolution.</li> <li><code>Pod</code> networking when trying to reach external sources.</li> </ul> <p>Answering both of this question could be quite lengthy but I will try to give you a baseline to it and add additional documentation that would be more in-depth.</p> <hr /> <h3><code>DNS</code> resolution that is happening inside/outside of the cluster:</h3> <p>As you've already stated you're using <code>CoreDNS</code>. It will be responsible in your setup for your <code>DNS</code> resolution. Your <code>Pods</code> will query it when looking for the domains that are not included locally (for example <code>/etc/hosts</code>). After they've received the responses, they will contact external sources (more on that later).</p> <blockquote> <p>A side note!</p> <p>The <code>DNS</code> resolution (local, query) will depend on the tool you've used. <code>curl</code> will check it but for example <code>nslookup</code> will query the DNS server directly.</p> </blockquote> <p>Your <code>CoreDNS</code> is most likely available under one of the <code>Services</code> in your cluster:</p> <ul> <li><code>$ kubectl get service --all-namespaces</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 79m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 79m </code></pre> <p>I'd reckon you can find a lot of useful information in the official documentation:</p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Dns pod service</a></em></li> </ul> <p>You can also follow this guide for more hands on experience:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Administer cluster: DNS debugging resolution</a></em></li> </ul> <hr /> <h3>Pod networking when trying to reach external sources:</h3> <p>Each Kubernetes solution could differ on how exactly is handling networking. Please reach to the documentation of your solution for more details. The main premise of it is that the <code>Pod</code> won't &quot;directly&quot; communicate with the external sources. Below you can find more information on the reasoning behind it:</p> <blockquote> <h3>NAT outgoing</h3> <p>Network Address Translation (<a href="https://en.wikipedia.org/wiki/Network_address_translation" rel="nofollow noreferrer">NAT</a>) is the process of mapping an IP address in a packet to a different IP address as the packet passes through the device performing the NAT. Depending on the use case, NAT can apply to the source or destination IP address, or to both addresses.</p> <p>In the context of Kubernetes egress, NAT is used to allow pods to connect to services outside of the cluster if the pods have IP addresses that are not routable outside of the cluster (for example, if the pod network is an overlay).</p> <p>For example, if a pod in an overlay network attempts to connect to an IP address outside of the cluster, then the node hosting the pod uses SNAT (Source Network Address Translation) to map the non-routable source IP address of the packet to the node’s IP address before forwarding on the packet. 
The node then maps response packets coming in the opposite direction back to the original pod IP address, so packets flow end-to-end in both directions, with neither pod or external service being aware the mapping is happening.</p> <p>-- <em><a href="https://docs.projectcalico.org/about/about-kubernetes-egress" rel="nofollow noreferrer">Docs.projectcalico.org: About: About Kubernetes Egress</a></em></p> </blockquote> <p>In short assuming no others factors (like additional <code>NAT</code> used by your cloud provider) your <code>Pod</code> will try to contact the external sources with the <code>Node IP</code> (by using <code>Source NAT</code>).</p> <p>You can find more in-depth explanation on the packet life (some aspects are <code>GKE</code> specific) by following:</p> <ul> <li><em><a href="https://www.youtube.com/watch?v=0Omvgd7Hg1I" rel="nofollow noreferrer">Youtube.com: Life of a Packet [I] - Michael Rubin, Google </a></em> - around the <code>17:55</code> minute mark.</li> </ul> <hr /> <h3>Additional resources</h3> <ul> <li><p><em><a href="https://coredns.io/plugins/log/" rel="nofollow noreferrer">Coredns.io: Plugins: Log</a></em> - you can modify the <code>CoreDNS</code> <code>ConfigMap</code> (<code>$ kubectl edit configmap -n kube-system coredns</code> to enable logging to stdout (<code>$ kubectl logs ...</code>) to see more in-depth query resolution.</p> </li> <li><p><em><a href="https://speakerdeck.com/thockin/kubernetes-and-networks-why-is-this-so-dang-hard?slide=57" rel="nofollow noreferrer">Speakerdeck.com: Thockin: Kubernetes and networks why is this so damn hard: Slide 57</a></em> - more on the Kubernetes networking.</p> </li> </ul>
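<p>To see the <code>DNS</code> part of this in practice, you can inspect what a <code>Pod</code> is actually configured to use for resolution (the pod name below is a placeholder, and <code>nslookup</code> assumes the image ships it):</p> <pre><code># the resolver config injected into the pod points at the kube-dns/CoreDNS Service IP
kubectl exec -it my-pod -- cat /etc/resolv.conf

# query an external name the same way the pod would
kubectl exec -it my-pod -- nslookup google.com

# compare with the CoreDNS Service shown above
kubectl get svc kube-dns -n kube-system
</code></pre>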
Dawid Kruk
<p>If <code>limit</code> resources are greater than <code>request</code> resources, K8s will assign the pod to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-burstable" rel="nofollow noreferrer">QoS Burstable</a></p> <p>Sample configuration:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-metadata namespace: demo spec: containers: - name: test1 image: nginx resources: limits: memory: &quot;14Gi&quot; requests: memory: &quot;4Gi&quot; </code></pre> <ul> <li>Will Kubernetes <strong>always</strong> assign this pod to an instance that will have 14GB memory?</li> <li>Will Kubernetes <strong>always</strong> reserve 14 GB memory for this pod? If yes, then how is it different from the <code>QoS Guaranteed</code> class?</li> </ul> <p>I don't want to use <code>QoS Guaranteed</code> as the workload can vary.</p>
Dev
<p>Memory limits are not taken into consideration when scheduling pods.</p> <p><em>Will Kubernetes always assign this pod to an instance that will have 14GB memory?</em></p> <p>By default, this pod will be assigned to any node that meets the request for 4GB memory.</p> <p>On the node side, any pod that uses more than its requested resources is subject to eviction when the node runs out of resources. In other words, Kubernetes never provides any guarantees of availability of resources beyond a Pod's requests.</p> <p>A memory limit that exceeds the node's total memory simply can never be reached.</p> <p><em>Will Kubernetes always reserve 14 GB memory for this pod? If yes, then how is it different from the QoS Guaranteed class?</em></p> <p>No, by default Kubernetes will only reserve the requested amount of memory, which here is 4GB.</p> <p>The scheduler also takes into consideration <a href="https://kubernetes.io/docs/reference/scheduling/config/" rel="nofollow noreferrer">scheduler configuration</a> and <a href="https://kubernetes.io/docs/reference/scheduling/policies/" rel="nofollow noreferrer">scheduler policies</a>:</p> <blockquote> <p>Scheduler configuration allows to customize the behavior of the <code>kube-scheduler</code> by writing a configuration file and passing its path as a command line argument.</p> </blockquote> <blockquote> <p>A scheduling Policy can be used to specify the <em>predicates</em> and <em>priorities</em> that the <a href="https://kubernetes.io/docs/reference/generated/kube-scheduler/" rel="nofollow noreferrer">kube-scheduler</a> runs to <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation" rel="nofollow noreferrer">filter and score nodes</a>, respectively.</p> </blockquote>
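<p>As a quick check, you can print the QoS class Kubernetes actually assigned to the pod from the question:</p> <pre><code># prints Burstable for the spec in the question
kubectl get pod test-metadata -n demo -o jsonpath='{.status.qosClass}'
</code></pre> <p>For comparison, a <code>Guaranteed</code> pod is simply one where requests equal limits for every container, so the full 14Gi would be used both for scheduling and as the hard cap:</p> <pre><code>    resources:
      limits:
        memory: "14Gi"
      requests:
        memory: "14Gi"
</code></pre>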
Piotr Malec
<h1>Objective</h1> <p>I want to deploy Airflow on Kubernetes where pods have access to the same DAGs, in a Shared Persistent Volume. According to the documentation (<a href="https://github.com/helm/charts/tree/master/stable/airflow#using-one-volume-for-both-logs-and-dags" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/airflow#using-one-volume-for-both-logs-and-dags</a>), it seems I have to set and pass these values to Helm: <code>extraVolume</code>, <code>extraVolumeMount</code>, <code>persistence.enabled</code>, <code>logsPersistence.enabled</code>, <code>dags.path</code>, <code>logs.path</code>.</p> <h1>Problem</h1> <p>Any custom values I pass when installing the official Helm chart results in errors similar to:</p> <pre><code>Error: YAML parse error on airflow/templates/deployments-web.yaml: error converting YAML to JSON: yaml: line 69: could not find expected ':' </code></pre> <ul> <li>Works fine: <code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow</code></li> <li><strong>Not working</strong>:</li> </ul> <pre><code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow \ --set airflow.extraVolumes=/home/*user*/github/airflowDAGs \ --set airflow.extraVolumeMounts=/home/*user*/github/airflowDAGs \ --set dags.path=/home/*user*/github/airflowDAGs/dags \ --set logs.path=/home/*user*/github/airflowDAGs/logs \ --set persistence.enabled=false \ --set logsPersistence.enabled=false </code></pre> <ul> <li><strong>Also not working</strong>: <code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow --values=values_pv.yaml</code>, with <code>values_pv.yaml</code>: <a href="https://pastebin.com/PryCgKnC" rel="nofollow noreferrer">https://pastebin.com/PryCgKnC</a> <ul> <li>Edit: Please change <code>/home/*user*/github/airflowDAGs</code> to a path on your machine to replicate the error.</li> </ul></li> </ul> <h1>Concerns</h1> <ol> <li>Maybe it is going wrong because of these lines in the default <code>values.yaml</code>:</li> </ol> <pre><code>## Configure DAGs deployment and update dags: ## ## mount path for persistent volume. ## Note that this location is referred to in airflow.cfg, so if you change it, you must update airflow.cfg accordingly. path: /home/*user*/github/airflowDAGs/dags </code></pre> <p>How do I configure <code>airflow.cfg</code> in a Kubernetes deployement? In a non-containerized deployment of Airflow, this file can be found in <code>~/airflow/airflow.cfg</code>.</p> <ol start="2"> <li>Line 69 in <code>airflow.cfg</code> refers to: <a href="https://github.com/helm/charts/blob/master/stable/airflow/templates/deployments-web.yaml#L69" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/airflow/templates/deployments-web.yaml#L69</a></li> </ol> <p>Which contains <code>git</code>. Are the <code>.yaml</code> wrongly configured, and it falsely is trying to use <code>git pull</code>, but since no git path is specified, this fails?</p> <h1>System</h1> <ul> <li>OS: Ubuntu 18.04 (single machine)</li> <li>MicroK8s: v1.15.4 Rev:876</li> <li><code>microk8s.kubectl version</code>: v1.15.4</li> <li><code>microk8s.helm version</code>: v2.14.3</li> </ul> <h1>Question</h1> <p>How do I correctly pass the right values to the Airflow Helm chart to be able to deploy Airflow on Kubernetes with Pods having access to the same DAGs and logs on a Shared Persistent Volume?</p>
NumesSanguis
<p>Not sure if you have this solved yet, but if you haven't I think there is a pretty simple way close to what you are doing.</p> <p>All of the Deployments, Services, Pods need the persistent volume information - where it lives locally and where it should go within each kube kind. It looks like the values.yaml for the chart provides a way to do this. I'll only show this with dags below, but I think it should be roughly the same process for logs as well.</p> <p>So the basic steps are, 1) tell kube where the 'volume' (directory) lives on your computer, 2) tell kube where to put that in your containers, and 3) tell airflow where to look for the dags. So, you can copy the values.yaml file from the helm repo and alter it with the following.</p> <ol> <li>The <code>airflow</code> section</li> </ol> <p>First, you need to create a volume containing the items in your local directory (this is the <code>extraVolumes</code> below). Then, that needs to be mounted - luckily putting it here will template it into all kube files. Once that volume is created, then you should tell it to mount <code>dags</code>. So basically, <code>extraVolumes</code> creates the volume, and <code>extraVolumeMounts</code> mounts the volume.</p> <pre><code>airflow: extraVolumeMounts: # this will get the volume and mount it to that path in the container - name: dags mountPath: /usr/local/airflow/dags # location in the container it will put the directory mentioned below. extraVolumes: # this will create the volume from the directory - name: dags hostPath: path: "path/to/local/directory" # For you this is something like /home/*user*/github/airflowDAGs/dags </code></pre> <ol start="2"> <li>Tell the airflow config where the dags live in the container (same yaml section as above).</li> </ol> <pre><code>airflow: config: AIRFLOW__CORE__DAGS_FOLDER: "/usr/local/airflow/dags" # this needs to match the mountPath in the extraVolumeMounts section </code></pre> <ol start="3"> <li>Install with helm and your new <code>values.yaml</code> file.</li> </ol> <pre><code>helm install --namespace "airflow" --name "airflow" -f local/path/to/values.yaml stable/airflow </code></pre> <p>In the end, this should allow airflow to see your local directory in the dags folder. If you add a new file, it should show up in the container - though it may take a minute to show up in the UI - I don't think the dagbag process is constantly running? Anyway, hope this helps!</p>
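<p>One way to sanity-check the mount after the release comes up (the pod name below is a placeholder; the namespace and DAG path match the values used above):</p> <pre><code># confirm the DAG files from the host directory are visible inside the webserver pod
kubectl get pods -n airflow
kubectl exec -n airflow -it &lt;airflow-web-pod&gt; -- ls /usr/local/airflow/dags
</code></pre>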
particularB
<p>I have a quick question on Kubernetes ingress. I have both the Nginx ingress controller and the AWS ALB ingress controller, and ingress resources for both, in a single cluster. Both of these ingress resources point to the same service and deployment. However, when I hit the Nginx ingress URL, I'm able to see the desired page, but with the AWS ALB ingress, I can only see the apache default page. I know this doesn't sound practical, but I'm trying to test out something with both these ingress resources. I just want to understand what I am missing that prevents the application from showing up at the AWS ALB ingress URL.</p>
Bhargav Mg
<p>Posting this community wiki answer to point out that the issue in this question was resolved in the comments.</p> <p>Feel free to edit and expand.</p> <hr /> <p>The solution to the issue:</p> <blockquote> <p>AWS ALB <code>Ingress</code> was pointing to the default <code>apache</code> document root in the pod. <strong>I modified the document root to the application data</strong> and was able to see my application page open up!</p> </blockquote> <hr /> <p>Additional resources that could be useful in this particular example:</p> <ul> <li><em><a href="https://www.tecmint.com/change-root-directory-of-apache-web-server/" rel="nofollow noreferrer">Tecmint.com: Change root directory of a apache web server</a></em> - how to change the <code>apache2</code>: <code>DocumentRoot</code></li> <li><em><a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/docs/examples" rel="nofollow noreferrer">Github.com: Kubernetes sigs: AWS load balancer controller: Docs: Examples</a></em> - examples of an <code>ALB</code> satisfying an <code>Ingress</code> resource.</li> </ul>
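<p>For reference, a minimal ALB <code>Ingress</code> of the kind discussed here might look as follows (the annotations are the common ones used by the AWS Load Balancer Controller; the service name and port are placeholders for your own backend):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-alb-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-apache-service
            port:
              number: 80
</code></pre> <p>Since both controllers point at the same Service, any difference in what you see comes from what the pod serves for the requested path, which is why fixing the apache <code>DocumentRoot</code> resolved it.</p>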
Dawid Kruk
<p>Hello, I'm learning kubernetes with minikube. I can access a service via minikubeip:NodePort on the machine where minikube is running, and now I want to access the Service via LAN from another machine. I tried ingress but it didn't work for me.</p> <p>Deployment file:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp-deployment labels: app: aspnetapp spec: replicas: 2 selector: matchLabels: app: aspnetapp template: metadata: labels: app: aspnetapp spec: containers: - name: aspnetapp-cn image: localhost:5000/aspnetapp ports: - containerPort: 80 </code></pre> <p>Service file:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: aspnetapp-service spec: type: NodePort ports: - name: http targetport: 80 port: 80 protocol: TCP selector: app: aspnetapp </code></pre> <p>Ingress file:</p> <pre><code>--- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: aspnetapp-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: http: paths: - path: /aspnetapp backend: serviceName: aspnetapp-service servicePort: 80 </code></pre>
Yatesu
<p>To expose your application to LAN with Ubuntu with a <code>--docker</code> driver you can use:</p> <ul> <li><code>$ kubectl port-forward ...</code></li> </ul> <blockquote> <p>Disclaimer!</p> <ol> <li>Your <code>$ kubectl port-forward</code> should be run on a <strong>host</strong> running minikube.</li> <li>Command above will operate continuously (<code>&amp;</code> can be used to run it in a background)</li> </ol> </blockquote> <p>Example:</p> <p>Let's assume that you have an Ubuntu machine with IP: <code>192.168.0.115</code>.</p> <p>I've created an example using <code>nginx</code> image:</p> <p><code>Deployment.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx replicas: 1 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>As for the service exposing your <code>Deployment</code> you can <strong>either</strong>:</p> <ul> <li>Use following command: <ul> <li><code>$ kubectl expose deployment nginx --port=80 --type=NodePort</code></li> </ul> </li> <li>Use definition below:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nginx spec: type: NodePort selector: app: nginx ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <hr /> <hr /> <p>You can expose your <code>nginx</code> in two ways:</p> <ul> <li>Directly with <code>$ kubectl port-forward</code>.</li> <li>Directing the traffic to the <code>Ingress</code> controller.</li> </ul> <hr /> <h3>Direct access</h3> <p>You can expose your <code>Service</code> directly without using <code>Ingress</code> by:</p> <ul> <li><code>$ kubectl port-forward --address=0.0.0.0 deployment/nginx 10000:80</code></li> </ul> <p>Dissecting above command:</p> <ul> <li><code>--address=0.0.0.0</code> - expose outside of localhost</li> <li><code>deployment/nginx</code> - resource/resource_name</li> <li><code>10000:80</code> - port on host machine/port on pod to send the traffic to</li> </ul> <blockquote> <p>Assigning local ports under 1024 will need root access!</p> <p>You will need to login to root and either copy <code>.kube/config</code> to <code>/root/</code> directory or specify where <code>kubectl</code> should look for config!</p> </blockquote> <p>After running above command you should be able to run:</p> <ul> <li><code>curl 192.168.1.115:10000</code></li> </ul> <p>Command <code>$ kubectl port-forward</code> will generate:</p> <pre class="lang-sh prettyprint-override"><code>Forwarding from 0.0.0.0:10000 -&gt; 80 # AT THE START Handling connection for 10000 # CURL FROM 192.168.0.2 </code></pre> <hr /> <h3>Directing the traffic to the <code>Ingress</code> controller</h3> <blockquote> <p>You need to run <code>$ minikube addons enable ingress</code> to have functionalities of <code>Ingress</code> resource</p> </blockquote> <p>In your example you used <code>Ingress</code> resource. 
In this situation you should:</p> <ul> <li>Create <code>Ingress</code> resource (as you did).</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress spec: rules: - host: http: paths: - path: / backend: serviceName: nginx servicePort: 80 </code></pre> <ul> <li>Forward the traffic to the <code>Ingress</code> controller!</li> </ul> <p><code>Ingress</code> controller after receiving the traffic will forward it further (to your <code>Service</code> and then to <code>Pod</code>)</p> <p>To forward the traffic to your <code>Ingress</code> controller run this command:</p> <ul> <li><code>kubectl port-forward --address=0.0.0.0 --namespace=kube-system deployment/ingress-nginx-controller 80:80</code></li> </ul> <p>Dissecting above command once more:</p> <ul> <li><code>--address=0.0.0.0</code> - expose outside of localhost</li> <li><code>--namespace=kube-system</code> - namespace that the <code>Deployment</code> of <code>Ingress</code> controller resides in</li> <li><code>deployment/ingress-nginx-controller</code> - resource/resource-name</li> <li><code>80:80</code> - port on host machine/port on pod to send the traffic to</li> </ul> <p>Command <code>$ kubectl port-forward</code> will generate:</p> <pre class="lang-sh prettyprint-override"><code>Forwarding from 0.0.0.0:80 -&gt; 80 # AT THE START Handling connection for 80 # CURL FROM 192.168.0.2 </code></pre> <hr /> <p>I also encourage you to use different <code>--driver</code> like for example Virtualbox. You will be able to expose your application without <code>$ kubectl port-forward</code> (NAT).</p> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes.io: Service</a></em></li> <li><em><a href="https://stackoverflow.com/questions/51468491/how-kubectl-port-forward-works">Stackoverflow.com: How kubectl port-forward works</a></em></li> </ul>
Dawid Kruk
<p>I have 3 services in 3 different namespaces I want my ingress rules to map to these backends, on path based routes. Can someone please guide on the same. I am using nginx ingress inside azure Kubernetes cluster.</p>
Abhishek Singh
<p>A basic example with an assumption that your <code>nginx ingress</code> is working correctly inside your <code>AKS</code> would be following:</p> <p>List of <code>Pods</code> with their <code>Services</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Pod</th> <th>Namespace</th> <th>Service name</th> </tr> </thead> <tbody> <tr> <td>nginx</td> <td>alpha</td> <td>alpha-nginx</td> </tr> <tr> <td>nginx</td> <td>beta</td> <td>beta-nginx</td> </tr> <tr> <td>nginx</td> <td>omega</td> <td>omega-nginx</td> </tr> </tbody> </table> </div><hr /> <p><code>Ingress</code> definition for this particular setup:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: alpha-ingress namespace: alpha annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx rules: - host: &quot;kubernetes.kruk.lan&quot; http: paths: - path: /alpha(/|$)(.*) pathType: Prefix backend: service: name: alpha-nginx port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: beta-ingress namespace: beta annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx rules: - host: &quot;kubernetes.kruk.lan&quot; http: paths: - path: /beta(/|$)(.*) pathType: Prefix backend: service: name: beta-nginx port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: omega-ingress namespace: omega annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx rules: - host: &quot;kubernetes.kruk.lan&quot; http: paths: - path: /omega(/|$)(.*) pathType: Prefix backend: service: name: omega-nginx port: number: 80 </code></pre> <p>In this example <code>Ingress</code> will analyze and rewrite the requests for the <strong>same domain name</strong> to send the traffic to different namespaces i.e. <code>alpha</code>, <code>beta</code>, <code>omega</code>.</p> <p>When you've have finalized your <code>Ingress</code> resource, you can use <code>curl</code> to validate your configuration.</p> <pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/alpha | grep -i &quot;&lt;h1&gt;&quot; &lt;h1&gt;Welcome to nginx from ALPHA namespace!&lt;/h1&gt; </code></pre> <pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/beta | grep -i &quot;&lt;h1&gt;&quot; &lt;h1&gt;Welcome to nginx from BETA namespace!&lt;/h1&gt; </code></pre> <pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/omega | grep -i &quot;&lt;h1&gt;&quot; &lt;h1&gt;Welcome to nginx from OMEGA namespace!&lt;/h1&gt; </code></pre> <p>I'd encourage you to check following docs on rewrites:</p> <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress-nginx: Examples: Rewrite</a></em></li> </ul> <hr /> <p>PS: <code>Pods</code> are default <code>nginx</code> containers/images with added text to <code>/usr/share/nginx/html/index.html</code></p>
Dawid Kruk
<p>With kubernetes, I'm trying to deploy jenkins image &amp; a persistent volume mapped to a NFS share (which is mounted on all my workers)</p> <ul> <li>So, this is my share on my workers :</li> </ul> <pre><code>[root@pp-tmp-test24 /opt]# df -Th /opt/jenkins.persistent Filesystem Type Size Used Avail Use% Mounted on xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP nfs4 10G 9.5M 10G 1% /opt/jenkins.persistent </code></pre> <ul> <li>And My data on this share</li> </ul> <pre><code>[root@pp-tmp-test24 /opt/jenkins.persistent]# ls -l total 0 -rwxr-xr-x. 1 root root 0 Oct 2 11:53 newfile [root@pp-tmp-test24 /opt/jenkins.persistent]# cat newfile hello </code></pre> <ul> <li>Here It is my yaml files to deploy it</li> </ul> <p>My PersistentVolume yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-pv-nfs labels: type: type-nfs spec: storageClassName: class-nfs capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle hostPath: path: /opt/jenkins.persistent </code></pre> <p>My PersistentVolumeClaim yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pvc-nfs namespace: ns-jenkins spec: storageClassName: class-nfs volumeMode: Filesystem accessModes: - ReadWriteMany resources: requests: storage: 10Gi selector: matchLabels: type: type-nfs </code></pre> <p>And my deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: jenkins namespace: ns-jenkins spec: replicas: 1 selector: matchLabels: app: jenkins template: metadata: labels: app: jenkins spec: containers: - image: jenkins #- image: httpd:latest name: jenkins ports: - containerPort: 8080 protocol: TCP name: jenkins-web volumeMounts: - name: jenkins-persistent-storage mountPath: /var/foo volumes: - name: jenkins-persistent-storage persistentVolumeClaim: claimName: jenkins-pvc-nfs </code></pre> <ul> <li>After <code>kubectl create -f</code> command, all is looking good :</li> </ul> <pre><code># kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE jenkins-pv-nfs 10Gi RWX Recycle Bound ns-jenkins/jenkins-pvc-nfs class-nfs 37s </code></pre> <pre><code># kubectl get pvc -A NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ns-jenkins jenkins-pvc-nfs Bound jenkins-pv-nfs 10Gi RWX class-nfs 35s </code></pre> <pre><code># kubectl get pods -A |grep jenkins ns-jenkins jenkins-5bdb8678c-x6vht 1/1 Running 0 14s </code></pre> <pre><code># kubectl describe pod jenkins-5bdb8678c-x6vht -n ns-jenkins Name: jenkins-5bdb8678c-x6vht Namespace: ns-jenkins Priority: 0 Node: pp-tmp-test25.mydomain/172.31.68.225 Start Time: Wed, 02 Oct 2019 11:48:23 +0200 Labels: app=jenkins pod-template-hash=5bdb8678c Annotations: &lt;none&gt; Status: Running IP: 10.244.5.47 Controlled By: ReplicaSet/jenkins-5bdb8678c Containers: jenkins: Container ID: docker://8a3e4871ed64b371818bac59e24d6912e5d2b13c8962c1639d36797fbce8082e Image: jenkins Image ID: docker-pullable://docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668 Port: 8080/TCP Host Port: 0/TCP State: Running Started: Wed, 02 Oct 2019 11:48:26 +0200 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/foo from jenkins-persistent-storage (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dz6cd (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: jenkins-persistent-storage: Type: PersistentVolumeClaim (a reference to 
a PersistentVolumeClaim in the same namespace) ClaimName: jenkins-pvc-nfs ReadOnly: false default-token-dz6cd: Type: Secret (a volume populated by a Secret) SecretName: default-token-dz6cd Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 39s default-scheduler Successfully assigned ns-jenkins/jenkins-5bdb8678c-x6vht to pp-tmp-test25.mydomain Normal Pulling 38s kubelet, pp-tmp-test25.mydomain Pulling image "jenkins" Normal Pulled 36s kubelet, pp-tmp-test25.mydomain Successfully pulled image "jenkins" Normal Created 36s kubelet, pp-tmp-test25.mydomain Created container jenkins Normal Started 36s kubelet, pp-tmp-test25.mydomain Started container jenkins </code></pre> <ul> <li>On my worker, this is my container </li> </ul> <pre><code># docker ps |grep jenkins 8a3e4871ed64 docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668 "/bin/tini -- /usr..." 2 minutes ago Up 2 minutes k8s_jenkins_jenkins-5bdb8678c-x6vht_ns-jenkins_64b66dae-a1da-4d90-83fd-ff433638dc9c_0 </code></pre> <p>So I launch a shell on my container, and I can see my data on <code>/var/foo</code> :</p> <pre><code># docker exec -t -i 8a3e4871ed64 /bin/bash jenkins@jenkins-5bdb8678c-x6vht:/$ df -h /var/foo Filesystem Size Used Avail Use% Mounted on xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo -d drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo -rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ cat newfile hello </code></pre> <p>I'm trying to write data in my <code>/var/foo/newfile</code> but the Permission is denied</p> <pre><code>jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ echo "world" &gt;&gt; newfile bash: newfile: Permission denied </code></pre> <p>Same thing in my <code>/var/foo/ directory</code>, I can't write data </p> <pre><code>jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ touch newfile2 touch: cannot touch 'newfile2': Permission denied </code></pre> <p>So, I tried an another image like <code>httpd:latest</code> in my deployment yaml (keeping the same name in my yaml definition)</p> <pre><code>[...] containers: #- image: jenkins - image: httpd:latest [...] </code></pre> <pre><code># docker ps |grep jenkins fa562400405d docker.io/httpd@sha256:39d7d9a3ab93c0ad68ee7ea237722ed1b0016ff6974d80581022a53ec1e58797 "httpd-foreground" 50 seconds ago Up 48 seconds k8s_jenkins_jenkins-7894877f96-6dj85_ns-jenkins_540b12bd-69df-44d8-b3df-20a0a96cc851_0 </code></pre> <p>In my new container, this time I can Read-Write data :</p> <pre><code>root@jenkins-7894877f96-6dj85:/usr/local/apache2# df -h /var/foo Filesystem Size Used Avail Use% Mounted on xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.6M 10G 1% /var/foo root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ total 0 -rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile -rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 0 Oct 2 10:06 newfile2 root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ /var/foo -d drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo root@jenkins-7894877f96-6dj85:/var/foo# ls -l total 0 -rwxr-xr-x. 
1 root root 6 Oct 2 09:55 newfile root@jenkins-7894877f96-6dj85:/var/foo# echo "world" &gt;&gt; newfile root@jenkins-7894877f96-6dj85:/var/foo# touch newfile2 root@jenkins-7894877f96-6dj85:/var/foo# ls -l total 0 -rwxr-xr-x. 1 root root 12 Oct 2 10:05 newfile -rw-r--r--. 1 root root 0 Oct 2 10:06 newfile2 </code></pre> <p>What am I doing wrong? Is the problem due to the <code>jenkins</code> image not allowing RW access? I have the same problem with local storage (on my worker) with a persistent volume.</p> <p>One other thing, perhaps it is stupid: with my jenkins image, I would like to mount the <code>/var/jenkins_home</code> dir to a persistent volume in order to keep jenkins's configuration files. But if I try to mount <code>/var/jenkins_home</code> instead of <code>/var/foo</code>, the pod goes into CrashLoopBackOff (because there is already data stored in <code>/var/jenkins_home</code>).</p> <p>Thank you all for your help!</p>
Vincent
<p>I noticed you are trying to write as the <code>jenkins</code> user on <code>jenkins-5bdb8678c-x6vht</code>, which might not have write permissions in that root:root directory.</p> <p>You might want to change that directory's permissions to match the <code>jenkins</code> user's privileges.</p> <p>Try to verify that this is what is causing the issue by using <code>sudo</code> before writing to the file.</p> <p>If <code>sudo</code> is not installed, then exec in with the <code>--user</code> flag as the <code>root</code> user, so it's just like the other cases where writing worked.</p> <p><code>docker exec -t -i -u root 8a3e4871ed64 /bin/bash</code></p>
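<p>A minimal sketch of that fix, assuming the official <code>jenkins</code> image (which runs as user <code>jenkins</code>, UID/GID 1000 by default), is to make the exported directory writable for that UID on the NFS host:</p> <pre><code># on the worker / NFS export (path from the question)
chown -R 1000:1000 /opt/jenkins.persistent
</code></pre> <p>The <code>httpd</code> test worked because that container runs its shell as <code>root</code>, which is also why the <code>docker exec -u root</code> check above is a quick way to confirm the diagnosis.</p>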
Piotr Malec
<p>I recently changed the docker daemon from my local Docker Desktop to local minikube following these <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="nofollow noreferrer">instructions</a>.</p> <pre><code>@FOR /f &quot;tokens=*&quot; %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i </code></pre> <p>After running some tests, I want to change it back to my previous setup. I already tried to change some environment variables but it did not succeed.</p> <pre><code>SET DOCKER_HOST=tcp://127.0.0.1:2375 </code></pre>
dinhokz
<p>Run the command below to get the list of Docker hosts:</p> <pre><code>docker context ls </code></pre> <p>The output will be something like this:</p> <pre><code>NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm desktop-linux unix:///home/sc2302/.docker/desktop/docker.sock rootless Rootless mode unix:///run/user/1001/docker.sock </code></pre> <p>Now, from the output, select the context you want to use. For example, to switch to the default context:</p> <pre><code>docker context use default </code></pre>
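<p>Alternatively, since the environment was originally pointed at minikube with <code>minikube docker-env</code>, the same command can emit the matching unset statements for your shell (Windows <code>cmd</code> here, as in the question):</p> <pre><code>@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd --unset') DO @%i
</code></pre> <p>This clears <code>DOCKER_HOST</code>, <code>DOCKER_TLS_VERIFY</code> and <code>DOCKER_CERT_PATH</code> in the current session, so new <code>docker</code> commands go back to the default local daemon.</p>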
Sadhvik Chirunomula
<p>I am trying to add audit logging in one of my k8s clusters. So far YAML manifest are below</p> <p>Policy file - /etc/kubernetes/policy/audit/policy.yaml</p> <pre><code>apiVersion: audit.k8s.io/v1 kind: Policy rules: # log Secret resources audits, level Metadata - level: Metadata resources: - group: &quot;&quot; resources: [&quot;secrets&quot;] # log node related audits, level RequestResponse - level: RequestResponse userGroups: [&quot;system:nodes&quot;] # for everything else don't log anything - level: None </code></pre> <p>kuber-apiserver file - manifests/kube-apiserver.yaml</p> <pre><code>apiVersion: v1 kind: Pod metadata: annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.0.XX:6443 creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - --audit-policy-file=/etc/kubernetes/audit-policy/policy.yaml - --audit-log-path=/etc/kubernetes/audit-logs/audit.log - --audit-log-maxsize=7 - kube-apiserver - --encryption-provider-config=/etc/kubernetes/encryption/encryptionconfiguration.yaml - --advertise-address=192.168.0.XX - --allow-privileged=true - --authorization-mode=Node,RBAC - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-admission-plugins=NodeRestriction - --enable-bootstrap-token-auth=true - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key - --etcd-servers=https://127.0.0.1:2379 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --requestheader-allowed-names=front-proxy-client - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --secure-port=6443 - --service-account-issuer=https://kubernetes.default.svc.cluster.local - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key image: k8s.gcr.io/kube-apiserver:v1.24.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 192.168.0.XX path: /livez port: 6443 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 name: kube-apiserver readinessProbe: failureThreshold: 3 httpGet: host: 192.168.0.XX path: /readyz port: 6443 scheme: HTTPS periodSeconds: 1 timeoutSeconds: 15 resources: requests: cpu: 250m startupProbe: failureThreshold: 24 httpGet: host: 192.168.0.XX path: /livez port: 6443 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 volumeMounts: - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/ca-certificates name: etc-ca-certificates readOnly: true - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /usr/local/share/ca-certificates name: usr-local-share-ca-certificates readOnly: true - mountPath: /usr/share/ca-certificates name: usr-share-ca-certificates readOnly: true - 
mountPath: /etc/kubernetes/encryption/ name: enc-conf readOnly: true - mountPath: /etc/kubernetes/audit-policy/policy.yaml name: audit-policy readOnly: true - mountPath: /etc/kubernetes/audit-logs name: audit-logs readOnly: false hostNetwork: true priorityClassName: system-node-critical securityContext: seccompProfile: type: RuntimeDefault volumes: - name: audit-policy hostPath: path: /etc/kubernetes/audit-policy/policy.yaml type: File - name: audit-logs hostPath: path: /etc/kubernetes/audit-logs type: DirectoryOrCreate - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /etc/ca-certificates type: DirectoryOrCreate name: etc-ca-certificates - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: k8s-certs - hostPath: path: /usr/local/share/ca-certificates type: DirectoryOrCreate name: usr-local-share-ca-certificates - hostPath: path: /usr/share/ca-certificates type: DirectoryOrCreate name: usr-share-ca-certificates - hostPath: path: /etc/kubernetes/encryption type: DirectoryOrCreate name: enc-conf status: {} </code></pre> <p>I double checked conf and path, The strange part is kube-apiserver is not logging into <code>cat kube-system_kube-apiserver-XX-kube-master-1_c514a6246640287303eb130a626552f2/kube-apiserver/5.log</code>, there are no logs in any of the files under <code>kube-system_kube-apiserver-XX-kube-master-1_c514a6246640287303eb130a626552f2</code></p> <pre><code>crictl ps </code></pre> <p>Doesn't show a container running with name <em>api</em>, so not able to get any logs.</p> <p>From <code>journalctl -u kubelet | grep &quot;policy&quot;</code></p> <pre><code>Sep 22 15:25:32 i11806-kube-master-1 kubelet[1187]: E0922 15:25:32.312777 1187 pod_workers.go:951] &quot;Error syncing pod, skipping&quot; err=&quot;failed to \&quot;StartContainer\&quot; for \&quot;kube-apiserver\&quot; with RunContainerError: \&quot;failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: \\\&quot;--audit-policy-file=/etc/kubernetes/audit-policy/policy.yaml\\\&quot;: stat --audit-policy-file=/etc/kubernetes/audit-policy/policy.yaml: no such file or directory: unknown\&quot;&quot; pod=&quot;kube-system/kube-apiserver-i11806-kube-master-1&quot; podUID=43aa05fda9c680dd3c0c77b8e1c95dac </code></pre> <p>Any help is appreciated.</p>
Rahul Sharma
<p>Your <code>Pod</code> definition for <code>kube-apiserver</code> is incorrect. Take a look:</p> <ul> <li>Incorrect:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>spec: containers: - command: - --audit-policy-file=/etc/kubernetes/audit-policy/policy.yaml - --audit-log-path=/etc/kubernetes/audit-logs/audit.log - --audit-log-maxsize=7 - kube-apiserver # &lt;-- WRONG - --encryption-provider-config=/etc/kubernetes/encryption/encryptionconfiguration.yaml - --advertise-address=192.168.0.XX </code></pre> <ul> <li>Correct:</li> </ul> <pre class="lang-yaml prettyprint-override"><code> containers: - command: - kube-apiserver # &lt;-- CORRECT - --audit-policy-file=/etc/kubernetes/audit-policy/policy.yaml - --audit-log-path=/etc/kubernetes/audit-logs/audit.log - --audit-log-maxsize=7 - --encryption-provider-config=/etc/kubernetes/encryption/encryptionconfiguration.yaml - --advertise-address=192.168.0.XX </code></pre> <p>You have put the parameters for <code>kube-apiserver</code> before the actual command to run the <code>kubeapi-server</code>.</p>
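<p>Once the manifest is corrected and the static pod restarts, a quick way to verify (using the paths already defined in your manifest) is to check that the container is back and that audit events are being written:</p> <pre><code>crictl ps | grep kube-apiserver
tail -f /etc/kubernetes/audit-logs/audit.log
</code></pre>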
Dawid Kruk
<p>How can Apache Airflow's <code>KubernetesPodOperator</code> pull docker images from a private repository? </p> <p>The <code>KubernetesPodOperator</code> has an <code>image_pull_secrets</code> which you can pass a <code>Secrets</code> object to authenticate with the private repository. But the secrets object can only represent an environment variable, or a volume - neither of which fit my understanding of how Kubernetes uses secrets to authenticate with private repos. </p> <p>Using <code>kubectl</code> you can create the required secret with something like </p> <pre><code>$ kubectl create secret docker-registry $SECRET_NAME \ --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \ --docker-username=AWS \ --docker-password="${TOKEN}" \ --docker-email="${EMAIL}" </code></pre> <p>But how can you create the authentication secret in Airflow? </p>
danodonovan
<p>There is a <code>secret</code> object of <code>docker-registry</code> type, according to the kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials" rel="nofollow noreferrer">documentation</a>, which can be used to authenticate to a private repository.</p> <p>As you mentioned in your question, you can use <code>kubectl</code> to <strong>create</strong> a secret of <code>docker-registry</code> <strong>type</strong> that you can then try to pass with <code>image_pull_secrets</code>.</p> <p>However, depending on the platform you are using, this might have <strong>limited</strong> or <strong>no use at all</strong> according to the <a href="https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry" rel="nofollow noreferrer">kubernetes documentation</a>:</p> <blockquote> <h3>Configuring Nodes to Authenticate to a Private Registry</h3> <p><strong>Note:</strong> If you are running on Google Kubernetes Engine, there will already be a <code>.dockercfg</code> on each node with credentials for Google Container Registry. You cannot use this approach.</p> <p><strong>Note:</strong> If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.</p> <p><strong>Note:</strong> This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.</p> <p><strong>Note:</strong> Kubernetes as of now only supports the <code>auths</code> and <code>HttpHeaders</code> section of docker config. This means credential helpers (<code>credHelpers</code> or <code>credsStore</code>) are not supported.</p> </blockquote> <p>Making this work on the mentioned platforms is possible, but it would require automated scripts and third party tools.</p> <p>For example with Amazon ECR: the <a href="https://github.com/awslabs/amazon-ecr-credential-helper" rel="nofollow noreferrer">Amazon ECR Docker Credential Helper</a> would be needed to periodically pull AWS credentials into the docker registry configuration, and then another script would be needed to update the kubernetes docker-registry secrets.</p> <p>As for Airflow itself, I don't think it has functionality to create its own docker-registry secrets. You can request functionality like that in the <a href="https://issues.apache.org/jira/projects/AIRFLOW/issues/" rel="nofollow noreferrer">Apache Airflow JIRA</a>.</p> <p>P.S.</p> <p>If you still have issues with your K8s cluster you might want to create a new question on Stack Overflow addressing them.</p>
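<p>As a hedged sketch of the &quot;automated scripts&quot; approach mentioned above for ECR (the secret name is a placeholder, <code>REGION</code>/<code>ACCOUNT</code> come from your environment, and <code>aws ecr get-login-password</code> assumes AWS CLI v2; ECR tokens are only valid for 12 hours, so something like this has to run periodically, for example from a CronJob, in the namespace where <code>KubernetesPodOperator</code> launches its pods):</p> <pre><code>#!/bin/bash
# refresh a docker-registry secret that can be referenced via image_pull_secrets
TOKEN=$(aws ecr get-login-password --region "${REGION}")

kubectl delete secret ecr-pull-secret --ignore-not-found
kubectl create secret docker-registry ecr-pull-secret \
  --docker-server="https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com" \
  --docker-username=AWS \
  --docker-password="${TOKEN}"
</code></pre>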
Piotr Malec
<p>I have a service named Foo that is currently running. It directs traffic it receives to a running Pod as well. Since the service is of type LoadBalancer and runs in Google Cloud - it has its own external IP.</p> <p>I'm currently doing maintenance and testing on various services and would like to temporarily STOP service Foo from working, then RESUME it again. That is, anyone that hits the IP for service Foo would get a 404, but then later on when I resume it, they would start getting answers back.</p> <p>The reason why I don't just flat out delete the service then create a new one is because I wish to maintain the original IP address for the Foo service. I have tests that directly reference that IP and do not wish to have to continuously change them. I also have a few clients in production relying on that IP so I can't risk losing it. </p> <p>Any indication then on how to temporarily STOP / RESUME a kubernetes service in Google cloud, while preserving its IP? </p> <p>Thanks</p>
Daltrey Waters
<p><strong>Kubernetes itself does not have mechanism to stop a service.</strong> </p> <hr> <p>When you create a <code>Service</code> type of LoadBalancer in <code>GKE</code>, it automatically creates a forwarding rule for external access. You can disable that rule (not delete!) to stop <strong>external</strong> traffic accessing your <code>Service.</code></p> <p>To disable the forwarding rule:</p> <ul> <li>Check the associated IP address with a LoadBalancer by either: <ul> <li>issuing: <code>$ kubectl get svc</code></li> <li>going to: <code>GCP Dashboard -&gt; Kubernetes Engine -&gt; Services &amp; Ingress</code></li> </ul></li> <li>Go to <code>GCP Dashboard -&gt; VPC Network -&gt; External IP addresses</code></li> <li>Find your LB's IP and <strong>copy name of the forwarding rule associated with it</strong></li> <li>Go to <code>GCP Dashboard -&gt; VPC Network -&gt; Firewall</code></li> <li>Search for mentioned forwarding rule </li> <li>Edit it </li> <li>On the bottom of edit site you should have an option to disable it like picture below: </li> </ul> <p><a href="https://i.stack.imgur.com/D8Fkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D8Fkt.png" alt="GCP1"></a></p> <hr> <p>From a <code>GKE</code> perspective you can create a service type of <code>LoadBalancer</code> with a <strong>static IP address</strong> that will be bound and available to your project as long as it's not released. Even if you delete a <code>Service</code> in your <code>GKE</code> cluster it will still be available to bound to your recreated <code>Service</code>. </p> <p>You can do it by either: </p> <h2>Reserving static IP address before <code>Service</code> creation</h2> <ul> <li>Go to <code>GCP Dashboard -&gt; VPC Network -&gt; External IP addresses -&gt; Reserve Static Address</code> </li> <li>Create a static IP </li> <li>Note the IP address created </li> <li>Create a <code>Service</code> type of LoadBalancer with previously created IP address. Example below: </li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: hello-service-lb spec: selector: app: hello ports: - name: hello-port port: 80 targetPort: 50001 nodePort: 30051 type: LoadBalancer loadBalancerIP: PASTE_HERE_IP_ADDRESS </code></pre> <p>Please take a specific look on part: </p> <pre class="lang-yaml prettyprint-override"><code> loadBalancerIP: PASTE_HERE_IP_ADDRESS </code></pre> <p><strong>as this line is required to have previously created static IP address.</strong> </p> <p>Deleting this Service will: </p> <ul> <li>Delete a <code>Service</code> in <code>GKE</code></li> <li>Delete the association between <code>Service</code> and IP address in <code>GCP Dashboard</code></li> <li>It will not delete the reserved static IP address</li> </ul> <h2>Creating a <code>Service</code> before reserving static IP address</h2> <p>Assuming that you have already created a <code>Service</code> type of LoadBalancer you can:</p> <ul> <li>Go to <code>GCP Dashboard -&gt; VPC Network -&gt; External IP addresses</code></li> <li>Found the IP address associated with your LoadBalancer</li> <li>Change type of this IP address from: <code>Ephemeral</code> to <code>Static</code>. 
This will ensure that this IP will not be released when the Service gets deleted.</li> <li><strong>You will need to edit your Service definition when recreating it to include</strong>: </li> </ul> <pre class="lang-yaml prettyprint-override"><code> loadBalancerIP: PASTE_HERE_IP_ADDRESS </code></pre> <p>If you changed your IP address type from <code>Ephemeral</code> to <code>Static</code>, deleting your <code>Service</code> will not release your <code>Static</code> IP address. </p> <hr> <p>Please take a look at the additional documentation: </p> <ul> <li><a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address" rel="nofollow noreferrer">Cloud.google.com: IP addresses: Reserve static external IP</a></li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#step_2a_using_a_service" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: LoadBalancer</a> </li> </ul> <p>Please let me know if you have any questions about this. </p>
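<p>For reference, the same static IP operations can be done from the command line. This is only a minimal sketch: the address name <code>foo-service-ip</code>, the region and the example IP below are placeholders, not values taken from your project.</p> <pre class="lang-sh prettyprint-override"><code># Reserve a new regional static external IP (use the same region as your cluster / load balancer)
gcloud compute addresses create foo-service-ip --region us-central1

# Print the reserved address so it can be pasted into loadBalancerIP
gcloud compute addresses describe foo-service-ip --region us-central1 --format='value(address)'

# Or promote the ephemeral IP already attached to your LoadBalancer to a static one
gcloud compute addresses create foo-service-ip --addresses 203.0.113.10 --region us-central1
</code></pre>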
Dawid Kruk
<p>I ran Kubespray in lxc containers with below configuration:<em>(server_ram:8G | all nodes in ubuntu:18.04)</em></p> <pre><code>| NAME | STATE | IPV4 +---------+---------+------------------- | ansible | RUNNING | 10.21.185.23 (eth0) | node1 | RUNNING | 10.21.185.158 (eth0) | node2 | RUNNING | 10.21.185.186 (eth0) | node3 | RUNNING | 10.21.185.65 (eth0) | node4 | RUNNING | 10.21.185.106 (eth0) | node5 | RUNNING | 10.21.185.14 (eth0) </code></pre> <blockquote> <p>In root@ansible: when i ran kubespray command to build cluster i encountered with this Error:</p> </blockquote> <pre><code> TASK [kubernetes/preinstall : Disable swap] ****************** fatal: [node1]: FAILED! =&gt; {"changed": true, "cmd": ["/sbin/swapoff", "-a"], "delta": "0:00:00.020302", "end": "2020-05-13 07:21:24.974910", "msg": "non-zero return code", "rc": 255, "start": "2020-05-13 07:21:24.954608", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} fatal: [node2]: FAILED! =&gt; {"changed": true, "cmd": ["/sbin/swapoff", "-a"], "delta": "0:00:00.010084", "end": "2020-05-13 07:21:25.051443", "msg": "non-zero return code", "rc": 255, "start": "2020-05-13 07:21:25.041359", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} fatal: [node3]: FAILED! =&gt; {"changed": true, "cmd": ["/sbin/swapoff", "-a"], "delta": "0:00:00.008382", "end": "2020-05-13 07:21:25.126695", "msg": "non-zero return code", "rc": 255, "start": "2020-05-13 07:21:25.118313", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} fatal: [node4]: FAILED! =&gt; {"changed": true, "cmd": ["/sbin/swapoff", "-a"], "delta": "0:00:00.006829", "end": "2020-05-13 07:21:25.196145", "msg": "non-zero return code", "rc": 255, "start": "2020-05-13 07:21:25.189316", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} </code></pre> <blockquote> <p>lxc containers configuration:(include:node1,node2,node3,node4,node5)</p> </blockquote> <pre><code>architecture: x86_64 config: image.architecture: amd64 image.description: ubuntu 18.04 LTS amd64 (release) (20200506) image.label: release image.os: ubuntu image.release: bionic image.serial: "20200506" image.version: "18.04" limits.cpu: "2" limits.memory: 2GB limits.memory.swap: "false" linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw" security.nesting: "true" security.privileged: "true" volatile.base_image: 93b9eeb85479af2029203b4a56a2f1fdca6a0e1bf23cdc26b567790bf0f3f3bd volatile.eth0.hwaddr: 00:16:3e:5a:91:9a volatile.idmap.base: "0" volatile.idmap.next: '[]' volatile.last_state.idmap: '[]' volatile.last_state.power: RUNNING devices: {} ephemeral: false profiles: - default stateful: false description: "" </code></pre> <blockquote> <p>When i try to swapoff manually in nodes i receive nothing.</p> </blockquote> <pre><code>root@node1:~# /sbin/swapoff -a root@node1:~# </code></pre> <p>it will be so helpful if anyone has an idea.</p>
Sajjad Hadafi
<p>I divided this answer on 2 parts: </p> <ul> <li><strong>TL;DR</strong> Why Kubespray fails on <code>swapoff -a</code></li> <li>How to install Kubernetes with Kubespray on LXC containers </li> </ul> <hr> <h2>TL;DR</h2> <p><code>Kubespray</code> fails because he gets non exit zero code (255) when running <code>swapoff -a</code>. </p> <blockquote> <p>A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. </p> <p> <em><a href="https://www.gnu.org/software/bash/manual/html_node/Exit-Status.html" rel="nofollow noreferrer">Gnu.org: Exit Status</a></em> </p> </blockquote> <p>Even if you set <code>limits.memory.swap: "false"</code> in the profile associated with the containers it will still produce this error. </p> <p>There is a workaround for it by disabling swap in your <strong>host</strong> system. You can do it by: </p> <ul> <li><code>$ swapoff -a</code></li> <li>delete line associated with swap in <code>/etc/fstab</code></li> <li><code>$ reboot</code></li> </ul> <p>After that your container should produce <strong>zero exit code</strong> when issuing <code>$ swapoff -a</code></p> <hr> <h2>How to install Kubernetes with Kubespray on LXC containers</h2> <p><strong>Assuming that you created your <code>lxc</code> containers and have full ssh access to them</strong>, there are still things to take into consideration before running <code>kubespray</code>. </p> <p>I ran <code>kubespray</code> on <code>lxc</code> containers and stumbled upon issues with: </p> <ul> <li>storage space</li> <li>docker packages</li> <li><code>kmsg</code></li> <li>kernel modules </li> <li><code>conntrack</code> </li> </ul> <h3>Storage space</h3> <p>Please make sure you have enough storage within your storage pool as lack of it will result in failure to provision the cluster. Default storage pool size could be not big enough to hold 5 nodes. </p> <h3>Docker packages</h3> <p>When provisioning the cluster please make sure that you have the newest <code>kubespray</code> version available as the older ones had an issue with docker packages not compatible with each other. </p> <h3>Kmsg</h3> <blockquote> <p>The /dev/kmsg character device node provides userspace access to the kernel's printk buffer.</p> <p> <em><a href="https://www.kernel.org/doc/Documentation/ABI/testing/dev-kmsg" rel="nofollow noreferrer">Kernel.org: Documentation: dev-kmsg</a></em> </p> </blockquote> <p>By default <code>kubespray</code> will fail to provision the cluster when the <code>/dev/kmsg</code> is not available on the node (lxc container). </p> <p><code>/dev/kmsg</code> is not available on <code>lxc</code> container and this will cause a failure of <code>kubespray</code> provisioning. </p> <p>There is a workaround for it. 
<strong>In each <code>lxc</code> container run</strong>: </p> <blockquote> <pre class="lang-sh prettyprint-override"><code># Hack required to provision K8s v1.15+ in LXC containers mknod /dev/kmsg c 1 11 chmod +x /etc/rc.d/rc.local echo 'mknod /dev/kmsg c 1 11' &gt;&gt; /etc/rc.d/rc.local </code></pre> <p> <em><a href="https://github.com/justmeandopensource/kubernetes/blob/master/lxd-provisioning/bootstrap-kube.sh" rel="nofollow noreferrer">Github.com: Justmeandopensource: lxd-provisioning: bootstrap-kube.sh</a></em> </p> </blockquote> <p>I tried other workarounds like: </p> <ul> <li>add <code>lxc.kmsg = 1</code> to <code>/etc/lxc/default.conf</code> - <a href="https://github.com/lxc/lxd/issues/4393#issuecomment-378181793" rel="nofollow noreferrer">deprecated</a> </li> <li>running <code>echo 'L /dev/kmsg - - - - /dev/console' &gt; /etc/tmpfiles.d/kmsg.conf</code> inside the container and then restarting is causing the <code>systemd-journald</code> to sit at 100% usage of a core.</li> </ul> <h3>Kernel modules</h3> <blockquote> <p>The LXC/LXD system containers do not load kernel modules for their own use. What you do, is get the host it load the kernel module, and this module could be available in the container.</p> <p><a href="https://discuss.linuxcontainers.org/t/how-to-add-kernel-modules-into-an-lxc-container/5033/3" rel="nofollow noreferrer">Linuxcontainers.org: How to add kernel modules to LXC container</a></p> </blockquote> <p><code>Kubespray</code> will check if certain kernel modules are available within your nodes. </p> <p>You will need to add following modules <strong>on your host</strong>: </p> <ul> <li><code>ip_vs</code></li> <li><code>ip_vs_sh</code></li> <li><code>ip_vs_rr</code></li> <li><code>ip_vs_wrr</code></li> </ul> <p>You can add above modules with <code>$ modprobe MODULE_NAME</code> or follow this link: <a href="https://www.cyberciti.biz/faq/linux-how-to-load-a-kernel-module-automatically-at-boot-time/" rel="nofollow noreferrer">Cyberciti.biz: Linux how to load a kernel module automatically</a>.</p> <h3>Conntrack</h3> <p>You will need to install <code>conntrack</code> and load a module named <code>nf_conntrack</code>: </p> <ul> <li><code>$ apt install conntrack -y</code></li> <li><code>modprobe nf_conntrack</code></li> </ul> <p>Without above commands <code>kubespray</code> will fail on step of checking the availability of <code>conntrack</code>. </p> <p><strong>With this change in place you should be able to run Kubernetes cluster with <code>kubespray</code> within <code>lxc</code> environment and get output of nodes similar to this:</strong></p> <pre class="lang-sh prettyprint-override"><code>root@k8s1:~# kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s1 Ready master 14h v1.18.2 10.224.47.185 &lt;none&gt; Ubuntu 18.04.4 LTS 5.4.0-31-generic docker://18.9.7 k8s2 Ready master 14h v1.18.2 10.224.47.98 &lt;none&gt; Ubuntu 18.04.4 LTS 5.4.0-31-generic docker://18.9.7 k8s3 Ready &lt;none&gt; 14h v1.18.2 10.224.47.46 &lt;none&gt; Ubuntu 18.04.4 LTS 5.4.0-31-generic docker://18.9.7 k8s4 Ready &lt;none&gt; 14h v1.18.2 10.224.47.246 &lt;none&gt; Ubuntu 18.04.4 LTS 5.4.0-31-generic docker://18.9.7 </code></pre>
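<p>To sum up the host-side preparation described above, a minimal sketch could look like the one below. It assumes a systemd-based host (so that <code>/etc/modules-load.d</code> is honored) and a whitespace-separated swap entry in <code>/etc/fstab</code>, so adjust it to your environment:</p> <pre class="lang-sh prettyprint-override"><code># Run on the LXD/LXC host, not inside the containers
swapoff -a                                  # disable swap for the current session
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after a reboot

# Load the kernel modules kubespray checks for and make them persistent across reboots
for module in ip_vs ip_vs_sh ip_vs_rr ip_vs_wrr nf_conntrack; do
  modprobe "$module"
  echo "$module" &gt;&gt; /etc/modules-load.d/kubespray.conf
done
</code></pre> <p>The <code>conntrack</code> package and the <code>/dev/kmsg</code> workaround described above are still needed in addition to this.</p>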
Dawid Kruk
<p>I have followed <a href="https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/" rel="noreferrer">this guide</a> to configure Fluent Bit and Cloudwatch on my EKS cluster, but currently all of the logs go to one log group. I tried to follow a separate tutorial that used a kubernetes plugin for Fluent Bit to tag the services before the reached the [OUTPUT] configuration. This caused issues because Fargate EKS currently does not handle Fluent Bit [INPUT] configurations as per the <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="noreferrer">bottom of this doc</a>.</p> <p>Has anyone encountered this before? I'd like to split the logs up into separate services.</p> <p>Here is my current YAML file .. I added the parser and filter to see if I could gain any additional information to work with over on Cloudwatch.</p> <pre><code>kind: Namespace apiVersion: v1 metadata: name: aws-observability labels: aws-observability: enabled --- kind: ConfigMap apiVersion: v1 metadata: name: aws-logging namespace: aws-observability data: parsers.conf: | [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On filters.conf: | [FILTER] Name kubernetes Match kube.* Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token # Kube_Tag_Prefix kube.var.log.containers. Kube_URL https://kubernetes.default.svc:443 Merge_Log On Merge_Log_Key log_processed Use_Kubelet true Buffer_Size 0 Dummy_Meta true output.conf: | [OUTPUT] Name cloudwatch_logs Match * region us-east-1 log_group_name fluent-bit-cloudwatch2 log_stream_prefix from-fluent-bit- auto_create_group On </code></pre>
Frederick Haug
<p>So I found out that it is actually simple to do this.</p> <p>The default tag of input on fluent bit contains the name of the service you are logging from, so you can actually stack multiple [OUTPUT] blocks each using the wildcard operator around the name of your service <em></em>. That was all I had to do to get the streams to get sent to different log groups. Here is my YAML for reference.</p> <pre><code>kind: Namespace apiVersion: v1 metadata: name: aws-observability labels: aws-observability: enabled --- kind: ConfigMap apiVersion: v1 metadata: name: aws-logging namespace: aws-observability data: output.conf: | [OUTPUT] Name cloudwatch_logs Match *logger* region us-east-1 log_group_name logger-fluent-bit-cloudwatch log_stream_prefix from-fluent-bit- auto_create_group On [OUTPUT] Name cloudwatch_logs Match *alb* region us-east-1 log_group_name alb-fluent-bit-cloudwatch log_stream_prefix from-fluent-bit- auto_create_group On </code></pre>
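<p>If you want to double check that the separate log groups are actually being created, something like the following AWS CLI calls should list them (assuming the log group names from the ConfigMap above and the <code>us-east-1</code> region):</p> <pre><code>aws logs describe-log-groups --log-group-name-prefix logger-fluent-bit-cloudwatch --region us-east-1
aws logs describe-log-groups --log-group-name-prefix alb-fluent-bit-cloudwatch --region us-east-1
</code></pre>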
Frederick Haug
<p>I am trying to mount a persistent volume on pods (via a deployment).</p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - image: ... volumeMounts: - mountPath: /app/folder name: volume volumes: - name: volume persistentVolumeClaim: claimName: volume-claim --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: volume-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi </code></pre> <p>However, the pod stays in "ContainerCreating" status and the events show the following error message.</p> <pre><code>Unable to mount volumes for pod "podname": timeout expired waiting for volumes to attach or mount for pod "namespace"/"podname". list of unmounted volumes=[volume]. list of unattached volumes=[volume] </code></pre> <p>I verified that the persistent volume claim is ok and bound to a persistent volume.</p> <p>What am I missing here?</p>
znat
<p>When you create a <code>PVC</code> without specifying a <code>PV</code> or type of <code>StorageClass</code> in GKE clusters it will fall back to default option:</p> <ul> <li><code>StorageClass: standard</code></li> <li><code>Provisioner: kubernetes.io/gce-pd</code></li> <li><code>Type: pd-standard</code></li> </ul> <p>Please take a look on official documentation: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">Cloud.google.com: Kubernetes engine persistent volumes</a></p> <p>There could be a lot of circumstances that can produce error message encountered. </p> <p>As it's unknown how many replicas are in your deployment as well as number of nodes and how pods were scheduled on those nodes, I've tried to reproduce your issue and I encountered the same error with following steps (GKE cluster was freshly created to prevent any other dependencies that might affect the behavior).</p> <p><strong>Steps</strong>: </p> <ul> <li>Create a PVC</li> <li>Create a Deployment with <code>replicas &gt; 1</code></li> <li>Check the state of pods </li> <li>Additional links</li> </ul> <h3>Create a PVC</h3> <p>Below is example <code>YAML</code> definition of a <code>PVC</code> the same as yours: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: volume-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi </code></pre> <p>After applying above definition please check if it created successfully. You can do it by using below commands: </p> <ul> <li><code>$ kubectl get pvc volume-claim</code></li> <li><code>$ kubectl get pv</code></li> <li><code>$ kubectl describe pvc volume-claim</code></li> <li><code>$ kubectl get pvc volume-claim -o yaml</code></li> </ul> <h3>Create a Deployment with <code>replicas &gt; 1</code></h3> <p>Below is example <code>YAML</code> definition of deployment with <code>volumeMounts</code> and <code>replicas</code> > 1: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: ubuntu-deployment spec: selector: matchLabels: app: ubuntu replicas: 10 # amount of pods must be &gt; 1 template: metadata: labels: app: ubuntu spec: containers: - name: ubuntu image: ubuntu command: - sleep - "infinity" volumeMounts: - mountPath: /app/folder name: volume volumes: - name: volume persistentVolumeClaim: claimName: volume-claim </code></pre> <p>Apply it and wait for a while. 
</p> <h3>Check the state of pods</h3> <p>You can check the state of pods with below command: </p> <p><code>$ kubectl get pods -o wide</code></p> <p>Output of above command: </p> <pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE IP NODE ubuntu-deployment-2q64z 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-1 ubuntu-deployment-4tjp2 1/1 Running 0 4m27s 10.56.1.14 gke-node-2 ubuntu-deployment-5tn8x 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-1 ubuntu-deployment-5tn9m 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-3 ubuntu-deployment-6vkwf 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-1 ubuntu-deployment-9p45q 1/1 Running 0 4m27s 10.56.1.12 gke-node-2 ubuntu-deployment-lfh7g 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-3 ubuntu-deployment-qxwmq 1/1 Running 0 4m27s 10.56.1.13 gke-node-2 ubuntu-deployment-r7k2k 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-3 ubuntu-deployment-rnr72 0/1 ContainerCreating 0 4m27s &lt;none&gt; gke-node-3 </code></pre> <p>Take a look on above output:</p> <ul> <li>3 pods are in <code>Running</code> state</li> <li>7 pods are in <code>ContainerCreating</code> state </li> </ul> <p><strong>All of the <code>Running</code> pods are located on the same <code>gke-node-2</code></strong></p> <p>You can get more detailed information why pods are in <code>ContainerCreating</code> state by:</p> <p><code>$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE</code></p> <p>The <code>Events</code> part in above command shows: </p> <pre class="lang-sh prettyprint-override"><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 14m default-scheduler Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1 Warning FailedAttachVolume 14m attachdetach-controller Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2 Warning FailedMount 92s (x6 over 12m) kubelet, gke-node-1 Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj] </code></pre> <p>Pod cannot pass <code>ContainerCreating</code> state because of failed mounting of a <code>volume</code>. Mentioned <code>volume</code> is already used by other pods on a different node. </p> <blockquote> <p><strong>ReadWriteOnce:</strong> The Volume can be mounted as read-write by a single node.</p> </blockquote> <h3>Additional links</h3> <p>Please take a look at: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes" rel="nofollow noreferrer">Cloud.google.com: Access modes of persistent volumes</a>. </p> <p>There is detailed answer on topic of access mode: <a href="https://stackoverflow.com/a/60308557">Stackoverflow.com: Why can you set multiple accessmodes on a persistent volume</a></p> <p>As it's unknown what you are trying to achieve please take a look on comparison between Deployments and Statefulsets: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets" rel="nofollow noreferrer">Cloud.google.com: Persistent Volume: Deployments vs statefulsets</a>.</p>
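<p>If the goal is simply that every replica gets its own disk, a <code>StatefulSet</code> with <code>volumeClaimTemplates</code> is one way around the <code>ReadWriteOnce</code> limitation, because each pod then receives its own <code>PersistentVolumeClaim</code>. Below is only a minimal sketch reusing the ubuntu example from this answer, not your actual workload, and it assumes a matching headless <code>Service</code> named <code>ubuntu</code> exists:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-statefulset
spec:
  serviceName: ubuntu              # headless Service expected by the StatefulSet
  replicas: 3
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /app/folder
          name: volume
  volumeClaimTemplates:            # one PVC per pod instead of one shared PVC
  - metadata:
      name: volume
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
</code></pre>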
Dawid Kruk
<p>I want to update the ingress configuration so that it is applied to the ingress instance running on a Kubernetes cluster on Google Cloud.</p> <p>For this I have performed two steps:</p> <ol> <li>Firstly, people suggest that setting both annotations in <code>ingress.yml</code> and then re-creating the ingress will solve the issue, as mentioned in <a href="https://github.com/nginxinc/kubernetes-ingress/issues/21#issuecomment-408618569" rel="nofollow noreferrer">this</a>. </li> </ol> <blockquote> <pre><code>kubernetes.io/ingress.class: "gce" nginx.ingress.kubernetes.io/proxy-body-size: 20m </code></pre> </blockquote> <p>Deleting the ingress from the cluster and creating it again did not help either.</p> <p><strong>ingress.yml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress namespace: default annotations: kubernetes.io/ingress.class: "gce" nginx.ingress.kubernetes.io/proxy-body-size: 20m nginx.org/client-max-body-size: "20m" </code></pre> <ol start="2"> <li>Secondly, I configured the ConfigMap on the cluster so that the ingress configuration would be updated, but got the negative result mentioned in <a href="https://stackoverflow.com/questions/55347770/nginx-ingress-kubernetes-io-proxy-body-size-not-working">this</a>.</li> </ol> <p><strong>nginx-config.yml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config namespace: default data: proxy-body-size: "20m" </code></pre> <p>So how can I update my ingress properties, such as the annotation <code>nginx.ingress.kubernetes.io/proxy-body-size</code>, so that I can upload more than 1 MB of data (my cluster is deployed on GKE)? </p> <p>Any help would be appreciated. Thanks</p>
Zeb
<p>You are misinterpreting the annotations part in your <code>Ingress</code> resource. Let me elaborate on that. </p> <p>The problem is that you trying to use <a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">GCE controller</a> and apply annotations specifically for <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">NGINX Ingress controller</a>. You cannot use NGINX Ingress controller annotations with GCE controller. </p> <p>For your configuration to work you would need to deploy NGINX Ingress controller. </p> <p>You can deploy it by following <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">official documentation</a>. </p> <p>After deploying NGINX Ingress controller the part of the <code>Ingress</code> definition should look like that: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress namespace: default annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-body-size: "20m" </code></pre> <p>Take a specific look at part below: </p> <pre><code> kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-body-size: "20m" </code></pre> <p>Please refer to official <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">documentation</a> when applying annotations for NGINX Ingress controller. </p>
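<p>For context, a complete <code>Ingress</code> resource for the NGINX Ingress controller could look like the sketch below. The host and the backend service name are placeholders (the <code>extensions/v1beta1</code> apiVersion matches the one in your question, but it is deprecated in newer clusters):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-backend-service
          servicePort: 80
</code></pre>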
Dawid Kruk
<p>I have a currently functioning Istio application. I would now like to add HTTPS using the Google Cloud managed certs. I setup the ingress there like this...</p> <pre><code>apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert namespace: istio-system spec: domains: - mydomain.co --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: managed-cert-ingress namespace: istio-system annotations: kubernetes.io/ingress.global-static-ip-name: managed-cert networking.gke.io/managed-certificates: managed-cert kubernetes.io/ingress.class: &quot;gce&quot; spec: defaultBackend: service: name: istio-ingressgateway port: number: 443 --- </code></pre> <p>But when I try going to the site (<a href="https://mydomain.co" rel="nofollow noreferrer">https://mydomain.co</a>) I get...</p> <pre><code>Secure Connection Failed An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s). Error code: SSL_ERROR_NO_CYPHER_OVERLAP </code></pre> <p>The functioning virtual service/gateway looks like this...</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: ingress-gateway namespace: istio-system annotations: kubernetes.io/ingress.global-static-ip-name: earth-616 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http2 protocol: HTTP2 hosts: - &quot;*&quot; --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: test-app namespace: foo spec: hosts: - &quot;*&quot; gateways: - &quot;istio-system/ingress-gateway&quot; http: - match: - uri: exact: / route: - destination: host: test-app port: number: 8000 </code></pre>
Jackie
<p>Pointing k8s ingress towards istio ingress would result in additional latency and additional requirement for the istio gateway to use <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/" rel="nofollow noreferrer">ingress sni passthrough</a> to accept the HTTPS (already TLS terminated traffic).</p> <p>Instead the best practice here would be to use the certificate directly with istio Secure Gateway.</p> <p>You can use the certificate and key issued by Google CA. e.g. from <a href="https://cloud.google.com/certificate-authority-service" rel="nofollow noreferrer">Certificate Authority Service</a> and create a k8s secret to hold the certificate and key. Then configure istio Secure Gateway to terminate the TLS traffic as documented in <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/" rel="nofollow noreferrer">here</a>.</p>
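<p>As an illustration only (the secret and file names below are placeholders), terminating TLS at the istio ingress gateway would roughly mean creating a TLS secret in the gateway's namespace, for example with <code>kubectl create -n istio-system secret tls mydomain-credential --key=mydomain.key --cert=mydomain.crt</code>, and then adding an HTTPS server to the existing gateway. Only the HTTPS part is shown here, so keep the existing HTTP server block if you still need it:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                          # terminate TLS at the ingress gateway
      credentialName: mydomain-credential   # secret must live in the gateway's namespace
    hosts:
    - "mydomain.co"
</code></pre>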
Piotr Malec
<p>I’m getting an error when using terraform to provision node group on AWS EKS. Error: error waiting for EKS Node Group (xxx) creation: <code>NodeCreationFailure: Unhealthy nodes in the kubernetes cluster.</code></p> <p>And I went to console and inspected the node. There is a message <code>“runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker network plugin is not ready: cni config uninitialized”</code>.</p> <p>I have 5 private subnets and connect to Internet via NAT.</p> <p>Is someone able to give me some hint on how to debug this?</p> <p>Here are some details on my env.</p> <pre><code>Kubernetes version: 1.18 Platform version: eks.3 AMI type: AL2_x86_64 AMI release version: 1.18.9-20201211 Instance types: m5.xlarge </code></pre> <p>There are three workloads set up in the cluster.</p> <pre><code>coredns, STATUS (2 Desired, 0 Available, 0 Ready) aws-node STATUS (5 Desired, 5 Scheduled, 0 Available, 0 Ready) kube-proxy STATUS (5 Desired, 5 Scheduled, 5 Available, 5 Ready) </code></pre> <p>go inside the <code>coredns</code>, both pods are in pending state, and conditions has <code>“Available=False, Deployment does not have minimum availability”</code> and <code>“Progress=False, ReplicaSet xxx has timed out progressing”</code> go inside the one of the pod in <code>aws-node</code>, the status shows <code>“Waiting - CrashLoopBackOff”</code></p>
user3691191
<p>Add a pod network add-on, for example Flannel:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml </code></pre>
Parth Shah
<p>I have an instance of CRD <code>A</code> and an instance of CRD <code>B</code>. <code>B</code> has an owner reference to <code>A</code> with <code>BlockOwnerDeletion</code> set to <code>true</code>. Both instances have a finalizer.</p> <p>When I delete <code>A</code> a <code>DeletionTimestamp</code> is set but no <code>foregroundDeletion</code> finalizer is present. Even when I explicitly add the <code>foregroundDeletion</code> finalizer to <code>A</code> before deleting. This all happens before <code>B</code> has been deleted.</p> <p>The documentation says:</p> <blockquote> <p>In foreground cascading deletion, the root object first enters a “deletion in progress” state. In the “deletion in progress” state, the following things are true:</p> <ul> <li>The object is still visible via the REST API</li> <li>The object’s deletionTimestamp is set</li> <li>The object’s metadata.finalizers contains the value “foregroundDeletion”.</li> </ul> <p>Once the “deletion in progress” state is set, the garbage collector deletes the object’s dependents. Once the garbage collector has deleted all “blocking” dependents (objects with ownerReference.blockOwnerDeletion=true), it deletes the owner object.</p> <p>Note that in the “foregroundDeletion”, only dependents with ownerReference.blockOwnerDeletion=true block the deletion of the owner object. Kubernetes version 1.7 added an admission controller that controls user access to set blockOwnerDeletion to true based on delete permissions on the owner object, so that unauthorized dependents cannot delay deletion of an owner object.</p> <p>If an object’s ownerReferences field is set by a controller (such as Deployment or ReplicaSet), blockOwnerDeletion is set automatically and you do not need to manually modify this field</p> </blockquote> <p>This, to me, suggests that if <code>B</code> has an owner reference to <code>A</code> with <code>BlockOwnerDeletion==true</code> the finalizer <code>foregroundDeletion</code> should be added to <code>A</code>.</p> <p>Am I completely misunderstanding this?</p>
granra
<p><strong>According to:</strong></p> <blockquote> <p>This, to me, suggests that if <code>B</code> has an owner reference to <code>A</code> with <code>BlockOwnerDeletion==true</code> the finalizer <code>foregroundDeletion</code> should be added to <code>A</code>.</p> </blockquote> <p>You are understanding it correctly according to documentation. </p> <p>If you think that observed behavior is different from what the official documentation says it should be reported as an issue on github page. </p> <p>Link: <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">Github.com: Kubernetes issues</a></p> <hr> <p>I've tried to reproduce some parts of it on basic examples and here's what I found: </p> <p><strong>Taking:</strong></p> <ul> <li>basic pod with <code>NGINX</code> image as a <strong>parent</strong></li> <li>basic pod with <code>NGINX</code> image as a <strong>child</strong> </li> </ul> <p><strong>Steps:</strong> </p> <ul> <li>Create a <strong>parent</strong> </li> <li>Copy <code>uid</code> of parent to <code>ownerReference</code> of <strong>child</strong> and create it</li> <li><p>Check the behavior:</p> <ul> <li><code>kubectl delete</code></li> <li>default <code>curl</code> </li> <li><code>curl</code> with <code>foregroundDeletion</code> option</li> </ul></li> </ul> <h2>Create a <strong>parent</strong></h2> <p>Here is example <code>YAML</code> definition of basic pod: </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx-owner namespace: default spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx-owner </code></pre> <h2>Copy <code>uid</code> of parent to <code>ownerReference</code> of <strong>child</strong> and create it</h2> <p>Get the <code>uid</code> of <strong>parent</strong> pod by running command: </p> <p><code>$ kubectl get pods nginx-owner -o yaml | grep uid | cut -d ":" -f 2</code></p> <p>Paste it inside of <code>YAML</code> definition of <strong>child</strong> </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx-child namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true kind: Pod name: nginx-owner uid: HERE! spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx-child </code></pre> <p>Save and run it. </p> <h2>Check the behavior</h2> <p>Check if both pods are running and if <strong>child</strong> has <code>ownerReference</code> to <strong>parent</strong> </p> <h3><code>kubectl delete</code></h3> <p>Deleting <strong>parent</strong> pod with <code>$ kubectl delete pod nginx-owner</code> deletes:</p> <ul> <li><code>nginx-owner</code> <strong>(parent)</strong> </li> <li><code>nginx-child</code> <strong>(child)</strong> without any issues. 
</li> </ul> <p>Annotations in <strong>parent</strong> pod right after deleting it: </p> <pre><code> deletionGracePeriodSeconds: 30 deletionTimestamp: "2020-02-27T14:17:48Z" </code></pre> <h3>default <code>curl</code></h3> <p>Assuming access to Kubernetes API on <code>localhost</code> and port <code>8080</code> with deletion command: <code>$ curl -X DELETE localhost:8080/api/v1/default/pod/nginx-owner</code></p> <p>Deleting <strong>parent</strong> pod by API access with default options: </p> <ul> <li>deletes <strong>parent</strong> first </li> <li>deletes <strong>child</strong> second </li> </ul> <p>The same annotations are present in <strong>parent</strong> as in <code>kubectl delete</code></p> <pre><code> deletionGracePeriodSeconds: 30 deletionTimestamp: "2020-02-27T15:33:18Z" </code></pre> <h3><code>curl</code> with <code>foregroundDeletion</code> option</h3> <p>Assuming access to Kubernetes API on <code>localhost</code> and port <code>8080</code> with modified deletion command to include <code>foregroundDeletion</code> option: </p> <pre class="lang-sh prettyprint-override"><code>$ curl -X DELETE localhost:8080/api/v1/namespaces/default/pods/nginx-owner -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' -H "Content-Type: application/json" </code></pre> <p><code>curl</code> based on <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">Kubernetes.io: Garbage collection</a></p> <p>Will: </p> <ul> <li>force <strong>parent</strong> into <code>Terminating</code> state </li> <li>do not delete <strong>child</strong> (<code>Running</code> state) </li> <li>stuck <strong>parent</strong> pod in <code>Terminating</code> state </li> </ul> <p>Annotations for <strong>parent</strong> pod: </p> <pre><code> deletionGracePeriodSeconds: 30 deletionTimestamp: "2020-02-27T15:44:52Z" finalizers: - foregroundDeletion </code></pre> <p>It added <code>finalizers</code> argument with <code>foregroundDeletion</code> but pod does not get deleted for a reason unknown to me. </p> <p><strong>EDIT:</strong></p> <p>I found the error why <code>foreGroundDeletion</code> wasn't working. There was an issue with:</p> <pre><code> ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true kind: Pod name: nginx-owner uid: HERE! </code></pre> <p>It should be:</p> <pre><code> ownerReferences: - apiVersion: v1 blockOwnerDeletion: true kind: Pod name: nginx-owner uid: HERE! </code></pre>
Dawid Kruk
<p>If I set up a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">kubernetes cronjob</a> with for example</p> <pre><code>spec: schedule: "*/5 * * * *" concurrencyPolicy: Forbid </code></pre> <p>then it will create a job every 5 minutes.</p> <p>However if the job takes e.g. 4 minutes, then it will create another job 1 minute after the previous job completed.</p> <p>Is there a way to make it create a job every 5 minutes <em>after</em> the previous job finished?</p> <p>You might say; just make the schedule <code>*/9 * * * *</code> to account for the 4 minutes the job takes, but the job might not be predictable like that.</p>
David S.
<p>Unfortunately there is no possibility within Kubernetes <code>CronJob</code> to specify a situation when the timer starts (for example 5 minutes) after a job is completed. </p> <p>A word about <code>cron</code>: </p> <blockquote> <p>The software utility <strong>cron</strong> is a time-based <a href="https://en.wikipedia.org/wiki/Job_scheduler" rel="nofollow noreferrer" title="Job scheduler">job scheduler</a> in <a href="https://en.wikipedia.org/wiki/Unix-like" rel="nofollow noreferrer" title="Unix-like">Unix-like</a> computer <a href="https://en.wikipedia.org/wiki/Operating_system" rel="nofollow noreferrer" title="Operating system">operating systems</a>. Users that set up and maintain software environments use cron to schedule jobs (commands or <a href="https://en.wikipedia.org/wiki/Shell_script" rel="nofollow noreferrer" title="Shell script">shell scripts</a>) to run periodically at <strong>fixed times, dates, or intervals.</strong></p> <p>-- <em><a href="https://en.wikipedia.org/wiki/Cron" rel="nofollow noreferrer">Wikipedia.org: Cron</a></em> </p> </blockquote> <p>The behavior of your <code>CronJob</code> within Kubernetes environment can be modified by:</p> <ul> <li>As said <code>Schedule</code> in <code>spec</code> definition <pre class="lang-sh prettyprint-override"><code> schedule: "*/5 * * * *" </code></pre></li> <li><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#starting-deadline" rel="nofollow noreferrer">startingDeadline</a> field that is optional and it describe a deadline in seconds for starting a job. If it doesn't start in that time period it will be counted as failed. After a 100 missed schedules it will no longer be scheduled. </li> <li><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">Concurrency policy</a> that will specify how concurrent executions of the same <code>Job</code> are going to be handled: <ul> <li>Allow - concurrency will be allowed</li> <li>Forbid - if previous <code>Job</code> wasn't finished the new one will be skipped </li> <li>Replace - current <code>Job</code> will be replaced with a new one </li> </ul></li> <li><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#suspend" rel="nofollow noreferrer">Suspend</a> parameter if it is set to <code>true</code>, all subsequent executions are suspended. This setting does not apply to already started executions.</li> </ul> <p>You could refer to official documentation: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">CronJobs</a></p> <p>As it's unknown what type of <code>Job</code> you want to run you could try to: </p> <ul> <li>Write a shell script in type of: </li> </ul> <pre class="lang-sh prettyprint-override"><code> while true do HERE_RUN_YOUR_JOB_AND_WAIT_FOR_COMPLETION.sh sleep 300 # ( 5 * 60 seconds ) done </code></pre> <ul> <li>Create an image that mimics usage of above script and use it as pod in Kubernetes. 
(a minimal sketch of this is shown at the end of this answer)</li> <li>Try to get logs from this pod if necessary, as described <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">here</a></li> </ul> <p>Another way would be to create a pod that could <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#programmatic-access-to-the-api" rel="nofollow noreferrer">connect to the Kubernetes API</a>.</p> <p>Take a look at additional resources about <code>Jobs</code>:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">Kubernetes.io: Fine parallel processing work queue</a></li> <li><a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">Kubernetes.io: Coarse parallel processing work queue</a></li> </ul> <p>Please let me know if you have any questions about this. </p>
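<p>As mentioned in the list above, a minimal sketch of such a pod could look like the following. The <code>busybox</code> image and the script path are placeholders; in practice you would build an image that contains your actual job script:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: job-loop
spec:
  containers:
  - name: job-loop
    image: busybox                # placeholder - use an image that contains your job script
    command:
    - /bin/sh
    - -c
    - |
      while true; do
        /scripts/HERE_RUN_YOUR_JOB_AND_WAIT_FOR_COMPLETION.sh
        sleep 300                 # 5 * 60 seconds, counted after the previous run finished
      done
</code></pre> <p>Wrapping the same pod template in a Deployment would additionally restart the loop if its node fails.</p>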
Dawid Kruk
<p>I have been trying to integrate spark interpreter on zeppelin (v0.7.3) on a Kubernetes cluster. However, as a complication of having k8s version 1.13.10 on the servers <a href="https://issues.apache.org/jira/browse/SPARK-28921" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/SPARK-28921</a></p> <p>I needed to upgrade my spark k8s-client to v4.6.1 as indicated here <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/591#issuecomment-526376703" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/591#issuecomment-526376703</a></p> <p>But when I try executing a spark command <code>sc.version</code> on zeppelin-ui, I get:</p> <pre><code>ERROR [2019-10-25 03:45:35,430] ({pool-2-thread-4} Job.java[run]:181) - Job failed java.lang.NullPointerException at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387) at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) </code></pre> <p>Here are the spark-submit configurations I have, but I don't think the error was from these (since I've run these before and they worked fine)</p> <pre><code>spark.kubernetes.driver.docker.image=x spark.kubernetes.executor.docker.image=x spark.local.dir=/tmp/spark-local spark.executor.instances=5 spark.dynamicAllocation.enabled=true spark.shuffle.service.enabled=true spark.kubernetes.shuffle.labels="x" spark.dynamicAllocation.maxExecutors=5 spark.dynamicAllocation.minExecutors=1 spark.kubernetes.docker.image.pullPolicy=IfNotPresent spark.kubernetes.resourceStagingServer.uri="http://xxx:xx" </code></pre> <p>I have tried downgrading the spark-k8s client to 3.x.x until 4.0.x but I get the HTTP error. Thus, I've decided to stick to v4.6.1 . 
Opening the zeppelin-interpreter logs, I find the following stack-trace:</p> <pre><code>ERROR [2019-10-25 03:45:35,428] ({pool-2-thread-4} Utils.java[invokeMethod]:40) - java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:378) at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491) at org.apache.zeppelin.scheduler.Job.run(Job.java:175) at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NoClassDefFoundError: io/fabric8/kubernetes/api/model/apps/Deployment at io.fabric8.kubernetes.client.internal.readiness.Readiness.isReady(Readiness.java:62) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl$$anonfun$start$1.apply(KubernetesExternalShuffleManager.scala:82) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl$$anonfun$start$1.apply(KubernetesExternalShuffleManager.scala:81) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl.start(KubernetesExternalShuffleManager.scala:80) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$$anonfun$start$1.apply(KubernetesClusterSchedulerBackend.scala:212) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$$anonfun$start$1.apply(KubernetesClusterSchedulerBackend.scala:212) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:212) at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173) at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:509) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901) at 
scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901) ... 20 more INFO [2019-10-25 03:45:35,430] ({pool-2-thread-4} SparkInterpreter.java[createSparkSession]:379) - Created Spark session ERROR [2019-10-25 03:45:35,430] ({pool-2-thread-4} Job.java[run]:181) - Job failed java.lang.NullPointerException at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387) at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491) at org.apache.zeppelin.scheduler.Job.run(Job.java:175) at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) INFO [2019-10-25 03:45:35,431] ({pool-2-thread-4} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1571975134433 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter819422312 </code></pre> <p>I expect to run this command:</p> <pre><code>%spark %sc.version </code></pre> <p>P.S. This is my first post here, so if I did not follow certain rules, kindly correct me. Thanks!</p>
Joshua Villanueva
<p>After some rigorous research and help from my colleagues, I was able to verify that <code>io/fabric8/kubernetes/api/model/apps/Deployment</code> did not exist in kubernetes-model v2.0.0. Upgrading the jar to v3.0.0 fixed the issue. </p>
Joshua Villanueva
<p>I'm trying to configure gzip for a Python application that runs on Kubernetes with nginx-ingress in GKE. But I discovered that enabling gzip only in the ingress-controller ConfigMap is not enough, because, as I understand it, I need to enable compression on the backend as well.</p> <p>How can I enable compression on the backend of my Python application so that gzip works through the NGINX controller?</p> <p>My main problem is that, from searching here on Stack Overflow, I know I need to put the compression in the backend, but I do not know how to do this.</p>
Jonatas Oliveira
<p>Focusing specifically on the title of this question and extending on the example of such setup as pointed by user @Raunak Jhawar.</p> <p>You can configure your <code>nginx-ingress</code> to compress the data by updating the <code>ingress-nginx-controller</code> <strong>configmap</strong>.</p> <p>This would work on the path:</p> <ul> <li><code>Pod</code> ----&gt; <code>NGINX Ingress controller</code> - <strong><code>GZIP</code></strong> -&gt; <code>Client</code> (Web browser)</li> </ul> <p>To enable such setup you will need to edit the <code>Ingress</code> controller configmap like below:</p> <ul> <li><code>$ kubectl edit configmap -n ingress-nginx ingress-nginx-controller</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>data: # ADD IF NOT PRESENT use-gzip: &quot;true&quot; # ENABLE GZIP COMPRESSION gzip-types: &quot;*&quot; # SPECIFY MIME TYPES TO COMPRESS (&quot;*&quot; FOR ALL) </code></pre> <p>You can find more reference and options to configure by following below link:</p> <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">Kubernetes.github.io: Ingress NGINX: User guide: Nginx-configuration: Configmap</a></em></li> </ul> <blockquote> <p><strong>A side note!</strong></p> <p>You can also use other methods of editing resources like: <code>$ kubectl patch</code></p> </blockquote> <p>This changes would make the <code>nginx-ingress-controller</code> Pod to be automatically reconfigured.</p> <p>I've included an example of such setup below.</p> <hr /> <p>To check if the compression occurs and if it's working I've used following setup:</p> <ul> <li><code>NGINX Ingress Controller</code> spawned by: <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="noreferrer">Kubernetes.github.io: Ingress NGINX: Deploy: GCE-GKE</a></em></li> </ul> </li> <li><code>NGINX</code> pod with a <code>5mb.txt</code> file filled with <strong><code>0</code></strong>'s</li> <li><code>Service</code> and <code>Ingress</code> resource that will expose the <code>NGINX</code> pod with <code>NGINX Ingress Controller</code></li> </ul> <p>You can check if your setup with <code>nginx-ingress</code> supoorts <code>gzip</code> compression by either:</p> <ul> <li><p>Checking with <code>Developer tools</code> with a browser of your choosing:</p> <ul> <li><code>Chrome</code> -&gt; <code>F12</code> -&gt; <code>Network</code> -&gt; Go to site (or refresh) and press on example file (look on <code>Response Header</code>):</li> </ul> <p><a href="https://i.stack.imgur.com/KtWUq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KtWUq.png" alt="RESPONSE HEADER" /></a></p> </li> <li><p>You can also use <code>curl</code> command like below (<a href="https://stackoverflow.com/a/9140223/12257134">source</a>):</p> <ul> <li><code>$ curl $URL$ --silent --write-out &quot;%{size_download}\n&quot; --output /dev/null</code> - get size <strong>without</strong> compression</li> <li><code>$ curl $URL$ --silent -H &quot;Accept-Encoding: gzip,deflate&quot; --write-out &quot;%{size_download}\n&quot; --output /dev/null</code> - get the size <strong>with</strong> compression (if supported)</li> </ul> </li> </ul> <p>Above methods have shown the compression rate of about <code>99%</code> (<code>5MB</code> file compressed to <code>50KB</code>)</p> <hr /> <p>I also encourage you to check below links for additional reference:</p> <ul> <li><em><a href="https://stackoverflow.com/a/9140223/12257134">Stackoverflow.com: How can I tell 
if my server is serving GZipped content? </a></em></li> <li><em><a href="https://serverfault.com/questions/496098/gzip-compression-with-nginx">Serverfault.com: Gzip compression with nginx </a></em> - link about images (I have learned the hard way)</li> </ul>
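<p>As an example of the <code>kubectl patch</code> method mentioned above (assuming the ConfigMap is named <code>ingress-nginx-controller</code> and lives in the <code>ingress-nginx</code> namespace, as in this setup):</p> <pre class="lang-sh prettyprint-override"><code>kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
  --type merge \
  -p '{"data":{"use-gzip":"true","gzip-types":"*"}}'
</code></pre>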
Dawid Kruk
<p>I'm using <code>helm template</code> command to template a file but cannot escape a space character in yaml sequence. I have tried with <code>""</code> and <code>''</code> but the result remains the same.</p> <p><strong>template.yaml:</strong></p> <pre><code>scriptsApproval: {{ toYaml .Values.scriptApproval }} </code></pre> <p><strong>values.yaml:</strong></p> <pre><code>scriptsApproval: - string1 abc ijk lmn - string2 abc ijk lmn - string3 abc ijk lmn </code></pre> <p>Getting Results after running <code>helm template</code></p> <p><strong>result.yaml:</strong></p> <pre><code>scriptsApproval: - string1 abc ijk lmn - string2 abc ijk lmn - string3 abc ijk lmn </code></pre>
U. Ahmad
<p>You can use Helm's <code>|quote</code> function described <a href="https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#quote-strings-dont-quote-integers" rel="nofollow noreferrer">here</a> and <a href="https://helm.sh/docs/chart_best_practices/#formatting-templates" rel="nofollow noreferrer">here</a>.</p> <pre><code>{{ toYaml .Values.scriptApproval }} </code></pre> <p>Would become something like</p> <pre><code>{{- range .Values.scriptApproval }}
- {{ . | quote }}
{{- end }}
</code></pre> <p>so that every item of the list is rendered as a quoted string.</p> <p>*Untested</p>
Christiaan Vermeulen
<p>I am very new to Spring Boot and the application.properties file. My problem is that I need to be very flexible with my database port, because I have two different databases. Therefore I want to read the port from an environment variable. I tried the following:</p> <pre><code>spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:12345/project </code></pre> <p>This works fine if my database uses port 12345. But if I now try to read the port from an environment variable, there is a problem. I tried this:</p> <pre><code>spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project </code></pre> <p>The problem is the following: I am using Kubernetes and Jenkins. The environment variable "port" is given to my program in Kubernetes, and this works fine for "db-password", but not for the port. My Jenkins says: "The connection string contains an invalid host 'abd:${port}'. The port '${port}' is not a valid, it must be an integer between 0 and 65535"</p> <p>So now to my question: How can I read a port from an environment variable without getting this error?</p> <p>Thank you in advance!</p>
slaayaah
<p>To inject environment variable to the pods you can do the following: </p> <h3>Configmap</h3> <p>You can create <code>ConfigMap</code> and configure your pods to use it. </p> <p>Steps required:</p> <ul> <li>Create <code>ConfigMap</code></li> <li>Update/Create the deployment with ConfigMap</li> <li>Test it</li> </ul> <h3>Create ConfigMap</h3> <p>I provided simple <code>ConfigMap</code> below to store your variables:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: example-config data: port: "12345" </code></pre> <p>To apply it and be able to use it invoke following command: </p> <p><code>$ kubectl create -f example-configmap.yaml</code></p> <p>The <code>ConfigMap</code> above will create the environment variable <code>port</code> with value of <code>12345</code>. </p> <p>Check if <code>ConfigMap</code> was created successfully: </p> <p><code>$ kubectl get configmap</code></p> <p>Output should be like this: </p> <pre class="lang-sh prettyprint-override"><code>NAME DATA AGE example-config 1 21m </code></pre> <p>To get the detailed information you can check it with command:</p> <p><code>$ kubectl describe configmap example-config</code></p> <p>With output:</p> <pre><code>Name: example-config Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== port: ---- 12345 Events: &lt;none&gt; </code></pre> <h3>Update/Create the deployment with ConfigMap</h3> <p>I provided simple deployment with <code>ConfigMap</code> included: </p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 2 # tells deployment to run 2 pods matching the template template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 envFrom: - configMapRef: name: example-config ports: - containerPort: 80 </code></pre> <p>Configuration responsible for using <code>ConfigMap</code>: </p> <pre><code> envFrom: - configMapRef: name: example-config </code></pre> <p>After that you need to run your deployment with command: </p> <p><code>$ kubectl create -f configmap-test.yaml</code></p> <p>And check if it's working: </p> <p><code>$ kubectl get pods</code></p> <p>With output: </p> <pre><code>NAME READY STATUS RESTARTS AGE nginx-deployment-84d6f58895-b4zvz 1/1 Running 0 23m nginx-deployment-84d6f58895-dp4c7 1/1 Running 0 23m </code></pre> <h3>Test it</h3> <p>To test if environment variable is working you need to get inside the pod and check for yourself.</p> <p>To do that invoke the command:</p> <p><code>$ kubectl exec -it NAME_OF_POD -- /bin/bash</code></p> <p>Please provide the variable NAME_OF_POD with appropriate one for your case. </p> <p>After successfully getting into container run: </p> <p><code>$ echo $port</code></p> <p>It should show: </p> <pre class="lang-sh prettyprint-override"><code>root@nginx-deployment-84d6f58895-b4zvz:/# echo $port 12345 </code></pre> <p>Now you can use your environment variables inside pods. </p>
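<p>If you prefer to expose only a single key instead of every entry in the ConfigMap, a <code>configMapKeyRef</code> can be used in the container spec instead of <code>envFrom</code>. A minimal sketch of that fragment, based on the ConfigMap above:</p> <pre><code>    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        env:
        - name: port
          valueFrom:
            configMapKeyRef:
              name: example-config
              key: port
</code></pre>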
Dawid Kruk
<p>We're currently trying to deploy Kong in a GKE cluster and the goal is to delegate the certificate management to Google's Load Balancer (the SSL termination should be made here).</p> <p>The problem we faced is that all Google's documentation is focus on deploying some service and use their exclusive Load Balancer that connects directly to the Ingress declared.</p> <p>The configuration which currently works (without Kong) is the following:</p> <pre><code># values.yml (from Service X inside GKE, using Helm) ... ingress: enabled: true hostname: example.com annotations: kubernetes.io/ingress.class: gce kubernetes.io/ingress.allow-http: &quot;false&quot; kubernetes.io/ingress.global-static-ip-name: example-static-ip ingress.gcp.kubernetes.io/pre-shared-cert: example-cert ... </code></pre> <p>However, when we change <code>gce</code> for <code>kong</code> as the ingress.class, all other annotations don't continue to work. This is expected, as now Kong's proxy is the one being the <em>Load Balancer</em> and should be the one that tells Google's LB how to generate itself.</p> <p>According to this <a href="https://v1-18.docs.kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws" rel="nofollow noreferrer">documentation</a>, it should be fairly simple to add those annotations to Kong proxy service.</p> <p>Based on this chain of events:</p> <ul> <li>K8s Ingress creates Kong proxy service</li> <li>Kong proxy service generates Google's LB</li> </ul> <p>The configuration to customize the LB should be made inside Kong's service (as I understand):</p> <pre><code># values.yml (Kong, using Helm) ... proxy: type: LoadBalancer annotations: {} &lt;-- Here http: ... tls: ... ... </code></pre> <p>However, for GCP there are only a few according to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#summary_of_external_ingress_annotations" rel="nofollow noreferrer">docs</a>, and none of them have the desire effect (cannot set certificate to use, define which type of LB to create, etc.)</p> <p>All things into account, is there any way to achieve our main goal which would be:</p> <p><em>&quot;Deploy Kong API Gateway through Helm inside GKE and delegate SSL termination to custom Google's LB.&quot;</em></p>
manuelnucci
<p><strong>TL;DR</strong></p> <p><strong>Unfortunately there is no possibility to use Google Managed Certificates with Kong Ingress.</strong></p> <p>To be exact Google Managed Certificates in <code>GKE</code> can be used <strong>only</strong> with:</p> <ul> <li>Ingress for External HTTP(S) Load Balancing</li> </ul> <p>As pointed by documentation:</p> <blockquote> <p><strong>Note:</strong> This feature is only available with Ingress for External HTTP(S) Load Balancing.</p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: How to: Managed certs</a></em></p> </blockquote> <hr /> <hr /> <h3>Explanation</h3> <p>According to the documentation (slightly modified):</p> <blockquote> <p>When you create an Ingress object with below class:</p> <ul> <li><code>kubernetes.io/ingress.class: gce</code></li> </ul> <p>the <a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">GKE Ingress</a> controller creates a <a href="https://cloud.google.com/load-balancing/docs/https" rel="nofollow noreferrer">Google Cloud HTTP(S) Load Balancer</a> and configures it according to the information in the Ingress and its associated Services.</p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#ingress_for_external_and_internal_traffic" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Ingress: Ingress for external and internal traffic</a></em></p> </blockquote> <p>Using different <code>Ingress</code> controllers like (nginx-ingress, traefik, <strong>kong</strong>) require you to use <code>Service</code> of type <code>LoadBalancer</code>.</p> <p>Using above <code>Service</code> in <code>GKE</code> will automatically create <a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">External TCP/UDP Network Load Balancer</a> (L4) pointing to your <code>Ingress</code> controller. From this point the traffic will be redirected to specific services based on the <code>Ingress</code> resource with appropriate <code>ingress.class</code>.</p> <blockquote> <p>A tip!</p> <p>You can see in the helm chart of Kong that it's using the same way!</p> <ul> <li><code>helm install kong/kong kong-ingress --dry-run --debug</code></li> </ul> </blockquote> <p>To have the secure connection between the client and kong you will need to <strong>either</strong>:</p> <ul> <li>Use <code>cert-manager</code> to provision the certificates for the <code>Ingress</code> controller. <ul> <li><em><a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">Cert-manager.io: Docs</a></em></li> </ul> </li> <li>Provision the certificates in other way and provide them as a secret to be used by Ingress controller. <ul> <li><em><a href="https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets" rel="nofollow noreferrer">Kubernetes.io: Secret: TLS Secrets</a></em></li> </ul> </li> </ul> <blockquote> <p><strong>Side note</strong>: In both ways the SSL termination will happen at the Ingress controller.</p> </blockquote> <hr /> <p>Answering the part of the question:</p> <blockquote> <p>The configuration to customize the LB should be made inside Kong's service (as I understand):</p> <pre class="lang-yaml prettyprint-override"><code># values.yml (Kong, using Helm) ... proxy: type: LoadBalancer annotations: {} &lt;-- Here ... 
</code></pre> <p>However, for GCP there are only a few according to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#summary_of_external_ingress_annotations" rel="nofollow noreferrer">docs</a>, and none of them have the desire effect (cannot set certificate to use, define which type of LB to create, etc.)</p> </blockquote> <p>As said earlier <code>Service</code> of type <code>LoadBalancer</code> in <code>GKE</code> will configure L4 <code>TCP</code>/<code>UDP</code> LoadBalancer which is not designed to be responsible for handling SSL traffic (SSL termination).</p> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">Cloud.google.com: Load Balancing: Docs: Network</a></em></li> <li><em><a href="https://github.com/Kong/kubernetes-ingress-controller" rel="nofollow noreferrer">Github.com: Kong: Kubernetes ingress controller</a></em></li> </ul>
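<hr /> <p>As a rough sketch of the second option (provisioning a certificate yourself and terminating TLS at Kong), assuming you already have <code>example.com.crt</code>/<code>example.com.key</code> files, and with <code>example.com</code> and <code>example-service</code> as placeholders for your own host and backend:</p> <pre class="lang-sh prettyprint-override"><code># Store the certificate as a TLS secret for the Ingress controller to use
kubectl create secret tls example-com-tls --cert=example.com.crt --key=example.com.key
</code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls   # SSL terminates here, at Kong
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
</code></pre> <p>Traffic then flows: client -&gt; External TCP/UDP Network Load Balancer (L4, no TLS handling) -&gt; Kong proxy (TLS termination) -&gt; your services.</p>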
Dawid Kruk
<p>I have a sample application (web-app, backend-1, backend-2) deployed on minikube all under a JWT policy, and they all have proper destination rules, Istio sidecar and MTLS enabled in order to secure the east-west traffic.</p> <pre><code>apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: oidc spec: targets: - name: web-app - name: backend-1 - name: backend-2 peers: - mtls: {} origins: - jwt: issuer: "http://myurl/auth/realms/test" jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs" principalBinding: USE_ORIGIN </code></pre> <p>When I run the following command I receive a 401 unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded to backend-1 and backend-2 headers during the http request.</p> <pre><code>$&gt; curl http://minikubeip/api "Authorization: Bearer $TOKEN" </code></pre> <p>Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?</p> <p><strong>Edit:</strong> This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get </p> <blockquote> <p>{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}</p> </blockquote> <p>Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15. This is the policy I am applying:</p> <pre><code>apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: default spec: # peers: # - mtls: {} origins: - jwt: issuer: "http://10.148.199.140:8080/auth/realms/test" jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs" principalBinding: USE_ORIGIN </code></pre>
V. Ro
<blockquote> <p>should the token be propagated to backend-1 and backend-2 without any other changes?</p> </blockquote> <p>Yes, the policy should transfer the token to both backend-1 and backend-2.</p> <h2>There is a <a href="https://github.com/istio/istio/issues/15122" rel="nofollow noreferrer">github issue</a> where users had the same issue as you</h2> <p>Some information from there:</p> <blockquote> <p>The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see <a href="https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth" rel="nofollow noreferrer">https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth</a></p> <p>Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config), you can find the code for that in pilot/pkg/security</p> </blockquote> <h2>There is another report of this problem on <a href="https://stackoverflow.com/questions/54988412/keycloak-provides-invalid-signature-with-istio-and-jwt">stackoverflow</a></h2> <p>where the accepted answer is:</p> <blockquote> <p>The problem was resolved with two options: 1. Replace Service Name and port by external server ip and external port (for issuer and jwksUri) 2. Disable the usage of mTLS and its policy (Known issue: <a href="https://github.com/istio/istio/issues/10062" rel="nofollow noreferrer">https://github.com/istio/istio/issues/10062</a>).</p> </blockquote> <h2>From the istio documentation</h2> <blockquote> <p>For each service, Istio applies the narrowest matching policy. The order is: service-specific &gt; namespace-wide &gt; mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.</p> <p>To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.</p> <p>If a service has no matching policies, both transport authentication and origin authentication are disabled.</p> </blockquote>
chd
<p>I'm new to Kubernetes and currently I'm researching profiling in Kubernetes. I want to log the deployment process in Kubernetes (creating a pod, restarting a pod, etc.) and want to know the time and resources (RAM, CPU) needed in each step (for example when downloading the image, building the deployment, the pod, etc.).</p> <p>Is there a way or tool for me to log this process? Thank you!</p>
jsishere
<p>I am not really sure you can achieve the outcome you want without extensive knowledge about certain components and some deep dive coding. </p> <h2>What can be retrieved from Kubernetes:</h2> <h3>Information about events</h3> <p>Like pod creation, termination, allocation with timestamps: </p> <p><code>$ kubectl get events --all-namespaces</code> </p> <p>Even in the <code>json</code> format there is nothing about CPU/RAM usage in this events.</p> <h3>Information about pods</h3> <p><code>$ kubectl get pods POD_NAME -o json</code></p> <p>No information about CPU/RAM usage. </p> <p><code>$ kubectl describe pods POD_NAME</code></p> <p>No information about CPU/RAM usage either. </p> <h3>Information about resource usage</h3> <p>There is some tools to monitor and report basic resource usage:</p> <p><code>$ kubectl top node</code></p> <p>With output: </p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% MASTER 90m 9% 882Mi 33% WORKER1 47m 5% 841Mi 31% WORKER2 37m 3% 656Mi 24% </code></pre> <p><code>$ kubectl top pods --all-namespaces</code></p> <p>With output: </p> <pre><code>NAMESPACE NAME CPU(cores) MEMORY(bytes) default nginx-local-84ddb99b55-2nzdb 0m 1Mi default nginx-local-84ddb99b55-nxfh5 0m 1Mi default nginx-local-84ddb99b55-xllw2 0m 1Mi </code></pre> <p>There is CPU/RAM usage but in basic form. </p> <h3>Information about deployments</h3> <p><code>$ kubectl describe deployment deployment_name</code> </p> <p>Provided output gives no information about CPU/RAM usage. </p> <h3>Getting information about resources</h3> <p>Getting resources like CPU/RAM usage specific to some actions like pulling the image or scaling the deployment could be problematic. Not all processes are managed by Kubernetes and additional tools at OS level might be needed to fetch that information.</p> <p>For example pulling an image for deployment engages the kubelet agent as well as the <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="nofollow noreferrer">CRI</a> to talk to Docker or other Container Runtime your cluster is using. Adding to that, the Container Runtime not only downloads the image, it does other actions that are not directly monitored by Kubernetes.</p> <p>For another example HPA (Horizontal Pod Autoscaler) is Kubernetes abstraction and getting it's metrics would be highly dependent on how the metrics are collected in the cluster in order to determine the best way to fetch them.</p> <p>I would highly encourage you to share what exactly (case by case) you want to monitor. </p>
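<p>If you want to correlate the timing of events with basic resource usage, the following standard <code>kubectl</code> invocations may be a useful starting point (no extra tooling assumed beyond metrics-server for <code>top</code>):</p> <pre><code># Events sorted by time, useful to see how long pod creation/restarts took
$ kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# Per-container CPU/RAM snapshot (requires metrics-server)
$ kubectl top pods --all-namespaces --containers
</code></pre>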
Dawid Kruk
<p>I currently have a ROM, RWO persistent volume claim that I regularly use as a read only volume in a deployment that sporadically gets repopulated by some job using it as a read write volume while the deployment is scaled down to 0. However, since in-tree plugins will be deprecated in future versions of kubernetes, I'm planning to migrate this process to volumes using csi drivers.</p> <p>In order to clarify my current use of this kind of volumes, I'll put a sample yaml configuration file using the basic idea:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: standard accessModes: - ReadOnlyMany - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: batch/v1 kind: Job metadata: name: test spec: template: spec: containers: - name: test image: busybox # Populate the volume command: - touch - /foo/bar volumeMounts: - name: test mountPath: /foo/ subPath: foo volumes: - name: test persistentVolumeClaim: claimName: test restartPolicy: Never --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: test name: test spec: replicas: 0 selector: matchLabels: app: test template: metadata: labels: app: test spec: containers: - name: test image: busybox command: - sh - '-c' - | # Check the volume has been populated ls /foo/ # Prevent the pod from exiting for a while sleep 3600 volumeMounts: - name: test mountPath: /foo/ subPath: foo volumes: - name: test persistentVolumeClaim: claimName: test readOnly: true </code></pre> <p>so the job populates the the volume and later the deployment is scaled up. However, replacing the <code>storageClassName</code> field <code>standard</code> in the persistent volume claim by <code>singlewriter-standard</code> does not even allow the job to run.</p> <p>Is this some kind of bug? Is there some workaround to this using volumes using the csi driver?</p> <p>If this is a bug, I'd plan to migrate to using sci drivers later; however, if this is not a bug, how should I migrate my current workflow since in-tree plugins will eventually be deprecated?</p> <p><strong>Edit:</strong></p> <p>The version of the kubernetes server is <code>1.17.9-gke.1504</code>. As for the storage classes, they are the <code>standard</code> and <code>singlewriter-standard</code> default storage classes:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: labels: addonmanager.kubernetes.io/mode: EnsureExists kubernetes.io/cluster-service: &quot;true&quot; name: standard parameters: type: pd-standard provisioner: kubernetes.io/gce-pd reclaimPolicy: Delete volumeBindingMode: Immediate --- apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: components.gke.io/component-name: pdcsi-addon components.gke.io/component-version: 0.5.1 storageclass.kubernetes.io/is-default-class: &quot;true&quot; labels: addonmanager.kubernetes.io/mode: EnsureExists name: singlewriter-standard parameters: type: pd-standard provisioner: pd.csi.storage.gke.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer </code></pre> <p>While the error is not shown in the job but in the pod itself (this is just for the <code>singlewriter-standard</code> storage class):</p> <p><code>Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume &quot;...&quot; : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume</code></p>
Raúl Arturo Chávez Sarmiento
<p>The message you encountered:</p> <pre class="lang-sh prettyprint-override"><code>Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume &quot;...&quot; : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume </code></pre> <p>is not a bug. The <code>attachdetach-controller</code> is showing this error as it doesn't know in which <code>accessMode</code> it should mount the volume:</p> <blockquote> <p>For [ReadOnlyMany, ReadWriteOnce] PV, the external attacher simply does not know if the attachment is going to be consumed as read-only(-many) or as read-write(-once)</p> <p>-- <em><a href="https://github.com/kubernetes-csi/external-attacher/issues/153#issuecomment-500347886" rel="nofollow noreferrer">Github.com: Kubernetes CSI: External attacher: Issues: 153</a></em></p> </blockquote> <p>I encourage you to check the link above for a full explanation.</p> <hr /> <blockquote> <p>I currently have a ROM, RWO persistent volume claim that I regularly use as a read only volume in a deployment that sporadically gets repopulated by some job using it as a read write volume</p> </blockquote> <p>You can combine the steps from below guides:</p> <ul> <li>Turn on the <code>CSI</code> Persistent disk driver in GKE <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Gce-pd-csi-driver</a></em></li> </ul> </li> <li>Create a <code>PVC</code> with <code>pd.csi.storage.gke.io</code> provisioner (you will need to modify <code>YAML</code> definitions with <code>storageClassName: singlewriter-standard</code>): <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks</a></em></li> </ul> </li> </ul> <p>Citing the documentation on steps to take (from <code>ReadOnlyMany</code> guide) that should fulfill the setup you've shown:</p> <blockquote> <p>Before using a persistent disk in read-only mode, you must format it.</p> <p>To format your persistent disk:</p> <ul> <li>Create a persistent disk <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">manually</a> or by using <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#dynamic_provisioning" rel="nofollow noreferrer">dynamic provisioning</a>.</li> <li>Format the disk and populate it with data. To format the disk, you can: <ul> <li>Reference the disk as a <code>ReadWriteOnce</code> volume in a Pod. Doing this results in GKE automatically formatting the disk, and enables the Pod to pre-populate the disk with data. When the Pod starts, make sure the Pod writes data to the disk.</li> <li>Manually mount the disk to a VM and format it. Write any data to the disk that you want. 
For details, see <a href="https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting" rel="nofollow noreferrer">Persistent disk formatting</a>.</li> </ul> </li> <li>Unmount and detach the disk: <ul> <li>If you referenced the disk in a Pod, delete the Pod, wait for it to terminate, and wait for the disk to automatically detach from the node.</li> <li>If you mounted the disk to a VM, detach the disk using <code>gcloud compute instances detach-disk</code>.</li> </ul> </li> <li>Create Pods that access the volume as <code>ReadOnlyMany</code> as shown in the following section.</li> </ul> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks</a></em></p> </blockquote> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md" rel="nofollow noreferrer">Github.com: Kubernetes: Design proposals: Storage: CSI</a></em></li> <li><em><a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/" rel="nofollow noreferrer">Kubernetes.io: Blog: Container storage interface</a></em></li> <li><em><a href="https://kubernetes-csi.github.io/docs/drivers.html" rel="nofollow noreferrer">Kubernetes-csi.github.io: Docs: Drivers</a></em></li> </ul> <hr /> <h3>EDIT</h3> <p>Following the official documentation:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks</a></em></li> </ul> <p>Please treat it as an example.</p> <p>Dynamically create a <code>PVC</code> that will be used with <code>ReadWriteOnce</code> accessMode:</p> <p><code>pvc.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-rwo spec: storageClassName: singlewriter-standard accessModes: - ReadWriteOnce resources: requests: storage: 81Gi </code></pre> <p>Run a <code>Pod</code> with a <code>PVC</code> mounted to it:</p> <p><code>pod.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: busybox-pvc spec: containers: - image: k8s.gcr.io/busybox name: busybox command: - &quot;sleep&quot; - &quot;36000&quot; volumeMounts: - mountPath: /test-mnt name: my-volume volumes: - name: my-volume persistentVolumeClaim: claimName: pvc-rwo </code></pre> <p>Run following commands:</p> <ul> <li><code>$ kubectl exec -it busybox-pvc -- /bin/sh</code></li> <li><code>$ echo &quot;Hello there!&quot; &gt; /test-mnt/hello.txt</code></li> </ul> <p>Delete the <code>Pod</code> and wait for the drive to be unmounted. 
Please do not delete <code>PVC</code> as deleting it:</p> <blockquote> <p>When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.</p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#dynamic_provisioning" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: Dynamic provisioning</a></em></p> </blockquote> <hr /> <p>Get the name (it's in <code>VOLUME</code> column) of the earlier created disk by running:</p> <ul> <li><code>$ kubectl get pvc</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc-rwo Bound pvc-11111111-2222-3333-4444-555555555555 81Gi RWO singlewriter-standard 52m </code></pre> <p>Create a <code>PV</code> and <code>PVC</code> with following definition:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-rox spec: storageClassName: singlewriter-standard capacity: storage: 81Gi accessModes: - ReadOnlyMany claimRef: namespace: default name: pvc-rox # &lt;-- important gcePersistentDisk: pdName: &lt;INSERT HERE THE DISK NAME FROM EARLIER COMMAND&gt; # pdName: pvc-11111111-2222-3333-4444-555555555555 &lt;- example fsType: ext4 readOnly: true --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-rox # &lt;-- important spec: storageClassName: singlewriter-standard accessModes: - ReadOnlyMany resources: requests: storage: 81Gi </code></pre> <p>You can test if your disk is in <code>ROX</code> accessMode when the spawned <code>Pods</code> were scheduled on multiple nodes and all of them have the <code>PVC</code> mounted:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx replicas: 15 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx volumeMounts: - mountPath: /test-mnt name: volume-ro readOnly: true volumes: - name: volume-ro persistentVolumeClaim: claimName: pvc-rox readOnly: true </code></pre> <ul> <li><code>$ kubectl get deployment nginx</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAME READY UP-TO-DATE AVAILABLE AGE nginx 15/15 15 15 3m1s </code></pre> <ul> <li><code>$ kubectl exec -it nginx-6c77b8bf66-njhpm -- cat /test-mnt/hello.txt</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>Hello there! </code></pre>
Dawid Kruk
<p>I have installed Istio using the helm chart with the following settings:</p> <pre><code>helm template --set kiali.enabled=true --set tracing.enabled=true --set pilot.traceSampling=100 --set grafana.enabled=true --set sidecarInjectorWebhook.enabled=true install/kubernetes/helm/istio --name istio --namespace istio-system &gt; istio.yaml </code></pre> <p>When I check the services running in the cluster under the <code>istio-system</code> namespace I see multiple services related to tracing: </p> <pre><code>jaeger-agent ClusterIP None &lt;none&gt; 5775/UDP,6831/UDP,6832/UDP jaeger-collector ClusterIP 10.100.66.107 &lt;none&gt; 14267/TCP,14268/TCP tracing ClusterIP 10.100.81.123 &lt;none&gt; 80/TCP zipkin ClusterIP 10.100.64.9 &lt;none&gt; 9411/TCP </code></pre> <p>Since Jaeger is the default setting, I was expecting to see only the <code>jaeger-collector</code>. It is not clear what the roles of <code>jaeger-agent</code>, <code>tracing</code> and <code>zipkin</code> are. Any ideas?</p>
mithrandir
<p>Just mentioning beforehand (you might already know) that a Kubernetes Service is not a "service" as in a piece of code. It is a way for Kubernetes components &amp; deployments to communicate with one another through an interface which always stays the same, regardless of how many pods or servers there are. </p> <p>When Istio deploys its tracing mechanism, it deploys modular parts so it can deploy them independently, and also scale them independently, very much like micro-services. </p> <p>Generally a Kubernetes-deployed utility will be deployed as a few parts which make up the bigger picture. For instance in your case: </p> <p>jaeger-agent - This is the component which collects all the traffic and tracing from your nodes.</p> <p>jaeger-collector - This is the place where all of the jaeger-agents push the logs and traces they find on the node, and the collector aggregates these, as a trace may span multiple nodes. </p> <p>tracing - might be the component which injects the tracing IDs into network traffic for the agent to watch.</p> <p>zipkin - could be the UI which allows debugging with traces, or replaying requests etc. </p> <p>The above might not be absolutely correct, but I hope you get the idea of why multiple parts would be deployed. </p> <p>In the same way we deploy MySQL and our containers separately, Kubernetes projects are generally deployed as a set of deployments or pods. </p>
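<p>If you want to confirm for yourself what each of these Services points at, you can check their selectors and open the tracing UI with a port-forward (standard kubectl commands; only the <code>istio-system</code> namespace from the question is assumed):</p> <pre><code># See which pods back each service
$ kubectl -n istio-system describe service tracing
$ kubectl -n istio-system describe service zipkin

# Open the tracing UI locally; the 'tracing' service exposes it on port 80
$ kubectl -n istio-system port-forward svc/tracing 8080:80
# then browse to http://localhost:8080
</code></pre>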
Christiaan Vermeulen
<p>Do you know of any gotchas or requirements that would not allow using a single ES/Kibana as a target for fluentd in multiple Kubernetes clusters?</p> <p>We are engineering the rollout of a new Kubernetes model. I have a requirement to run multiple Kubernetes clusters, let's say 4-6. Even though the workload is split across multiple clusters, I do not have a requirement to split the logging, and I believe it would be easier to find the logs for pods in all clusters in a centralized location. It also means less maintenance for Kibana/Elasticsearch.</p> <p>Using EFK for Kubernetes, can I point Fluentd from multiple clusters at a single Elasticsearch/Kibana? I don't think I'm the first one with this thought; however, I haven't been able to find any discussion of doing this. I found lots of discussions of setting up EFK, but all that I have found only discuss a single cluster with its own Elasticsearch/Kibana.</p> <p>Has anyone else gone down the path of using a single ES/Kibana to serve logs from multiple Kubernetes clusters? We'll plunge ahead with testing it out, but I'm seeing if anyone else has already gone down this road.</p>
Chad Ernst
<p>I don't think you need to create an Elasticsearch instance for each Kubernetes cluster; you can run one main Elasticsearch instance and index all the logs into it.</p> <p>But even if you don't run an Elasticsearch instance per cluster, you should have a DRP (disaster recovery plan). For example, instead of shipping the pod logs to Elasticsearch directly, you could move them to Kafka first and then fan them out to two Elasticsearch clusters.</p> <p>It also depends heavily on the use case: if the Kubernetes clusters are in different regions and you need the pod logs at low latency (&lt;1s), then a single Elasticsearch instance may not be the right answer.</p>
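<p>One practical detail if you do send several clusters into a single Elasticsearch: make sure each record carries a field identifying its source cluster so you can filter by it in Kibana. A minimal fluentd filter sketch (the field name <code>cluster_name</code> and its value are placeholders you would set per cluster):</p> <pre><code>&lt;filter kubernetes.**&gt;
  @type record_transformer
  &lt;record&gt;
    cluster_name my-cluster-1
  &lt;/record&gt;
&lt;/filter&gt;
</code></pre>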
ShemTov
<p>beginner here. I am currently trying to configure Ingress to do two things - if the fibonacci route exists, redirect to the function and pass the parameter, if the route doesn't exist, redirect to another website and attach the input there.</p> <p>So, for example, there are two basic scenarios.</p> <ol> <li><a href="https://xxx.amazonaws.com/fibonacci/10" rel="nofollow noreferrer">https://xxx.amazonaws.com/fibonacci/10</a> -&gt; calls fibonacci function with parameter 10 (that works)</li> <li><a href="https://xxx.amazonaws.com/users/jozef" rel="nofollow noreferrer">https://xxx.amazonaws.com/users/jozef</a> -&gt; calls redirect function which redirects to <a href="https://api.github.com/users/jozef" rel="nofollow noreferrer">https://api.github.com/users/jozef</a></li> </ol> <p>I think the service doing the redirect is written correctly, it looks like this.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: api-gateway-redirect-service spec: type: ExternalName externalName: api.github.com ports: - protocol: TCP targetPort: 443 port: 80 # Default port for image </code></pre> <p>This is how my Ingress looks like. Experimented with default-backend annotation as well as various placement of the default backend, nothing worked. When I try to curl <a href="https://xxx.amazonaws.com/users/jozef" rel="nofollow noreferrer">https://xxx.amazonaws.com/users/jozef</a>, I keep getting 301 message but the location is unchanged. The final output looks like this</p> <pre><code>HTTP/1.1 301 Moved Permanently Server: openresty/1.15.8.2 Date: Wed, 13 Nov 2019 15:52:14 GMT Content-Length: 0 Connection: keep-alive Location: https://xxx.amazonaws.com/users/jozef * Connection #0 to host xxx.amazonaws.com left intact * Maximum (50) redirects followed curl: (47) Maximum (50) redirects followed </code></pre> <p>Does someone have an idea what am I doing wrong? This is my Ingress. Also, if it helps, we use Kubernetes version 1.14.6. Thanks a million</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-nginx annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /fibonacci/(.*) backend: serviceName: fibonacci-k8s-service servicePort: 80 - path: /(.*) backend: serviceName: api-gateway-redirect-service servicePort: 80 </code></pre>
Jozef
<p>The resolution to the problem was the addition of the <code>'Host: hostname'</code> header in the curl command. </p> <p>The service that was handling the request needed the <code>Host: hostname</code> header to properly reply to this request. After the <code>Host</code> header was provided the response was correct. </p> <p>Links: </p> <p><a href="https://curl.haxx.se/docs/" rel="nofollow noreferrer">Curl docs</a></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress docs</a></p>
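<p>A minimal sketch of the resulting request (here <code>api.github.com</code> matches the <code>externalName</code> from the question; use whatever host your backing service expects):</p> <pre><code>curl -H 'Host: api.github.com' https://xxx.amazonaws.com/users/jozef
</code></pre>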
Dawid Kruk
<p>I installed the velero client v1.1.0 from git.</p> <p>I installed the velero service with the following command:</p> <pre><code>velero install --provider aws --bucket velero --secret-file credentials-velero \ --use-volume-snapshots=false --use-restic --backup-location-config \ region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://&lt;ip:node-port&gt; </code></pre> <p>And I am getting the following error:</p> <pre><code>An error occurred: some backup storage locations are invalid: backup store for location "default" is invalid: rpc error: code = Unknown desc = AccessDenied: Access Denied </code></pre> <p>I want to deploy it on k8s.</p>
Priyanka
<p>This issue was because my AWS access key and secret key were invalid. Once I provided valid credentials, it started working fine.</p>
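<p>For reference, the file passed via <code>--secret-file</code> is expected to be in the standard AWS credentials format. A minimal sketch (the values are placeholders; for the MinIO setup from the question they are the MinIO access/secret keys rather than real AWS IAM keys):</p> <pre><code>[default]
aws_access_key_id = &lt;your-access-key&gt;
aws_secret_access_key = &lt;your-secret-key&gt;
</code></pre>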
Priyanka
<p>I have simple Spring Boot App and Kafka with working SSL connection (other apps, not Spring Boot, have successful connection). I haven't access to kafka brokers properties. My app is a client for kafka. And this app running in container inside kubernetes. My spring boot have access to keystore.p12, ca-cert, kafka.pem, kafka.key files (it's in directory inside container).</p> <p>In configuration I use</p> <pre><code>spring.kafka.security.protocol=SSL spring.kafka.ssl.protocol=SSL spring.kafka.ssl.key-store-type=PKCS12 spring.kafka.ssl.key-store-location=file:///path/to/keystore.p12 spring.kafka.ssl.key-store-password=password spring.kafka.ssl.trust-store-type=PKCS12 spring.kafka.ssl.trust-store-location=file:///path/to/keystore.p12 (it's the same file, and I think it's incorrect) spring.kafka.ssl.trust-store-password=password spring.kafka.properties.ssl.endpoint.identification.algorithm= spring.kafka.enable.ssl.certificate.verification=false </code></pre> <p>Everytime I receive ERROR</p> <pre><code>org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?] at sun.security.ssl.TransportContext.fatal(TransportContext.java:349) ~[?:?] at sun.security.ssl.TransportContext.fatal(TransportContext.java:292) ~[?:?] at sun.security.ssl.TransportContext.fatal(TransportContext.java:287) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:?] at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392) ~[?:?] at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061) ~[?:?] at java.security.AccessController.doPrivileged(Native Method) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008) ~[?:?] at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.poll(Selector.java:481) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551) [kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389) [kafka-clients-3.0.0.jar!/:?] 
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320) [kafka-clients-3.0.0.jar!/:?] at java.lang.Thread.run(Thread.java:829) [?:?] Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:439) ~[?:?] at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:306) ~[?:?] at sun.security.validator.Validator.validate(Validator.java:264) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:313) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:276) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:?] at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392) ~[?:?] at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061) ~[?:?] at java.security.AccessController.doPrivileged(Native Method) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008) ~[?:?] at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.poll(Selector.java:481) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551) [kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320) ~[kafka-clients-3.0.0.jar!/:?] at java.lang.Thread.run(Thread.java:829) ~[?:?] Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141) ~[?:?] at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126) ~[?:?] at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) ~[?:?] at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:434) ~[?:?] 
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:306) ~[?:?] at sun.security.validator.Validator.validate(Validator.java:264) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:313) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:276) ~[?:?] at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:?] at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:?] at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392) ~[?:?] at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061) ~[?:?] at java.security.AccessController.doPrivileged(Native Method) ~[?:?] at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008) ~[?:?] at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.common.network.Selector.poll(Selector.java:481) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551) [kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389) ~[kafka-clients-3.0.0.jar!/:?] at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320) ~[kafka-clients-3.0.0.jar!/:?] at java.lang.Thread.run(Thread.java:829) ~[?:?] </code></pre> <p>I try different variations: only key store, only trust store, delete last two properties in config (endpoint.identification.algorithm and certificate.verification). Should I try to create truststore and import the certificates I have in container? I don't understand the right way for this.What is the right configuration and right way to use certificates I have?</p>
Sasha Korn
<p>The problem was the wrong property names. The right way to do it is:</p> <pre><code>spring.kafka.properties.ssl.keystore.type=PKCS12 spring.kafka.properties.ssl.keystore.location=/path/to/keystore.p12 spring.kafka.properties.ssl.keystore.password=password spring.kafka.properties.ssl.truststore.type=PKCS12 spring.kafka.properties.ssl.truststore.location=/path/to/keystore.p12 (it's the same file, it's correct!!) spring.kafka.properties.ssl.truststore.password=password </code></pre> <p>And yes, it's absolutely acceptable to use the same .p12 file as both keystore and truststore.</p>
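<p>If you only have the <code>kafka.pem</code>, <code>kafka.key</code> and <code>ca-cert</code> files mentioned in the question, you can build a single PKCS12 file that serves as both keystore and truststore. A rough sketch (file names follow the question; the alias names and the <code>password</code> value are placeholders):</p> <pre><code># Bundle the client certificate and key into a PKCS12 keystore
openssl pkcs12 -export -in kafka.pem -inkey kafka.key -certfile ca-cert \
  -name kafka-client -out keystore.p12 -password pass:password

# Import the broker CA so the same file can also act as the truststore
keytool -importcert -file ca-cert -alias kafka-ca -keystore keystore.p12 \
  -storetype PKCS12 -storepass password -noprompt
</code></pre>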
Sasha Korn
<p>I am running kubernetes inside 'Docker Desktop' on Mac OS High Sierra.</p> <p><a href="https://i.stack.imgur.com/C8raD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C8raD.png" alt="enter image description here"></a></p> <p>Is it possible to change the flags given to the kubernetes api-server with this setup?</p> <p>I can see that the api-server is running.</p> <p><a href="https://i.stack.imgur.com/RkVoK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RkVoK.png" alt="enter image description here"></a></p> <p>I am able to exec into the api-server container. When I kill the api-server so I could run it with my desired flags, the container is immediately killed.</p> <p><a href="https://i.stack.imgur.com/oSYaL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSYaL.png" alt="enter image description here"></a></p>
user674669
<p>There is no Deployment for <code>kube-apiserver</code>, since those pods are static pods: they are created and managed directly by the <code>kubelet</code>.</p> <p>The way to change <code>kube-apiserver</code>'s parameters is, as @hanx mentioned:</p> <ol> <li>ssh into the master node (not a container);</li> <li>update the manifest file under <code>/etc/kubernetes/manifests/</code>; as soon as you save the file, the changes will take effect.</li> </ol>
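<p>A sketch of what that edit looks like (the manifest layout below is the standard kubeadm static-pod format, which Docker Desktop's Kubernetes also uses; the extra flag is only an example). Note that Docker Desktop provides no SSH access to its VM, so you first need a shell inside it, for example by running a privileged container with <code>--pid=host</code> and using <code>nsenter -t 1 -m -u -n -i sh</code>:</p> <pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction
    - --some-extra-flag=value        # add or change flags here, then save the file
</code></pre>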
Alex
<p>I have a trivially small Spark application written in Java that I am trying to run in a K8s cluster using <code>spark-submit</code>. I built an image with Spark binaries, my uber-JAR file with all necessary dependencies (in <code>/opt/spark/jars/my.jar</code>), and a config file (in <code>/opt/spark/conf/some.json</code>).</p> <p>In my code, I start with</p> <pre class="lang-java prettyprint-override"><code>SparkSession session = SparkSession.builder() .appName(&quot;myapp&quot;) .config(&quot;spark.logConf&quot;, &quot;true&quot;) .getOrCreate(); Path someFilePath = FileSystems.getDefault().getPath(&quot;/opt/spark/conf/some.json&quot;); String someString = new String(Files.readAllBytes(someFilePath)); </code></pre> <p>and get this exception at <code>readAllBytes</code> from the Spark driver:</p> <pre><code>java.nio.file.NoSuchFileException: /opt/spark/conf/some.json </code></pre> <p>If I run my Docker image manually I can definitely see the file <code>/opt/spark/conf/some.json</code> as I expect. My Spark job runs as root so file permissions should not be a problem.</p> <p>I have been assuming that, since the same Docker image, with the file indeed present, will be used to start the driver (and executors, but I don't even get to that point), the file should be available to my application. Is that not so? Why wouldn't it see the file?</p>
mustaccio
<p>You seem to get this exception from one of your worker nodes, not from the container.</p> <p>Make sure that you've specified all the files needed with the <code>--files</code> option for <code>spark-submit</code>.</p> <pre><code>spark-submit --master yarn --deploy-mode cluster --files &lt;local file dependencies&gt; ... </code></pre> <p><a href="https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management</a></p>
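<p>For the Kubernetes master used in the question the same idea applies: files shipped with <code>--files</code> are distributed to the driver and executors, and can then be located through <code>SparkFiles</code> instead of a hard-coded absolute path. A rough sketch (the API server host, image name and class are placeholders):</p> <pre><code>spark-submit \
  --master k8s://https://&lt;api-server-host&gt;:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=&lt;your-image&gt; \
  --files /opt/spark/conf/some.json \
  --class com.example.MyApp local:///opt/spark/jars/my.jar
</code></pre> <pre class="lang-java prettyprint-override"><code>// Resolve the distributed copy of the file instead of assuming a fixed path
String somePath = org.apache.spark.SparkFiles.get("some.json");
String someString = new String(java.nio.file.Files.readAllBytes(
        java.nio.file.Paths.get(somePath)));
</code></pre>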
andreoss
<p>I have been struggling to get my simple 3 node Kubernetes cluster running. </p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION ubu1 Ready master 31d v1.13.4 ubu2 Ready master,node 31d v1.13.4 ubu3 Ready node 31d v1.13.4 </code></pre> <p>I tried creating a PVC, which was stuck in Pending forever. So I deleted it, but now it is stuck in Terminating status. </p> <pre><code>$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE task-pv-claim Terminating task-pv-volume 100Gi RWO manual 26d </code></pre> <p>How can I create a PV that is properly created and usable for the demos described on the official Kubernetes website? </p> <p>PS: I used <code>kubespray</code> to get this up and running.</p> <p>On my Ubuntu 16.04 VMs, this is the Docker version installed:</p> <pre><code>ubu1:~$ docker version Client: Version: 18.06.2-ce API version: 1.38 Go version: go1.10.3 Git commit: 6d37f41 Built: Sun Feb 10 03:47:56 2019 OS/Arch: linux/amd64 Experimental: false </code></pre> <p>Thanks in advance.</p>
farhany
<p><code>kubectl edit pv (pv name)</code></p> <p>Find the following in the manifest file</p> <pre class="lang-yaml prettyprint-override"><code>finalizers: - kubernetes.io/pv-protection </code></pre> <p>... and delete it.</p> <p>Then exit, and run this command to delete the pv</p> <pre class="lang-bash prettyprint-override"><code>kubectl delete pv (pv name) --grace-period=0 --force </code></pre>
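<p>An equivalent one-liner, if you prefer not to open an editor, is to patch the finalizers away (the same approach works for a stuck PVC if you replace <code>pv</code> with <code>pvc</code>):</p> <pre class="lang-bash prettyprint-override"><code>kubectl patch pv &lt;pv-name&gt; -p '{"metadata":{"finalizers":null}}'
</code></pre>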
Dragomir Ivanov
<p>I have set up Container Insights as described in the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-metrics.html" rel="nofollow noreferrer">documentation</a>.</p> <p>Is there a way to remove some of the metrics sent over to CloudWatch?</p> <p>Details:</p> <p>I have a small cluster (3 client-facing namespaces, ~8 services per namespace) with some custom monitoring, logging, etc. in their own separate namespaces, and I just want to use CloudWatch for critical client-facing metrics.</p> <p>The problem I am having is that the agent sends over 500 metrics to CloudWatch, while I am really only interested in a few of the important ones, especially as AWS bills per metric.</p> <p>Is there any way to limit which metrics get sent to CloudWatch?</p> <p>It would be especially helpful if I could send only metrics from certain namespaces, for example, excluding the kube-system namespace.</p> <p>My configmap is:</p> <pre><code> cwagentconfig.json: | { "logs": { "metrics_collected": { "kubernetes": { "cluster_name": "*****", "metrics_collection_interval": 60 } }, "force_flush_interval": 5 } } </code></pre> <p>I have searched for a while now, but couldn't really find anything on:</p> <pre><code> "metrics_collected": { "kubernetes": { </code></pre>
devops to dev
<p>I've looked as best I can and you're right, there's little or nothing to find on this topic. Before I make the obvious-but-unhelpful suggestions of either using Prometheus or asking on the AWS forums, a quick look at what the CloudWatch agent actually does.</p> <p>The Cloudwatch agent gets container metrics either from from cAdvisor, which runs as part of kubelet on each node, or from the kubernetes metrics-server API (which also gets it's metrics from kubelet and cAdvisor). cAdvisor is well documented, and it's likely that the Cloudwatch agent uses the <a href="https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md" rel="nofollow noreferrer">Prometheus format metrics cAdvisor produces</a> to construct it's own list of metrics. </p> <p>That's just a guess though unfortunately, since the Cloudwatch agent doesn't seem to be open source. That also means it <em>may</em> be possible to just set a 'measurement' option within the kubernetes section and select metrics based on Prometheus metric names, but probably that's not supported. <em>(if you do ask AWS, the Premium Support team should keep an eye on the forums, so you might get lucky and get an answer without paying for support)</em></p> <p>So, if you can't cut down metrics created by Container Insights, what are your other options? Prometheus is <a href="https://www.metricfire.com/prometheus-tutorials/how-to-deploy-prometheus-on-kubernetes?utm_source=sof&amp;utm_medium=organic&amp;utm_campaign=prometheus" rel="nofollow noreferrer">easy to deploy</a>, and you can set up recording rules to cut down on the number of metrics it actually saves. It doesn't push to Cloudwatch by default, but you can keep the metrics locally if you have some space on your node for it, or use a <a href="https://www.metricfire.com/prometheus-tutorials/prometheus-storage?utm_source=sof&amp;utm_medium=organic&amp;utm_campaign=prometheus" rel="nofollow noreferrer">remote storage</a> service like MetricFire (the company I work for, to be clear!) which provides Grafana to go along with it. You can also <a href="https://github.com/prometheus/cloudwatch_exporter" rel="nofollow noreferrer">export metrics from Cloudwatch</a> and use Prometheus as your single source of truth, but that means more storage on your cluster.</p> <p>If you prefer to view your metrics in Cloudwatch, there are tools like <a href="https://github.com/cloudposse/prometheus-to-cloudwatch" rel="nofollow noreferrer">Prometheus-to-cloudwatch</a> which actually scrape Prometheus endpoints and send data to Cloudwatch, much like (I'm guessing) the Cloudwatch Agent does. This service actually has include and exclude settings for deciding which metrics are sent to Cloudwatch.</p> <p>I've written a blog post on <a href="https://www.metricfire.com/prometheus-tutorials/aws-kubernetes?utm_source=sof&amp;utm_medium=organic&amp;utm_campaign=prometheus" rel="nofollow noreferrer">EKS Architecture and Monitoring</a> in case that's of any help to you. Good luck, and let us know which option you go for!</p>
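<p>If you do go the Prometheus route, trimming the series before they are stored or forwarded is straightforward. A minimal sketch of a scrape job that drops everything from the kube-system namespace and keeps only two container metrics (the label and metric names follow the standard cAdvisor exposition; adjust them to your needs):</p> <pre><code>scrape_configs:
  - job_name: kubernetes-cadvisor
    # ... usual kubernetes_sd_configs / tls_config settings ...
    metric_relabel_configs:
      # Drop series coming from kube-system
      - source_labels: [namespace]
        regex: kube-system
        action: drop
      # Keep only container CPU and memory working-set series
      - source_labels: [__name__]
        regex: container_cpu_usage_seconds_total|container_memory_working_set_bytes
        action: keep
</code></pre>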
Shevaun Frazier
<p>I have the cluster setup below in AKS</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hpa-example spec: replicas: 3 selector: matchLabels: app: hpa-example template: metadata: labels: app: hpa-example spec: containers: - name: hpa-example image: gcr.io/google_containers/hpa-example ports: - name: http-port containerPort: 80 resources: requests: cpu: 200m --- apiVersion: v1 kind: Service metadata: name: hpa-example spec: ports: - port: 31001 nodePort: 31001 targetPort: http-port protocol: TCP selector: app: hpa-example type: NodePort --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hpa-example-autoscaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hpa-example minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 50 </code></pre> <p>The idea of this is to check AutoScaling</p> <p>I need to have this available externally so I added</p> <pre><code>apiVersion: v1 kind: Service metadata: name: load-balancer-autoscaler spec: selector: app: hpa-example ports: - port: 31001 targetPort: 31001 type: LoadBalancer </code></pre> <p>This now gives me an external IP however, I cannot connect to it in Postman or via a browser</p> <p>What have I missed?</p> <p>I have tried to change the ports between 80 and 31001 but that makes no difference</p>
Paul
<p>As posted by user @David Maze:</p> <blockquote> <p>What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)</p> </blockquote> <p>I reproduced your scenario and found out issue in your configuration that could deny your ability to connect to this <code>Deployment</code>.</p> <p>From the perspective of <code>Deployment</code> and <code>Service</code> of type <code>NodePort</code> everything seems to work okay.</p> <p>If it comes to the <code>Service</code> of type <code>LoadBalancer</code> on the other hand:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: load-balancer-autoscaler spec: selector: app: hpa-example ports: - port: 31001 targetPort: 31001 # &lt;--- CULPRIT type: LoadBalancer </code></pre> <p>This definition will send your traffic directly to the pods on port <strong><code>31001</code></strong> and <strong>it should send it to the port <code>80</code></strong> (this is the port your app is responding on). You can change it either by:</p> <ul> <li><code>targetPort: 80</code></li> <li><code>targetPort: http-port</code></li> </ul> <blockquote> <p>You could also change the <code>Service</code> of the <code>NodePort</code> (<code>hpa-example</code>) to <code>LoadBalancer</code> as pointed by user @David Maze!</p> </blockquote> <p>After changing this definition you will be able to run:</p> <p><code>$ kubectl get service</code></p> <pre class="lang-sh prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE load-balancer-autoscaler LoadBalancer 10.4.32.146 AA.BB.CC.DD 31001:31497/TCP 9m41s </code></pre> <ul> <li><code>curl AA.BB.CC.DD:31001</code> and get the reply of <code>OK!</code></li> </ul> <hr /> <p>I encourage you to look on the additional resources regarding Kubernetes services:</p> <ul> <li><em><a href="https://learn.microsoft.com/en-us/azure/aks/concepts-network#services" rel="nofollow noreferrer">Docs.microsoft.com: AKS: Network: Services</a></em></li> <li><em><a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types">Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types</a></em></li> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Service</a></em></li> </ul>
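<p>For completeness, a corrected version of the LoadBalancer Service from the question could look like this (same names as in the question; only <code>targetPort</code> changes):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: http-port   # forward to the container's port 80
  type: LoadBalancer
</code></pre>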
Dawid Kruk
<p>I have installed k3s on a cloud VM. (k3s is very similar to k8s.)</p> <p>The k3s server starts as a master node.</p> <p>The master node's label shows that the internal-ip is 192.168.xxx.xxx, and the master node's annotations show that the public-ip is also 192.168.xxx.xxx.</p> <p>But the <strong>real public IP of the cloud VM is 49.xx.xx.xx,</strong> so an agent from another machine cannot connect to this master node, because the agent always tries to connect to the proxy "wss://192.168.xxx.xxx:6443/...".</p> <p><strong>If I run ifconfig on the cloud VM, the public IP (49.xx.xx.xx) does not show up, so k3s does not find the right internal-ip or public-ip.</strong></p> <p>I tried to start k3s with --bind-address=49.xx.xx.xx, but it fails to start. I guess no NIC is bound to this IP address.</p> <p>How can I resolve this problem? Should I try to create a virtual network interface with the address 49.xx.xx.xx?</p>
alen
<p>The best option to connect Kubernetes master and nodes is using private network.</p> <h2>How to setup K3S master and single node cluster:</h2> <h3>Prerequisites:</h3> <ul> <li>All the machines need to be inside the same private network. For example 192.168.0.0/24 </li> <li>All the machines need to communicate with each other. You can ping them with: <code>$ ping IP_ADDRESS</code></li> </ul> <p>In this example there are 2 virtual machines:</p> <ul> <li>Master node (k3s) with private ip of 10.156.0.13</li> <li>Worker node (k3s-2) with private ip of 10.156.0.8 </li> </ul> <p><a href="https://i.stack.imgur.com/W9uiU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W9uiU.png" alt="enter image description here"></a></p> <h3>Establish connection between VM's</h3> <p>The most important thing is to check if the machines can connect with each other. As I said, the best way would be just to ping them. </p> <h3>Provision master node</h3> <p>To install K3S on master node you need to invoke command from root user:</p> <p><code>$ curl -sfL https://get.k3s.io | sh -</code></p> <p>The output of this command should be like this:</p> <pre><code>[INFO] Finding latest release [INFO] Using v0.10.2 as release [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s [INFO] Verifying binary download [INFO] Installing k3s to /usr/local/bin/k3s [INFO] Creating /usr/local/bin/kubectl symlink to k3s [INFO] Creating /usr/local/bin/crictl symlink to k3s [INFO] Creating /usr/local/bin/ctr symlink to k3s [INFO] Creating killall script /usr/local/bin/k3s-killall.sh [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env [INFO] systemd: Creating service file /etc/systemd/system/k3s.service [INFO] systemd: Enabling k3s unit Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service. [INFO] systemd: Starting k3s </code></pre> <p>Check if master node is working: </p> <p><code>$ kubectl get nodes</code></p> <p>Output of above command should be like this: </p> <pre><code>NAME STATUS ROLES AGE VERSION k3s Ready master 2m14s v1.16.2-k3s.1 </code></pre> <p>Retrieve the <strong>IMPORTANT_TOKEN</strong> from master node with command:</p> <p><code>$ cat /var/lib/rancher/k3s/server/node-token</code></p> <p>This token will be used to connect agent node to master node. <strong>Copy it</strong></p> <h3>Connect agent node to master node</h3> <p>Ensure that node can communicate with master. After that you can invoke command from root user: </p> <p><code>$ curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_NODE_IP:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -</code></p> <p><strong>Paste your IMPORTANT_TOKEN into this command.</strong></p> <p>In this case the MASTER_NODE_IP is the 10.156.0.13. 
</p> <p>Output of this command should look like this: </p> <pre><code>[INFO] Finding latest release [INFO] Using v0.10.2 as release [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s [INFO] Verifying binary download [INFO] Installing k3s to /usr/local/bin/k3s [INFO] Creating /usr/local/bin/kubectl symlink to k3s [INFO] Creating /usr/local/bin/crictl symlink to k3s [INFO] Creating /usr/local/bin/ctr symlink to k3s [INFO] Creating killall script /usr/local/bin/k3s-killall.sh [INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh [INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env [INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service [INFO] systemd: Enabling k3s-agent unit Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service. [INFO] systemd: Starting k3s-agent </code></pre> <h3>Test</h3> <p>Invoke command on master node to check if agent connected successfully: </p> <p><code>$ kubectl get nodes</code> </p> <p>Node which you added earlier should be visible here: </p> <pre><code>NAME STATUS ROLES AGE VERSION k3s Ready master 15m v1.16.2-k3s.1 k3s-2 Ready &lt;none&gt; 3m19s v1.16.2-k3s.1 </code></pre> <p>Above output concludes that the provisioning has happened correctly. </p> <p>EDIT1: From this point you can deploy pods and expose them into public IP space. </p> <h2>EDIT2:</h2> <p>You can connect the K3S master and worker nodes on public IP network but there are some prerequisites. </p> <h3>Prerequsities:</h3> <ul> <li>Master node need to have port 6443/TCP open</li> <li>Ensure that master node has reserved static IP address </li> <li>Ensure that firewall rules are configured to allow access only by IP address of worker nodes (static ip addresses for nodes can help with that) </li> </ul> <h3>Provisioning of master node</h3> <p>The deployment of master node is the same as above. The only difference is that you need to get his public ip address. </p> <p>Your master node does not need to show your public IP in commands like:</p> <ul> <li><code>$ ip a</code> </li> <li><code>$ ifconfig</code></li> </ul> <h3>Provisioning worker nodes</h3> <p>The deployment of worker nodes is different only in manner of changing IP address of master node from private one to public one. Invoke this command from root account:<br> <code>curl -sfL https://get.k3s.io | K3S_URL=https://PUBLIC_IP_OF_MASTER_NODE:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -</code></p> <h3>Testing the cluster</h3> <p>To ensure that nodes are connected properly you need to invoke command:</p> <p><code>$ kubectl get nodes</code></p> <p>The output should be something like this: </p> <pre><code>NAME STATUS ROLES AGE VERSION k3s-4 Ready &lt;none&gt; 68m v1.16.2-k3s.1 k3s-1 Ready master 69m v1.16.2-k3s.1 k3s-3 Ready &lt;none&gt; 69m v1.16.2-k3s.1 k3s-2 Ready &lt;none&gt; 68m v1.16.2-k3s.1 </code></pre> <p>All of the nodes should be visible here. </p>
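<p>As a quick test of the setup described above (EDIT1/EDIT2), you could deploy something simple on the cluster and expose it with a <code>NodePort</code> service. This is only a sketch; the image, names and <code>nodePort</code> value are examples, and the chosen port has to be allowed by the cloud firewall:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must be within the default 30000-32767 NodePort range
</code></pre> <p>After <code>$ kubectl apply -f nginx-test.yaml</code>, the default nginx page should be reachable on <code>http://PUBLIC_IP_OF_A_NODE:30080</code> (provided that port is open in the firewall).</p>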
Dawid Kruk
<p>I want to know if there is a ready-to-use Grafana dashboard, or how I can create one, with the specifications below:</p> <p>I want a dashboard that shows each pod as a cube, circle, or any other shape. If the pod is using around 80% of its resource limit (CPU/memory), the color of that shape changes from green to red.</p> <p>I have to mention that I already have Prometheus + Grafana in place and am using them; I just need to know how to create such a dashboard.</p>
AVarf
<p>Grafana includes a panel type called <a href="https://grafana.com/docs/features/panels/singlestat/" rel="nofollow noreferrer">Single Stat Panel</a> which should do what you need. It can be set to change background or text colour based on thresholds you determine, so if you have an output metric which is a percentage, you can specify what percentage to change at. It does 2 stage changes, so you can use traffic lights to indicate a warning before a metric gets to emergency levels.</p> <p>If you have multiple similar metrics you want to create panels for, you can use Grafana's <a href="https://grafana.com/docs/reference/templating/" rel="nofollow noreferrer">templating variables</a> to get a list of unique pod identifiers (depending on what labels are available to you), and then use the <a href="https://grafana.com/docs/reference/templating/#repeating-panels" rel="nofollow noreferrer">repeat panel</a> option to automatically create one panel per pod. WARNING: if you have a huge number of pods, this option could stall or crash your browser! If you think this will be a problem then I recommend making one dashboard with all the metrics for a single pod, and using a variable to switch between them.</p>
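<p>As a hedged example of the templating part (assuming the usual kube-state-metrics/cAdvisor metrics are scraped by Prometheus; adjust the metric and label names to what your setup actually exposes): a dashboard variable with the query <code>label_values(kube_pod_info, pod)</code> would list the pod names, and the panel query could divide the pod's CPU usage rate, e.g. <code>rate(container_cpu_usage_seconds_total{pod=~&quot;$pod&quot;}[5m])</code>, by its limit from <code>kube_pod_container_resource_limits</code>, so that the 80% mark corresponds to a threshold value of 0.8.</p>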
Shevaun Frazier
<p>I'm trying to get image files from Google Cloud Storage (GCS) in my Node.js application using the Axios client. In development mode on my PC I pass a Bearer token and everything works properly.</p> <p>But I need to use this in production, in a cluster hosted on Google Kubernetes Engine (GKE).</p> <p>I followed the recommended tutorials to create a Google service account (GSA), then linked it with a Kubernetes service account (KSA) via the Workload Identity approach, but when I try to get files through one endpoint of my app, I'm receiving:</p> <pre><code>{&quot;statusCode&quot;:401,&quot;message&quot;:&quot;Unauthorized&quot;} </code></pre> <p>What am I missing to make this work?</p> <hr /> <h2>Update: What I've done:</h2> <ol> <li>Create Google Service Account</li> </ol> <p><a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts" rel="noreferrer">https://cloud.google.com/iam/docs/creating-managing-service-accounts</a></p> <ol start="2"> <li>Create Kubernetes Service Account</li> </ol> <pre><code># gke-access-gcs.ksa.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: gke-access-gcs </code></pre> <pre><code>kubectl apply -f gke-access-gcs.ksa.yaml </code></pre> <ol start="3"> <li>Relate KSAs and GSAs</li> </ol> <pre><code>gcloud iam service-accounts add-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ --member &quot;serviceAccount:cluster_project.svc.id.goog[k8s_namespace/ksa_name]&quot; \ gsa_name@gsa_project.iam.gserviceaccount.com </code></pre> <ol start="4"> <li>Note the KSA and complete the link between KSA and GSA</li> </ol> <pre><code>kubectl annotate serviceaccount \ --namespace k8s_namespace \ ksa_name \ iam.gke.io/gcp-service-account=gsa_name@gsa_project.iam.gserviceaccount.com </code></pre> <ol start="5"> <li>Set Read and Write role:</li> </ol> <pre><code>gcloud projects add-iam-policy-binding project-id \ --member=serviceAccount:[email protected] \ --role=roles/storage.objectAdmin </code></pre> <ol start="6"> <li>Test access:</li> </ol> <pre><code>kubectl run -it \ --image google/cloud-sdk:slim \ --serviceaccount ksa-name \ --namespace k8s-namespace \ workload-identity-test </code></pre> <p>The above command works correctly. Note that <code>--serviceaccount</code> and <code>workload-identity</code> were passed. Is this necessary for GKE?</p> <p>PS: I don't know if this has any influence, but I am using Cloud SQL with the proxy in the project.</p>
btd1337
<h3>EDIT</h3> <p>Issue portrayed in the question is related to the fact that axios client <strong>does not use</strong> the Application Default Credentials (as official Google libraries) mechanism that <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="noreferrer">Workload Identity</a> takes advantage of. The ADC checks:</p> <blockquote> <ul> <li>If the environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> is set, ADC uses the service account file that the variable points to.</li> <li>If the environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions provide.</li> </ul> <p>-- <em><a href="https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-nodejs" rel="noreferrer">Cloud.google.com: Authentication: Production </a></em></p> </blockquote> <p><strong>This means that axios client will need to fall back to the <code>Bearer token</code> authentication method to authenticate against Google Cloud Storage.</strong></p> <p>The authentication with <code>Bearer token</code> is described in the official documentation as following:</p> <blockquote> <h3>API authentication</h3> <p>To make requests using OAuth 2.0 to either the Cloud Storage <a href="https://cloud.google.com/storage/docs/xml-api/overview" rel="noreferrer">XML API</a> or <a href="https://cloud.google.com/storage/docs/json_api/v1" rel="noreferrer">JSON API</a>, include your application's access token in the <code>Authorization</code> header in every request that requires authentication. You can generate an access token from the <a href="https://developers.google.com/oauthplayground/" rel="noreferrer">OAuth 2.0 Playground</a>.</p> <pre><code>Authorization: Bearer OAUTH2_TOKEN </code></pre> <p>The following is an example of a request that lists objects in a bucket.</p> <blockquote> <p><a href="https://cloud.google.com/storage/docs/authentication#json-api" rel="noreferrer">JSON API</a></p> <p>Use the <a href="https://cloud.google.com/storage/docs/json_api/v1/objects/list" rel="noreferrer">list</a> method of the Objects resource.</p> <pre><code>GET /storage/v1/b/example-bucket/o HTTP/1.1 Host: www.googleapis.com Authorization: Bearer ya29.AHES6ZRVmB7fkLtd1XTmq6mo0S1wqZZi3-Lh_s-6Uw7p8vtgSwg </code></pre> </blockquote> <p>-- <a href="https://cloud.google.com/storage/docs/authentication#apiauth" rel="noreferrer">Cloud.google.com: Storage: Docs: Api authentication</a></p> </blockquote> <hr /> <p>I've included <strong>basic example</strong> of a code snippet using Axios to query the Cloud Storage (requires <code>$ npm install axios</code>):</p> <pre class="lang-js prettyprint-override"><code>const Axios = require('axios'); const config = { headers: { Authorization: 'Bearer ${OAUTH2_TOKEN}' } }; Axios.get( 'https://storage.googleapis.com/storage/v1/b/BUCKET-NAME/o/', config ).then( (response) =&gt; { console.log(response.data.items); }, (err) =&gt; { console.log('Oh no. Something went wrong :('); // console.log(err) &lt;-- Get the full output! 
} ); </code></pre> <p>I left below example of Workload Identity setup with a node.js official library code snippet as it could be useful to other community members.</p> <hr /> <p>Posting this answer as I've managed to use <code>Workload Identity</code> and a simple <code>nodejs</code> app to send and retrieve data from <code>GCP bucket</code>.</p> <p>I included some bullet points for troubleshooting potential issues.</p> <hr /> <h2>Steps:</h2> <ul> <li>Check if <code>GKE</code> cluster has <code>Workload Identity</code> enabled.</li> <li>Check if your <code>Kubernetes service account</code> is associated with your <code>Google Service account</code>.</li> <li>Check if example workload is using correct <code>Google Service account</code> when connecting to the API's.</li> <li>Check if your <code>Google Service account</code> is having correct permissions to access your <code>bucket</code>.</li> </ul> <p>You can also follow the official documentation:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="noreferrer">Cloud.google.com: Kubernetes Engine: Workload Identity</a></em></li> </ul> <hr /> <p>Assuming that:</p> <ul> <li>Project (ID) named: <code>awesome-project</code> &lt;- <strong>it's only example</strong></li> <li>Kubernetes namespace named: <code>bucket-namespace</code></li> <li>Kubernetes service account named: <code>bucket-service-account</code></li> <li>Google service account named: <code>google-bucket-service-account</code></li> <li>Cloud storage bucket named: <code>workload-bucket-example</code> &lt;- <strong>it's only example</strong></li> </ul> <p>I've included the commands:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create namespace bucket-namespace $ kubectl create serviceaccount --namespace bucket-namespace bucket-service-account $ gcloud iam service-accounts create google-bucket-service-account $ gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member &quot;serviceAccount:awesome-project.svc.id.goog[bucket-namespace/bucket-service-account]&quot; google-bucket-service-account@awesome-project.iam.gserviceaccount.com $ kubectl annotate serviceaccount --namespace bucket-namespace bucket-service-account iam.gke.io/gcp-service-account=google-bucket-service-account@awesome-project-ID.iam.gserviceaccount.com </code></pre> <p>Using the guide linked above check the service account authenticating to API's:</p> <ul> <li><code>$ kubectl run -it --image google/cloud-sdk:slim --serviceaccount bucket-service-account --namespace bucket-namespace workload-identity-test</code></li> </ul> <p>The output of <code>$ gcloud auth list</code> should show:</p> <pre class="lang-sh prettyprint-override"><code> Credentialed Accounts ACTIVE ACCOUNT * google-bucket-service-account@AWESOME-PROJECT.iam.gserviceaccount.com To set the active account, run: $ gcloud config set account `ACCOUNT` </code></pre> <blockquote> <p>Google service account created earlier should be present in the output!</p> </blockquote> <p>Also it's required to add the permissions for the service account to the bucket. 
You can either:</p> <ul> <li>Use <code>Cloud Console</code></li> <li>Run: <code>$ gsutil iam ch serviceAccount:google-bucket-service-account@awesome-project.iam.gserviceaccount.com:roles/storage.admin gs://workload-bucket-example</code></li> </ul> <p>To download the file from the <code>workload-bucket-example</code> following code can be used:</p> <pre class="lang-js prettyprint-override"><code>// Copyright 2020 Google LLC /** * This application demonstrates how to perform basic operations on files with * the Google Cloud Storage API. * * For more information, see the README.md under /storage and the documentation * at https://cloud.google.com/storage/docs. */ const path = require('path'); const cwd = path.join(__dirname, '..'); function main( bucketName = 'workload-bucket-example', srcFilename = 'hello.txt', destFilename = path.join(cwd, 'hello.txt') ) { const {Storage} = require('@google-cloud/storage'); // Creates a client const storage = new Storage(); async function downloadFile() { const options = { // The path to which the file should be downloaded, e.g. &quot;./file.txt&quot; destination: destFilename, }; // Downloads the file await storage.bucket(bucketName).file(srcFilename).download(options); console.log( `gs://${bucketName}/${srcFilename} downloaded to ${destFilename}.` ); } downloadFile().catch(console.error); // [END storage_download_file] } main(...process.argv.slice(2)); </code></pre> <p>The code is exact copy from:</p> <ul> <li><em><a href="https://googleapis.dev/nodejs/storage/latest/" rel="noreferrer">Googleapis.dev: NodeJS: Storage</a></em></li> <li><em><a href="https://github.com/googleapis/nodejs-storage/blob/master/samples/downloadFile.js" rel="noreferrer">Github.com: Googleapis: Nodejs-storage: downloadFile.js</a></em></li> </ul> <p>Running this code should produce an output:</p> <pre class="lang-sh prettyprint-override"><code>root@ubuntu:/# nodejs app.js gs://workload-bucket-example/hello.txt downloaded to /hello.txt. </code></pre> <pre class="lang-sh prettyprint-override"><code>root@ubuntu:/# cat hello.txt Hello there! </code></pre>
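<p>A side note regarding the axios approach from the first part of this answer: with Workload Identity configured as above, the short-lived access token for the bound Google service account can also be fetched from the GKE metadata server inside the pod (this is what the ADC-aware libraries do under the hood). A sketch of the request:</p> <ul> <li><code>$ curl -s -H &quot;Metadata-Flavor: Google&quot; &quot;http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token&quot;</code></li> </ul> <p>The JSON response contains an <code>access_token</code> field that can be used as the <code>Bearer</code> token in the <code>Authorization</code> header. Keep in mind that it expires (typically after about an hour) and has to be refreshed.</p>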
Dawid Kruk
<p>I have a minikube cluster with two pods (with Ubuntu containers). What I need to do is route test traffic from one port to another through this minikube cluster. This traffic should be sent through these two pods, as in the picture. I am a beginner with Kubernetes, so I really don't know how to do this or which way to go... Please help me or give me some hints.</p> <p>I am working on Ubuntu Server ver. <strong>18.04.</strong></p> <p><a href="https://i.stack.imgur.com/AwRfs.png" rel="nofollow noreferrer">Diagram of the desired traffic flow</a></p>
Skyeee
<p>I agree with an answer provided by @Harsh Manvar and I would also like to expand a little bit on this topic.</p> <p>There already is an answer with a similar setup. I encourage you to check it out:</p> <ul> <li><em><a href="https://stackoverflow.com/questions/64350158/how-to-access-a-service-from-other-machine-in-lan/64422184#64422184">Stackoverflow.com: Questions: How to access a service from other machine in LAN</a></em></li> </ul> <p>There are different <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">drivers</a> that could be used to run your <code>minikube</code>. <strong>They will have differences when it comes to dealing with inbound traffic.</strong> I missed the part that was telling about the driver used in the setup (comment). If it's the <code>Docker</code> shown in the tags, you could follow below example.</p> <hr /> <h1>Example</h1> <p>Steps:</p> <ul> <li>Spawn <code>nginx-one</code> and <code>nginx-two</code> <code>Deployments</code> to imitate <code>Pods</code> from the image</li> <li>Create a service that will be used to send traffic from <code>nginx-one</code> to <code>nginx-two</code></li> <li>Create a service that will allow you to connect to <code>nginx-one</code> from LAN</li> <li>Test the setup</li> </ul> <h3>Spawn <code>nginx-one</code> and <code>nginx-two</code> <code>Deployments</code> to imitate <code>Pods</code> from the image</h3> <p>You can use following definitions to spawn two <code>Deployments</code> where each one will have a single <code>Pod</code>:</p> <ul> <li><code>nginx-one.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-one spec: selector: matchLabels: app: nginx-one replicas: 1 template: metadata: labels: app: nginx-one spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <ul> <li><code>nginx-two.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-two spec: selector: matchLabels: app: nginx-two replicas: 1 template: metadata: labels: app: nginx-two spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <h3>Create a service that will be used to send traffic from <code>nginx-one</code> to <code>nginx-two</code></h3> <p>You will need to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> to send the traffic from <code>nginx-one</code> to <code>nginx-two</code>. Example of such <code>Service</code> could be following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nginx-two-service spec: type: ClusterIP # could be changed to NodePort selector: app: nginx-two # IMPORTANT ports: - name: http protocol: TCP port: 80 targetPort: 80 </code></pre> <p>After applying this definition you will be able to send the traffic to <code>nginx-two</code> by using the service name (<code>nginx-two-service</code>)</p> <blockquote> <p>A side note!</p> <p>You can use the IP of the <code>Pod</code> without the <code>Service</code> but this is not a recommended way.</p> </blockquote> <h3>Create a service that will allow you to connect to <code>nginx-one</code> from LAN</h3> <p>Assuming that you want to expose your <code>minikube</code> instance to LAN with <code>Docker</code> driver you will need to create a service and expose it. 
Example of such setup could be the following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nginx-one-service spec: type: ClusterIP # could be changed to NodePort selector: app: nginx-one # IMPORTANT ports: - name: http protocol: TCP port: 80 targetPort: 80 </code></pre> <p>You will also need to run:</p> <ul> <li><code>$ kubectl port-forward --address 0.0.0.0 service/nginx-one-service 8000:80</code></li> </ul> <p>Above command (ran on your <code>minikube</code> host!) will expose your <code>nginx-one-service</code> to be available on LAN. It will map port 8000 on the machine that ran this command to the port 80 of this service. You can check it by executing from another machine at LAN:</p> <ul> <li><code>curl IP_ADDRESS_OF_MINIKUBE_HOST:8000</code></li> </ul> <blockquote> <p>A side note!</p> <p>You will need root access to have your inbound traffic enter on ports lesser than 1024.</p> </blockquote> <h3>Test the setup</h3> <p>You will need to check if there is a communication between the objects as shown in below &quot;connection diagram&quot;.</p> <p><code>PC</code> -&gt; <code>nginx-one</code> -&gt; <code>nginx-two</code> -&gt; <code>example.com</code></p> <p>The testing methodology could be following:</p> <p><code>PC</code> -&gt; <code>nginx-one</code>:</p> <ul> <li>Run on a machine in your LAN: <ul> <li><code>curl MINIKUBE_IP_ADDRESS:8000</code></li> </ul> </li> </ul> <p><code>nginx-one</code> -&gt; <code>nginx-two</code>:</p> <ul> <li>Exec into your <code>nginx-one</code> <code>Pod</code> and run command: <ul> <li><code>$ kubectl exec -it NGINX_POD_ONE_NAME -- /bin/bash</code></li> <li><code>$ curl nginx-two-service</code></li> </ul> </li> </ul> <p><code>nginx-two</code> -&gt; <code>example.com</code>:</p> <ul> <li>Exec into your <code>nginx-two</code> <code>Pod</code> and run command: <ul> <li><code>$ kubectl exec -it NGINX_POD_TWO_NAME -- /bin/bash</code></li> <li><code>$ curl example.com</code></li> </ul> </li> </ul> <p>If you completed above steps you can swap <code>nginx</code> <code>Pods</code> for your own software.</p> <hr /> <p>Additional notes and resources:</p> <p>I encourage you to check <code>kubeadm</code> as it's the tool to create your own Kubernetes clusters:</p> <ul> <li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm</a></em></li> </ul> <p>As you said:</p> <blockquote> <p>I am a beginner in this Kubernetes stuff so I really don't know how to do this and which way to go... Please, help me or give me some hints.</p> </blockquote> <p>You could check following links for more resources:</p> <ul> <li><em><a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes.io</a></em></li> <li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes: Docs: Concepts: Workloads: Controllers: Deployment</a></em></li> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li> </ul>
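<p>If you also want <code>nginx-one</code> to actually forward the incoming requests to <code>nginx-two</code> (the <code>PC</code> -&gt; <code>nginx-one</code> -&gt; <code>nginx-two</code> part of the chain), a minimal sketch could be a <code>ConfigMap</code> with a proxy configuration mounted into the <code>nginx-one</code> <code>Deployment</code> at <code>/etc/nginx/conf.d/</code> (the names below are only examples):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-one-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # pass everything that hits nginx-one to the nginx-two Service
        proxy_pass http://nginx-two-service;
      }
    }
</code></pre> <p>Add a matching <code>volumes</code>/<code>volumeMounts</code> pair to the <code>nginx-one</code> <code>Deployment</code> pointing at <code>/etc/nginx/conf.d</code>, and once the whole chain responds you can swap the <code>nginx</code> images for your own software.</p>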
Dawid Kruk
<p>I am new to Kubernetes and trying to create an AWS CodePipeline to deploy a service to an EKS stack.</p> <p>I am following <a href="https://eksworkshop.com/intermediate/220_codepipeline/" rel="nofollow noreferrer">this</a> tutorial and have followed all the steps, including creating a role and adding permissions, so that <strong>CodeBuild</strong> is able to talk to EKS.</p> <p>The issue I am facing right now is that when CodePipeline runs, it fails on the command below in the <strong>CodeBuild</strong> phase:</p> <p><code>kubectl apply -f hello-k8s.yml</code></p> <p>and gives this error:</p> <pre><code>[Container] 2019/12/04 07:41:43 Running command kubectl apply -f hello-k8s.yml unable to recognize &quot;hello-k8s.yml&quot;: Unauthorized unable to recognize &quot;hello-k8s.yml&quot;: Unauthorized </code></pre> <p>I am not quite sure whether it's a credentials issue, because I have followed all the steps to add the user/role as per the tutorial.</p> <p>Can anyone please help me with this?</p>
Pratik
<p>Deploying Yaml manifests to Kubernetes from CodeBuild requires these steps:</p> <p>The high-level process includes the following steps:</p> <ol> <li><p>Create an IAM Service role for CodeBuild</p></li> <li><p>Map the CodeBuild Service role in EKS using “aws-auth” ConfigMap</p></li> <li><p>Create source files in Code repository</p></li> <li><p>Create and Start a CodeBuild Project</p></li> <li><p>Confirm the required objects are created in EKS cluster</p></li> </ol> <h3>Create an IAM Service role for CodeBuild (Don't use existing service role as it includes a '/path/')</h3> <p>Run the following commands to Create a CodeBuild Service Role and attach the required policies:</p> <pre><code>TRUST = "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"codebuild.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" } ] }" $ echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' &gt; /tmp/iam-role-policy $ aws iam create-role --role-name CodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn' $ aws iam put-role-policy --role-name CodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy $ aws iam attach-role-policy --role-name CodeBuildKubectlRole --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess $ aws iam attach-role-policy --role-name CodeBuildKubectlRole --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess </code></pre> <h3>Map the CodeBuild Service role in EKS using “aws-auth” ConfigMap</h3> <p>Edit the ‘aws-auth’ ConfigMap and add the Role Mapping for the CodeBuild service role:</p> <pre><code>$ vi aws-auth.yaml apiVersion: v1 kind: ConfigMap metadata: name: aws-auth namespace: kube-system data: mapRoles: | - rolearn: arn:aws:iam::AccountId:role/devel-worker-nodes-NodeInstanceRole-14W1I3VCZQHU7 username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes - rolearn: arn:aws:iam::AccountId:role/CodeBuildKubectlRole username: build groups: - system:masters $ kubectl apply -f aws-auth.yaml </code></pre> <h3>Create source files in Code repository</h3> <p>Create a repository in Github/CodeCommit with sample files as follows:</p> <pre><code>. 
├── buildspec.yml └── deployment └── pod.yaml </code></pre> <p>A sample repository is located here: <a href="https://github.com/shariqmus/codebuild-to-eks" rel="noreferrer">https://github.com/shariqmus/codebuild-to-eks</a></p> <p>Notes:</p> <ul> <li><p>The buildspec.yml file installs kubectl and aws-iam-authenticator, and configures kubectl in the CodeBuild environment</p></li> <li><p>Update the buildspec.yml file with the correct region and cluster_name on Line 16</p></li> <li><p>Add the deployment YAML files in the “deployment” directory</p></li> </ul> <h3>Create and Start a Build Project</h3> <ol> <li><p>Open the CodeBuild console</p></li> <li><p>Click the ‘Create Build Project’ button</p></li> <li><p>Name the Project</p></li> <li><p>Use a CodeCommit repository where you have added the attached files: “buildspec.yml” and “pod.yaml”</p></li> <li><p>Use Managed Image &gt; Ubuntu &gt; Standard 1.0</p></li> <li><p>In the Role Name, select “CodeBuildKubectlRole”</p></li> <li><p>Click the ‘Create Build Project’ button</p></li> <li><p>Click the ‘Start Build’ button to start a build</p></li> </ol> <h3>Confirm the required objects are created in the EKS cluster</h3> <p>You can confirm this with a simple command, e.g.</p> <pre><code>$ kubectl get all --all-namespaces </code></pre>
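<p>Since the buildspec.yml is the key piece here, below is a hedged sketch of what such a file typically contains (the region, cluster name and paths are placeholders; the linked repository contains the exact file, which additionally installs aws-iam-authenticator):</p> <pre><code>version: 0.2
phases:
  install:
    commands:
      # install kubectl into the build container
      - curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot;
      - chmod +x kubectl
      - mv kubectl /usr/local/bin/
  pre_build:
    commands:
      # writes a kubeconfig entry that authenticates via the CodeBuildKubectlRole
      - aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster
  build:
    commands:
      - kubectl apply -f deployment/
</code></pre>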
shariqmaws
<p>I am searching for how to send a multicast UDP packet to pods in my Kubernetes cluster.</p> <p>After some investigation into this issue I realized that all my pods can see the packet only if they are running on the same node; pods living on another node can't see the routed packet.</p> <p>I have tested it on my <a href="https://cloud.google.com/" rel="nofollow noreferrer">GCP</a> account; I haven't tested it on any other Kubernetes cloud providers. I have implemented it using Java Spring Boot Integration; see my <a href="https://github.com/ashraf-revo/megana" rel="nofollow noreferrer">git repo</a>.</p> <p>I have implemented two modules:</p> <pre><code> &lt;modules&gt; &lt;module&gt;livefeed&lt;/module&gt; #read packet on the network on 4444 port &lt;module&gt;livesender&lt;/module&gt; # multicast 1 packet every 1 second &lt;/modules&gt; </code></pre> <p>I have made my deployment kind <code>DaemonSet</code> to make sure Kubernetes schedules every pod on a different node.</p> <p>I am using Spring Integration to read the routed packets as follows:</p> <pre><code>@Bean public IntegrationFlow processUniCastUdpMessage() { return IntegrationFlows .from(new MulticastReceivingChannelAdapter("224.0.0.1", 4444)) .handle(x -&gt; log.info(new String(((byte[]) x.getPayload())))) .get(); }</code></pre> <p>I hope someone can help me decide whether I should configure a VPN on GCP or something else.</p>
ashraf revo
<p>See <a href="https://stackoverflow.com/questions/48304357/multicast-traffic-to-kubernetes">this thread</a>: you need to add the following settings to your pod specification (the pod template of your DaemonSet) for multicasting to function correctly:</p> <pre><code>hostNetwork: true dnsPolicy: ClusterFirstWithHostNet </code></pre> <p>Hope this helps.</p>
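<p>For clarity, these two settings go into the pod template of your <code>DaemonSet</code>, roughly like this (a sketch showing only the relevant fields; the image name is a placeholder):</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: livefeed
spec:
  selector:
    matchLabels:
      app: livefeed
  template:
    metadata:
      labels:
        app: livefeed
    spec:
      hostNetwork: true                     # pod uses the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet    # keeps cluster DNS working together with hostNetwork
      containers:
        - name: livefeed
          image: your-registry/livefeed:latest   # placeholder image
          ports:
            - containerPort: 4444
              protocol: UDP
</code></pre>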
Parth Mehta
<p>On GKE I created a <code>statefulset</code> containing a <code>volumeClaimTemplates</code>. Then all the related <code>PersistentVolumeClaims</code>, <code>PersistentVolumes</code> and <code>Google Persistent Disks</code> are automatically created:</p> <pre><code>kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 76m qserv-data-qserv-worker-0 Bound pvc-c5e060dc-88cb-4630-8229-c4b1fcb4f64b 3Gi RWO qserv 76m qserv-data-qserv-worker-1 Bound pvc-5dfffc24-165c-4e2c-a1fa-fa11dd45616f 3Gi RWO qserv 76m qserv-data-qserv-worker-2 Bound pvc-14aa9a63-fae0-4328-aaaa-17db2dee4b79 3Gi RWO qserv 76m qserv-data-qserv-worker-3 Bound pvc-8b701396-42ab-4d15-8b68-9b03ce5a2d07 3Gi RWO qserv 76m qserv-data-qserv-worker-4 Bound pvc-7c49e7a0-fd73-467d-b677-820d899f41ee 3Gi RWO qserv 76m </code></pre> <pre><code>kubectl get pv pvc-14aa9a63-fae0-4328-aaaa-17db2dee4b79 3Gi RWO Retain Bound default/qserv-data-qserv-worker-2 qserv 77m pvc-5dfffc24-165c-4e2c-a1fa-fa11dd45616f 3Gi RWO Retain Bound default/qserv-data-qserv-worker-1 qserv 77m pvc-7c49e7a0-fd73-467d-b677-820d899f41ee 3Gi RWO Retain Bound default/qserv-data-qserv-worker-4 qserv 77m pvc-8b701396-42ab-4d15-8b68-9b03ce5a2d07 3Gi RWO Retain Bound default/qserv-data-qserv-worker-3 qserv 77m pvc-c5e060dc-88cb-4630-8229-c4b1fcb4f64b 3Gi RWO Retain Bound default/qserv-data-qserv-worker-0 qserv 77m </code></pre> <pre><code>gcloud compute disks list NAME LOCATION LOCATION_SCOPE SIZE_GB TYPE STATUS ... pvc-14aa9a63-fae0-4328-aaaa-17db2dee4b79 us-central1-c zone 3 pd-balanced READY pvc-5dfffc24-165c-4e2c-a1fa-fa11dd45616f us-central1-c zone 3 pd-balanced READY pvc-7c49e7a0-fd73-467d-b677-820d899f41ee us-central1-c zone 3 pd-balanced READY pvc-8b701396-42ab-4d15-8b68-9b03ce5a2d07 us-central1-c zone 3 pd-balanced READY pvc-c5e060dc-88cb-4630-8229-c4b1fcb4f64b us-central1-c zone 3 pd-balanced READY </code></pre> <p>Is there a simple way to extract the PVC/PV YAML files so that I can re-create all PVs/PVCs using the same Google disks? (This might be useful to move the data to a new GKE cluster in case I delete the current one, or to restore the data if somebody accidentally removes the PVCs/PVs.)</p> <pre><code>kubectl get pv,pvc -o yaml &gt; export.yaml </code></pre> <p>The above command does not work because there are too many technical fields set at runtime, which prevent <code>kubectl apply -f export.yaml</code> from working. Would you know a way to remove these fields from <code>export.yaml</code>?</p>
Fabrice Jammes
<p>As asked in the question:</p> <blockquote> <p>Is there a simple way to extract PVC/PV yaml file so that I can re-create all PVs/PVCs using the same Google Disks.</p> </blockquote> <p>Some scripting would be needed to extract a manifest that could be used without any hassles.</p> <p>I found a StackOverflow thread about similar question (how to export manifests):</p> <ul> <li><em><a href="https://stackoverflow.com/questions/61392206/kubectl-export-is-deprecated-any-alternative">Stackoverflow.com: Questions: Kubectl export is deprecated any alternative</a></em></li> </ul> <blockquote> <p>A side note!</p> <p>I also stumbled upon <a href="https://github.com/itaysk/kubectl-neat" rel="nofollow noreferrer">kubectl neat</a> (a plugin for <code>kubectl</code>) which will be referenced later in that answer.</p> <p>As correctly pointed by the author of the post, <code>kubectl neat</code> will show the message at the time of installation:</p> <blockquote> <p><strong>WARNING: You installed plugin &quot;neat&quot; from the krew-index plugin repository.</strong></p> <p><strong>These plugins are not audited for security by the Krew maintainers.</strong> <strong>Run them at your own risk.</strong></p> </blockquote> </blockquote> <p>I would consider going with some backup solution as more a viable option due to the fact of data persistence and in general data protection in case of any failure.</p> <p>From backup solution side you can look here:</p> <ul> <li><em><a href="https://portworx.com/how-to-migrate-stateful-applications-from-one-gcp-region-to-another-with-portworx-kubemotion/" rel="nofollow noreferrer">Portwortx.com: How to migrate stateful application from one gcp region to another with portwortx kubemotion</a></em></li> <li><em><a href="https://velero.io/" rel="nofollow noreferrer">Velero.io</a></em></li> </ul> <p><code>PV</code>'s in <code>GKE</code> are in fact <code>Google Persistent Disks</code>. You can create a snapshot/image of a disk as a backup measure. You can also use this feature to test how your migration behaves:</p> <ul> <li><em><a href="https://cloud.google.com/compute/docs/disks/create-snapshots" rel="nofollow noreferrer">Cloud.google.com: Compute: Docs: Disks: Create snapshots</a></em></li> <li><em><a href="https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images" rel="nofollow noreferrer">Cloud.google.com: Compute: Docs: Images: Create delete deprecate private images</a></em></li> </ul> <hr /> <p><strong>Please consider below example as a workaround.</strong></p> <p>I've managed to migrate an example <code>Statefulset</code> from one cluster to another with data stored on <code>gce-pd</code>.</p> <p>Once more, I encourage you to check this docs about using preexisting disks in <code>GKE</code> to create a <code>Statefulset</code>:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: How to: Persistent Volumes: Preexisting PD</a></em></li> </ul> <p>Assuming that you used manifest from official Kubernetes site:</p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Controllers: Statefulset</a></em></li> </ul> <p>You can migrate it to another cluster by:</p> <ul> <li><strong>Setting the <code>ReclaimPolicy</code> to <code>Retain</code> on each <code>PV</code> used</strong>. 
&lt;-- IMPORTANT</li> <li>Using <code>kubectl neat</code> to extract needed manifests</li> <li>Editing previously extracted manifests</li> <li>Deleting the existing workload on old cluster</li> <li>Creating a workload on new cluster</li> </ul> <h3>Setting the <code>ReclaimPolicy</code> to <code>Retain</code> on each <code>PV</code> used</h3> <p>You will need to check if your <code>PV</code> <code>ReclaimPolicy</code> is set to <code>Retain</code>. This would stop <code>gce-pd</code> deletion after <code>PVC</code> and <code>PV</code> are deleted from the cluster. You can do it by following Kubernetes documentation:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Administer cluster: Change PV reclaim policy</a></em></li> </ul> <p>More reference:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#dynamic_provisioning" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Persistent Volumes: Dynamic provisioning</a></em></li> </ul> <h3>Using <code>kubectl neat</code> to extract needed manifests</h3> <p>There are many ways to extract the manifest from Kubernetes API. I stumbled upon <code>kubectl neat</code> <a href="https://github.com/kubermatic/fubectl/pull/58#issuecomment-729546861" rel="nofollow noreferrer">here</a>. <a href="https://github.com/itaysk/kubectl-neat" rel="nofollow noreferrer">kubectl-neat</a> will remove some of the fields in the manifests.</p> <p>I used it in a following manner:</p> <ul> <li><code>$ kubectl get statefulset -o yaml | kubectl neat &gt; final-sts.yaml</code></li> <li><code>$ kubectl get pvc -o yaml | kubectl neat &gt; final-pvc.yaml</code></li> <li><code>$ kubectl get pv -o yaml | kubectl neat &gt; final-pv.yaml</code></li> </ul> <blockquote> <p>Disclaimer!</p> <p>This workaround will use the names of the dynamically created disks in <code>GCP</code>. If you were to create new disks (from snapshot for example) you would need to modify whole setup (use preexsiting disks guide referenced earlier).</p> </blockquote> <p>Above commands would store manifests of a <code>StatefulSet</code> used in the Kubernetes examples.</p> <h3>Editing previously extracted manifests</h3> <p>You will need to edit this manifests to be used in newly created cluster. This part could be automated:</p> <ul> <li><code>final-pv.yaml</code> - delete the <code>.claimRef</code> in <code>.spec</code></li> </ul> <h3>Deleting the existing workload on old cluster</h3> <p>You will need to release used disks so that the new cluster could use them. You will need to delete this <code>Statefulset</code> and accompanying <code>PVC</code>'s and <code>PV</code>'s. Please make sure that the <code>PV</code>'s <code>reclaimPolicy</code> is set to <code>Retain</code>.</p> <h3>Creating a workload on new cluster</h3> <p>You will need to use previously created manifest and apply them on a new cluster:</p> <ul> <li><code>$ kubectl apply -f final-pv.yaml</code></li> <li><code>$ kubectl apply -f final-pvc.yaml</code></li> <li><code>$ kubectl apply -f final-sts.yaml</code></li> </ul> <hr /> <p>As for exporting manifests you could also look (if feasible) on Kubernetes client libraries:</p> <ul> <li><em><a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Using API: Client libraries</a></em></li> </ul>
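<p>The two manual edits mentioned above can also be done in place with <code>kubectl patch</code>, for example (<code>PV_NAME</code> is a placeholder):</p> <ul> <li>Set the reclaim policy on the old cluster before deleting the workload: <code>$ kubectl patch pv PV_NAME -p '{&quot;spec&quot;:{&quot;persistentVolumeReclaimPolicy&quot;:&quot;Retain&quot;}}'</code></li> <li>If a <code>PV</code> applied on the new cluster shows up as <code>Released</code> because the old <code>claimRef</code> was left in the manifest, it can be cleared with: <code>$ kubectl patch pv PV_NAME -p '{&quot;spec&quot;:{&quot;claimRef&quot;: null}}'</code></li> </ul>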
Dawid Kruk
<p>I'm running a deployment in Kubernetes with a ConfigMap that includes a set of configuration commands as a script. It works as expected when there's only 1 replica.</p> <p>Is there a way I can create a second pod that executes a different set of commands from the first pod?</p> <p>For example, when the first pod is created, it will execute <code>script-1.sh</code>. But I want to execute a different script (for example <code>script-2.sh</code>) in the second pod, and so on. Only the first pod should execute <code>script-1.sh</code>; the pods created after it should execute <code>script-2.sh</code>, not <code>script-1.sh</code>.</p> <p>Is there a way I can do this?</p> <pre class="lang-yaml prettyprint-override"><code>volumeMounts: - name: config-script mountPath: /scripts volumes: - name: config-script configMap: name: config-script-1 defaultMode: 0744 </code></pre>
Dusty
<p>Without the specification on what exactly your scripts are doing, it could be hard to point you to the most viable and correct solution.</p> <p>As stated previously in my comment, I do agree with answer provided by Krishna Chaurasia. It's not intended use to run a <code>Deployment</code> when each of the replicas has some differences from one another.</p> <p>By not taking into consideration:</p> <ul> <li>What exactly this scripts are doing.</li> <li>The amount of replicas in each <code>Deployment</code>.</li> <li>If/how they are connected to each other.</li> <li>etc.</li> </ul> <p>You can create a Helm chart that would iterate over certain values and produce either:</p> <ul> <li><code>X</code> amount of <code>Pods</code> that are running with their desired scripts.</li> <li><code>X</code> amounts of <code>Deployments</code>/<code>StatefulSets</code> (<strong>with single <code>replica</code></strong> each) that are running with their desired scripts.</li> </ul> <hr /> <h3><strong>Please do consider this as a workaround of how it can be done, not acknowledging the use case!</strong></h3> <p>With following setup of a <code>Helm</code> Chart:</p> <pre class="lang-sh prettyprint-override"><code>test-directory: ├── Chart.yaml ├── templates │ ├── configmap.yaml │ └── deployment.yaml └── values.yaml 1 directory, 4 files </code></pre> <p>Where the contents of the following files are:</p> <ul> <li><code>Chart.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v2 name: ubuntu-scripts description: A Helm chart for Kubernetes version: 0.1.0 </code></pre> <ul> <li><code>values.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>scripts: - script-one # see the configmap, what it does - script-two # see the configmap, what it does </code></pre> <ul> <li><code>configmap.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: scripts-configmap data: script-one: | echo &quot;Hello there!&quot; script-two: | echo &quot;General Kenobi!&quot; </code></pre> <ul> <li><code>deployment.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>{{- range .Values.scripts }} apiVersion: v1 kind: Pod metadata: name: ubuntu-{{ . }} labels: app: ubuntu-{{ . }} spec: restartPolicy: Never containers: - name: ubuntu image: ubuntu imagePullPolicy: Always command: - sleep - infinity volumeMounts: - name: placeholder mountPath: /scripts volumes: - name: placeholder configMap: name: scripts-configmap items: - key: {{ . }} path: script.sh defaultMode: 0744 --- {{- end }} </code></pre> <p>After applying above resources with:</p> <ul> <li><code>helm install NAME . </code> ( being in the <code>Helm</code> Chart directory)</li> </ul> <blockquote> <p>A side note!</p> <p><code>Helm</code> will iterate over the <code>Values.scripts</code> (included in <code>values.yaml</code>) running a loop for each of the items in it. It will also parse the value that is been iterating over in place of <code>{{ . }}</code>.</p> <p>You can see how it will behave by running:</p> <ul> <li><code>helm install ubuntu . 
--dry-run --debug</code></li> </ul> </blockquote> <p>You will see:</p> <pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE ubuntu-script-one 1/1 Running 0 4m12s ubuntu-script-two 1/1 Running 0 4m12s </code></pre> <p>And checking the mounted scripts:</p> <ul> <li><code>kubectl exec -it ubuntu-script-one -- sh /scripts/script.sh</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>Hello there! </code></pre> <ul> <li><code>kubectl exec -it ubuntu-script-two -- sh /scripts/script.sh</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>General Kenobi! </code></pre> <blockquote> <p>Disclaimer!</p> <p>You can also substitute the <code>Pod</code> for <code>Deployment</code>/<code>StatefulSet</code> (adding additional fields) but only with: <code>replicas: 1</code>.</p> </blockquote> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes.io: Docs: Deployment</a></em></li> <li><em><a href="https://helm.sh/" rel="nofollow noreferrer">Helm.sh</a></em></li> </ul>
Dawid Kruk
<p>I am attempting to complete the following tutorial for deploying Wordpress on GKE: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk</a></p> <p>I have used terraform for provisioning the gcp resources, instead of gcp as the tutorial recommends. Here is the deployment that is resulting in a CrashLoopBackOff state.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: wordpress labels: app: wordpress spec: replicas: 1 selector: matchLabels: app: wordpress template: metadata: labels: app: wordpress spec: containers: - image: wordpress name: wordpress env: - name: WORDPRESS_DB_HOST value: 127.0.0.1:3306 # These secrets are required to start the pod. - name: WORDPRESS_DB_USER valueFrom: secretKeyRef: name: cloudsql-db-credentials key: username - name: WORDPRESS_DB_PASSWORD valueFrom: secretKeyRef: name: cloudsql-db-credentials key: password ports: - containerPort: 80 name: wordpress volumeMounts: - name: wordpress-persistent-storage mountPath: /var/www/html # Change archtek-wordpress:us-west1:archtek-wordpress-postgres-instance here to include your GCP # project, the region of your Cloud SQL instance and the name # of your Cloud SQL instance. The format is # :: - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: [&quot;/cloud_sql_proxy&quot;, &quot;-instances=archtek-wordpress:us-west1:archtek-wordpress-mysql-instance=tcp:3306&quot;, # If running on a VPC, the Cloud SQL proxy can connect via Private IP. See: # https://cloud.google.com/sql/docs/mysql/private-ip for more info. # &quot;-ip_address_types=PRIVATE&quot;, &quot;-credential_file=/secrets/cloudsql/key.json&quot;] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true imagePullPolicy: Always volumes: - name: wordpress-persistent-storage persistentVolumeClaim: claimName: wordpress-volumeclaim - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials </code></pre> <p>When I describe the pod, I see the following in the logs:</p> <pre><code>wordpress-54c68dbf59-5djfx wordpress MySQL Connection Error: (2002) Connection refused </code></pre> <p>To rule out the idea that the credentials are invalid, I took the username and password used to create <code>cloudsql-db-credentials</code>, the k8s secret referenced in my deployment yaml, and ran this.</p> <pre><code>$: gcloud sql connect archtek-wordpress-mysql-instance -u wordpress </code></pre> <p>I can connect, no problem. But what I discovered I also cannot do is this:</p> <pre><code>$: mysql -u wordpress -p'$CLOUD_SQL_PASSWORD' \ () -h 35.197.7.98 -P 3306 \ -D archtek-wordpress:us-west1:archtek-wordpress-mysql-instance -v </code></pre> <p>which returns:</p> <pre><code>ERROR 2003 (HY000): Can't connect to MySQL server on '35.197.7.98' (60) </code></pre> <p>I know that when using the <code>gcloud</code> client to connect to a cloudsql database, it whitelists for ip for a 5 minute period prior to authentication, which might explain why the <code>mysql</code> client fails to authenticate. However, I'm not sure if this rationale holds up for my deployment in the cluster. 
Does it also need to be whitelisted for cloudsql to accept auth requests?</p> <p>Here is the terraform file for provisioning the cloudsql instance:</p> <pre><code>resource &quot;google_sql_database_instance&quot; &quot;postgres&quot; { name = &quot;archtek-wordpress-mysql-instance&quot; database_version = &quot;MYSQL_5_7&quot; settings { tier = &quot;db-f1-micro&quot; availability_type = &quot;ZONAL&quot; } } </code></pre>
Kyle Green
<p>The error you encountered when trying to connect outside from <code>GKE</code> cluster:</p> <blockquote> <p>ERROR 2003 (HY000): Can't connect to MySQL server on '35.197.7.98' (60)</p> </blockquote> <p>It's because the site (IP) you are connecting from is not authorized to do that. Using:</p> <ul> <li><code>$ gcloud sql connect ...</code></li> </ul> <p>is allowing to connect to a <code>SQL</code> instance for a 5 minute period.</p> <p>You don't need to authorize network you are connecting from when using <code>CloudSQL proxy</code> within <code>GKE</code>.</p> <p>You can see the authorization part in <code>GCP -&gt; SQL -&gt; Instance -&gt; Connections</code>.</p> <p>Also, you can see the logs of your pods (<code>wordpress</code> and <code>cloudsql-proxy</code>) to determine which pod is causing problems (other than <code>$ kubectl describe</code>):</p> <ul> <li><code>$ kubectl logs POD_NAME -c wordpress</code></li> </ul> <hr /> <p>More reference on <code>CloudSQL</code> and <code>sql-proxy</code>:</p> <ul> <li><p><em><a href="https://cloud.google.com/sql/docs/mysql/diagnose-issues" rel="nofollow noreferrer">Cloud.google.com: SQL: MySQL: Diagnose issues</a></em></p> </li> <li><p><em><a href="https://cloud.google.com/sql/docs/mysql/sql-proxy#troubleshooting" rel="nofollow noreferrer">Cloud.google.com: SQL: MySQL: sql-proxy: Troubleshooting</a></em></p> </li> </ul> <hr /> <p>Assuming that you created your user for <code>CloudSQL</code> instance with <code>.tf</code> file from your github page, there could be a reason why it's failing. The part responsible for creating a user <code>wordpress</code> has wrong host parameter (below example is edited):</p> <pre><code>resource &quot;google_sql_user&quot; &quot;users&quot; { name = &quot;wordpress&quot; instance = google_sql_database_instance.postgres.name # host = &quot;*&quot; &lt;- BAD host = &quot;%&quot; # &lt;- GOOD password = random_password.password.result } </code></pre> <p>I couldn't connect to the server with a following parameter: <code>host = &quot;*&quot;</code>. 
Changing it from <code>&quot;*&quot;</code> to <code>&quot;%&quot;</code> solved my issue.</p> <hr /> <p>I've managed to create <code>.tf</code> files which are similar to the parts of the official <code>GKE</code> guide:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Tutorials: Persistent disk </a></em></li> </ul> <p>Guide to connect Terraform to the <code>GCP</code> project:</p> <ul> <li><em><a href="https://www.terraform.io/docs/providers/google/guides/getting_started.html" rel="nofollow noreferrer">Terraform.io: Google: Guides: Getting started</a></em></li> </ul> <p>Files used:</p> <ul> <li><code>main.tf</code></li> <li><code>vpc.tf</code> - create a <code>VPC</code> (basing on github linked in the comment)</li> <li><code>gke.tf</code> - create a <code>GKE</code> cluster in new <code>VPC</code></li> <li><code>mysql.tf</code> - create a <code>CloudSQL</code> instance and a user <code>wordpress</code></li> <li><code>pvc.tf</code> - create a <code>PVC</code> for <code>Wordpress</code> deployment</li> <li><code>sa.tf</code> - create a <code>ServiceAccount</code> and bind it with permission required to access <code>CloudSQL</code> instances</li> <li><code>secret.tf</code> - create a key for above <code>ServiceAccount</code> and create <code>Kubernetes</code> secrets for <code>Wordpress</code> and <code>CloudSQL</code> pods</li> <li><code>deployment.tf</code> - create a deployment that will run <code>Wordpress</code> and <code>cloudsql-proxy</code></li> </ul> <p>I ran below commands each time I added a new file (in above order):</p> <ul> <li><code>$ terraform init</code></li> <li><code>$ terraform apply</code></li> </ul> <p><code>main.tf</code>:</p> <pre><code>provider &quot;google&quot; { project = &quot;ENTER-YOUR-PROJECT-ID&quot; region = &quot;europe-west3&quot; zone = &quot;europe-west3-c&quot; } variable project { type = string default = &quot;ENTER-YOUR-PROJECT-ID&quot; } variable zone { type = string default = &quot;europe-west3-c&quot; } variable region { type = string default = &quot;europe-west3&quot; } </code></pre> <p><code>vpc.tf</code>:</p> <pre><code>resource &quot;google_compute_network&quot; &quot;terraform-network&quot; { name = &quot;terraform-network&quot; auto_create_subnetworks = &quot;false&quot; } resource &quot;google_compute_subnetwork&quot; &quot;terraform-subnet&quot; { name = &quot;terraform-subnet&quot; region = var.region network = google_compute_network.terraform-network.name ip_cidr_range = &quot;10.0.0.0/24&quot; } </code></pre> <p><code>gke.tf</code>:</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;gke-terraform&quot; { name = &quot;gke-terraform&quot; location = var.zone initial_node_count = 1 network = google_compute_network.terraform-network.name subnetwork = google_compute_subnetwork.terraform-subnet.name } </code></pre> <p>I also ran:</p> <ul> <li><code>$ gcloud container clusters get-credentials gke-terraform --zone=europe-west3-c</code></li> </ul> <p><code>mysql.tf</code></p> <pre><code>resource &quot;google_sql_database_instance&quot; &quot;cloudsql&quot; { name = &quot;cloudsql-terraform&quot; database_version = &quot;MYSQL_5_7&quot; settings { tier = &quot;db-f1-micro&quot; availability_type = &quot;ZONAL&quot; } } data &quot;google_sql_database_instance&quot; &quot;cloudsql&quot; { name = &quot;cloudsql-terraform&quot; } resource &quot;random_password&quot; &quot;wordpress-cloudsql-password&quot; { length = 18 
special = true override_special = &quot;_%@&quot; } resource &quot;local_file&quot; &quot;password-file&quot; { content = random_password.wordpress-cloudsql-password.result filename = &quot;./password-file&quot; } resource &quot;google_sql_user&quot; &quot;cloudsql-wordpress-user&quot; { name = &quot;wordpress&quot; instance = google_sql_database_instance.cloudsql.name host = &quot;%&quot; password = random_password.wordpress-cloudsql-password.result } </code></pre> <p><code>pvc.tf</code>:</p> <pre><code>resource &quot;google_compute_disk&quot; &quot;terraform-pd&quot; { name = &quot;terraform-disk&quot; type = &quot;pd-standard&quot; zone = &quot;europe-west3-c&quot; } resource &quot;kubernetes_persistent_volume&quot; &quot;terraform-pv&quot; { metadata { name = &quot;wordpress-pv&quot; } spec { capacity = { storage = &quot;10Gi&quot; } storage_class_name = &quot;standard&quot; access_modes = [&quot;ReadWriteOnce&quot;] persistent_volume_source { gce_persistent_disk { pd_name = google_compute_disk.terraform-pd.name } } } } resource &quot;kubernetes_persistent_volume_claim&quot; &quot;terraform-pvc&quot; { metadata { name = &quot;wordpress-pvc&quot; } spec { access_modes = [&quot;ReadWriteOnce&quot;] storage_class_name = &quot;standard&quot; resources { requests = { storage = &quot;10Gi&quot; } } volume_name = kubernetes_persistent_volume.terraform-pv.metadata.0.name } } </code></pre> <p><code>sa.tf</code>:</p> <pre><code>resource &quot;google_service_account&quot; &quot;cloudsql-proxy-terraform&quot; { account_id = &quot;cloudsql-proxy-terraform&quot; display_name = &quot;cloudsql-proxy-terraform&quot; } data &quot;google_service_account&quot; &quot;cloudsql-proxy-terraform&quot; { account_id = &quot;cloudsql-proxy-terraform&quot; } resource &quot;google_project_iam_binding&quot; &quot;cloudsql-proxy-binding&quot; { project = var.project role = &quot;roles/cloudsql.client&quot; members = [ &quot;serviceAccount:${google_service_account.cloudsql-proxy-terraform.email}&quot;, ] } </code></pre> <p><code>secret.tf</code>:</p> <pre><code>resource &quot;google_service_account_key&quot; &quot;cloudsql-proxy-key&quot; { service_account_id = google_service_account.cloudsql-proxy-terraform.name } resource &quot;kubernetes_secret&quot; &quot;cloudsql-instance-credentials-terraform&quot; { metadata { name = &quot;cloudsql-instance-credentials-terraform&quot; } data = { &quot;key.json&quot; = base64decode(google_service_account_key.cloudsql-proxy-key.private_key) } } resource &quot;kubernetes_secret&quot; &quot;cloudsql-db-credentials-terraform&quot; { metadata { name = &quot;cloudsql-db-credentials-terraform&quot; } data = { &quot;username&quot; = &quot;wordpress&quot; &quot;password&quot; = random_password.wordpress-cloudsql-password.result } } </code></pre> <p><code>deployment.tf</code>:</p> <pre><code>resource &quot;kubernetes_deployment&quot; &quot;wordpress-deployment&quot; { metadata { name = &quot;wordpress-deployment&quot; labels = { app = &quot;wordpress&quot; } } spec { replicas = 1 selector { match_labels = { app = &quot;wordpress&quot; } } template { metadata { labels = { app = &quot;wordpress&quot; } } spec { container { image = &quot;wordpress&quot; name = &quot;wordpress&quot; env { name = &quot;WORDPRESS_DB_HOST&quot; value = &quot;127.0.0.1:3306&quot; } env { name = &quot;WORDPRESS_DB_USER&quot; value_from { secret_key_ref { name = kubernetes_secret.cloudsql-db-credentials-terraform.metadata.0.name key = &quot;username&quot; } } } env { name = &quot;WORDPRESS_DB_PASSWORD&quot; 
value_from { secret_key_ref { name = kubernetes_secret.cloudsql-db-credentials-terraform.metadata.0.name key = &quot;password&quot; } } } port { name = &quot;http&quot; container_port = 80 protocol = &quot;TCP&quot; } volume_mount { mount_path = &quot;/var/www/html&quot; name = &quot;wordpress-persistent-storage&quot; } } container { image = &quot;gcr.io/cloudsql-docker/gce-proxy:1.11&quot; name = &quot;cloudsql-proxy&quot; command = [&quot;/cloud_sql_proxy&quot;, &quot;-instances=${google_sql_database_instance.cloudsql.connection_name}=tcp:3306&quot;, &quot;-credential_file=/secrets/cloudsql/key.json&quot;] security_context { run_as_user = 2 allow_privilege_escalation = &quot;false&quot; } volume_mount { mount_path = &quot;/secrets/cloudsql&quot; name = &quot;cloudsql-instance-credentials-terraform&quot; read_only = &quot;true&quot; } } volume { name = &quot;wordpress-persistent-storage&quot; persistent_volume_claim { claim_name = &quot;wordpress-pvc&quot; } } volume { name = &quot;cloudsql-instance-credentials-terraform&quot; secret { secret_name = &quot;cloudsql-instance-credentials-terraform&quot; } } } } } } </code></pre> <p>After checking if the resources created correctly ( <code>$ kubectl logs POD_NAME -c CONTAINER_NAME</code>) you can expose your <code>Wordpress</code> with:</p> <ul> <li><code>$ kubectl expose deployment wordpress-deployment --type=LoadBalancer --port=80</code></li> </ul>
Dawid Kruk
<p>I'm running a cluster on EKS, and following the tutorial to deploy one using the command <code>eksctl create cluster --name prod --version 1.17 --region eu-west-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key public-key.pub --managed</code>.</p> <p>Once I'm done with my tests (mainly installing and then uninstalling helm charts), and i have a clean cluster with no jobs running, i then try to delete it with <code>eksctl delete cluster --name prod</code>, causing these errors.</p> <pre><code>[ℹ] eksctl version 0.25.0 [ℹ] using region eu-west-1 [ℹ] deleting EKS cluster &quot;test&quot; [ℹ] deleted 0 Fargate profile(s) [✔] kubeconfig has been updated [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress [ℹ] 2 sequential tasks: { delete nodegroup &quot;standard-workers&quot;, delete cluster control plane &quot;test&quot; [async] } [ℹ] will delete stack &quot;eksctl-test-nodegroup-standard-workers&quot; [ℹ] waiting for stack &quot;eksctl-test-nodegroup-standard-workers&quot; to get deleted [✖] unexpected status &quot;DELETE_FAILED&quot; while waiting for CloudFormation stack &quot;eksctl-test-nodegroup-standard-workers&quot; [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure [✖] AWS::CloudFormation::Stack/eksctl-test-nodegroup-standard-workers: DELETE_FAILED – &quot;The following resource(s) failed to delete: [ManagedNodeGroup]. &quot; [✖] AWS::EKS::Nodegroup/ManagedNodeGroup: DELETE_FAILED – &quot;Nodegroup standard-workers failed to stabilize: [{Code: Ec2SecurityGroupDeletionFailure,Message: DependencyViolation - resource has a dependent object,ResourceIds: [[REDACTED]]}]&quot; [ℹ] 1 error(s) occurred while deleting cluster with nodegroup(s) [✖] waiting for CloudFormation stack &quot;eksctl-test-nodegroup-standard-workers&quot;: ResourceNotReady: failed waiting for successful resource state </code></pre> <p>To fix them I had to manually delete AWS VPCs and then ManagednodeGroups, to then delete everything again.</p> <p>I tried again with the steps above (creating and deleting with the commands provided in the official getting started documentation), but I get the same errors upon deleting.</p> <p>It seems extremely weird that I have to manually delete resources when doing something like this. Is there a fix for this problem, am i doing something wrong, or is this standard procedure?</p> <p>All commands are run through the official eksctl cli, and I'm following the <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html" rel="nofollow noreferrer">official eksctl deployment</a></p>
shaki
<p>If you try to delete the Security Group that the Node Group EC2 instances are attached to, you will find the root cause.</p> <p>In most cases it will report a <code>DependencyViolation</code> because a Network Interface is still attached to that Security Group.</p> <p>The solution is to delete that leftover Network Interface manually. After that, the Node Group (and its CloudFormation stack) can be deleted without any error.</p>
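<p>A minimal AWS CLI sketch of that manual cleanup (the security group and interface IDs below are placeholders; take the real IDs from the <code>DependencyViolation</code> message first):</p> <pre><code># list network interfaces still attached to the node group security group
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-0123456789abcdef0 \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,Status]' --output table

# delete a leftover interface only if its Status is &quot;available&quot;
aws ec2 delete-network-interface --network-interface-id eni-0123456789abcdef0
</code></pre> <p>Once no interfaces reference the security group any more, re-running <code>eksctl delete cluster</code> (or deleting the stuck CloudFormation stack) should succeed.</p>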
Karthikeyan S
<p><a href="https://i.stack.imgur.com/EjxHn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EjxHn.png" alt=") "></a></p> <p>I want to connect to my <code>Postgres DB</code> . I use deployment <code>NodePort</code> IP for the host field and also data from config file :</p> <pre><code>data: POSTGRES_DB: postgresdb POSTGRES_PASSWORD: my_password POSTGRES_USER: postgresadmin </code></pre> <p><a href="https://i.stack.imgur.com/nHT2w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nHT2w.png" alt="enter image description here"></a></p> <p>But I get error . What do I do wrong ? If you need more info - let me know .</p>
Andrey Radkevich
<p>Unless you are connected to your cluster through a VPN (or Direct Connect), you can't access 10.121.8.109. It's a private IP address and is only reachable by apps and services within your VPC.</p> <p>You need public access to your NodePort service. Run <code>kubectl get service</code> to find the NodePort that was assigned, then connect using a node's external IP address and that port (and make sure the firewall allows traffic to it).</p> <p>Rather than a NodePort service, you are better off using a LoadBalancer type service, which gives you more flexibility in managing this, especially in a production environment. It will cost a little more. The likelihood of a node's IP address changing is high, but a load balancer or ingress would manage this for you automatically behind a fixed DNS name. So weigh the pros and cons of each service type based on your workload.</p>
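<p>A minimal sketch of a LoadBalancer service for the database (the <code>app: postgres</code> selector is an assumption; match it to the labels on your Postgres deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: LoadBalancer
  selector:
    app: postgres       # assumed pod label - adjust to your deployment
  ports:
    - port: 5432
      targetPort: 5432
</code></pre> <p>After applying it, <code>kubectl get service postgres-external</code> should eventually show an external IP you can use as the host in your client. Keep in mind that exposing a database publicly like this should be combined with firewall rules or authorized networks.</p>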
Parth Mehta
<p><a href="https://i.stack.imgur.com/Fbhna.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fbhna.png" alt="enter image description here" /></a></p> <p>I encourage an issue regarding setup the TLS Cert-Manager Controller on GKE for WSO2 API Management.</p> <hr /> <p>I am using WSO2 product Docker images available from WSO2 Private Docker Registry, following the Helm Chart for the deployment of WSO2 API Manager with WSO2 API Manager Analytics on Github (<a href="https://github.com/wso2/kubernetes-apim/blob/master/advanced/am-pattern-1/README.md" rel="nofollow noreferrer">README</a>). And I successfully deployed the WSO2 API Manager with Nginx Ingress Controller (<a href="https://medium.com/bluekiri/deploy-a-nginx-ingress-and-a-certitificate-manager-controller-on-gke-using-helm-3-8e2802b979ec" rel="nofollow noreferrer">deploy-a-nginx-ingress-and-a-certitificate-manager-controller-on-gke</a>).</p> <hr /> <p>I want to create a Kubernetes cluster on Google Cloud Platform using an Nginx Ingress Controller to integrate with a certificate manager to automate the process of issue and renew the required certificates.</p> <hr /> <p>I easily replicate the TLS Cert-Manager Controller on GKE for HelloWorld example from the same medium tutorial (<a href="https://medium.com/bluekiri/deploy-a-nginx-ingress-and-a-certitificate-manager-controller-on-gke-using-helm-3-8e2802b979ec" rel="nofollow noreferrer">deploy-a-nginx-ingress-and-a-certitificate-manager-controller-on-gke</a>).</p> <p><a href="https://i.stack.imgur.com/PFLOs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PFLOs.jpg" alt="enter image description here" /></a></p> <p><strong>hello-app-ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: cert-manager.io/issuer: letsencrypt-production kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;networking.k8s.io/v1beta1&quot;,&quot;kind&quot;:&quot;Ingress&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;cert-manager.io/issuer&quot;:&quot;letsencrypt-production&quot;,&quot;kubernetes.io/ingress.class&quot;:&quot;nginx&quot;,&quot;nginx.ingress.kubernetes.io/ssl-redirect&quot;:&quot;true&quot;},&quot;name&quot;:&quot;hello-app-ingress&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;rules&quot;:[{&quot;host&quot;:&quot;test.japangly.xyz&quot;,&quot;http&quot;:{&quot;paths&quot;:[{&quot;backend&quot;:{&quot;serviceName&quot;:&quot;hello-app&quot;,&quot;servicePort&quot;:8080},&quot;path&quot;:&quot;/helloworld&quot;}]}}],&quot;tls&quot;:[{&quot;hosts&quot;:[&quot;test.japangly.xyz&quot;],&quot;secretName&quot;:&quot;test-japangly-xyz-tls&quot;}]}} kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; creationTimestamp: &quot;2020-08-30T04:27:12Z&quot; generation: 3 name: hello-app-ingress namespace: default resourceVersion: &quot;6478&quot; selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hello-app-ingress uid: ea2d8b13-e9b6-4cb0-873d-76ed40253e4f spec: rules: - host: test.japangly.xyz http: paths: - backend: serviceName: hello-app servicePort: 8080 path: /helloworld tls: - hosts: - test.japangly.xyz secretName: test-japangly-xyz-tls status: loadBalancer: ingress: - ip: 35.239.145.46 </code></pre> <p>However, not working the WSO2 API Management, all I get is</p> <blockquote> <p>Kubernetes Ingress Controller Fake Certificate</p> </blockquote> <p><a href="https://i.stack.imgur.com/xx4d5.png" 
rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xx4d5.png" alt="enter image description here" /></a></p> <p><strong>wso2am-pattern-1-am-ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: cert-manager.io/issuer: letsencrypt-production kubernetes.io/ingress.class: nginx meta.helm.sh/release-name: wso2am-pattern-1 meta.helm.sh/release-namespace: wso2-apim nginx.ingress.kubernetes.io/affinity: cookie nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/session-cookie-hash: sha1 nginx.ingress.kubernetes.io/session-cookie-name: route nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; creationTimestamp: &quot;2020-08-30T04:41:10Z&quot; generation: 4 labels: app.kubernetes.io/managed-by: Helm name: wso2am-pattern-1-am-ingress namespace: wso2-apim resourceVersion: &quot;88840&quot; selfLink: /apis/extensions/v1beta1/namespaces/wso2-apim/ingresses/wso2am-pattern-1-am-ingress uid: 58f4b549-a565-493b-9f9f-72ad76877819 spec: rules: - host: am.japangly.xyz http: paths: - backend: serviceName: wso2am-pattern-1-am-service servicePort: 9443 path: / tls: - hosts: - am.japangly.xyz secretName: am-japangly-xyz-tls status: loadBalancer: ingress: - ip: 35.239.145.46 </code></pre>
Japang LY
<p>As I said in the comment:</p> <blockquote> <p>I ran setup like yours and noticed that the <code>cert-manager</code> was creating the secret but was not provisioning it further. <code>Issuer</code> is namespaced resource and needs to be in namespace where your <code>Ingress</code> resides. Please tell if your namespace <code>wso2-apim</code> have the <code>Issuer</code> needed to provide the certificate. For troubleshooting you can run <code>$ kubectl describe certificate -n namespace</code>. Also the fake Kubernetes certificate is used when there is an issue with a <code>tls: secret</code> part.</p> </blockquote> <p>I wanted to give more insight on what the potential issue may be and some other tips working with <code>nginx-ingress</code>.</p> <hr /> <p>Certificate showing as <code>Kubernetes Ingress Controller Fake Certificate</code> will kick in when there are issues with the actual secret storing the certificate used in the <code>Ingress</code> definition.</p> <p>One of the possible situations where <code>Fake Certificate</code> will kick in is in the lack of the actual secret with a certificate.</p> <hr /> <p>As pointed in the part of my comment, <code>Issuer</code> is a namespaced resource and it needs to be in a namespace that <code>Ingress</code> and <code>secret</code> is created. It will create a <code>secret</code> but it will not progress further with signing.</p> <p>Looking on your setup:</p> <ul> <li><code>nginx-ingress</code> controller spawned in <code>nginx</code> namespace (<strong>GOOD</strong>)</li> <li><code>cert-manager</code> spawned in namespace <code>cert-manager</code> namespace (<strong>GOOD</strong>)</li> <li><code>Issuer</code> for certificates spawned in <code>default</code> namespace (<strong>POTENTIAL ISSUE</strong>)</li> <li><code>WSO2</code> application spawned in <code>wso2-apim</code> namespace (<strong>POTENTIAL ISSUE</strong>)</li> </ul> <p>To make it work you can either:</p> <ul> <li>Run your <code>WSO2</code> application in <code>default</code> namespace same as <code>Issuer</code></li> <li>Create an <code>Issuer</code> in <code>wso2-apim</code> namespace</li> <li>Create a <a href="https://docs.cert-manager.io/en/release-0.11/reference/clusterissuers.html" rel="nofollow noreferrer">ClusterIssuer</a></li> </ul> <p>As pointed by official documentation:</p> <blockquote> <p>An <code>Issuer</code> is a namespaced resource, and it is not possible to issue certificates from an <code>Issuer</code> in a different namespace. 
This means you will need to create an <code>Issuer</code> in each namespace you wish to obtain <code>Certificates</code> in.</p> <p><em><a href="https://cert-manager.io/docs/concepts/issuer/" rel="nofollow noreferrer">Cert-manager.io: Docs: Concepts: Issuer</a></em></p> </blockquote> <hr /> <p>As for troubleshooting steps you can invoke following commands:</p> <ul> <li><code>$ kubectl describe issuer ISSUER_NAME -n namespace</code></li> <li><code>$ kubectl describe certificate CERTIFICATE_NAME -n namespace</code></li> <li><code>$ kubectl describe secret SECRET_NAME -n namespace</code></li> </ul> <hr /> <p>Assuming that:</p> <ul> <li>You have a working Kubernetes cluster</li> <li>You have spawned <code>nginx-ingress</code> and have the <code>LoadBalancer IP</code> provisioned correctly</li> <li>You have a domain name pointing to the <code>LoadBalancer IP</code> of your <code>nginx-ingress</code> controller</li> </ul> <p>After that you spawned:</p> <ul> <li><code>example</code> namespace</li> <li><code>hello-app</code> as in medium guide: <ul> <li><code>$ kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:1.0 -n example</code></li> </ul> </li> <li>exposed it locally <ul> <li><code>$ kubectl expose deployment hello-app --type=NodePort --port=8080 -n example</code></li> </ul> </li> <li>created an <code>Ingress</code> resource like below:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress namespace: example annotations: kubernetes.io/ingress.class: nginx cert-manager.io/issuer: &quot;letsencrypt-prod&quot; spec: tls: - hosts: - super.example.com secretName: super-example-tls rules: - host: super.example.com http: paths: - path: / backend: serviceName: hello-app servicePort: 8080 </code></pre> <p>With no <code>Issuer</code> in the <code>example</code> namespace the logs from the certificate will look like that:</p> <ul> <li><code>$ kubectl describe certificate super-example-tls -n example</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Issuing 6m33s cert-manager Issuing certificate as Secret does not exist Normal Generated 6m33s cert-manager Stored new private key in temporary Secret resource &quot;super-example-tls-XXXXX&quot; Normal Requested 6m33s cert-manager Created new CertificateRequest resource &quot;super-example-tls-XXXXX&quot; </code></pre> <p><code>Issuer.yaml</code> used in an example for more reference:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Issuer metadata: name: letsencrypt-prod namespace: example spec: acme: # The ACME server URL server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: PUT_EMAIL_ADDRESS_HERE # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-prod # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: nginx </code></pre> <p>After you create an <code>Issuer</code> you should see a new event in <code>certificate</code>:</p> <pre class="lang-yaml prettyprint-override"><code> Normal Issuing 25s cert-manager The certificate has been successfully issued </code></pre>
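<p>If you prefer the <code>ClusterIssuer</code> route instead of duplicating the <code>Issuer</code> per namespace, a minimal sketch could look like the one below (the <code>apiVersion</code> depends on your installed cert-manager release; older releases serve <code>cert-manager.io/v1alpha2</code>, newer ones <code>cert-manager.io/v1</code>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  # no namespace - ClusterIssuer is cluster-scoped
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: PUT_EMAIL_ADDRESS_HERE
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre> <p>The <code>Ingress</code> in <code>wso2-apim</code> would then reference it with the <code>cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot;</code> annotation instead of <code>cert-manager.io/issuer</code>.</p>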
Dawid Kruk
<p>I know minikube should be used for local only, but i'd like to create a test environment for my applications.<br /> In order to do that, I wish to expose my applications running inside the minikube cluster to external access (from any device on public internet - like a 4G smartphone).</p> <p><strong>note :</strong> I run minikube with <code>--driver=docker</code></p> <p><strong>kubectl get services</strong></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE web8080 NodePort 10.99.39.162 &lt;none&gt; 8080:31613/TCP 3d1h </code></pre> <p><strong>minikube ip</strong></p> <pre><code>192.168.49.2 </code></pre> <p>One way to do it is as follows :</p> <pre><code>firewall-cmd --add-port=8081/tcp kubectl port-forward --address 0.0.0.0 services/web8080 8081:8080 </code></pre> <p>then I can access it using :</p> <pre><code>curl localhost:8081 (directly from the machine running the cluster inside a VM) curl 192.168.x.xx:8081 (from my Mac in same network - this is the private ip of the machine running the cluster inside a VM) curl 84.xxx.xxx.xxx:8081 (from a phone connected in 4G - this is the public ip exposed by my router) </code></pre> <p>I don't want to use this solution because <code>kubectl port-forward</code> is weak and need to be run every time the port-forwarding is no longer active.</p> <p>How can I achieve this ?</p> <p><strong>(EDITED) - USING LOADBALANCER</strong></p> <p>when using <code>LoadBalancer</code> type and <code>minikube tunnel</code>, I can expose the service only inside the machine running the cluster.</p> <p><strong>kubectl get services</strong></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service LoadBalancer 10.111.61.218 10.111.61.218 8080:31831/TCP 3d3h </code></pre> <p><code>curl 10.111.61.218:8080</code> (inside the machine running the cluster) is working<br /> but <code>curl 192.168.x.xx:8080</code> (from my Mac on same LAN) is not working</p> <p>Thanks</p>
samasoulé
<p><code>Minikube</code> as a development tool for a single node Kubernetes cluster provides inherent isolation layer between Kubernetes and the external devices (being specific the <strong>inbound</strong> traffic to your cluster from <code>LAN</code>/<code>WAN</code>).</p> <p>Different <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">--drivers</a> are allowing for flexibility when it comes to the place where your Kubernetes cluster will be spawned and how it will behave network wise.</p> <blockquote> <p>A side note (workaround)!</p> <p>As your <code>minikube</code> already resides in a <code>VM</code> and uses <code>--driver=docker</code> you could try to use <code>--driver=none</code> (you will be able to <code>curl VM_IP:NodePort</code> from the <code>LAN</code>). It will spawn your Kubernetes cluster directly on the <code>VM</code>.</p> <p>Consider checking it's documentation as there are some certain limitations/disadvantages:</p> <ul> <li><em><a href="https://minikube.sigs.k8s.io/docs/drivers/none/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Drivers: None</a></em></li> </ul> </blockquote> <hr /> <p>As this setup is already basing on the <code>VM</code> (with unknown hypervisor) <strong>and the cluster is intended to be exposed outside of your LAN</strong>, I suggest you going with the production-ready setup. This will inherently eliminate the connectivity issues you are facing. Kubernetes cluster will be provisioned directly on a <code>VM</code> and not in the <code>Docker</code> container.</p> <p>Explaining the <code>--driver=docker</code> used: It will spawn a container on a host system with Kubernetes inside of it. Inside of this container, <code>Docker</code> will be used once again to spawn the necessary <code>Pods</code> to run the Kubernetes cluster.</p> <p>As for the tools to provision your Kubernetes cluster you will need to chose the option that suits your needs the most. <strong>Some</strong> of them are the following:</p> <ul> <li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">Kubeadm</a></em></li> <li><em><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a></em></li> <li><em><a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8S</a></em></li> </ul> <p>After you created your Kubernetes cluster on a <code>VM</code> you could forward the traffic from your router directly to your <code>VM</code>.</p> <hr /> <p>Additional resources that you might find useful:</p> <ul> <li><em><a href="https://stackoverflow.com/questions/62559281/expose-kubernetes-cluster-to-internet/62697373#62697373">Stackoverflow.com: Questions Expose Kubernetes cluster to the Internet (Virtualbox with minikube)</a></em></li> </ul>
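<p>As a quick illustration of the <code>--driver=none</code> workaround mentioned above (a sketch only, assuming you recreate the cluster directly on the Linux VM with a container runtime already installed and accept the limitations listed in the linked docs):</p> <pre><code># start the cluster directly on the VM (the none driver requires root)
sudo minikube start --driver=none

# the existing NodePort service from the question already maps 8080 -&gt; 31613
kubectl get service web8080

# open the NodePort on the VM firewall, then forward that port on your router to the VM
sudo firewall-cmd --add-port=31613/tcp
</code></pre> <p>With that in place, <code>curl VM_IP:31613</code> works from the LAN, and a router port-forward of the same port makes it reachable from the public IP. For anything beyond testing, the production-ready setup described above is still the better path.</p>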
Dawid Kruk
<p>I installed the stable/prometheus helm chart with some minor changes proposed at <a href="https://github.com/helm/charts/pull/17268" rel="nofollow noreferrer">helm/charts#17268</a> to make it compatible with Kubernetes v1.16</p> <p>After installation, none of the Kubernetes grafana dashboards show correct values. I am using 8769 (<a href="https://grafana.com/grafana/dashboards/8769" rel="nofollow noreferrer">https://grafana.com/grafana/dashboards/8769</a>) dashboard which provides many information on cpu, memory, network, etc. This dashboard is working properly on older k8s versions but on v1.16 it shows no results. I also randomly tried some other dashboards (8588, 6879, 10551) but they either just show the requested resource for each pod and not the live usage or showing nothing.</p> <p>What these dashboards do is they send a promql query to prometheus and get the results. For example this is the promql query for cpu usage from 8769 dashboard:</p> <pre><code>sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod_name=~"^$Deployment.*$"}[1m])) by (pod_name) </code></pre> <p>I don't know if I have to change the promql or the problem is somewhere else.</p>
AVarf
<blockquote> <p>Kubernetes 1.16 removes the labels <code>pod_name</code> and <code>container_name</code> from cAdvisor metrics, as duplicates of <code>pod</code> and <code>container</code>.</p> </blockquote> <p>You need to change <code>pod_name</code> to <code>pod</code> and <code>container_name</code> to <code>container</code> in the Grafana dashboards' JSON models, as in the example below.</p>
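<p>For example, the CPU usage query from the question becomes (this is just the original query with the renamed labels substituted):</p> <pre><code>sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m])) by (pod)
</code></pre> <p>The same substitution applies to the memory and network panels that filter or group on <code>pod_name</code> or <code>container_name</code>.</p>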
am4
<p>A semi related question: <a href="https://stackoverflow.com/questions/59374234/options-for-getting-logs-in-kubernetes-pods">Options for getting logs in kubernetes pods</a></p> <p>I am running a tomcat application in Google Kubernetes Engine and that has output to log files like catalina.log, localhost.log and more. As these are not the usual stdout, I have several options and questions regarding the best way of pulling log files to a shared folder / volume in a Kubernetes environment.</p> <p><strong>Option 1:</strong><br> Batch job that uses kubectl cp to move the log files to host, but I don't think this is advisable as pods die frequently and crucial log files will be lost.</p> <p><strong>Option 2:</strong><br> I'm not sure if this is possible as I am still learning how persistent volumes work compared to docker, but is it possible to mount a PVC with the same mountPath as the tomcat/logs folder so that the logs gets written to the PVC directly?</p> <p>In Docker, I used to supply the container run command with a mount-source to specify the volume used for log consolidation:</p> <pre><code>docker container run -d -it --rm --mount source=logs,target=/opt/tomcat/logs .... </code></pre> <p>I am wondering if this is possible in the Kubernetes environment, for example, in the deployment or pod manifest file:</p> <pre><code> volumeMounts: - mountPath: /opt/tomcat/logs/ name: logs volumes: - name: logs persistentVolumeClaim: claimName: logs </code></pre> <p><strong>Option 3:</strong><br> I want to avoid complicating my setup for now, but if all options are exhausted, I would be setting up ElasticSearch, Kibana and Filebeat to ship my log files. </p>
user10518
<p>The solution is actually quite simple after I figured out how everything works. Hopefully this helps someone. I went ahead with option #2.</p> <p>First define a PVC for the tomcat log files:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-logs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>In the deployment.yaml, reference the PVC just created:</p> <pre><code>...
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: tomcat-logs
      volumes:
      - name: tomcat-logs
        persistentVolumeClaim:
          claimName: tomcat-logs
...
</code></pre> <p>As noted, the volume is mounted as root, so the container will not be able to write to it if it does not run with sufficient privileges. In my case, my Dockerfile defined a non-root user, and changing it to root resolved the problem.</p> <p>Edit: If running the container as root via the Dockerfile is not viable, you can escalate privileges in the deployment by adding:</p> <pre><code>...
    spec:
      securityContext:
        runAsUser: 0
...
      securityContext:
        privileged: true
</code></pre> <p>A related question here: <a href="https://stackoverflow.com/questions/46873796/allowing-access-to-a-persistentvolumeclaim-to-non-root-user/46907452#46907452">Allowing access to a PersistentVolumeClaim to non-root user</a></p>
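<p>A gentler alternative to running as root, sketched below under the assumption that the tomcat user in the image has GID 1000: setting <code>fsGroup</code> in the pod security context makes Kubernetes change the group ownership of the mounted volume to that group, so a non-root container can write its logs.</p> <pre><code>    spec:
      securityContext:
        fsGroup: 1000   # assumed GID of the tomcat user in the image
      containers:
      - name: tomcat
        ...
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: tomcat-logs
</code></pre> <p>This keeps the container unprivileged while still letting it write to the PVC.</p>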
user10518
<p>I've tried the following to get HTTP to redirect to HTTPS. I'm not sure where I'm going wrong.</p> <p><code>ingress-nginx</code> object:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ingress-nginx namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx annotations: service.beta.kubernetes.io/aws-load-balancer-type: nlb service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:... service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https spec: type: LoadBalancer selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: http </code></pre> <p><code>my-ingress</code> object:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress namespace: my-namespace annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: "true" spec: tls: - hosts: - app.example.com rules: - host: app.example.com http: paths: - path: / backend: serviceName: my-service servicePort: 80 </code></pre> <p>I get a <code>308 Permanent Redirect</code> on HTTP and HTTPS. I guess this makes sense as the NLB is performing the SSL termination and therefore forwarding HTTP to the Nginx service? I guess I would need to move the SSL termination from the NLB to the Nginx service?</p> <p>Thanks</p>
jamesrogers93
<p>I believe you do need to move the SSL termination to the ingress controller, because I am having the same issue and appear to be stuck in a permanent redirect loop. The traffic comes into the NLB on 443, is terminated there, and is sent to the backend instances over port 80. The ingress sees the traffic on port 80 and redirects to https://, and thus begins the infinite loop.</p>
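<p>A minimal sketch of what that change could look like, based on the Service from the question: let the NLB pass 443 through as plain TCP and terminate TLS in nginx with a certificate stored in the cluster (for example one issued by cert-manager). Note that ACM certificates cannot be exported, so the ingress <code>tls</code> block needs its own certificate in a Kubernetes secret.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # no aws-load-balancer-ssl-cert / ssl-ports annotations any more
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https   # pass TLS through to nginx instead of terminating on the NLB
</code></pre> <p>With that, <code>my-ingress</code> can keep <code>force-ssl-redirect: &quot;true&quot;</code> and reference a <code>secretName</code> under <code>tls:</code> that actually contains a certificate, and nginx can tell HTTP from HTTPS traffic again.</p>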
fubarLives
<p>I am new to kubernetes env. And I am learning kubernetes using minikube. I have a situation where I have to configure fdw (foreign table concept) in PostgreSQL.</p> <p>I am trying to implement this fdw concept in kubernetes. For that I have two pods meta pod where we create the foreign table and server, Data pod where the actual table exist. To create a meta pod you should have an Docker image consist of following script.</p> <p><strong>Script.sh</strong></p> <pre><code>#!/bin/bash psql -d metap -U papu -c &quot;CREATE EXTENSION if not exists postgres_fdw;&quot; psql -d metap -U papu -c &quot;CREATE SERVER if not exists dataserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'datap.default.svc.cluster.local', dbname 'datap', port '5432');&quot; psql -d metap -U papu -c &quot;CREATE USER MAPPING if not exists FOR ais SERVER dataserver OPTIONS (user 'papu', password 'papu');&quot; psql -d metap -U papu -c &quot;CREATE FOREIGN TABLE if not exists dream (id integer, val text) SERVER dataserver OPTIONS (schema_name 'public', table_name 'dream');&quot; </code></pre> <p>The yaml file for the two pods is given below.</p> <p><strong>p-config.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: p-config labels: app: post data: POSTGRES_DB: datap POSTGRES_USER: papu POSTGRES_PASSWORD: papu </code></pre> <p><strong>datap.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: datap labels: app: datap spec: ports: - port: 5432 name: datap clusterIP: None selector: app: datap --- apiVersion: apps/v1 kind: StatefulSet metadata: name: datap spec: serviceName: &quot;datap&quot; replicas: 1 selector: matchLabels: app: datap template: metadata: labels: app: datap spec: containers: - name: datap image: postgres:latest envFrom: - configMapRef: name: p-config ports: - containerPort: 5432 name: datap volumeMounts: - name: datap mountPath: /var/lib/postgresql/data subPath: datap volumeClaimTemplates: - metadata: name: datap spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 2Gi </code></pre> <p>My use case is that I already have data pod up and running, now I have to create the foreign table through running script in meta pod dynamically.</p> <p>For that I am using life cycle hook. when I run this configuration, <em><strong>the foreign table is created and fdw connection is established</strong></em>. <strong>But the logs say life-cycle hook is not Executed</strong>. Is this a bug? 
Or any problem in my configuration?</p> <pre><code>$ kubectl describe pod metap-0 Name: metap-0 Namespace: default Priority: 0 Node: minikube/10.0.2.15 Start Time: Fri, 20 Sep 2019 15:50:41 +0530 Labels: app=metap controller-revision-hash=metap-648ddb5465 statefulset.kubernetes.io/pod-name=metap-0 Annotations: &lt;none&gt; Status: Running IP: 172.17.0.10 Controlled By: StatefulSet/metap Containers: metap: Container ID: Image: &lt;script containing image &gt;:latest Port: 5432/TCP Host Port: 0/TCP State: Running Started: Fri, 20 Sep 2019 15:51:29 +0530 Last State: Terminated Reason: Completed Exit Code: 0 Started: Fri, 20 Sep 2019 15:51:14 +0530 Finished: Fri, 20 Sep 2019 15:51:15 +0530 Ready: True Restart Count: 2 Environment: &lt;none&gt; Mounts: /var/lib/postgresql/data from metap (rw,path=&quot;metap&quot;) /var/run/secrets/kubernetes.io/serviceaccount from default-token Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: mpostgredb: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: metap-metap-0 ReadOnly: false default-token-r2ncm: Type: Secret (a volume populated by a Secret) SecretName: default-token-r2ncm Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 71s (x2 over 71s) default-scheduler pod has unbound immediate PersistentVolumeClaims Normal Scheduled 69s default-scheduler Successfully assigned default/metap-0 to minikube Warning FailedPostStartHook 67s kubelet, minikube Exec lifecycle hook ([/bin/sh -c script.sh]) for Container &quot;metap&quot; in Pod &quot;metap-0_default(6a367766-cd7e-4bab-826a-908e33622bcf)&quot; failed - error: command '/bin/sh -c script.sh' exited with 2: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot;? psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot;? psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot;? psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot;? 
, message: &quot;psql: could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket \&quot;/var/run/postgresql/.s.PGSQL.5432\&quot;?\npsql: could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket \&quot;/var/run/postgresql/.s.PGSQL.5432\&quot;?\npsql: could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket \&quot;/var/run/postgresql/.s.PGSQL.5432\&quot;?\npsql: could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket \&quot;/var/run/postgresql/.s.PGSQL.5432\&quot;?\n&quot; Warning FailedPostStartHook 35s kubelet, minikube Exec lifecycle hook ([/bin/sh -c script.sh]) for Container &quot;metap&quot; in Pod &quot;metap-0_default(6a367766-cd7e-4bab-826a-908e33622bcf)&quot; failed - error: command '/bin/sh -c script.sh' exited with 1: psql: FATAL: the database system is starting up psql: FATAL: the database system is starting up ERROR: server &quot;dataserver&quot; does not exist ERROR: server &quot;dataserver&quot; does not exist , message: &quot;psql: FATAL: the database system is starting up\npsql: FATAL: the database system is starting up\nERROR: server \&quot;dataserver\&quot; does not exist\nERROR: server \&quot;dataserver\&quot; does not exist\n&quot; Normal Killing 35s (x2 over 67s) kubelet, minikube FailedPostStartHook Warning BackOff 33s (x2 over 34s) kubelet, minikube Back-off restarting failed container Normal Created 21s (x3 over 68s) kubelet, minikube Created container metap Normal Started 21s (x3 over 68s) kubelet, minikube Started container metap Normal Pulling 21s (x3 over 68s) kubelet, minikube Pulling image &quot; &lt;script containing image &gt;:latest&quot; Normal Pulled 21s (x3 over 68s) kubelet, minikube Successfully pulled image &quot; &lt;script containing image &gt;:latest&quot; $ kubectl logs metap-0 2019-09-20 10:21:29.500 UTC [1] LOG: listening on IPv4 address &quot;0.0.0.0&quot;, port 5432 2019-09-20 10:21:29.500 UTC [1] LOG: listening on IPv6 address &quot;::&quot;, port 5432 2019-09-20 10:21:29.502 UTC [1] LOG: listening on Unix socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot; 2019-09-20 10:21:29.514 UTC [22] LOG: database system was shut down at 2019-09-20 10:21:15 UTC 2019-09-20 10:21:29.518 UTC [1] LOG: database system is ready to accept connections </code></pre> <p><em><strong>kubectl version</strong></em></p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;15&quot;, GitVersion:&quot;v1.15.3&quot;, GitCommit:&quot;2d3c76f9091b6bec110a5e63777c332469e0cba2&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-08-19T11:13:54Z&quot;, GoVersion:&quot;go1.12.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;15&quot;, GitVersion:&quot;v1.15.2&quot;, GitCommit:&quot;f6278300bebbb750328ac16ee6dd3aa7d3549568&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-08-05T09:15:22Z&quot;, GoVersion:&quot;go1.12.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>*** Minikube version ***</p> <pre><code>minikube version: v1.3.1 commit: ca60a424ce69a4d79f502650199ca2b52f29e631 </code></pre>
SHARON XAVIER
<p>Using init containers, you can check whether the second (data) pod is already up and running before the script is executed:</p> <pre><code> spec:
   initContainers:
   - name: check-second-ready
     image: postgres
     command: ['sh', '-c',
       'until pg_isready -h connection/url/to/second/container -p 5432;
        do echo waiting for database; sleep 2; done;']
   containers:
     first container config...
------
</code></pre>
SHARON XAVIER
<p>I have a Kubernetes cluster (Docker and containerd) where I deployed the <a href="https://github.com/weaveworks/weave" rel="nofollow noreferrer">Weave CNI plugin</a>.</p> <p>When inspecting the master node processes (<code>ps -aef --forest</code>) I can see that the <code>containerd-shim</code> process that runs the weave plugin has 3 processes in it's tree:</p> <pre><code>31175 16241 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/836489.. -address /run/containerd/contai 31199 31175 | \_ /bin/sh /home/weave/launch.sh 31424 31199 | | \_ /home/weave/weaver --port=6783 --datapath=datapath --name=36:e4:33:8 31656 31175 | \_ /home/weave/kube-utils -run-reclaim-daemon -node-name=ubuntu -peer-name=36:e4 </code></pre> <p>What I fail to understand is how the <code>kube-utils</code> process (pid 31656), which is issued from the <code>launch.sh</code> script process (pid 31199) <strong>is a sibling process of it and not a child process?</strong></p> <p>I have tried to create a similar environment to emulate this scenario, by creating a docker image from the following:</p> <pre><code>FROM ubuntu:18.04 ADD ./launch.sh /home/temp/ ENTRYPOINT [&quot;/home/temp/launch.sh&quot;] </code></pre> <p>Where <code>launch.sh</code> in my case is similar in the idea to <a href="https://github.com/weaveworks/weave/blob/master/prog/weave-kube/launch.sh" rel="nofollow noreferrer">that of weave</a>:</p> <pre><code>#!/bin/sh start() { sleep 2000&amp; } start &amp; sleep 4000 </code></pre> <p>After deploying this to the cluster I get the following process tree:</p> <pre><code>114944 16241 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/d9a6904 -address /run/containerd/contai 114972 114944 \_ /bin/sh /home/temp/launch.sh 115002 114972 \_ sleep 4000 115003 114972 \_ sleep 2000 </code></pre> <p>And you can see that both processes are children of the main container process and not a sibling.</p> <p><strong>According to the weave scenario above, I would expect that the <code>sleep 2000</code> process would be a sibling to the <code>launch.sh</code> process and not a child.</strong></p> <p>Any idea how to explain the weave situation above? how can I reproduce this locally? or in what scenario is a sibling process created to the container process?</p> <p>Thank you all.</p>
omricoco
<blockquote> <p>According to the weave scenario above, I would expect that the sleep 2000 process would be a sibling to the launch.sh process and not a child.</p> </blockquote> <p>I reproduced the setup you were having and encountered similar situation (one of the <code>sleep</code> command was not a sibling to <code>launch.sh</code>). To achieve that you will need following parameters in your <code>Deployment</code> or <code>Pod</code> YAML:</p> <ul> <li><code>hostPid</code></li> <li> <pre><code>securityContext: privileged: true </code></pre> </li> </ul> <p>You can read more about <code>hostPid</code> here:</p> <ul> <li><em><a href="https://medium.com/@chrispisano/limiting-pod-privileges-hostpid-57ce07b05896" rel="nofollow noreferrer">Medium.com: Limiting pod privileges</a></em></li> <li><em><a href="https://stackoverflow.com/questions/41977957/what-do-hostpid-and-hostipc-options-mean-in-a-kubernetes-pod">Stackoverflow.com: Questions: What do hostpid and hostipc options mean in a kubernetes pod</a></em></li> </ul> <p>You can read more about <code>securityContext</code> here:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">Kubernetes.io: Configure pod container: Security context</a></em></li> </ul> <hr /> <p>It's working with <code>Weave</code> as it's having parameters mentioned above. You can look them up here:</p> <ul> <li><em><a href="https://github.com/weaveworks/weave/blob/master/prog/weave-kube/weave-daemonset-k8s-1.11.yaml#L180" rel="nofollow noreferrer">Github.com: Weaveworks: weave-daemonset-k8s-1.11.yaml </a></em>: <ul> <li>line <code>141</code></li> <li>line <code>172</code></li> <li>line <code>180</code></li> </ul> </li> </ul> <p>Also this processes are running by:</p> <ul> <li><em><a href="https://github.com/weaveworks/weave/blob/master/prog/weave-kube/launch.sh#L186" rel="nofollow noreferrer">Github.com: Weaveworks: weave-kube: launch.sh: Line 186</a></em></li> </ul> <hr /> <h2>Example</h2> <p>This is an example to show how you can have a setup where the <code>sleep</code> command will be a sibling to <code>launch.sh</code>.</p> <p>The process can differ:</p> <ul> <li>using <code>ConfigMap</code> with a script as an entrypoint</li> <li>building an image with all the files included</li> </ul> <p><code>launch.sh</code> file:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/bash start() { sleep 10030 &amp; } start &amp; ( sleep 10040 &amp;) sleep 10050 &amp; /bin/sh -c 'sleep 10060' </code></pre> <h3>Using <code>ConfigMap</code> with a script as an entrypoint</h3> <p>You can use above script to create a <code>configMap</code> which will be used to run a pod:</p> <ul> <li><code>$ kubectl create cm --from-file=launch.sh</code></li> </ul> <p><code>Pod</code> YAML definition:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: labels: run: bashtest name: bashtest spec: containers: - image: ubuntu name: bashtest command: [&quot;/mnt/launch.sh&quot;] resources: {} securityContext: privileged: true volumeMounts: - mountPath: /mnt/launch.sh name: ep subPath: launch.sh dnsPolicy: ClusterFirst restartPolicy: Always hostPID: true volumes: - name: ep configMap: defaultMode: 0750 items: - key: launch.sh path: launch.sh name: entrypoint </code></pre> <h3>Building an image with all the files included</h3> <p>You can also build an image. 
Please remember that this image is only for <strong>example purposes</strong>.</p> <p><code>Dockerfile</code>:</p> <pre><code>FROM ubuntu:18.04 ADD ./launch.sh / RUN chmod 777 ./launch.sh ENTRYPOINT [&quot;/launch.sh&quot;] </code></pre> <p><code>Pod</code> YAML definition:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: process labels: app: ubuntu spec: containers: - image: gcr.io/dkruk-test-00/bashtest imagePullPolicy: Always name: ubuntu securityContext: privileged: true hostPID: true restartPolicy: Always </code></pre> <hr /> <p>After applying the manifest for this resources (either with built image or with a <code>ConfigMap</code>), you should be able to run (on a node that is running this <code>Pod</code>):</p> <ul> <li><code>$ ps -aef --forest</code></li> </ul> <p>and see the output similar to this (only part):</p> <pre class="lang-sh prettyprint-override"><code>root 2297272 290 0 09:44 ? 00:00:00 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5c802039033683464d5a586 root 2297289 2297272 0 09:44 ? 00:00:00 \_ /bin/bash /launch.sh root 2297306 2297289 0 09:44 ? 00:00:00 | \_ sleep 10050 root 2297307 2297289 0 09:44 ? 00:00:00 | \_ /bin/sh -c sleep 10060 root 2297310 2297307 0 09:44 ? 00:00:00 | \_ sleep 10060 root 2297305 2297272 0 09:44 ? 00:00:00 \_ sleep 10040 root 2297308 2297272 0 09:44 ? 00:00:00 \_ sleep 10030 </code></pre>
Dawid Kruk
<p>I want to apply <code>VPA</code> (<strong>vertical pod autoscaling</strong>) to database pods. Can we use <code>VPA</code> for vertical database autoscaling, given that <code>VPA</code> requires at least 2 replicas (ref: <a href="https://github.com/kubernetes/autoscaler/issues/1665#issuecomment-464679271" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/issues/1665#issuecomment-464679271</a>) and deletes pods when the configured criteria are reached? When pods are deleted, their data is deleted with them.</p> <p>What is good practice for using <code>VPA</code> with database pods?</p>
Developer Desk
<p><code>VPA</code> - Vertical pod autoscaler can work in 2 ways:</p> <ul> <li>Recommendation mode - it will recommend the requests and limits for pods based on resources used</li> <li>Auto mode - it will automatically analyze the usage and set the request and limits on pods. This will result in pod termination to recreate it with new specification as stated here:</li> </ul> <blockquote> <p>Due to Kubernetes limitations, the only way to modify the resource requests of a running Pod is to recreate the Pod. If you create a <code>VerticalPodAutoscaler</code> with an <code>updateMode</code> of &quot;Auto&quot;, the <code>VerticalPodAutoscaler</code> evicts a Pod if it needs to change the Pod's resource requests.</p> <p><em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Vertical pod autoscaler</a></em></p> </blockquote> <p>Please refer to above link for more information regarding the concepts of <code>VPA</code>.</p> <p>The fact that it needs at least 2 replicas is most probably connected with the fact of high availability. As the pods are getting evicted to support new limits they are unable to process the request. If it came to situation where there is only 1 replica at the time, this replica wouldn't be able to respond to requests when in terminating/recreating state.</p> <p>There is an official guide to run VPA on <code>GKE</code>:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/vertical-pod-autoscaling" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: How to: Vertical pod autoscaling</a></em></li> </ul> <p><code>VPA</code> supports: <code>Deployments</code> as well as <code>StatefulSets</code>.</p> <blockquote> <h3>StatefulSet</h3> <p>Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.</p> <p><strong>If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution.</strong></p> <p><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes.io: StatefulSet</a></em></p> </blockquote> <p>Configuring <code>StatefulSet</code> with <code>PersistentVolumes</code> will ensure that the data stored on <code>PV</code> will not be deleted in case of pod termination.</p> <p>To be able to use your database with <code>replicas</code> &gt; <code>1</code> you will need to have <em><a href="https://www.ibm.com/support/knowledgecenter/SSXW43_8.9.13/com.ibm.i2.ibase.admin.doc/what_is_database_replication.html" rel="nofollow noreferrer">replication</a></em> implemented within your database environment.</p> <p>There are guides/resources/solutions on running databases within Kubernetes environment. Please choose the solution most appropriate to your use case. 
Some of them are:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">Kubernetes.io: Run replicated stateful application</a></em></li> <li><em><a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">Github.com: Zalando: Postgres operator</a></em></li> <li><em><a href="https://github.com/oracle/mysql-operator" rel="nofollow noreferrer">Github.com: Oracle: Mysql operator</a></em></li> </ul> <p>After deploying your database you will be able to run below command to extract the name of the <code>StatefulSet</code>:</p> <ul> <li><code>$ kubectl get sts</code></li> </ul> <p>You can then apply the name of the <code>StatefulSet</code> to the <code>VPA</code> like below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: DB-VPA spec: targetRef: apiVersion: &quot;apps/v1&quot; kind: StatefulSet name: &lt;INSERT_DB_STS_HERE&gt; updatePolicy: updateMode: &quot;Auto&quot; </code></pre> <p>I encourage you also to read this article:</p> <ul> <li><em><a href="https://cloud.google.com/blog/products/databases/to-run-or-not-to-run-a-database-on-kubernetes-what-to-consider" rel="nofollow noreferrer">Cloud.google.com: Blog: To run or not to run a database on Kubernetes, what to consider</a></em></li> </ul>
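<p>To check what the autoscaler actually computes for the database once the object above is applied (a quick verification step; the resource name matches the manifest above):</p> <pre><code>kubectl get vpa
kubectl describe vpa DB-VPA
</code></pre> <p>The <code>Recommendation</code> section in the describe output shows the target requests. If you only want recommendations without any evictions, <code>updateMode: &quot;Off&quot;</code> can be used instead of <code>&quot;Auto&quot;</code>, which is useful for databases where evictions are disruptive.</p>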
Dawid Kruk
<p>I need to understand how we can delete a Docker image using Kubernetes. I'm using Jenkins for pipeline automation, and Jenkins only has access to the master node, not the slaves. When I create the deployment everything works fine: the deployment makes the slave nodes pull the image from the repository and everything runs.</p> <p>But if Jenkins deletes the deployment and tries to remove the image, the image is only deleted on the master node and not on the other slaves, and I don't want to delete the image manually on each node.</p> <p>Is there a way to delete images on slave nodes from the master node?</p>
Bedjase
<p>Kubernetes itself is responsible for deleting images. It is the kubelet that performs garbage collection on each node, including image deletion, and this behaviour is customizable. Deleting images by external methods is not recommended, as such tools can potentially break the kubelet's behaviour by removing containers that are expected to exist.</p> <p>The kubelet checks whether the storage used for images is more than 85% full, and in that case it deletes some images to make room. The min and max thresholds can be customized in the file /var/lib/kubelet/config.yaml.</p> <p><code>imageGCHighThresholdPercent</code> is the percent of disk usage after which image garbage collection is always run.</p> <p><code>imageGCLowThresholdPercent</code> is the percent of disk usage before which image garbage collection is never run, i.e. the lowest disk usage to garbage collect down to.</p> <p>The default values are:</p> <p><code>imageGCHighThresholdPercent</code>: 85</p> <p><code>imageGCLowThresholdPercent</code>: 80</p>
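<p>A small sketch of how those two fields look in the kubelet configuration file mentioned above (the values here are just an example of lowering the thresholds; the kubelet on each node has to be restarted after changing them):</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 75
imageGCLowThresholdPercent: 70
</code></pre> <p>So rather than having Jenkins remove images on every node, you can rely on this per-node garbage collection to reclaim space once the unused images push disk usage past the threshold.</p>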
Enzo
<p>I'm trying to mount existing google cloud Persistent Disk(balanced) to Jenkins in Kubernetes. In the root of the disk located fully configured Jenkins. I want to bring up Jenkins in k8s with already prepared configuration on google Persistent Disk.</p> <p>I'm using latest chart from the <a href="https://charts.jenkins.io" rel="nofollow noreferrer">https://charts.jenkins.io</a> repo</p> <p>Before run <code>helm install</code> I'm applying pv and pvc.</p> <p><strong>PV</strong> for existent disk:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-persistent-volume spec: storageClassName: standard capacity: storage: 50Gi accessModes: - ReadWriteOnce csi: driver: pd.csi.storage.gke.io volumeHandle: projects/Project/zones/us-central1-a/disks/jenkins-pv fsType: ext4 </code></pre> <p><strong>PVC</strong></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: jenkins-pvc namespace: jenkins spec: volumeName: jenkins-persistent-volume accessModes: - &quot;ReadWriteOnce&quot; resources: requests: storage: &quot;50Gi&quot; </code></pre> <p><a href="https://i.stack.imgur.com/W1k4j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W1k4j.png" alt="pv" /></a> <a href="https://i.stack.imgur.com/XQaPK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XQaPK.png" alt="pvc" /></a></p> <p>Files in Persistent Google Disk are 100% <strong>1000:1000</strong> permissions (uid, gid)</p> <p>I made only one change in official helm chart, it was in values file</p> <pre><code> existingClaim: &quot;jenkins-pvc&quot; </code></pre> <p>After running <code>helm install jenkins-master . -n jenkins</code> I'm getting next: <a href="https://i.stack.imgur.com/wzbxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzbxM.png" alt="failed pod" /></a></p> <p>Just for ensure that problem not from GCP side. I mount pvc to busybox and it works perfect.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: containers: - name: busybox image: busybox:1.32.0 command: - &quot;/bin/sh&quot; args: - &quot;-c&quot; - &quot;while true; do echo $(date) &gt;&gt; /app/buffer; cat /app/buffer; sleep 5; done;&quot; volumeMounts: - name: my-volume mountPath: /app volumes: - name: my-volume persistentVolumeClaim: claimName: jenkins-pvc </code></pre> <p><a href="https://i.stack.imgur.com/3PBIb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3PBIb.png" alt="busybox" /></a></p> <p>I tried to change a lot of values in values.yaml also tried use old charts, or even <strong>bitnami charts</strong> with deployment instead of stateful set, but always error is the same. Could somebody shows my the right way please.</p> <p><strong>Storage classes</strong> <a href="https://i.stack.imgur.com/wgGbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgGbL.png" alt="storage classes" /></a></p>
Артем Черемісін
<p>Change the storageClass in the PV</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-persistent-volume spec: storageClassName: standard </code></pre> <p>to <code>default</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-persistent-volume spec: storageClassName: default </code></pre> <p>Also check your existing storageClasses with <code>kubectl get sc</code>.</p> <p>Alternatively, set storageClass to &quot;&quot; in the chart values:</p> <pre class="lang-yaml prettyprint-override"><code>global: imageRegistry: &quot;&quot; ## E.g. ## imagePullSecrets: ## - myRegistryKeySecretName ## imagePullSecrets: [] storageClass: &quot;&quot; # &lt;- </code></pre> <p><em>I would have posted this as a comment, but I don't have enough reputation to comment.</em></p>
Tiriyon
<p>I am trying to start the kubernetes dashboard in Docker for Desktop and it's working fine, but every time I need to start <code>kubectl proxy</code>, and if I close that PowerShell window the dashboard stops working.</p> <p>Is there any way to start the dashboard without the proxy, or to keep the proxy running all the time? How can I access this dashboard from the network?</p>
Vinit Patel
<p>In order to persistently expose the dashboard you have to add a service to your cluster.</p> <p>Create a yaml file with the following content (Let's call it <strong>dash-serv.yaml</strong>):</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-nodeport namespace: kube-system spec: ports: - port: 80 protocol: TCP targetPort: 9090 nodePort: 32123 selector: k8s-app: kubernetes-dashboard sessionAffinity: None type: NodePort </code></pre> <p>then run <code>kubectl apply -f dash-serv.yaml</code> and test your dashboard access on <a href="http://localhost:32123" rel="nofollow noreferrer">http://localhost:32123</a>.</p>
Will R.O.F.
<p>I'm trying to use HPA with external metrics to scale down a deployment to 0. I'm using GKE with version 1.16.9-gke.2.</p> <p>According to <a href="https://github.com/kubernetes/kubernetes/issues/69687" rel="nofollow noreferrer">this</a> I thought it would be working but it's not. I'm still facing : <code>The HorizontalPodAutoscaler "classifier" is invalid: spec.minReplicas: Invalid value: 0: must be greater than or equal to 1</code></p> <p>Below is my HPA definition : </p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: classifier spec: minReplicas: 0 maxReplicas: 15 metrics: - external: metricName: loadbalancing.googleapis.com|https|request_count targetAverageValue: "1" type: External scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: classifier </code></pre> <p>Thanks a lot for your help !</p>
Thomas G
<blockquote> <p>According to <a href="https://github.com/kubernetes/kubernetes/issues/69687" rel="noreferrer">this</a> I thought it would be working but it's not.</p> </blockquote> <p><strong>The fact that some features are working in the Kubernetes does not mean that they are enabled in managed solutions like <code>GKE</code>.</strong></p> <p>This feature is enabled by a <strong>feature gate</strong> called <strong><code>HPAScaleToZero</code></strong>. It is in <strong><code>Alpha</code></strong> state since Kubernetes version 1.16. It is disabled by default according to below link. Please take a look on official documentation regarding feature gates here: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="noreferrer">Kubernetes.io: Docs: Feature Gates</a></p> <p>Going further:</p> <blockquote> <p>New features in Kubernetes are listed as Alpha, Beta, or Stable, depending upon their status in development. In most cases, Kubernetes features that are listed as Beta or Stable are included with GKE</p> <p> <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview#kubernetes_versions_and_features" rel="noreferrer">Cloud.google.com: Kubernetes Engine: Kubernetes versions and features</a></em> </p> </blockquote> <p>As you can see by:</p> <blockquote> <p><code>The HorizontalPodAutoscaler &quot;classifier&quot; is invalid: spec.minReplicas: Invalid value: 0: must be greater than or equal to 1</code></p> </blockquote> <p><strong>This feature is disabled in &quot;standard&quot; <code>GKE</code> clusters.</strong></p> <hr /> <p>There is an option to have <strong><code>HPAScaleToZero</code></strong> enabled. This entails running an <strong>alpha</strong> cluster:</p> <blockquote> <p>The term alpha cluster means that alpha APIs are enabled, both for Kubernetes and GKE, regardless of the version of Kubernetes the cluster runs. Periodically, Google offers customers the ability to test GKE versions that are not generally available, for testing and validation.</p> <p> <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters" rel="noreferrer">Cloud.google.com: Kubernetes Engine: Alpha clusters</a></em> </p> </blockquote> <p>Please have in mind that running <strong>alpha</strong> cluster have some drawbacks:</p> <blockquote> <h2>Limitations</h2> <p>Alpha clusters have the following limitations:</p> <ul> <li><strong>Not covered by the GKE SLA</strong></li> <li>Cannot be upgraded</li> <li>Node auto-upgrade and auto-repair are disabled on alpha clusters</li> <li><strong>Automatically deleted after 30 days</strong></li> <li>Do not receive security updates</li> </ul> </blockquote>
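<p>For completeness, a minimal sketch of creating such an alpha cluster with <code>gcloud</code> (the cluster name, zone and node count below are placeholders; alpha clusters are intended for testing only, given the limitations listed above):</p> <pre><code># Creates a GKE alpha cluster with alpha Kubernetes APIs/feature gates enabled
gcloud container clusters create alpha-cluster \
    --enable-kubernetes-alpha \
    --zone=us-central1-a \
    --num-nodes=3
</code></pre> <p>On a standard (non-alpha) cluster, the practical workaround is to keep <code>minReplicas: 1</code> in the HPA definition.</p>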
Dawid Kruk
<p>Below is an example of adding a VirtualServer and VirtualServerRoute in kubernetes-ingress.</p> <pre><code>apiVersion: k8s.nginx.org/v1 kind: VirtualServer metadata: name: virtualserver spec: host: localhost routes: - path: / route: virtualserverroute --- apiVersion: k8s.nginx.org/v1 kind: VirtualServerRoute metadata: name: virtualserverroute spec: host: localhost upstreams: - name: proxy service: proxy port: 80 - name: webserverv1 service: webserverv1 port: 80 - name: webserverv2 service: webserverv2 port: 80 subroutes: - path: /webserverv1 action: pass: webserverv1 - path: /webserverv2 action: pass: webserverv2 - path: / action: pass: proxy </code></pre> <p>Does anyone know how to get the list of NGINX VirtualServer and VirtualServerRoute resources for that ingress in K8S?</p>
Dhaval Thakkar
<blockquote> <p>Anyone knows How to get list of NGINX VirtualServer and VirtualServerRoute for that ingress in K8S?</p> </blockquote> <p>You can list resources like <code>VirtualServer</code> and <code>VirtualServerRoute</code> by invoking below command: </p> <ul> <li><code>$ kubectl get VirtualServer</code> or <code>kubectl get vs</code></li> <li><code>$ kubectl get VirtualServerRoute</code> or <code>kubectl get vsr</code></li> </ul> <p><strong>Please have in mind that above resources are <code>Custom Resources</code> and they should be added to Kubernetes.</strong> </p> <hr> <p><code>VirtualServer</code> as well as <code>VirtualServerRoute</code> are connected <strong>specifically</strong> to the Nginx Ingress Controller created by NginxInc. </p> <p>Github link: <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer"> Nginxinc: Kubernetes Ingress</a></p> <p>As said on the Github site: </p> <blockquote> <p>Note: this project is <strong>different</strong> from the NGINX Ingress controller in <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">kubernetes/ingress-nginx</a> repo. See <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">this doc</a> to find out about the key differences.</p> <p> <em><a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Github.com: Nginxinc: Kubernetes Ingress</a></em> </p> </blockquote> <p>To be able to create: </p> <ul> <li><code>VirtualServer</code></li> <li><code>VirtualServerRoute</code> </li> </ul> <p>resources you will need to follow <a href="https://github.com/nginxinc/kubernetes-ingress#getting-started" rel="nofollow noreferrer">this documentation</a>.</p> <p>If you are using the manifests with <code>git</code> please make sure that you apply following manifests: </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f common/vs-definition.yaml $ kubectl apply -f common/vsr-definition.yaml $ kubectl apply -f common/ts-definition.yaml </code></pre> <p>As they are <code>CRD's</code> for above resources. </p> <p>After successful provisioning of <code>nginx-ingress</code> you should be able to create <code>VirtualServer</code> and <code>VirtualServerRoute</code> and get more information about them with: </p> <ul> <li><code>kubectl describe vs</code></li> <li><code>kubectl describe vsr</code> </li> </ul>
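<p>As a quick sanity check that the custom resource definitions are actually registered in the cluster (the names below are the ones shipped with the NginxInc controller), you can run something like:</p> <pre><code># Verify the CRDs from the NginxInc controller are installed
kubectl get crd | grep k8s.nginx.org

# List VirtualServers and VirtualServerRoutes across all namespaces
kubectl get vs,vsr --all-namespaces
</code></pre>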
Dawid Kruk
<p>I have run into a bit of trouble with what seems to be an easy question. </p> <p>My scenario: I have a k8s job which can be run at any time (not a cronJob) which in turn creates a pod to perform some tasks. Once the pod performs its task it completes, thus completing the job that spawned it.</p> <p>What I want: I want to alert via Prometheus if the pod is in a running state for more than 1h, signalling that the task is taking too much time. I'm interested in alerting ONLY when the duration symbolised by the arrow in the attached image exceeds 1h, and in having no alerts triggered when the pod is no longer running.<a href="https://i.stack.imgur.com/AUC1S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AUC1S.png" alt="enter image description here"></a></p> <p>What I tried: the following Prometheus metric, which is an instant vector that can be either 0 (pod not running) or 1 (pod is running):</p> <pre><code>kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"} </code></pre> <p>I tried to use this metric with the following formula to compute the duration for which the metric was 1 during a day:</p> <pre><code>(1 - avg_over_time(kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"}[1d])) * 86400 &gt; 3600 </code></pre> <p>Because these pods come and go and are not always present, I'm encountering the following problems:</p> <ul> <li>The expression above starts at 86400 and only drops once the container is running, which would trigger an alert.</li> <li>The pod eventually goes away, and I would not like to send out fake alerts for pods which are no longer running (although they took over 1h to run).</li> </ul>
Paul Chibulcuteanu
<p>Thanks to the suggestion of @HelloWorld, I think this would be the best solution to achieve what I wanted:</p> <pre><code>(sum_over_time(kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"}[1d:1s]) &gt; 3600) and (kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"}==1) </code></pre> <ul> <li>Count the number of seconds the pod has been running over the past day/6h/3h and verify whether that exceeds 1h (3600s), AND</li> <li>Check that the pod is still running - so that old pods, or pods that have already terminated, are not taken into consideration.</li> </ul>
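<p>For reference, a minimal sketch of how this expression could be turned into a Prometheus alerting rule (the group name, alert name, labels and <code>for</code> duration are hypothetical and should be adjusted; the 1s subquery resolution over a full day can be expensive, so a coarser step may be preferable):</p> <pre><code># hypothetical rules file loaded via rule_files in prometheus.yml
groups:
- name: long-running-pods
  rules:
  - alert: PodRunningTooLong
    expr: |
      (sum_over_time(kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"}[1d:1s]) &gt; 3600)
      and
      (kube_pod_status_ready{condition="true",pod_name=~".+POD-A.+"} == 1)
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod {{ $labels.pod_name }} has been running for more than 1 hour"
</code></pre>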
Paul Chibulcuteanu
<p>I've read that by default, clusters on GKE write all logs to stackdriver. On the GKE UI, I can see the container logs of all my deployments, but am looking to get these logs from an API programmatically.</p> <p>I've tried to do something like <code>gcloud logging read "resource.labels.pod_id="&lt;pod-id&gt;""</code> but nothing is returned.</p> <p>Do I need to do something special in order to enable logging on the cluster? And if not, how do I access logs for a specific deployment/pod/container? </p> <p>Further, are these logs persistent? As in, when the deployment dies, can I still access these logs?</p>
Jay K.
<p>You have several options to read the <code>GKE</code> pods logs from Stackdriver. Some of them are: </p> <blockquote> <h2>Reading logs</h2> <p>To read log entries in Logging, you can do any of the following:</p> <ul> <li>Use the Logs Viewer in the Google Cloud Console.</li> <li>Call the Logging API through the Client Libraries for your programming language.</li> <li>Call the Logging API REST endpoints directly. See the Logging API reference documentation.</li> <li><p>Use the Cloud SDK. For more information, see the gcloud logging command-line interface.</p> <p> <em><a href="https://cloud.google.com/logging/docs/setup" rel="nofollow noreferrer">Cloud.google.com: Logging: Setup </a></em> </p></li> </ul> </blockquote> <p>As for:</p> <blockquote> <p>Do I need to do something special in order to enable logging on the cluster? And if not, how do I access logs for a specific deployment/pod/container?</p> </blockquote> <p>Please refer to: <a href="https://cloud.google.com/logging/docs/access-control" rel="nofollow noreferrer">Cloud.google.com: Logging: Access control</a></p> <p>Answering below: </p> <blockquote> <p>Further, are these logs persistent? As in, when the deployment dies, can I still access these logs?</p> </blockquote> <p>Yes you can still access this logs, even if the deployment was deleted. You can still access the logs even if you delete your cluster. Logs stored in Stackdriver have <strong>retention</strong> policies which will store the logs for set amount of time. Please refer to: </p> <ul> <li><a href="https://cloud.google.com/logging/quotas" rel="nofollow noreferrer">Cloud.google.com: Logging: Quotas</a></li> <li><a href="https://cloud.google.com/logging/docs/storage" rel="nofollow noreferrer">Cloud.google.com: Logging: Storage</a></li> </ul> <hr> <p>Please take a look on the example below which shows how to access logs with <code>gcloud</code> command:</p> <p><strong>Steps:</strong> </p> <ul> <li>Create a <code>Deployment</code> which will send logs to Stackdriver</li> <li>Check if the logs are stored in Stackdriver</li> <li>Get the logs from Stackdriver with <code>gcloud</code></li> </ul> <h2>Create a Deployment</h2> <p>Please follow a Google Cloud Platform guide to spawn a <code>Deployment</code> which will send data to Stackdriver: </p> <ul> <li><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling#before-you-begin" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Custom Metrics autoscaling: Before you begin</a> - start </li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling#exporting_metrics_from_the_application" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Custom Metrics autoscaling: Exporting</a> - end </li> </ul> <h2>Check if the logs are in Stackdriver.</h2> <p>Logs exported by above deployment will be stored in Stackdriver. 
An example entry should look like this: </p> <pre><code>{ insertId: "REDACTED" labels: { k8s-pod/pod-template-hash: "545464fb5" k8s-pod/run: "custom-metric-sd" } logName: "projects/REDACTED/logs/stderr" receiveTimestamp: "2020-05-26T10:17:16.161949129Z" resource: { labels: { cluster_name: "gke-logs" container_name: "sd-dummy-exporter" location: "ZONE" namespace_name: "default" pod_name: "custom-metric-sd-545464fb5-2rdvx" project_id: "REDACTED" } type: "k8s_container" } severity: "ERROR" textPayload: "2020/05/26 10:17:10 Finished writing time series with value: 0xc420015290 " timestamp: "2020-05-26T10:17:10.356684667Z" } </code></pre> <p>The log entry above will help with creating a <code>gcloud</code> command that gets only the specified logs from Stackdriver. </p> <h2>Get the logs from Stackdriver with <code>gcloud</code></h2> <p>As you pointed out: </p> <blockquote> <p>I've tried to do something like gcloud logging read "resource.labels.pod_id=""" but nothing is returned.</p> </blockquote> <p>The fact that nothing is returned is most probably because no resource matched the filter in the command you used. </p> <p>To get these logs from Stackdriver, invoke the command below: </p> <pre class="lang-sh prettyprint-override"><code>$ gcloud logging read "resource.type=k8s_container AND resource.labels.container_name=sd-dummy-exporter" </code></pre> <p>Breaking the above command into smaller pieces: </p> <ul> <li><code>resource.type=k8s_container</code> - it will get the logs with a type of <code>k8s_container</code> </li> <li><code>resource.labels.container_name=XYZ</code> - it will get the logs with the specified <code>container_name</code>. </li> </ul> <p>These pieces map directly to the example log entry shown earlier. </p> <p>A tip: </p> <blockquote> <p><code>resource.labels.container_name</code> can be used to collect logs from multiple pods, while a specific pod can be referenced with <code>pod_name</code>. </p> </blockquote>
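<p>Building on the same log entry, a hedged example of narrowing the query down to a single pod and a recent time window (the pod name below is the one from the sample entry; adjust the limit, freshness and output format to taste):</p> <pre><code># Last 20 entries from one pod, newest first, emitted as JSON
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.pod_name="custom-metric-sd-545464fb5-2rdvx"' \
  --limit=20 \
  --freshness=1d \
  --format=json
</code></pre>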
Dawid Kruk
<p>Is it somehow possible to expose a kubernetes service to the outside world? I am currently developing an application which needs to communicate with a service, and to do so I need to know the pod IP and port, which within the kubernetes cluster I can get from the kubernetes service linked to it, but from outside the cluster I seem to be unable to find it or expose it.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-broker spec: ports: - name: broker port: 9092 protocol: TCP targetPort: kafka selector: app: kafka sessionAffinity: None type: ClusterIP </code></pre> <p>I could containerize the application, put it in a pod, and run it within kubernetes, but for fast development it seems tedious to have to go through this just to test something as small as connectivity.</p> <p>Is there some way I can expose the service, and thereby reach the application behind its selector?</p>
some
<p>In order to expose your Kubernetes service to the internet you must change the <code>ServiceType</code>.</p> <p>Your service is using the default, which is <code>ClusterIP</code>: it exposes the Service on a cluster-internal IP, making it <strong>only reachable within the cluster</strong>.</p> <p><strong>1 - If you use a cloud provider like AWS or GCP, the best option for you is the <code>LoadBalancer</code> Service Type</strong>, which automatically exposes the service to the internet using the provider's load balancer.</p> <p>Run: <code>kubectl expose deployment deployment-name --type=LoadBalancer --name=service-name</code></p> <p><em>Where <code>deployment-name</code> must be replaced by your actual deployment name, and the same goes for the desired <code>service-name</code>.</em></p> <p>Wait a few minutes and the <code>kubectl get svc</code> command will give you the external IP and PORT:</p> <pre><code>owilliam@minikube:~$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 4d21h nginx-service-lb LoadBalancer 10.96.125.208 0.0.0.0 80:30081/TCP 36m </code></pre> <p><strong>2 - If you are running Kubernetes locally (like Minikube) the best option is the <code>NodePort</code> Service Type</strong>: it exposes the service on the cluster node (the hosting machine), which is safer for testing purposes than exposing the service to the whole internet.</p> <p>Run: <code>kubectl expose deployment deployment-name --type=NodePort --name=service-name</code></p> <p><em>Where <code>deployment-name</code> must be replaced by your actual deployment name, and the same goes for the desired <code>service-name</code>.</em></p> <p>Below are my outputs after exposing an Nginx webserver on a <code>NodePort</code>, for your reference:</p> <pre><code>user@minikube:~$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 4d21h service-name NodePort 10.96.33.84 &lt;none&gt; 80:31198/TCP 4s user@minikube:~$ minikube service list |----------------------|---------------------------|-----------------------------|-----| | NAMESPACE | NAME | TARGET PORT | URL | |----------------------|---------------------------|-----------------------------|-----| | default | kubernetes | No node port | | default | service-name | http://192.168.39.181:31198 | | kube-system | kube-dns | No node port | | kubernetes-dashboard | dashboard-metrics-scraper | No node port | | kubernetes-dashboard | kubernetes-dashboard | No node port | |----------------------|---------------------------|-----------------------------|-----| user@minikube:~$ curl http://192.168.39.181:31198 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; ...//// suppressed output &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; user@minikube:~$ </code></pre>
Will R.O.F.
<p>I have a hosted VPS and I would like to use this machine as a single node Kubernetes test environment. Is it possible to create a single-node Kubernetes cluster on the VPS, deploy pods to it using, for example, GitLab and test the application from outside the machine? I would like to locally develop, push to git and then deploy on this testing/staging environment.</p> <p>Thank you</p>
Green
<p>Answering the part of whole question:</p> <blockquote> <p>I have a hosted VPS and I would like to use this machine as a single node kubernetes test environment.</p> </blockquote> <p>A good starting point could be to cite the parts of my own answer from <a href="https://serverfault.com/questions/1046581/is-it-possible-to-install-kubernetes-manually-in-my-existing-gcp-vm-instance/1047006#1047006">Serverfault</a>:</p> <blockquote> <p>There are a lot of options to choose from. Each solution will have it's advantages and disadvantages. It will also depend on the operating system your VM is deployed with.</p> <p><strong>Some</strong> of the options are the following:</p> <ul> <li><a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8S</a> - as pointed by user @Sekru</li> <li><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">Kind</a></li> <li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">Kubeadm</a></li> <li><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a></li> <li><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kelsey Hightower: Kubernetes the hard way</a></li> </ul> <p>Each of the solutions linked above have a link to it's respective homepage. You can find there installation steps/tips. Each solution is different and I encourage you to check if selected option suits your needs.</p> </blockquote> <p>You'll need to review the networking part of each of above solutions as some of them will have easier/more difficult process to expose your workload outside of the environment (make it accessible from the Internet).</p> <p>It all boils down to what are your requirements/expectations and what are the requirements for each of the solutions.</p> <hr /> <h3>MicroK8S setup:</h3> <p>I do agree with an answer provided by community member @Sekru but I also think it could be beneficiary to add an example for such setup. 
Assuming that you have a <code>microk8s</code> compatible OS:</p> <ul> <li><code>sudo snap install microk8s --classic</code></li> <li><code>sudo microk8s enable ingress</code></li> <li><code>sudo microk8s kubectl create deployment nginx --image=nginx</code></li> <li><code>sudo microk8s kubectl expose deployment nginx --port=80 --type=NodePort</code></li> <li><code>sudo microk8s kubectl apply -f ingress.yaml</code> where <code>ingress.yaml</code> is a file with following content:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress spec: ingressClassName: public # &lt;-- IMPORTANT rules: - http: paths: - path: / pathType: Prefix backend: service: name: nginx port: number: 80 </code></pre> <p>After above steps you should be able to contact your <code>Deployment</code> from the place outside of your host by:</p> <ul> <li><code>curl http://IP-ADDRESS</code></li> </ul> <blockquote> <p>Side notes!</p> <ul> <li>A setup like that will allow expose your workload on a <code>NodePort</code> (allocated port on each node from <code>30000</code> to <code>32767</code>).</li> <li>From the security perspective I would consider using your <code>VPS</code> provider firewalls to limit the traffic coming to your instance to allow only subnets that you are connecting from.</li> </ul> </blockquote> <hr /> <p>From the perspective of Gitlab integration with Kubernetes, I'd reckon you could find useful information by following it's page:</p> <ul> <li><em><a href="https://about.gitlab.com/" rel="nofollow noreferrer">About.gitlab.com</a></em></li> </ul> <p>Additional resources about Kubernetes:</p> <ul> <li><em><a href="https://kubernetes.io/docs/home/" rel="nofollow noreferrer">Kubernetes.io: Docs: Home</a></em></li> </ul>
Dawid Kruk
<p>I setup my ASP.NET Core project to use gRPC. I have a server and a client.</p> <p>Locally, it is working fine. But when I deploy to Docker, the client cannot call it anymore.</p> <p>Is there anything wrong with my settings below?</p> <p><strong>Server</strong></p> <p><em>Startup.cs</em></p> <pre><code>//Configure Services services.AddGrpc(); ... //Configure app.UseEndpoints(endpoints =&gt; { endpoints.MapGrpcService&lt;ProgramService&gt;(); endpoints.MapControllers(); }); </code></pre> <p><em>appsettings.Dev.json</em></p> <pre><code>&quot;Kestrel&quot;: { &quot;Endpoints&quot;: { &quot;gRPC&quot;: { &quot;Url&quot;: &quot;http://localhost:8000&quot;, &quot;Protocols&quot;: &quot;Http2&quot; } } } </code></pre> <p><em>Dockerfile</em></p> <pre><code>WORKDIR /app EXPOSE 80 8000 ENV ASPNETCORE_URLS=http://+:8000 </code></pre> <p><em>kubernetes deployment.yml</em></p> <pre><code>containers: - name: tenant-api-grpc ports: - containerPort: 8000 </code></pre> <p><em>kubernetes service.yml</em></p> <pre><code>spec: type: ClusterIP ports: - name: &quot;8000&quot; port: 8000 targetPort: 8000 </code></pre> <p><strong>Client</strong></p> <p><em>Startup.cs</em></p> <pre><code>services.AddGrpcClient&lt;Tenant.Proto.Program.ProgramClient&gt;(o =&gt; { o.Address = new Uri(Configuration[&quot;ServiceUrlConfiguration:ServiceTenantUrl&quot;]); }); </code></pre> <p><em>appsettings.Dev.json</em></p> <pre><code>&quot;ServiceUrlConfiguration&quot;: { &quot;ServiceTenantUrl&quot;: &quot;http://tenant-api-grpc-svc.dev.svc.cluster.local:8000/&quot; } </code></pre>
Water
<p>It is hard to analyze your problem with the given information.</p> <p>I would suggest checking the base URL. When you start the server and client in Kestrel, you find the services under localhost. If you deploy the services using docker-compose, the services find each other under the name of the respective service. If you deploy the containers independently, you can reach the host's localhost from within a container using host.docker.internal.</p>
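<p>To illustrate the docker-compose case, here is a hypothetical sketch (image names, ports and the environment key are placeholders; ASP.NET Core reads <code>__</code> in environment variable names as the <code>:</code> configuration separator):</p> <pre><code># docker-compose.yml (sketch)
version: "3.8"
services:
  tenant-api-grpc:
    image: myregistry/tenant-api-grpc:latest
    environment:
      - ASPNETCORE_URLS=http://+:8000
  client:
    image: myregistry/client:latest
    environment:
      # the client reaches the gRPC server via the compose service name, not localhost
      - ServiceUrlConfiguration__ServiceTenantUrl=http://tenant-api-grpc:8000
    depends_on:
      - tenant-api-grpc
</code></pre> <p>In Kubernetes the same idea applies: the client should address the server through the service DNS name (as in the <code>tenant-api-grpc-svc.dev.svc.cluster.local:8000</code> setting from the question), not localhost.</p>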
Clemens
<p>Can please somebody help me? This is my first post here, and I am really exited to start posting here and helping people but I need help first.</p> <p>I am deploying my own Postgres database on Minikube. For db, password and username I am using secrets.</p> <p>Data is encoded with base64</p> <ol> <li>POSTGRES_USER = website_user</li> <li>POSTGRES_DB = website</li> <li>POSTGRES_PASSWORD = pass</li> </ol> <p>I also exec into container to see if I could see these envs and they were there.</p> <p>The problem is when I try to enter into postgres with psql. I checked minikube ip and typed correct password(pass) after this command:</p> <pre><code>pqsl -h 192.168.99.100 -U website_user -p 31315 website </code></pre> <p>Error</p> <blockquote> <p>Password for user website_user:<br> psql: FATAL: password authentication failed for user "website_user"</p> </blockquote> <p>Also if I exec into my pod: </p> <pre><code>kubectl exec -it postgres-deployment-744fcdd5f5-7f7vx bash </code></pre> <p>And try to enter into postgres I get:</p> <pre><code>psql -h $(hostname -i) -U website_user -p 5432 website </code></pre> <p>Error:</p> <blockquote> <p>Password for user website_user:<br> psql: FATAL: password authentication failed for user "website_user"</p> </blockquote> <p>I am lacking something here.I tried also <code>ps aux</code> in container, and everything seems to be find postgres processes are running</p> <pre><code>kubectl get all </code></pre> <p>Output:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/postgres-deployment-744fcdd5f5-7f7vx 1/1 Running 0 18m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 19m service/postgres-service NodePort 10.109.235.114 &lt;none&gt; 5432:31315/TCP 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/postgres-deployment 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/postgres-deployment-744fcdd5f5 1 1 1 18m # Secret store apiVersion: v1 kind: Secret metadata: name: postgres-credentials type: Opaque data: POSTGRES_USER: d2Vic2l0ZV91c2VyCg== POSTGRES_PASSWORD: cGFzcwo= POSTGRES_DB: d2Vic2l0ZQo= --- # Persistent Volume apiVersion: v1 kind: PersistentVolume metadata: name: postgres-pv labels: type: local spec: storageClassName: manual capacity: storage: 2Gi accessModes: - ReadWriteOnce hostPath: path: /data/postgres-pv --- # Persistent Volume Claim apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pvc labels: type: local spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 2Gi volumeName: postgres-pv --- # Deployment apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment spec: selector: matchLabels: app: postgres-container template: metadata: labels: app: postgres-container spec: containers: - name: postgres-container image: postgres:9.6.6 env: - name: POSTGRES_USER valueFrom: secretKeyRef: name: postgres-credentials key: POSTGRES_USER - name: POSTGRES_DB valueFrom: secretKeyRef: name: postgres-credentials key: POSTGRES_DB - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-credentials key: POSTGRES_PASSWORD ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-volume-mount volumes: - name: postgres-volume-mount persistentVolumeClaim: claimName: postgres-pvc --- apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app: postgres-container ports: - port: 5432 protocol: TCP targetPort: 5432 type: NodePort </code></pre>
filipmacek
<p><strong>You created all your values with:</strong></p> <ul> <li><code>$ echo "value" | base64</code></li> <li><strong>which instead you should use: <code>$ echo -n "value" | base64</code></strong></li> </ul> <p>Following official man page of <code>echo</code>: </p> <blockquote> <h2>Description</h2> <p>Echo the <strong>STRING</strong>(s) to standard output.</p> <p><strong>-n</strong> = do not output the trailing newline</p> </blockquote> <p><strong>TL;DR</strong>: You need to edit your <code>Secret</code> definition with new values:</p> <ul> <li><code>$ echo -n "website_user" | base64</code></li> <li><code>$ echo -n "website" | base64</code></li> <li><code>$ echo -n "pass" | base64</code></li> </ul> <hr> <p>You created your <code>Secret</code> with a trailing newline. Please take a look at below example:</p> <ul> <li><code>POSTGRES_USER</code>: <ul> <li><code>$ echo "website_user" | base64</code> <ul> <li>output: <code>d2Vic2l0ZV91c2VyCg==</code> which is the same as yours </li> </ul></li> <li><code>$ echo -n "website_user" | base64</code> <ul> <li>output: <code>d2Vic2l0ZV91c2Vy</code> which is the <strong>correct</strong> value </li> </ul></li> </ul></li> <li><code>POSTGRES_PASSWORD</code>: <ul> <li><code>$ echo "pass" | base64</code> <ul> <li>output: <code>cGFzcwo=</code> which is the same as yours </li> </ul></li> <li><code>$ echo -n "pass" | base64</code> <ul> <li>output: <code>cGFzcw==</code> which is the <strong>correct</strong> value </li> </ul></li> </ul></li> <li><code>POSTGRES_DB</code>: <ul> <li><code>$ echo "website" | base64</code> <ul> <li>output: <code>d2Vic2l0ZQo=</code> which is the same as yours </li> </ul></li> <li><code>$ echo -n "website" | base64</code> <ul> <li>output: <code>d2Vic2l0ZQ==</code> which is the <strong>correct</strong> value </li> </ul></li> </ul></li> </ul> <p>Your <code>Secret</code> should look like that: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: postgres-credentials type: Opaque data: POSTGRES_USER: d2Vic2l0ZV91c2Vy POSTGRES_PASSWORD: cGFzcw== POSTGRES_DB: d2Vic2l0ZQ== </code></pre> <p>If you create it with a new <code>Secret</code> you should be able to connect to the database: </p> <pre class="lang-sh prettyprint-override"><code>root@postgres-deployment-64d697868c-njl7q:/# psql -h $(hostname -i) -U website_user -p 5432 website Password for user website_user: psql (9.6.6) Type "help" for help. website=# </code></pre> <p>Please take a look on additional links:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/issues/70241#issuecomment-434242145" rel="noreferrer">Github.com: Kubernetes: issues: Config map vs secret to store credentials for Postgres deployment</a></li> <li><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Kubernetes.io: Secrets</a></li> </ul>
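<p>A quick way to verify that the stored values no longer contain a trailing newline (the secret and key names below are the ones from the question) is to decode them directly from the cluster:</p> <pre><code># Decoded output should end right after the value, with no extra blank line
kubectl get secret postgres-credentials -o jsonpath='{.data.POSTGRES_USER}' | base64 --decode
kubectl get secret postgres-credentials -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode
</code></pre>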
Dawid Kruk
<p>How to change the existing GKE cluster to GKE private cluster? Will I be able to connect to the Kubectl API from internet based on firewall rules or should I have a bastion host? I don't want to implement <code>Cloud Nat</code> or <code>nat gateway</code>. I have a squid proxy VM that can handle internet access for pods. I just need to be able to connect to Kubectl to apply or modify anything.</p> <p>I'm unsure how to modify the existing module I wrote to make the nodes private and I'm not sure if the cluster will get deleted if I try and apply the new changes related to private gke cluster.</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;primary&quot; { name = &quot;prod&quot; network = &quot;prod&quot; subnetwork = &quot;private-subnet-a&quot; location = &quot;us-west1-a&quot; remove_default_node_pool = true initial_node_count = 1 depends_on = [var.depends_on_vpc] } resource &quot;google_container_node_pool&quot; &quot;primary_nodes&quot; { depends_on = [var.depends_on_vpc] name = &quot;prod-node-pool&quot; location = &quot;us-west1-a&quot; cluster = google_container_cluster.primary.name node_count = 2 node_config { preemptible = false machine_type = &quot;n1-standard-2&quot; metadata = { disable-legacy-endpoints = &quot;true&quot; } oauth_scopes = [ &quot;https://www.googleapis.com/auth/logging.write&quot;, &quot;https://www.googleapis.com/auth/monitoring&quot;, &quot;https://www.googleapis.com/auth/devstorage.read_only&quot;, &quot;https://www.googleapis.com/auth/compute&quot;, ] } } </code></pre>
user630702
<p>Answering the part of the question:</p> <blockquote> <p>How to change the existing GKE cluster to GKE private cluster?</p> </blockquote> <p><a href="https://i.stack.imgur.com/AfzeE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AfzeE.png" alt="GKE Private cluster network settings" /></a></p> <p><strong><code>GKE</code> setting: <code>Private cluster</code> is immutable.</strong> This setting can only be set during the <code>GKE</code> cluster provisioning.</p> <p>To create your cluster as a private one you can either:</p> <ul> <li>Create a new <code>GKE</code> private cluster.</li> <li>Duplicate existing cluster and set it to private: <ul> <li>This setting is available in <code>GCP Cloud Console</code> -&gt; <code>Kubernetes Engine</code> -&gt; <code>CLUSTER-NAME</code> -&gt; <code>Duplicate</code></li> <li>This setting will clone the configuration of your infrastructure of your previous cluster but not the workload (<code>Pods</code>, <code>Deployments</code>, etc.)</li> </ul> </li> </ul> <blockquote> <p>Will I be able to connect to the Kubectl API from internet based on firewall rules or should I have a bastion host?</p> </blockquote> <p>Yes, you could but it will heavily depend on the configuration that you've chosen during the <code>GKE</code> cluster creation process.</p> <p>As for ability to connect to your <code>GKE</code> private cluster, there is a dedicated documentation about it:</p> <ul> <li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: How to: Private clusters</a></em></li> </ul> <hr /> <p>As for how you can create a private cluster with Terraform, there is the dedicated site with configuration options specific to <code>GKE</code>. There are also parameters responsible for provisioning a <code>private</code> cluster:</p> <ul> <li><em><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster" rel="nofollow noreferrer">Registry.terraform.io: Providers: Hashicorp: Google: Latest: Docs: Resources: Container cluster</a></em></li> </ul> <p>As for a basic example of creating a private <code>GKE</code> cluster with Terraform:</p> <ul> <li><code>main.tf</code></li> </ul> <pre><code>provider &quot;google&quot; { project = &quot;INSERT_PROJECT_HERE&quot; region = &quot;europe-west3&quot; zone = &quot;europe-west3-c&quot; } </code></pre> <ul> <li><code>gke.tf</code></li> </ul> <pre><code>resource &quot;google_container_cluster&quot; &quot;primary-cluster&quot; { name = &quot;gke-private&quot; location = &quot;europe-west3-c&quot; initial_node_count = 1 private_cluster_config { enable_private_nodes = &quot;true&quot; enable_private_endpoint = &quot;false&quot; # this option will make your cluster available through public endpoint master_ipv4_cidr_block = &quot;172.16.0.0/28&quot; } ip_allocation_policy { cluster_secondary_range_name = &quot;&quot; services_secondary_range_name = &quot;&quot; } node_config { machine_type = &quot;e2-medium&quot; } } </code></pre> <blockquote> <p>A side note!</p> <p>I've created a public <code>GKE</code> cluster, modified the <code>.tf</code> responsible for it's creation to support private cluster. After running: <code>$ terraform plan</code> <strong>Terraform responded with the information that the cluster will be recreated</strong>.</p> </blockquote>
Dawid Kruk
<p>I want to deploy my service as a ClusterIP but am not able to apply it for the given error message:</p> <pre><code>[xetra11@x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml deployment.apps/teamcity unchanged ingress.extensions/teamcity unchanged The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP' </code></pre> <p>This here is my .yaml file:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: teamcity labels: app: teamcity spec: replicas: 1 selector: matchLabels: app: teamcity template: metadata: labels: app: teamcity spec: containers: - name: teamcity-server image: jetbrains/teamcity-server:latest ports: - containerPort: 8111 --- apiVersion: v1 kind: Service metadata: name: teamcity labels: app: teamcity spec: type: ClusterIP ports: - port: 8111 targetPort: 8111 protocol: TCP selector: app: teamcity --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: teamcity annotations: kubernetes.io/ingress.class: nginx spec: backend: serviceName: teamcity servicePort: 8111 </code></pre>
xetra11
<p>1) Apply the configuration to the resource by filename:</p> <pre><code>kubectl apply -f [.yaml file] --force </code></pre> <p>The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.</p> <p>2) If the first one fails, you can force-replace the resource, i.e. delete and then re-create it:</p> <pre><code>kubectl replace -f [.yaml file] --force </code></pre> <p>This only applies with a grace period of 0 (forced deletion): the resource is immediately removed from the API, bypassing graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.</p>
Md Daud Walizarif
<p>We are using <strong>Traefik 2.1.3</strong> as <a href="https://docs.traefik.io/routing/providers/kubernetes-crd/" rel="nofollow noreferrer">Kubernetes Ingress Controller</a>.<br> Replacing an <strong>Nginx</strong> we are unable to mimic the option:</p> <pre><code>proxy_read_timeout 60s; </code></pre> <p>I would have expected a <a href="https://docs.traefik.io/middlewares/overview/" rel="nofollow noreferrer">Middleware</a> for this task but there isn't.</p> <p>Is there an alternative? Ideas?</p>
Thomas8
<blockquote> <p><strong>Unfortunately there is no mention of this specific feature in the Traefik documentation.</strong></p> </blockquote> <p>The closest match is the <code>transport.respondingTimeouts.readTimeout</code> entrypoint option.</p> <ul> <li>There is a request for clarification on this on Github: <a href="https://github.com/containous/traefik/issues/4580" rel="nofollow noreferrer">https://github.com/containous/traefik/issues/4580</a></li> </ul> <p>It's still open and has been stalled since March 2019, so it looks like it might be possible, but nothing documents how.</p> <p>I'd suggest you stay with Nginx, or rethink your strategy to fit your current needs while waiting for Traefik updates on this.</p>
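<p>For reference, a hedged sketch of how that option is set in the Traefik v2 static configuration (the entrypoint name and value are placeholders; note that <code>respondingTimeouts.readTimeout</code> limits reading the client request, not the time spent waiting for the backend response, so it is not a one-to-one replacement for nginx's <code>proxy_read_timeout</code>):</p> <pre><code># traefik.yml (static configuration) - file provider form
entryPoints:
  web:
    address: ":80"
    transport:
      respondingTimeouts:
        readTimeout: 60s
</code></pre> <p>The same option can be passed as a CLI argument on the controller deployment, e.g. <code>--entryPoints.web.transport.respondingTimeouts.readTimeout=60s</code>.</p>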
Will R.O.F.
<p>I've just deployed graylog on my kubernetes cluster.</p> <p>I need to be able to expose udp port as ingress rule, under graylog.localhost/gelf. Currently, my services are:</p> <pre><code>$ kubectl get service -o wide -l app.kubernetes.io/name=graylog NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR graylog-1583177737-master ClusterIP 10.43.131.54 &lt;none&gt; 9000/TCP 20m app.kubernetes.io/instance=graylog-1583177737,app.kubernetes.io/name=graylog,graylog-role=master graylog-1583177737-web ClusterIP 10.43.141.128 &lt;none&gt; 9000/TCP 20m app.kubernetes.io/instance=graylog-1583177737,app.kubernetes.io/name=graylog graylog-1583177737-udp ClusterIP 10.43.188.69 &lt;none&gt; 12201/UDP 20m app.kubernetes.io/instance=graylog-1583177737,app.kubernetes.io/name=graylog </code></pre> <p>My service <code>graylog-1583177737-udp</code> is as below:</p> <pre><code>$ kubectl describe service graylog-1583177737-udp Name: graylog-1583177737-udp Namespace: graylog Labels: app.kubernetes.io/component=UDP app.kubernetes.io/instance=graylog-1583177737 app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=graylog app.kubernetes.io/version=3.1 helm.sh/chart=graylog-1.5.2 Annotations: &lt;none&gt; Selector: app.kubernetes.io/instance=graylog-1583177737,app.kubernetes.io/name=graylog Type: ClusterIP IP: 10.43.188.69 Port: gelf 12201/UDP TargetPort: 12201/UDP Endpoints: 10.42.0.48:12201,10.42.1.47:12201 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>My ingress controller is traefik.</p>
Jordi
<p>Please correct me if I am wrong, but this will be possible with the new version of Traefik, <code>2.2</code>.</p> <p><code>UDP</code> support, as described on Traefik's Github project page: <a href="https://github.com/containous/traefik/projects/3?card_filter_query=udp" rel="nofollow noreferrer">Github.com: traefik project site</a>, will be available in version <code>2.2</code>, which is now a release candidate. </p> <p>At the time of writing, the newest downloadable version of Traefik in the docker image repository is version <code>2.1.6</code>. </p> <p>Please have a look at:</p> <ul> <li><a href="https://github.com/containous/traefik/issues/5048#" rel="nofollow noreferrer">Github.com: Traefik UDP support issue</a></li> <li><a href="https://docs.traefik.io/v2.2/routing/entrypoints/" rel="nofollow noreferrer">Traefik.io: Entrypoints with UDP support on version 2.2</a></li> <li><a href="https://docs.traefik.io/v2.2/routing/services/" rel="nofollow noreferrer">Traefik.io: Services on version 2.2</a></li> <li><a href="https://docs.traefik.io/routing/entrypoints/" rel="nofollow noreferrer">Traefik.io: Entrypoints on version 2.1 (latest)</a></li> </ul> <p>Please let me know if you have any questions on that. </p>
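<p>Once 2.2 is available, a configuration along the following lines should be possible, based on the 2.2 release-candidate documentation (the entrypoint name is arbitrary and the service name/port are taken from the question; treat this as a sketch until the final release):</p> <pre><code># Static configuration: define a UDP entrypoint for GELF
entryPoints:
  gelf:
    address: ":12201/udp"
</code></pre> <pre><code># Dynamic configuration: route the UDP entrypoint to the Graylog service
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: graylog-gelf
spec:
  entryPoints:
    - gelf
  routes:
    - services:
        - name: graylog-1583177737-udp
          port: 12201
</code></pre>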
Dawid Kruk
<p>I created an ingress on GKE for kibana, and the browser returns 404 (although it works with the original service), and I guess it's because I need to route to the <code>/app/kibana</code> endpoint. How can I do that?</p> <p>This is my ingress:</p> <pre><code>kind: Ingress metadata: name: my-ingress namespace: default annotations: ingress.kubernetes.io/affinity: cookie kubernetes.io/ingress.class: gce kubernetes.io/ingress.global-static-ip-name: my-proxy ingress.kubernetes.io/affinity: cookie ingress.kubernetes.io/session-cookie-hash: sha1 ingress.kubernetes.io/session-cookie-name: route nginx.ingress.kubernetes.io/affinity: cookie kubernetes.io/ingress.class: gce kubernetes.io/ingress.allow-http: &quot;false&quot; annotations: spec: tls: - secretName: my-tls rules: - http: paths: - path: /kibana backend: serviceName: kibana-nodeport servicePort: 5601 </code></pre> <p>my nodeport service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kibana-nodeport namespace: default spec: ports: - port: 5601 protocol: TCP targetPort: 5601 selector: k8s-app: kibana sessionAffinity: None type: NodePort </code></pre> <p>the kibana orginial service:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: component: kibana name: kibana-ext namespace: default spec: externalTrafficPolicy: Cluster loadBalancerSourceRanges: - x.x.x.x/32 ports: - name: http port: 5601 protocol: TCP targetPort: 5601 selector: k8s-app: kibana sessionAffinity: None type: LoadBalancer </code></pre> <p><strong>UPDATE:</strong></p> <p>When I set Kibana's path to &quot;/*&quot;, it works. Otherwise I get 404.</p>
Idan
<p>As the original issue was resolved by an answer provided by user @paltaa and fixed by original poster:</p> <blockquote> <p>I changed the kibana.yml to be rewritten to /kibana plus changing to your suggestion and it worked! thanks – Idan</p> </blockquote> <p>I wanted to add some additional resources/information and a &quot;guide&quot; that could help when dealing with similar issues.</p> <hr /> <p>Steps to run Kibana in <code>GKE</code> with <code>ingress-gke</code>:</p> <blockquote> <p>Please remember that this is a basic setup for example purposes only.</p> </blockquote> <ul> <li>Provision the resources with Helm</li> <li>Create an <code>Ingress</code> resource</li> <li>Change the Healthcheck in GCP Cloud Console (Web UI)</li> <li>Test</li> </ul> <hr /> <h3>Provision the resources with Helm</h3> <p>I used below Github page for a reference when installing:</p> <ul> <li><em><a href="https://github.com/elastic/helm-charts/tree/master/kibana" rel="nofollow noreferrer">Github.com: Elastic: Helm-charts: Kibana</a></em></li> </ul> <blockquote> <p>Method of spawning this setup could be different but the principles should be the same (values).</p> </blockquote> <p>Commands used (specific to Helm3):</p> <ul> <li><code>$ helm repo add elastic https://helm.elastic.co</code></li> <li><code>$ helm install es elastic/elasticsearch</code></li> <li><code>$ helm pull elastic/kibana --untar</code></li> <li><code>$ cd kibana/ &amp;&amp; nano values.yaml</code></li> </ul> <p>The changes made in the <code>values.yaml</code> are following:</p> <pre class="lang-sh prettyprint-override"><code>healthCheckPath: &quot;/test/app/kibana&quot; </code></pre> <pre class="lang-sh prettyprint-override"><code># Allows you to add any config files in /usr/share/kibana/config/ # such as kibana.yml # Will work with http://DOMAIN.NAME/test/ kibanaConfig: kibana.yml: | server.basePath: /test server.rewriteBasePath: true </code></pre> <pre class="lang-sh prettyprint-override"><code>service: type: NodePort # &lt;-- Changed from ClusterIP for ingress-gke loadBalancerIP: &quot;&quot; port: 5601 nodePort: &quot;&quot; labels: {} annotations: {} </code></pre> <ul> <li><code>$ helm install ki .</code></li> </ul> <hr /> <h3>Create an <code>Ingress</code> resource</h3> <p>I used this <code>Ingress</code> resource to have an access to the Kibana:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kibana-ingress spec: tls: - secretName: ssl-certificate rules: - host: DOMAIN.NAME http: paths: - path: /test/* backend: serviceName: ki-kibana servicePort: 5601 </code></pre> <p><strong>As for the annotations used in the question, please review them as some of them could be specific to <code>ingress-nginx</code> and will not work with <code>ingress-gke</code>.</strong></p> <hr /> <p>After applying this resource I couldn't connect to Kibana because of the path of a Healthcheck created:</p> <ul> <li>Healthcheck created path (incorrect): <code>/</code></li> <li>Healthcheck changed path (correct): <code>/test/app/kibana</code></li> </ul> <p>You can change the Healthcheck by following:</p> <ul> <li><code>GCP Cloud Console (Web UI)</code> --&gt; <code>Kubernetes Engine</code> --&gt; <code>Services &amp; Ingress</code> --&gt; <code>kibana-ingress</code> --&gt; <code>backend services (unhealthy)</code> --&gt; <code>Health Check</code> --&gt; <code>Edit</code></li> </ul> <p><a href="https://i.stack.imgur.com/Woy6v.png" rel="nofollow noreferrer"><img 
src="https://i.stack.imgur.com/Woy6v.png" alt="Healthcheck" /></a></p> <hr /> <h3>Test</h3> <p>After all of above steps you can open a web browser and enter:</p> <ul> <li><code>https://DOMAIN.NAME/test</code></li> </ul> <p>and be greeted by Elastic's Web UI.</p> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://cloud.google.com/load-balancing/docs/health-check-concepts" rel="nofollow noreferrer">Cloud.google.com: Load-balancing: Docs: Healthcheck concepts</a></em></li> <li><em><a href="https://logz.io/blog/deploying-the-elk-stack-on-kubernetes-with-helm/" rel="nofollow noreferrer">Logz.io: Blog: Deploying the ELK stack on Kubernetes with Helm</a></em></li> <li><em><a href="https://www.elastic.co/guide/en/kibana/master/settings.html" rel="nofollow noreferrer">Elastic.co: Guide: Kibana: Master: Settings</a></em></li> </ul>
Dawid Kruk
<p>I've recently been making use of the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="noreferrer">GKE Workload Identity</a> feature. I'd be interested to know in more detail how the <code>gke-metadata-server</code> component works.</p> <ol> <li>GCP client code (<code>gcloud</code> or other language SDKs) falls through to the GCE metadata method</li> <li>Request made to <code>http://metadata.google.internal/path</code></li> <li>(guess) Setting <code>GKE_METADATA_SERVER</code> on my node pool configures this to resolve to the <code>gke-metadata-server</code> pod on that node.</li> <li>(guess) the <code>gke-metadata-server</code> pod with --privileged and host networking has a means of determining the source (pod IP?) then looking up the pod and its service account to check for the <code>iam.gke.io/gcp-service-account</code> annotation.</li> <li>(guess) the proxy calls the metadata server with the pods 'pseudo' identity set (e.g. <code>[PROJECT_ID].svc.id.goog[[K8S_NAMESPACE]/[KSA_NAME]]</code>) to get a token for the service account annotated on its Kubernetes service account.</li> <li>If this account has token creator / workload ID user rights to the service account presumably the response from GCP is a success and contains a token, which is then packaged and set back to the calling pod for authenticated calls to other Google APIs.</li> </ol> <p>I guess the main puzzle for me right now is the verification of the calling pods identity. Originally I thought this would use the TokenReview API but now I'm not sure how the Google client tools would know to use the service account token mounted into the pod...</p> <p><em>Edit</em> follow-up questions:</p> <p>Q1: In between step 2 and 3, is the request to <code>metadata.google.internal</code> routed to the GKE metadata proxy by the setting <code>GKE_METADATA_SERVER</code> on the node pool? </p> <p>Q2: Why does the metadata server pod need host networking?</p> <p>Q3: In the video here: <a href="https://youtu.be/s4NYEJDFc0M?t=2243" rel="noreferrer">https://youtu.be/s4NYEJDFc0M?t=2243</a> it's taken as a given that the pod makes a GCP call. How does the GKE metadata server identify the pod making the call to start the process?</p>
Charlie Egan
<p>Before going into details, please familiarize yourself with these components:</p> <p><em>OIDC provider</em>: Runs on Google’s infrastructure, provides cluster-specific metadata and signs authorized JWTs.</p> <p><em>GKE metadata server</em>: Runs as a DaemonSet, meaning one instance on every node; exposes a pod-specific metadata server (providing backwards compatibility with old client libraries) and emulates the existing node metadata server.</p> <p><em>Google IAM</em>: Issues access tokens, validates bindings, validates OIDC signatures. </p> <p><em>Google Cloud</em>: Accepts access tokens, does pretty much anything.</p> <p><em>JWT</em>: JSON Web Token</p> <p><em>mTLS</em>: Mutual Transport Layer Security</p> <p>The steps below explain how the GKE metadata server components work:</p> <p><strong>Step 1</strong>: An authorized user binds the cluster to the namespace.</p> <p><strong>Step 2</strong>: A workload tries to access a Google Cloud service using the client libraries.</p> <p><strong>Step 3</strong>: The GKE metadata server requests an OIDC-signed JWT from the control plane. That connection is authenticated using a mutual TLS (mTLS) connection with the node credential. </p> <p><strong>Step 4</strong>: The GKE metadata server then uses that OIDC-signed JWT to request an access token for the <em>[identity namespace]/[Kubernetes service account]</em> from IAM. IAM validates that the appropriate bindings exist on the identity namespace and in the OIDC provider.</p> <p><strong>Step 5</strong>: IAM then validates that it was signed by the cluster’s correct OIDC provider. It will then return an access token for the <em>[identity namespace]/[Kubernetes service account]</em>.</p> <p><strong>Step 6</strong>: The metadata server then sends the access token it just got back to IAM. IAM exchanges it for a short-lived GCP service account token after validating the appropriate bindings.</p> <p><strong>Step 7</strong>: The GKE metadata server returns the GCP service account token to the workload.</p> <p><strong>Step 8</strong>: The workload can then use that token to make calls to any Google Cloud service.</p> <p>I also found a <a href="https://youtu.be/s4NYEJDFc0M?t=1243" rel="noreferrer">video</a> regarding Workload Identity which you will find useful.</p> <p><strong><em>EDIT</em> Follow-up questions' answers:</strong></p> <p>Below are the answers to your follow-up questions: </p> <p><strong>Q1</strong>: In between step 2 and 3, is the request to metadata.google.internal routed to the gke metadata proxy by the setting GKE_METADATA_SERVER on the node pool?</p> <p>You are right, GKE_METADATA_SERVER is set on the node pool. This exposes a metadata API to the workloads that is compatible with the V1 Compute Metadata APIs. Once a workload tries to access a Google Cloud service, the GKE metadata server performs a lookup (it checks whether a pod exists in its list whose IP matches the incoming IP of the request) before it goes on to request the OIDC token from the control plane.</p> <p>Keep in mind that the <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/NodeConfig#nodemetadata" rel="noreferrer">GKE_METADATA_SERVER</a> enumeration feature can only be enabled if Workload Identity is enabled at the cluster level.</p> <p><strong>Q2</strong>: Why does the metadata server pod need host networking?</p> <p>The gke-metadata-server intercepts all GCE metadata server requests from pods; however, pods using the host network are not intercepted.</p> <p><strong>Q3</strong>: How does the GKE metadata server identify the pod making the call to start the process?</p> <p>The pods are identified using iptables rules.</p>
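<p>To make Step 1 concrete, the binding between a Kubernetes service account and a Google service account is typically established with the following two commands (all names below are placeholders):</p> <pre><code># Allow the KSA to impersonate the GSA via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]"

# Annotate the KSA so the GKE metadata server knows which GSA to impersonate
kubectl annotate serviceaccount KSA_NAME \
  --namespace K8S_NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre>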
Md Daud Walizarif
<p>I'm telling containerd to use awslogs using <code>/etc/docker/daemon.json</code> file as proposed in the documentation in <a href="https://docs.docker.com/config/containers/logging/awslogs/" rel="nofollow noreferrer">https://docs.docker.com/config/containers/logging/awslogs/</a></p> <p>By default, the aws stream name is set to a randomly generated container-id which is meaningless when you list the streams inside a group.</p> <p>awslogs driver has an option to set awslogs-stream to a specific name but that won't satisfy my needs since I want different containers to use different streams.</p> <p>I guess what I want to do is to tell docker to compose the stream-id from the image name and the container-id, but I couldn't find an option for that.</p> <p>Theoretically, I can set the stream name directly in the <code>docker run</code> command, but that is not good enough because I use Kubernetes to launch those containers so I'm not sure how to set the stream_name from the application yml file.</p> <p>Any ideas how to accomplish my needs?</p>
Eytan Naim
<p>You're correct: there's no sign of <code>--log-opt</code> being exposed through Kubernetes, and with the Docker integration (dockershim) deprecated that is unlikely to change.</p> <p>Instead of specifying the <code>awslogs-stream</code>, have you tried setting a <code>tag</code>?</p> <ul> <li>Check this <strong><a href="https://stackoverflow.com/questions/55609398/use-awslogs-with-kubernetes-natively">Use Awslogs With Kubernetes 'Natively'</a></strong> as it may fit your needs perfectly.</li> </ul> <p>From the Docker documentation link you posted:</p> <blockquote> <p>Specify <code>tag</code> as an alternative to the <code>awslogs-stream</code> option. <code>tag</code> interprets Go template markup, such as <code>{{.ID}}</code>, <code>{{.FullID}}</code> or <code>{{.Name}}</code> <code>docker.{{.ID}}</code>. See the <a href="https://docs.docker.com/config/containers/logging/log_tags/" rel="nofollow noreferrer">tag option documentation</a> for details on all supported template substitutions.</p> </blockquote> <p><em>The other viable approach is using a sidecar logging container to process the logs and then forward them to awslogs, but <code>tag</code> is a cleaner, simpler solution.</em></p> <p>Here's the process with fluentd: </p> <blockquote> <p><strong><a href="https://stackoverflow.com/questions/46469076/how-to-send-kubernetes-logs-to-aws-cloudwatch">How to Send Kubernetes Logs to AWS Cloudwatch</a></strong></p> </blockquote>
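<p>A minimal sketch of the <code>tag</code> approach in <code>/etc/docker/daemon.json</code> (the region, log group and tag template below are placeholder assumptions, not values from your setup):</p>
<pre><code>{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "my-cluster-logs",
    "tag": "{{.Name}}-{{.ID}}"
  }
}
</code></pre>
<p>The Docker daemon has to be restarted for this to take effect, and it only applies to newly created containers. Since the kubelet names the containers it creates after the pod and namespace (roughly <code>k8s_&lt;container&gt;_&lt;pod&gt;_&lt;namespace&gt;_...</code>), a <code>{{.Name}}</code>-based tag should already give you stream names that are meaningful, without touching any <code>docker run</code> command.</p>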
Will R.O.F.
<p>I am new to kubernetes and trying to add root certs to my existing secrets truststore.jks file. Using <code>get secret mysecret -o yaml</code>. I am able to view the details of truststore file inside mysecret but not sure how to replace with new truststore file or to edit the existing one with latest root certs. Can anyone help me to get the correct command to do this using kubectl?</p> <p>Thanks</p>
Anshu
<p>A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. There is an official documentation about <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes.io: Secrets</a>.</p> <p>Assuming that you created your secret by: </p> <p><code>$ kubectl create secret generic NAME_OF_SECRET --from-file=keystore.jks</code></p> <p>You can edit your secret by invoking command: </p> <p><code>$ kubectl edit secret NAME_OF_SECRET</code></p> <p>It will show you <code>YAML</code> definition similar to this: </p> <pre><code>apiVersion: v1 data: keystore.jks: HERE_IS_YOUR_JKS_FILE kind: Secret metadata: creationTimestamp: "2020-02-20T13:14:24Z" name: NAME_OF_SECRET namespace: default resourceVersion: "430816" selfLink: /api/v1/namespaces/default/secrets/jks-old uid: 0ce898af-8678-498e-963d-f1537a2ac0c6 type: Opaque </code></pre> <p>To change it to new <code>keystore.jks</code> you would need to base64 encode it and paste in place of old one (<code>HERE_IS_YOUR_JKS_FILE</code>)</p> <p>You can get a base64 encoded string by: <code>cat keystore.jks | base64</code></p> <p>After successfully editing your secret it should give you a message: <code>secret/NAME_OF_SECRET edited</code></p> <hr> <p>Also you can look on this <a href="https://stackoverflow.com/a/38216458/12257134">StackOverflow answer</a> </p> <p>It shows a way to replace existing configmap but with a little of modification it can also replace a secret! </p> <p>Example below:</p> <ul> <li><p>Create a secret with keystore-old.jks: </p> <p><code>$ kubectl create secret generic my-secret --from-file=keystore-old.jks</code></p></li> <li><p>Update it with keystore-new.jks:</p> <p><code>$ kubectl create secret generic my-secret --from-file=keystore-new.jks -o yaml --dry-run | kubectl replace -f -</code></p></li> </ul> <hr> <p>Treating <code>keystore.jks</code> as a file allows you to use a volume mount to mount it to specific location inside a pod. </p> <p>Example <code>YAML</code> below creates a pod with secret mounted as volume: </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: ubuntu spec: containers: - name: ubuntu image: ubuntu command: - sleep - "360000" volumeMounts: - name: secret-volume mountPath: "/etc/secret" volumes: - name: secret-volume secret: secretName: NAME_OF_SECRET </code></pre> <p>Take a specific look on: </p> <pre><code> volumeMounts: - name: secret-volume mountPath: "/etc/secret" volumes: - name: secret-volume secret: secretName: NAME_OF_SECRET </code></pre> <p>This part will mount your secret inside your /etc/secret/ directory. It will be available there with a name <code>keystore.jks</code></p> <p>A word about mounted secrets: </p> <blockquote> <h2>Mounted Secrets are updated automatically<a href="https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically" rel="nofollow noreferrer"></a></h2> <p>When a secret currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted secret is fresh on every periodic sync.</p> <p>-- <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes.io: Secrets</a>. </p> </blockquote> <p>Please let me know if you have any questions regarding that.</p>
Dawid Kruk
<p>When using Google Stackdriver I can use the log query to find the exact log statements I am looking for.</p> <p>This might look like this:</p> <pre><code>resource.type=&quot;k8s_container&quot; resource.labels.project_id=&quot;my-project&quot; resource.labels.location=&quot;europe-west3-a&quot; resource.labels.cluster_name=&quot;my-cluster&quot; resource.labels.namespace_name=&quot;dev&quot; resource.labels.pod_name=&quot;my-app-pod-7f6cf95b6c-nkkbm&quot; resource.labels.container_name=&quot;container&quot; </code></pre> <p>However as you can see in this query argument <code>resource.labels.pod_name=&quot;my-app-pod-7f6cf95b6c-nkkbm&quot;</code> that I am looking for a pod with the id <code>7f6cf95b6c-nkkbm</code>. Because of this I can not use this Stackdriver view with this exact query <em>if</em> I deployed a new revision of <code>my-app</code> therefore having a new ID and the one in the curreny query becomes <em>invalid</em> or not locatable.</p> <p>Now I don't always want to look for the new ID every time I want to have the current view of my <code>my-app</code> logs. So I tried to add a special label <code>stackdriver: my-app</code> to my Kubernetes YAML file.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app spec: template: metadata: labels: stackdriver: my-app &lt;&lt;&lt; </code></pre> <p>Revisiting my newly deployed Pod I can assure that the label <code>stackdriver: my-app</code> is indeed existing.</p> <p>Now I want to add this new label to use as a query argument:</p> <pre><code>resource.type=&quot;k8s_container&quot; resource.labels.project_id=&quot;my-project&quot; resource.labels.location=&quot;europe-west3-a&quot; resource.labels.cluster_name=&quot;my-cluster&quot; resource.labels.namespace_name=&quot;dev&quot; resource.labels.pod_name=&quot;my-app-pod-7f6cf95b6c-nkkbm&quot; resource.labels.container_name=&quot;container&quot; resource.labels.stackdriver=my-app &lt;&lt;&lt; the kubernetes label </code></pre> <p>As you can guess this did not work otherwise I'd have no reason to write this question ;) Any idea how the thing I am about to do can be achieved?</p>
xetra11
<blockquote> <p>Any idea how the thing I am about to do can be achieved?</p> </blockquote> <p>Yes! In fact, I've prepared an example to show you the whole process :)</p> <p>Let's assume:</p> <ul> <li>You have a <code>GKE</code> cluster named: <code>gke-label</code></li> <li>You have a Cloud Operations for GKE enabled (logging)</li> <li>You have a <code>Deployment</code> named <code>nginx</code> with a following <code>label</code>: <ul> <li><code>stackdriver: look_here_for_me</code></li> </ul> </li> </ul> <p><code>deployment.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx stackdriver: look_here_for_me replicas: 1 template: metadata: labels: app: nginx stackdriver: look_here_for_me spec: containers: - name: nginx image: nginx </code></pre> <p>You can apply this definition and send some traffic from the other pod so that the logs could be generated. I've done it with:</p> <ul> <li><code>$ kubectl run -it --rm --image=ubuntu ubuntu -- /bin/bash</code></li> <li><code>$ apt update &amp;&amp; apt install -y curl</code></li> <li><code>$ curl NGINX_POD_IP_ADDRESS/NONEXISTING</code> # &lt;-- this path is only for better visibility</li> </ul> <p>After that you can go to:</p> <ul> <li><code>GCP Cloud Console (Web UI)</code> -&gt; <code>Logging</code> (I used new version)</li> </ul> <p>With the following query:</p> <pre><code>resource.type=&quot;k8s_container&quot; resource.labels.cluster_name=&quot;gke-label&quot; --&gt;labels.&quot;k8s-pod/stackdriver&quot;=&quot;look_here_for_me&quot; </code></pre> <p>You should be able to see the container logs as well it's <code>label</code>:</p> <p><a href="https://i.stack.imgur.com/RTMYD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTMYD.png" alt="NGINX_POD" /></a></p>
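<p>If you prefer the CLI over the Web UI, the same filter should also work with <code>gcloud logging read</code>; a quick sketch reusing the cluster and label values from the example above:</p>
<pre><code>gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="gke-label" AND labels."k8s-pod/stackdriver"="look_here_for_me"' \
  --limit 10 --format json
</code></pre>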
Dawid Kruk
<p>I use one secret to store multiple data items like this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: my-certs namespace: my-namespace data: keystore.p12: LMNOPQRSTUV truststore.p12: ABCDEFGHIJK </code></pre> <p>In my <code>Deployment</code> I mount them into files like this:</p> <pre><code>volumeMounts: - mountPath: /app/truststore.p12 name: truststore-secret subPath: truststore.p12 - mountPath: /app/keystore.p12 name: keystore-secret subPath: keystore.p12 volumes: - name: truststore-secret secret: items: - key: truststore.p12 path: truststore.p12 secretName: my-certs - name: keystore-secret secret: items: - key: keystore.p12 path: keystore.p12 secretName: my-certs </code></pre> <p>This works as expected but I am wondering whether I can achieve the same result of mounting those two secret as files with less Yaml? For example <code>volumes</code> use <code>items</code> but I could not figure out how to use one volume with multiple <code>items</code> and mount those.</p>
Hedge
<p>Yes, you can reduce your yaml with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/" rel="noreferrer">Projected Volume</a>.</p> <blockquote> <p>Currently, <code>secret</code>, <code>configMap</code>, <code>downwardAPI</code>, and <code>serviceAccountToken</code> volumes can be projected.</p> </blockquote> <p><strong>TL;DR use this structure in your <code>Deployment</code>:</strong></p> <pre><code>spec: containers: - name: {YOUR_CONTAINER_NAME} volumeMounts: - name: multiple-secrets-volume mountPath: "/app" readOnly: true volumes: - name: multiple-secrets-volume projected: sources: - secret: name: my-certs </code></pre> <p>And here's the full reproduction of your case, first I registered your <code>my-certs</code> secret:</p> <pre><code>user@minikube:~/secrets$ kubectl get secret my-certs -o yaml apiVersion: v1 data: keystore.p12: TE1OT1BRUlNUVVY= truststore.p12: QUJDREVGR0hJSks= kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","data":{"keystore.p12":"TE1OT1BRUlNUVVY=","truststore.p12":"QUJDREVGR0hJSks="},"kind":"Secret","metadata":{"annotations":{},"name":"my-certs","namespace":"default"}} creationTimestamp: "2020-01-22T10:43:51Z" name: my-certs namespace: default resourceVersion: "2759005" selfLink: /api/v1/namespaces/default/secrets/my-certs uid: d785045c-2931-434e-b6e1-7e090fdd6ff4 </code></pre> <p>Then created a <code>pod</code> to test the access to the <code>secret</code>, this is <code>projected.yaml</code>:</p> <pre><code>user@minikube:~/secrets$ cat projected.yaml apiVersion: v1 kind: Pod metadata: name: test-projected-volume spec: containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: multiple-secrets-volume mountPath: "/app" readOnly: true volumes: - name: multiple-secrets-volume projected: sources: - secret: name: my-certs user@minikube:~/secrets$ kubectl apply -f projected.yaml pod/test-projected-volume created </code></pre> <p>Then tested the access to the keys:</p> <pre><code>user@minikube:~/secrets$ kubectl exec -it test-projected-volume -- /bin/sh / # ls app bin dev etc home proc root sys tmp usr var / # cd app /app # ls keystore.p12 truststore.p12 /app # cat keystore.p12 LMNOPQRSTUV/app # /app # cat truststore.p12 ABCDEFGHIJK/app # /app # exit </code></pre> <p>You have the option to use a single <code>secret</code> with many data lines, as you requested or you can use many secrets from your base in your deployment in the following model:</p> <pre><code> volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true volumes: - name: all-in-one projected: sources: - secret: name: SECRET_1 - secret: name: SECRET_2 </code></pre>
Will R.O.F.
<p>How can I disable the kubectl autocompletions functionality on zsh. I'm running on osx and the autocompletions are slow(probably because they have to call the remote cluster API) and I don't want them no more.</p>
martinkaburu
<p>First of all, autocompletion in <code>kubectl</code> command is not enabled by default. You needed to enable it beforehand. To disable it would be best just to reverse the steps you took to enable it. </p> <h2>How to enable autocompletion for <code>kubectl</code> within <code>zsh</code> environment:</h2> <hr> <blockquote> <p>The kubectl completion script for Zsh can be generated with the command <code>kubectl completion zsh</code>. Sourcing the completion script in your shell enables kubectl autocompletion.</p> <p>To do so in all your shell sessions, add the following to your <code>~/.zshrc</code> file:</p> <pre class="lang-sh prettyprint-override"><code>$ source &lt;(kubectl completion zsh) </code></pre> <p>-- <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion" rel="nofollow noreferrer">Kubernetes.io: Enabling shell autocompletion</a></p> </blockquote> <p>Following above example: </p> <p>Command <code>$ source &lt;(kubectl completion zsh)</code>:</p> <ul> <li>can be run by itself in shell for autocompletion within current session</li> <li><strong>can be put in <code>~/.zshrc</code> file to be loaded each time user logs in</strong> </li> </ul> <p>After applying one of above solutions it should provide available options with <code>TAB</code> keypress to type into terminal like below:</p> <pre class="lang-sh prettyprint-override"><code>somefolder% kubectl get pod[TAB PRESSED HERE!] poddisruptionbudgets.policy pods.metrics.k8s.io podsecuritypolicies.policy pods podsecuritypolicies.extensions podtemplates </code></pre> <h2>How to disable autocompletion for <code>kubectl</code> within <code>zsh</code> environment:</h2> <hr> <p>As said above autocompletion is not enabled by default. It can be disabled: </p> <ul> <li>when created for current session by: <ul> <li>creating a new session (example <code>zsh</code>) </li> </ul></li> <li><strong>when edited <code>~/.zshrc</code> file by:</strong> <ul> <li>removing: <code>source &lt;(kubectl completion zsh)</code> from <code>~/.zshrc</code> file. </li> <li>creating a new session (example <code>zsh</code>) </li> </ul></li> </ul> <p><strong>After that autocompletion for <code>kubectl</code> should not work.</strong></p> <p>Please let me know if you have any questions to that. </p>
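<p>One more option: if you only want to switch completion off for the current session without opening a new shell, you can deregister the completion handler. This is a hedged suggestion; it assumes the completion was registered for the <code>kubectl</code> command name, which is what <code>kubectl completion zsh</code> does:</p>
<pre><code># remove the registered completion for kubectl in the current zsh session
compdef -d kubectl
</code></pre>
<p>The permanent fix remains removing the <code>source &lt;(kubectl completion zsh)</code> line from <code>~/.zshrc</code>.</p>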
Dawid Kruk
<p>I'm currently playing with NGINX ingress controller in my k8s cluster. I was trying to make end-to-end encryption work and I was able to make the connection secure all the way to the pod.</p> <p>In order to achieve HTTPS all the way till pod, I had to use annotation</p> <pre><code>nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; </code></pre> <p>Sample Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foo-api-ingress annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; spec: tls: - hosts: - foo.example.com secretName: foo-cert rules: - host: foo.example.com http: paths: - path: /path1 backend: serviceName: foo-api-path1-service servicePort: 443 - path: /path2 backend: serviceName: foo-api-path2-service servicePort: 443 </code></pre> <p>I'm confused in terms of how exactly this happens because when we encrypt the connection path also get encrypted then how NGINX does path-based routing? does it decrypt the connection at ingress and re-encrypt it? also, does performance get affected by using this method?</p>
Sam
<p><strong>TL;DR</strong></p> <blockquote> <p>does it decrypt the connection at ingress and re-encrypt it?</p> </blockquote> <p>In short, yes. Please see the explanation below.</p> <hr /> <h3>Explanation</h3> <p>The path that a request is travelling to get to a <code>Pod</code> can be seen as:</p> <p><a href="https://i.stack.imgur.com/RiUnf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RiUnf.png" alt="REQUEST-PATH" /></a></p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li> </ul> <p>Assuming that we have an <code>Ingress controller</code> (<code>nginx-ingress</code>) in place of an <code>Ingress</code> you can have several ways to connect your client with a <code>Pod</code> (simplified):</p> <ul> <li>Unencrypted: <ul> <li><code>client</code> -- (HTTP) --&gt; <code>Ingress controller</code> -- (HTTP) --&gt; <code>Service</code> ----&gt; <code>Pod</code></li> </ul> </li> </ul> <hr /> <ul> <li>Encrypted at the <code>Ingress controller</code> (with <code>nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;</code>) <ul> <li><code>client</code> -- (HTTP) --&gt; <code>Ingress controller</code> -- (HTTP<strong>S</strong>) --&gt; <code>Service</code> ----&gt; <code>Pod</code></li> </ul> </li> </ul> <hr /> <ul> <li>Encrypted and decrypted at the <code>Ingress controller</code> where <a href="https://kubernetes.github.io/ingress-nginx/examples/tls-termination/#tls-termination" rel="noreferrer">TLS Termination</a> happens: <ul> <li><code>client</code> -- (HTTP<strong>S</strong>) --&gt; <code>Ingress controller</code> (TLS Termination) -- (HTTP) --&gt; <code>Service</code> ----&gt; <code>Pod</code></li> </ul> </li> </ul> <hr /> <p><strong>Your setup:</strong></p> <ul> <li>Encrypted and decrypted at the <code>Ingress</code> controller where <a href="https://kubernetes.github.io/ingress-nginx/examples/tls-termination/#tls-termination" rel="noreferrer">TLS Termination</a> happens and <strong>encrypted once again</strong> when connecting with a HTTPS backend by <code>nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;</code>: <ul> <li><code>client</code> -- (HTTP<strong>S</strong>) --&gt; <code>Ingress controller</code> (TLS Termination) -- (HTTP<strong>S</strong>) --&gt; <code>Service</code> ----&gt; <code>Pod</code></li> </ul> </li> </ul> <hr /> <ul> <li>Encrypted and decrypted at the <code>Pod</code> where <code>Ingress controller</code> is configured with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="noreferrer">SSL Passthrough</a>: <ul> <li><code>client</code> -- (HTTP<strong>S</strong>) --&gt; <code>Ingress controller</code> -- (HTTP<strong>S</strong>) --&gt; <code>Service</code> ----&gt; <code>Pod</code></li> </ul> </li> </ul> <blockquote> <p><strong>Disclaimer!</strong></p> <p>This is only a simplified explanation. For more reference you can look at this comment:</p> <blockquote> <p>there is a missing detail here, the SSL Passthrough traffic never reaches NGINX in the ingress controller. 
There is a go listener for TLS connections that just pipes the traffic to the service defined in the ingress.</p> <ul> <li><em><a href="https://github.com/kubernetes/ingress-nginx/issues/5618" rel="noreferrer">Github.com: Kubernetes: Ingress nginx: Issues: 5618</a></em></li> </ul> </blockquote> </blockquote> <hr /> <hr /> <p>For more reference you can look on the similar question (with an answer):</p> <ul> <li><em><a href="https://stackoverflow.com/a/54459898/12257134">Stackoverflow.com: Answer: How to configure ingress to direct traffic to an https backend using https</a></em></li> </ul> <p>You can also check this article with example setup similar to yours:</p> <ul> <li><em><a href="https://code.oursky.com/how-to-enable-tls-https-between-your-kubernetes-ingress-and-back-end-deployments/" rel="noreferrer">Code.oursky.com: How to enable tls https between your kubernetes ingress and back end deployments</a></em></li> </ul> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://github.com/kubernetes/ingress-nginx/issues/6142" rel="noreferrer">Github.com: Kubernetes: Ingress nginx: Is it possible to have secure backend connections from the nginx controller?</a></em></li> <li><em><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication" rel="noreferrer">Github.com: Kubernetes: Ingress nginx: Nginx configuration: Annotations: Backend certificate authentication</a></em></li> </ul>
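<p>For completeness, the SSL Passthrough variant mentioned above (decryption only at the <code>Pod</code>) is selected per <code>Ingress</code> with an annotation, and it additionally requires the controller to be started with the <code>--enable-ssl-passthrough</code> flag. A sketch, reusing the host and service names from your manifest:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-api-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # hand the TLS stream straight to the backend without decrypting it
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-api-path1-service
          servicePort: 443
</code></pre>
<p>Note that this mode also answers your question from the other direction: with passthrough the controller only sees the SNI host name, so path-based routing (<code>/path1</code> vs <code>/path2</code>) is not possible. Your current setup (terminate at the controller, route by path, re-encrypt to the backend) is the way to keep path-based routing with encrypted backends.</p>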
Dawid Kruk
<p>I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pods spec inside the configmap?</p> <p>For example:</p> <p>I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:</p> <pre><code>apiVersion: v1 data: pw: THV3OE9vcXVpYTll== kind: Secret metadata: name: foo namespace: foo-bar type: Opaque </code></pre> <p>so inside the pod spec I reference the secret as an env var. The configMap will be mounted as a volume from within the spec:</p> <pre><code> env: - name: PASSWORD valueFrom: secretKeyRef: name: foo key: pw ... </code></pre> <p>and inside my configMap I can then reference the secret value like so:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: application.properties namespace: foo-bar data: application.properties: / secret.password=$(PASSWORD) </code></pre> <p>Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.</p>
grinferno
<p><strong>Currently it's not a Kubernetes Feature.</strong></p> <p>There is a closed issue requesting this feature and it's kind of controversial topic because the discussion is ongoing many months after being closed: <a href="https://github.com/kubernetes/kubernetes/issues/79224" rel="nofollow noreferrer">Reference Secrets from ConfigMap #79224</a></p> <p>Referencing the closing comment:</p> <blockquote> <p>Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way. Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.</p> </blockquote> <p>I suggest you to read the entire thread to understand his reasons and maybe find another approach for your environment to get this variables.</p> <hr> <p><strong><em>"OK, but this is Real Life, I need to make this work"</em></strong></p> <p>Then I recommend you this workaround:</p> <p><strong><a href="https://stackoverflow.com/questions/50452665/import-data-to-config-map-from-kubernetes-secret">Import Data to Config Map from Kubernetes Secret</a></strong></p> <p>It makes the substitution with a shell in the entrypoint of the container. </p>
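<p>A rough sketch of that workaround, assuming your image ships a shell and <code>envsubst</code> (from the gettext package): the ConfigMap from your example is mounted at <code>/config</code> and acts as the template, with the placeholder written as <code>${PASSWORD}</code> (the <code>envsubst</code> syntax) instead of <code>$(PASSWORD)</code>. <code>/app/start.sh</code> is a placeholder for whatever normally starts your application:</p>
<pre><code>      containers:
      - name: app
        image: YOUR_IMAGE
        env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: foo
              key: pw
        command: ["/bin/sh", "-c"]
        # render the template with the secret value, then start the app
        args: ["envsubst &lt; /config/application.properties &gt; /app/application.properties &amp;&amp; exec /app/start.sh"]
        volumeMounts:
        - name: properties-template
          mountPath: /config
      volumes:
      - name: properties-template
        configMap:
          name: application.properties
</code></pre>
<p>This keeps the secret value out of the ConfigMap itself while still producing a fully rendered <code>application.properties</code> inside the container.</p>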
Will R.O.F.
<p>I'm simply following the tutorial here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate</a></p> <p>Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:</p> <pre><code>Status: Certificate Name: daojnfiwlefielwrfn Certificate Status: Provisioning Domain Status: Domain: moviedecisionengine.com Status: FailedNotVisible </code></pre> <p>That domain clearly works so what am I missing?</p> <p>EDIT:</p> <p>Here's the Cert:</p> <pre><code>apiVersion: networking.gke.io/v1beta1 kind: ManagedCertificate metadata: name: moviedecisionengine spec: domains: - moviedecisionengine.com </code></pre> <p>The Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}' ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089 ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089 ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089 ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089 ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089 kubernetes.io/ingress.global-static-ip-name: 34.107.208.110 networking.gke.io/managed-certificates: moviedecisionengine creationTimestamp: "2020-01-16T19:44:13Z" generation: 4 name: showcase-mde-ingress namespace: default resourceVersion: "1039270" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress uid: 92a2f91f-3898-11ea-b820-42010a800045 spec: backend: serviceName: showcase-mde servicePort: 80 rules: - host: moviedecisionengine.com http: paths: - backend: serviceName: showcase-mde servicePort: 80 - host: www.moviedecisionengine.com http: paths: - backend: serviceName: showcase-mde servicePort: 80 status: loadBalancer: ingress: - ip: 34.107.208.110 </code></pre> <p>And lastly, the load balancer:</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: "2020-01-13T22:41:27Z" labels: app: showcase-mde name: showcase-mde namespace: default resourceVersion: "2298" selfLink: /api/v1/namespaces/default/services/showcase-mde uid: d5a77d7b-3655-11ea-af7f-42010a800157 spec: clusterIP: 10.31.251.46 externalTrafficPolicy: Cluster ports: - nodePort: 31721 port: 80 protocol: TCP targetPort: 80 selector: app: showcase-mde sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 35.232.156.172 </code></pre> <p>For the full output of <code>kubectl describe managedcertificate moviedecisionengine</code>:</p> <pre><code>Name: moviedecisionengine Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace... 
API Version: networking.gke.io/v1beta1 Kind: ManagedCertificate Metadata: Creation Timestamp: 2020-01-17T16:47:19Z Generation: 3 Resource Version: 1042869 Self Link: /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine UID: 06c97b69-3949-11ea-b820-42010a800045 Spec: Domains: moviedecisionengine.com Status: Certificate Name: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b Certificate Status: Provisioning Domain Status: Domain: moviedecisionengine.com Status: FailedNotVisible Events: &lt;none&gt; </code></pre>
AlxVallejo
<p>I was successful in using <code>Managedcertificate</code> with GKE <code>Ingress</code> resource. </p> <p>Let me elaborate on that:</p> <p><strong>Steps to reproduce:</strong></p> <ul> <li>Create IP address with <code>gcloud</code></li> <li>Update the DNS entry</li> <li>Create a deployment </li> <li>Create a service</li> <li>Create a certificate</li> <li>Create a Ingress resource </li> </ul> <h2>Create IP address with gcloud</h2> <p>Invoke below command to create static ip address:</p> <p><code>$ gcloud compute addresses create example-address --global</code></p> <p>Check newly created IP address with below command: </p> <p><code>$ gcloud compute addresses describe example-address --global</code></p> <h2>Update the DNS entry</h2> <p>Go to <code>GCP</code> -> <code>Network Services</code> -> <code>Cloud DNS</code>.</p> <p>Edit your zone with <code>A record</code> with the same address that was created above. </p> <p>Wait for it to apply. </p> <p><strong>Check with <code>$ nslookup DOMAIN.NAME</code> if the entry is pointing to the appropriate address.</strong></p> <h2>Create a deployment</h2> <p>Below is example deployment which will respond to traffic:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: selector: matchLabels: app: hello version: 1.0.0 replicas: 3 template: metadata: labels: app: hello version: 1.0.0 spec: containers: - name: hello image: "gcr.io/google-samples/hello-app:1.0" env: - name: "PORT" value: "50001" </code></pre> <p>Apply it with command <code>$ kubectl apply -f FILE_NAME.yaml</code></p> <p>You can change this deployment to suit your application but be aware of the ports that your application will respond to. </p> <h2>Create a service</h2> <p>Use the <code>NodePort</code> as it's the same as in the provided link: </p> <pre><code>apiVersion: v1 kind: Service metadata: name: hello-service spec: type: NodePort selector: app: hello version: 1.0.0 ports: - name: hello-port protocol: TCP port: 50001 targetPort: 50001 </code></pre> <p>Apply it with command <code>$ kubectl apply -f FILE_NAME.yaml</code></p> <h2>Create a certificate</h2> <p>As shown in guide you can use below example to create <code>ManagedCertificate</code>:</p> <pre><code>apiVersion: networking.gke.io/v1beta1 kind: ManagedCertificate metadata: name: example-certificate spec: domains: - DOMAIN.NAME </code></pre> <p>Apply it with command <code>$ kubectl apply -f FILE_NAME.yaml</code></p> <blockquote> <p>The status <code>FAILED_NOT_VISIBLE</code> indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer. -- <em> <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates#domain-status" rel="nofollow noreferrer">Google Cloud documentation</a> </em></p> </blockquote> <p>Creation of this certificate should be affected by DNS entry that you provided earlier. 
</p> <h2>Create a Ingress resource</h2> <p>Below is example for <code>Ingress</code> resource which will use <code>ManagedCertificate</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.global-static-ip-name: example-address networking.gke.io/managed-certificates: example-certificate spec: rules: - host: DOMAIN.NAME http: paths: - path: / backend: serviceName: hello-service servicePort: hello-port </code></pre> <p>Apply it with command <code>$ kubectl apply -f FILE_NAME.yaml</code></p> <p><strong>It took about 20-25 minutes for it to fully work.</strong> </p>
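<p>While waiting, the provisioning state can be followed until it changes from <code>Provisioning</code> / <code>FailedNotVisible</code> to <code>Active</code>, for example with:</p>
<pre><code>kubectl describe managedcertificate example-certificate
watch kubectl get managedcertificate
</code></pre>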
Dawid Kruk
<p>I created a basic react/express app with IAP authentication and deployed to google app engine and everything works as expected. Now i'm moving from app engine deployment to kubernetes, all things work except the user authentication with IAP on kubernetes. How do I enable IAP user authentication with kubernetes?</p> <p>Do I have to create a kubernetes secret to get user authentication to work? <a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto" rel="nofollow noreferrer">https://cloud.google.com/iap/docs/enabling-kubernetes-howto</a></p> <p>Authentication code in my server.js <a href="https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy" rel="nofollow noreferrer">https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy</a></p>
Machine Learning
<p>For Cloud IAP to work with Kubernetes, you will need a group of one or more GKE instances served by an HTTPS load balancer. The load balancer should be created automatically when you create an <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">Ingress object</a> in a GKE cluster.</p> <p>Also required for enabling Cloud IAP on GKE: a domain name registered to the address of your load balancer, and app code that verifies all requests carry an <a href="https://cloud.google.com/iap/docs/identity-howto" rel="nofollow noreferrer">identity</a>.</p> <p>Once these requirements are met, you can move forward with <a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto#enabling_iap" rel="nofollow noreferrer">enabling Cloud IAP</a> on Kubernetes Engine. This includes setting up Cloud IAP access and creating OAuth credentials.</p> <p>Yes, you will need to create a Kubernetes Secret to <a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto#kubernetes-configure" rel="nofollow noreferrer">configure BackendConfig for Cloud IAP</a>: the BackendConfig uses that Secret to wrap the OAuth client you create when enabling Cloud IAP.</p>
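<p>To give a feeling for that last step, here is a hedged sketch. The secret and BackendConfig names are placeholders, and depending on your GKE version the <code>BackendConfig</code> apiVersion may be <code>cloud.google.com/v1beta1</code> instead of <code>cloud.google.com/v1</code>:</p>
<pre><code># wrap the OAuth client you created when enabling Cloud IAP
kubectl create secret generic my-iap-secret \
  --from-literal=client_id=YOUR_OAUTH_CLIENT_ID \
  --from-literal=client_secret=YOUR_OAUTH_CLIENT_SECRET
</code></pre>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-backend-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-iap-secret
</code></pre>
<p>The <code>BackendConfig</code> is then attached to the Service your Ingress points at via the <code>cloud.google.com/backend-config: '{"default": "iap-backend-config"}'</code> annotation, so that IAP sits in front of your react/express app the same way App Engine did it for you.</p>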
Md Daud Walizarif
<p>I have helm deployment scripts for a vendor application which we are operating. For logging solution, I need to add a sidecar container for fluentbit to push the logs to aggregated log server (splunk in this case).</p> <p>Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).</p> <p>So far I have understood that sidecar container can be defined inside the same deployment script (deployment configuration).</p>
Obaid Maroof
<p>Answering the question in the comments:</p> <blockquote> <p>thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.</p> </blockquote> <p><strong>You can't attach the additional container to a running <code>Pod</code></strong>. You can update (patch) the resource definition. This will force the resource to be recreated with new specification.</p> <p>There is a github issue about this feature which was closed with the following comment:</p> <blockquote> <p>After discussing the goals of SIG Node, <strong>the clear consensus is that the containers list in the pod spec should remain immutable</strong>. <a href="https://github.com/kubernetes/kubernetes/issues/27140" rel="nofollow noreferrer">#27140</a> will be better addressed by <a href="https://github.com/kubernetes/community/pull/649" rel="nofollow noreferrer">kubernetes/community#649</a>, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.</p> <p>-- <em><a href="https://github.com/kubernetes/kubernetes/issues/37838" rel="nofollow noreferrer">Github.com: Kubernetes: Issues: Allow containers to be added to a running pod</a></em></p> </blockquote> <hr /> <p>Answering the part of the post:</p> <blockquote> <p>Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).</p> </blockquote> <p>Below I've included two methods to add a sidecar to a <code>Deployment</code>. <strong>Both of those methods will reload the <code>Pods</code></strong> to match new specification:</p> <ul> <li>Use <code>$ kubectl patch</code></li> <li>Edit the <code>Helm</code> Chart and use <code>$ helm upgrade</code></li> </ul> <p>In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following below links:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update </a></em></li> <li><em><a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime</a></em></li> </ul> <hr /> <hr /> <h3>Use <code>$ kubectl patch</code></h3> <p>The way to completely avoid editing the Helm charts would be to use:</p> <ul> <li><code>$ kubectl patch</code></li> </ul> <p>This method will &quot;patch&quot; the existing <code>Deployment/StatefulSet/Daemonset</code> and add the sidecar. The downside of this method is that it's not automated like Helm and you would need to create a &quot;patch&quot; for every resource (each <code>Deployment</code>/<code>Statefulset</code>/<code>Daemonset</code> etc.). In case of any updates from other sources like Helm, this &quot;patch&quot; would be overridden.</p> <p>Documentation about updating API objects in place:</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch</a></em></li> </ul> <hr /> <h3>Edit the <code>Helm</code> Chart and use <code>$ helm upgrade</code></h3> <p>This method will require editing the Helm charts. 
The changes made like adding a sidecar will persist through the updates. After making the changes you will need to use the <code>$ helm upgrade RELEASE_NAME CHART</code>.</p> <p>You can read more about it here:</p> <ul> <li><em><a href="https://helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer">Helm.sh: Docs: Helm: Helm upgrade</a></em></li> </ul>
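<p>A rough sketch of the <code>$ kubectl patch</code> method described above: a strategic-merge patch that appends a fluent-bit sidecar to the vendor's Deployment without editing its chart (the Deployment name, namespace, image tag and mount paths are placeholders you would adapt to your logging setup):</p>
<pre><code># patch-sidecar.yaml
spec:
  template:
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.8
        # add volumeMounts here (plus a matching entry under
        # spec.template.spec.volumes) to reach the application's log files
</code></pre>
<pre><code>kubectl patch deployment VENDOR_DEPLOYMENT -n VENDOR_NAMESPACE \
  --patch "$(cat patch-sidecar.yaml)"
</code></pre>
<p>Because the <code>containers</code> list is merged by container name, the vendor's own container is left untouched and only the sidecar is added; the Pods are then recreated to match the new specification, as described above.</p>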
Dawid Kruk
<p>I can access kubernetes dashboard when run <code>kubectl proxy --port=8001</code> and able to sign in with the token that ı recieved from secret.</p> <p><a href="http://localhost:8001/api/v1/namespaces/prod/services/http:kubernetes-dashboard:http/proxy/#/about?namespace=default" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/prod/services/http:kubernetes-dashboard:http/proxy/#/about?namespace=default</a></p> <p>But when I expose my application via ingress I can also access the UI but I can not sign in with the token that ı can sign in for local. I am struggling to resolve this issue any help will be appreciated.</p> <pre><code>{ "jweToken": "", "errors": [ { "ErrStatus": { "metadata": {}, "status": "Failure", "message": "MSG_LOGIN_UNAUTHORIZED_ERROR", "reason": "Unauthorized", "code": 401 } } ] } </code></pre> <p>curl -v <a href="http://..../" rel="nofollow noreferrer">http://..../</a> -H "" Output</p> <pre><code>&lt; HTTP/1.1 200 OK &lt; Accept-Ranges: bytes &lt; Cache-Control: no-store &lt; Content-Type: text/html; charset=utf-8 &lt; Date: Tue, 18 Feb 2020 14:28:28 GMT &lt; Last-Modified: Fri, 07 Feb 2020 13:15:14 GMT &lt; Vary: Accept-Encoding &lt; Content-Length: 1262 &lt; &lt;!-- Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --&gt; </code></pre> <p>ingress logs</p> <pre><code>8T14:21:42Z"} {"level":"warning","msg":"Endpoints not available for prod/kubernetes-dashboard","time":"2020-02-18T14:21:42Z"} {"level":"warning","msg":"Endpoints not available for prod/kubernetes-dashboard","time":"2020-02-18T14:21:43Z"} {"level":"warning","msg":"Endpoints not available for prod/kubernetes- </code></pre>
semural
<blockquote> <p>When I expose my application via ingress I can also access the UI but I can not sign in with the token that I can sign in for local.</p> </blockquote> <p>According to the <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md" rel="nofollow noreferrer">Kubernetes Dashboard</a> documentation, since version 1.7.x (2017) you can still access the dashboard over HTTP via localhost, the same way <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.6.x-and-below.md" rel="nofollow noreferrer">older versions do</a>.</p> <p>But when you choose to expose it:</p> <blockquote> <p>Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.</p> </blockquote> <p>In order to expose your dashboard you need to configure HTTPS access. You mentioned in the comments that you are running Kubernetes on premises and that you wish to access the dashboard via a node IP.</p> <p><strong>In this case, follow: <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md#nodeport" rel="nofollow noreferrer">Accessing Dashboard via NodePort</a>.</strong></p> <p>Remember that instead of accessing <code>https://&lt;master-ip&gt;:&lt;nodePort&gt;</code> you should access <code>https://&lt;node-ip&gt;:&lt;nodePort&gt;</code> of the node on which the dashboard is installed.</p> <p>If you have any doubts let me know in the comments and I'll help you.</p>
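<p>As a concrete starting point, the NodePort exposure from that guide essentially boils down to switching the Service type, which can be done with a one-liner (adjust the namespace and Service name to your deployment; from your URL the dashboard appears to live in <code>prod</code>):</p>
<pre><code>kubectl -n prod patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n prod get service kubernetes-dashboard   # note the assigned nodePort
</code></pre>
<p>Then open <code>https://&lt;node-ip&gt;:&lt;nodePort&gt;</code>; provided the dashboard container itself serves HTTPS (the default in the recommended deployment), the token sign-in should work again because the connection stays encrypted end to end.</p>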
Will R.O.F.
<p>I have map windows folder into me linux machine with </p> <pre><code>mount -t cifs //AUTOCHECK/OneStopShopWebAPI -o user=someuser,password=Aa1234 /xml_shared </code></pre> <p>and the following command </p> <blockquote> <p>df -hk</p> </blockquote> <p>give me </p> <pre><code>//AUTOCHECK/OneStopShopWebAPI 83372028 58363852 25008176 71% /xml_shared </code></pre> <p>after that I create yaml file with </p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-nfs-jenkins-slave spec: storageClassName: jenkins-slave-data accessModes: - ReadWriteMany resources: requests: storage: 4Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: pv-nfs-jenkins-slave labels: type: jenkins-slave-data2 spec: storageClassName: jenkins-slave-data capacity: storage: 4Gi accessModes: - ReadWriteMany nfs: server: 192.168.100.109 path: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared" </code></pre> <p>this seems to not work when I create new pod </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: jenkins-slave labels: label: jenkins-slave spec: containers: - name: node image: node command: - cat tty: true volumeMounts: - mountPath: /var/jenkins_slave_shared name: jenkins-slave-vol volumes: - name: jenkins-slave-vol persistentVolumeClaim: claimName: pvc-nfs-jenkins-slave </code></pre> <p>do i need to change the nfs ? what is wrong with me logic?</p>
shaharnakash
<p>Mounting the CIFS share on the Linux machine itself is correct, but you need a different approach to mount a CIFS volume in Kubernetes. Let me explain:</p> <p>There are some differences between <a href="https://ipwithease.com/difference-between-cifs-and-nfs/" rel="nofollow noreferrer">NFS and CIFS</a>: they are different protocols, so declaring the share under the <code>nfs:</code> volume source in your PersistentVolume will not work. For CIFS you need a driver that understands the protocol.</p> <p>This site explains the whole process step by step: <a href="https://github.com/fstab/cifs" rel="nofollow noreferrer">Github CIFS Kubernetes</a>. </p>
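<p>Very roughly, once that FlexVolume driver is installed on every node, the volume in your pod or PersistentVolume is declared through <code>flexVolume</code> instead of <code>nfs</code>. The sketch below is based on that project's README, so double-check the option names against the version you install; the referenced secret holds the share's username and password:</p>
<pre><code>volumes:
- name: jenkins-slave-vol
  flexVolume:
    driver: "fstab/cifs"
    fsType: "cifs"
    secretRef:
      name: cifs-secret
    options:
      networkPath: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
      mountOptions: "dir_mode=0755,file_mode=0644,noperm"
</code></pre>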
Dawid Kruk
<p>I am running JupyterHub on Google Cloud VM but due to some reasons I am not able to access JupyterHub running on VM now. Rather than resolving the issue with current JupyterHub I wanted to migrate JupyterHub on our Google Kubernetes Engine, so I installed another JupyterHub on Google Kubernetes Engine using <code>zero-to-jupyterhub-k8s</code>.</p> <p>Now everything is running fine but I want to migrate the data saved on the old JupyterHub VM to my new JupyterHub. The new JupyterHub using Persistent Volume claims as storage for each of the pods of users. Could someone please let me know how can I do it?</p>
tank
<p>Posting this answer as a community wiki for a better visibility as well to add some additional resources that could help when encountered with similar scenario.</p> <p>The issue portrayed in the question was resolved by copying user data to GCS bucket and then mounting the data to the user pods as posted in the comment:</p> <blockquote> <p>I solved this issue by copying the data from the VM to Google Cloud Storage and then mounted the GCS Bucket on the user pods in JupyterHub on Google Kubernetes Engine.</p> </blockquote> <p>The guide for installing <code>zero-to-jupyterhub-k8s</code>:</p> <ul> <li><em><a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/google/step-zero-gcp.html" rel="nofollow noreferrer">Zero-to-jupyterhub: GKE cluster provisioning</a></em></li> <li><em><a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub/index.html" rel="nofollow noreferrer">Zero-to-jupyterhub: Jupyterhub installation</a></em></li> </ul> <p>Resources on mounting GCS bucket to the Kubernetes pod:</p> <ul> <li><em><a href="https://github.com/maciekrb/gcs-fuse-sample" rel="nofollow noreferrer">Github.com: Maciekrb: gcs-fuse-sample</a></em></li> <li><em><a href="https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/" rel="nofollow noreferrer">Karlstoney.com: Fuse mount in Kubernetes</a></em></li> </ul> <hr /> <p>Citing the Github page:</p> <blockquote> <p><strong>Disclaimer!</strong></p> <p>The big catch is that for this to work, the container has to be built with gcsfuse. The Dockerfile includes a base build for debian jessie.</p> </blockquote> <blockquote> <p>The most note worthy parts of the configuration are the following:</p> <pre class="lang-yaml prettyprint-override"><code>securityContext: privileged: true capabilities: add: - SYS_ADMIN </code></pre> <p>For the container to have access to /dev/fuse it has to run with SYS_ADMIN capabilities.</p> <pre class="lang-yaml prettyprint-override"><code>lifecycle: postStart: exec: command: [&quot;gcsfuse&quot;, &quot;-o&quot;, &quot;nonempty&quot;, &quot;test-bucket&quot;, &quot;/mnt/test-bucket&quot;] preStop: exec: command: [&quot;fusermount&quot;, &quot;-u&quot;, &quot;/mnt/test-bucket&quot;] </code></pre> </blockquote>
Dawid Kruk
<p>I don't understand a difference between running 1 Kafka node with 3 replicas and 3 Kafka nodes each with 1 replica.</p> <p>We maintain our own Kubernetes cluster where we want to run a Kafka cluster. We're using a <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">Bitnami Helm chart</a>. </p> <p>We can set:</p> <ol> <li>...3 different Kafka services with 1 replica and each has its own URL (e.g. localhost:9092, localhost:9093 and localhost:9094).</li> <li>...1 Kafka service running in 3 replicas (there are only 1 URL localhost:9092 for all replicas).</li> </ol> <p>Is there any difference in a way of synchronization and what is a better way for configuration?</p>
Prokop Simek
<p>With 1 Kafka node and 3 replicas, everything lives on the same machine: the data is stored on a single server, so replication there mainly protects you against data corruption, not against losing that machine. </p> <p>Running 3 Kafka brokers with 1 replica each is a different approach. For example, if one of your servers goes down, another broker can take over the leader position for a given topic, as long as that topic's data was replicated to it. This is one of the beauties of Kafka: if you configure it the right way, ZooKeeper handles the leader election and your service will not crash. </p> <p>One of the best practices you can apply in production is to run a ZooKeeper ensemble (the leader electors) and put 3 or 4 Kafka brokers on different machines, each topic with a replication factor of 3. This gives you strong consistency for your data, and even if one or two servers go down, Kafka keeps running safely.</p> <p>This happened to me: 4 brokers, 2 of them down, and everything was still running perfectly. Some details do need to be set in the configuration, though. I suggest having a look at <a href="https://www.youtube.com/watch?v=ZOU7PJWZU9w&amp;list=PLt1SIbA8guusxiHz9bveV-UHs_biWFegU&amp;index=5" rel="nofollow noreferrer">Stephane Maarek on YT</a>. </p>
William Prigol Lopes
<p>I have a minikube running with the deployment of django app. Till today, we used server which django spins up. Now, I have added another Nginx container so that we can deploy django app cause I read django is not really for production. After reading some documentation and blogs, I configured the deployment.yaml file and it is running very much fine now. The problem is that no static content is being served. This is really because static content is in django container and not Nginx container. (Idk if they can share volume or not, please clarify this doubt or misconception) What will be the best way so I can serve my static content? This is my deployment file's spec:</p> <pre><code>spec: containers: - name: user-django-app image: my-django-app:latest ports: - containerPort: 8000 env: - name: POSTGRES_HOST value: mysql-service - name: POSTGRES_USER value: admin - name: POSTGRES_PASSWORD value: admin - name: POSTGRES_PORT value: "8001" - name: POSTGRES_DB value: userdb - name: user-nginx image: nginx volumeMounts: - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: nginx-config configMap: name: nginx-config </code></pre> <p>I believe that </p> <pre><code>server { location /static { alias /var/www/djangoapp/static; } } </code></pre> <p>needs to be changed. But I don't know what should I write? Also, how can I run <code>python manage.py migrate</code> and <code>python manage.py collectstatic</code> as soon as the deployment is made. </p> <p>Kindly provide resource/docs/blogs which will assist me doing this. Thank you! Thank you.</p> <p>After @willrof 's answer, this is my current YAML file.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: user-deployment labels: app: web spec: replicas: 1 selector: matchLabels: app: web micro-service: user template: metadata: name: user labels: app: web micro-service: user spec: containers: - name: user-django-app image: docker.io/dev1911/drone_plus_plus_user:latest ports: - containerPort: 8000 env: - name: POSTGRES_HOST value: mysql-service - name: POSTGRES_USER value: admin - name: POSTGRES_PASSWORD value: admin - name: POSTGRES_PORT value: "8001" - name: POSTGRES_DB value: userdb volumeMounts: - name: shared mountPath: /shared command: ["/bin/sh", "-c"] args: ["apt-get install nano"] - name: user-nginx image: nginx volumeMounts: - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf - name: shared mountPath: /var/www/user/static volumes: - name: nginx-config configMap: name: nginx-config - name: shared emptyDir: {} </code></pre> <p>And nginx-config file is</p> <pre><code>worker_processes auto; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 4096; ## Default: 1024 } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format ltsv 'domain:$host\t' 'host:$remote_addr\t' 'user:$remote_user\t' 'time:$time_local\t' 'method:$request_method\t' 'path:$request_uri\t' 'protocol:$server_protocol\t' 'status:$status\t' 'size:$body_bytes_sent\t' 'referer:$http_referer\t' 'agent:$http_user_agent\t' 'response_time:$request_time\t' 'cookie:$http_cookie\t' 'set_cookie:$sent_http_set_cookie\t' 'upstream_addr:$upstream_addr\t' 'upstream_cache_status:$upstream_cache_status\t' 'upstream_response_time:$upstream_response_time'; access_log /var/log/nginx/access.log ltsv; sendfile on; tcp_nopush on; server_names_hash_bucket_size 128; # this seems to be required for some vhosts keepalive_timeout 65; gzip on; server { listen 80; 
server_name example.com ; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location /static { alias /var/www/user/static; } } # include /etc/nginx/conf.d/*.conf; } </code></pre> <p>I did not write this config but found this and edited to my use.</p>
Nikhil Jagtap
<p>After our chat in comments, you told me you are having difficulties with using cmd and args.</p> <p>Here is an example called <code>two-containers.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: two-containers spec: restartPolicy: Never containers: - name: python image: python volumeMounts: - name: shared-data mountPath: /pod-data command: ["/bin/sh"] args: ["-c", "apt-get update &amp;&amp; apt-get install -y curl &amp;&amp; mkdir /curl-folder &amp;&amp; cp /usr/bin/curl /curl-folder &amp;&amp; cp -r /curl-folder /pod-data/"] - name: user-nginx image: nginx volumeMounts: - name: shared-data mountPath: /tmp/pod-data volumes: - name: shared-data emptyDir: {} </code></pre> <p><code>python</code> will start up, run <code>apt-get update</code> then <code>apt-get install -y curl</code> then <code>mkdir /curl-folder</code> then copy <code>usr/bin/curl</code> to <code>/curl-folder</code> then copy the folder <code>/curl-folder</code> to <code>/pod-data</code> shared mounted volume.</p> <p><strong>A few observations:</strong></p> <ul> <li>The container image has to have the binary mentioned in <code>command</code> (like <code>/bin/sh</code> in python).</li> <li>Try using <code>&amp;&amp;</code> to chain commands consecutively in the args field it's easier to test and deploy.</li> </ul> <p><strong>Reproduction:</strong></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f two-container-volume.yaml pod/two-containers created $ kubectl get pods -w NAME READY STATUS RESTARTS AGE two-containers 2/2 Running 0 7s two-containers 1/2 NotReady 0 30s $ kubectl describe pod two-containers ... Containers: python: Container ID: docker://911462e67d7afab9bca6cdaea154f9229c80632efbfc631ddc76c3d431333193 Image: python Command: /bin/sh Args: -c apt-get update &amp;&amp; apt-get install -y curl &amp;&amp; mkdir /curl-folder &amp;&amp; cp /usr/bin/curl /curl-folder &amp;&amp; cp -r /curl-folder /pod-data/ State: Terminated Reason: Completed Exit Code: 0 user-nginx: State: Running </code></pre> <ul> <li>The <code>python</code> container executed and completed correctly, now let's check the files logging inside the nginx container.</li> </ul> <pre><code>$ kubectl exec -it two-containers -c user-nginx -- /bin/bash root@two-containers:/# cd /tmp/pod-data/curl-folder/ root@two-containers:/tmp/pod-data/curl-folder# ls curl </code></pre> <p>If you need further help, post the yaml with the command+args as you are trying to run and we can help you troubleshoot the syntax.</p>
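<p>Applied to your Deployment, the same pattern answers the <code>migrate</code> / <code>collectstatic</code> part of your question: run the management commands before the server starts and let them write the static files into the shared volume that the nginx container also mounts. A hedged sketch, with environment variables omitted for brevity; it assumes <code>gunicorn</code> is available in your image and that <code>STATIC_ROOT = "/shared"</code> is set in <code>settings.py</code>, and <code>YOUR_PROJECT</code> is a placeholder for your Django project module (otherwise keep whatever command you currently use to start Django):</p>
<pre><code>      - name: user-django-app
        image: docker.io/dev1911/drone_plus_plus_user:latest
        command: ["/bin/sh", "-c"]
        args: ["python manage.py migrate &amp;&amp; python manage.py collectstatic --noinput &amp;&amp; gunicorn YOUR_PROJECT.wsgi:application --bind 0.0.0.0:8000"]
        volumeMounts:
        - name: shared
          mountPath: /shared
      - name: user-nginx
        image: nginx
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: shared
          mountPath: /var/www/user/static   # matches "alias /var/www/user/static;" in nginx.conf
</code></pre>
<p>With that, <code>collectstatic</code> fills <code>/shared</code> on the Django side and nginx serves the same files from <code>/var/www/user/static</code>, so the static content is no longer trapped inside the Django container.</p>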
Will R.O.F.
<p>This is my ingress file , what I need is how to add https redirection settings here in ingress file , I did it using service file and it works but after to reduce costs I decided to use SINGLE ingress file which manage multiple services with SINGLE AWS CLASSIC load balancer. </p> <pre><code> apiVersion: extensions/v1beta1 kind: Ingress metadata: generation: 4 name: brain-xx namespace: xx spec: rules: - host: app.xx.com http: paths: - backend: serviceName: xx-frontend-service servicePort: 443 path: / status: loadBalancer: ingress: - ip: xx.xx.xx.xx </code></pre>
R A
<p>I have managed to create <code>http</code> to <code>https</code> redirection on GKE. Let me know if this solution will work for your case on AWS: </p> <p><strong>Steps to reproduce</strong></p> <ul> <li>Apply Ingress definitions</li> <li>Configure basic HTTP ingress resource</li> <li>Create SSL certificate </li> <li>Replace old Ingress resource with HTTPS enabled one. </li> </ul> <h2>Apply Ingress definitions</h2> <p>Follow this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">Ingress link </a> to check if there are any needed prerequisites before installing NGINX Ingress controller on your AWS infrastructure and install it. </p> <h3>Configure basic HTTP ingress resource and test it</h3> <p>Example below is Ingress configuration with HTTP traffic only. It will act as starting point: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-http annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - host: xx.yy.zz http: paths: - path: / backend: serviceName: hello-service servicePort: hello-port - path: /v2/ backend: serviceName: goodbye-service servicePort: goodbye-port </code></pre> <p><strong>Please change this file to reflect configuration appropriate to your case.</strong></p> <h3>Create SSL certificate</h3> <p>For this to work without browser's security warnings you will need valid SSL certificate and a domain name. </p> <p>To create this certificate you can use for example: <a href="https://www.linode.com/docs/security/ssl/install-lets-encrypt-to-create-ssl-certificates/" rel="nofollow noreferrer">Linode create Let's Encrypt SSL certificates</a>.</p> <p>Let's Encrypt will create files which will be used later. </p> <h3>Configure HTTPS ingress resource and test it</h3> <p>By default Nginx Ingress will create a self-signed certificate if he's not provided one. To provide him one you will need to add it as a secret to your Kubernetes cluster. </p> <p>As I said earlier the files (<code>cert.pem privkey.pem</code>) that Let's Encrypt created will be added to Kubernetes to configure HTTPS. </p> <p>Below command will use this files to create secret for Ingress:</p> <p><code>$ kubectl create secret tls ssl-certificate --cert cert.pem --key privkey.pem</code></p> <p>This Ingress configuration support HTTPS as well as redirects all the traffic to it: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-https annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" spec: tls: - secretName: ssl-certificate rules: - host: xx.yy.zz http: paths: - path: / backend: serviceName: hello-service servicePort: hello-port - path: /v2/ backend: serviceName: goodbye-service servicePort: goodbye-port </code></pre> <p><strong>Please change this file to reflect configuration appropriate to your case.</strong></p> <p>Take a look at this fragment which will enable HTTPS and redirect all the traffic to it: </p> <pre><code> nginx.ingress.kubernetes.io/force-ssl-redirect: "true" spec: tls: - secretName: ssl-certificate </code></pre> <p>Apply this configuration and check if it worked for you. 
</p> <p>Below is part of curl output which shows that connecting to <code>http://xx.yy.zz</code> gives redirection to <code>https://xx.yy.zz</code></p> <pre><code>&lt; HTTP/1.1 308 Permanent Redirect &lt; Server: openresty/1.15.8.2 &lt; Date: Fri, 20 Dec 2019 15:06:57 GMT &lt; Content-Type: text/html &lt; Content-Length: 177 &lt; Connection: keep-alive &lt; Location: https://xx.yy.zz/ </code></pre>
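<p>For reference, the output above can be reproduced with a plain verbose request against the HTTP endpoint, along these lines (<code>xx.yy.zz</code> being the placeholder domain used throughout this answer):</p> <pre><code># -v prints the response headers, so the 308 status and the Location header are visible
curl -v http://xx.yy.zz
</code></pre>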
Dawid Kruk
<p>My k8s cluster runs on minikube.</p> <p>I am familiar with the kubectl port-forward command, which allows me to route traffic from localhost into the cluster.</p> <p>Is there a way to do it the other way around? Can I route the traffic from one of the pods to the web server that runs locally on my machine?</p>
Pavel Ryvintsev
<p>The way to connect from your <code>minikube</code> pod to your host will heavily depend on the type of <code>--driver</code> you used.</p> <p>Each <code>--driver</code> could alter the way to connect from your pod to your host. What I mean is that there could be multiple options for each <code>--driver</code> to connect to your host.</p> <p>As pointed out by user @Srikrishna B H:</p> <blockquote> <p>Make sure you use local machine IP instead of <code>localhost</code> while connecting to web server running locally in your machine.</p> </blockquote> <p>I've created 3 examples:</p> <ul> <li>Ubuntu with <code>minikube start --driver=docker</code></li> <li>Mac OS with <code>minikube start --driver=hyperkit</code></li> <li>Windows with <code>minikube start --driver=virtualbox</code></li> </ul> <blockquote> <p><strong>IP addresses used below are only for example purposes!</strong></p> </blockquote> <hr /> <h3>Ubuntu with Docker</h3> <p>Assuming that you have an Ubuntu machine with Docker installed and <code>nginx</code> working as a server that your pod will connect to:</p> <ul> <li><code>$ minikube start --driver=docker</code></li> <li><code>$ ip addr show</code>: <ul> <li><code>docker0</code> - docker interface</li> <li><code>ensX</code> - &quot;physical&quot; interface</li> </ul> </li> </ul> <p>The command above will show you the host IP addresses your pod can connect to:</p> <pre class="lang-sh prettyprint-override"><code>2: ensX: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1460 qdisc mq state UP group default qlen 1000 &lt;----&gt; inet 10.0.0.2/32 scope global dynamic ensX &lt;----&gt; </code></pre> <pre class="lang-sh prettyprint-override"><code>3: docker0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default &lt;----&gt; inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 &lt;----&gt; </code></pre> <p>Then you can spawn a pod to check if you can connect to your nginx:</p> <ul> <li><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh</code></li> <li><code>$ wget -qO - IP_ADDRESS</code> - <code>busybox</code> image doesn't have <code>curl</code> installed <ul> <li><code>172.17.0.1</code></li> <li><code>10.0.0.2</code></li> </ul> </li> </ul> <hr /> <h3>Mac OS with Hyperkit</h3> <p>Assuming that you have a Mac OS machine and you have configured <code>nginx</code> on port <code>8080</code>:</p> <ul> <li><code>$ minikube start --driver=hyperkit</code></li> <li><code>$ ifconfig</code>: <ul> <li><code>bridge100</code></li> <li><code>enX</code> - &quot;physical&quot; interface</li> </ul> </li> </ul> <blockquote> <p>Disclaimer!</p> <p>You will need to allow connections to your nginx outside of <code>localhost</code> in the firewall or completely disable it (not recommended!)</p> </blockquote> <pre class="lang-sh prettyprint-override"><code>bridge100: flags=8a63&lt;UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST&gt; mtu 1500 &lt;----&gt; inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255 &lt;----&gt; </code></pre> <pre class="lang-sh prettyprint-override"><code>enX: flags=8863&lt;UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST&gt; mtu 1500 &lt;----&gt; inet 192.168.1.101 netmask 0xffffff00 broadcast 192.168.0.255 &lt;----&gt; </code></pre> <p>Then you can spawn a pod to check if you can connect to your nginx:</p> <ul> <li><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh</code></li> <li><code>$ wget -qO - IP_ADDRESS</code> - <code>busybox</code> image doesn't have <code>curl</code> installed <ul> 
<li><code>192.168.64.1:8080</code></li> <li><code>192.168.1.101:8080</code></li> </ul> </li> </ul> <hr /> <h3>Windows with Virtualbox</h3> <p>When you create a <code>minikube</code> instance with <code>--driver=virtualbox</code> in Windows, it creates a <code>VM</code> with 2 network interfaces:</p> <ul> <li><code>NAT</code> - used to communicate with outside (Internet)</li> <li><code>Virtualbox Host-Only Ethernet Adapter</code> - used to communicate between <code>minikube</code> and your host</li> </ul> <p>Assuming that you have a Windows machine with Virtualbox, you have configured <code>nginx</code> on port <code>80</code> and also you have a running <code>minikube</code> instance.</p> <p>You will need to get the IP addresses (<code>$ ipconfig</code>) of:</p> <ul> <li>Your &quot;physical&quot; interface (<code>Ethernet X</code> for example)</li> <li>Your <code>Virtualbox Host-Only Ethernet Adapter</code></li> </ul> <pre class="lang-bsh prettyprint-override"><code>Ethernet adapter Ethernet X: &lt;----&gt; IPv4 Address. . . . . . . . . . . : 192.168.1.3 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1 </code></pre> <pre class="lang-bsh prettyprint-override"><code>Ethernet adapter VirtualBox Host-Only Network #X: &lt;---&gt; IPv4 Address. . . . . . . . . . . : 192.168.99.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : </code></pre> <blockquote> <p>Disclaimer!</p> <p>You will need to check that your firewall accepts the traffic destined for nginx (i.e. that it's not blocked).</p> </blockquote> <p>Then you can spawn a pod to check if you can connect to your nginx:</p> <ul> <li><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh</code></li> <li><code>$ wget -qO - IP_ADDRESS</code> - <code>busybox</code> image doesn't have <code>curl</code> installed <ul> <li><code>192.168.99.1</code></li> <li><code>192.168.1.3</code></li> </ul> </li> </ul>
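<p>As a side note: newer <code>minikube</code> releases also expose the host machine under a fixed DNS name, <code>host.minikube.internal</code>, which avoids hunting for interface IPs. Treat this as an assumption to verify for your particular minikube version and driver; if it resolves, the same busybox test looks like this (port <code>8080</code> is just the example web server port used above):</p> <pre><code># From inside a test pod; host.minikube.internal should resolve to the host machine
$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
/ # wget -qO - http://host.minikube.internal:8080
</code></pre>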
Dawid Kruk
<p>For my React App made with <code>create-react-app</code>, I want to use Kubernetes secrets as environment variables.</p> <p>These secrets are used for different NodeJS containers in my cluster and they work just fine. I used a shell to echo the variables within the frontend container itself, and they are there but I realize that the environment variables are not loaded in the React App.</p> <p>I have ensured that the environment variable keys start with <code>REACT_APP_</code>, so that is not the problem.</p> <p>Here are some files:</p> <p><code>frontend.yml</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend labels: app: frontend spec: selector: matchLabels: app: frontend replicas: 1 template: metadata: labels: app: frontend spec: containers: - name: frontend image: build_link ports: - containerPort: 3000 envFrom: - secretRef: name: prod </code></pre> <p>Dockerfile</p> <pre><code>FROM node:17.0.1-alpine3.12 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app # latest yarn RUN npm i -g --force yarn serve COPY package.json yarn.lock . RUN yarn --frozen-lockfile # Legacy docker settings cos node 17 breaks build ENV NODE_OPTIONS=--openssl-legacy-provider COPY . . RUN yarn build ENV NODE_ENV=production EXPOSE 3000 CMD [&quot;yarn&quot;, &quot;prod&quot;] </code></pre> <p>Kubernetes <code>prod</code> secret</p> <pre><code>kubectl describe secret prod Name: prod Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Type: Opaque Data ==== REACT_APP_DOCS_SERVER_URL: 41 bytes REACT_APP_AUTH0_CLIENT_ID: 32 bytes REACT_APP_AUTH0_DOMAIN: 24 bytes </code></pre> <p>env of react app console logged in production (sorry i was desperate)</p> <pre class="lang-yaml prettyprint-override"><code>FAST_REFRESH: true NODE_ENV: &quot;production&quot; PUBLIC_URL: &quot;&quot; WDS_SOCKET_HOST: undefined WDS_SOCKET_PATH: undefined WDS_SOCKET_PORT: undefined </code></pre>
Marcus Teh
<p>The build command for <code>create-react-app</code> bakes in only the environment variables that are present during the build phase, so environment variables injected later (for example from Kubernetes secrets at runtime) are missing.</p> <p>The fix I made was to remove the build step from the Dockerfile and combine the build and serve steps into a single command, so the build runs when the container starts and the runtime environment variables are available: <code>package.json</code></p> <pre class="lang-json prettyprint-override"><code>{ ... &quot;scripts&quot;: { &quot;prod&quot;: &quot;yarn build &amp;&amp; serve build -l 3000&quot;, &quot;build&quot;: &quot;react-scripts build&quot; } } </code></pre>
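<p>For completeness, the corresponding Dockerfile change is sketched below (only the relevant tail of the Dockerfile from the question is shown; treat it as an outline rather than a drop-in file):</p> <pre><code># ... base image, yarn install and COPY steps stay as before ...
# RUN yarn build   &lt;-- removed: building here would only see build-time env vars
ENV NODE_ENV=production
EXPOSE 3000
# &quot;yarn prod&quot; now runs `yarn build &amp;&amp; serve build -l 3000`, so the REACT_APP_*
# values injected by Kubernetes are present when the build actually happens
CMD [&quot;yarn&quot;, &quot;prod&quot;]
</code></pre>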
Marcus Teh
<p>I am trying to set up <code>HorizontalPodAutoscaler</code> autoscaler for my app, alongside automatic <a href="https://www.digitalocean.com/docs/kubernetes/resources/autoscaling-with-hpa-ca/" rel="nofollow noreferrer">Cluster Autoscaling of DigitalOcean</a></p> <p>I will add my deployment yaml below, I have also deployed <code>metrics-server</code> as per guide in link above. At the moment I am struggling to figure out how to determine what values to use for my cpu and memory <code>requests</code> and <code>limits</code> fields. Mainly due to variable replica count, i.e. do I need to account for maximum number of replicas each using their resources or for deployment in general, do I plan it per pod basis or for each container individually?</p> <p>For some context I am running this on a cluster that can have up to two nodes, each node has 1 vCPU and 2GB of memory (so total can be 2 vCPUs and 4 GB of memory).</p> <p>As it is now my cluster is running one node and my <code>kubectl top</code> statistics for pods and nodes look as follows:</p> <p><strong>kubectl top pods</strong></p> <pre><code>NAME CPU(cores) MEMORY(bytes) graphql-85cc89c874-cml6j 5m 203Mi graphql-85cc89c874-swmzc 5m 176Mi </code></pre> <p><strong>kubectl top nodes</strong></p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% skimitar-dev-pool-3cpbj 62m 6% 1151Mi 73% </code></pre> <p>I have tried various combinations of cpu and resources, but when I deploy my file my deployment is either stuck in a <code>Pending</code> state, or keeps restarting multiple times until it gets terminated. My horizontal pod autoscaler also reports targets as <code>&lt;unknown&gt;/80%</code>, but I believe it is due to me removing <code>resources</code> from my deployment, as it was not working.</p> <p>Considering deployment below, what should I look at / consider in order to determine best values for <code>requests</code> and <code>limits</code> of my resources?</p> <p>Following yaml is cleaned up from things like env variables / services, it works as is, but results in above mentioned issues when <code>resources</code> fields are uncommented. </p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: graphql spec: replicas: 2 selector: matchLabels: app: graphql template: metadata: labels: app: graphql spec: containers: - name: graphql-hasura image: hasura/graphql-engine:v1.2.1 ports: - containerPort: 8080 protocol: TCP livenessProbe: httpGet: path: /healthz port: 8080 readinessProbe: httpGet: path: /healthz port: 8080 # resources: # requests: # memory: "150Mi" # cpu: "100m" # limits: # memory: "200Mi" # cpu: "150m" - name: graphql-actions image: my/nodejs-app:1 ports: - containerPort: 4040 protocol: TCP livenessProbe: httpGet: path: /healthz port: 4040 readinessProbe: httpGet: path: /healthz port: 4040 # resources: # requests: # memory: "150Mi" # cpu: "100m" # limits: # memory: "200Mi" # cpu: "150m" # Disruption budget --- apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: name: graphql-disruption-budget spec: minAvailable: 1 selector: matchLabels: app: graphql # Horizontal auto scaling --- apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: graphql-autoscaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: graphql minReplicas: 2 maxReplicas: 3 metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 </code></pre>
Ilja
<blockquote> <p>How to determine what values to use for my cpu and memory requests and limits fields. Mainly due to variable replica count, i.e. do I need to account for maximum number of replicas each using their resources or for deployment in general, do I plan it per pod basis or for each container individually</p> </blockquote> <p>Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory.</p> <ul> <li><strong>Requests</strong> are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource.</li> <li><strong>Limits</strong>, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.</li> </ul> <p>The number of replicas will be determined by the autoscaler on the <code>ReplicaSet</code>.</p> <blockquote> <p>when I deploy my file my deployment is either stuck in a Pending state, or keeps restarting multiple times until it gets terminated.</p> </blockquote> <ul> <li><p><code>pending</code> state means that there are no resources available to schedule new pods.</p> </li> <li><p><code>restarting</code> may be triggered by other issues; I'd suggest debugging it after solving the scaling issues.</p> </li> </ul> <blockquote> <p>My horizontal pod autoscaler also reports targets as <code>&lt;unknown&gt;/80%</code>, but I believe it is due to me removing resources from my deployment, as it was not working.</p> </blockquote> <ul> <li><p>You are correct, if you don't set the request limit, the % desired will remain unknown and the autoscaler won't be able to trigger scaling up or down.</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">Here</a> you can see the algorithm responsible for that.</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> will trigger new pods based on the request % of usage on the pod. In this case whenever the pod reaches 80% of the max request value it will trigger new pods up to the maximum specified.</p> </li> </ul> <p>For a good HPA example, check this link: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">Horizontal Pod Autoscale Walkthrough</a></p> <hr /> <p><strong>But how does Horizontal Pod Autoscaler <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-horizontal-pod-autoscaler-work-with-cluster-autoscaler" rel="nofollow noreferrer">work with</a> Cluster Autoscaler?</strong></p> <ul> <li><p>Horizontal Pod Autoscaler changes the deployment's or replicaset's number of replicas based on the current CPU load. If the load increases, HPA will create new replicas, for which there may or may not be enough space in the cluster.</p> </li> <li><p>If there are not enough resources, CA will try to bring up some nodes, so that the HPA-created pods have a place to run. If the load decreases, HPA will stop some of the replicas. 
As a result, some nodes may become underutilized or completely empty, and then CA will terminate such unneeded nodes.</p> </li> </ul> <p><strong>NOTE:</strong> The key is to set the maximum replicas for HPA at the cluster level, according to the number of nodes (and budget) available for your app. You can start by setting a fairly high max number of replicas, monitor it, and then change it according to the usage metrics and predictions of future load.</p> <ul> <li>Take a look at <a href="https://www.digitalocean.com/docs/kubernetes/how-to/autoscale/" rel="nofollow noreferrer">How to Enable the Cluster Autoscaler for a DigitalOcean Kubernetes Cluster</a> in order to properly enable it as well.</li> </ul> <p>If you have any questions, let me know in the comments.</p>
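<p>As a concrete starting point for the cluster described in the question (nodes with 1 vCPU / 2 GB, two containers per pod, and <code>kubectl top</code> showing roughly 5m CPU and 176 to 203Mi per pod), requests can be set close to the observed usage with some headroom. The numbers below are only a hedged sketch to tune against <code>kubectl top</code> over time, not a definitive sizing:</p> <pre><code># Per container; the scheduler sums requests per pod, so each pod below reserves
# 200m CPU / 512Mi, leaving room for roughly two such pods on a 1 vCPU / 2 GB node
# (minus system overhead). Memory limits sit well above the observed usage so the
# containers are not OOM-killed right away.
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 200m
    memory: 512Mi
</code></pre>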
Will R.O.F.
<p>I am following <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">this tutorial</a> for setting up Ingress with Ingress-Nginx on Minikube. But I can't seem to get it to work. I get a connection refused when I try to connect to port 80 on the VM IP address returned by <code>minikube ip</code></p> <p>My setup is this:</p> <ul> <li><strong>Minikube version</strong>: v1.25.1</li> <li><strong>VirtualBox version</strong>: 6.1</li> <li><strong>Kubernetes version</strong>: v1.22.5</li> </ul> <p>The ingress-nginx namespace has the below resources:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/ingress-nginx-controller-85f4c5b458-2dhqh 1/1 Running 0 49m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ingress-nginx-controller NodePort 10.102.88.109 &lt;none&gt; 80:30551/TCP,443:31918/TCP 20h service/ingress-nginx-controller-admission ClusterIP 10.103.134.39 &lt;none&gt; 443/TCP 20h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ingress-nginx-controller 1/1 1 1 20h NAME DESIRED CURRENT READY AGE replicaset.apps/ingress-nginx-controller-85f4c5b458 1 1 1 20h NAME COMPLETIONS DURATION AGE job.batch/ingress-nginx-admission-create 1/1 6s 20h job.batch/ingress-nginx-admission-patch 1/1 6s 20h </code></pre> <p>The default namespace has the below resources</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/web-79d88c97d6-rvp2r 1/1 Running 0 47m 10.244.1.4 minikube-m02 &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 20h &lt;none&gt; service/web NodePort 10.104.20.14 &lt;none&gt; 8080:31613/TCP 20h app=web NAME CLASS HOSTS ADDRESS PORTS AGE ingress.networking.k8s.io/example-ingress nginx hello-world.info localhost 80 20h </code></pre> <p>Minikube is exposing these services:</p> <pre><code>|---------------|------------------------------------|--------------|-----------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |---------------|------------------------------------|--------------|-----------------------------| | default | kubernetes | No node port | | default | web | 8080 | http://192.168.59.106:31613 | | ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.59.106:30551 | | | | https/443 | http://192.168.59.106:31918 | | ingress-nginx | ingress-nginx-controller-admission | No node port | | kube-system | kube-dns | No node port | | kube-system | registry | No node port | |---------------|------------------------------------|--------------|-----------------------------| </code></pre> <p>In step 4 of the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress" rel="nofollow noreferrer">Create an Ingress</a> section The tutorial mentions this:</p> <pre><code>Add the following line to the bottom of the /etc/hosts file on your computer (you will need administrator access): 172.17.0.15 hello-world.info Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP. </code></pre> <p>It's a three node cluster using VirtualBox. I've tried adding the Minikube ingress-nginx-controller service's IP (192.168.59.106, which is also the result of minikube ip) to my hosts file, but it doesn't work. 
And as far as I know, I can't include the service's node port 30551 in the hosts file to test that.</p> <p>Some guidance on how to get this working would be much appreciated</p>
cjt
<p>You are correct. You cannot include the port in the <code>/etc/hosts</code> file. To get there, you would need to specify the full path in your browser or some other application as follows (assuming no connectivity issues):</p> <ul> <li><code>hello-world.info:30551</code></li> </ul> <hr /> <p>I'd recommend specifying exactly what type of issue you have. There can be multiple issues and each one will have a different solution.</p> <p>For example, there is a difference between being unable to access the <em>Service</em> at all and getting a <code>404</code> message.</p> <hr /> <p>I'm not sure if it's related, but I had connectivity issues when I created a cluster in the following way:</p> <ul> <li><code>minikube start --driver=&quot;virtualbox&quot;</code></li> <li><code>minikube node add</code></li> <li><code>minikube node add</code></li> </ul> <p>However, when I ran the command below, I encountered no such issues:</p> <ul> <li><code>minikube start --driver=&quot;virtualbox&quot; --nodes=3</code></li> </ul> <hr /> <p>Assuming that you would like to expose your <em>Nginx Ingress controller</em> to be available on the ports <code>80</code> and <code>443</code> instead of NodePorts, you can do:</p> <ul> <li>Spawn your cluster</li> <li>Deploy: <ul> <li><em><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">Metallb.universe.tf</a></em></li> </ul> </li> <li>Configure your address pool similar to:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 192.168.59.200-192.168.59.210 </code></pre> <ul> <li>Change the <em>Service</em> of your <code>ingress-nginx-controller</code> to <em>LoadBalancer</em> instead of <code>NodePort</code> (<code>kubectl edit svc -n ingress-nginx ingress-nginx-controller</code>)</li> <li>Check on the <em>Service</em>: <ul> <li><code>kubectl get svc -n ingress-nginx ingress-nginx-controller</code></li> </ul> </li> </ul> <pre class="lang-yaml prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.106.63.253 192.168.59.201 80:30092/TCP,443:30915/TCP 23m </code></pre> <ul> <li>Put the <em>EXTERNAL-IP</em> of your <em>Ingress controller</em> into your <code>/etc/hosts</code> file.</li> <li>Create an <em>Ingress</em> resource that matches the name you've put into <code>/etc/hosts</code> and points to some backend (see the example below).</li> </ul> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx</a></em></li> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li> </ul>
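<p>A minimal sketch of such an Ingress, based on the <code>web</code> Service from the question (names and the port are taken from the question; adjust them to your setup and Kubernetes version):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.info   # must match the name placed in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web        # the Service from the question
            port:
              number: 8080
</code></pre>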
Dawid Kruk
<p>I would like to run a pod on one of my IoT devices. Each one of those devices contains an environment variable I want this pod to use. Is there any way to inject this env variable into the pod using built-in templating of <code>helm</code>/<code>kubectl</code>? I was trying the following in my <code>deployment.yaml</code> file:</p> <pre><code>env: - name: XXX value: $ENV_FROM_HOST </code></pre> <p>but when executing the pod and trying to get the <code>XXX</code> value, I get the string <code>$ENV_FROM_HOST</code> instead of its value from the host:</p> <pre><code>$ echo $XXX $ENV_FROM_HOST </code></pre> <p>Thanks.</p>
Livne Rosenblum
<p>It's not possible to directly pass the host's env vars to the pods. I often do that by creating a ConfigMap.</p> <ol> <li><p>Create a ConfigMap with the <code>--from-literal</code> option:</p> <pre><code>kubectl create configmap testcm --from-literal=hostname=$HOSTNAME </code></pre> </li> <li><p>Refer to that in the Pod's manifest:</p> <pre class="lang-yaml prettyprint-override"><code>- name: TEST valueFrom: configMapKeyRef: name: testcm key: hostname </code></pre> </li> </ol> <p>This will inject the host's $HOSTNAME into the Pod's $TEST.</p> <p>If it's sensitive information, you can use Secrets instead of a ConfigMap.</p>
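<p>A quick way to verify that the value made it into the container (the pod name below is just a placeholder for whichever pod references the ConfigMap):</p> <pre><code># Should print the hostname captured at ConfigMap creation time,
# not the literal string $TEST
kubectl exec -it mypod -- sh -c 'echo $TEST'
</code></pre>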
Daigo
<p>I have an app hosted on GKE which, among many tasks, serves a zip file to clients. These zip files are constructed on the fly from many individual files on Google Cloud Storage.</p> <p>The issue that I'm facing is that when these zips get particularly large, the connection fails randomly part way through (anywhere between 1.4GB and 2.5GB). There doesn't seem to be any pattern with timing either - it could happen anywhere between 2 and 8 minutes in.</p> <p>AFAIK, the connection is disconnecting somewhere between the load balancer and my app. Is GKE ingress (load balancer) known to close long/large connections?</p> <p>GKE setup:</p> <ul> <li>HTTP(S) load balancer ingress</li> <li>NodePort backend service</li> <li>Deployment (my app)</li> </ul> <p>More details/debugging steps:</p> <ul> <li>I can't reproduce it locally (without kubernetes).</li> <li>The load balancer logs <code>statusDetails: "backend_connection_closed_after_partial_response_sent"</code> while the response has a 200 status code. Googling this gave nothing helpful.</li> <li>Directly accessing the pod and downloading using k8s port-forward worked successfully</li> <li>My app logs that the request was cancelled (by the requester)</li> <li>I can verify none of the files are corrupt (can download all directly from storage)</li> </ul>
Taylor Graham
<p>I believe your "backend_connection_closed_after_partial_response_sent" issue is caused by the websocket connection being killed by the back-end prematurely. You can see the documentation on <a href="https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_socket_keepalive" rel="nofollow noreferrer">websocket proxying in nginx</a> - it explains the nature of this process. In short, by default a WebSocket connection is killed after 10 minutes.</p> <p>Why does it work when you download the file directly from the pod? Because you're bypassing the load-balancer and the websocket connection is kept alive properly. When you proxy the websocket, things start to happen because WebSocket relies on hop-by-hop headers which are not proxied.</p> <p>A similar <a href="https://stackoverflow.com/a/58019331/12257250">case was discussed here</a>. It was resolved by sending ping frames from the back-end to the client.</p> <p>In my opinion your best shot is to do the same. I've found many cases with similar issues when a websocket was proxied, and most of them suggest using pings because they reset the connection timer and keep it alive.</p> <p>Here's more about <a href="https://uwsgi-docs.readthedocs.io/en/latest/WebSockets.html#ping-pong" rel="nofollow noreferrer">pinging the client using WebSocket</a> and <a href="https://cloud.google.com/load-balancing/docs/https/#timeouts_and_retries" rel="nofollow noreferrer">timeouts</a> </p> <p>I work for Google and this is as far as I can help you - if this doesn't resolve your issue you will have to contact GCP support.</p>
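<p>As a side note on the timeouts link above: on GKE, the backend service timeout behind the external HTTP(S) load balancer can be raised through a <code>BackendConfig</code> attached to the Service. The resource below is only a hedged sketch (the available <code>apiVersion</code> and the 1800-second value are assumptions to adapt to your GKE version and download times), not a replacement for the ping-based approach described above:</p> <pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: long-download-config
spec:
  timeoutSec: 1800   # allow long streamed responses instead of the default timeout
---
# The Service behind the Ingress references the BackendConfig via an annotation
apiVersion: v1
kind: Service
metadata:
  name: my-app   # hypothetical Service name for the app from the question
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;long-download-config&quot;}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>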
Wojtek_B
<p>Currently I am using a script to renew Kubernetes certificates before they expire. But this is a manual process. I have to monitor expiration dates carefully and run this script beforehand. What's the recommended way to update all control plane certificates automatically without updating the control plane? Do kubelet's --rotate* flags rotate all components (e.g. controller) or are they just for the kubelet? PS: The Kubernetes cluster was created with kubeadm.</p>
Baris Simsek
<p>Answering the following question:</p> <blockquote> <p>What's the recommended way to update all control plane certificates automatically without updating control plane</p> </blockquote> <p>According to the k8s docs, the best practice is to use &quot;Automatic certificate renewal&quot; together with control plane upgrades:</p> <blockquote> <h3>Automatic certificate renewal</h3> <p>This feature is designed for addressing the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.</p> <p><strong>Note:</strong> It is a best practice to upgrade your cluster frequently in order to stay secure.</p> <p>-- <em><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#automatic-certificate-renewal" rel="noreferrer">Kubernetes.io: Administer cluster: Kubeadm certs: Automatic certificate renewal</a></em></p> </blockquote> <p>Why this is the recommended way:</p> <p>From a best-practices standpoint you should be upgrading your <code>control-plane</code> to patch vulnerabilities, add features and use the version that is currently supported.</p> <p>Each <code>control-plane</code> upgrade will renew the certificates as described (defaults to <code>true</code>):</p> <ul> <li><code>$ kubeadm upgrade apply --help</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>--certificate-renewal Perform the renewal of certificates used by component changed during upgrades. (default true) </code></pre> <p>You can also check the expiration of the <code>control-plane</code> certificates by running:</p> <ul> <li><code>$ kubeadm certs check-expiration</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>[check-expiration] Reading configuration from the cluster... 
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED admin.conf May 30, 2022 13:36 UTC 364d no apiserver May 30, 2022 13:36 UTC 364d ca no apiserver-etcd-client May 30, 2022 13:36 UTC 364d etcd-ca no apiserver-kubelet-client May 30, 2022 13:36 UTC 364d ca no controller-manager.conf May 30, 2022 13:36 UTC 364d no etcd-healthcheck-client May 30, 2022 13:36 UTC 364d etcd-ca no etcd-peer May 30, 2022 13:36 UTC 364d etcd-ca no etcd-server May 30, 2022 13:36 UTC 364d etcd-ca no front-proxy-client May 30, 2022 13:36 UTC 364d front-proxy-ca no scheduler.conf May 30, 2022 13:36 UTC 364d no CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED ca May 28, 2031 13:36 UTC 9y no etcd-ca May 28, 2031 13:36 UTC 9y no front-proxy-ca May 28, 2031 13:36 UTC 9y no </code></pre> <blockquote> <p><strong>A side note!</strong></p> <p><code>kubelet.conf</code> is not included in the list above because <code>kubeadm</code> configures <code>kubelet</code> for automatic certificate renewal.</p> </blockquote> <p>As can be seen, by default:</p> <blockquote> <ul> <li>Client certificates generated by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="noreferrer">kubeadm</a> expire after 1 year.</li> <li>CA created by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="noreferrer">kubeadm</a> are set to expire after 10 years.</li> </ul> </blockquote> <p>There are other features that allow you to rotate the certificates in a &quot;semi-automatic&quot; way.</p> <p>You can opt for a manual certificate renewal with:</p> <ul> <li><code>$ kubeadm certs renew</code></li> </ul> <p>where you can automatically (with the command) renew the specified (or all) certificates:</p> <ul> <li><code>$ kubeadm certs renew all</code></li> </ul> <pre><code>[renew] Reading configuration from the cluster... [renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed certificate for serving the Kubernetes API renewed certificate the apiserver uses to access etcd renewed certificate for the API server to connect to kubelet renewed certificate embedded in the kubeconfig file for the controller manager to use renewed certificate for liveness probes to healthcheck etcd renewed certificate for etcd nodes to communicate with each other renewed certificate for serving etcd renewed certificate for the front proxy client renewed Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates. </code></pre> <p>Please take a specific look at the output:</p> <pre><code>You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates. 
</code></pre> <p>As pointed out, you will need to restart the components of your <code>control-plane</code> so that they use the new certificates, but remember:</p> <ul> <li><code>$ kubectl delete pod -n kube-system kube-scheduler-ubuntu</code> <strong>will not work</strong>.</li> </ul> <p>You will need to restart the docker container responsible for the component:</p> <ul> <li><code>$ docker ps | grep -i &quot;scheduler&quot;</code></li> <li><code>$ docker restart 8c361562701b</code> (example)</li> </ul> <pre><code>8c361562701b 38f903b54010 &quot;kube-scheduler --au…&quot; 11 minutes ago Up 11 minutes k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_5 b709e8fb5e6c k8s.gcr.io/pause:3.4.1 &quot;/pause&quot; About an hour ago Up About an hour k8s_POD_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_0 </code></pre> <hr /> <p>As pointed out in the links below, <code>kubelet</code> can automatically renew its certificate (<code>kubeadm</code> configures the cluster in a way that this option is enabled):</p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/tls/certificate-rotation/" rel="noreferrer">Kubernetes.io: Configure Certificate Rotation for the Kubelet</a></em></li> <li><em><a href="https://github.com/kubernetes/kubeadm/issues/2185#issuecomment-644260417" rel="noreferrer">Github.com: Kubernetes: Kubeadm: Issues: --certificate-renewal true doesn't renew kubelet.conf</a></em></li> </ul> <p>Depending on the version used in your environment, this can be disabled. To my knowledge, in the newest versions of k8s managed by <code>kubeadm</code> this option is enabled by default.</p>
Dawid Kruk
<p>I am trying to deploy my microservice application on Kubernetes using minikube. I have an angular front service and two backend services and I used this configuration to lunch deployment and services </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: searchservice labels: app: searchservice spec: selector: matchLabels: app: searchservice template: metadata: labels: app: searchservice spec: containers: - name: searchservice image: ayoubdali/searchservice:0.1.9-SNAPSHOT ports: - containerPort: 8070 --- apiVersion: v1 kind: Service metadata: name: searchservice spec: type: NodePort selector: app: searchservice ports: - protocol: TCP # Port accessible inside cluster port: 8070 # Port to forward to inside the pod targetPort: 8070 nodePort: 31000 --- apiVersion: apps/v1 kind: Deployment metadata: name: searchappfront labels: app: searchappfront spec: selector: matchLabels: app: searchappfront template: metadata: labels: app: searchappfront spec: containers: - name: searchappfront image: ayoubdali/searchappfront:0.6.5 ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: searchappfront spec: type: NodePort selector: app: searchappfront ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: subscriberservice labels: app: subscriberservice spec: selector: matchLabels: app: subscriberservice template: metadata: labels: app: subscriberservice spec: containers: - name: subscriberservice image: ayoubdali/subscriber-service:0.1.0-SNAPSHOT ports: - containerPort: 8090 --- apiVersion: v1 kind: Service metadata: name: subscriberservice spec: type: NodePort selector: app: subscriberservice ports: - protocol: TCP port: 8090 #targetPort: 80 nodePort: 31102 </code></pre> <p>and this configuration for the ingress service </p> <pre><code> apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: app.info http: paths: - path: / backend: serviceName: searchappfront servicePort: 80 - path: /api backend: serviceName: subscriberservice servicePort: 8090 </code></pre> <p>But when I open app.info/ I got a blank page with javascript error like </p> <pre><code>Failed to load module script: The server responded with a non-JavaScript MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec. </code></pre> <p>I tried deploying the application using docker compose and it works fine. </p>
daliDV
<p>When you enable <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer"><code>rewrite-target</code></a> it will create a capture group and send it to the appropriate service. </p> <p>If you set the capture group to <code>$1</code>, everything after the root <code>/</code> will be discarded and the request is forwarded to the root <code>path</code>.</p> <ul> <li>In order to forward the full request, <strong>remove the <code>rewrite-target</code> line</strong>; your Ingress should look like this:</li> </ul> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: app.info http: paths: - path: / backend: serviceName: searchappfront servicePort: 80 - path: /api backend: serviceName: subscriberservice servicePort: 8090 </code></pre> <hr> <p><strong>Example:</strong></p> <ul> <li><p>I've deployed two <code>echo-apps</code> that echo the requests (I've trimmed the output to show only the path and the pod handling it in the background).</p> <ul> <li><code>echo1-app</code> is simulating <code>searchappfront</code></li> <li><code>echo2-app</code> is simulating <code>subscriberservice</code></li> </ul></li> <li><p>This is the ingress I'm using:</p></li> </ul> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: echo-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: app.info http: paths: - path: / backend: serviceName: echo1-svc servicePort: 80 - path: /api backend: serviceName: echo2-svc servicePort: 80 </code></pre> <ul> <li>Now let's test the commands:</li> </ul> <pre><code>$ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE echo-ingress app.info 35.188.7.149 80 73m $ kubectl get pods NAME READY STATUS RESTARTS AGE echo1-deploy-764d5df7cf-2wx4m 1/1 Running 0 74m echo2-deploy-7bcb8f8d5f-xlknt 1/1 Running 0 74m $ tail -n 1 /etc/hosts 35.188.7.149 app.info $ curl app.info {"path": "/", "os": {"hostname": "echo1-deploy-764d5df7cf-2wx4m"}} $ curl app.info/foo/bar {"path": "/foo/bar", "os": {"hostname": "echo1-deploy-764d5df7cf-2wx4m"}} $ curl app.info/api {"path": "/api", "os": {"hostname": "echo2-deploy-7bcb8f8d5f-xlknt"}} $ curl app.info/api/foo {"path": "/api/foo", "os": {"hostname": "echo2-deploy-7bcb8f8d5f-xlknt"}} $ curl app.info/api/foo/bar {"path": "/api/foo/bar", "os": {"hostname": "echo2-deploy-7bcb8f8d5f-xlknt"}} </code></pre> <p>To summarize:</p> <ul> <li>Requests to <strong>app.info/</strong> will be delivered to <code>echo1-app</code> as <strong>/</strong></li> <li>Requests to <strong>app.info/foo/bar</strong> will be delivered to <code>echo1-app</code> as <strong>/foo/bar</strong></li> <li>Requests to <strong>app.info/api</strong> will be delivered to <code>echo2-app</code> as <strong>/api</strong></li> <li>Requests to <strong>app.info/api/foo/bar</strong> will be delivered to <code>echo2-app</code> as <strong>/api/foo/bar</strong></li> </ul> <hr> <p>Considerations about your environment:</p> <ul> <li><p>I understand you are using <code>NodePort</code> to test the access, but if you wish to close that access, you can set the services as <code>ClusterIP</code> since it will be the Ingress' job to handle the incoming traffic.</p></li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Nodeport</a> port range by default is 30000-32767.</p> <ul> <li><code>searchappfront</code> has it set to <code>80</code>, which is outside that range, so the API server will reject the Service; either pick a port inside the range or simply omit <code>nodePort</code> and let Kubernetes assign one.</li> </ul></li> </ul> <p>If you have any questions, let me know in the comments.</p>
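<p>For reference, switching one of the backends to <code>ClusterIP</code> only means dropping the NodePort-specific fields. A sketch based on the <code>searchappfront</code> Service from the question:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: searchappfront
spec:
  type: ClusterIP        # default type; the Ingress controller reaches it over the cluster network
  selector:
    app: searchappfront
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code></pre>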
Will R.O.F.