Columns: Question (string, length 65 to 39.6k), QuestionAuthor (string, length 3 to 30), Answer (string, length 38 to 29.1k), AnswerAuthor (string, length 3 to 30)
<p>I want to know exactly what the difference is between '&gt;-' and '|-', especially in kubernetes yaml manifests.</p>
Bguess
<p>Newlines in folded block scalars (<code>&gt;</code>) are subject to line folding, newlines in literal block scalars (<code>|</code>) are not.</p> <p>Line folding replaces a single newline between non-empty lines with a space, and in the case of empty lines, reduces the number of newline characters between the surrounding non-empty lines by one:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt; # folds into &quot;one two\nthree four\n\nfive\n&quot; one two three four five </code></pre> <p>Line folding does not occur between lines when at least one line is more indented, i.e. contains whitespace at the beginning that is not part of the block's general indentation:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt; # folds into &quot;one\n two\nthree four\n\n five\n&quot; one two three four five </code></pre> <p>Adding <code>-</code> after either <code>|</code> or <code>&gt;</code> will strip the newline character from the last line:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt;- # folded into &quot;one two&quot; one two b: &gt;- # folded into &quot;one\ntwo&quot; one two </code></pre> <p>In contrast, <code>|</code> emits every newline character as-is, the sole exception being the last one if you use <code>-</code>.</p>
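<p>To connect this back to Kubernetes manifests, here is a small illustrative ConfigMap (name and keys are made up for the example) showing where the two block styles typically matter:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: block-scalar-demo        # hypothetical name, for illustration only
data:
  # literal block: newlines are kept, so this stays a two-line script
  startup.sh: |-
    echo starting
    echo done
  # folded block: newlines become spaces, handy for one long option string
  java.opts: &gt;-
    -Xms256m
    -Xmx512m
    -Dlog.level=INFO
</code></pre> <p><code>startup.sh</code> keeps its two lines, while <code>java.opts</code> becomes the single string <code>-Xms256m -Xmx512m -Dlog.level=INFO</code>; the trailing <code>-</code> strips the final newline in both cases.</p>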
flyx
<p>I have a running GKE cluster with an HPA using a target CPU utilisation metric. This is OK but CPU utilisation is not the best scaling metric for us. Analysis suggests that active connection count is a good indicator of general platform load and thus, we'd like to look into this as our primary scaling metric.</p> <p>To this end I have enabled custom metrics for the NGINX ingress that we use. From here we can see active connection counts, request rates, etc.</p> <p>Here is the HPA specification using the NGINX custom metric:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: hpa-uat-active-connections namespace: default spec: minReplicas: 3 maxReplicas: 6 metrics: - type: Pods pods: metricName: custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections selector: matchLabels: metric.labels.state: active resource.labels.cluster_name: "[redacted]" targetAverageValue: 5 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: "[redacted]" </code></pre> <p>However, while this specification does deploy OK, I always get this output from the HPA:</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-uat-active-connections Deployment/[redacted] &lt;unknown&gt;/5 3 6 3 31s </code></pre> <p>In short, the target value is "unknown" and I have so far failed to understand / resolve why. The custom metric is indeed present:</p> <blockquote> <p>kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections?labelSelector=metric.labels.state%3Dactive,resource.labels.cluster_name%3D[redacted]" | jq</p> </blockquote> <p>Which gives:</p> <pre><code>{ "kind": "ExternalMetricValueList", "apiVersion": "external.metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com%7Cnginx-ingress-controller%7Cnginx_ingress_controller_nginx_process_connections" }, "items": [ { "metricName": "custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections", "metricLabels": { "metric.labels.controller_class": "nginx", "metric.labels.controller_namespace": "ingress-nginx", "metric.labels.controller_pod": "nginx-ingress-controller-54f84b8dff-sml6l", "metric.labels.state": "active", "resource.labels.cluster_name": "[redacted]", "resource.labels.container_name": "", "resource.labels.instance_id": "[redacted]-eac4b327-stqn", "resource.labels.namespace_id": "ingress-nginx", "resource.labels.pod_id": "nginx-ingress-controller-54f84b8dff-sml6l", "resource.labels.project_id": "[redacted], "resource.labels.zone": "[redacted]", "resource.type": "gke_container" }, "timestamp": "2019-12-30T14:11:01Z", "value": "1" } ] } </code></pre> <p>So I have two questions, really:</p> <ol> <li>(the main one): what am I doing wrong here to cause the HPA to not be able to read the metric?</li> <li>Is this is right way to go about trying to scale to an average active connections load over a number of pods?</li> </ol> <p>Many thanks in advance, Ben</p> <p><strong>Edit 1</strong></p> <blockquote> <p>kubectl get all</p> </blockquote> <pre><code>NAME READY STATUS RESTARTS AGE pod/[redacted]-deployment-7f5fbc9ddf-l9tqk 1/1 Running 0 34h pod/[redacted]-uat-deployment-7f5fbc9ddf-pbcns 1/1 Running 0 34h pod/[redacted]-uat-deployment-7f5fbc9ddf-tjfrm 1/1 Running 0 34h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/[redacted]-webapp-service 
NodePort [redacted] &lt;none&gt; [redacted] 57d service/kubernetes ClusterIP [redacted] &lt;none&gt; [redacted] 57d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/[redacted]-uat-deployment 3/3 3 3 57d NAME DESIRED CURRENT READY AGE replicaset.apps/[redacted]-uat-deployment-54b6bd5f9c 0 0 0 12d replicaset.apps/[redacted]-uat-deployment-574c778cc9 0 0 0 35h replicaset.apps/[redacted]-uat-deployment-66546bf76b 0 0 0 11d replicaset.apps/[redacted]-uat-deployment-698dfbb6c4 0 0 0 4d replicaset.apps/[redacted]-uat-deployment-69b5c79d54 0 0 0 6d17h replicaset.apps/[redacted]-uat-deployment-6f67ff6599 0 0 0 10d replicaset.apps/[redacted]-uat-deployment-777bfdbb9d 0 0 0 3d23h replicaset.apps/[redacted]-uat-deployment-7f5fbc9ddf 3 3 3 34h replicaset.apps/[redacted]-uat-deployment-9585454ff 0 0 0 6d21h replicaset.apps/[redacted]-uat-deployment-97cbcfc6 0 0 0 17d replicaset.apps/[redacted]-uat-deployment-c776f648d 0 0 0 10d NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE horizontalpodautoscaler.autoscaling/[redacted]-uat-deployment Deployment/[redacted]-uat-deployment 4%/80% 3 6 3 9h </code></pre>
benjimix
<p>Ok I managed to figure this out by looking up the schema for the HPA (<a href="https://docs.okd.io/latest/rest_api/apis-autoscaling/v2beta1.HorizontalPodAutoscaler.html" rel="noreferrer">https://docs.okd.io/latest/rest_api/apis-autoscaling/v2beta1.HorizontalPodAutoscaler.html</a>).</p> <p>In short, I was using the wrong metric type (as above you can see I am using "Pods", but I should be using "External").</p> <p>The correct HPA specification is:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: hpa-uat-active-connections namespace: default spec: minReplicas: 3 maxReplicas: 6 metrics: - type: External external: metricName: custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections metricSelector: matchLabels: metric.labels.state: active resource.labels.cluster_name: [redacted] targetAverageValue: 5 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: [redacted] </code></pre> <p>As soon as I did this, things worked right away:</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-uat-active-connections Deployment/bustle-webapp-uat-deployment 334m/5 (avg) 3 6 3 30s </code></pre>
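<p>Note that <code>autoscaling/v2beta1</code> has since been removed in newer Kubernetes versions; the same spec translates roughly as follows to <code>autoscaling/v2</code> (a sketch, with placeholder Deployment and cluster names):</p> <pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-uat-active-connections
  namespace: default
spec:
  minReplicas: 3
  maxReplicas: 6
  metrics:
  - type: External
    external:
      metric:
        name: custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections
        selector:
          matchLabels:
            metric.labels.state: active
            resource.labels.cluster_name: my-cluster      # placeholder
      target:
        type: AverageValue
        averageValue: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment                                   # placeholder
</code></pre>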
benjimix
<p>I am learning k8s. My question is that how to let k8s get service url as minikube command "minikube get service xxx --url" do? Why I ask is because that when pod is down and up/created/initiated again, there is no need to change url by visiting service url. While I deploy pod as NodePort, I could access pod with host IP and port, but if it is reinitiated/created again, the port changes. </p> <p>My case is illustrated below: I have </p> <pre><code>one master(172.16.100.91) and one node(hostname node3, 172.16.100.96) </code></pre> <p>I create pod and service as below, helllocomm deployed as NodePort, and helloext deployed as ClusterIP. hellocomm and helloext are both spring boot hello world applications. </p> <pre><code>docker build -t jshenmaster2/hellocomm:0.0.2 . kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080 kubectl expose deployment hellocomm --type NodePort docker build -t jshenmaster2/helloext:0.0.1 . kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080 kubectl expose deployment helloext --type ClusterIP [root@master2 shell]# kubectl get service -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR hellocomm NodePort 10.108.175.143 &lt;none&gt; 8080:31666/TCP 8s run=hellocomm helloext ClusterIP 10.102.5.44 &lt;none&gt; 8080/TCP 2m run=helloext [root@master2 hello]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE hellocomm-54584f59c5-7nxp4 1/1 Running 0 18m 192.168.136.2 node3 helloext-c455859cc-5zz4s 1/1 Running 0 21m 192.168.136.1 node3 </code></pre> <p>In above, my pod is deployed at node3(172.16.100.96), so I could access hellocomm by 172.16.100.96:31666/hello, With this scenario, one could see easily that when node3 is down, a new pod is created/initiated, the port changes also. so that my client lost connection. I do not want this solution.</p> <p>My current question is that as helloext is deployed as ClusteriP and it is also a service as shown above. does that mean ClusterIP 10.102.5.44 and port 8080 would be service url, <a href="http://10.102.5.44:8080/hello" rel="noreferrer">http://10.102.5.44:8080/hello</a>? </p> <p>Do I need to create service by yaml file again? What is the difference from service created by command against by yaml file? How to write following yaml file if I have to create service by yaml?</p> <p>Below is yaml definition template I need to fill, How to fill?</p> <pre><code>apiVersion: v1 kind: Service matadata: name: string helloext namespace: string default labels: - name: string helloext annotations: - name: string hello world spec: selector: [] ? type: string ? clusterIP: string anything I could give? sessionAffinity: string ? (yes or no) ports: - name: string helloext protocol: string tcp port: int 8081? (port used by host machine) targetPort: int 8080? (spring boot uses 8080) nodePort: int ? status: since I am not using loadBalancer in deploymennt, I could forget this. loadBalancer: ingress: ip: string hostname: string </code></pre>
user84592
<p>NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service. By default it's random - that's why when a pod dies, it generates a new one for you. However, you can specify a port as well (3rd paragraph <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">here</a>) - and you will be able to access on the same port even after the pod has been re-created.</p> <p>The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can <code>exec</code> / <code>ssh</code> into any running container/node and try it out.</p> <p>Yaml files can be version controlled, documented, templatized (<a href="https://www.helm.sh/" rel="nofollow noreferrer">Helm</a>), etc. </p> <p>Check <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core</a> for details on each field.</p> <p><strong>EDIT</strong>: More detailed info on services here: <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p>
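<p>For example, a sketch of the <code>hellocomm</code> Service from the question with the node port pinned (the chosen <code>nodePort</code> must fall inside the cluster's node port range, 30000-32767 by default):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: hellocomm
spec:
  type: NodePort
  selector:
    run: hellocomm
  ports:
  - name: http
    protocol: TCP
    port: 8080        # port exposed on the cluster IP
    targetPort: 8080  # port the container listens on
    nodePort: 31666   # fixed port opened on every node
</code></pre> <p>With this in place, <code>&lt;any-node-ip&gt;:31666</code> keeps working even after the pod is recreated.</p>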
Amrit
<p>A container running behind a K8s service fails to make network requests with the error <code>x509: certificate signed by unknown authority</code>.</p> <p>The container is an API that serves incoming requests and makes external network requests before responding; it's running in a local K8s cluster managed by Docker Desktop. The third-party API being called fails certificate validation, and I'm not using a proxy or VPN.</p> <p>What could be the cause of this?</p>
some_id
<p>I hope this helps someone else as there are many different discussions about this topic online.</p> <p>The fix seems to be that when doing a multi stage docker build and using e.g. <code>FROM golang:alpine3.14 AS build</code> along with <code>FROM scratch</code>, the root certificates are not copied into the image.</p> <p>adding this to the Dockerfile after the <code>FROM scratch</code> line removes the error.</p> <pre><code>COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ </code></pre> <p>This was found on this <a href="https://stackoverflow.com/a/52979541/356387">stackoverflow answer</a></p>
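<p>For context, a minimal multi-stage Dockerfile sketch showing where that line goes (module and binary names are placeholders):</p> <pre><code>FROM golang:alpine3.14 AS build
WORKDIR /src
COPY . .
# CGO disabled so the resulting binary runs on a scratch base image
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
# bring the CA bundle along so outbound TLS connections can be verified
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT [&quot;/app&quot;]
</code></pre>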
some_id
<p>I am trying to apply kubernetes to my minikube cluster for the first time. I have limited experience with cluster management and have never worked with prometheus before so I apologize for noob errors. </p> <p>I run the following commands:</p> <pre><code>docker build -t my-prometheus . docker run -p 9090:9090 my-prometheus </code></pre> <p>here is my yaml:</p> <pre><code>global: scrape_interval: 15s external_labels: monitor: 'codelab-monitor' scrape_configs: - job_name: 'kubernetes-apiservers' scheme: http tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt kubernetes_sd_configs: - role: endpoints - api_server: localhost:30000 </code></pre> <p>I ran this through YAMLlint and got that it was valid. However I get the following error when I run the second docker command:</p> <pre><code>level=error ts=2018-09-18T21:49:34.365671918Z caller=main.go:617 err="error loading config from \"/etc/prometheus/prometheus.yml\": couldn't load configuration (--config.file=\"/etc/prometheus/prometheus.yml\"): parsing YAML file /etc/prometheus/prometheus.yml: role missing (one of: pod, service, endpoints, node)" </code></pre> <p>However, you can see that I have specified my <code>- role: endpoints</code> in my <code>kubernetes_sd_configs</code>.</p> <p>Can anyone help me on this</p>
beanwa
<p><code>kubernetes_sd_configs</code> is a list of configs, styled as a block sequence in YAML terms.</p> <p>Now, your list of configs looks like this:</p> <pre><code>- role: endpoints
- api_server: localhost:30000
</code></pre> <p>So you're defining two configs, and only the first one of them has a role. This is why you get the error. Most probably, you want to create only one config with <code>role</code> and <code>api_server</code> configured. Drop the second <code>-</code> so that the <code>api_server</code> belongs to the first config:</p> <pre><code>- role: endpoints
  api_server: localhost:30000
</code></pre>
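<p>For context, here is roughly what the full <code>prometheus.yml</code> from the question looks like with that one correction applied (keeping the original <code>api_server</code> value, which you may still need to adjust for your cluster):</p> <pre><code>global:
  scrape_interval: 15s
  external_labels:
    monitor: 'codelab-monitor'

scrape_configs:
  - job_name: 'kubernetes-apiservers'
    scheme: http
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubernetes_sd_configs:
      - role: endpoints
        api_server: localhost:30000
</code></pre>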
flyx
<p>I have a Kubernetes cluster running on my local machine (via docker-for-desktop) and a metrics-server has been deployed to monitor CPU usage. I want to make some changes in the <code>metrics-server-deployment.yaml</code> file which resides in <code>/metrics-server/deploy/1.8+</code>.</p> <p>I am done with the changes but I can't figure out how to redeploy the metrics-server so that it will reflect the new changes. I am new to K8S and would love to get some help/tips or useful resources.</p> <p>Thanks in advance</p>
Gauraang Khurana
<p>From the directory where you have <code>metrics-server-deployment.yaml</code>, just run:</p> <pre><code>kubectl apply -f metrics-server-deployment.yaml </code></pre> <p>If it complains, you can also manually delete it and run:</p> <pre><code>kubectl create -f metrics-server-deployment.yaml </code></pre>
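<p>Either way, you can confirm that the change has actually rolled out (assuming the Deployment is named <code>metrics-server</code> and lives in <code>kube-system</code>, as in the default manifests):</p> <pre><code>kubectl -n kube-system rollout status deployment/metrics-server
kubectl -n kube-system get pods -l k8s-app=metrics-server
</code></pre>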
Amrit
<p>I'm trying to inject an HTTP status 500 fault in the bookinfo example.</p> <p>I managed to inject a 500 error status when the traffic is coming from the Gateway with:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - bookinfo-gateway hosts: - '*' http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code>$ curl $(minikube ip):30890/api/v1/products fault filter abort </code></pre> <p>But, I fails to achieve this for traffic that is coming from other pods:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - mesh hosts: - productpage http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code># jump into a random pod $ kubectl exec -ti details-v1-dasa231 -- bash root@details $ curl productpage:9080/api/v1/products [{"descriptionHtml": ... &lt;- actual product list, I expect a http 500 </code></pre> <ul> <li>I tried using the FQDN for the host <code>productpage.svc.default.cluster.local</code> but I get the same behavior.</li> <li><p>I checked the proxy status with <code>istioctl proxy-status</code> everything is synced.</p></li> <li><p>I tested if the istio-proxy is injected into the pods, it is:</p></li> </ul> <p>Pods:</p> <pre><code>NAME READY STATUS RESTARTS AGE details-v1-6764bbc7f7-bm9zq 2/2 Running 0 4h productpage-v1-54b8b9f55-72hfb 2/2 Running 0 4h ratings-v1-7bc85949-cfpj2 2/2 Running 0 4h reviews-v1-fdbf674bb-5sk5x 2/2 Running 0 4h reviews-v2-5bdc5877d6-cb86k 2/2 Running 0 4h reviews-v3-dd846cc78-lzb5t 2/2 Running 0 4h </code></pre> <p>I'm completely stuck and not sure what to check next. I feel like I am missing something very obvious.</p> <p>I would really appreciate any help on this topic.</p>
Igor Šarčević
<p>The root cause of my issue was an improperly set up includeIPRanges in my minikube cluster. I had set up the 10.0.0.1/24 CIDR, but some services were listening on 10.35.x.x.</p>
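<p>For reference, the capture range can be set per workload with a sidecar annotation (it can also be set globally at install time). Below is a minimal sketch, with an illustrative Deployment and an illustrative CIDR list that you would replace with the ranges your services actually use:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1                # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
  template:
    metadata:
      labels:
        app: details
      annotations:
        # only outbound traffic to these CIDRs is redirected through the Envoy sidecar,
        # so the ranges must cover the service/pod CIDRs you want faults applied to
        traffic.sidecar.istio.io/includeOutboundIPRanges: 10.0.0.0/16,10.35.0.0/16
    spec:
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2   # tag is illustrative
</code></pre>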
Igor Šarčević
<p>I'm new to kubernetes and trying to explore the new things in it. So, my question is: </p> <p>Suppose I have an existing kubernetes cluster with 1 master node and 1 worker node. Consider that this setup is on AWS; now I have 1 more VM instance available on Oracle Cloud Platform and I want to configure that VM as a worker node and attach it to the existing cluster.</p> <p>So, is it possible to do so? Does anybody have any suggestions regarding this?</p>
Shubham Naphade
<p>I would instead divide your clusters up based on region (unless you have a good VPN between your oracle and AWS infrastructure)</p> <p>You can then run applications across clusters. If you absolutely must have one cluster that is geographically separated, I would create a master (etcd host) in each region that you have a worker node in. </p>
Ryan
<p>I have a security requirement to include in my event logs the machine name / hostname on which an instance of my application is running. Presumably this is so that a security auditor who is trying to trace through a breach can determine, e.g., if the breach is related to one specific compromised machine. This also assumes that all of the logs are being collected centrally in a SIEM or something like one, so you need to know where each event came from.</p> <p>This was always easy in the past on monolithic applications. However, now I'm working on a Kubernetes-based cloud app with various micro-services. /etc/hostname shows a bs pod name, as each pod has it's own filesystem. The &quot;hostname&quot; command doesn't appear to be installed; even if it were, I have no faith that it wouldn't just parrot the information in /etc/hostname. I need to get the name of the actual machine on which the pod is running <strong>from inside the code that is running in the pod</strong> (before somebody says &quot;why don't you use kubectl&quot;?); I am happy to accept the name of the VM on which the Kubernetes node containing the instance of the pod is running, assuming that getting all the way back to the hypervisor machine name is a bridge too far.</p> <p>Is there a way that an application (e.g. a C# dot net web app running using Kestrel as the web server) can reach out of the container and pod to get this information? Or a reasonable way that I can get the scheduler to write over a config map entry immediately before it spins the pod up so the app can read it from there (yes, we are using Helm, but this is information only the scheduler knows at the time the pod is being instantiated)? Or do I have to have an argument with the Cyber folks about how their requirement is impossible to achieve?</p>
JackLThornton
<p>Perhaps the easiest way to do this is to use <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes Downward API</a>. You can expose node name via environment variable defined in Pod spec:</p> <pre><code> env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName </code></pre>
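<p>For completeness, a minimal sketch of where that block sits in a Deployment's container spec (names and image are placeholders); the application then reads <code>MY_NODE_NAME</code> like any other environment variable, e.g. via <code>Environment.GetEnvironmentVariable(&quot;MY_NODE_NAME&quot;)</code> in .NET, and attaches it to each log event:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                           # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: api
        image: my-registry/my-api:latest # placeholder image
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName   # name of the node the pod was scheduled onto
</code></pre>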
Oleg
<p>I was trying to showcase binary authorization to my client as POC. During the deployment, it is failing with the following error message:</p> <blockquote> <p>pods "hello-app-6589454ddd-wlkbg" is forbidden: image policy webhook backend denied one or more images: Denied by cluster admission rule for us-central1.staging-cluster. Denied by Attestor. Image gcr.io//hello-app:e1479a4 denied by projects//attestors/vulnz-attestor: Attestor cannot attest to an image deployed by tag</p> </blockquote> <p>I have adhered all steps mentioned in the site.</p> <p>I have verified the image repeatedly for few occurances, for example using below command to force fully make the attestation:</p> <pre><code>gcloud alpha container binauthz attestations sign-and-create --project "projectxyz" --artifact-url "gcr.io/projectxyz/hello-app@sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --keyversion "1" --keyversion-key "vulnz-signer" --keyversion-location "us-central1" --keyversion-keyring "binauthz" --keyversion-project "projectxyz" </code></pre> <p>It throws error as:</p> <blockquote> <p>ERROR: (gcloud.alpha.container.binauthz.attestations.sign-and-create) Resource in project [project xyz] is the subject of a conflict: occurrence ID "c5f03cc3-3829-44cc-ae38-2b2b3967ba61" already exists in project "projectxyz"</p> </blockquote> <p>So when I verify, I found the attestion present:</p> <pre><code>gcloud beta container binauthz attestations list --artifact-url "gcr.io/projectxyz/hello-app@sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --format json | jq '.[0].kind' \ &gt; | grep 'ATTESTATION' "ATTESTATION" </code></pre> <p>Here are the screen shots:</p> <p><img src="https://user-images.githubusercontent.com/27581174/67691987-c3b44b80-f99f-11e9-92c0-384dc41a120d.PNG" alt="deployment error"></p> <p><img src="https://user-images.githubusercontent.com/27581174/67691988-c3b44b80-f99f-11e9-80c9-0a3fce511500.PNG" alt="container"></p> <p><img src="https://user-images.githubusercontent.com/27581174/67691989-c44ce200-f99f-11e9-9917-880d665cbe83.PNG" alt="cloud build"></p> <p>Any feedback please?</p> <p>Thanks in advance.</p>
ARINDAM BANERJEE
<p>Thank you for trying Binary Authorization. I just updated the <a href="https://cloud.google.com/solutions/binary-auth-with-cloud-build-and-gke" rel="nofollow noreferrer">Binary Authorization Solution</a>, which you might find helpful.</p> <p>A few things I noticed along the way:</p> <blockquote> <p>... denied by projects//attestors/vulnz-attestor:</p> </blockquote> <p>There should be a project ID in between <code>projects</code> and <code>attestors</code>, like:</p> <pre><code>projects/my-project/attestors/vulnz-attestor </code></pre> <p>Similarly, your gcr.io links should include that same project ID, for example:</p> <blockquote> <p>gcr.io//hello-app:e1479a4</p> </blockquote> <p>should be</p> <pre><code>gcr.io/my-project/hello-app:e1479a4 </code></pre> <p>If you followed a tutorial, it likely asked you to set a variable like <code>$PROJECT_ID</code>, but you may have accidentally unset it or ran the command in a different terminal session.</p>
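<p>If a missing <code>$PROJECT_ID</code> is indeed the culprit, re-exporting it before re-running the commands is usually enough (a sketch):</p> <pre><code>export PROJECT_ID=$(gcloud config get-value project)
echo &quot;${PROJECT_ID}&quot;   # should print your project ID, not an empty line
</code></pre>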
sethvargo
<p>We're using Kubernetes on-premise and it's currently running on VMWare. So far, we have been successfull in being able to provision volumes for the apps that we deploy. The problem comes if the pods - for whatever reason - switch to a different worker node. When that happens, the disk fails to mount to the second worker as it's already present on the first worker where the pod was originally running. See below:</p> <p>As it stands, we have no app on either worker1 or worker2:</p> <pre><code>[root@worker01 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 200G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 199.5G 0 part ├─vg_root-lv_root 253:0 0 20G 0 lvm / ├─vg_root-lv_swap 253:1 0 2G 0 lvm ├─vg_root-lv_var 253:2 0 50G 0 lvm /var └─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks sr0 11:0 1 1024M 0 rom [root@worker02 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 200G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 199.5G 0 part ├─vg_root-lv_root 253:0 0 20G 0 lvm / ├─vg_root-lv_swap 253:1 0 2G 0 lvm ├─vg_root-lv_var 253:2 0 50G 0 lvm /var └─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks sr0 11:0 1 4.5G 0 rom </code></pre> <p>Next we create our PVC with the following:</p> <pre><code>[root@master01 ~]$ cat app-pvc.yaml --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: app-pvc annotations: volume.beta.kubernetes.io/storage-class: thin-disk namespace: tools spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi [root@master01 ~]$ kubectl create -f app-pvc.yaml persistentvolumeclaim "app-pvc" created </code></pre> <p>This works fine as the disk is created and bound:</p> <pre><code>[root@master01 ~]$ kubectl get pvc -n tools NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE app-pvc Bound pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 10Gi RWO thin-disk 12s [root@master01 ~]$ kubectl get pv -n tools NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 10Gi RWO Delete Bound tools/app-pvc thin-disk 12s </code></pre> <p>Now we can deploy our application which creates the pod and sorts storage etc:</p> <pre><code>[centos@master01 ~]$ cat app.yaml --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: app namespace: tools spec: replicas: 1 template: metadata: labels: app: app spec: containers: - image: sonatype/app3:latest imagePullPolicy: IfNotPresent name: app ports: - containerPort: 8081 - containerPort: 5000 volumeMounts: - mountPath: /app-data name: app-data-volume securityContext: fsGroup: 2000 volumes: - name: app-data-volume persistentVolumeClaim: claimName: app-pvc --- apiVersion: v1 kind: Service metadata: name: app-service namespace: tools spec: type: NodePort ports: - port: 80 targetPort: 8081 protocol: TCP name: http - port: 5000 targetPort: 5000 protocol: TCP name: docker selector: app: app [centos@master01 ~]$ kubectl create -f app.yaml deployment.extensions "app" created service "app-service" created </code></pre> <p>This deploys fine:</p> <pre><code>[centos@master01 ~]$ kubectl get pods -n tools NAME READY STATUS RESTARTS AGE app-6588cf4b87-wvwg2 0/1 ContainerCreating 0 6s [centos@neb-k8s02-master01 ~]$ kubectl describe pod app-6588cf4b87-wvwg2 -n tools Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 18s default-scheduler Successfully assigned nexus-6588cf4b87-wvwg2 to neb-k8s02-worker01 Normal SuccessfulMountVolume 18s kubelet, worker01 MountVolume.SetUp succeeded for volume 
"default-token-7cv62" Normal SuccessfulAttachVolume 15s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7" Normal SuccessfulMountVolume 7s kubelet, worker01 MountVolume.SetUp succeeded for volume "pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7" Normal Pulled 7s kubelet, worker01 Container image "acme/app:latest" already present on machine Normal Created 7s kubelet, worker01 Created container Normal Started 6s kubelet, worker01 Started container </code></pre> <p>We can also see the disk has been created and mounted in VMWare for Worker01 and not for Worker02:</p> <pre><code>[root@worker01 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 200G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 199.5G 0 part ├─vg_root-lv_root 253:0 0 20G 0 lvm / ├─vg_root-lv_swap 253:1 0 2G 0 lvm ├─vg_root-lv_var 253:2 0 50G 0 lvm /var └─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks sdb 8:16 0 10G 0 disk /var/lib/kubelet/pods/1e55ad6a-294f-11e9-9175-005056a47f18/volumes/kubernetes.io~vsphere-volume/pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 sr0 11:0 1 1024M 0 rom [root@worker02 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 200G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 199.5G 0 part ├─vg_root-lv_root 253:0 0 20G 0 lvm / ├─vg_root-lv_swap 253:1 0 2G 0 lvm ├─vg_root-lv_var 253:2 0 50G 0 lvm /var └─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks sr0 11:0 1 4.5G 0 rom </code></pre> <p>If Worker01 falls over then Worker02 kicks in and we can see the disk being attached to the other node:</p> <pre><code>[root@worker02 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 200G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 199.5G 0 part ├─vg_root-lv_root 253:0 0 20G 0 lvm / ├─vg_root-lv_swap 253:1 0 2G 0 lvm ├─vg_root-lv_var 253:2 0 50G 0 lvm /var └─vg_root-lv_k8s 253:3 0 20G 0 lvm /mnt/disks sdb 8:16 0 10G 0 disk /var/lib/kubelet/pods/a0695030-2950-11e9-9175-005056a47f18/volumes/kubernetes.io~vsphere-volume/pvc-d4bf77cc-294e-11e9-9106-005056a4b1c7 sr0 11:0 1 4.5G 0 rom </code></pre> <p>However, seeing as though the disk is now attached to Worker01 and Worker02, Worker01 will no longer start citing the following error in vCenter:</p> <pre><code>Cannot open the disk '/vmfs/volumes/5ba35d3b-21568577-efd4-469e3c301eaa/kubevols/kubernetes-dynamic-pvc-e55ad6a-294f-11e9-9175-005056a47f18.vmdk' or one of the snapshot disks it depends on. </code></pre> <p>This error occurs because (I assume) Worker02 has access to the disk and is reading/writing from/to it. Shouldn't Kubernetes detach the disk from nodes that do not need it if it's been attached to another node. How can we go about fixing this issue? If a pods moves to another host due to node failure then we have to manually detach the disk and then start the other worker manually. </p> <p>Any and all help appreciated. </p>
automation1002
<p>First, I'll assume your running in tree vsphere disks.</p> <p>Second, in this case (and more so, with CSI) kubernetes doesn't have control over all volume operations. The VMWare functionality for managing attachment and detachment of a disk is implemented in the volume plugin which you are using. Kubernetes doesn't strictly control all volume attachment/detachment semantics as a generic function.</p> <p>To see the in-tree implementation details, check out:</p> <p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#vspherevolume" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#vspherevolume</a></p> <p>Overall i think the way you are doing failover is going to mean that when your worker1 pod dies, worker2 can schedule. At that point, worker1 should <em>not</em> be able to grab the same PVC, and it should <em>not</em> schedule until the worker2 pod dies.</p> <p>However if worker1 is scheduling, it means that Vsphere is trying to (erroneously) let worker1 start, and the kubelet is failing.</p> <p>There is a chance that this is a bug in the VMWare driver in that it will bind a persistent volume even though it is not ready to.</p> <p>To further elaborate, details about how worker2 is being launched may be helped. Is it a separate replication controller ? or is it running outside of kubernetes? If the latter, then the volumes wont be managed the same way, and you cant use a the same PVC as the locking mechanism.</p>
jayunit100
<p>I have defined Kafka and Kafka schema registry configuration using Kubernetes deployments and services. I used <a href="https://docs.confluent.io/current/installation/docker/docs/config-reference.html" rel="nofollow noreferrer">this link</a> as a reference for the environment variables set up. However, when I try to run Kafka with registry I see that the schema registry pods crashes with an error message in the logs:</p> <pre><code>[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. [main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list. java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. </code></pre> <p>What could be the reason of this error?</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-service spec: ports: - name: client port: 9092 selector: app: kafka server-id: "1" --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-1 spec: selector: matchLabels: app: kafka server-id: "1" replicas: 1 template: metadata: labels: app: kafka server-id: "1" spec: volumes: - name: kafka-data emptyDir: {} containers: - name: server image: confluentinc/cp-kafka:5.1.0 env: - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_BROKER_ID value: "2" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" ports: - containerPort: 9092 volumeMounts: - mountPath: /var/lib/kafka name: kafka-data --- apiVersion: v1 kind: Service metadata: name: schema-registry-service spec: ports: - name: client port: 8081 selector: app: kafka-schema-registry --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-schema-registry spec: replicas: 1 selector: matchLabels: app: kafka-schema-registry template: metadata: labels: app: kafka-schema-registry spec: containers: - name: kafka-schema-registry image: confluentinc/cp-schema-registry:5.1.0 env: - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL value: zookeeper:2181 - name: SCHEMA_REGISTRY_HOST_NAME value: localhost - name: SCHEMA_REGISTRY_LISTENERS value: "http://0.0.0.0:8081" - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS value: PLAINTEXT://localhost:9092 ports: - containerPort: 8081 </code></pre>
Cassie
<p>You've configured Schema Registry to look for the Kafka broker at <code>kafka:9092</code>, but you've also configured the Kafka broker to advertise its address as <code>localhost:9092</code>. </p> <p>I'm not familiar with Kubernetes specifically, but <a href="https://rmoff.net/2018/08/02/kafka-listeners-explained/" rel="nofollow noreferrer">this article</a> describes how to handle networking config in principle when using containers, IaaS, etc. </p>
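<p>A hedged sketch of how the two Deployments from the question could be wired through the <code>kafka-service</code> Service name instead of <code>localhost</code> (single broker, PLAINTEXT only; excerpts of the <code>env</code> sections only):</p> <pre><code># Kafka broker container (excerpt)
env:
- name: KAFKA_ZOOKEEPER_CONNECT
  value: zookeeper:2181
- name: KAFKA_ADVERTISED_LISTENERS
  value: PLAINTEXT://kafka-service:9092        # advertise the Service DNS name, not localhost

# Schema Registry container (excerpt)
env:
- name: SCHEMA_REGISTRY_HOST_NAME
  value: kafka-schema-registry
- name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
  value: PLAINTEXT://kafka-service:9092        # reach the broker through its Service, not localhost
</code></pre>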
Robin Moffatt
<p><strong><em>Update</em></strong> The issue is resolved. </p> <ol> <li>I shut down Docker Desktop.</li> <li>Deleted C:\ProgramData\DockerDesktop and the .kube folder</li> <li>Restarted Docker Desktop</li> <li>Reset Docker Desktop to factory defaults</li> <li>Restarted it again and it worked.</li> </ol> <p>I started learning Kubernetes and Docker yesterday. I installed Docker Desktop today and my Docker container is running. When I check the enable Kubernetes option on Docker Desktop, it does not start. It just shows a loading indicator and below it says that Kubernetes is starting.<br> <a href="https://i.stack.imgur.com/eJcyo.png" rel="nofollow noreferrer">Photo of my Docker Desktop</a> <strong>What I have tried:</strong></p> <ol> <li>Uninstalling Docker Desktop and reinstalling it</li> <li>Reset Kubernetes Cluster</li> <li>Reset to Factory default</li> </ol> <p>I have tried other solutions too which I found here on Stack Overflow, like:</p> <ol> <li>Running it as an Administrator</li> <li>Running this in PowerShell and then trying to start Kubernetes: <code>[Environment]::SetEnvironmentVariable("KUBECONFIG", $HOME + "\.kube\config", [EnvironmentVariableTarget]::Machine)</code></li> </ol> <p>But none of these solutions fixes the problem I'm having.</p> <p><strong>Additional Information:</strong></p> <ol> <li>I'm running Docker Desktop on Windows 10 Pro</li> <li>Docker version: 2.2.0.5, Kubernetes version: v1.15.5</li> </ol>
Talal Abbas
<h2>Just follow these steps</h2> <ol> <li>Stop Docker Desktop</li> <li>Remove the folder <code>~/Library/Group\ Containers/group.com.docker/pki</code></li> </ol> <pre><code> rm -rf ~/Library/Group\ Containers/group.com.docker/pki </code></pre> <ol start="3"> <li>Start Docker Desktop</li> </ol> <p>Found <a href="https://github.com/docker/for-mac/issues/3594#issuecomment-621487150" rel="nofollow noreferrer">the solution here</a>.</p> <p>And given that Kubernetes got stuck every time I tried to start &quot;Docker for Desktop&quot;, it was useful for me to remove the Kubernetes autostart from the configuration in order to investigate further.</p> <p>Just edit the file:</p> <pre><code>~/Library/Group\ Containers/group.com.docker/settings.json </code></pre> <p>And change the <code>kubernetesEnabled</code> value to <code>false</code>.</p>
freedev
<h2>Background</h2> <p>I'm running a Kubernetes cluster on Google Cloud Platform. I have 2 node pools in my cluster: <code>A</code> and <code>B</code>. <code>B</code> is cheaper (it depends on the hardware). I prefer that my deployment runs on <code>B</code>, unless there are no free resources in <code>B</code>. In that case, new pods will deploy to <code>A</code>. </p> <p>So I added this section to the deployment YAML:</p> <pre><code> affinity:
   nodeAffinity:
     preferredDuringSchedulingIgnoredDuringExecution:
     - preference:
         matchExpressions:
         - key: B
           operator: Exists
       weight: 100
</code></pre> <p>So I am giving more weight to node pool B.</p> <p>At the start, it worked fine. I came back after 24 hours and found that some pods were deployed to node pool A while I had free resources (un-allocated machines) in node pool B. This is a waste of money. </p> <h2>So, how did this happen?</h2> <p>I am sure that the <code>nodeAffinity</code> property is working correctly. I suspect that at some point, node pool B was running without any free resources. At that point, the cluster wanted to grow... and the new pod was deployed to node pool A. Up to here, everything is fine...</p> <h2>What I want to achieve</h2> <p>Let's say that an hour after node pool <code>B</code> ran out of resources, there are plenty of resources free for allocation. I want Kubernetes to move the existing pods from A to their new home in node pool B. </p> <p>I am looking for something like <code>preferredDuringSchedulingPreferedDuringExecution</code>.</p> <h2>Question</h2> <p>Is this possible?</p> <h2>Update</h2> <p>Based on @Hitobat's answer, I tried to use this code:</p> <pre><code> spec:
   tolerations:
   - key: A
     operator: "Exists"
     effect: "NoExecute"
     tolerationSeconds: 60
</code></pre> <p>Unfortunately, after waiting enough time, I still see pods on my <code>A</code> nodepool. Did I do something wrong?</p>
No1Lives4Ever
<p>You can taint pool A. Then configure <em>all</em> your pods to tolerate the taint, but with a tolerationSeconds for the duration you want. This is in addition to the config you already did for pool B.</p> <p>The effect will be that the pod is scheduled to A if it won't fit on B, but then after a while will be evicted (and hopefully rescheduled onto B again).</p> <p>See: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions</a></p>
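<p>One thing worth double-checking with the update in the question above: the toleration only has an effect if the pool-A nodes actually carry a matching <code>NoExecute</code> taint. A sketch of the pair, using the GKE node-pool label and an illustrative taint key:</p> <pre><code># taint every node in pool A (or set --node-taints when creating the pool)
kubectl taint nodes -l cloud.google.com/gke-nodepool=A dedicated=pool-a:NoExecute
</code></pre> <pre><code># matching toleration in the pod spec
tolerations:
- key: dedicated
  operator: Equal
  value: pool-a
  effect: NoExecute
  tolerationSeconds: 60   # evicted from pool A after ~60s, then hopefully rescheduled onto B
</code></pre>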
Hitobat
<p>I have a specific version of postgres with postgis that I need to use for my database. This is what I usually do to provision it:</p> <ol> <li>Create a kubernetes secret that holds the value of the admin password</li> <li>Create a PV (persistent volume) - I have a .yaml for this</li> <li>Create a PVC (pv claim) to use for the pgdata - I have a .yaml for this</li> <li>Create a deployment - I have a .yaml for this</li> <li>Create a service - I have a .yaml for this</li> </ol> <p>How do I automate these steps? Eventually I'd like to automate the whole app stack that we use, like a one-click install. I can do it with a bunch of shell scripts now, but I'm looking for a better way to do this. Any good suggestions?</p>
Remember_me
<p>Ever looked into <a href="https://www.terraform.io/" rel="nofollow noreferrer">Terraform</a> or <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>? Both are declarative ways to create reproducible deployments. IMO Terraform is more for the basic Kubernetes infrastructure itself, while I see Helm more used for Kubernetes applications running on this infrastructure.</p> <p>Since you're starting from scratch you could also look at tools like <a href="https://www.pulumi.com/" rel="nofollow noreferrer">Pulumi</a> which allows you to code your infrastructure. This might come handy when doing complex migrations, which would require multiple Terraform/Helm charts and additional scripts exectuted in a certain order, whereas a migration could also be done in one Pulumi exection.</p> <p>For completeness I am adding <a href="https://github.com/kris-nova/naml" rel="nofollow noreferrer">NAML</a> which is an open source tool I recently came accross.</p>
Alex_M
<p>From what I've read about Kubernetes, if the master(s) die, the workers should still be able to function as normal (<a href="https://stackoverflow.com/a/39173007/281469">https://stackoverflow.com/a/39173007/281469</a>), although no new scheduling will occur.</p> <p>However, I've found this to not be the case when the master can also schedule worker pods. Take a 2-node cluster, where one node is a master and the other a worker, and the master has the taints removed:</p> <p><a href="https://i.stack.imgur.com/QojGI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QojGI.png" alt="diagram"></a></p> <p>If I shut down the master and <code>docker exec</code> into one of the containers on the worker I can see that:</p> <pre><code>nc -zv ip-of-pod 80 </code></pre> <p>succeeds, but</p> <pre><code>nc -zv ip-of-service 80 </code></pre> <p>fails half of the time. The Kubernetes version is v1.15.10, using iptables mode for kube-proxy.</p> <p>I'm guessing that since the kube-proxy on the worker node can't connect to the apiserver, it will not remove the master node from the iptables rules.</p> <p>Questions:</p> <ol> <li>Is it expected behaviour that kube-proxy won't stop routing to pods on master nodes, or is there something "broken"?</li> <li>Are any workarounds available for this kind of setup to allow the worker nodes to still function correctly?</li> </ol> <p>I realise the best thing to do is separate the CP nodes but that's not viable for what I'm working on at the moment.</p>
bcoughlan
<blockquote> <p>Is it expected behaviour that kube-proxy won't stop routing to pods on master nodes, or is there something "broken"? </p> <p>Are any workarounds available for this kind of setup to allow the worker nodes to still function correctly?</p> </blockquote> <p>The cluster master plays the role of decision maker for the various activities in cluster's nodes. This can include scheduling workloads, managing the workloads' lifecycle, scaling etc.. Each node is managed by the master components and contains the services necessary to run pods. The services on a node typically includes the kube-proxy, container runtime and kubelet. </p> <p>The kube-proxy component enforces network rules on nodes and helps kubernetes in managing the connectivity among Pods and Services. Also, the kube-proxy, acts as an egress-based load-balancing controller which keeps monitoring the the kubernetes API server and continually updates node's iptables subsystem based on it.</p> <p>In simple terms, the master node only is aware of everything and is in charge of creating the list of routing rules as well based on node addition or deletion etc. kube-proxy plays a kind of enforcer whereby it takes charge of checking with master, syncing the information and enforcing the rules on the list.</p> <p>If the master node(API server) is down, the cluster will not be able to respond to API commands or deploy nodes. If another master node is not available, there shall be no one else available who can instruct the worker nodes on change in work allocation and hence they shall continue to execute the operations that were earlier scheduled by the master until the time the master node is back and gives different instructions. Inline to it, kube-proxy shall also be unable to get the latest rules by sync up with master, however it shall not stop routing and shall continue to handle the networking and routing functionalities (uses the earlier iptable rules that were determined before the master node went down) that shall allow network communication to your pods provided all pods in worker nodes are still up and running.</p> <p>Single master node based architecture is not a preferred deployment architecture for production. Considering that resilience and reliability is one of the major business goal of kubernetes, it is recommended as a best practice to have HA cluster based architecture to avoid single point of failure.</p>
Karthik Balaguru
<p>Ok.. so, we have Google Secret Manager on GCP, AWS Secret Manager in AWS, Key Vault in Azure... and so on.</p> <p>Those services give you libs so you can code the way your software will access the secrets there. They all look straightforward and sort of easy to implement. Right?</p> <p>For instance, using Google SM you can like:</p> <pre><code>from google.cloud import secretmanager client = secretmanager.SecretManagerServiceClient() request = {&quot;name&quot;: f&quot;projects/&lt;proj-id&gt;/secrets/mysecret/versions/1&quot;} response = client.access_secret_version(request) payload = response.payload.data.decode(&quot;UTF-8&quot;) </code></pre> <p>and you're done.</p> <p>I mean, if we talk about K8S, you can improve the code above by reading the vars from a configmap where you may have all the resources of your secrets, like:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: myms namespace: myns data: DBPASS: projects/&lt;proj-id&gt;/secrets/mysecretdb/versions/1 APIKEY: projects/&lt;proj-id&gt;/secrets/myapikey/versions/1 DIRTYSECRET: projects/&lt;proj-id&gt;/secrets/mydirtysecret/versions/1 </code></pre> <p>And then use part of the code above to load the vars and get the secrets from the SM.</p> <p>So, when I was looking the <em>interwebs</em> for best practices and examples, I found projects like the below:</p> <ol> <li><a href="https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp</a></li> <li><a href="https://github.com/doitintl/secrets-init" rel="nofollow noreferrer">https://github.com/doitintl/secrets-init</a></li> <li><a href="https://github.com/doitintl/kube-secrets-init" rel="nofollow noreferrer">https://github.com/doitintl/kube-secrets-init</a></li> <li><a href="https://github.com/aws-samples/aws-secret-sidecar-injector" rel="nofollow noreferrer">https://github.com/aws-samples/aws-secret-sidecar-injector</a></li> <li><a href="https://github.com/aws/secrets-store-csi-driver-provider-aws" rel="nofollow noreferrer">https://github.com/aws/secrets-store-csi-driver-provider-aws</a></li> </ol> <p>But those projects don't clearly explain what's the point of mounting my secrets as files or env_vars..</p> <p>I got really confused, maybe I'm too newbie on the K8S and cloud world... and that's why I'm here asking, maybe, a really really dumb questions. Sorry :/</p> <p>My questions are:</p> <ol> <li>Are the projects, mentioned above, recommended for old code that I do not want to touch? I mean, let's say that my code already use a env var called DBPASS=mypass and I would like to workaround it so the value from the DBPASS env would be <em>hackinjected</em> by a value from a SM.</li> <li>The implementation to handle a secret manager is very hard. So it is recommended to use one of the solutions above?</li> <li>What are the advantages of such injection approach?</li> </ol> <p>Thx a lot!</p>
JGG
<p>There are many possible motivations why you may want to use an abstraction (such as the CSI driver or sidecar injector) over a native integration:</p> <ul> <li><p><strong>Portability</strong> - If you're multi-cloud or multi-target, you may have multiple secret management solutions. Or you might have a different secret manager target for local development versus production. Projecting secrets onto a virtual filesystem or into environment variables provides a &quot;least common denominator&quot; approach that decouples the application from its secrets management provider.</p> </li> <li><p><strong>Local development</strong> - Similar to the previous point on portability, it's common to have &quot;fake&quot; or fakeish data for local development. For local dev, secrets might all be fake and not need to connect to a real secret manager. Moving to an abstraction avoids error-prone spaghetti code like:</p> <pre class="lang-js prettyprint-override"><code>let secret; if (process.env.RAILS_ENV === 'production') { secret = secretmanager.access('...') } else { secret = 'abcd1234' } </code></pre> </li> <li><p><strong>De-risking</strong> - By avoiding a tight coupling, you can de-risk upstream API changes in an abstraction layer. This is conceptual similar to the benefits of microservices. As a platform team, you make a guarantee to your developers that their secret will live at <code>process.env.FOO</code>, and it doesn't matter <em>how</em> it gets there, so long as you continue to fulfill that API contract.</p> </li> <li><p><strong>Separate of duties</strong> - In some organizations, the platform team (k8s team) is separate from the security team, is separate from development teams. It might not be realistic for a developer to ever have direct access to a secret manager.</p> </li> <li><p><strong>Preserving identities</strong> - Depending on the implementation, it's possible that the actor which accesses the secret varies. Sometimes it's the k8s cluster, sometimes it's the individual pod. They both had trade-offs.</p> </li> </ul> <hr /> <p>Why might you <em>not</em> want this abstraction? Well, it adds additional security concerns. Exposing secrets via environment variables or via the filesystem makes you subject to a generic series of supply chain attacks. Using a secret manager client library or API directly doesn't entirely prevent this, but it forces a more targeted attack (e.g. core dump) instead of a more generic path traversal or env-dump-to-pastebin attack.</p>
sethvargo
<p>Kubernetes assigns an IP address to each container, but how can I acquire the IP address from a container in the Pod? I couldn't find a way in the documentation.</p> <p>Edit: I'm going to run an Aerospike cluster in Kubernetes, and the config file needs its own IP address. I'm attempting to use confd to set the hostname. I would use the environment variable if it was set.</p>
yanana
<p>The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below. (the block below additionally makes the name and namespace available to the pod)</p> <pre><code>env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP </code></pre> <p>Recreate the pod/rc and then try</p> <pre><code>echo $MY_POD_IP </code></pre> <p>also run <code>env</code> to see what else kubernetes provides you with.</p>
PiersyP
<p>All of a sudden, I cannot deploy some images which could be deployed before. I got the following pod status:</p> <pre><code>[root@webdev2 origin]# oc get pods NAME READY STATUS RESTARTS AGE arix-3-yjq9w 0/1 ImagePullBackOff 0 10m docker-registry-2-vqstm 1/1 Running 0 2d router-1-kvjxq 1/1 Running 0 2d </code></pre> <p>The application just won't start. The pod is not trying to run the container. From the Event page, I have got <code>Back-off pulling image &quot;172.30.84.25:5000/default/arix@sha256:d326</code>. I have verified that I can pull the image with the tag with <code>docker pull</code>.</p> <p>I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it.</p> <p>I have run out of ideas to debug the issues. What can I check more?</p>
Devs love ZenUML
<p>You can use the '<em><strong>describe pod</strong></em>' syntax</p> <p><strong>For OpenShift use:</strong></p> <pre><code>oc describe pod &lt;pod-id&gt; </code></pre> <p><strong>For vanilla Kubernetes:</strong></p> <pre><code>kubectl describe pod &lt;pod-id&gt; </code></pre> <p>Examine the events of the output. In my case it shows <code>Back-off pulling image unreachableserver/nginx:1.14.22222</code></p> <p>In this case the image <code>unreachableserver/nginx:1.14.22222</code> can not be pulled from the Internet because there is no Docker registry unreachableserver and the image <code>nginx:1.14.22222</code> does not exist.</p> <p><strong>NB: If you do not see any events of interest and the pod has been in the 'ImagePullBackOff' status for a while (seems like more than 60 minutes), you need to delete the pod and look at the events from the new pod.</strong></p> <p><strong>For OpenShift use:</strong></p> <pre><code>oc delete pod &lt;pod-id&gt; oc get pods oc get pod &lt;new-pod-id&gt; </code></pre> <p><strong>For vanilla Kubernetes:</strong></p> <pre><code>kubectl delete pod &lt;pod-id&gt; kubectl get pods kubectl get pod &lt;new-pod-id&gt; </code></pre> <p>Sample output:</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32s default-scheduler Successfully assigned rk/nginx-deployment-6c879b5f64-2xrmt to aks-agentpool-x Normal Pulling 17s (x2 over 30s) kubelet Pulling image &quot;unreachableserver/nginx:1.14.22222&quot; Warning Failed 16s (x2 over 29s) kubelet Failed to pull image &quot;unreachableserver/nginx:1.14.22222&quot;: rpc error: code = Unknown desc = Error response from daemon: pull access denied for unreachableserver/nginx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Warning Failed 16s (x2 over 29s) kubelet Error: ErrImagePull Normal BackOff 5s (x2 over 28s) kubelet Back-off pulling image &quot;unreachableserver/nginx:1.14.22222&quot; Warning Failed 5s (x2 over 28s) kubelet Error: ImagePullBackOff </code></pre> <p><strong>Additional debugging steps</strong></p> <ol> <li>try to pull the docker image and tag manually on your computer</li> <li>Identify the node by doing a 'kubectl/oc get pods -o wide'</li> <li>ssh into the node (if you can) that can not pull the docker image</li> <li>check that the node can resolve the DNS of the docker registry by performing a ping.</li> <li>try to pull the docker image manually on the node</li> <li>If you are using a private registry, check that your <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">secret</a> exists and the secret is correct. Your secret should also be in the same namespace. Thanks <a href="https://stackoverflow.com/users/2677943/swenzel">swenzel</a></li> <li>Some registries have firewalls that limit ip address access. The firewall may block the pull</li> <li>Some CIs create deployments with temporary docker secrets. So the secret expires after a few days (You are asking for production failures...)</li> </ol>
rjdkolb
<p>On Google Kubernetes Engine (GKE) you can use the <code>cloud.google.com/app-protocols</code> annotation on a Service to specify what protocol is used on that port (HTTP or HTTPS) <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application" rel="nofollow noreferrer">docs</a></p> <p>When you create an External HTTP(S) Ingress, it will use this protocol between the Ingress and the Service.</p> <p>How do I set things up so that the Service uses a certificate that is actually trusted by the Ingress?</p> <p>Does it just trust any certificate signed by the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-trust" rel="nofollow noreferrer">Cluster Root CA</a>? <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#create-a-certificate-signing-request" rel="nofollow noreferrer">Manage tls in a cluster</a> suggests you need to include the pod IP address in the CSR - does that mean generating the CSR and waiting for the signed certificate to be created should be part of my container startup process?</p>
Arnout Engelen
<p>It turns out that when the &quot;GKE Ingress for HTTP(S) Load Balancing&quot; uses HTTPS to connect to the service, it accepts <em>any</em> certificate as valid (even a self-signed one), without further configuration.</p> <p>Apparently it does not validate the certificate, so TLS is not used to protect against MITM attacks here (which I guess might be reasonable).</p>
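<p>For reference, the annotation that tells the GKE Ingress to speak HTTPS to a backend looks roughly like this (a sketch; the Service name, port name and port numbers are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/app-protocols: '{&quot;my-https-port&quot;:&quot;HTTPS&quot;}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: my-https-port
      port: 443
      targetPort: 8443
</code></pre> <p>The certificate the Pod presents on port 8443 can then be self-signed, as described above.</p>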
Arnout Engelen
<p>I am trying to deploy an app to kubernetes cluster and I want to store data in <strong>Persistent Volume</strong>. However, I am very confused about two parameters in the setup. Can someone explains what is the different between <strong>volumes.hostPath</strong> and <strong>volumeMounts.mountPath</strong>? I read some documentations online but it does not help me to understand.</p> <pre><code>volumeMounts: - mountPath: /var/lib/mysql volumes: hostPath: path: /k8s </code></pre> <p>If my setup is as above, is the volume going to be mounted at <code>/k8s/var/lib/mysql</code>?</p>
jiashenC
<p>The mount path is always the destination inside the Pod that a volume gets mounted to.</p> <p>I think the documentation is pretty clear on what hostPath does:</p> <blockquote> <p>A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.</p> <p>For example, some uses for a hostPath are:</p> <pre><code>- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker - running cAdvisor in a Container; use a hostPath of /sys - allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as </code></pre> </blockquote> <p>So your example does not do what you think it does. It would mount the node's <code>/k8s</code> directory into the Pod at <code>/var/lib/mysql</code>.</p> <p>This should be done only if you fully understand the implications!</p>
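<p>To make the distinction concrete, here is a minimal sketch of a Pod that mounts the node's <code>/k8s</code> directory into the container at <code>/var/lib/mysql</code> (the names and image are illustrative):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: app
      image: mysql:8
      volumeMounts:
        - name: data                  # must match the volume name below
          mountPath: /var/lib/mysql   # destination inside the container
  volumes:
    - name: data
      hostPath:
        path: /k8s                    # source directory on the node
        type: DirectoryOrCreate
</code></pre> <p>Anything the container writes to <code>/var/lib/mysql</code> ends up in <code>/k8s</code> on whichever node the Pod is scheduled on, which is exactly why a hostPath is rarely the right choice for durable data.</p>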
matthias krull
<p>I've got a persistent disk (GCP), that I'm hoping to be able to allow read write access to multiple pods.</p> <p>Is this possible? Here are my two configs:</p> <p><strong>pVolume.yaml</strong></p> <pre><code>apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" spec: storageClassName: manual capacity: storage: "10Gi" accessModes: - "ReadWriteMany" gcePersistentDisk: fsType: "ext4" pdName: "wordpress-disk" </code></pre> <p><strong>pVolumeClaim.yaml</strong></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task-pv-claim spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 3Gi </code></pre> <p>With the above config, I see the following error on my pods:</p> <pre><code>FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz' </code></pre> <p>This occurs with the replica count set to 2. </p>
Chris Stryczynski
<p>For a GCP persistent disk in ReadWrite mode on different nodes, this is not possible :(</p> <p>It is, however, possible to:</p> <ul> <li>Have both replicas scheduled on the <strong>same node</strong>. In that case both of them can mount the same persistent disk ReadWrite</li> <li>Use it in <strong>ReadOnly mode</strong>, on any number of nodes (see the sketch below)</li> <li>Use a <strong>different kind of PV</strong>, like <a href="https://github.com/gluster/gluster-kubernetes" rel="nofollow noreferrer">gluster</a> or NFS, that supports this kind of use</li> </ul>
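<p>For the ReadOnly option, the PersistentVolume from the question would look roughly like this (a sketch based on the manifests above):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    fsType: ext4
    pdName: wordpress-disk
    readOnly: true
</code></pre> <p>The PersistentVolumeClaim then requests <code>ReadOnlyMany</code> as well, and the Pods mount the claim with <code>readOnly: true</code>.</p>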
Janos Lenart
<p>I'm trying to upgrade some GKE cluster from 1.21 to 1.22 and I'm getting some warnings about deprecated APIs. Am running Istio 1.12.1 version as well in my cluster</p> <p>One of them is causing me some concerns:</p> <p><code>/apis/extensions/v1beta1/ingresses</code></p> <p>I was surprised to see this warning because we are up to date with our deployments. We don't use Ingresses.</p> <p>Further deep diving, I got the below details:</p> <pre><code>➜ kubectl get --raw /apis/extensions/v1beta1/ingresses | jq Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress { &quot;kind&quot;: &quot;IngressList&quot;, &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;metadata&quot;: { &quot;resourceVersion&quot;: &quot;191638911&quot; }, &quot;items&quot;: [] } </code></pre> <p>It seems an IngressList is that calls the old API. Tried deleting the same,</p> <pre><code>➜ kubectl delete --raw /apis/extensions/v1beta1/ingresses Error from server (MethodNotAllowed): the server does not allow this method on the requested resource </code></pre> <p>Neither able to delete it, nor able to upgrade.</p> <p>Any suggestions would be really helpful.</p> <p>[Update]: My GKE cluster got updated to <code>1.21.11-gke.1900</code> and after that the warning messages are gone.</p>
Sunil
<p>In our case, an old version of <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> seems to have been the source of the beta API calls. A few days after updating kube-state-metrics, the deprecated API calls stopped.</p>
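<p>If you need to track down which client is still making deprecated calls, the API server exposes a metric for exactly this on Kubernetes 1.19+ (a quick check, not GKE-specific):</p> <pre><code>kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
</code></pre> <p>Cross-referencing the group/version/resource labels of that metric with your workloads (or with the audit logs) usually points at the offending component.</p>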
hiroshi
<p>I am following the <code>Installation Instructions</code> from <a href="https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server</a> and even though the service type has been changes to <code>LoadBalancer</code> I cannot manage to login.</p> <p>The information I have is:</p> <pre><code>$ oc describe svc argocd-server Name: argocd-server Namespace: argocd Labels: app.kubernetes.io/component=server app.kubernetes.io/name=argocd-server app.kubernetes.io/part-of=argocd Annotations: &lt;none&gt; Selector: app.kubernetes.io/name=argocd-server Type: LoadBalancer IP: 172.30.70.178 LoadBalancer Ingress: a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com Port: http 80/TCP TargetPort: 8080/TCP NodePort: http 30942/TCP Endpoints: 10.128.3.91:8080 Port: https 443/TCP TargetPort: 8080/TCP NodePort: https 30734/TCP Endpoints: 10.128.3.91:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>If I do:</p> <pre><code>$ oc login https://a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com The server is using a certificate that does not match its hostname: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y error: Seems you passed an HTML page (console?) instead of server URL. Verify provided address and try again. </code></pre>
Hector Esteban
<p>I managed to successfully log in to argocd-server with the following:</p> <pre><code>kubectl patch svc argocd-server -n argocd -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}' argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret \ -o jsonpath=&quot;{.data.password}&quot; | base64 -d) argocd login --insecure --grpc-web k3s_master:32761 --username admin \ --password $argoPass </code></pre>
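<p>The <code>k3s_master:32761</code> part is just a node address plus the NodePort the Service happens to have; if you keep the LoadBalancer/NodePort setup you can look it up with:</p> <pre><code>kubectl -n argocd get svc argocd-server
</code></pre> <p>The number after the colon in the PORT(S) column (for example <code>443:32761/TCP</code>) is the NodePort to log in against.</p>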
j3ffyang
<p>I'm using github actions for creating new images and pushing them to a registry.</p> <pre><code> - name: Build the Docker image run: docker build . --file Dockerfile --tag ${{secrets.DOCKER_USER}}/book:$GITHUB_SHA </code></pre> <p>This works perfectly. Now, I need to replace the value of the image. So I thought of using yq</p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - image: xxxxx/book:main </code></pre> <pre><code> - name: Replace Image run: yq -i e '.spec.template.spec.containers.image |= ${{secrets.DOCKER_USER}}/guestbook:$GITHUB_SHA' argo/deployment.yaml </code></pre> <p>But getting this error <code>Error: 1:43: invalid input text &quot;xxxx/book:...&quot; Error: Process completed with exit code 1.</code></p> <p>How I would need to do this replacement?</p>
Diego
<p>Use the correct json path expression to the <code>image</code> property and quote the replacement value.</p> <p>Example:</p> <pre><code>$ yq '.spec.template.spec.containers.[0].image = &quot;STRING&quot;' argo/deployment.yaml apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - image: STRING $ yq --version yq (https://github.com/mikefarah/yq/) version v4.34.1 </code></pre>
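<p>To feed the GitHub Actions expression into that path without quoting headaches, one option (a sketch, assuming mikefarah yq v4) is to pass it through an environment variable and <code>strenv</code>:</p> <pre class="lang-yaml prettyprint-override"><code>- name: Replace Image
  env:
    IMAGE: ${{ secrets.DOCKER_USER }}/guestbook:${{ github.sha }}
  run: yq -i '.spec.template.spec.containers[0].image = strenv(IMAGE)' argo/deployment.yaml
</code></pre> <p>This avoids interpolating the secret directly into the yq expression string.</p>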
hakre
<p>In kubernetes I can currently limit the CPU and Memory of the containers, but what about the hard disk size of the containers.</p> <p>For example, how could I avoid that someone runs a container in my k8s worker node that stores internally .jpg files making this container grow and grow with the time.</p> <p>Not talking about Persistent Volume and Persistent Volume Claim. I'm talking that if someone makes an error in the container and write inside the container filesystem I want to control it.</p> <p>Is there a way to limit the amount of disk used by the containers?</p> <p>Thank you.</p>
Jxadro
<p>There is some support for this; the tracking issues are <a href="https://github.com/kubernetes/features/issues/361" rel="nofollow noreferrer">#361</a>, <a href="https://github.com/kubernetes/features/issues/362" rel="nofollow noreferrer">#362</a> and <a href="https://github.com/kubernetes/features/issues/363" rel="nofollow noreferrer">#363</a>. You can define requests and/or limits on the resource called <code>ephemeral-storage</code>, like so (for a Pod/PodTemplate):</p> <pre><code>spec: containers: - name: foo resources: requests: ephemeral-storage: 50Mi limits: ephemeral-storage: 50Mi </code></pre> <p>The page <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">Reserve Compute Resources for System Daemons</a> has some additional information on this feature.</p>
Janos Lenart
<p>I have Kubernets 1.20.1 cluster with single master and single worker configured with <code>ipvs</code> mode. Using calico CNI <code>calico/cni:v3.16.1</code>. Cluster running on OS RHEL 8 kernel <code>4.18.0-240.10</code> with firewalld and selinux disabled.</p> <p>Running one <code>netshoot</code> pod (<code>10.1.30.130</code>) on master and another pod (<code>10.3.65.132</code>) in worker node.</p> <ol> <li>I can ping both pod, in both direction</li> <li>if run the nc command in web server mode, connection is not working. I tried to run nginx on both server, not able get http traffic one server from another server.</li> </ol> <p>Ran the tcpdump on both servers <code>tcpdump -vv -nn -XX -i any host &lt;PODIP&gt;</code> I can see ping traffic going to both nodes, but TCP traffic not reaching the other node.</p> <p><code>iptables -vL | grep DROP</code> command not showing any packet drop on both nodes.</p> <p>I don't know where the TCP traffic getting lost, need some tips to troubleshoot this issue.</p> <p><strong>Master node iptables-save command output</strong></p> <pre><code># Generated by iptables-save v1.8.4 on Sat Jan 16 18:52:50 2021 *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-KUBELET-CANARY - [0:0] :KUBE-SERVICES - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-NODE-PORT - [0:0] :KUBE-LOAD-BALANCER - [0:0] -A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES -A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING -A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-POSTROUTING -m comment --comment &quot;Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose&quot; -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE -A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN -A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0 -A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully -A KUBE-SERVICES ! -s 10.0.0.0/14 -m comment --comment &quot;Kubernetes service cluster ip + port for masquerade purpose&quot; -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ -A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT -A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT -A KUBE-FIREWALL -j KUBE-MARK-DROP -A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ COMMIT # Completed on Sat Jan 16 18:52:50 2021 # Generated by iptables-save v1.8.4 on Sat Jan 16 18:52:50 2021 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :KUBE-FIREWALL - [0:0] :KUBE-KUBELET-CANARY - [0:0] :KUBE-FORWARD - [0:0] -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD -A OUTPUT -j KUBE-FIREWALL -A KUBE-FIREWALL -m comment --comment &quot;kubernetes firewall for dropping marked packets&quot; -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! 
--ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack pod source rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack pod destination rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT COMMIT # Completed on Sat Jan 16 18:52:50 2021 # Generated by iptables-save v1.8.4 on Sat Jan 16 18:52:50 2021 *mangle :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :KUBE-KUBELET-CANARY - [0:0] COMMIT # Completed on Sat Jan 16 18:52:50 2021 </code></pre> <p><strong>Worker iptables-save output</strong></p> <pre><code># Generated by iptables-save v1.8.4 on Sat Jan 16 18:53:58 2021 *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-KUBELET-CANARY - [0:0] :KUBE-SERVICES - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-NODE-PORT - [0:0] :KUBE-LOAD-BALANCER - [0:0] -A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES -A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING -A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN -A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0 -A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully -A KUBE-SERVICES ! -s 10.0.0.0/14 -m comment --comment &quot;Kubernetes service cluster ip + port for masquerade purpose&quot; -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ -A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT -A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT -A KUBE-FIREWALL -j KUBE-MARK-DROP -A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ COMMIT # Completed on Sat Jan 16 18:53:58 2021 # Generated by iptables-save v1.8.4 on Sat Jan 16 18:53:58 2021 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :KUBE-FIREWALL - [0:0] :KUBE-KUBELET-CANARY - [0:0] :KUBE-FORWARD - [0:0] -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD -A OUTPUT -j KUBE-FIREWALL -A KUBE-FIREWALL -m comment --comment &quot;kubernetes firewall for dropping marked packets&quot; -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! 
--ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack pod source rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack pod destination rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT COMMIT # Completed on Sat Jan 16 18:53:58 2021 # Generated by iptables-save v1.8.4 on Sat Jan 16 18:53:58 2021 *mangle :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :KUBE-KUBELET-CANARY - [0:0] COMMIT # Completed on Sat Jan 16 18:53:58 2021 </code></pre>
sfgroups
<p>I was able to resolve this issue by disabling the UDP tunnel offloads on the <code>ens192</code> interface of the VMware VM, via the following <code>ETHTOOL_OPTS</code> setting:</p> <pre><code># cat /etc/sysconfig/network-scripts/ifcfg-ens192 | grep ETHTOOL ETHTOOL_OPTS=&quot;-K ens192 tx-udp_tnl-csum-segmentation off; -K ens192 tx-udp_tnl-segmentation off&quot; </code></pre> <p>I got the tip from here: <a href="https://github.com/kubernetes-sigs/kubespray/issues/7268" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/issues/7268</a> Thanks</p> <p>SR</p>
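<p>That setting takes effect when the interface is brought up again; to apply the same change immediately without restarting the interface, the equivalent one-off command (run as root) should be:</p> <pre><code>ethtool -K ens192 tx-udp_tnl-segmentation off tx-udp_tnl-csum-segmentation off
</code></pre>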
sfgroups
<p>I want to send data from a sensor written in Python to a Go http Server which are deployed with Kubernetes (k3s) on two Raspberry Pi's. The sensor will read every minute the temperatur and luminosity and send the data as a json with a timestamp to the server. At first when I run the setup it works, but after a while the sensor gets a <code>ConnectionRefusedError: [Errno 111] Connection refused</code> Error in its POST request. However after a while it will continue to work normal until it will break again. I do not know what causes this, since it works part-time. It will just suddenly refuse the connection.</p> <p>When I use <code>kubectl describe pod weather-sensor-5b88dd65d8-m8zn2</code> I get under Events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m16s (x3684 over 18h) kubelet Back-off restarting failed container </code></pre> <p>running <code>kubectl logs weather-sensor-5b88dd65d8-m8zn2</code> says:</p> <pre><code>send this data: {&quot;time&quot;: 1635531114, &quot;temp&quot;: &quot;23.25&quot;, &quot;lux&quot;: &quot;254&quot;} DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): weather-server:8080 DEBUG:urllib3.connectionpool:http://weather-server:8080 &quot;POST /weather HTTP/1.1&quot; 200 0 send this data: {&quot;time&quot;: 1635531175, &quot;temp&quot;: &quot;23.25&quot;, &quot;lux&quot;: &quot;252&quot;} DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): weather-server:8080 Traceback (most recent call last): File &quot;/usr/local/lib/python3.8/site-packages/urllib3/connection.py&quot;, line 174, in _new_conn conn = connection.create_connection( File &quot;/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py&quot;, line 96, in create_connection raise err File &quot;/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py&quot;, line 86, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused </code></pre> <p>The sensor will then continue to try to connect to the server until it gets a MaxRetryError. 
Then Kubernetes will terminate the pod because of <code>CrashLoopBackOff</code></p> <p>On the sensor I use this url for the post request: <code>URL = &quot;http://weather-server:8080/weather&quot;</code></p> <p>In the logs on the server side, I haven't seen anything unusual except that it only gets data erratically.</p> <p>Relevant Python code:</p> <pre><code>def create_data(temp, lux): weather = { 'time': int(time.time()), 'temp': temp, 'lux': lux } return json.dumps(weather) def send_data(data): try: headers = {'Content-Type': 'application/json'} requests.post(url=URL, data=data, headers=headers) except ConnectionError as e: print(e) </code></pre> <p>Here my yml files:</p> <p>sensor_deployment.yml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: weather-sensor labels: app: weather spec: replicas: 1 selector: matchLabels: app: weather template: metadata: labels: app: weather spec: containers: - name: weather-sensor image: weather-sensor:pi-1.14 imagePullPolicy: IfNotPresent securityContext: privileged: true </code></pre> <p>server_deployment.yml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: weather-server labels: app: weather spec: replicas: 2 selector: matchLabels: app: weather template: metadata: labels: app: weather spec: containers: - name: weather-server image: weather-server:pi-1.15 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 </code></pre> <p>server_service.yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: weather-server labels: app: weather spec: type: NodePort selector: app: weather ports: - port: 8080 protocol: TCP targetPort: 8080 </code></pre> <p>Any ideas?</p>
TheQuestioner
<p>I think the problem is the weather server (k8s service) is broken.</p> <p>This is because the selector is checking for pods with label <code>app=weather</code> which includes both server pods and sensor pods.</p> <p>If a sensor tries to send data (through the k8s service) to another sensor pod, then it will result in the error because the sensor does not listen for HTTP requests.</p> <p>To fix it, ensure that the app label is unique for each pod type. For example, weather-server has <code>app=weather-server</code> and weather-sensor has <code>app=weather-sensor</code>.</p> <p><strong>server_deployment.yml</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: weather-server labels: app: weather-server spec: replicas: 2 selector: matchLabels: app: weather-server template: metadata: labels: app: weather-server spec: containers: - name: weather-server image: weather-server:pi-1.15 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 </code></pre> <p><strong>server_service.yml</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: weather-server labels: app: weather-server spec: type: NodePort selector: app: weather-server ports: - port: 8080 protocol: TCP targetPort: 8080 </code></pre>
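<p>A quick way to confirm the fix is to check which pod IPs the Service now selects; only the two server pods should be listed:</p> <pre><code>kubectl get endpoints weather-server
</code></pre>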
Hitobat
<p>I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to directly point your domains to the node IP address, instead of an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created using and Instance Group/and an Immutable Instance Template.</p> <p><strong>I need to open port 443 on my nodes, how do I do that with Kubernetes or GCE? Preferably in an update resistant way.</strong></p> <p><a href="https://i.stack.imgur.com/cBa5b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBa5b.png" alt="enter image description here"></a></p> <p>Related github question: <a href="https://github.com/nginxinc/kubernetes-ingress/issues/502" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/issues/502</a></p>
Ray Foss
<p>Update 2023-06-23</p> <p>At some point Google added the ability to add network tags to your node pool... So now you can directly add http-server, https-server and it will work as expected (see the node-pool example at the end of this answer).</p> <hr /> <p>Update: A DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show 'Allow HTTPS traffic' as checked, because it's not using the default rule.</p> <hr /> <p>You can do everything you do on the GUI Google Cloud Console using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works, even though the GUI disabled the ability to do so.</p> <pre><code>gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server </code></pre> <p>This also works on the beta, meaning it should continue to work for a bit. See <a href="https://cloud.google.com/sdk/docs/scripting-gcloud" rel="nofollow noreferrer">https://cloud.google.com/sdk/docs/scripting-gcloud</a> for examples on how to automate this. Perhaps consider running it from a webhook when downtime is detected. Obviously none of this is ideal.</p> <p>Alternatively, you can change the instance templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.</p> <p>Source (he had the opposite problem; his problem is our solution): <a href="https://stackoverflow.com/a/51866195/370238">https://stackoverflow.com/a/51866195/370238</a></p>
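<p>For the 2023 node-pool route, the tags are set when the pool is created; a minimal sketch (the cluster name, pool name and zone are placeholders):</p> <pre><code>gcloud container node-pools create pool-with-http \
  --cluster my-cluster \
  --zone us-central1-b \
  --tags http-server,https-server
</code></pre> <p>Because the tags live on the node pool's instance template, they survive node upgrades and recreation, unlike tagging a single running instance.</p>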
Ray Foss
<p>In my kubernetes Ingress controller logging lots of handshake message like this. how to stop this error message? it appers request coming from with-in the pod 127.0.0.1</p> <pre><code>2018/09/15 13:28:28 [crit] 21472#21472: *323765 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442 2018/09/15 13:28:28 [crit] 21472#21472: *323766 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442 2018/09/15 13:28:28 [crit] 21472#21472: *323767 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442 2018/09/15 13:28:28 [crit] 21472#21472: *323768 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442 2018/09/15 13:28:28 [crit] 21472#21472: *323769 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442 </code></pre> <p>Here is ingress argument.</p> <pre><code> - args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - --configmap=$(POD_NAMESPACE)/nginx-configuration - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - --publish-service=$(POD_NAMESPACE)/ingress-nginx - --annotations-prefix=nginx.ingress.kubernetes.io - --enable-ssl-chain-completion=false - --default-ssl-certificate=ingress-nginx/ingress-tls-secret - --enable-ssl-passthrough </code></pre> <p>Thanks</p>
sfgroups
<p>My issue was with the HAProxy health-check configuration: it was set to <code>ssl-hello-chk</code>; after I changed it to <code>tcp-check</code>, the error messages stopped.</p> <p>Change this:</p> <pre><code>mode tcp balance leastconn option ssl-hello-chk </code></pre> <p>to:</p> <pre><code> mode tcp balance leastconn option tcp-check </code></pre>
sfgroups
<p>I am running some internal services and also some customer facing services in one K8s cluster. The internal ones should only be accessible from some specific ips and the customer facing services should be accessible worldwide.</p> <p>So I created my Ingresses and an nginx Ingress Controller and some K8s LoadBalancer Services with the proper ip filters. </p> <p>Now I see those Firewall rules in GCP are created behind the scenes. But they are conflicting and the "customer facing" firewall rules overrule the "internal" ones. And so everything of my K8s Cluster is visible worldwide. </p> <p>The usecase sounds not that exotic to me - do you have an idea how to get some parts of a K8s cluster protected by firewall rules and some accessible everywhere?</p>
Hubert Ströbitzer
<p>As surprising as it is, the L7 (http/https) load balancer in GCP created by a Kubernetes Ingress object <strong>has no IP whitelisting capabilities</strong> by default, so what you described is working as intended. You can filter on your end using the <code>X-Forwarded-For</code> header (see Target Proxies under <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow noreferrer">Setting Up HTTP(S) Load Balancing</a>).</p> <p>Whitelisting will be available through <a href="https://cloud.google.com/armor/docs/security-policy-concepts" rel="nofollow noreferrer">Cloud Armor</a>, which is in private beta at the moment.</p> <p>To make this situation slightly more complicated: the L4 (tcp/ssl) load balancer in GCP created by a Kubernetes LoadBalancer object (so, not an Ingress) does have IP filtering capability. You simply set <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service" rel="nofollow noreferrer"><code>.spec.loadBalancerSourceRanges</code></a> on the Service for that (see the example below). Of course, a Service will not give you url/host based routing, but you can achieve that by deploying an ingress controller like <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx-ingress</a>. If you go this route you can still create Ingresses for your internal services; you just need to annotate them so the new ingress controller picks them up. This is a fairly standard solution, and is actually cheaper than creating L7s for each of your internal services (you will only have to pay for 1 forwarding rule for all of your internal services).</p> <p>(By "internal services" above I meant services you need to be able to access from outside of the cluster itself but only from specific IPs, say a VPN, office, etc. For services you only need to access from inside the cluster you should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer"><code>type: ClusterIP</code></a>.)</p>
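<p>For reference, a Service restricted with <code>loadBalancerSourceRanges</code> looks roughly like this (a sketch; the name, labels and the CIDR of your VPN/office are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: internal-api
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  selector:
    app: internal-api
  ports:
    - port: 443
      targetPort: 8443
</code></pre> <p>On GCP this translates into the load balancer's firewall rule only allowing the listed source ranges.</p>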
Janos Lenart
<p>I have Kubernets 1.18 cluster with Calico CNI (v3.13.2). I was able to schedule to workload. but in the events I see <code>CIDRNotAvailable</code> message, coming from all nodes in the default name space.</p> <p>my CIDR range is <code>-cluster-cidr=10.236.0.0/16</code> in <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> file.</p> <pre><code>kg events -A -w NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE default 4m41s Normal CIDRNotAvailable node/kube01 Node kube01 status is now: CIDRNotAvailable default 23s Normal CIDRNotAvailable node/kube02 Node kube02 status is now: CIDRNotAvailable default 2m56s Normal CIDRNotAvailable node/kube03 Node kube03 status is now: CIDRNotAvailable default 4m33s Normal CIDRNotAvailable node/kube04 Node kube04 status is now: CIDRNotAvailable default 4m1s Normal CIDRNotAvailable node/kube29 Node kube29 status is now: CIDRNotAvailable default 94s Normal CIDRNotAvailable node/kube30 Node kube30 status is now: CIDRNotAvailable default 3m12s Normal CIDRNotAvailable node/kube31 Node kube31 status is now: CIDRNotAvailable </code></pre> <p>Any idea why it giving this message?</p> <p>Thanks SR</p> <p>subnet <a href="https://i.stack.imgur.com/0QRYW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0QRYW.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/S0V9X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S0V9X.png" alt="enter image description here"></a></p>
sfgroups
<p>I had to remove <code>serviceSubnet</code> from the kubeadm configuration (so the default service range is used) and use a dedicated range for the pod network: <code>podSubnet: 10.201.0.0/16</code>. After creating the cluster with this configuration the error stopped, and I can see that every node has a pod CIDR assigned:</p> <pre><code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' </code></pre>
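<p>For reference, the relevant kubeadm configuration ends up looking roughly like this (a sketch for the v1beta2 config API used by this Kubernetes version; <code>serviceSubnet</code> is simply left out so the 10.96.0.0/12 default applies):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.201.0.0/16
</code></pre> <p>The key point is that the pod subnet, the service subnet and your node/LAN subnet must not overlap.</p>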
sfgroups
<p>I have applied the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> to my k8s cluster, and notice there are no scrape configs for my services or pods.</p> <p>I'd like all services etc in my cluster to be scraped, according to the standard k8s attributes e.g.:</p> <pre class="lang-yaml prettyprint-override"><code>prometheus.io/path: /metrics prometheus.io/port: '8080' prometheus.io/scrape: 'true' </code></pre> <p>My question is:</p> <ul> <li>Is there a supported way to tell the operator to scrape everything? The docs seems to suggest not, so..</li> <li>Failing that, is there a place that I can upload some custom prometheus config to do the same?</li> </ul>
Jon Bates
<p>Looks like there was already a <a href="https://stackoverflow.com/questions/64452966/add-custom-scrape-endpoints-in-helm-chart-kube-prometheus-stack-deployment">solution</a></p> <p>Just add your additional jobs to the values file at this location:</p> <pre class="lang-yaml prettyprint-override"><code>prometheus: prometheusSpec: additionalScrapeConfigs: - job_name: some_job ... </code></pre>
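<p>If the goal is specifically to honour the classic <code>prometheus.io/scrape|path|port</code> annotations, the job to add is the usual annotation-driven pod discovery config, roughly like this (a sketch adapted from the standard Prometheus example configuration):</p> <pre class="lang-yaml prettyprint-override"><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: &quot;true&quot;
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
</code></pre> <p>Note that the operator's ServiceMonitor/PodMonitor CRDs are the more idiomatic way to add targets, but the snippet above gives you the &quot;scrape everything with the annotations&quot; behaviour you asked about.</p>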
Jon Bates
<p>I have successfully set up NGINX as an ingress for my Kubernetes cluster on GKE. I have enabled and configured external metrics (and I am using an external metric in my HPA for auto-scaling). All good there and it's working well.</p> <p>However, I have a deprecation warning in StackDriver around these external metrics. I have come to discover that these warnings are because of "old" resource types being used.</p> <p>For example, using this command:</p> <blockquote> <p>kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq</p> </blockquote> <p>I get this output:</p> <pre><code>{ "metricName": "custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections", "metricLabels": { "metric.labels.controller_class": "nginx", "metric.labels.controller_namespace": "ingress-nginx", "metric.labels.controller_pod": "nginx-ingress-controller-[snip]", "metric.labels.state": "writing", "resource.labels.cluster_name": "[snip]", "resource.labels.container_name": "", "resource.labels.instance_id": "[snip]", "resource.labels.namespace_id": "ingress-nginx", "resource.labels.pod_id": "nginx-ingress-controller-[snip]", "resource.labels.project_id": "[snip]", "resource.labels.zone": "[snip]", "resource.type": "gke_container" }, "timestamp": "2020-01-26T05:17:33Z", "value": "1" } </code></pre> <p>Note that the "resource.type" field is "gke_container". As of the next version of Kubernetes this needs to be "k8s_container".</p> <p>I have looked through the Kubernetes NGINX configuration to try to determine when (or if) an upgrade has been made to support the new StackDriver resource model, but I have failed so far. And I would rather not "blindly" upgrade NGINX if I can help it (even in UAT).</p> <p>These are the Docker images that I am currently using:</p> <pre><code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2 gcr.io/google-containers/prometheus-to-sd:v0.9.0 gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0 </code></pre> <p>Could anyone help out here?</p> <p>Thanks in advance, Ben</p>
benjimix
<p>Ok this has nothing to do with NGINX and everything to do with Prometheus (and specifically the Prometheus sidecar <code>prometheus-to-sd</code>).</p> <p>For future readers if your Prometheus start-up looks like this:</p> <pre><code> - name: prometheus-to-sd image: gcr.io/google-containers/prometheus-to-sd:v0.9.0 ports: - name: profiler containerPort: 6060 command: - /monitor - --stackdriver-prefix=custom.googleapis.com - --source=nginx-ingress-controller:http://localhost:10254/metrics - --pod-id=$(POD_NAME) - --namespace-id=$(POD_NAMESPACE) </code></pre> <p>Then is needs to look like this:</p> <pre><code> - name: prometheus-to-sd image: gcr.io/google-containers/prometheus-to-sd:v0.9.0 ports: - name: profiler containerPort: 6060 command: - /monitor - --stackdriver-prefix=custom.googleapis.com - --source=nginx-ingress-controller:http://localhost:10254/metrics - --monitored-resource-type-prefix=k8s_ - --pod-id=$(POD_NAME) - --namespace-id=$(POD_NAMESPACE) </code></pre> <p>That is, include the <code>--monitored-resource-type-prefix=k8s_</code> option.</p>
benjimix
<p>I set up my cluster with one master and two nodes. I can create pods on the nodes. If my master node fails (reboots) and I use kubeadm reset and then kubeadm init, I lose all my pods, deployments and services.</p> <p>Am I losing my pods because of the reset? What should I do?</p> <p>Some similar questions:</p> <p><a href="https://stackpointcloud.com/community/question/how-do-i-restart-my-kubernetes-cluster" rel="nofollow noreferrer">https://stackpointcloud.com/community/question/how-do-i-restart-my-kubernetes-cluster</a></p> <p><a href="https://stackoverflow.com/questions/48362855/is-there-a-best-practice-to-reboot-a-cluster">Is there a best practice to reboot a cluster</a></p>
gustavomr
<p><code>kubeadm reset</code> on the master deletes all configuration (files and a database too). There is no way back.</p> <p>You should not run <code>kubeadm init</code> when you reboot the master. <code>kubeadm init</code> is a one off action to bootstrap the cluster. When the master is rebooted your OS's init system (systemd, upstart, ...) should start <code>kubelet</code> which in turn starts the master components (as containers). An exception is if your cluster is <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-self-hosting" rel="nofollow noreferrer">self-hosting</a></p>
Janos Lenart
<p>I am using <code>kubectl port-forward</code> in a shell script but I find it is not reliable, or doesn't come up in time:</p> <pre><code>kubectl port-forward ${VOLT_NODE} ${VOLT_CLUSTER_ADMIN_PORT}:${VOLT_CLUSTER_ADMIN_PORT} -n ${NAMESPACE} &amp; if [ $? -ne 0 ]; then echo &quot;Unable to start port forwarding to node ${VOLT_NODE} on port ${VOLT_CLUSTER_ADMIN_PORT}&quot; exit 1 fi PORT_FORWARD_PID=$! sleep 10 </code></pre> <p>Often after I sleep for 10 seconds, the port isn't open or forwarding hasn't happened. Is there any way to wait for this to be ready. Something like <code>kubectl wait</code> would be ideal, but open to shell options also.</p>
eeijlar
<p>I took @AkinOzer's comment and turned it into this example where I port-forward a postgresql database's port so I can make a <code>pg_dump</code> of the database:</p> <pre><code>#!/bin/bash set -e localport=54320 typename=service/pvm-devel-kcpostgresql remoteport=5432 # This would show that the port is closed # nmap -sT -p $localport localhost || true kubectl port-forward $typename $localport:$remoteport &gt; /dev/null 2&gt;&amp;1 &amp; pid=$! # echo pid: $pid # kill the port-forward regardless of how this script exits trap '{ # echo killing $pid kill $pid }' EXIT # wait for $localport to become available while ! nc -vz localhost $localport &gt; /dev/null 2&gt;&amp;1 ; do # echo sleeping sleep 0.1 done # This would show that the port is open # nmap -sT -p $localport localhost # Actually use that port for something useful - here making a backup of the # keycloak database PGPASSWORD=keycloak pg_dump --host=localhost --port=54320 --username=keycloak -Fc --file keycloak.dump keycloak # the 'trap ... EXIT' above will take care of kill $pid </code></pre>
Peter V. Mørch
<p>When we use kubeadm to set up a k8s cluster, there are two options to config:</p> <p><code>--pod-network-cidr</code></p> <p><code>--service-cidr</code> (default ‘10.96.0.0/12’)</p> <p>Question is:</p> <ol> <li><p>If I use <code>10.244.0.0./12</code> for <code>pod-network-cidr</code>, do I need to save that IP range for Kubernetes? What happens if we already start to use <code>10.244.0.0/12</code> for other machines.</p></li> <li><p>Can I set <code>service-cidr</code> and the <code>pod-network-cidr</code> the same range? I don't understand how <code>service-cidr</code> works.</p></li> </ol>
xren
<p>To reply briefly:</p> <ol> <li>You do have to reserve <strong>both</strong> the pod-network range and the service network range. You can't use those on your LAN (and you can't have routes to them). Both ranges are configurable so you can pick something that is not used. Use ipcalc if you are unsure.</li> <li>You have to use separate ranges.</li> </ol> <p>Check out <a href="https://www.slideshare.net/CJCullen/kubernetes-networking-55835829" rel="noreferrer">these slides</a> for an explanation of the different networks in play.</p>
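<p>In kubeadm terms this just means picking two non-overlapping ranges that are also unused on your LAN, for example (a sketch; 10.244.0.0/16 is what Flannel's default manifest expects, and the service CIDR shown is the kubeadm default):</p> <pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
</code></pre>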
Janos Lenart
<p>I'm using EKS (Kubernetes) in AWS and I have problems with posting a payload at around 400 Kilobytes to any web server that runs in a container in that Kubernetes. I hit some kind of limit but it's not a limit in size, it seems at around 400 Kilobytes many times works but sometimes I get (testing with Python requests)</p> <pre><code>requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer')) </code></pre> <p>I test this with different containers (python web server on Alpine, Tomcat server on CentOS, nginx, etc).</p> <p>The more I increase the size over 400 Kilobytes, the more consistent I get: Connection reset by peer.</p> <p>Any ideas?</p>
StefanH
<p>Thanks for your answers and comments, helped me get closer to the source of the problem. I did upgrade the AWS cluster from 1.11 to 1.12 and that cleared this error when accessing from service to service within Kubernetes. However, the error still persisted when accessing from outside the Kubernetes cluster using a public dns, thus the load balancer. So after testing some more I found out that now the problem lies in the ALB or the ALB controller for Kubernetes: <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/" rel="noreferrer">https://kubernetes-sigs.github.io/aws-alb-ingress-controller/</a> So I switched back to a Kubernetes service that generates an older-generation ELB and the problem was fixed. The ELB is not ideal, but it's a good work-around for the moment, until the ALB controller gets fixed or I have the right button to press to fix it.</p>
StefanH
<p><strong>What I have</strong></p> <p>I have used Kube secrets for private Docker registry authentication in the <code>default</code> namespace. That works as expected. For example:</p> <pre><code>$ kubectl get secret regsecret NAME TYPE DATA AGE regsecret kubernetes.io/dockerconfigjson 1 30m </code></pre> <p>Which is referenced in my <code>deployment.yml</code> as shown in the snippet below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx spec: replicas: 1 template: ... spec: containers: - name: bootstrap-nginx image: quay.io/example/nginx:latest ... imagePullSecrets: - name: regsecret </code></pre> <p><strong>Here's my question</strong></p> <p>I need to create the <code>regsecret</code> above in a <code>namepsace</code>, for example, <code>myns</code> as shown below:</p> <pre><code>$ kubectl get secret regsecret --namespace=myns NAME TYPE DATA AGE regsecret kubernetes.io/dockerconfigjson 1 30m </code></pre> <p>With this, how do I reference <code>regsecret</code> from <code>myns</code> namespace into my deployment spec? If I use <code>imagePullSecrets</code> as shown above, it fails saying that Kubernetes could not pull the image (the secret <code>regsecret</code> could not be found). Is there a way to reference "fully qualified" secret name in <code>imagePullSecrets</code>?</p>
Kartik Pandya
<p>By design, there is no way to accomplish this. You will need to create the <code>regsecret</code> in the same namespace where your Deployment is.</p> <blockquote> <p><code>ImagePullSecrets</code> is an optional list of references to secrets <strong>in the same namespace</strong> to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored.</p> </blockquote> <p>See also: <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod</a></p>
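<p>So the secret has to be (re)created in every namespace whose Pods reference it. For a registry secret that is a one-liner (a sketch; the server, user, password and email are placeholders):</p> <pre><code>kubectl create secret docker-registry regsecret \
  --namespace=default \
  --docker-server=quay.io \
  --docker-username=&lt;user&gt; \
  --docker-password=&lt;password&gt; \
  --docker-email=&lt;email&gt;
</code></pre>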
Janos Lenart
<p>I have a task. I need to write python code to generate a yaml file for kubernetes. So far I have been using pyyaml and it works fine. Here is my generated yaml file:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: info: name: hostname.com aio-max-nr: 262144 cpu: cpuLogicalCores: 4 memory: memTotal: 33567170560 net.core.somaxconn: 1024 ... </code></pre> <p>However, when I try to create this configMap the error is that info expects a string() but not a map. So I explored a bit and it seem the easiest way to resolve this is to add a pipe after info like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: info: | # this will translate everything in data into a string but still keep the format in yaml file for readability name: hostname.com aio-max-nr: 262144 cpu: cpuLogicalCores: 4 memory: memTotal: 33567170560 net.core.somaxconn: 1024 ... </code></pre> <p>This way, my configmap is created successfully. My struggling is I dont know how to add that pipe bar from python code. Here I manually added it, but I want to automate this whole process.</p> <p>part of the python code I wrote is, pretend data is a dict():</p> <pre><code>content = dict() content[&quot;apiVersion&quot;] = &quot;v1&quot; content[&quot;kind&quot;] = &quot;ConfigMap&quot; data = {...} info = {&quot;info&quot;: data} content[&quot;data&quot;] = info # Get all contents ready. Now write into a yaml file fileName = &quot;out.yaml&quot; with open(fileName, 'w') as outfile: yaml.dump(content, outfile, default_flow_style=False) </code></pre> <p>I searched online and found a lot of cases, but none of them fits my needs. Thanks in advance.</p>
Livid Font
<p>The pipe makes the contained values a string. That string is not processed by YAML, even if it contains data with YAML syntax. Consequently, you will need to give a string as value.</p> <p>Since the string contains data in YAML syntax, you can create the string by processing the contained data with YAML in a previous step. To make PyYAML dump the scalar in literal block style (i.e. with <code>|</code>), you need a custom representer:</p> <pre class="lang-py prettyprint-override"><code>import yaml, sys from yaml.resolver import BaseResolver class AsLiteral(str): pass def represent_literal(dumper, data): return dumper.represent_scalar(BaseResolver.DEFAULT_SCALAR_TAG, data, style=&quot;|&quot;) yaml.add_representer(AsLiteral, represent_literal) info = { &quot;name&quot;: &quot;hostname.com&quot;, &quot;aio-max-nr&quot;: 262144, &quot;cpu&quot;: { &quot;cpuLogicalCores&quot;: 4 } } info_str = AsLiteral(yaml.dump(info)) data = { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;ConfigMap&quot;, &quot;data&quot;: { &quot;info&quot;: info_str } } yaml.dump(data, sys.stdout) </code></pre> <p>By putting the rendered YAML data into the type <code>AsLiteral</code>, the registered custom representer will be called which will set the desired style to <code>|</code>.</p>
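<p>Running this should print something close to the following (key order may differ depending on your PyYAML version and its default <code>sort_keys</code> behaviour):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
  info: |
    aio-max-nr: 262144
    cpu:
      cpuLogicalCores: 4
    name: hostname.com
kind: ConfigMap
</code></pre>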
flyx
<p>kube-controller-manager has the following property</p> <pre><code>-deployment-controller-sync-period duration Default: 30s Period for syncing the deployments. </code></pre> <p>What does this actually control and what does <code>period for syncing the deployments</code> mean?</p>
Mark
<p>Haha most curious thing. You'd expect it does something like controlling how often the controller checks whether the status of Deployment objects are compatible with spec or if there is a change needed.</p> <p>However currently the controller-manager is notified on changes by the apiserver so it always inherently knows this information already.</p> <p>There is <a href="https://github.com/kubernetes/kubernetes/issues/71510" rel="nofollow noreferrer">Issue #71510</a> where someone points out that parameter seems to be unused. I've done my own <a href="https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&amp;q=deployment-controller-sync-period&amp;type=" rel="nofollow noreferrer">search for the parameter</a> and a related <a href="https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&amp;q=DeploymentControllerSyncPeriod&amp;type=" rel="nofollow noreferrer">search for the variable</a>. As far as I can tell all of these uses are for copying this value around, conversions, declarations, etc, and none of them actually use it for anything at all.</p> <p>A good test would be setting it to a year and see what happens. I haven't done that though.</p>
Janos Lenart
<p>I want to reserve static IP address for my k8s exposed service. If I am not mistaken when I expose the k8s service it gets the random public IP address. I redeploy my app often and the IP changes. But I want to get permanent public IP address. My task is to get my application via permanent IP address (or DNS-name).</p>
malcolm
<p>This is cloud provider specific, but from the tag on your question it appears you are using Google Cloud Platform's Kubernetes Engine (GKE). My answer is specific for this situation.</p> <p>From the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address" rel="nofollow noreferrer">Setting up HTTP Load Balancing with Ingress</a> tutorial:</p> <blockquote> <pre><code>gcloud compute addresses create web-static-ip --global </code></pre> </blockquote> <p>And in your Ingress manifest:</p> <blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "web-static-ip" spec: backend: serviceName: web servicePort: 8080 </code></pre> </blockquote> <p>You can do something similar if you are using a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">Service instead of Ingress</a>, but note that <code>loadBalancerIP</code> takes the literal reserved IP address (which for a Service must be a <em>regional</em> address), not the address name:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: helloweb labels: app: hello spec: type: LoadBalancer loadBalancerIP: "203.0.113.27" selector: app: hello tier: web ports: - port: 80 targetPort: 8080 </code></pre>
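<p>One detail worth spelling out for the Service variant: reserve the address as a <em>regional</em> address in the cluster's region, then paste its literal value into <code>loadBalancerIP</code>. A sketch, with the address name and region as placeholders:</p> <pre><code>gcloud compute addresses create helloweb-ip --region us-central1
gcloud compute addresses describe helloweb-ip --region us-central1 --format='value(address)'
</code></pre>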
Janos Lenart
<p>My kubernetes version is 1.10.4.</p> <p>I am trying to create a ConfigMap for java keystore files:</p> <pre><code>kubectl create configmap key-config --from-file=server-keystore=/home/ubuntu/ssl/server.keystore.jks --from-file=server-truststore=/home/ubuntu/ssl/server.truststore.jks --from-file=client--truststore=/home/ubuntu/ssl/client.truststore.jks --append-hash=false </code></pre> <p>It says <code>configmap "key-config" created</code>.</p> <p>But when I describe the configmap I am getting null value:</p> <pre><code>$ kubectl describe configmaps key-config Name: key-config Namespace: prod-es Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== Events: &lt;none&gt; </code></pre> <p>I know my version kubernetes support binary data as configmaps or secrets but I am not sure what is wrong with my approach.</p> <p>Any input on this is highly appreciated.</p>
user1068861
<p><code>kubectl describe</code> does not show binary data in ConfigMaps at the moment (kubectl version v1.10.4); also the <code>DATA</code> column of the <code>kubectl get configmap</code> output does not include the binary elements:</p> <pre><code>$ kubectl get cm NAME DATA AGE key-config 0 1m </code></pre> <p>But the data is there, it's just a poor UI experience at the moment. You can verify that with:</p> <pre><code>kubectl get cm key-config -o json </code></pre> <p>Or you can use this friendly command to check that the ConfigMap can be mounted and the projected contents matches your original files:</p> <p><code>kubectl run cm-test --image=busybox --rm --attach --restart=Never --overrides='{"spec":{"volumes":[{"name":"cm", "configMap":{"name":"key-config"}}], "containers":[{"name":"cm-test", "image":"busybox", "command":["sh","-c","md5sum /cm/*"], "volumeMounts":[{"name":"cm", "mountPath":"/cm"}]}]}}'</code></p>
Janos Lenart
<p>I want to debug the pod in a simple way, therefore I want to start the pod without deployment.</p> <p>But it will automatically create a deployment</p> <pre><code>$ kubectl run nginx --image=nginx --port=80 deployment &quot;nginx&quot; created </code></pre> <p>So I have to create the <code>nginx.yaml</code> file</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>And create the pod like below, then it creates pod only</p> <pre><code>kubectl create -f nginx.yaml pod &quot;nginx&quot; created </code></pre> <p>How can I specify in the command line the <code>kind:Pod</code> to avoid <code>deployment</code>?</p> <p>// I run under minikue 0.20.0 and kubernetes 1.7.0 under Windows 7</p>
Larry Cai
<pre><code>kubectl run nginx --image=nginx --port=80 --restart=Never </code></pre> <blockquote> <p><code>--restart=Always</code>: The restart policy for this Pod. Legal values [<code>Always</code>, <code>OnFailure</code>, <code>Never</code>]. If set to <code>Always</code> a deployment is created, if set to <code>OnFailure</code> a job is created, if set to <code>Never</code>, a regular pod is created. For the latter two <code>--replicas</code> must be <code>1</code>. Default <code>Always</code> [...]</p> </blockquote> <p>see official document <a href="https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators" rel="noreferrer">https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators</a></p>
Janos Lenart
<p>one of my keys in a Kubernetes deployment is really big and I want to break it into a multi line key:</p> <pre class="lang-yaml prettyprint-override"><code>annotations: container.apparmor.security.beta.kubernetes.io/9c2591b6-bd95-442a-9d35-fb600143a873: runtime/default </code></pre> <p>I tried this:</p> <pre class="lang-yaml prettyprint-override"><code>annotations: ? &gt;- container.apparmor.security.beta.kubernetes.io/ 9c2591b6-bd95-442a-9d35-fb600143a873 : runtime/default </code></pre> <p>but it renders like that:</p> <pre class="lang-yaml prettyprint-override"><code>annotations: container.apparmor.security.beta.kubernetes.io/ 9c2591b6-bd95-442a-9d35-fb600143a873: runtime/default </code></pre> <p>Any idea who to break object key into a multi line string without any spaces?<br /> I found a lot of solutions for multi line strings of the keys value but nothing regarding the key itself.</p> <p>Thanks in advance</p>
cmdjulian
<p>Use double quotes and escape the newlines:</p> <pre class="lang-yaml prettyprint-override"><code>annotations: ? &quot;container.apparmor.security.beta.kubernetes.io/\ 9c2591b6-bd95-442a-9d35-fb600143a873&quot; : runtime/default </code></pre> <p>Double quotes are the only YAML scalar that can be broken anywhere with an escaped newline.</p>
flyx
<p>So I've got a Kubernetes cluster up and running using the <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">Kubernetes on CoreOS Manual Installation Guide</a>.</p> <pre><code>$ kubectl get no NAME STATUS AGE coreos-master-1 Ready,SchedulingDisabled 1h coreos-worker-1 Ready 54m $ kubectl get cs NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok etcd-0 Healthy {&quot;health&quot;: &quot;true&quot;} etcd-2 Healthy {&quot;health&quot;: &quot;true&quot;} etcd-1 Healthy {&quot;health&quot;: &quot;true&quot;} $ kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1 kube-system busybox 1/1 Running 0 55m 10.2.26.3 coreos-worker-1 kube-system kube-apiserver-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1 kube-system kube-controller-manager-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1 kube-system kube-proxy-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1 kube-system kube-proxy-coreos-worker-1 1/1 Running 0 58m 192.168.0.204 coreos-worker-1 kube-system kube-scheduler-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1 $ kubectl get svc --all-namespaces NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes 10.3.0.1 &lt;none&gt; 443/TCP 1h </code></pre> <p>As with the guide, I've setup a service network <code>10.3.0.0/16</code> and a pod network <code>10.2.0.0/16</code>. Pod network seems fine as busybox and curl containers get IPs. But the services network has problems. Originally, I've encountered this when deploying <code>kube-dns</code>: the service IP <code>10.3.0.1</code> couldn't be reached, so kube-dns couldn't start all containers and DNS was ultimately not working.</p> <p>From within the curl pod, I can reproduce the issue:</p> <pre><code>[ root@curl-2421989462-h0dr7:/ ]$ curl https://10.3.0.1 curl: (7) Failed to connect to 10.3.0.1 port 443: No route to host [ root@curl-2421989462-h0dr7:/ ]$ ip route default via 10.2.26.1 dev eth0 10.2.0.0/16 via 10.2.26.1 dev eth0 10.2.26.0/24 dev eth0 src 10.2.26.4 </code></pre> <p>It seems ok that there's only a default route in the container. As I understood it, the request (to default route) should be intercepted by the <code>kube-proxy</code> on the worker node, forwarded to the the proxy on the master node where the IP is translated via iptables to the masters public IP.</p> <p>There seems to be a common problem with a bridge/netfilter sysctl setting, but that seems fine in my setup:</p> <pre><code>core@coreos-worker-1 ~ $ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-iptables = 1 </code></pre> <p>I'm having a real hard time to troubleshoot, as I lack the understanding of what the service IP is used for, how the service network is supposed to work in terms of traffic flow and how to best debug this.</p> <p>So here're the questions I have:</p> <ul> <li>What is the 1st IP of the service network (10.3.0.1 in this case) used for?</li> <li>Is above description of the traffic flow correct? If not, what steps does it take for a container to reach a service IP?</li> <li>What are the best ways to debug each step in the traffic flow? (I can't get any idea what's wrong from the logs)</li> </ul> <p>Thanks!</p>
grasbueschel
<p>The Service network provides fixed IPs for Services. It is not a routable network (so don't expect <code>ip ro</code> to show anything, nor will ping work) but a collection of iptables rules managed by kube-proxy on each node (see <code>iptables -L; iptables -t nat -L</code> on the nodes, not Pods). These <a href="https://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies" rel="noreferrer">virtual IPs</a> (see the pics!) act as a load-balancing proxy for endpoints (<code>kubectl get ep</code>), which are usually ports of Pods (but not always) with a specific set of labels as defined in the Service.</p> <p>The first IP on the Service network is for reaching the kube-apiserver itself. It's listening on port 443 (<code>kubectl describe svc kubernetes</code>).</p> <p>Troubleshooting is different on each network/cluster setup. I would generally check:</p> <ul> <li>Is kube-proxy running on each node? On some setups it's run via systemd and on others there is a DaemonSet that schedules a Pod on each node. On your setup it is deployed as static Pods created by the kubelets themselves from <code>/etc/kubernetes/manifests/kube-proxy.yaml</code></li> <li>Locate logs for kube-proxy and find clues (can you post some?)</li> <li>Change kube-proxy into <code>userspace</code> mode. Again, the details depend on your setup. For you it's in the file I mentioned above. Append <code>--proxy-mode=userspace</code> as a parameter <strong>on each node</strong></li> <li>Is the overlay (pod) network functional?</li> </ul> <p>If you leave comments I will get back to you.</p>
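<p>To make the iptables part concrete, here is a small sketch you can run on a node (assuming kube-proxy is in iptables mode; the per-service chain name in the second command is a placeholder you would read off the output of the first one):</p> <pre><code># Entry chain created by kube-proxy; grep for the Service's cluster IP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.3.0.1

# Follow the generated per-service chain (KUBE-SVC-...) listed above
# to see the DNAT targets, i.e. the endpoints traffic is balanced across
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n
</code></pre>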
Janos Lenart
<p>Team, my yaml syntax is correct as I validated it online. However, I am not able to run it and every time it throws a different error.</p> <pre><code>└─ $ ▶ kubectl create -f ~/waste/wf.yaml Error: failed to parse yaml file: error unmarshaling JSON: while decoding JSON: unknown field "\u00a0\u00a0\u00a0\u00a0completions" in workflow.WorkflowDefinition └─ $ ▶ kubectl create -f ~/waste/wf.yaml Error: failed to parse yaml file: error unmarshaling JSON: while decoding JSON: unknown field "\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0mountPath" in workflow.WorkflowDefinition └─ $ ▶ kubectl create -f ~/waste/wf.yaml Error: failed to parse yaml file: error unmarshaling JSON: while decoding JSON: unknown field "\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name" in workflow.WorkflowDefinition </code></pre> <p>Any hint what this indicates?</p>
AhmFM
<p>0xA0 (decimal 160) is a non-breaking space: it looks like whitespace but is not an actual space character (0x20, decimal 32). You have probably copy-pasted that from a web page. Fix your yaml to use regular spaces instead.</p> <p>"Space" in your file (a non-breaking space): " "</p> <p>Actual space: " "</p>
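<p>If it helps, a quick way to find and strip the offending characters (a sketch assuming GNU grep/sed and a UTF-8 encoded file):</p> <pre><code># Show lines containing a UTF-8 non-breaking space (bytes 0xC2 0xA0)
grep -nP '\xc2\xa0' ~/waste/wf.yaml

# Replace every non-breaking space with a regular space (keeps a .bak backup)
sed -i.bak 's/\xc2\xa0/ /g' ~/waste/wf.yaml
</code></pre>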
Janos Lenart
<p>I am trying to pass the given part of values.yaml into a helm template:</p> <pre><code> receivers: test1: test2: test3: test4: </code></pre> <p>using the function:</p> <p><code>{{ .Values.receivers | toYaml | nindent 2}}</code></p> <p>The code is rendered in the correct format; however, the empty fields get filled with 'null':</p> <pre><code>receivers: test1: test2: test3: null test4: null </code></pre> <p>Is there any way to prevent this?</p> <p>I am expecting correct templating without inserted null fields.</p>
nlesniak
<p>There are no fields inserted. The processor only replaces values that already exist with a different serialization that has the same semantics.</p> <p><code>test3:</code> in YAML without a value is parsed as having an empty scalar value. The <a href="https://yaml.org/spec/1.2.2/#103-core-schema" rel="nofollow noreferrer">YAML Core Schema</a> defines the following for empty values:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Regular expression</th> <th>Resolved to tag</th> </tr> </thead> <tbody> <tr> <td><code>null | Null | NULL | ~</code></td> <td>tag:yaml.org,2002:null</td> </tr> <tr> <td><em><code>/* Empty */</code></em></td> <td>tag:yaml.org,2002:null</td> </tr> </tbody> </table> </div> <p>Since the empty value is resolved to have the tag <code>!!null</code> (which is a shorthand for the full form shown above), it is loaded as <strong><code>nil</code></strong> into Go.</p> <p>When <code>toYaml</code> receives your data, it doesn't know that the <strong><code>nil</code></strong> values originated from empty scalars. It needs to choose one of the possible serializations and chooses <code>null</code>. This adheres to the YAML spec and is therefore correct behavior.</p> <p>Any downstream processor that supports the Core Schema should process <code>test3: null</code> in the same way it processes <code>test3:</code> without value. Therefore there should be no problem.</p> <p>If you want <code>test3:</code> to specifically have the <em>empty string</em> as value instead of <code>null</code>, write</p> <pre class="lang-yaml prettyprint-override"><code>test3: &quot;&quot; </code></pre> <p>If you want it to contain an empty mapping, write</p> <pre class="lang-yaml prettyprint-override"><code>test3: {} </code></pre>
flyx
<p>Is there a way to get a trigger before shutdown, so we can close all connections gracefully, stop processing any new work after that point, and leave the pod ready to be killed?</p> <p>This includes flushing logs, saving any application state before handing over to the new pod, and many more use cases.</p>
Kannaiyan
<p>You have 2 options:</p> <ol> <li><p>Containers (PID 1) receive SIGTERM before the container (and the pod) is removed. You can trap SIGTERM and act on it (see the sketch at the end of this answer).</p></li> <li><p>You can use the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details" rel="nofollow noreferrer">preStop lifecycle hook</a></p></li> </ol> <p>Important implementation details can be found here: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a></p> <h2>httpGet example</h2> <pre><code>apiVersion: v1 kind: Pod metadata: name: prestop-pod spec: terminationGracePeriodSeconds: 5 containers: - name: nginx image: nginx lifecycle: preStop: httpGet: # only port is required port: 80 path: "?preStop" # scheme: HTTP # host: ... # httpHeaders: # name: ... # value: ... </code></pre> <p>After <code>kubectl apply -f</code> on this file, run <code>kubectl logs -f prestop-pod</code> while executing <code>kubectl delete pod prestop-pod</code> on another terminal. You should see something like:</p> <pre><code>$ kubectl apply -f prestop.yaml pod/prestop-pod created $ kubectl logs -f prestop-pod 10.244.0.1 - - [21/Mar/2019:09:15:20 +0000] "GET /?preStop HTTP/1.1" 200 612 "-" "Go-http-client/1.1" "-" </code></pre>
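<p>For option 1, a minimal sketch of trapping SIGTERM in a shell entrypoint (your real entrypoint and cleanup logic will differ):</p> <pre><code>#!/bin/sh
# Hypothetical entrypoint: clean up on SIGTERM, then exit
cleanup() {
  echo "SIGTERM received, closing connections and flushing logs..."
  # ... your graceful shutdown logic here ...
  exit 0
}
trap cleanup TERM

# Main loop; backgrounding + wait lets the trap fire promptly
while true; do
  sleep 1 &amp;
  wait $!
done
</code></pre> <p>Whatever you do in the trap (or in preStop) has to finish within <code>terminationGracePeriodSeconds</code>, after which the container is killed with SIGKILL.</p>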
Janos Lenart
<p>When I run exec command</p> <pre><code> kubectl exec kubia-zgxn9 -- curl -s http://10.47.252.17 Error from server (BadRequest): pod kubia-zgxn9 does not have a host assigned </code></pre> <p>Describe pod shows host</p> <pre><code>IP: Controlled By: ReplicationController/kubia Containers: kubia: Image: luksa/kubia Port: 8080/TCP Host Port: 0/TCP Requests: cpu: 100m Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-xs7qx (ro) </code></pre> <p>This is my service</p> <pre><code>Name: kubia Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=kubia Type: ClusterIP IP: 10.47.252.17 Port: &lt;unset&gt; 80/TCP TargetPort: 8080/TCP Endpoints: &lt;none&gt; Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Why did I get error from server?</p>
Richard Rublev
<p>The Pod is probably not yet scheduled to a Node.</p> <p>Maybe it just took a little longer than expected or perhaps it's asking for resources that no node can satisfy at the moment.</p> <p>Check the output of <code>kubectl get pod kubia-zgxn9</code> and see if the state is <code>Running</code>. If so, retry now. If it still fails to exec this might be a bug.</p> <p>If it's not running, check the describe output for notices. (Unfortunately you cut the output short in your question so we can't see what's wrong with it).</p>
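<p>A few read-only commands that usually narrow this down:</p> <pre><code># Is the pod Running, and which node is it on?
kubectl get pod kubia-zgxn9 -o wide

# Scheduling problems (e.g. insufficient CPU) show up in the Events section
kubectl describe pod kubia-zgxn9

# Check node capacity and conditions
kubectl describe nodes
</code></pre>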
Janos Lenart
<p>In the first yaml below, the second <code>podSelector</code> clause (under <code>to</code>) seems correctly formatted, with two spaces indent for <code>matchLabels</code>, consistent with standards and the rest of the yaml.</p> <p>The second yaml is identical, but <code>matchLabels</code> has four spaces. This format follows <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">the Kubernetes documentation.</a> (There are no tabs.)</p> <p>Yet the first yaml <strong>fails</strong> <code>kubectl</code> validation with <em>error validating "p.yaml": error validating data: ValidationError(NetworkPolicy.spec.egress[0].to[0]): unknown field "matchLabels" in io.k8s.api.networking.v1.NetworkPolicyPeer</em>, and the second <strong>passes</strong> validation. </p> <p>This does not pass validation:</p> <pre><code> apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: internal-policy spec: podSelector: matchLabels: name: internal policyTypes: - Egress egress: - to: - podSelector: matchLabels: name: mysql </code></pre> <p>This passes validation:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: internal-policy spec: podSelector: matchLabels: name: internal policyTypes: - Egress egress: - to: - podSelector: matchLabels: name: mysql </code></pre>
Joshua Fox
<p>Well apparently <code>matchLabels</code> should be a key in the mapping value of <code>podSelector</code>, hence it must be more indented. This:</p> <pre><code>- podSelector: matchLabels: </code></pre> <p>Places <code>matchLabels</code> on the same indentation level as <code>podSelector</code>, since the initial <code>-</code> is treated as part of the indentation as per YAML spec. Basically, there are two indentation levels defined here:</p> <ul> <li>The level of the sequence, starting with <code>-</code>. All subsequent sequence items must have their <code>-</code> at the same level.</li> <li>The level of the mapping which is a value of the sequence, starting with <code>p</code>. All subsequent keys of the mapping must start at the same level.</li> </ul> <p>Therefore, if you want <code>matchLabels</code> to be nested in <code>podSelector</code>, you must indent it more:</p> <pre><code>- podSelector: matchLabels: </code></pre>
flyx
<p>Using Kubernetes, exactly the <code>kubectl apply -f ./auth.yaml</code> statement, i'm trying to run a Authorization Server in a pod, but when I check out the logs, this show me the following error:</p> <pre><code> . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.6.13) 2022-12-07 01:33:30.099 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : Starting MsvcAuthApplication v1.0-SNAPSHOT using Java 18.0.2.1 on msvc-auth-7d696f776d-hpk99 with PID 1 (/app/msvc-auth-1.0-SNAPSHOT.jar started by root in /app) 2022-12-07 01:33:30.203 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : The following 1 profile is active: &quot;kubernetes&quot; 2022-12-07 01:33:48.711 INFO 1 --- [ main] o.s.c.k.client.KubernetesClientUtils : Created API client in the cluster. 2022-12-07 01:33:48.913 INFO 1 --- [ main] o.s.c.a.ConfigurationClassPostProcessor : Cannot enhance @Configuration bean definition 'org.springframework.cloud.kubernetes.client.KubernetesClientAutoConfiguration' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'. 2022-12-07 01:33:49.794 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=9e09a67e-4528-373e-99ad-3031c15d14ab 2022-12-07 01:33:50.922 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.113 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration' of type [org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.184 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.187 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'loadBalancerClientsDefaultsMappingsProvider' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration$$Lambda$420/0x0000000800f30898] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.205 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'defaultsBindHandlerAdvisor' of type [org.springframework.cloud.commons.config.DefaultsBindHandlerAdvisor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.311 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 
'kubernetes.manifests-io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.412 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.419 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.489 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'reactorDeferringLoadBalancerExchangeFilterFunction' of type [org.springframework.cloud.client.loadbalancer.reactive.DeferringLoadBalancerExchangeFilterFunction] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:58.301 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9000 (http) 2022-12-07 01:33:58.393 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-12-07 01:33:58.393 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.68] 2022-12-07 01:33:58.795 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-12-07 01:33:58.796 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 26917 ms 2022-12-07 01:34:01.099 WARN 1 --- [ main] o.s.security.core.userdetails.User : User.withDefaultPasswordEncoder() is considered unsafe for production and is only intended for sample applications. 2022-12-07 01:34:02.385 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest 2022-12-07 01:34:02.413 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat] 2022-12-07 01:34:02.677 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener : Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled. 
2022-12-07 01:34:02.991 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:745) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:420) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1317) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.6.13.jar!/:2.6.13] at 
org.villamzr.springcloud.msvc.auth.MsvcAuthApplication.main(MsvcAuthApplication.java:12) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 25 common frames omitted Caused by: java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer.getEndpointsMatcher(OAuth2AuthorizationServerConfigurer.java:235) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(OAuth2AuthorizationServerConfiguration.java:63) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.villamzr.springcloud.msvc.auth.SecurityConfig.authorizationServerSecurityFilterChain(SecurityConfig.java:51) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.CGLIB$authorizationServerSecurityFilterChain$1(&lt;generated&gt;) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf$$FastClassBySpringCGLIB$$a983a242.invoke(&lt;generated&gt;) ~[classes!/:1.0-SNAPSHOT] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.3.23.jar!/:5.3.23] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.3.23.jar!/:5.3.23] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.authorizationServerSecurityFilterChain(&lt;generated&gt;) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 
26 common frames omitted Caused by: java.lang.ClassNotFoundException: jakarta.servlet.http.HttpServletRequest at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588) ~[na:na] at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ~[na:na] ... 37 common frames omitted </code></pre> <p>This is the auth.yaml configuration.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: msvc-auth spec: replicas: 1 selector: matchLabels: app: msvc-auth template: metadata: labels: app: msvc-auth spec: containers: - image: villamzr/auth:latest name: msvc-auth ports: - containerPort: 9000 env: - name: LB_USUARIOS_URI valueFrom: configMapKeyRef: name: msvc-usuarios key: lb_usuarios_uri --- apiVersion: v1 kind: Service metadata: name: msvc-auth spec: type: LoadBalancer ports: - port: 9000 protocol: TCP targetPort: 9000 selector: app: msvc-auth </code></pre> <p>this one is the pom.xml of the microservice</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&gt; &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt; &lt;parent&gt; &lt;groupId&gt;org.villamzr.springcloud.msvc&lt;/groupId&gt; &lt;artifactId&gt;curso-kubernetes&lt;/artifactId&gt; &lt;version&gt;1.0-SNAPSHOT&lt;/version&gt; &lt;/parent&gt; &lt;groupId&gt;org.villamzr.springcloud.msvc.auth&lt;/groupId&gt; &lt;artifactId&gt;msvc-auth&lt;/artifactId&gt; &lt;name&gt;msvc-auth&lt;/name&gt; &lt;description&gt;Demo project for Spring Boot&lt;/description&gt; &lt;properties&gt; &lt;java.version&gt;18&lt;/java.version&gt; &lt;spring-cloud.version&gt;2021.0.5&lt;/spring-cloud.version&gt; &lt;/properties&gt; &lt;dependencies&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-security&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.security&lt;/groupId&gt; &lt;artifactId&gt;spring-security-oauth2-client&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.security&lt;/groupId&gt; &lt;artifactId&gt;spring-security-oauth2-authorization-server&lt;/artifactId&gt; &lt;version&gt;1.0.0&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-webflux&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-starter-kubernetes-client&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-starter-kubernetes-client-loadbalancer&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-test&lt;/artifactId&gt; &lt;scope&gt;test&lt;/scope&gt; &lt;/dependency&gt; &lt;dependency&gt; 
&lt;groupId&gt;io.projectreactor&lt;/groupId&gt; &lt;artifactId&gt;reactor-test&lt;/artifactId&gt; &lt;scope&gt;test&lt;/scope&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.security&lt;/groupId&gt; &lt;artifactId&gt;spring-security-test&lt;/artifactId&gt; &lt;scope&gt;test&lt;/scope&gt; &lt;/dependency&gt; &lt;/dependencies&gt; &lt;dependencyManagement&gt; &lt;dependencies&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-dependencies&lt;/artifactId&gt; &lt;version&gt;${spring-cloud.version}&lt;/version&gt; &lt;type&gt;pom&lt;/type&gt; &lt;scope&gt;import&lt;/scope&gt; &lt;/dependency&gt; &lt;/dependencies&gt; &lt;/dependencyManagement&gt; &lt;build&gt; &lt;plugins&gt; &lt;plugin&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-maven-plugin&lt;/artifactId&gt; &lt;/plugin&gt; &lt;/plugins&gt; &lt;/build&gt; &lt;/project&gt; </code></pre> <p>and this one is the Securityconfig</p> <pre><code>package org.villamzr.springcloud.msvc.auth; import com.nimbusds.jose.jwk.JWKSet; import com.nimbusds.jose.jwk.RSAKey; import com.nimbusds.jose.jwk.source.ImmutableJWKSet; import com.nimbusds.jose.jwk.source.JWKSource; import com.nimbusds.jose.proc.SecurityContext; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.annotation.Order; import org.springframework.core.env.Environment; import org.springframework.security.config.Customizer; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configurers.oauth2.server.resource.OAuth2ResourceServerConfigurer; import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity; import org.springframework.security.core.userdetails.User; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.oauth2.core.AuthorizationGrantType; import org.springframework.security.oauth2.core.ClientAuthenticationMethod; import org.springframework.security.oauth2.core.oidc.OidcScopes; import org.springframework.security.oauth2.jwt.JwtDecoder; import org.springframework.security.oauth2.server.authorization.client.InMemoryRegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.client.RegisteredClient; import org.springframework.security.oauth2.server.authorization.client.RegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer; import org.springframework.security.oauth2.server.authorization.settings.AuthorizationServerSettings; import org.springframework.security.oauth2.server.authorization.settings.ClientSettings; import org.springframework.security.provisioning.InMemoryUserDetailsManager; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint; import java.security.KeyPair; import java.security.KeyPairGenerator; import java.security.interfaces.RSAPrivateKey; import java.security.interfaces.RSAPublicKey; import 
java.util.UUID; @Configuration public class SecurityConfig { @Autowired private Environment env; @Bean @Order(1) public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http) throws Exception { OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http); http.getConfigurer(OAuth2AuthorizationServerConfigurer.class) .oidc(Customizer.withDefaults()); // Enable OpenID Connect 1.0 http // Redirect to the login page when not authenticated from the // authorization endpoint .exceptionHandling((exceptions) -&gt; exceptions .authenticationEntryPoint( new LoginUrlAuthenticationEntryPoint(&quot;/login&quot;)) ) // Accept access tokens for User Info and/or Client Registration .oauth2ResourceServer(OAuth2ResourceServerConfigurer::jwt); return http.build(); } @Bean @Order(2) public SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception { http .authorizeHttpRequests((authorize) -&gt; authorize .anyRequest().authenticated() ) // Form login handles the redirect to the login page from the // authorization server filter chain .formLogin(Customizer.withDefaults()); return http.build(); } @Bean public UserDetailsService userDetailsService() { UserDetails userDetails = User.withDefaultPasswordEncoder() .username(&quot;admin&quot;) .password(&quot;12345&quot;) .roles(&quot;USER&quot;) .build(); return new InMemoryUserDetailsManager(userDetails); } @Bean public RegisteredClientRepository registeredClientRepository() { RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString()) .clientId(&quot;usuarios-client&quot;) .clientSecret(&quot;{noop}12345&quot;) .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC) .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE) .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN) .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS) .redirectUri(env.getProperty(&quot;LB_USUARIOS_URI&quot;)+&quot;/login/oauth2/code/msvc-usuarios-client&quot;) .redirectUri(env.getProperty(&quot;LB_USUARIOS_URI&quot;)+&quot;/authorized&quot;) .scope(OidcScopes.OPENID) .scope(OidcScopes.PROFILE) .scope(&quot;read&quot;) .scope(&quot;write&quot;) .clientSettings(ClientSettings.builder().requireAuthorizationConsent(true).build()) .build(); return new InMemoryRegisteredClientRepository(registeredClient); } @Bean public JWKSource&lt;SecurityContext&gt; jwkSource() { KeyPair keyPair = generateRsaKey(); RSAPublicKey publicKey = (RSAPublicKey) keyPair.getPublic(); RSAPrivateKey privateKey = (RSAPrivateKey) keyPair.getPrivate(); RSAKey rsaKey = new RSAKey.Builder(publicKey) .privateKey(privateKey) .keyID(UUID.randomUUID().toString()) .build(); JWKSet jwkSet = new JWKSet(rsaKey); return new ImmutableJWKSet&lt;&gt;(jwkSet); } private static KeyPair generateRsaKey() { KeyPair keyPair; try { KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(&quot;RSA&quot;); keyPairGenerator.initialize(2048); keyPair = keyPairGenerator.generateKeyPair(); } catch (Exception ex) { throw new IllegalStateException(ex); } return keyPair; } @Bean public JwtDecoder jwtDecoder(JWKSource&lt;SecurityContext&gt; jwkSource) { return OAuth2AuthorizationServerConfiguration.jwtDecoder(jwkSource); } @Bean public AuthorizationServerSettings authorizationServerSettings() { return AuthorizationServerSettings.builder().build(); } } </code></pre> <p><strong>SOLUTIONS I TESTED BUT IT DOWS NOT WORK</strong></p> <ol> <li>I changed the tomcat server version to 10.x</li> <li>I added the jakarta-api 
dependency to the pom.xml of the microservice, with the 3.x, 5.x and 6.x versions</li> <li>I added the <code>@EnableWebSecurity</code> annotation</li> </ol> <p><strong>NOTES</strong></p> <ol> <li>I'm using Java 18</li> <li>I'm using OAuth 2.1 and Authorization Server 1.0.0</li> </ol>
Alejandro Villamizar
<p>I was using Spring Boot 3 but was missing:</p> <pre><code> &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt; &lt;/dependency&gt; </code></pre>
Chris
<p>I want to edit a configmap from <code>aws-auth</code> during a vagrant deployment to give my vagrant user access to the EKS cluster. I need to add a snippet into the existing <code>aws-auth</code> configmap. How do I do this programmatically?</p> <p>If you do a <code>kubectl edit -n kube-system configmap/aws-auth</code> you get</p> <pre><code>apiVersion: v1 data: mapRoles: | - groups: - system:bootstrappers - system:nodes rolearn: arn:aws:iam::123:role/nodegroup-abc123 username: system:node:{{EC2PrivateDNSName}} kind: ConfigMap metadata: creationTimestamp: "2019-05-30T03:00:18Z" name: aws-auth namespace: kube-system resourceVersion: "19055217" selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth uid: 0000-0000-0000 </code></pre> <p>I need to enter this bit in there somehow.</p> <pre><code> mapUsers: | - userarn: arn:aws:iam::123:user/sergeant-poopie-pants username: sergeant-poopie-pants groups: - system:masters </code></pre> <p>I've tried to do a <code>cat &lt;&lt;EOF &gt; {file} EOF</code> then patch from file. But that option doesn't exist in <code>patch</code>, only in the <code>create</code> context.</p> <p>I also found this: <a href="https://stackoverflow.com/q/54571185/267490">How to patch a ConfigMap in Kubernetes</a></p> <p>but it didn't seem to work, or perhaps I didn't really understand the proposed solutions.</p>
Eli
<p>First, note that the <code>mapRoles</code> and <code>mapUsers</code> are actually treated as a string, even though it is structured data (yaml).</p> <p>While this problem is solvable by jsonpatch, it is much easier using <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> and <code>kubectl apply</code> like this:</p> <pre><code>kubectl get cm aws-auth -o json \ | jq --arg add "`cat add.yaml`" '.data.mapUsers = $add' \ | kubectl apply -f - </code></pre> <p>Where <code>add.yaml</code> is something like this (notice the lack of extra indentation):</p> <pre><code>- userarn: arn:aws:iam::123:user/sergeant-poopie-pants username: sergeant-poopie-pants groups: - system:masters </code></pre> <p>See also <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a> for more information.</p>
Janos Lenart
<p>I am unable to get the TLS termination at nginx ingress controller working on my kubernetes cluster.</p> <p>my ingress rule looks as the following : </p> <pre><code>Christophers-MacBook-Pro-2:acme-microservice cjaime$ kubectl describe ing myapp-ingress-1 Name: myapp-ingress-1 Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) TLS: acme-io terminates myapp-default.acme.io Rules: Host Path Backends ---- ---- -------- myapp-default.acme.io / myapp:80 (&lt;none&gt;) Annotations: ingress.kubernetes.io/ssl-redirect: true kubernetes.io/ingress.class: nginx Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1 Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1 Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1 Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1 </code></pre> <p>Whenever I try to access this from the browser I get the back the following server certificate</p> <pre><code>Server certificate subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate issuer=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate </code></pre> <p>This is preventing me from creating a valid SSL connection. I know my secret is correct because when using openssl I get a valid connection as follows</p> <pre><code>openssl s_client -servername myapp-default.acme.io -connect us1a-k8s-4.acme.io:31443 -showcerts CONNECTED(00000003) &lt;content omitted&gt; Start Time: 1528241749 Timeout : 300 (sec) Verify return code: 0 (ok) --- </code></pre> <p>However If I run the same command with the servername omitted I get the same fake certificate and a connection error</p> <pre><code>openssl s_client -connect us1a-k8s-4.acme.io:31443 -showcerts CONNECTED(00000003) depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate i:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate &lt;content omitted&gt; Start Time: 1528241957 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) </code></pre>
hackmabrain
<p>Your tests with openssl were executed correctly and they show that nginx does offer the valid certificate for <strong>myapp-default.acme.io</strong> when that hostname is provided in the request via <a href="https://en.m.wikipedia.org/wiki/Server_Name_Indication" rel="nofollow noreferrer">SNI</a>. This is in harmony with what you configured in the Ingress.</p> <p>For other hostnames or requests without a hostname the <strong>default certificate</strong> is sent. That certificate is to be stored in a Secret and configured via a command line parameter to the ingress controller (<code>--default-ssl-certificate=$(POD_NAMESPACE)/tls-ingress</code>).</p> <p>Your browser warning was either because of a mismatch in the hostname or a cached fake certificate in your browser. I suggest you look up how to flush the certificate cache in your browser and/or redo the test with curl:</p> <pre><code>curl -v https://myapp-default.acme.io </code></pre> <p>If it still does not work correctly, you may be affected by <a href="https://github.com/kubernetes/ingress-nginx/issues/1954" rel="nofollow noreferrer">#1954</a> - update nginx-ingress-controller.</p>
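<p>If you also want to replace the fake certificate served as the default, a sketch of wiring it up (the namespace and file names are placeholders; use the namespace your controller runs in):</p> <pre><code># Create a TLS secret to be served when no Ingress host / SNI name matches
kubectl -n ingress-nginx create secret tls tls-ingress \
  --cert=default.crt --key=default.key

# Then start the controller with the flag mentioned above, e.g.
#   --default-ssl-certificate=ingress-nginx/tls-ingress
</code></pre>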
Janos Lenart
<p>I installed istio using these commands:</p> <pre><code>VERSION = 1.0.5 GCP = gcloud K8S = kubectl @$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml @$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml @$(K8S) get pods -n istio-system @$(K8S) label namespace default istio-injection=enabled @$(K8S) get svc istio-ingressgateway -n istio-system </code></pre> <p>Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?</p> <p>Thanks.</p>
user674669
<p>If you used <code>istioctl</code>, it's pretty easy:</p> <pre><code>istioctl x uninstall --purge </code></pre> <p>Of course, it would be easier if that command were listed in <code>istioctl --help</code>...</p> <p>Reference: <a href="https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio" rel="noreferrer">https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio</a></p>
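<p>If istioctl is not an option and you installed from the raw manifests as shown in the question, reversing those same manifests is a rough sketch of the manual route:</p> <pre><code>kubectl delete -f istio-1.0.5/install/kubernetes/istio-demo-auth.yaml
kubectl delete -f istio-1.0.5/install/kubernetes/helm/istio/templates/crds.yaml
kubectl label namespace default istio-injection-
kubectl delete namespace istio-system
</code></pre>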
Tin Can
<p>I have created a new GCP Kubernetes cluster. The cluster is private with NAT - it has no connection to the internet. I also deployed a <code>bastion</code> machine which allows me to connect into my private network (VPC) from the internet. This is the <a href="https://cloud.google.com/nat/docs/using-nat" rel="nofollow noreferrer">tutorial I based it on</a>. SSH into the <code>bastion</code> is currently working.</p> <p>The kubernetes master is not exposed outside. The result:</p> <pre><code>$ kubectl get pods The connection to the server 172.16.0.2 was refused - did you specify the right host or port? </code></pre> <p>So I installed kubectl on the <code>bastion</code> and ran:</p> <pre><code>$ kubectl proxy --port 1111 Starting to serve on 127.0.0.1:3128 </code></pre> <p>Now I want to connect my local <code>kubectl</code> to the remote proxy server. I set up a secure tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it's working.</p> <p>Now I am looking for something like</p> <pre><code>$ kubectl --use-proxy=1111 get pods </code></pre> <p>(make my local kubectl pass through my remote proxy)</p> <p>How do I do it?</p>
No1Lives4Ever
<p><code>kubectl proxy</code> acts exactly like the target apiserver - but the queries through it are already authenticated. From your description, 'works with curl', it sounds like you've set it up correctly; you just need to target the client kubectl to it:</p> <pre><code>kubectl --server=http://localhost:1111 </code></pre> <p>(Where port 1111 on your local machine is where <code>kubectl proxy</code> is available; in your case through a tunnel)</p> <p>If you need exec or attach through <code>kubectl proxy</code> you'll need to run it with either <code>--disable-filter=true</code> or <code>--reject-paths='^$'</code>. Read the fine print and consequences for those options.</p> <h2>Safer way</h2> <p>All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion would be that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:</p> <pre><code>ssh -D 1080 bastion </code></pre> <p>That makes ssh act as a SOCKS proxy. You need <code>GatewayPorts yes</code> in your sshd_config for this to work. Thereafter from the workstation I can use</p> <pre><code>HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod </code></pre>
Janos Lenart
<p>Trying to run Elastic Search 6.2.4 on Openshift but it is not running and the container exits with the code 137. </p> <pre><code>[2018-06-01T14:24:58,148][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [ingest-common] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-expression] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-mustache] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-painless] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [mapper-extras] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [parent-join] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [percolator] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [rank-eval] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [reindex] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [repository-url] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [transport-netty4] [2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [tribe] [2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [ingest-geoip] [2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [ingest-user-agent] [2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-core] [2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-deprecation] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-graph] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-logstash] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-ml] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-monitoring] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-security] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-upgrade] [2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-watcher] [2018-06-01T14:25:01,592][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/131] [Main.cc@128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV [2018-06-01T14:25:03,271][INFO ][o.e.d.DiscoveryModule ] [jge060C] using discovery type [zen] [2018-06-01T14:25:04,305][INFO ][o.e.n.Node ] initialized [2018-06-01T14:25:04,305][INFO ][o.e.n.Node ] [jge060C] starting ... [2018-06-01T14:25:04,497][INFO ][o.e.t.TransportService ] [jge060C] publish_address {10.131.3.134:9300}, bound_addresses {[::]:9300} [2018-06-01T14:25:04,520][INFO ][o.e.b.BootstrapChecks ] [jge060C] bound or publishing to a non-loopback address, enforcing bootstrap checks ERROR: [1] bootstrap checks failed [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] [2018-06-01T14:25:04,531][INFO ][o.e.n.Node ] [jge060C] stopping ... [2018-06-01T14:25:04,623][INFO ][o.e.n.Node ] [jge060C] stopped [2018-06-01T14:25:04,624][INFO ][o.e.n.Node ] [jge060C] closing ... [2018-06-01T14:25:04,634][INFO ][o.e.n.Node ] [jge060C] closed </code></pre> <p>As you can see from the logs, the vm max heap size has to be increased. 
As it turns out to be a kernel parameter, how can I change it for the pod that is running ES?</p>
Jayabalan Bala
<p>Kernel <em>command line</em> parameters can't be changed per pod, but <code>vm.max_map_count</code> is a parameter you can change via sysctl.</p> <p>See these two similar SO questions for a solution:</p> <ul> <li><a href="https://stackoverflow.com/questions/44439372/how-to-pass-sysctl-flags-to-docker-from-k8s">How to pass `sysctl` flags to docker from k8s?</a></li> <li><a href="https://stackoverflow.com/questions/49961956/enabling-net-ipv4-ip-forward-for-a-container">Enabling net.ipv4.ip_forward for a container</a></li> </ul> <p>There is also a more general explanation in the official <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">Kubernetes documentation on sysctl</a>.</p>
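<p>For Elasticsearch specifically, a common pattern is a privileged init container in the same pod that sets the sysctl before Elasticsearch starts; a sketch (on OpenShift the pod's service account also needs a security context constraint that allows privileged containers):</p> <pre><code>spec:
  initContainers:
  - name: set-max-map-count
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    # ...
</code></pre>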
Janos Lenart
<p>Adding the annotation:</p> <pre><code> annotations: nginx.ingress.kubernetes.io/auth-url: http://my-auth-service.my-api.svc.cluster.local:8080 </code></pre> <p>...to my ingress rule causes a 500 response from the ingress controller (the ingress works without it).</p> <p>The service exists and I can ssh into the ingress controller and CURL it, getting a response:</p> <p><code>curl http://my-auth-service.my-api.svc.cluster.local:8080</code> Produces a 200 response.</p> <p>I checked the ingress controller logs but it says that the service returned a <code>404</code>. If I can CURL to the same URL why would it return a <code>404</code>?</p> <pre><code>2019/07/01 20:26:11 [error] 558#558: *443367 auth request unexpected status: 404 while sending to client, client: 192.168.65.3, server: localhost, request: "GET /mocks HTTP/1.1", host: "localhost" </code></pre> <p>I'm not sure what to check to determine the problem.</p>
Neilos
<p>FWIW, for future readers - I ran into the same problem, and after looking at my auth service logs, noticed nginx ingress' requests were appending a /_external-auth-xxxxxx path to the request URL.</p> <p>Here's where the ingress controller does it, in the source:</p> <p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/template/template.go#L428" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/template/template.go#L428</a></p> <p>And how <a href="https://github.com/oliverbarnes/liquid-voting-auth/commit/9bf59d890f0bc27c7957768417ece1cf8e53501b" rel="nofollow noreferrer">I'm handling it</a> in my own auth service (an Elixir/Phoenix route):</p> <pre><code>get "/_external-auth*encoded_nginx_auth_url", TokenController, :index </code></pre>
oliverbarnes
<p>I have multiple pods running as below. I want to delete them all except the one having minimum age. How to do it?</p> <p><a href="https://i.stack.imgur.com/ejLRE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ejLRE.png" alt="enter image description here"></a></p>
Vatan Soni
<p>Something like this? Perhaps also add <code>-l app=value</code> to filter for a specific app</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp -o name | head -n -1 | xargs echo kubectl delete </code></pre> <p>(Remove <code>echo</code> to do it for realz)</p>
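<p>With a label filter it becomes, for example (the label key/value are placeholders):</p> <pre><code>kubectl get pods -l app=myapp --sort-by=.metadata.creationTimestamp -o name \
  | head -n -1 | xargs echo kubectl delete
</code></pre>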
Janos Lenart
<p>I have a Kubernetes environment with a rabbitmq service that deploys 2 rabbitmq pods.</p> <p>I need to install a plugin on rabbitmq (the Delayed Message Plugin), but I don't like the "manual" way, because if the pod is deleted I have to install the plugin again.</p> <p>I want to know what the recommended way of achieving this is. </p> <p>FYI: the manual way is to copy a file into the plugins folder, and then launch the following command: </p> <pre><code>rabbitmq-plugins enable rabbitmq_delayed_message_exchange </code></pre>
dragonalvaro
<p>You should mount the configuration for RabbitMQ from a config map.</p> <p>For example:</p> <p>The ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: rabbitmq-config namespace: rabbitmq data: enabled_plugins: | [rabbitmq_management,rabbitmq_peer_discovery_k8s]. rabbitmq.conf: | ... definitions.json: | ... </code></pre> <p>And then in your Deployment or StatefulSet:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: rabbitmq namespace: rabbitmq spec: replicas: 3 ... template: ... spec: containers: - image: rabbitmq:3.7.4-management-alpine imagePullPolicy: IfNotPresent name: rabbitmq volumeMounts: - name: config-volume mountPath: /etc/rabbitmq ... volumes: - name: config-volume configMap: name: rabbitmq-config items: - key: rabbitmq.conf path: rabbitmq.conf - key: enabled_plugins path: enabled_plugins - key: definitions.json path: definitions.json ... </code></pre> <p>There are several ways to install the plugin in the first place. One is to base off of the image you are currently using, add the plugin, and use the new image instead. Alternatively you could utilize <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">Kubernetes life cycle hooks</a> to download the file pre start. Here is an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="noreferrer">example of postStart</a></p>
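<p>For the first option (baking the plugin file into the image), a rough Dockerfile sketch; the download URL below is a placeholder you would replace with the delayed-message-exchange release matching your RabbitMQ version:</p> <pre><code>FROM rabbitmq:3.7.4-management-alpine

# Placeholder URL - use the .ez release that matches your RabbitMQ version
RUN wget -O "$RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange.ez" \
      "https://example.com/rabbitmq_delayed_message_exchange.ez"
</code></pre> <p>With the plugin file baked into the image, enabling it declaratively is just a matter of adding <code>rabbitmq_delayed_message_exchange</code> to the <code>enabled_plugins</code> entry of the ConfigMap above.</p>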
matthias krull
<p>I am trying to run apache ignite cluster using Google Kubernetes Engine.</p> <p>After following the tutorial here are some <strong>yaml</strong> files.</p> <p>First I create a service - <strong>ignite-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: # Name of Ignite Service used by Kubernetes IP finder. # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName. name: ignite namespace: default spec: clusterIP: None # custom value. ports: - port: 9042 # custom value. selector: # Must be equal to one of the labels set in Ignite pods' # deployement configuration. app: ignite </code></pre> <p><strong><code>kubectl create -f ignite-service.yaml</code></strong></p> <p>Second, I create a deployment for my ignite nodes <strong>ignite-deployment.yaml</strong></p> <h1>An example of a Kubernetes configuration for Ignite pods deployment.</h1> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: # Custom Ignite cluster's name. name: ignite-cluster spec: # A number of Ignite pods to be started by Kubernetes initially. replicas: 2 template: metadata: labels: app: ignite spec: containers: # Custom Ignite pod name. - name: ignite-node image: apacheignite/ignite:2.4.0 env: - name: OPTION_LIBS value: ignite-kubernetes - name: CONFIG_URI value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml ports: # Ports to open. # Might be optional depending on your Kubernetes environment. - containerPort: 11211 # REST port number. - containerPort: 47100 # communication SPI port number. - containerPort: 47500 # discovery SPI port number. - containerPort: 49112 # JMX port number. - containerPort: 10800 # SQL port number. </code></pre> <p><strong><code>kubectl create -f ignite-deployment.yaml</code></strong></p> <p>After that I check status of my pods which are running in my case. However when I check logs for any of my pod, I get the following error,</p> <pre><code>java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite </code></pre> <p>Things I have tried:-</p> <ol> <li>I followed this <a href="https://stackoverflow.com/questions/49395481/how-to-setmasterurl-in-ignite-xml-config-for-kubernetes-ipfinder/49405879#49405879">link</a> to make my cluster work. But in step 4, when I run the daemon yaml file, I get the following error</li> </ol> <p><code>error: error validating "daemon.yaml": error validating data: ValidationError(DaemonSet.spec.template.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false</code></p> <p>Can anybody point me to my mistake which I might be doing here?</p> <p>Thanks.</p>
wadhwasahil
<p>Step 1: <code>kubectl apply -f ignite-service.yaml</code> (with the file in your question)</p> <p>Step 2: <code>kubectl apply -f ignite-rbac.yaml</code></p> <p>ignite-rbac.yaml is like this:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: ignite namespace: default --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: ignite-endpoint-access namespace: default labels: app: ignite rules: - apiGroups: [""] resources: ["endpoints"] resourceNames: ["ignite"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ignite-role-binding namespace: default labels: app: ignite subjects: - kind: ServiceAccount name: ignite roleRef: kind: Role name: ignite-endpoint-access apiGroup: rbac.authorization.k8s.io </code></pre> <p>Step 3: <code>kubectl apply -f ignite-deployment.yaml</code> (very similar to your file, I've only added one line, <code>serviceAccount: ignite</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: # Custom Ignite cluster's name. name: ignite-cluster namespace: default spec: # A number of Ignite pods to be started by Kubernetes initially. replicas: 2 template: metadata: labels: app: ignite spec: serviceAccount: ignite ## Added line containers: # Custom Ignite pod name. - name: ignite-node image: apacheignite/ignite:2.4.0 env: - name: OPTION_LIBS value: ignite-kubernetes - name: CONFIG_URI value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml ports: # Ports to open. # Might be optional depending on your Kubernetes environment. - containerPort: 11211 # REST port number. - containerPort: 47100 # communication SPI port number. - containerPort: 47500 # discovery SPI port number. - containerPort: 49112 # JMX port number. - containerPort: 10800 # SQL port number. </code></pre> <p>This should work fine. I've got this in the logs of the pod (<code>kubectl logs -f ignite-cluster-xx-yy</code>), showing the 2 Pods successfully locating each other:</p> <pre><code>[13:42:00] Ignite node started OK (id=f89698d6) [13:42:00] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, offheap=0.72GB, heap=1.0GB] [13:42:00] Data Regions Configured: [13:42:00] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false] [13:42:01] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2, offheap=1.4GB, heap=2.0GB] [13:42:01] Data Regions Configured: [13:42:01] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false] </code></pre>
Janos Lenart
<p>In the documentation about affinity and anti-affinity rules for Kubernetes there is a practical use case around a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#more-practical-use-cases" rel="nofollow noreferrer">web application and a local redis cache</a>. </p> <ol> <li>The redis deployment has PodAntiAffinity configured to ensure the scheduler does not co-locate replicas on a single node.</li> <li>The web application deployment has a pod affinity to ensure the app is scheduled with the pod that has the label store (Redis).</li> </ol> <p>To connect to the redis from the webapp we would have to define a service.</p> <p>Question: How can we be sure that the webapp will always use the redis that is co-located on the same node and not another one? If I read the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#version-compatibility" rel="nofollow noreferrer">version compatibility</a> correctly, from Kubernetes v1.2 the <strong>iptables mode</strong> for kube-proxy became the <strong>default</strong>.</p> <p>Reading the docs about the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables mode for kube-proxy</a>, it says that <strong>by default</strong>, kube-proxy in iptables mode <strong>chooses a backend at random</strong>.</p> <p><s> So my answer to the question would be: <strong>No, we can't be sure</strong>. If you want to be sure then put the redis and webapp in one pod? </s></p>
Geoffrey Samper
<p>This can be configured in the (redis) Service, but in general it is not recommended:</p> <blockquote> <p>Setting <code>spec.externalTrafficPolicy</code> to the value <code>Local</code> will only proxy requests to local endpoints, never forwarding traffic to other nodes</p> </blockquote> <p>This is a complex topic, read more here:</p> <ul> <li><a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></li> </ul>
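<p>For illustration only, a minimal Service carrying that setting could look like the sketch below; the name, selector and port are made up, and note that <code>externalTrafficPolicy</code> only applies to <code>NodePort</code> and <code>LoadBalancer</code> Services:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis            # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only proxy to Pods on the receiving node
  selector:
    app: store
  ports:
  - port: 6379
    targetPort: 6379
</code></pre>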
Janos Lenart
<p>Is there a way to get the actual resource (CPU and memory) constraints inside a container?</p> <p>Say the node has 4 cores, but my container is only configured with 1 core through resource requests/limits, so it actually uses 1 core, but it still sees 4 cores from /proc/cpuinfo. I want to determine the number of threads for my application based on the number of cores it can actually use. I'm also interested in memory.</p>
Dagang
<h2>Short answer</h2> <p>You can use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-container-fields-as-values-for-environment-variables" rel="noreferrer">Downward API</a> to access the resource requests and limits. There is no need for service accounts or any other access to the apiserver for this.</p> <p>Example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-envars-resourcefieldref spec: containers: - name: test-container image: k8s.gcr.io/busybox:1.24 command: [ "sh", "-c"] args: - while true; do echo -en '\n'; printenv MY_CPU_REQUEST MY_CPU_LIMIT; printenv MY_MEM_REQUEST MY_MEM_LIMIT; sleep 10; done; resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: containerName: test-container resource: requests.cpu divisor: "1m" - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.cpu divisor: "1m" - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: containerName: test-container resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory restartPolicy: Never </code></pre> <p>Test:</p> <pre><code>$ kubectl logs dapi-envars-resourcefieldref 125 250 33554432 67108864 </code></pre> <h2>Long answer</h2> <p>Kubernetes translates resource requests and limits to kernel primitives. It is possible to access that information from the pod too, but considerably more complicated and also not portable (Window$ nodes, anyone?)</p> <ul> <li>CPU requests/limits: <code>/sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.*</code> : cpu.shares (this is requests; divide by 1024 to get core percentage), cpu.cfs_period_us, cpu.cfs_quota_us (divide cfs_quota_us by cfs_period_us to get cpu limit, relative to 1 core)</li> <li>Memory limit: <code>/sys/fs/cgroup/memory/kubepods/..QOS../podXX/memory.limit_in_bytes</code></li> <li>Memory request: this one is tricky. It gets translated into oom adjustment scores under <code>/proc/..PID../oom_score_adj</code> . Good luck calculating that back to memory request amount :)</li> </ul> <p>Short answer is great, right? ;)</p>
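<p>To give a feel for the kernel route, here is a rough, non-portable sketch of reading those files from inside a container (cgroup v1 paths as listed above; the layout differs on cgroup v2 nodes and between container runtimes):</p> <pre><code># CPU request, roughly requests.cpu (in cores) * 1024
cat /sys/fs/cgroup/cpu/cpu.shares
# CPU limit = cfs_quota_us / cfs_period_us (quota is -1 if no limit is set)
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
# Memory limit in bytes
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
</code></pre>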
Janos Lenart
<p>What's the best way to store a persistent file in Kubernetes? I have a cert (.pfx) and I want to pass its path to the application. From the looks of it, it can't be stored in secrets. I was thinking about a volume, but the question is how do I upload the file to it? And which type of volume should I choose? Or is there any other efficient way?</p>
FRC
<p>It's unclear from your question why you came to the conclusion that it can't be stored as a Secret. This is one of the main <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-cases" rel="noreferrer">use cases</a> for Secrets.</p> <p>Step 1. Create a Secret from your file:</p> <pre><code>kubectl create secret generic mysecret --from-file=myfile=/tmp/my.pfx </code></pre> <p>Step 2. Mount the Secret volume into a Pod:</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: secret-test-pod spec: volumes: - name: secret-volume secret: secretName: mysecret containers: - name: ... image: ... volumeMounts: - name: secret-volume mountPath: "/etc/secret-volume" </code></pre> <p>Your container should see a file at <code>/etc/secret-volume/myfile</code></p>
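<p>Since the question was about handing the application a path, one option is to pass the mount location through an environment variable in the same container spec (the variable name here is arbitrary; use whatever your app expects):</p> <pre><code>    env:
    - name: PFX_PATH                     # hypothetical name your app reads
      value: /etc/secret-volume/myfile
</code></pre>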
Janos Lenart
<p>When I run the command mentioned below, <code>kubectl get po -n kube-system</code>, I get this error: <strong>The connection to the server localhost:8080 was refused - did you specify the right host or port?</strong></p>
Adarsha Jha
<p><code>localhost:8080</code> is the default server to connect to if there is no <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl" rel="nofollow noreferrer"><code>kubeconfig</code></a> present on your system (for the current user).</p> <p>Follow the instructions on the page linked. You will need to execute something like:</p> <blockquote> <p><code>gcloud container clusters get-credentials [CLUSTER_NAME]</code></p> </blockquote>
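<p>Once the credentials are fetched you can confirm that kubectl is no longer falling back to <code>localhost:8080</code>, for example:</p> <pre><code>kubectl config current-context   # should name the GKE cluster
kubectl cluster-info             # should print the real apiserver address
</code></pre>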
Janos Lenart
<p>I have the following Ingress resource:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: main-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/force-ssl-redirect: "false" nginx.ingress.kubernetes.io/proxy-read-timeout: "86400" nginx.ingress.kubernetes.io/proxy-send-timeout: "86400" spec: tls: - secretName: the-secret hosts: - sample.domain.com - sample2.domain.com - rabbit.domain.com - hub.domain.com - grafana.domain.com rules: - host: sample.domain.com http: paths: - path: / backend: serviceName: fe-srvc servicePort: 80 - path: /api backend: serviceName: be-srvc servicePort: 80 - host: sample2.domain.com http: paths: - path: / backend: serviceName: fe2-srvc servicePort: 80 - path: /api backend: serviceName: be-srvc servicePort: 80 ## The Extra Services ### - host: rabbit.domain.com http: paths: - path: / backend: serviceName: rabbitmq-srvc servicePort: 80 </code></pre> <p>and I want to patch it after it is deployed.</p> <p>So I use this, to try and replace the <code>be-srvc</code> value with <code>some-srvc</code> :</p> <pre><code>kubectl patch ing/main-ingress --patch '{ "spec" : { "rules": [{"http":{"paths":[ {"- path":"/"},{"backend":{"serviceName":"other-srvc"}},{"servicePort":"80"} ] }}]}}' </code></pre> <p>and I get this error:</p> <pre><code>The Ingress "main-ingress" is invalid: * spec.rules[0].http.backend.serviceName: Required value * spec.rules[0].http.backend.servicePort: Invalid value: 0: must be between 1 and 65535, inclusive </code></pre> <p>Any insight would be appreciated!</p>
Kostas Demiris
<p>Your patch has a number of problems; for example <code>"- path"</code> instead of <code>"path"</code> but also incorrect referencing of object levels. However, even if you fixed the mistakes this would not work as intended. Let's see why.</p> <p><code>kubectl patch</code> is a request for a <strong><em>strategic merge patch</em></strong>. When patching arrays, like the <code>.spec.rules</code> and <code>.spec.rules.http.paths</code> in this case, a <em>strategic merge patch</em> can use the defined <em>patch type</em> and <em>merge patch merge key</em> for the object to do The Right Thing. However, in case of the Ingress object no one bothered to define these. This means that any patch will overwrite the entire object; it will not be a nice merge that one is hoping for.</p> <p>To accomplish the particular change referred to in the question you can do:</p> <pre><code>kubectl get ing/main-ingress -o json \ | jq '(.spec.rules[].http.paths[].backend.serviceName | select(. == "be-srvc")) |= "some-srvc"' \ | kubectl apply -f - </code></pre> <p>The above will change all occurrences of the <code>be-srvc</code> Service to <code>some-srvc</code>. Keep in mind that there is a short race condition here: if the Ingress is modified after <code>kubectl get</code> ran the change will fail with the error <code>Operation cannot be fulfilled on ingresses.extensions "xx": the object has been modified</code>; to handle that case you need implement a retry logic.</p> <p>If the indexes are known in the arrays mentioned above you can accomplish the patch directly:</p> <pre><code>kubectl patch ing/main-ingress --type=json \ -p='[{"op": "replace", "path": "/spec/rules/0/http/paths/1/backend/serviceName", "value":"some-srvc"}]' kubectl patch ing/main-ingress --type=json \ -p='[{"op": "replace", "path": "/spec/rules/1/http/paths/1/backend/serviceName", "value":"some-srvc"}]' </code></pre> <p>The two commands above will change the backends for <code>sample.domain.com/api</code> and <code>sample2.domain.com/api</code> to <code>some-srvc</code>.</p> <p>The two commands can also be combined like this:</p> <pre><code>kubectl patch ing/main-ingress --type=json \ -p='[{"op": "replace", "path": "/spec/rules/0/http/paths/1/backend/serviceName", "value":"some-srvc"}, {"op": "replace", "path": "/spec/rules/1/http/paths/1/backend/serviceName", "value":"some-srvc"}]' </code></pre> <p>This has the same effect and as an added bonus there is no race condition here; the patch guaranteed to be atomic.</p>
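<p>If you want to handle that race, a rough retry sketch in shell (purely illustrative; tune the attempt count and the jq filter to your case) could be:</p> <pre><code>for attempt in 1 2 3 4 5; do
  kubectl get ing/main-ingress -o json \
    | jq '(.spec.rules[].http.paths[].backend.serviceName | select(. == "be-srvc")) |= "some-srvc"' \
    | kubectl apply -f - && break
  echo "conflict, retrying ($attempt)"; sleep 1
done
</code></pre>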
Janos Lenart
<p>A Secret went missing in one of my Kubernetes namespaces. Either some process or somebody deleted it accidentally. Is there a way to find out how it got deleted?</p>
Frqa
<p>If <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logging</a> was enabled on the cluster at the time, then yes. Some hosted Kubernetes clusters (GKE, AKS, ...) can have this enabled too, but you haven't specified the kind of cluster/provider. Otherwise there is no way.</p>
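<p>For future incidents, if you control the apiserver you can enable auditing by pointing kube-apiserver at a policy file (<code>--audit-policy-file</code>) and a log destination (<code>--audit-log-path</code>). A minimal policy sketch that records who touched Secrets and ignores everything else might look like this:</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: None
</code></pre>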
Janos Lenart
<p>I am following this tutorial at <a href="https://gettech1.wordpress.com/2016/05/26/setting-up-kubernetes-cluster-on-ubuntu-14-04-lts/" rel="nofollow noreferrer">https://gettech1.wordpress.com/2016/05/26/setting-up-kubernetes-cluster-on-ubuntu-14-04-lts/</a> to set up a Kubernetes multi-node cluster with 2 minions and 1 master node on remote Ubuntu machines; after following all the steps it goes OK. But when I try to run the ./kube-up.sh bash file, it returns the following errors</p> <blockquote> <p>ubuntu@ip-XXX-YYY-ZZZ-AAA:~/kubernetes/cluster</p> <p>$ ./kube-up.sh</p> <p>Starting cluster in us-central1-b using provider gce ... calling</p> <p>verify-prereqs Can't find gcloud in PATH, please fix and retry. The</p> <p>Google Cloud SDK can be downloaded from <a href="https://cloud.google.com/sdk/" rel="nofollow noreferrer">https://cloud.google.com/sdk/</a>.</p> </blockquote> <p><strong>Edit:</strong> I have fixed the above issue after exporting different environment variables like</p> <pre><code>$ export KUBE_VERSION=2.2.1 $ export FLANNEL_VERSION=0.5.5 $ export ETCD_VERSION=1.1.8 </code></pre> <p>but after that it generates this issue</p> <blockquote> <p>kubernet gzip: stdin: not in gzip format tar: Child returned status 1 tar: Error is not recoverable: exiting now</p> </blockquote>
A l w a y s S u n n y
<p>The command you should be executing is <code>KUBERNETES_PROVIDER=ubuntu ./kube-up.sh</code></p> <p>Without setting that environment variable kube-up.sh tries to deploy VMs on Google Compute Engine and to do so it needs the gcloud binary that you don't have installed.</p>
Janos Lenart
<p>How can I pass the <code>nginx.conf</code> configuration file to an nginx instance running inside a Kubernetes cluster?</p>
xechelonx
<p>You can create a ConfigMap object and then mount the values as files where you need them:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config data: nginx.conf: | your config comes here like this other.conf: | second file contents </code></pre> <p>And in you pod spec:</p> <pre><code>spec: containers: - name: nginx image: nginx volumeMounts: - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf - name: other.conf mountPath: /etc/nginx/other.conf subPath: other.conf volumes: - name: nginx-config configMap: name: nginx-config </code></pre> <p>(Take note of the duplication of the filename in mountPath and using the exact same subPath; same as bind mounting files.)</p> <p>For more information about ConfigMap see: <a href="https://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">https://kubernetes.io/docs/user-guide/configmap/</a></p> <blockquote> <p>Note: A container using a ConfigMap as a subPath volume will not receive ConfigMap updates.</p> </blockquote>
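<p>If you already have the files on disk, you can also let kubectl build the equivalent ConfigMap for you instead of pasting the contents into YAML:</p> <pre><code>kubectl create configmap nginx-config \
  --from-file=nginx.conf=./nginx.conf \
  --from-file=other.conf=./other.conf
</code></pre>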
Janos Lenart
<p>I am trying to get heapster eventer to work on a cluster with RBAC enabled. Using the same roles that work for /heapster command does not seem to be sufficient.</p> <p>On running the pod logs fill up with entries like this:</p> <pre><code>Failed to load events: events is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list events at the cluster scope </code></pre> <p>Does anyone know the proper authorization for my heapster service account, short of admin rights?</p> <p>Eventer deployment doc:</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: labels: k8s-app: eventer name: eventer namespace: kube-system spec: replicas: 1 selector: matchLabels: k8s-app: eventer strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: k8s-app: eventer spec: serviceAccountName: heapster containers: - name: eventer image: k8s.gcr.io/heapster-amd64:v1.5.4 imagePullPolicy: IfNotPresent command: - /eventer - --source=kubernetes:https://kubernetes.default - --sink=log resources: limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 200Mi terminationMessagePath: /dev/termination-log restartPolicy: Always terminationGracePeriodSeconds: 30 </code></pre> <p>RBAC:</p> <pre><code># Original: https://brookbach.com/2018/10/29/Heapster-on-Kubernetes-1.11.3.html apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: heapster rules: - apiGroups: - "" resources: - pods - nodes - namespaces - events verbs: - get - list - watch - apiGroups: - extensions resources: - deployments verbs: - get - list - update - watch - apiGroups: - "" resources: - nodes/stats verbs: - get </code></pre> <p>Cluster role binding:</p> <pre><code># Original: https://github.com/kubernetes-retired/heapster/blob/master/deploy/kube-config/rbac/heapster-rbac.yaml kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: heapster roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: heapster subjects: - kind: ServiceAccount name: heapster namespace: kube-system </code></pre> <p>Related question: <a href="https://stackoverflow.com/questions/39496955/how-to-propagate-kubernetes-events-from-a-gke-cluster-to-google-cloud-log">How to propagate kubernetes events from a GKE cluster to google cloud log</a></p>
Gudlaugur Egilsson
<p>All of the above objects seem to be correct to me.</p> <p>It's just a hunch, but perhaps you created the Deployment first and then the ClusterRole and/or ClusterRoleBinding and/or the ServiceAccount itself. Make sure you have these 3 first, then delete the current heapster Pods (or the Deployment, and wait for the Pod to terminate before recreating the Deployment).</p> <p>(Create the ServiceAccount by <code>kubectl create sa heapster -n kube-system</code>)</p> <p>Also, you can test whether the ServiceAccount can list events by:</p> <pre><code>kubectl get ev --all-namespaces --as system:serviceaccount:kube-system:heapster </code></pre>
Janos Lenart
<p>Is it possible to map the device port (USB port) of a worker node to a Pod? Similar to <code>docker create --device=/dev/ttyACM0:/dev/ttyACM0</code></p> <p>Is it possible? I checked the reference docs, but could not find anything.</p> <p>In a Docker service, is it possible to map a <code>--device</code> port to the service container (if I am running only 1 container)? </p>
jisan
<p>You can actually get this to work. You need to run the container privileged and use a hostPath like this:</p> <pre><code> containers: - name: acm securityContext: privileged: true volumeMounts: - mountPath: /dev/ttyACM0 name: ttyacm volumes: - name: ttyacm hostPath: path: /dev/ttyACM0 </code></pre>
Janos Lenart
<p>I tried to create a k8s cluster on AWS using kops. </p> <p>After creating the cluster with the default definition, I saw that a LoadBalancer had been created. </p> <pre><code>apiVersion: kops/v1alpha2 kind: Cluster metadata: name: bungee.staging.k8s.local spec: api: loadBalancer: type: Public .... </code></pre> <p>I am just wondering about the reason for creating the LoadBalancer along with the cluster.</p> <p>Appreciated!</p>
pham cuong
<p>In the type of cluster that kops creates the apiserver (referred to as api above, a component of the Kubernetes master, aka control plane) <em>may</em> not have a static IP address. Also, kops can create a HA (replicated) control plane, which means there <strong>will</strong> be multiple IPs where the apiserver is available.</p> <p>The apiserver functions as a central connection hub for all other Kubernetes components; for example all the nodes connect to it, but the human operators also connect to it via kubectl. For one, these configuration files do not support multiple IP addresses for the apiserver (so as to make use of the HA setup). Plus updating the configuration files every time the apiserver IP address(es) change would be difficult.</p> <p>So the load balancer functions as a front for the apiserver(s) with a single, static IP address (an anycast IP with AWS/GCP). This load balancer IP is specified in the configuration files of Kubernetes components instead of the actual apiserver IP(s).</p> <p>Actually, it is also possible to solve this problem by using a DNS name that resolves to the IP(s) of the apiserver(s), coupled with a mechanism that keeps this record updated. This solution can't react to changes of the underlying IP(s) as fast as a load balancer can, but it does save you a couple of bucks, plus it is slightly less likely to fail and creates less dependency on the cloud provider. This can be configured like so:</p> <pre><code>spec: api: dns: {} </code></pre> <p>See the <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#api" rel="noreferrer">specification</a> for more details.</p>
Janos Lenart
<p>I have the following services and would like to call those outside from kubernetes: </p> <pre><code>k get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE greeter-service ClusterIP 10.233.35.214 &lt;none&gt; 3000/TCP 4d9h helloweb ClusterIP 10.233.8.173 &lt;none&gt; 3000/TCP 4d9h kubernetes ClusterIP 10.233.0.1 &lt;none&gt; 443/TCP 4d13h movieweb ClusterIP 10.233.12.155 &lt;none&gt; 3000/TCP 3d9h\ </code></pre> <p>The <strong>greeter-service</strong> is the first candidate, that I would like to reach from outside. I've created a virtual services as follows: </p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: greeter-service spec: hosts: - greeter-service.default.svc.cluster.local http: - match: - uri: prefix: /greeting rewrite: uri: /hello route: - destination: host: greeter-service.default.svc.cluster.local port: number: 3000 subset: v2 - route: - destination: host: greeter-service.default.svc.cluster.local port: number: 3000 subset: v1 </code></pre> <p>then after the deployment: </p> <pre><code>k get virtualservices NAME GATEWAYS HOSTS AGE greeter-service [greeter-service.default.svc.cluster.local] 3d2h helloweb [gateway] [helloweb.dev] 4d5h movieweb [gateway] [movieweb.dev] 3d9h </code></pre> <p>as you can see, the virtual service for <strong>greeter-service</strong> is created. Then I tried to call it from outside via curl: </p> <pre><code>curl -v 172.17.8.180:80/greeting * Trying 172.17.8.180... * TCP_NODELAY set * Connected to 172.17.8.180 (172.17.8.180) port 80 (#0) &gt; GET /greeting HTTP/1.1 &gt; Host: 172.17.8.180 &gt; User-Agent: curl/7.58.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 404 Not Found &lt; date: Wed, 04 Dec 2019 20:34:55 GMT &lt; server: istio-envoy &lt; content-length: 0 &lt; * Connection #0 to host 172.17.8.180 left intact </code></pre> <p>The ingress controller is configured as follows:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - '*' </code></pre> <p>As you can see, I can not reach the service. What is wrong?</p>
softshipper
<p>Your query didn't match the host. Try</p> <pre><code>curl -v -H 'Host: greeter-service.default.svc.cluster.local' 172.17.8.180:80/greeting </code></pre>
Janos Lenart
<p>How can I get the image ID (the docker sha256 hash) of a image / container within a Kubernetes deployment? </p>
Chris Stryczynski
<p>Something like this will do the trick (you must have <code>jq</code> installed):</p> <pre><code>$ kubectl get pod --namespace=xx yyyy -o json | jq '.status.containerStatuses[] | { "image": .image, "imageID": .imageID }' { "image": "nginx:latest", "imageID": "docker://sha256:b8efb18f159bd948486f18bd8940b56fd2298b438229f5bd2bcf4cedcf037448" } { "image": "eu.gcr.io/zzzzzzz/php-fpm-5:latest", "imageID": "docker://sha256:6ba3fe274b6110d7310f164eaaaaaaaaaa707a69df7324a1a0817fe3b475566a" } </code></pre>
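<p>If jq is not available, a jsonpath query gives roughly the same information with kubectl alone:</p> <pre><code>kubectl get pod --namespace=xx yyyy \
  -o jsonpath='{range .status.containerStatuses[*]}{.image}{" "}{.imageID}{"\n"}{end}'
</code></pre>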
Janos Lenart
<p>I have a simple meteor app deployed on Kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now I want to expose it to the internet and secure it (using the HTTPS protocol). Can anyone give simple instructions for this?</p>
fay
<p>In my opinion <a href="https://github.com/jetstack/kube-lego" rel="noreferrer">kube-lego</a> is the best solution for GKE. See why:</p> <ul> <li>Uses <a href="https://letsencrypt.org/" rel="noreferrer">Let's Encrypt</a> as a CA</li> <li>Fully automated enrollment and renewals</li> <li>Minimal configuration in a single ConfigMap object</li> <li>Works with <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">nginx-ingress-controller</a> (see <a href="https://github.com/jetstack/kube-lego/tree/master/examples/nginx" rel="noreferrer">example</a>)</li> <li>Works with <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">GKE's HTTP Load Balancer</a> (see <a href="https://github.com/jetstack/kube-lego/tree/master/examples/gce" rel="noreferrer">example</a>)</li> <li>Multiple domains fully supported, including virtual hosting multiple https sites on one IP (with nginx-ingress-controller's SNI support)</li> </ul> <p>Example configuration (that's it!): </p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: kube-lego namespace: kube-lego data: lego.email: "your@email" lego.url: "https://acme-v01.api.letsencrypt.org/directory" </code></pre> <p>Example Ingress (you can create more of these): </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: site1 annotations: # remove next line if not using nginx-ingress-controller kubernetes.io/ingress.class: "nginx" # next line enable kube-lego for this Ingress kubernetes.io/tls-acme: "true" spec: tls: - hosts: - site1.com - www.site1.com - site2.com - www.site2.com secretName: site12-tls rules: ... </code></pre>
Janos Lenart
<p>I've created a secret using</p> <pre class="lang-shell prettyprint-override"><code>kubectl create secret generic production-tls \ --from-file=./tls.key \ --from-file=./tls.crt </code></pre> <p>If I'd like to update the values - how can I do this?</p>
Chris Stryczynski
<p>This should work:</p> <pre class="lang-shell prettyprint-override"><code>kubectl create secret generic production-tls \ --save-config \ --dry-run=client \ --from-file=./tls.key --from-file=./tls.crt \ -o yaml | \ kubectl apply -f - </code></pre>
Janos Lenart
<p>When trying to use the helm function: lookup, I do not get any result at all as expected.</p> <p>My Secret that I try to read looks like this</p> <pre><code>apiVersion: v1 data: adminPassword: VG9wU2VjcmV0UGFzc3dvcmQxIQ== adminUser: YWRtaW4= kind: Secret metadata: annotations: sealedsecrets.bitnami.com/cluster-wide: &quot;true&quot; name: activemq-artemis-broker-secret namespace: common type: Opaque </code></pre> <p>The template helm chart that should load the adminUser and adminPassword data looks like this</p> <pre><code>apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: {{ .Values.labels.app }} namespace: common spec: {{ $secret := lookup &quot;v1&quot; &quot;Secret&quot; .Release.Namespace &quot;activemq-artemis-broker-secret&quot; }} adminUser: {{ $secret.data.adminUser }} adminPassword: {{ $secret.data.adminPassword }} </code></pre> <p>When deploying this using ArgoCD I get the following error:</p> <pre><code>failed exit status 1: Error: template: broker/templates/deployment.yaml:7:23: executing &quot;broker/templates/deployment.yaml&quot; at &lt;$secret.data.adminUser&gt;: nil pointer evaluating interface {}.adminUser Use --debug flag to render out invalid YAML </code></pre> <p>Both the secret and the deployment is in the same namespace (common).</p> <p>If I try to get the secret with kubectl it works as below</p> <pre><code>kubectl get secret activemq-artemis-broker-secret -n common -o json { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;data&quot;: { &quot;adminPassword&quot;: &quot;VG9wU2VjcmV0UGFzc3dvcmQxIQ==&quot;, &quot;adminUser&quot;: &quot;YWRtaW4=&quot; }, &quot;kind&quot;: &quot;Secret&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: { &quot;sealedsecrets.bitnami.com/cluster-wide&quot;: &quot;true&quot; }, &quot;creationTimestamp&quot;: &quot;2022-10-10T14:40:49Z&quot;, &quot;name&quot;: &quot;activemq-artemis-broker-secret&quot;, &quot;namespace&quot;: &quot;common&quot;, &quot;ownerReferences&quot;: [ { &quot;apiVersion&quot;: &quot;bitnami.com/v1alpha1&quot;, &quot;controller&quot;: true, &quot;kind&quot;: &quot;SealedSecret&quot;, &quot;name&quot;: &quot;activemq-artemis-broker-secret&quot;, &quot;uid&quot;: &quot;edff38fb-a966-47a6-a706-cb197ac1797d&quot; } ], &quot;resourceVersion&quot;: &quot;127303988&quot;, &quot;uid&quot;: &quot;0679fc5c-7465-4fe1-9197-b483073e93c2&quot; }, &quot;type&quot;: &quot;Opaque&quot; } </code></pre> <p>What is wrong here. I use helm version: 3.8.1 and Go version: 1.75</p>
Mikael Nyborg
<p>This error is the result of two parts <em>working</em> together:</p> <p>First, helm's <code>lookup</code> only works in a running cluster, not when running <code>helm template</code> (without <code>--validate</code>). If run in that manner it returns nil. (It is usually used as <code>lookup ... | default dict {}</code>, to avoid a nasty error message).</p> <p>Second, you're deploying with ArgoCD that is actually running <code>helm template</code> internally when deploying a helm chart. See open issue: <a href="https://github.com/argoproj/argo-cd/issues/5202" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/issues/5202</a> . The issue mentions a plugin that can be used to change this behaviour. However, doing so requires some reconfiguration of argocd itself, which is not trivial and is not without side effects.</p>
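<p>A common way to keep the template from blowing up when <code>lookup</code> returns nil (for example under <code>helm template</code>, and therefore under ArgoCD) is to guard it roughly like this; note that the rendered values will then simply be empty instead of the real secret data:</p> <pre><code>{{- $secret := (lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret") | default dict }}
{{- $data := $secret.data | default dict }}
adminUser: {{ $data.adminUser | default "" }}
adminPassword: {{ $data.adminPassword | default "" }}
</code></pre>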
Janos Lenart
<p>Just curious about the intent for this default namespace.</p>
Steven Barragán
<p>That namespace exists in clusters created with kubeadm for now. It contains a single ConfigMap object, cluster-info, that aids discovery and security bootstrap (basically, it contains the CA for the cluster and such). This object is readable without authentication.</p> <p>If you are curious:</p> <pre><code>$ kubectl get configmap -n kube-public cluster-info -o yaml </code></pre> <p>There are more details in this <a href="https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters/" rel="noreferrer">blog post</a> and the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md#new-kube-public-namespace" rel="noreferrer">design document</a>:</p> <blockquote> <h2>NEW: kube-public namespace</h2> <p>[...] To create a config map that everyone can see, we introduce a new kube-public namespace. This namespace, by convention, is readable by all users (including those not authenticated). [...]</p> <p>In the initial implementation the kube-public namespace (and the cluster-info config map) will be created by kubeadm. That means that these won't exist for clusters that aren't bootstrapped with kubeadm. [...]</p> </blockquote>
Janos Lenart
<p>I read about MetalLB in <a href="http://blog.cowger.us/2018/07/25/using-kubernetes-externaldns-with-a-home-bare-metal-k8s.html" rel="nofollow noreferrer">http://blog.cowger.us/2018/07/25/using-kubernetes-externaldns-with-a-home-bare-metal-k8s.html</a>, where the writers said:</p> <blockquote> <p>Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have <strong>significant downsides</strong> for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.</p> </blockquote> <p>I want to know what these significant downsides are.</p>
yasin lachini
<p>A Service with <code>type: NodePort</code> would open the same port on all of the nodes, enabling clients to direct their traffic to any of the nodes, and kube-proxy can balance the traffic between Pods from that point on. You face 3 problems here:</p> <ol> <li>Unless you are happy with depending on a single node you'd need to create your own load balancing solution to target multiple (or even all) nodes. This is doable of course but you need extra software or hardware plus configuration</li> <li>For the configuration above you also need a mechanism to discover the IP addresses of the nodes, keep that list updated and monitor the health of the nodes. Again, doable but extra pain</li> <li>NodePort only supports picking a port number from a specific range (default is 30000-32767). The range can be modified but you won't be able to pick your favourite ports like 80 or 443 this way. Again, not a huge problem if you have an external load balancing solution which will hide this implementation detail</li> </ol> <p>As for a Service with <code>type: ClusterIP</code> (default) and <code>externalIPs: [...]</code> (you must specify the IP address(es) of node(s) there), your problems will be:</p> <ol> <li>You need some method to pick some nodes that are healthy and keep the Service object updated with that list. Doable but requires extra automation.</li> <li>Same as 1. for NodePort</li> <li>Although you get to pick arbitrary port numbers here (so 80, 443, 3306 are okay) you will need to do some housekeeping to avoid attempting to use the same port number on the same node from two different Service objects. Once again, doable but you probably have something better to do</li> </ol>
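<p>For reference, this is roughly what pinning an explicit port on a NodePort Service looks like (all names here are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # ClusterIP port
    targetPort: 8080  # container port
    nodePort: 30080   # must be inside the node port range (default 30000-32767)
</code></pre>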
Janos Lenart
<p>Can I run both docker swarm and kubernetes on same nodes , can overlay network and kubernetes internal cluster network work together ?</p>
Rajib Mitra
<p>Technically yes, but it's not as good an idea as it sounds at first. Unfortunately it confuses Kubernetes about the amount of resources available on the nodes.</p>
Janos Lenart
<p>When a client sends a request to the Kubernetes apiserver, authentication plugins attempt to <a href="https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication#authentication-strategies" rel="nofollow noreferrer">associate a number of attributes to the request</a>. These attributes can be used by authorisation plugins to determine whether the client's request can proceed. </p> <p>One such attribute is the UID of the client, however <a href="https://kubernetes.io/docs/admin/authorization#review-your-request-attributes" rel="nofollow noreferrer">Kubernetes does not review the UID attribute during authorisation</a>. If this is the case, how is the UID attribute used?</p>
dippynark
<p>The UID field is intentionally not used for authentication purposes, but it is there to allow logging for audit purposes.</p> <p>For many organizations this might not be important, but for example Google allows employees to change their usernames (but of course not the numeric UID). Logging the UID would allow lookups of actions regardless of the current username.</p> <p>(Now some might point out that changing the username will likely involve losing the current privileges; this is an accepted limitation/inconvenience.)</p>
Janos Lenart
<p>I am using PostgreSQL helm chart and facing an issue while adding an init.sql script in the <code>/charts/postgresql/files/docker-entrypoint-initdb.d</code> and running <code>helm install</code> and I am getting the following issue - </p> <pre class="lang-none prettyprint-override"><code>Error: YAML parse error on iriusrisk/charts/postgresql/templates/._metrics-configmap.yaml: error converting YAML to JSON: yaml: control characters are not allowed </code></pre> <p>I believe it has more to do with some issue introduced by Mac I am currently using MacOS Mojave Version - 10.14.6</p> <p>I have uploaded the files here <a href="https://github.com/prav10194/helm-chart" rel="nofollow noreferrer">https://github.com/prav10194/helm-chart</a> and the <a href="https://github.com/prav10194/helm-chart/blob/master/charts/postgresql-8.6.16.tgz" rel="nofollow noreferrer">https://github.com/prav10194/helm-chart/blob/master/charts/postgresql-8.6.16.tgz</a> is the one with the sql script and <a href="https://github.com/prav10194/helm-chart/blob/master/charts/postgresql-8.6.12.tgz" rel="nofollow noreferrer">https://github.com/prav10194/helm-chart/blob/master/charts/postgresql-8.6.12.tgz</a> is without the sql script. </p> <p>Running it on minikube version: v1.6.2</p> <p>Helm version:</p> <pre class="lang-none prettyprint-override"><code>version.BuildInfo{Version:"`v3.0.2`", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"} </code></pre>
Pranav Bhatia
<p>Your error doesn't seem to have much to do with Mac. While it's not informative, it looks like the problem is that Helm can't find the chart version 8.6.16: it doesn't exist in the Bitnami repo and the version is not updated in your local <code>Chart.yaml</code>. Here's what I did to replicate it:</p> <ol> <li>Cloned the repo.</li> <li>Changed the version in <code>requirements.yaml</code> from <code>*</code> to <code>8.6.16</code>.</li> <li>Ran <code>helm install . --generate-name</code></li> </ol> <p>Got this error:</p> <pre><code>Error: YAML parse error on iriusrisk/charts/postgresql/templates/.__helpers.tpl: error converting YAML to JSON: yaml: control characters are not allowed </code></pre> <p>If your error has the same origins, here's what you can do to fix it (provided you start from a clean clone of your repo):</p> <ol> <li>Delete the archive for <code>8.6.12</code>.</li> <li>Unpack the archive for <code>8.6.16</code> and delete it as well. You will now have a <code>charts/postgresql</code> directory.</li> <li>Go to <code>charts/postgresql/Chart.yaml</code> and update the version there to <code>8.6.16</code>.</li> <li>Go to <code>requirements.yaml</code> and change the version to <code>8.6.16</code>. You can also remove/comment the <code>repository</code> line as you're using the local chart.</li> <li>Delete <code>requirements.lock</code>.</li> <li>Run <code>helm install . &lt;your name or --generate-name&gt;</code></li> </ol> <p>You should now have <code>8.6.16</code> installed in your minikube cluster.</p> <p>Tested using minikube 1.9.0 on macOS 10.15.4 (19E266) with Helm 3.1.2.</p>
unclenorton
<p>I am trying to install Minikube on a GCP VM. I am running into an issue where the OS is complaining that VT-X/AMD-v needs to be enabled. Are there any specific instructions for setting this up on GCP?</p>
cyberbeast
<p><a href="https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances" rel="nofollow noreferrer">Nested Virtualization</a> is supported on GCP and I can confirm the documentation I've linked is up to date and workable.</p> <p>Quoting the 3 basic points here that you need:</p> <ul> <li>A supported OS <ul> <li>CentOS 7 with kernel version 3.10</li> <li>Debian 9 with kernel version 4.9</li> <li>Debian 8 with kernel version 3.16</li> <li>RHEL 7 with kernel version 3.10</li> <li>SLES 12.2 with kernel version 4.4</li> <li>SLES 12.1 with kernel version 3.12</li> <li>Ubuntu 16.04 LTS with kernel version 4.4</li> <li>Ubuntu 14.04 LTS with kernel version 3.13</li> </ul></li> <li>Create an <strong>image</strong> using the special licence <code>https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx</code> (this is offered at no additional cost; it simply signals GCE that you want the feature enabled on instances using this image) <ul> <li>Create is using an already existing <strong>disk</strong> (for example): <code>gcloud compute images create nested-vm-image --source-disk disk1 --source-disk-zone us-central1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"</code> (You will have to create disk1 yourself, for example by starting an instance from an OS image, and deleting the instance afterwards while keeping the boot disk)</li> <li>Create it using an already existing <strong>image</strong> with (for example): <code>gcloud compute images create nested-vm-image --source-image=debian-10-buster-v20200326 --source-image-project=debian-cloud --licenses="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"</code></li> </ul></li> <li>Create an <strong>instance</strong> from a nested virtualization enabled image. Something like: <code>gcloud compute instances create example-nested-vm --zone us-central1-b --image nested-vm-image</code> . Keep in mind that you need to pick a zone that has at least Haswell CPUs.</li> </ul> <p>SSH into the new instance and verify that the feature is enabled by running <code>grep vmx /proc/cpuinfo</code>. If you get any output it means that the feature is enabled successfully.</p>
Janos Lenart
<p>Given I have created a ConfigMap with a file like that :</p> <pre><code>VARIABLE1=foo VARIABLE2=bar </code></pre> <p>Is there a way to access those values in Kubernetes or does it have to be in the YAML format?</p>
ZedTuX
<p>Let's say you have a file called <code>z</code> with the contents above. You have two options to make that into a ConfigMap.</p> <h2>Option 1 (--from-file)</h2> <pre><code>$ kubectl create cm cm1 --from-file=z </code></pre> <p>This will result in an object like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: cm1 data: z: | VARIABLE1=foo VARIABLE2=bar </code></pre> <p>There is no direct way to project a single value from this ConfigMap as it contains just one blob. However you can, from a shell used in <code>command</code> of a container source that blob (if you project it as a file) and then use the resulting environment variables.</p> <h2>Option 2 (--from-env-file)</h2> <pre><code>$ kubectl create cm cm2 --from-env-file=z </code></pre> <p>This will result in an object like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: cm2 data: VARIABLE1: foo VARIABLE2: bar </code></pre> <p>As you can see the different variables became separate key-value pairs in this case.</p> <p>There are many more examples in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">reference documentation</a></p>
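<p>To illustrate the difference in consumption, the key-value form (cm2) can be injected straight into a container's environment with <code>envFrom</code>; the pod name and image below are placeholders:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $VARIABLE1 $VARIABLE2; sleep 3600"]
    envFrom:
    - configMapRef:
        name: cm2
</code></pre>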
Janos Lenart
<p>I am setting up a Kubernetes cluster on Google using the Google Kubernetes Engine. I have created the cluster with auto-scaling enabled on my nodepool. <a href="https://i.stack.imgur.com/g2Tu9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/g2Tu9.png" alt="nodepool_setup"></a></p> <p>As far as I understand this should be enough for the cluster to spin up extra nodes if needed.</p> <p>But when I run some load on my cluster, the HPA is activated and wants to spin up some extra instances but can't deploy them due to 'insufficient cpu'. At this point I expected the auto-scaling of the cluster to kick into action but it doesn't seem to scale up. I did however see this: <a href="https://i.stack.imgur.com/1mZzM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1mZzM.png" alt="error"></a> So the node that is wanting to be created (I guess thanks to the auto-scaler?) can't be created with following message: <strong>Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region europe-west1.</strong></p> <p>I also didn't touch the auto-scaling on the instance group, so when running <strong>gcloud compute instance-groups managed list</strong>, it shows as 'autoscaled: no'</p> <p>So any help getting this autoscaling to work would be appreciated.</p> <p>TL;DR I guess the reason it isn't working is: Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region europe-west1, but I don't know how I can fix it.</p>
darkownage
<p>You really have debugged it yourself already. You need to edit the <a href="https://console.cloud.google.com/iam-admin/quotas?usage=USED" rel="noreferrer">Quotas on the GCP Console</a>. Make sure you select the correct project. Increase all that are low: probably addresses and CPUs in the zone. This process is semi automated only, so you might need to wait a bit and possibly pay a deposit.</p>
Janos Lenart
<p>I'm trying to install minikube according to <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/" rel="nofollow noreferrer">this</a> manual. First I had <a href="https://github.com/kubernetes/minikube/issues/2755" rel="nofollow noreferrer">this</a> bug, so I downgraded minikube to version 0.25.2. Now I'm facing this error:</p> <pre><code>mac:~ username$ minikube start --vm-driver=xhyve --loglevel=0 Starting local Kubernetes v1.9.4 cluster... Starting VM... Getting VM IP address... Kubernetes version downgrade is not supported. Using version: v1.10.0 Moving files into cluster... E0504 19:09:14.812623 10018 start.go:234] Error updating cluster: Error running scp command: sudo scp -t /usr/local/bin output: scp: /usr/local/bin/localkube: No space left on device : Process exited with status 1 </code></pre> <p>My root directory has 100GB free; what am I missing? </p>
deez
<p>The default disk size for minikube is 2000MB.</p> <p>Set a new default size for minikube with</p> <pre><code>minikube config set disk-size </code></pre> <p>e.g.</p> <pre><code>minikube config set disk-size 8000 </code></pre> <p>for 8 GB (8000 MB)</p> <p>Then delete your minikube with</p> <pre><code>minikube delete </code></pre> <p>and then start a fresh instance with </p> <pre><code>minikube start </code></pre>
Ganesh Krishnan
<p>I use <strong>kubeadm</strong> to launch cluster on <strong>AWS</strong>. I can successfully create a load balancer on <strong>AWS</strong> by using <strong>kubectl</strong>, but the load balancer is not registered with any EC2 instances. That causes problem that the service cannot be accessed from public. </p> <p>From the observation, when the ELB is created, it cannot find any healthy instances under all subnets. I am pretty sure I tag all my instances correctly. </p> <p><strong>Updated</strong>: I am reading the log from <strong>k8s-controller-manager</strong>, it shows my node does not have ProviderID set. And according to <a href="https://github.com/kubernetes/kubernetes/blob/82c986ecbcdf99a87cd12a7e2cf64f90057b9acd/pkg/cloudprovider/providers/aws/aws_loadbalancer.go#L1477" rel="noreferrer">Github</a> comment, ELB will ignore nodes where instance ID cannot be determined from provider. Could this cause the issue? How Should I set the providerID? </p> <h2>load balancer configuration</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: load-balancer annotations: service.beta.kubernetes.io/aws-load-balancer-type: "elb" spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 - name: https port: 443 protocol: TCP targetPort: 443 selector: app: replica type: LoadBalancer </code></pre> <h2>deployment configuration</h2> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: replica-deployment labels: app: replica spec: replicas: 1 selector: matchLabels: app: replica template: metadata: labels: app: replica spec: containers: - name: web image: web imagePullPolicy: IfNotPresent ports: - containerPort: 80 - containerPort: 443 command: ["/bin/bash"] args: ["-c", "script_to_start_server.sh"] </code></pre> <h2>node output <code>status</code> section</h2> <pre><code>status: addresses: - address: 172.31.35.209 type: InternalIP - address: k8s type: Hostname allocatable: cpu: "4" ephemeral-storage: "119850776788" hugepages-1Gi: "0" hugepages-2Mi: "0" memory: 16328856Ki pods: "110" capacity: cpu: "4" ephemeral-storage: 130046416Ki hugepages-1Gi: "0" hugepages-2Mi: "0" memory: 16431256Ki pods: "110" conditions: - lastHeartbeatTime: 2018-07-12T04:01:54Z lastTransitionTime: 2018-07-11T15:45:06Z message: kubelet has sufficient disk space available reason: KubeletHasSufficientDisk status: "False" type: OutOfDisk - lastHeartbeatTime: 2018-07-12T04:01:54Z lastTransitionTime: 2018-07-11T15:45:06Z message: kubelet has sufficient memory available reason: KubeletHasSufficientMemory status: "False" type: MemoryPressure - lastHeartbeatTime: 2018-07-12T04:01:54Z lastTransitionTime: 2018-07-11T15:45:06Z message: kubelet has no disk pressure reason: KubeletHasNoDiskPressure status: "False" type: DiskPressure - lastHeartbeatTime: 2018-07-12T04:01:54Z lastTransitionTime: 2018-07-11T15:45:06Z message: kubelet has sufficient PID available reason: KubeletHasSufficientPID status: "False" type: PIDPressure - lastHeartbeatTime: 2018-07-12T04:01:54Z lastTransitionTime: 2018-07-11T15:45:06Z message: kubelet is posting ready status. AppArmor enabled reason: KubeletReady status: "True" type: Ready </code></pre> <p>How can I fix the issue?</p> <p>Thanks!</p>
jiashenC
<p>In my case the issue was that the worker nodes were not getting the providerID assigned properly.</p> <p>I managed to patch the node like this: <code>kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'</code></p> <p>to add the ProviderID. And then when I deployed the service, the ELB got created, the node group got added, and end to end it worked. This is not a straightforward answer, but until I find a better solution let it remain here.</p>
user373480
<p>Our users are allowed to access Kubernetes clusters only from the management station; there is no possibility to access the API directly from their laptops/workstations.</p> <p>Every user possesses a kubeconfig with the relevant secrets belonging to that particular user. As the kubeconfig also contains the token used to authenticate against the Kubernetes API, it is not possible to store the kubeconfig "as is" on the management station file system.</p> <p>Is there any way to provide the token/kubeconfig to kubectl, e.g. via STDIN, without exposing it to other users (e.g. the admin of the management station) on the file system?</p>
Sl4dy
<p>You could use bash process substitution to pass the entire <code>kubeconfig</code> to <code>kubectl</code> without saving it to a filesystem.</p> <p>Something like this works for CI systems:</p> <ol> <li>Base64-encode your <code>kubeconfig</code> and store it securely</li> </ol> <pre><code>export KUBECONFIG_DATA=$(cat kubeconfig | base64 -w0) </code></pre> <ol start="2"> <li>Use process substitution to Base64-decode and pass it directly to <code>kubectl</code>:</li> </ol> <pre><code>kubectl --kubeconfig &lt;(echo $KUBECONFIG_DATA | base64 --decode) ... </code></pre>
czak
<p>I have a cluster and set up kubelet on a node (name is <code>myNode</code>) with the <code>static</code> CPU Manager Policy. So I've started kubelet with <code>--cpu-manager-policy=static</code> (to set the static policy) and <code>--reserved-cpus=1</code> (to make sure kubelet has one core to run on exclusively) as explained <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy" rel="nofollow noreferrer">here</a>.</p> <p>Checking <code>/var/lib/kubelet/cpu_manager_state</code> it gives me</p> <pre class="lang-bash prettyprint-override"><code>cat /var/lib/kubelet/cpu_manager_state {&quot;policyName&quot;:&quot;static&quot;,&quot;defaultCpuSet&quot;:&quot;0-3&quot;,&quot;checksum&quot;:611748604} </code></pre> <p>which should be fine. I then start a pod with the following pod spec</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: wl labels: app: wl spec: containers: - name: wl image: docker.io/polinux/stress:latest imagePullPolicy: IfNotPresent command: [&quot;/bin/sh&quot;,&quot;-c&quot;] args: [&quot;echo 'workload' &amp;&amp; stress --cpu 4&quot;] resources: requests: cpu: 1 limits: cpu: 1 nodeName: myNode </code></pre> <p>and start it. It get's scheduled on the desired node &quot;myNode&quot;. I then check for the processes with</p> <pre class="lang-bash prettyprint-override"><code>ps aux | grep stress root 2966141 0.2 0.0 780 4 ? Ss 10:54 0:00 stress --cpu 4 root 2966154 27.1 0.0 780 36 ? R 10:54 0:02 stress --cpu 4 root 2966155 26.7 0.0 780 36 ? R 10:54 0:02 stress --cpu 4 root 2966156 28.6 0.0 780 36 ? R 10:54 0:02 stress --cpu 4 root 2966157 27.3 0.0 780 36 ? R 10:54 0:02 stress --cpu 4 </code></pre> <p>and then which CPUs they are running on with</p> <pre class="lang-bash prettyprint-override"><code>ps -o pid,psr,comm -p 2966154 2966155 2966156 2966157 PID PSR COMMAND 2966154 0 stress 2966155 1 stress 2966156 2 stress 2966157 3 stress </code></pre> <p>It looks like there are 4 processes running, but all of them on different CPUs. I would have expected that the Pod fails to run since it's allowed to only run on one core while the <code>stress --cpu 4</code> wants to start 4 threads on 4 CPUs. With the default CPU Manager Policy, this would be the expected behavior, but I've configured the static one.</p> <p>Any hint what the problem could be?</p>
Wolfson
<p>You also need to provide memory request and limit in order to qualify for the <em>Guaranteed</em> tier and exclusive cores:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: wl labels: app: wl spec: containers: - name: wl image: docker.io/polinux/stress:latest imagePullPolicy: IfNotPresent command: [&quot;/bin/sh&quot;,&quot;-c&quot;] args: [&quot;echo 'workload' &amp;&amp; stress --cpu 4&quot;] resources: requests: cpu: &quot;1&quot; memory: &quot;200Mi&quot; limits: cpu: &quot;1&quot; memory: &quot;200Mi&quot; nodeName: myNode </code></pre> <p>Verify the Pod by <code>kubectl describe pod wl</code></p>
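<p>To double-check that the static policy actually granted an exclusive core, you can look at the CPU manager checkpoint on the node again after the Pod is running:</p> <pre><code>cat /var/lib/kubelet/cpu_manager_state
# the container should now have its own entry with a dedicated cpuset,
# and defaultCpuSet should shrink accordingly (e.g. "0,2-3" instead of "0-3")
</code></pre>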
Janos Lenart
<p>I would like to implement functionality (or even better reuse existing libraries/APIs!) that would intercept a kubectl command to create an object and perform some pre-creation validation tasks on it before allowing kubectl command to proceed.</p> <p>e.g. check various values in the yaml against external DB for example check a label conforms to the internal naming convention and so on..</p> <p>Is there an accepted pattern or existing tools etc? Any guidance appreciated</p>
user1843591
<p>The way to do this is by creating a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">ValidatingAdmissionWebhook</a>. It's not for the faint of heart and even a brief example would be an overkill as a SO answer. A few pointers to start:</p> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook</a></p> <p><a href="https://banzaicloud.com/blog/k8s-admission-webhooks/" rel="nofollow noreferrer">https://banzaicloud.com/blog/k8s-admission-webhooks/</a></p> <p><a href="https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/" rel="nofollow noreferrer">https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/</a></p> <p>I hope this helps :-)</p>
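<p>Just for orientation (the real work is in the webhook server you have to write and deploy yourself), the registration object you eventually create is shaped roughly like this; every name, namespace, path and the caBundle below are placeholders:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: naming-convention-check
webhooks:
- name: validate.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["deployments"]
  clientConfig:
    service:
      namespace: default
      name: my-validator
      path: /validate
    caBundle: BASE64_ENCODED_CA_CERT   # placeholder
</code></pre>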
Janos Lenart
<p>I'm running the theia code-editor on my EKS cluster and the image's default user is theia on which I grant read and write permissions on /home/project. However, when I mount that volume /home/project on my EFS and try to read or write on /home/project it returns permission denied I tried using initContainer but still the same problem:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: atouati spec: replicas: 1 selector: matchLabels: app: atouati template: metadata: labels: app: atouati spec: initContainers: - name: take-data-dir-ownership image: alpine:3 command: - chown - -R - 1001:1001 - /home/project:cached volumeMounts: - name: project-volume mountPath: /home/project:cached containers: - name: theia image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest' ports: - containerPort: 3000 volumeMounts: - name: project-volume mountPath: &quot;/home/project:cached&quot; volumes: - name: project-volume persistentVolumeClaim: claimName: local-storage-pvc --- apiVersion: v1 kind: Service metadata: name: atouati spec: type: ClusterIP selector: app: atouati ports: - protocol: TCP port: 80 targetPort: 3000 </code></pre> <p>When I do ls -l on /home/project</p> <pre class="lang-sh prettyprint-override"><code>drwxr-xr-x 2 theia theia 6 Aug 21 17:33 project </code></pre> <p>On the efs directory :</p> <pre class="lang-sh prettyprint-override"><code>drwxr-xr-x 4 root root 6144 Aug 21 17:32 </code></pre>
touati ahmed
<p>You can instead set the <code>securityContext</code> in your pod spec to run the Pods as uid/gid 1001.</p> <p>For example</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: atouati spec: replicas: 1 selector: matchLabels: app: atouati template: metadata: labels: app: atouati spec: securityContext: runAsUser: 1001 runAsGroup: 1001 fsGroup: 1001 containers: - name: theia image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest' ports: - containerPort: 3000 volumeMounts: - name: project-volume mountPath: &quot;/home/project:cached&quot; volumes: - name: project-volume persistentVolumeClaim: claimName: local-storage-pvc </code></pre> <p>Have you <code>kubectl exec</code>d into the container to confirm that that's the uid/gid that you need to use based on the apparent ownership?</p>
OregonTrail
<p>Pretty basic question. We have an existing swarm and I want to start migrating to Kubernetes. Can I run both using the same docker hosts?</p>
Wjdavis5
<p>See the official documentation for <em>Docker for Mac</em> at <a href="https://docs.docker.com/docker-for-mac/kubernetes/" rel="nofollow noreferrer">https://docs.docker.com/docker-for-mac/kubernetes/</a> stating:</p> <blockquote> <p>When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.</p> </blockquote> <p>So: yes, both should be able to run in parallel.</p> <p>If you're using Docker on Linux you won't have the convenient tools available like in Docker for Mac/Windows, but both orchestrators should still be able to run in parallel without further issues. On system level, details like e.g. ports on a network interface are still shared resources, so they cannot be bound by different orchestrators.</p>
gesellix
<p>I have a small company network with the following services/servers:</p> <ul> <li>Jenkins</li> <li>Stash (Atlassian)</li> <li>Confluence (Atlassian)</li> <li>LDAP</li> <li>Owncloud</li> <li>zabbix (monitoring)</li> <li>puppet</li> <li>and some Java web apps</li> </ul> <p>all running in separate kvm(libvirt)-vms in separate virtual-subnets on 2 machines (1 internal, 1 hetzner-rootserver) with shorewall in between. I'm thinking about switching to Docker.</p> <p>But I have two questions:</p> <ul> <li>How can I achieve network security between docker containers (i.e. I want to prevent owncloud from accessing any host in the network except ldap-hosts-sslport)</li> <li>Just by using docker-linking? If yes: does docker really allow access only to linked containers, and no others?</li> <li>By using kubernetes?</li> <li>By adding multiple bridging-network-interfaces for each container?</li> <li>Would you switch all my infra-services/-servers to docker, or would you use a hybrid solution with just the owncloud and the java-web-apps on docker?</li> </ul>
stefa ng
<p>Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post <a href="http://blog.docker.com/2015/11/docker-multi-host-networking-ga/" rel="nofollow">http://blog.docker.com/2015/11/docker-multi-host-networking-ga/</a> (see the sketch at the end of this answer).</p> <p>They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried).</p> <p>With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pod and service concepts. That's fine, but might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or alternatives) later, so the first step should be to learn how you can wrap your services in Docker containers.</p> <p>You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better whether Docker makes sense for your setup and whether it should be applied to the other services.</p> <p>I personally tend to wrap everything in Docker, but only due to one reason: keeping the host clean. If you get to the point where everything runs in Docker you'll have much more freedom to choose where a service can run and you can move containers to other hosts much more easily.</p> <p>You should also explore the Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: <a href="https://hub.docker.com/r/atlassian/stash/" rel="nofollow">https://hub.docker.com/r/atlassian/stash/</a></p> <p>If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at <a href="https://github.com/jfrazelle/dockerfiles" rel="nofollow">https://github.com/jfrazelle/dockerfiles</a> - you'll find a bunch of good examples there.</p>
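<p>As a rough sketch of what the multi-host networking mentioned at the top looks like in practice — assuming the daemons are already configured with a key-value store as the blog post describes, and with network/container names that are just examples:</p> <pre class="lang-sh prettyprint-override"><code># on any host in the cluster: create an overlay network
docker network create --driver overlay my-multi-host-net

# containers started on different hosts can then join the same network by name
docker run -d --net=my-multi-host-net --name=owncloud owncloud
</code></pre>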
gesellix
<p>We have configured the HPA to use 2 metrics:</p> <ol> <li>CPU Utilization</li> <li>App specific custom metrics</li> </ol> <p>When testing, we observed the scaling happening, but the calculation of the number of replicas is not very clear. I am not able to locate any documentation on this.</p> <p><strong>Questions:</strong></p> <ol> <li>Can someone point to documentation or code on the calculation part?</li> <li>Is it a good practice to use multiple metrics for scaling?</li> </ol> <p>Thanks in Advance!</p>
arunk2
<p>From <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work</a></p> <blockquote> <p>If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen. If any of those metrics cannot be converted into a desired replica count (e.g. due to an error fetching the metrics from the metrics APIs), scaling is skipped.</p> <p>Finally, just before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the <code>--horizontal-pod-autoscaler-downscale-stabilization-window</code> flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values</p> </blockquote>
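<p>For each metric, the desired replica count is computed with the formula from that same documentation page; the numbers below are just an illustrative example:</p> <pre><code>desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]

# Example with currentReplicas = 4:
#   CPU:           current 80%, target 50%  -&gt; ceil(4 * 80/50)   = 7
#   custom metric: current 200, target 100  -&gt; ceil(4 * 200/100) = 8
# The HPA takes the larger of the two, so it scales to 8 replicas.
</code></pre>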
Janos Lenart
<p>If I have this under volumes</p> <pre><code>name: nfslocation
nfs:
  server: 10.1.1.3
  path: /vol/vol104/ostntfs0/folder/folder2
</code></pre> <p>and I want to move it to a patch file, how do I do that?</p> <pre><code>-op: replace
...
</code></pre> <p>I am not clear on the format.</p> <p>Something like</p> <pre><code>-op: replace
 path: /spec/template/spec/volumes/0/...
</code></pre> <p>We use kustomization...</p>
archcutbank
<pre><code>- op: replace path: /spec/template/spec/volumes/4/nfs/server value: 10.1.1.3 - op: replace path: /spec/template/spec/volumes/4/nfs/path value: /vol/vol104/ostntfs0/folder/folder2 </code></pre> <p>This worked out for me...</p>
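<p>For reference, the patch file also has to be wired into <code>kustomization.yaml</code>. Assuming the patch above is saved as <code>nfs-patch.yaml</code> and targets a Deployment called <code>my-deployment</code> (both names are placeholders for your own), something like this should work:</p> <pre class="lang-yaml prettyprint-override"><code>patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-deployment
    path: nfs-patch.yaml
</code></pre>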
archcutbank
<p>How can I pull <code>docker.pkg.github.com</code> Docker images from within a Kubernetes cluster?</p> <p>Currently, the Github Docker registry requires authentication even for packages from public Github repositories.</p>
Vojtech Vitek - golang.cz
<ol> <li>Create a new Github Personal Access Token with <code>read:packages</code> scope at <a href="https://github.com/settings/tokens/new" rel="noreferrer">https://github.com/settings/tokens/new</a>.</li> <li><p>Base-64 encode <code>&lt;your-github-username&gt;:&lt;TOKEN&gt;</code>, i.e.:</p> <pre><code>$ echo -n VojtechVitek:4eee0faaab222ab333aa444aeee0eee7ccc555b7 | base64
&lt;AUTH&gt;
</code></pre> <p><em>Note: Make sure not to encode a newline character at the end of the string.</em></p></li> <li><p>Create a kubernetes.io/dockerconfigjson secret</p> <p>A) Create the secret manually:</p> <pre><code>$ echo '{"auths":{"docker.pkg.github.com":{"auth":"&lt;AUTH&gt;"}}}' | kubectl create secret generic dockerconfigjson-github-com --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/dev/stdin
</code></pre> <p>B) Or, create a .yml file that can be used with <code>kubectl apply -f</code>:</p> <pre><code>kind: Secret
type: kubernetes.io/dockerconfigjson
apiVersion: v1
metadata:
  name: dockerconfigjson-github-com
stringData:
  .dockerconfigjson: '{"auths":{"docker.pkg.github.com":{"auth":"&lt;AUTH&gt;"}}}'
</code></pre> <p><em>Note for GitOps: I strongly recommend not storing the above file in plain text in your git repository. Hydrate the value in your CD pipeline or encrypt/seal the file with tools like <a href="https://github.com/mozilla/sops" rel="noreferrer">https://github.com/mozilla/sops</a> or <a href="https://github.com/bitnami-labs/sealed-secrets" rel="noreferrer">https://github.com/bitnami-labs/sealed-secrets</a>.</em></p></li> <li><p>Now, you can reference the above secret from your pod's spec definition via the <code>imagePullSecrets</code> field:</p> <pre><code>spec:
  containers:
  - name: your-container-name
    image: docker.pkg.github.com/&lt;ORG&gt;/&lt;REPO&gt;/&lt;PKG&gt;:&lt;TAG&gt;
  imagePullSecrets:
  - name: dockerconfigjson-github-com
</code></pre></li> </ol>
Vojtech Vitek - golang.cz
<p>The timestamps are not in the timezone I expect when I use <code>kubectl logs &lt;mypod&gt; --timestamps</code> to get the pod logs.</p> <p>Current output:</p> <pre class="lang-bash prettyprint-override"><code>2022-06-15T07:31:41.826543867Z 2022/06/15 15:31:41 [info] Start grpc server listen 58212 port.
2022-06-15T07:31:41.826568525Z 2022/06/15 15:31:41 [info] Start http server listen 10000 port.
</code></pre> <p>Expected output:</p> <pre class="lang-bash prettyprint-override"><code>2022-06-15T15:31:41+0800 2022/06/15 15:31:41 [info] Start grpc server listen 58212 port.
2022-06-15T15:31:41+0800 2022/06/15 15:31:41 [info] Start http server listen 10000 port.
</code></pre> <p>What should I set up to achieve this output?</p>
Notscientific Farmer
<p><code>kubectl</code> does not support this directly at the time of writing, and it is tricky to do in a portable way. On Linux, something like this would work:</p> <pre><code>kubectl logs --timestamps mypod | while read timestamp line; do \
  echo &quot;$(env TZ=&quot;EST&quot; date -d &quot;$timestamp&quot; '+%Y-%m-%dT%H:%M:%S.%N%:z') $line&quot;; done
</code></pre> <p>You will get an output like this:</p> <pre><code>2022-06-08T14:13:41.847615539-05:00 INFO [06-08|19:13:41.847] Starting Geth on Ethereum mainnet...
</code></pre>
Janos Lenart
<p>I'm new to k8s, but I know that, as a k8s requirement, every Pod should be reachable from any other Pod. However, this is not happening in my setup: I can't ping from within a Pod another Pod in another Node. </p> <p><strong>Here is my setup:</strong></p> <p>I have one master node (<code>sauron</code>), and three workers (<code>gothmog</code>, <code>angmar</code>, <code>khamul</code>). I have installed the <code>weave</code> network via:</p> <pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" </code></pre> <p>Here's the output of <code>kubectl get pods -n kube-system -o wide</code> </p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES coredns-5644d7b6d9-bd5qn 1/1 Running 1 59d 10.38.0.2 angmar &lt;none&gt; &lt;none&gt; etcd-sauron 1/1 Running 44 145d 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; kube-apiserver-sauron 1/1 Running 82 145d 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; kube-controller-manager-sauron 1/1 Running 393 145d 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; kube-proxy-p97vw 1/1 Running 1 134d 192.168.202.235 angmar &lt;none&gt; &lt;none&gt; kube-proxy-pxpjm 1/1 Running 5 141d 192.168.201.209 gothmog &lt;none&gt; &lt;none&gt; kube-proxy-rfvcv 1/1 Running 8 145d 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; kube-proxy-w6p74 1/1 Running 2 141d 192.168.201.213 khamul &lt;none&gt; &lt;none&gt; kube-scheduler-sauron 1/1 Running 371 145d 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; weave-net-9sk7r 2/2 Running 0 16h 192.168.202.235 angmar &lt;none&gt; &lt;none&gt; weave-net-khl69 2/2 Running 0 16h 192.168.201.207 sauron &lt;none&gt; &lt;none&gt; weave-net-rsntg 2/2 Running 0 16h 192.168.201.213 khamul &lt;none&gt; &lt;none&gt; weave-net-xk2w4 2/2 Running 0 16h 192.168.201.209 gothmog &lt;none&gt; &lt;none&gt; </code></pre> <p>Here's my deployment yaml file content:</p> <pre><code>kind: Deployment metadata: name: my-deployment spec: replicas: 3 selector: matchLabels: app: my-deployment template: metadata: labels: app: my-deployment spec: containers: - name: my-image image: my-image:latest command: ["/bin/bash", "-c", "/opt/tools/bin/myapp"] imagePullPolicy: IfNotPresent ports: - containerPort: 15113 volumeMounts: - mountPath: /tmp name: tempdir imagePullSecrets: - name: registrypullsecret volumes: - name: tempdir emptyDir: {} </code></pre> <p>After applying the deployment via <code>kubectl apply -f mydeployment.yaml</code>, I verified that the pods started. But just can't ping anything outside their own internal (pod) IP address.</p> <pre><code># kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES my-deployment-77bbb7579c-4cnsk 1/1 Running 0 110s 10.38.0.0 angmar &lt;none&gt; &lt;none&gt; my-deployment-77bbb7579c-llm2x 1/1 Running 0 110s 10.44.0.2 khamul &lt;none&gt; &lt;none&gt; my-deployment-77bbb7579c-wbbmv 1/1 Running 0 110s 10.32.0.2 gothmog &lt;none&gt; &lt;none&gt; </code></pre> <p>As if not being able to ping wasn't enough, the pod <code>my-deployment-77bbb7579c-4cnsk</code> running in <code>angmar</code> has an IP <code>10.38.0.0</code>, which I find too odd... 
why is it like this?</p> <p>Also, each of the containers has an <code>/etc/resolv.conf</code> with <code>nameserver 10.96.0.10</code> in it, which is not reachable either from within any of the containers/pods.</p> <p>What should I do to be able to ping 10.44.0.2 (the pod running in <code>khamul</code>) from, let's say, the pod in <code>gothmog</code> (10.32.0.2)?</p> <p><strong>Update 1:</strong></p> <pre><code># kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME angmar Ready &lt;none&gt; 134d v1.16.3 192.168.202.235 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1 gothmog Ready &lt;none&gt; 142d v1.16.2 192.168.201.209 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1 khamul Ready &lt;none&gt; 142d v1.16.2 192.168.201.213 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1 sauron Ready master 146d v1.16.2 192.168.201.207 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://1.13.1 </code></pre> <p>Some for the errors output of the weave pod at each node are: <strong><code>sauron</code> (master):</strong></p> <pre><code>INFO: 2020/04/08 21:52:31.042120 -&gt;[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c: 57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) INFO: 2020/04/08 21:52:33.675287 -&gt;[192.168.201.209:6783] error during connection attempt: dial tcp :0-&gt;192.168.201.209:6783: connect: connection refused INFO: 2020/04/08 21:52:34.992875 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&amp;flag_docker-version=none&amp;flag_kernel-version=3.10.0-957.10.1.el7.x 86_64&amp;flag_kubernetes-cluster-size=3&amp;flag_kubernetes-cluster-uid=428158f7-f097-4627-9dc0-56f5d77a1b3e&amp;flag_kubernetes-version=v1.16.3&amp;flag_network=fastdp&amp;os=linux&amp;signature=TQKdZQISNAlRStpfj1W vj%2BHWIBhqTt9XQ2czf6xSYNA%3D&amp;version=2.6.2: dial tcp: i/o timeout INFO: 2020/04/08 21:52:49.640011 -&gt;[192.168.201.209:6783] error during connection attempt: dial tcp :0-&gt;192.168.201.209:6783: connect: connection refused INFO: 2020/04/08 21:52:53.202321 -&gt;[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c: 57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) </code></pre> <p><strong><code>khamul</code> (worker):</strong></p> <pre><code>INFO: 2020/04/09 08:05:52.101683 -&gt;[192.168.201.209:49220|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [[663/1858]c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) INFO: 2020/04/09 08:06:46.642090 -&gt;[192.168.201.209:6783|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: no working forwarders to 22:eb:02:7c:57:6a(gothmog) INFO: 2020/04/09 08:08:40.131015 -&gt;[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c: 57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) INFO: 2020/04/09 08:09:39.378853 Error checking version: Get 
https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&amp;flag_docker-version=none&amp;flag_kernel-version=3.10.0-957.10.1.el7.x 86_64&amp;flag_kubernetes-cluster-size=3&amp;flag_kubernetes-cluster-uid=428158f7-f097-4627-9dc0-56f5d77a1b3e&amp;flag_kubernetes-version=v1.16.3&amp;flag_network=fastdp&amp;os=linux&amp;signature=Oarh7uve3VP8qo%2BlV R6lukCi40hprasXxlwmmBYd5eI%3D&amp;version=2.6.2: dial tcp: i/o timeout INFO: 2020/04/09 08:09:48.873936 -&gt;[192.168.201.209:6783|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7c :57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) INFO: 2020/04/09 08:11:18.666790 -&gt;[192.168.201.209:45456|22:eb:02:7c:57:6a(gothmog)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [22:eb:02:7 c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) </code></pre> <p><strong><code>gothmog</code> (worker):</strong></p> <pre><code>INFO: 2020/04/09 16:50:08.818956 -&gt;[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) INFO: 2020/04/09 16:52:52.751021 -&gt;[192.168.201.213:54822|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) INFO: 2020/04/09 16:53:18.934143 -&gt;[192.168.201.207:34423|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: no working forwarders to fe:5a:2a:52:86:22(sauron) INFO: 2020/04/09 16:53:49.773876 -&gt;[192.168.201.213:6783|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) INFO: 2020/04/09 16:53:57.784587 -&gt;[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52:86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) </code></pre> <p><strong><code>angmar</code> (worker):</strong></p> <pre><code>INFO: 2020/04/09 16:01:46.081118 -&gt;[192.168.201.207:51620|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52 :86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) INFO: 2020/04/09 16:01:50.166722 -&gt;[192.168.201.207:6783|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52: 86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) INFO: 2020/04/09 16:06:48.277791 -&gt;[192.168.201.213:34950|e2:f6:ed:71:63:cb(khamul)]: connection shutting down due to error: read tcp 192.168.202.235:6783-&gt;192.168.201.213:34950: read: connect ion reset by peer INFO: 2020/04/09 16:07:13.270137 -&gt;[192.168.201.207:58071|fe:5a:2a:52:86:22(sauron)]: connection shutting down due to error: IP allocation was seeded by different peers (received: [fe:5a:2a:52 :86:22(sauron)], ours: [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)]) </code></pre> <p><strong>Update 2:</strong> All of my-deployment pods (independently of where they are 
running) contain this exact same <code>/etc/resolv.conf</code> file:</p> <pre><code>nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local testnet.ssd.com options ndots:5 </code></pre> <p>Thank you!</p>
Daniel
<p>Solved the issue by entering <em>each</em> worker node and doing the following:</p> <pre><code>rm /var/lib/weave/weave-netdata.db reboot </code></pre> <p><strong>Explanation:</strong></p> <p>My weave log files showed the excerpt:</p> <pre><code>INFO: 2020/04/08 21:52:31.042120-&gt;[192.168.202.235:6783|fe:da:ea:36:b0:ea(angmar)]: connection shutting down due to error: IP allocation was seeded by different peers (received [22:eb:02:7c:57:6a(gothmog) e2:f6:ed:71:63:cb(khamul)], ours: [fe:5a:2a:52:86:22(sauron)]) </code></pre> <p>The weave log output above is obtained by doing the following</p> <pre><code>kubectl logs -n kube-system &lt;a-weave-pod-id&gt; weave | grep -i error </code></pre> <p>For reference, see <a href="https://github.com/weaveworks/weave/blob/master/site/tasks/ipam/troubleshooting-ipam.md#seeded-by-different-peers" rel="noreferrer">here</a>.</p> <p>Thanks to everyone that chimed in, and especial thanks to @kitt for providing the answer.</p>
Daniel
<p>I have a Kubernetes cluster running Calico as the overlay and NetworkPolicy implementation configured for IP-in-IP encapsulation and I am trying to expose a simple nginx application using the following Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx namespace: default spec: type: LoadBalancer ports: - port: 80 targetPort: 80 selector: app: nginx </code></pre> <p>I am trying to write a NetworkPolicy that only allows connections via the load balancer. On a cluster without an overlay, this can be achieved by allowing connections from the CIDR used to allocate IPs to the worker instances themselves - this allows a connection to hit the Service's NodePort on a particular worker and be forwarded to one of the containers behind the Service via IPTables rules. However, when using Calico configured for IP-in-IP, connections made via the NodePort use Calico's IP-in-IP tunnel IP address as the source address for cross node communication, as shown by the <code>ipv4IPIPTunnelAddr</code> field on the Calico Node object <a href="https://docs.projectcalico.org/v3.0/reference/calicoctl/resources/node" rel="nofollow noreferrer">here</a> (I deduced this by observing the source IP of connections to the nginx application made via the load balancer). Therefore, my NetworkPolicy needs to allow such connections.</p> <p>My question is how can I allow these types of connections without knowing the <code>ipv4IPIPTunnelAddr</code> values beforehand and without allowing connections from all Pods in the cluster (since the <code>ipv4IPIPTunnelAddr</code> values are drawn from the cluster's Pod CIDR range). If worker instances come up and die, the list of such IPs with surely change and I don't want my NetworkPolicy rules to depend on them.</p> <ul> <li>Calico version: 3.1.1</li> <li>Kubernetes version: 1.9.7</li> <li>Etcd version: 3.2.17</li> <li>Cloud provider: AWS</li> </ul>
dippynark
<p>I’m afraid we don’t have a simple way to match the tunnel IPs dynamically right now. If possible, the best solution would be to move away from IPIP; once you remove that overlay, everything gets a lot simpler.</p> <p>In case you’re wondering, we need to force the nodes to use the tunnel IP because, if you’re using IPIP, we assume that your network doesn’t allow direct pod-to-node return traffic (since the network won’t be expecting the pod IP, it may drop the packets).</p>
Fasaxc
<p>I have a kubernetes cluster and an nginx ingress. I have deployed an ingress to route traffic from a domain example.org to a specific container. Now, I am trying to block all requests which are not coming from a whitelisted IP range, so I annotated the ingress with <code>nginx.ingress.kubernetes.io/whitelist-source-range</code>. However, all traffic gets blocked, so I looked at the nginx logs and realized that nginx sees the internal node IP address instead of the requestor's public internet address.</p> <pre><code>2022/05/06 11:39:26 [error] 10719#10719: *44013470 access forbidden by rule, client: 172.5.5.84, server: example.org, request: &quot;GET /.svn/wc.db HTTP/1.1&quot;, host: &quot;example.org&quot;
</code></pre> <p>I am not sure what is actually wrong. When I remove the whitelist annotation, everything works as expected.</p>
mkn
<p>Okay, so this documentation fixed the issue <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip</a></p> <p>I had to change <code>externalTrafficPolicy: Cluster</code> to <code>externalTrafficPolicy: Local</code></p>
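<p>For reference, this is where the field lives on the Service object; the name, selector and ports below are placeholders rather than my actual manifest:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
</code></pre> <p>Note that with <code>Local</code>, nodes that don't run a matching pod won't forward traffic for this Service, which is the trade-off described in the linked documentation.</p>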
mkn