<p>I have installed the metric server on kubernetes, but it's not working; the logs show</p> <pre><code>unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:xxx: unable to fetch metrics from Kubelet ... (X.X): Get https:....: x509: cannot validate certificate for 1x.x. x509: certificate signed by unknown authority </code></pre> <p>I was able to get metrics after modifying the deployment yaml and adding</p> <pre><code> command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP </code></pre> <p>This now collects metrics, and <code>kubectl top node</code> returns results...</p> <p>but the logs still show</p> <pre><code> E1120 11:58:45.624974 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-6z6qz: no metrics known for pod E1120 11:58:45.625289 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-rzvfj: no metrics known for pod E1120 12:00:06.462505 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-1x.x.x.eu-west-1.compute.internal: unable to get CPU for container ...discarding data: missing cpu usage metric, unable to fully scrape metrics from source </code></pre> <p>So, two questions:</p> <p>1) All this works on minikube, but not on my dev cluster. Why would that be?</p> <p>2) In production I don't want to use <code>--kubelet-insecure-tls</code>, so can someone please explain why this issue is arising, or point me to some resource?</p>
<p>Kubeadm generates the kubelet certificate at <code>/var/lib/kubelet/pki</code> and those certificates (<code>kubelet.crt and kubelet.key</code>) are signed by a different CA from the one used to generate all the other certificates at <code>/etc/kubernetes/pki</code>.</p> <p>You need to regenerate kubelet certificates that are signed by your root CA (<code>/etc/kubernetes/pki/ca.crt</code>).</p> <p>You can use openssl or cfssl to generate the new certificates (I am using cfssl):</p> <pre><code>$ mkdir certs; cd certs $ cp /etc/kubernetes/pki/ca.crt ca.pem $ cp /etc/kubernetes/pki/ca.key ca-key.pem </code></pre> <p>Create a file <code>kubelet-csr.json</code>:</p> <pre><code>{ "CN": "kubernetes", "hosts": [ "127.0.0.1", "&lt;node_name&gt;", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [{ "C": "US", "ST": "NY", "L": "City", "O": "Org", "OU": "Unit" }] } </code></pre> <p>Create a <code>ca-config.json</code> file:</p> <pre><code>{ "signing": { "default": { "expiry": "8760h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "8760h" } } } } </code></pre> <p>Now generate the new certificates using the above files:</p> <pre><code>$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ --config=ca-config.json -profile=kubernetes \ kubelet-csr.json | cfssljson -bare kubelet </code></pre> <p>Replace the old certificates with the newly generated ones:</p> <pre><code>$ scp kubelet.pem &lt;nodeip&gt;:/var/lib/kubelet/pki/kubelet.crt $ scp kubelet-key.pem &lt;nodeip&gt;:/var/lib/kubelet/pki/kubelet.key </code></pre> <p>Now restart the kubelet so that the new certificates take effect on the node:</p> <pre><code>$ systemctl restart kubelet </code></pre> <p>Look at the following ticket for the context of the issue:</p> <p><a href="https://github.com/kubernetes-incubator/metrics-server/issues/146" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server/issues/146</a></p> <p>Hope this helps.</p>
<p>I am trying to deploy a Fabric network with five orgs (nearly 20 nodes, including CA, orderer and peer) in a k8s environment. The k8s cluster has three masters, so how many workers should I have? Is there any requirement on the ratio of masters to workers, e.g. does each master need at least one worker?</p>
<p>Kubernetes v1.12 supports clusters of up to 5000 nodes. More specifically, k8s supports configurations that meet all of the following criteria:</p> <ul> <li><p>No more than 5000 nodes</p> </li> <li><p>No more than 150000 total pods</p> </li> <li><p>No more than 300000 total containers</p> </li> <li><p>No more than 100 pods per node</p> </li> </ul> <p>Please check the official docs for setting up a large cluster <a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">here</a>.</p> <p>Hope this helps.</p>
<p>I have this config file</p> <pre><code>apiVersion: v1 clusters: - cluster: server: [REDACTED] // IP of my cluster name: staging contexts: - context: cluster: staging user: "" name: staging-api current-context: staging-api kind: Config preferences: {} users: [] </code></pre> <p>I run this command</p> <pre><code>kubectl config --kubeconfig=kube-config use-context staging-api </code></pre> <p>I get this message</p> <pre><code>Switched to context "staging-api". </code></pre> <p>I then run</p> <pre><code>kubectl get pods </code></pre> <p>and I get this message</p> <pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>As far as I can tell from the docs </p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></p> <p>I'm doing it right. Am I missing something?</p>
<p>Yes. Try the following steps to access the kubernetes cluster. These steps assume that you have your k8s certificates in /etc/kubernetes.</p> <p>You need to set the cluster name, kubeconfig, user and cert file in the following variables and then simply run these commands:</p> <pre><code>CLUSTER_NAME="kubernetes" KCONFIG=admin.conf KUSER="kubernetes-admin" KCERT=admin cd /etc/kubernetes/ $ kubectl config set-cluster ${CLUSTER_NAME} \ --certificate-authority=pki/ca.crt \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ --kubeconfig=${KCONFIG} $ kubectl config set-credentials kubernetes-admin \ --client-certificate=admin.crt \ --client-key=admin.key \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.conf $ kubectl config set-context ${KUSER}@${CLUSTER_NAME} \ --cluster=${CLUSTER_NAME} \ --user=${KUSER} \ --kubeconfig=${KCONFIG} $ kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG} $ kubectl config view --kubeconfig=${KCONFIG} </code></pre> <p>After this you will be able to access the cluster. Hope this helps.</p>
<p>When I deploy my golang service to any namespace but the <code>default</code> namespace, the service is unable to retrieve pods on any namespace. The same service deployed on the <code>default</code> namespace works perfectly, using the golang client-go api.</p> <p>Is this a security issue?</p> <p>Thanks.</p>
<p>This is a permission issue. Since you are using <code>rest.InClusterConfig(config)</code> to create the client, it uses the pod's service account as its credential. So check whether that service account has the permission to get pods in any namespace.</p> <blockquote> <p>If the service account in the pod is not defined, then it will use the <code>default</code> service account.</p> </blockquote> <p>If RBAC is enabled in your cluster, then check the <strong>role bindings</strong> in that namespace to find out whether your service account has the permission.</p> <pre><code># to see the list of role bindings in 'default' namespace kubectl get rolebindings --namespace default </code></pre> <p>To see a specific role binding:</p> <pre><code>kubectl get rolebindings ROLE-BINDING-NAME --namespace default -o yaml </code></pre> <p>You can also create a role and role binding to grant the permission (see the example below). To learn about RBAC roles and role bindings, see: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
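<p>As a rough sketch, a <code>Role</code> and <code>RoleBinding</code> granting a service account permission to read pods could look like the following (the names <code>pod-reader</code>, <code>my-app-sa</code> and the <code>default</code> namespace are placeholders, adjust them to your setup):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>To list pods across <em>all</em> namespaces, use a <code>ClusterRole</code> and <code>ClusterRoleBinding</code> instead of the namespaced variants.</p>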
<p>I am trying to setup cronjobs in my kubernetes cluster,I have micro service that import data from another api to my database. I want to run this command every 10 minutes. I have following cronjob manifest</p> <pre><code>apiVersion: v1 items: - apiVersion: batch/v1beta1 kind: CronJob metadata: labels: chart: cronjobs-0.1.0 name: cron-cronjob1 namespace: default spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: labels: app: cron cron: cronjob1 spec: containers: command: ["/usr/local/bin/php"] args: ["artisan bulk:import"] env: - name: DB_CONNECTION value: postgres - name: DB_HOST value: postgres - name: DB_PORT value: "5432" - name: DB_DATABASE value: xxx - name: DB_USERNAME value: xxx - name: DB_PASSWORD value: xxxx - name: APP_KEY value: xxxxx image: registry.xxxxx.com/xxxx:2ecb785-e927977 imagePullPolicy: IfNotPresent name: cronjob1 ports: - containerPort: 80 name: http protocol: TCP imagePullSecrets: - name: xxxxx restartPolicy: OnFailure terminationGracePeriodSeconds: 30 schedule: '* * * * *' successfulJobsHistoryLimit: 3 </code></pre> <p>I am getting following error when cronjob scheduler spin up a pod</p> <blockquote> <p>Could not open input file: artisan bulk:import</p> </blockquote> <p>How to resolve this?</p>
<p>Assuming the file <code>artisan</code> exists and <code>php</code> can execute it:</p> <pre><code>command: ["/usr/local/bin/php"] args: ["artisan", "bulk:import"] </code></pre> <p>This way two separate arguments are passed to php, instead of the single string <code>"artisan bulk:import"</code>, which php treats as the name of the file to execute.</p>
<p>We purchased a Komodo SSL certificate, which come in 5 files:</p> <p><a href="https://i.stack.imgur.com/OYuAK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYuAK.png" alt="enter image description here"></a></p> <p>I am looking for a guide for how to apply it on our Kubernetes Ingress.</p>
<p>As described in the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="noreferrer">documentation</a>, you need to create a secret with your cert:</p> <pre><code>apiVersion: v1 data: tls.crt: content_of_file_condohub_com_br.crt tls.key: content_of_file_HSSL-5beedef526b9e.key kind: Secret metadata: name: secret-tls namespace: default type: Opaque </code></pre> <p>and then update your ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: tls-example-ingress spec: tls: - hosts: - your.amazing.host.com secretName: secret-tls rules: - host: your.amazing.host.com http: paths: - path: / backend: serviceName: service1 servicePort: 80 </code></pre> <p>The Ingress will then use the certificate and key from the secret.</p>
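<p>Note that values under <code>data:</code> in a Secret must be base64-encoded. A simpler alternative is to let <code>kubectl</code> build the TLS secret directly from the certificate and key files; the file names below are just placeholders for the files in your Comodo bundle:</p> <pre><code>kubectl create secret tls secret-tls \
  --cert=condohub_com_br.crt \
  --key=HSSL-5beedef526b9e.key \
  --namespace=default
</code></pre> <p>If your CA requires an intermediate chain, the <code>--cert</code> file should contain the server certificate followed by the intermediate certificates.</p>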
<p>Does someone know if there is an attempt to integrate <a href="http://singularity.lbl.gov/docs-run" rel="noreferrer">Singularity</a> with Kubernetes? That would be awesome for everyone who wants to run an HPC program (e.g. in the cloud). My only other idea would be to use an Singularity run as entry point for Docker and run that one in Kubernetes.</p> <p>Edit: There is the plan to do an integration by the singularity team (<a href="https://groups.google.com/a/lbl.gov/forum/#!topic/singularity/tzpDGXot2YY" rel="noreferrer">post</a>).</p>
<p>In the <a href="https://groups.google.com/a/lbl.gov/forum/#!topic/singularity/tzpDGXot2YY" rel="noreferrer">thread mentioned</a> by the OP, Gregory Kurtzer (CEO, Sylabs Inc.) just added:</p> <blockquote> <p>Apologies that the thread got dropped, but our interest certainly has not changed. We have begun two projects which will help on this initiative:</p> <ol> <li><p>An OCI compatible interface (both CLI and library) to Singularity. This is a good path forward for community compliance, but it won't support features like cryptographically signed containers via SIF or encryption as they are not OCI compliant.</p> </li> <li><p>Because OCI doesn't support all of our features, we are also developing a Kubernetes CRI gRPC shim which will allow us to interface Singularity into Kubernetes at the same level as Docker and Podman. This will allow us to support all of our features under K8s.</p> </li> </ol> <p>Also, please note, that we have also prototyped and even demo'ed Singularity running under HashiCorp Nomad for services and AI workflows.</p> <p>The OCI, Kubernetes and the Nomad work in progress will be opened up in the coming weeks so stay tuned!</p> </blockquote> <p>In the meantime, a tool like <a href="https://github.com/dgruber/drmaa2os" rel="noreferrer"><code>dgruber/drmaa2os</code></a> does have support for <a href="https://github.com/dgruber/drmaa2os/blob/44887ddcabdc1c08b29d8981a7a0711317378afd/pkg/jobtracker/singularity/README.md" rel="noreferrer">Singularity Container</a>.</p>
<p>I am trying to install and run Minikube (or some kind of local Kubernetes) on an AWS EC2 instance of Windows 2016. I have seen multiple tutorials on how to do this with an Ubuntu instance, but wasn't sure if anyone has had success using nested VM's on EC2 Windows. Any guidance you can provide would be greatly appreciated!</p>
<p>EC2 instances don't support nested virtualization as some <a href="https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances" rel="nofollow noreferrer">GCP</a> or <a href="https://azure.microsoft.com/en-us/blog/nested-virtualization-in-azure/" rel="nofollow noreferrer">Azure</a> instances do. (As of this writing)</p> <p>Short answer, is that it won't work with regular instances. However, you can use a <a href="https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-instances-with-direct-access-to-hardware/" rel="nofollow noreferrer">bare metal instance</a> (i3.metal, and they are a bit costly). </p> <p>I expect that AWS will create more bare metal offerings in the future and at some point offer nested virtualization on other types of instances.</p>
<p>I'm new with containers and kubernetes. What I'm trying to do is to create a pod with access to a local directory.</p> <p>I have followed the directions from: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">configure persistent volume storage</a></p> <p>Created my Persistent Volume, Persistent Volume Claim and my pod. </p> <p>The problem is that the tomcat is not able to write on the shared directory</p> <p>This is the Persistent Volume:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: pv-webapp6 labels: type: local spec: storageClassName: manual capacity: storage: 3Gi accessModes: - ReadWriteOnce hostPath: path: "/opt/test_tomcat/app" </code></pre> <p>This is the Persistent Volume Claim:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-webapp6 spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p>This is the tomcat Pod I'm trying to create:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: webapp6 spec: containers: - image: tomcat:8.0 name: webapp6 ports: - containerPort: 8080 name: webapp6 volumeMounts: - mountPath: /usr/local/tomcat/webapps name: test-volume volumes: - name: test-volume persistentVolumeClaim: claimName: pvc-webapp6 </code></pre> <p>Its a bit obvious, but this the error on the pod.</p> <blockquote> <p>[root@testserver webapp6-test]# kubectl exec -it webapp6 -- /bin/bash <br/> root@webapp6:/usr/local/tomcat# mkdir /usr/local/tomcat/webapps/sample <br/> mkdir: cannot create directory ‘/usr/local/tomcat/webapps/sample’: Permission denied</p> </blockquote>
<p>The issue is in your PVC yaml file where you're not specifying the <code>storageClassName</code>. Hence the PV and PVC couldn't be bound to each other. Please replace the PVC yaml file with the following file:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-webapp6 spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>Now everything should work. Hope this helps.</p> <p>I quickly used your yaml to deploy the pod and everything is working fine at my end:</p> <pre><code>[root@Master admin]# kubectl exec -it webapp6 bash root@webapp6:/usr/local/tomcat# mkdir /usr/local/tomcat/webapps/sample root@webapp6:/usr/local/tomcat# touch /usr/local/tomcat/webapps/sample/a root@webapp6:/usr/local/tomcat# ls /usr/local/tomcat/webapps/sample/ a </code></pre> <p>Now when I look at the host, I can see the newly created <code>a</code> file:</p> <pre><code>[root@Master admin]# ls /opt/test_tomcat/app/sample/ a </code></pre> <p>So, at least the yaml files are working fine.</p>
<p>For the below files , ISTIO is showing output in the first v1 app only. If I change the version of the v1 the output changes. So the traffic is not moving to the other version at all.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: sampleweb namespace: default spec: hosts: - "web.xyz.com" gateways: - http-gateway http: - route: - destination: port: number: 8080 host: web subset: v1 weight: 30 - route: - destination: port: number: 8080 host: web subset: v2 weight: 30 - route: - destination: port: number: 8080 host: web subset: v3 weight: 40 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: samplewebdr namespace: default spec: host: web subsets: - name: v1 labels: app: web version: prod - name: v2 labels: app: web version: baseline - name: v3 labels: app: web version: canary trafficPolicy: tls: mode: ISTIO_MUTUAL </code></pre> <p>Can anyone please help on this?</p>
<p>Your problem is that you have created a <code>VirtualService</code> with 3 rules in it. The first rule, which has no specific match criteria, is therefore always the one that gets invoked. When you have multiple rules in a <code>VirtualService</code>, you need to be careful to order them properly, as described <a href="https://istio.io/docs/concepts/traffic-management/#precedence" rel="nofollow noreferrer">here</a>.</p> <p>That said, in your case, you really don't want multiple rules, but rather a single rule with multiple weighted destinations like this:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: sampleweb namespace: default spec: hosts: - "web.xyz.com" gateways: - http-gateway http: - route: - destination: port: number: 8080 host: web subset: v1 weight: 30 - destination: port: number: 8080 host: web subset: v2 weight: 30 - destination: port: number: 8080 host: web subset: v3 weight: 40 </code></pre> <p>Btw, although harmless, you don't need to include the <code>app: web</code> label in you <code>DestinationRule</code> subsets. You only need the labels that uniquely identify the difference between the subsets of the web service.</p>
<p>This is what I define in k8s.yml file:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myservice namespace: mynamespace labels: app: myservice annotations: service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" external-dns.alpha.kubernetes.io/hostname: "myservice." spec: selector: app: myservice type: LoadBalancer ports: - name: http port: 8080 targetPort: 8080 protocol: TCP </code></pre> <p>Running this command:</p> <pre><code>kubectl describe service myservice </code></pre> <p>gives me the "LoadBalancer Ingress" like this:</p> <blockquote> <p>Type: LoadBalancer IP:<br> 25.0.162.225 LoadBalancer Ingress: internal-a9716e......us-west-2.elb.amazonaws.com</p> </blockquote> <p>As I understand, the publishing type I'm using is "LoadBalancer" which helps me expose my Service to external IP address (refer <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a>). And the Ingress is a different thing which sits in front of the Services and I didn't define it in my yml file. (refer: <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a>) With the "LoadBalancer Ingress" I'm able to access my Service from outside the cluster, but I don't understand why it's called "LoadBalancer Ingress"? What does it have to do with Ingress? Or is it true that every load balancer is equipped with an Ingress for the Service exposing purpose?</p>
<p>Ingress is an abstract definition of what to expose and how. It usually refers to HTTP(S) traffic, but with some fiddling can also handle other modes/protocols.</p> <p>An Ingress Controller is a particular implementation that realizes the expectations defined in your Ingress using a specific piece of software, be it Nginx, Traefik or some other solution, potentially dedicated to a particular cloud provider.</p> <p>It will use <code>Service</code> objects as a means of finding which endpoints to use for the traffic that reaches it. It's of no consequence if this is a <code>headless</code>, <code>ClusterIP</code>, <code>NodePort</code> or <code>LoadBalancer</code> type of service.</p> <p>That said, a <code>LoadBalancer</code> type service exposes your service on a, surprise, load balancer, again usually provided by your cloud provider. It's a completely different way of exposing your service, as is the <code>NodePort</code> type.</p>
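<p>For illustration, a minimal <code>Ingress</code> object (using the <code>extensions/v1beta1</code> API current at the time) routing a host to a backend Service might look like this; the host name is a placeholder, while <code>myservice</code> and port 8080 are taken from the question:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 8080
</code></pre> <p>An Ingress controller watches such objects and configures itself accordingly, whereas a <code>LoadBalancer</code> Service bypasses that layer entirely and exposes the Service directly on a cloud load balancer.</p>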
<p>I have a <code>values.yaml</code> where I need to mention multiple ports like the following:</p> <pre><code>kafkaClientPort: - 32000 - 32001 - 32002 </code></pre> <p>In the yaml for the statefulset, I need to get the value using the ordinal number. So for <code>kf-0</code>, I need to put the first element of <code>kafkaClientPort</code>; for <code>kf-1</code>, the second element; and so on. I am trying the following:</p> <pre><code>args: - "KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$(MY_NODE_NAME):{{ index .Values.kafkaClientPort ${HOSTNAME##*-} }}" </code></pre> <p>But it is showing an error.</p> <p>Please advise on the best way to dynamically access a <code>values.yaml</code> value.</p>
<p>The trick here is that the Helm template doesn't know anything about the ordinal in your StatefulSet. If you look at the Kafka <a href="https://github.com/helm/charts/blob/e0874c51fa95c1f9fafc1f8fdd714891642caa51/incubator/kafka/values.yaml#L176" rel="nofollow noreferrer">Helm Chart</a>, you see that they use a base port <code>31090</code> and then add the ordinal number, but that substitution takes place 'after' the template is rendered. Something like this in your values: </p> <pre><code>"advertised.listener": |- PLAINTEXT://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID})) </code></pre> <p>and then in the template file, they <a href="https://github.com/helm/charts/blob/3cacbb20dfc067bb601393aca87c17df80cf4ee6/incubator/kafka/templates/statefulset.yaml#L203" rel="nofollow noreferrer">use a bash export under <code>command</code></a> with a <code>printf</code>, which is an alias for <code>fmt.Sprintf</code>. Something like this in your case:</p> <pre><code> command: - sh - -exc - | unset KAFKA_PORT &amp;&amp; \ export KAFKA_BROKER_ID=${HOSTNAME##*-} &amp;&amp; \ export "KAFKA_ADVERTISED_LISTENERS={{ printf "%s" $advertised.listener }} \\ ... </code></pre>
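<p>The key piece is the shell parameter expansion <code>${HOSTNAME##*-}</code>, which strips everything up to the last <code>-</code> in the pod's hostname and leaves only the ordinal. A quick illustration (the base port 32000 comes from the question's first list element):</p> <pre><code>HOSTNAME=kf-2
echo "${HOSTNAME##*-}"              # prints: 2
echo $((32000 + ${HOSTNAME##*-}))   # prints: 32002
</code></pre> <p>So instead of indexing <code>.Values.kafkaClientPort</code> at template time, you can define a base port in <code>values.yaml</code> and let the container's startup command add the ordinal at runtime.</p>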
<p>I want to create a secret for my kubernetes cluster. So I composed following <code>dummy-secret.yaml</code> file:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: dummy-secret type: Opaque data: API_KEY: bWVnYV9zZWNyZXRfa2V5 API_SECRET: cmVhbGx5X3NlY3JldF92YWx1ZTE= </code></pre> <p>When I run <code>kubectl create -f dummy-secret.yaml</code> I receive back following message:</p> <pre><code>Error from server (BadRequest): error when creating "dummy-secret.yaml": Secret in version "v1" cannot be handled as a Secret: v1.Secret: Data: decode base64: illegal base64 data at input byte 8, error found in #10 byte of ...|Q89_Hj1Aq","API_SECR|..., bigger context ...|sion":"v1","data":{"API_KEY":"af76fsdK_cQ89_Hj1Aq","API_SECRET":"bsdfmkwegwegwe"},"kind":"Secret","m|... </code></pre> <p>Not sure why it happens. </p> <p>As I understood, I need to encode all values under the <code>data</code> key in the yaml file. So I did base64 encoding, but kubernetes still doesn't handle the yaml secret file as I expect.</p> <p><strong>UPDATE:</strong></p> <p>I used this command to encode <code>data</code> values on my mac:</p> <pre><code>echo -n 'mega_secret_key' | openssl base64 </code></pre>
<p>I got the decoded values &quot;mega_secret_key&quot; and &quot;really_secret_value1&quot; from from your encoded data. Seems they are not encoded in right way. So, encode your data in right way:</p> <pre><code>$ echo &quot;mega_secret_key&quot; | base64 bWVnYV9zZWNyZXRfa2V5Cg== $ echo &quot;really_secret_value1&quot; | base64 cmVhbGx5X3NlY3JldF92YWx1ZTEK </code></pre> <p>Then check whether they are encoded properly:</p> <pre><code>$ echo &quot;bWVnYV9zZWNyZXRfa2V5Cg==&quot; | base64 -d mega_secret_key $ echo &quot;cmVhbGx5X3NlY3JldF92YWx1ZTEK&quot; | base64 -d really_secret_value1 </code></pre> <p>So they are ok. Now use them in your <code>dummy-secret.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: dummy-secret type: Opaque data: API_KEY: bWVnYV9zZWNyZXRfa2V5Cg== API_SECRET: cmVhbGx5X3NlY3JldF92YWx1ZTEK </code></pre> <p>And run <code>$ kubectl create -f dummy-secret.yaml</code>.</p> <hr /> <p><strong>UPDATE on 11-02-2022:</strong></p> <p>The newer versions of Kubernetes support the optional <code>stringData</code> property where one can provide the value against any key without decoding.</p> <blockquote> <p>All key-value pairs in the <code>stringData</code> field are internally merged into the <code>data</code> field. If a key appears in both the <code>data</code> and the <code>stringData</code> field, the value specified in the <code>stringData</code> field takes precedence.</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: dummy-secret type: Opaque stringData: API_KEY: mega_secret_key API_SECRET: really_secret_value1 </code></pre> <p><strong>UPDATE:</strong></p> <p>If you use <code>-n</code> flag while running <code>$ echo &quot;some_text&quot;</code>, it will trim the trailing <code>\n</code> (newline) from the string you are printing.</p> <pre class="lang-sh prettyprint-override"><code>$ echo &quot;some_text&quot; some_text $ echo -n &quot;some_text&quot; some_text⏎ </code></pre> <p>Just try it,</p> <pre class="lang-sh prettyprint-override"><code># first encode $ echo -n &quot;mega_secret_key&quot; | base64 bWVnYV9zZWNyZXRfa2V5 $ echo -n &quot;really_secret_value1&quot; | base64 cmVhbGx5X3NlY3JldF92YWx1ZTE= # then decode and check whether newline is stripped $ echo &quot;bWVnYV9zZWNyZXRfa2V5&quot; | base64 -d mega_secret_key⏎ $ echo &quot;cmVhbGx5X3NlY3JldF92YWx1ZTE=&quot; | base64 -d really_secret_value1⏎ </code></pre> <p>You can use these newly (without newline) decoded data in your secret instead. 
That also should fine.</p> <pre class="lang-sh prettyprint-override"><code>$ cat - &lt;&lt;-EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: dummy-secret type: Opaque data: API_KEY: bWVnYV9zZWNyZXRfa2V5 API_SECRET: cmVhbGx5X3NlY3JldF92YWx1ZTE= EOF secret/dummy-secret created </code></pre> <blockquote> <p><strong>At the time of update, my kubernetes version is,</strong></p> <pre class="lang-sh prettyprint-override"><code>Minor:&quot;17&quot;, GitVersion:&quot;v1.17.3&quot;, GitCommit:&quot;06ad960bfd03b39c8310aaf92d1e7c1 2ce618213&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-02-11T18:14:22Z&quot;, GoVersion:&quot;go1.13.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;l inux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17&quot;, GitVersion:&quot;v1.17.3&quot;, GitCommit:&quot;06ad960bfd03b39c8310aaf92d1e7c1 2ce618213&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-02-11T18:07:13Z&quot;, GoVersion:&quot;go1.13.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;l inux/amd64&quot;} ``` </code></pre> </blockquote>
<p>I'm trying to generate an SSL certificate with <code>certbot/certbot</code> docker container in kubernetes. I am using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer"><code>Job</code> controller</a> for this purpose which looks as the most suitable option. When I run the standalone option, I get the following error:</p> <blockquote> <p>Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching <a href="http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8" rel="noreferrer">http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8</a>: Timeout during connect (likely firewall problem)</p> </blockquote> <p>I've made sure that this isn't due to misconfigured DNS entries by running a simple nginx container, and it resolves properly. Following is my <code>Jobs</code> file:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: #labels: # app: certbot-generator name: certbot spec: template: metadata: labels: app: certbot-generate spec: volumes: - name: certs containers: - name: certbot image: certbot/certbot command: ["certbot"] #command: ["yes"] args: ["certonly", "--noninteractive", "--agree-tos", "--staging", "--standalone", "-d", "staging.ishankhare.com", "-m", "[email protected]"] volumeMounts: - name: certs mountPath: "/etc/letsencrypt/" #- name: certs #mountPath: "/opt/" ports: - containerPort: 80 - containerPort: 443 restartPolicy: "OnFailure" </code></pre> <p>and my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: certbot-lb labels: app: certbot-lb spec: type: LoadBalancer loadBalancerIP: 35.189.170.149 ports: - port: 80 name: "http" protocol: TCP - port: 443 name: "tls" protocol: TCP selector: app: certbot-generator </code></pre> <p>the full error message is something like this:</p> <pre><code>Saving debug log to /var/log/letsencrypt/letsencrypt.log Plugins selected: Authenticator standalone, Installer None Obtaining a new certificate Performing the following challenges: http-01 challenge for staging.ishankhare.com Waiting for verification... Cleaning up challenges Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem) IMPORTANT NOTES: - The following errors were reported by the server: Domain: staging.ishankhare.com Type: connection Detail: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem) To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided. - Your account credentials have been saved in your Certbot configuration directory at /etc/letsencrypt. You should make a secure backup of this folder now. 
This configuration directory will also contain certificates and private keys obtained by Certbot so making regular backups of this folder is ideal. </code></pre> <p>I've also tried running this as a simple <code>Pod</code> but to no help. Although I still feel running it as a <code>Job</code> to completion is the way to go.</p>
<p>First, be aware your <code>Job</code> definition is valid, but the <code>spec.template.metadata.labels.app: certbot-generate</code> value does <strong>not</strong> match with your <code>Service</code> definition <code>spec.selector.app: certbot-generator</code>: one is <code>certbot-generate</code>, the second is <code>certbot-generator</code>. So the pod run by the job controller is never added as an endpoint to the service.</p> <p>Adjust one or the other, but they have to match, and that might just work :)</p> <p>Although, I'm not sure using a <code>Service</code> with a selector targeting short-lived pods from a <code>Job</code> controller would work, neither with a simple <code>Pod</code> as you tested. The <code>certbot-randomId</code> pod created by the job (or whatever simple pod you create) takes about 15 seconds total to run/fail, and the HTTP validation challenge is triggered after just a few seconds of the pod life: it's not clear to me that would be enough time for kubernetes proxying to be already working between the service and the pod.</p> <p>We can safely assume that the <code>Service</code> is actually working because you mentioned that you tested DNS resolution, so you can easily ensure that's not a timing issue by adding a <code>sleep 10</code> (or more!) to give more time for the pod to be added as an endpoint to the service and being proxied appropriately <em>before</em> the HTTP challenge is triggered by certbot. Just change your <code>Job</code> command and args for those:</p> <pre><code>command: ["/bin/sh"] args: ["-c", "sleep 10 &amp;&amp; certbot certonly --noninteractive --agree-tos --staging --standalone -d staging.ishankhare.com -m [email protected]"] </code></pre> <p>And here too, that might just work :)</p> <hr> <p>That being said, I'd warmly recommend you to use <a href="https://cert-manager.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">cert-manager</a> which you can install easily through its <a href="https://github.com/helm/charts/tree/master/stable/cert-manager" rel="nofollow noreferrer">stable Helm chart</a>: the <code>Certificate</code> custom resource that it introduces will store your certificate in a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer"><code>Secret</code></a> which will make it straightforward to reuse from whatever K8s resource, and it takes care of renewal automatically so you can just forget about it all.</p>
<p>I looked for and couldn't find any answers to some basic questions about GKE:</p> <ul> <li>If it is managed k8s, does that mean the etcd used for storing resources is also fully managed?</li> <li>How are updates and backups of etcd handled?</li> <li>What are the limits? What if I have 50000 different resources, most coming from my CRDs?</li> </ul> <p>Do you know of any official resources that I can refer to and be sure it really works this way?</p>
<p>Yes, etcd is managed and all that it comes with it. There aren't very specific limits defined in the official <a href="https://cloud.google.com/kubernetes-engine/quotas" rel="nofollow noreferrer">docs</a> although <code>300,000 containers</code> should give you a pretty rough idea.</p> <p>If you have any specific needs, for example, hundreds of Deployments, or ConfigMaps, I would contact GCP support with your specific case.</p> <p>✌️</p>
<p>I'm looking to use Kubernetes DNS to make requests from pods to other pods. Everything is in my Kubernetes cluster.</p> <p>I would like to use an HTTP request from one web app to call another web app.</p> <p>For example, I would like to call ProductWebApp from DashboardWebApp.</p> <p>I found the kubernetes REST API:</p> <p>➜ ~ kubectl exec -it dashboard-57f598dd76-54s2x -- /bin/bash</p> <p>➜ ~ curl -X GET <a href="https://4B3449144A41F5488D670E69D41222D.sk1.us-east-1.eks.amazonaws.com/api/v1/namespaces/staging/services/product-app/proxy/api/product/5bf42b2ca5fc050616640dc6" rel="nofollow noreferrer">https://4B3449144A41F5488D670E69D41222D.sk1.us-east-1.eks.amazonaws.com/api/v1/namespaces/staging/services/product-app/proxy/api/product/5bf42b2ca5fc050616640dc6</a> { "kind": "Status", "apiVersion": "v1", "metadata": {</p> <p>}, "status": "Failure", "message": "services \"product-app\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"staging\"", "reason": "Forbidden", "details": { "name": "product-app", "kind": "services" }, "code": 403 }% </p> <p>I don't understand why it's blocked.</p> <p>I also found this URL:<br> ➜ ~ curl -XGET product-app.staging.svc.cluster.local/api/product/5bf42b2ca5fc050616640dc6</p> <p>But it also doesn't work.</p> <p>So what is the right way to make a call from a pod to a service?</p>
<p>For when <em>both</em> ProductWebApp and DashboardWebApp are running on the <em>same</em> Kubernetes cluster:</p> <p>Define a Service as described <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">here</a> for the app that you want to call (ProductWebApp) using <code>type: ClusterIP</code> service; configure the calling app (DashboardWebApp) with the service name as the URI to call.</p> <p>For example, assuming ProductWebApp is in a namespace named <code>staging</code>, define a service named <code>product-app</code> for the ProductWebApp deployment and then configure the DashboardWebApp to call ProductWebApp at this URI:</p> <pre><code>http://product-app.staging.svc.cluster.local/end/point/as/needed </code></pre> <p>Replace http with https if the ProductWebApp endpoint requires it. Notice that a Service name can be the same as the name of the Deployment for which the service is.</p> <p>This works when the Kubernetes cluster is running a DNS service (and most clusters do) - see this <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">link</a> and specifically the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="noreferrer">A records</a> section.</p>
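<p>As a sketch, such a <code>ClusterIP</code> Service for ProductWebApp could look like the following; the selector label and ports are assumptions, so use whatever labels and container port your ProductWebApp pods actually have:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: product-app
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>With this in place, <code>http://product-app.staging.svc.cluster.local/api/product/...</code> (or simply <code>http://product-app</code> from pods in the same namespace) resolves to the pods backing the service.</p>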
<p>I'd like to see the 'config' details as shown by the command of:</p> <pre><code>kubectl config view </code></pre> <p>However this shows the entire config details of all contexts, how can I filter it (or perhaps there is another command), to view the config details of the CURRENT context?</p>
<p><code>kubectl config view --minify</code> displays only the current context</p>
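<p>For example, you can pair it with <code>kubectl config current-context</code>, which prints just the name of the active context:</p> <pre><code># show only the portion of the kubeconfig used by the current context
kubectl config view --minify

# print just the name of the current context
kubectl config current-context
</code></pre>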
<p>I'm trying to inject an HTTP status 500 fault in the bookinfo example.</p> <p>I managed to inject a 500 error status when the traffic is coming from the Gateway with:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - bookinfo-gateway hosts: - '*' http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code>$ curl $(minikube ip):30890/api/v1/products fault filter abort </code></pre> <p>But, I fails to achieve this for traffic that is coming from other pods:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - mesh hosts: - productpage http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code># jump into a random pod $ kubectl exec -ti details-v1-dasa231 -- bash root@details $ curl productpage:9080/api/v1/products [{"descriptionHtml": ... &lt;- actual product list, I expect a http 500 </code></pre> <ul> <li>I tried using the FQDN for the host <code>productpage.svc.default.cluster.local</code> but I get the same behavior.</li> <li><p>I checked the proxy status with <code>istioctl proxy-status</code> everything is synced.</p></li> <li><p>I tested if the istio-proxy is injected into the pods, it is:</p></li> </ul> <p>Pods:</p> <pre><code>NAME READY STATUS RESTARTS AGE details-v1-6764bbc7f7-bm9zq 2/2 Running 0 4h productpage-v1-54b8b9f55-72hfb 2/2 Running 0 4h ratings-v1-7bc85949-cfpj2 2/2 Running 0 4h reviews-v1-fdbf674bb-5sk5x 2/2 Running 0 4h reviews-v2-5bdc5877d6-cb86k 2/2 Running 0 4h reviews-v3-dd846cc78-lzb5t 2/2 Running 0 4h </code></pre> <p>I'm completely stuck and not sure what to check next. I feel like I am missing something very obvious.</p> <p>I would really appreciate any help on this topic.</p>
<p>This should work, and it did when I tried it. My guess is that you have other conflicting route rules defined for the productpage service.</p>
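<p>One way to check for such conflicts is to list every <code>VirtualService</code> in the cluster and look for others that also route the <code>productpage</code> host:</p> <pre><code>kubectl get virtualservices --all-namespaces
kubectl get virtualservices -n default -o yaml
</code></pre> <p>The standard bookinfo routing samples create a <code>VirtualService</code> for <code>productpage</code> as well; Istio generally expects the routing rules for a given host to live in a single <code>VirtualService</code>, so a second one for the same host can shadow your fault-injection rule.</p>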
<p>I am not able to see any log output when deploying a very simple Pod:</p> <p>myconfig.yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: counter spec: containers: - name: count image: busybox args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] </code></pre> <p>then</p> <pre><code>kubectl apply -f myconfig.yaml </code></pre> <p>This was taken from this official tutorial: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes</a> </p> <p>The pod appears to be running fine:</p> <pre><code>kubectl describe pod counter Name: counter Namespace: default Node: ip-10-0-0-43.ec2.internal/10.0.0.43 Start Time: Tue, 20 Nov 2018 12:05:07 -0500 Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0... Status: Running IP: 10.0.0.81 Containers: count: Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303 Image: busybox Image ID: docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: /bin/sh -c i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done State: Running Started: Tue, 20 Nov 2018 12:05:08 -0500 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-r6tr6: Type: Secret (a volume populated by a Secret) SecretName: default-token-r6tr6 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal Normal SuccessfulMountVolume 16m kubelet, ip-10-0-0-43.ec2.internal MountVolume.SetUp succeeded for volume "default-token-r6tr6" Normal Pulling 16m kubelet, ip-10-0-0-43.ec2.internal pulling image "busybox" Normal Pulled 16m kubelet, ip-10-0-0-43.ec2.internal Successfully pulled image "busybox" Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container Normal Started 16m kubelet, ip-10-0-0-43.ec2.internal Started container </code></pre> <p>Nothing appears when running:</p> <pre><code>kubectl logs counter --follow=true </code></pre>
<p>The only thing I can think of that may be causing it to not output the logs is if you configured the <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">default logging driver</a> for Docker in your <code>/etc/docker/daemon.json</code> config file for the node where your pod is running:</p> <pre><code>{ "log-driver": "anything-but-json-file" } </code></pre> <p>That would essentially make Docker not output stdout/stderr logs for something like <code>kubectl logs &lt;podid&gt; -c &lt;containerid&gt;</code>. You can take a look at what's configured for the container of your pod on your node (<code>10.0.0.43</code>):</p> <pre><code>$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' &lt;container-id&gt; </code></pre>
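<p>A quick way to check which logging driver the Docker daemon on that node is using (in a standard setup it should be <code>json-file</code> for <code>kubectl logs</code> to work) is:</p> <pre><code>$ docker info --format '{{.LoggingDriver}}'
</code></pre> <p>If it prints anything other than <code>json-file</code>, adjust <code>/etc/docker/daemon.json</code> on that node and restart Docker.</p>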
<p>Can anybody let me know how can we access the service deployed on one pod via another pod in a kubernetes cluster?</p> <p><em>Example:</em></p> <p>There is a nginx service which is deployed on Node1 (having pod name as nginx-12345) and another service which is deployed on Node2 (having pod name as service-23456). Now if 'service' wants to communicate with 'nginx' for some reason, then how can we access 'nginx' inside the 'service-23456' pod?</p>
<p>There are various ways to access a service in kubernetes: you can expose your services through NodePort or LoadBalancer and access them from outside the cluster.</p> <p>See the official documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="noreferrer">how to access services</a>.</p> <p>The official Kubernetes documentation states:</p> <blockquote> <p>Some clusters may allow you to ssh to a node in the cluster. From there you may be able to access cluster services. This is a non-standard method, and will work on some clusters but not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.</p> </blockquote> <p>So accessing a service directly from another node depends on which type of Kubernetes cluster you're using.</p> <p>EDIT:</p> <p>Once the service is deployed in your cluster, you should be able to contact it using its name, and <code>Kube-DNS</code> will answer with the correct <code>ClusterIP</code> to speak to your final pods. ClusterIPs are governed by IPTables rules created by kube-proxy on Workers that NAT your request to the final container’s IP.</p> <p>The Kube-DNS naming convention is <code>service.namespace.svc.cluster-domain.tld</code> and the default cluster domain is <code>cluster.local</code>.</p> <p>For example, if you want to contact a service called <code>mysql</code> in the <code>db</code> namespace from any namespace, you can simply speak to <code>mysql.db.svc.cluster.local</code>.</p> <p>If this is not working, then there might be some issue with kube-dns in your cluster. Hope this helps.</p> <p>EDIT2: There are some known issues with DNS resolution in Ubuntu; the official Kubernetes documentation states:</p> <blockquote> <p>Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm 1.11 automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.</p> </blockquote>
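<p>You can verify DNS resolution from inside any pod that has <code>nslookup</code> or <code>curl</code> available; using the <code>mysql</code>/<code>db</code> example names from above:</p> <pre><code># resolve the service name from inside a pod
nslookup mysql.db.svc.cluster.local

# or call an HTTP service directly, e.g. the nginx service from the question in the default namespace
curl http://nginx.default.svc.cluster.local
</code></pre> <p>If the name does not resolve, the problem is with cluster DNS (kube-dns/CoreDNS) rather than with your Service definition.</p>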
<p>Is it possible to have an ELK stack setup, in a "monitoring" namespace in kubernetes, that has read permission across all the other namespaces so that I can still monitor all the pods?</p> <p>I'm just wondering if that would make it a little easier to manage, especially when it comes to accessing other namespaces, where we have restrictions.</p> <p>I know Prometheus allows this, but has anyone tried it with an ELK stack?</p>
<p>Yes, it can be done. The following is a step-by-step guide to setting up an EFK stack on kubernetes in a <code>logging</code> namespace:</p> <blockquote> <p><a href="https://blog.ptrk.io/how-to-deploy-an-efk-stack-to-kubernetes/" rel="nofollow noreferrer">https://blog.ptrk.io/how-to-deploy-an-efk-stack-to-kubernetes/</a></p> </blockquote> <p>I am sure we can do the same for Logstash also.</p>
<p>I am trying to install gitlab with helm on a kubernetes cluster which already have an ingress(cluster created by RKE). With gitlab, I want to deploy it into seperate namespace. For that, I ran the below command:</p> <pre><code>$ gitlab-config helm upgrade --install gitlab gitlab/gitlab \ --timeout 600 \ --set global.hosts.domain=asdsa.asdasd.net \ --set [email protected] \ --set global.edition=ce \ --namespace gitlab-ci \ --set gitlab.migrations.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce \ --set gitlab.sidekiq.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce \ --set gitlab.unicorn.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce \ --set gitlab.unicorn.workhorse.image=registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce \ --set gitlab.task-runner.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce </code></pre> <p>But the install fails while validating the domain with http01 test with cert-manager. For this, before running the above command, I've pointed my base domain to the existing Load Balancer in my cluster.</p> <p>Is there something different which needs to be done for successful http01 validation?</p> <p>Error:</p> <pre><code>Conditions: Last Transition Time: 2018-11-18T15:22:00Z Message: http-01 self check failed for domain "asdsa.asdasd.net" Reason: ValidateError Status: False Type: Ready </code></pre> <p>More information:</p> <p>The health checks for Load Balancer also keeps failing. So, even with using self-signed certificates, the installation is failing.</p> <p>When tried to ssh into one of the nodes and check return status, here's what I saw:</p> <pre><code>$ curl -v localhost:32030/healthz * Trying 127.0.0.1... * Connected to localhost (127.0.0.1) port 32030 (#0) &gt; GET /healthz HTTP/1.1 &gt; Host: localhost:32030 &gt; User-Agent: curl/7.47.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 503 Service Unavailable &lt; Content-Type: application/json &lt; Date: Mon, 19 Nov 2018 13:38:49 GMT &lt; Content-Length: 114 &lt; { "service": { "namespace": "gitlab-ci", "name": "gitlab-nginx-ingress-controller" }, "localEndpoints": 0 * Connection #0 to host localhost left intact } </code></pre> <p>And, when I checked ingress controller service, it was up and running:</p> <pre><code>gitlab-nginx-ingress-controller LoadBalancer 10.43.168.81 XXXXXXXXXXXXXX.us-east-2.elb.amazonaws.com 80:32006/TCP,443:31402/TCP,22:31858/TCP </code></pre>
<p>The issue was resolved here - <a href="https://gitlab.com/charts/gitlab/issues/939" rel="nofollow noreferrer">https://gitlab.com/charts/gitlab/issues/939</a></p> <p>Basically, the solution as mentioned in the thread is not formally documented because it needs confirmation.</p>
<p>I am trying to setup cronjobs in my kubernetes cluster,I have micro service that import data from another api to my database. I want to run this command every 10 minutes. I have following cronjob manifest</p> <pre><code>apiVersion: v1 items: - apiVersion: batch/v1beta1 kind: CronJob metadata: labels: chart: cronjobs-0.1.0 name: cron-cronjob1 namespace: default spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: labels: app: cron cron: cronjob1 spec: containers: command: ["/usr/local/bin/php"] args: ["artisan bulk:import"] env: - name: DB_CONNECTION value: postgres - name: DB_HOST value: postgres - name: DB_PORT value: "5432" - name: DB_DATABASE value: xxx - name: DB_USERNAME value: xxx - name: DB_PASSWORD value: xxxx - name: APP_KEY value: xxxxx image: registry.xxxxx.com/xxxx:2ecb785-e927977 imagePullPolicy: IfNotPresent name: cronjob1 ports: - containerPort: 80 name: http protocol: TCP imagePullSecrets: - name: xxxxx restartPolicy: OnFailure terminationGracePeriodSeconds: 30 schedule: '* * * * *' successfulJobsHistoryLimit: 3 </code></pre> <p>I am getting following error when cronjob scheduler spin up a pod</p> <blockquote> <p>Could not open input file: artisan bulk:import</p> </blockquote> <p>How to resolve this?</p>
<p>Here is the fix:</p> <pre><code> args: - "/bin/bash" - "-c" - "/var/www/html/artisan bulk:import" </code></pre>
<p>I'm trying to set up the Kubernetes master, by issuing:</p> <blockquote> <p>kubeadm init --pod-network-cidr=192.168.0.0/16</p> </blockquote> <ol> <li>followed by: <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network" rel="noreferrer">Installing a pod network add-on</a> (Calico)</li> <li>followed by: <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#master-isolation" rel="noreferrer">Master Isolation</a></li> </ol> <hr> <p>issue: <code>coredns</code> pods have <code>CrashLoopBackOff</code> or <code>Error</code> state:</p> <pre><code># kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-node-lflwx 2/2 Running 0 2d coredns-576cbf47c7-nm7gc 0/1 CrashLoopBackOff 69 2d coredns-576cbf47c7-nwcnx 0/1 CrashLoopBackOff 69 2d etcd-suey.nknwn.local 1/1 Running 0 2d kube-apiserver-suey.nknwn.local 1/1 Running 0 2d kube-controller-manager-suey.nknwn.local 1/1 Running 0 2d kube-proxy-xkgdr 1/1 Running 0 2d kube-scheduler-suey.nknwn.local 1/1 Running 0 2d # </code></pre> <p>I tried with <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-pods-have-crashloopbackoff-or-error-state" rel="noreferrer">Troubleshooting kubeadm - Kubernetes</a>, however my node isn't running <code>SELinux</code> and my Docker is up to date.</p> <pre><code># docker --version Docker version 18.06.1-ce, build e68fc7a # </code></pre> <p><code>kubectl</code>'s <code>describe</code>:</p> <pre><code># kubectl -n kube-system describe pod coredns-576cbf47c7-nwcnx Name: coredns-576cbf47c7-nwcnx Namespace: kube-system Priority: 0 PriorityClassName: &lt;none&gt; Node: suey.nknwn.local/192.168.86.81 Start Time: Sun, 28 Oct 2018 22:39:46 -0400 Labels: k8s-app=kube-dns pod-template-hash=576cbf47c7 Annotations: cni.projectcalico.org/podIP: 192.168.0.30/32 Status: Running IP: 192.168.0.30 Controlled By: ReplicaSet/coredns-576cbf47c7 Containers: coredns: Container ID: docker://ec65b8f40c38987961e9ed099dfa2e8bb35699a7f370a2cda0e0d522a0b05e79 Image: k8s.gcr.io/coredns:1.2.2 Image ID: docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a Ports: 53/UDP, 53/TCP, 9153/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: -conf /etc/coredns/Corefile State: Running Started: Wed, 31 Oct 2018 23:28:58 -0400 Last State: Terminated Reason: Error Exit Code: 137 Started: Wed, 31 Oct 2018 23:21:35 -0400 Finished: Wed, 31 Oct 2018 23:23:54 -0400 Ready: True Restart Count: 103 Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /etc/coredns from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-xvq8b (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: coredns Optional: false coredns-token-xvq8b: Type: Secret (a volume populated by a Secret) SecretName: coredns-token-xvq8b Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: CriticalAddonsOnly node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Killing 54m (x10 over 4h19m) kubelet, suey.nknwn.local Killing container with id docker://coredns:Container failed liveness probe.. 
Container will be killed and recreated. Warning Unhealthy 9m56s (x92 over 4h20m) kubelet, suey.nknwn.local Liveness probe failed: HTTP probe failed with statuscode: 503 Warning BackOff 5m4s (x173 over 4h10m) kubelet, suey.nknwn.local Back-off restarting failed container # kubectl -n kube-system describe pod coredns-576cbf47c7-nm7gc Name: coredns-576cbf47c7-nm7gc Namespace: kube-system Priority: 0 PriorityClassName: &lt;none&gt; Node: suey.nknwn.local/192.168.86.81 Start Time: Sun, 28 Oct 2018 22:39:46 -0400 Labels: k8s-app=kube-dns pod-template-hash=576cbf47c7 Annotations: cni.projectcalico.org/podIP: 192.168.0.31/32 Status: Running IP: 192.168.0.31 Controlled By: ReplicaSet/coredns-576cbf47c7 Containers: coredns: Container ID: docker://0f2db8d89a4c439763e7293698d6a027a109bf556b806d232093300952a84359 Image: k8s.gcr.io/coredns:1.2.2 Image ID: docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a Ports: 53/UDP, 53/TCP, 9153/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: -conf /etc/coredns/Corefile State: Running Started: Wed, 31 Oct 2018 23:29:11 -0400 Last State: Terminated Reason: Error Exit Code: 137 Started: Wed, 31 Oct 2018 23:21:58 -0400 Finished: Wed, 31 Oct 2018 23:24:08 -0400 Ready: True Restart Count: 102 Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /etc/coredns from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-xvq8b (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: coredns Optional: false coredns-token-xvq8b: Type: Secret (a volume populated by a Secret) SecretName: coredns-token-xvq8b Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: CriticalAddonsOnly node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Killing 44m (x12 over 4h18m) kubelet, suey.nknwn.local Killing container with id docker://coredns:Container failed liveness probe.. Container will be killed and recreated. 
Warning BackOff 4m58s (x170 over 4h9m) kubelet, suey.nknwn.local Back-off restarting failed container Warning Unhealthy 8s (x102 over 4h19m) kubelet, suey.nknwn.local Liveness probe failed: HTTP probe failed with statuscode: 503 # </code></pre> <p><code>kubectl</code>'s <code>log</code>:</p> <pre><code># kubectl -n kube-system logs -f coredns-576cbf47c7-nm7gc E1101 03:31:58.974836 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:31:58.974836 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:31:58.974857 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:32:29.975493 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:32:29.976732 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:32:29.977788 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:33:00.976164 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:33:00.977415 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:33:00.978332 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout 2018/11/01 03:33:08 [INFO] SIGTERM: Shutting down servers then terminating E1101 03:33:31.976864 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:33:31.978080 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1101 03:33:31.979156 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout # # kubectl -n kube-system log -f coredns-576cbf47c7-gqdgd .:53 2018/11/05 04:04:13 [INFO] CoreDNS-1.2.2 2018/11/05 04:04:13 [INFO] linux/amd64, 
go1.11, eb51e8b CoreDNS-1.2.2 linux/amd64, go1.11, eb51e8b 2018/11/05 04:04:13 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769 2018/11/05 04:04:19 [FATAL] plugin/loop: Seen "HINFO IN 3597544515206064936.6415437575707023337." more than twice, loop detected # kubectl -n kube-system log -f coredns-576cbf47c7-hhmws .:53 2018/11/05 04:04:18 [INFO] CoreDNS-1.2.2 2018/11/05 04:04:18 [INFO] linux/amd64, go1.11, eb51e8b CoreDNS-1.2.2 linux/amd64, go1.11, eb51e8b 2018/11/05 04:04:18 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769 2018/11/05 04:04:24 [FATAL] plugin/loop: Seen "HINFO IN 6900627972087569316.7905576541070882081." more than twice, loop detected # </code></pre> <p><code>describe</code> (<code>apiserver</code>):</p> <pre><code># kubectl -n kube-system describe pod kube-apiserver-suey.nknwn.local Name: kube-apiserver-suey.nknwn.local Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: suey.nknwn.local/192.168.87.20 Start Time: Fri, 02 Nov 2018 00:28:44 -0400 Labels: component=kube-apiserver tier=control-plane Annotations: kubernetes.io/config.hash: 2433a531afe72165364aace3b746ea4c kubernetes.io/config.mirror: 2433a531afe72165364aace3b746ea4c kubernetes.io/config.seen: 2018-11-02T00:28:43.795663261-04:00 kubernetes.io/config.source: file scheduler.alpha.kubernetes.io/critical-pod: Status: Running IP: 192.168.87.20 Containers: kube-apiserver: Container ID: docker://659456385a1a859f078d36f4d1b91db9143d228b3bc5b3947a09460a39ce41fc Image: k8s.gcr.io/kube-apiserver:v1.12.2 Image ID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:094929baf3a7681945d83a7654b3248e586b20506e28526121f50eb359cee44f Port: &lt;none&gt; Host Port: &lt;none&gt; Command: kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.87.20 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key State: Running Started: Sun, 04 Nov 2018 22:57:27 -0500 Last State: Terminated Reason: Completed Exit Code: 0 Started: Sun, 04 Nov 2018 20:12:06 -0500 Finished: Sun, 04 Nov 2018 22:55:24 -0500 Ready: True Restart Count: 2 Requests: cpu: 250m Liveness: http-get https://192.168.87.20:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 Environment: &lt;none&gt; Mounts: /etc/ca-certificates from etc-ca-certificates (ro) /etc/kubernetes/pki from k8s-certs (ro) /etc/ssl/certs 
from ca-certs (ro) /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro) /usr/share/ca-certificates from usr-share-ca-certificates (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: etc-ca-certificates: Type: HostPath (bare host directory volume) Path: /etc/ca-certificates HostPathType: DirectoryOrCreate k8s-certs: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/pki HostPathType: DirectoryOrCreate ca-certs: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs HostPathType: DirectoryOrCreate usr-share-ca-certificates: Type: HostPath (bare host directory volume) Path: /usr/share/ca-certificates HostPathType: DirectoryOrCreate usr-local-share-ca-certificates: Type: HostPath (bare host directory volume) Path: /usr/local/share/ca-certificates HostPathType: DirectoryOrCreate QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: :NoExecute Events: &lt;none&gt; # </code></pre> <p>syslog (host):</p> <blockquote> <p>Nov 4 22:59:36 suey kubelet[1234]: E1104 22:59:36.139538 1234 pod_workers.go:186] Error syncing pod d8146b7e-de57-11e8-a1e2-ec8eb57434c8 ("coredns-576cbf47c7-hhmws_kube-system(d8146b7e-de57-11e8-a1e2-ec8eb57434c8)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-576cbf47c7-hhmws_kube-system(d8146b7e-de57-11e8-a1e2-ec8eb57434c8)"</p> </blockquote> <p>Please advise.</p>
<p>This error</p>

<pre><code>[FATAL] plugin/loop: Seen "HINFO IN 6900627972087569316.7905576541070882081." more than twice, loop detected
</code></pre>

<p>is raised when CoreDNS detects a loop in the resolver configuration, and it is the intended behavior. You are hitting this issue:</p>

<p><a href="https://github.com/kubernetes/kubeadm/issues/1162" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/1162</a></p>

<p><a href="https://github.com/coredns/coredns/issues/2087" rel="noreferrer">https://github.com/coredns/coredns/issues/2087</a></p>

<p><strong>Hacky solution: Disable the CoreDNS loop detection</strong></p>

<p>Edit the CoreDNS configmap:</p>

<pre><code>kubectl -n kube-system edit configmap coredns
</code></pre>

<p>Remove or comment out the line with <code>loop</code>, save and exit.</p>

<p>Then remove the CoreDNS pods, so new ones can be created with the new config:</p>

<pre><code>kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>

<p>All should be fine after that.</p>

<p><strong>Preferred solution: Remove the loop in the DNS configuration</strong></p>

<p>First, check whether you are using <code>systemd-resolved</code>. If you are running Ubuntu 18.04, that is probably the case.</p>

<pre><code>systemctl list-unit-files | grep enabled | grep systemd-resolved
</code></pre>

<p>If it is, check which <code>resolv.conf</code> file your cluster is using as reference:</p>

<pre><code>ps auxww | grep kubelet
</code></pre>

<p>You might see a line like:</p>

<pre><code>/usr/bin/kubelet ... --resolv-conf=/run/systemd/resolve/resolv.conf
</code></pre>

<p>The important part is the <code>--resolv-conf</code> flag - it tells you whether the systemd-generated <code>resolv.conf</code> is being used or not.</p>

<p><strong>If it is the <code>resolv.conf</code> of <code>systemd</code>, do the following:</strong></p>

<p>Check the content of <code>/run/systemd/resolve/resolv.conf</code> to see if there is a record like:</p>

<pre><code>nameserver 127.0.0.1
</code></pre>

<p>If there is <code>127.0.0.1</code>, it is the one causing the loop.</p>

<p>To get rid of it, you should not edit that file directly, but check the other places from which it is generated.</p>

<p>Check all files under <code>/etc/systemd/network</code> and if you find a record like</p>

<pre><code>DNS=127.0.0.1
</code></pre>

<p>delete that record. Also check <code>/etc/systemd/resolved.conf</code> and do the same if needed. Make sure you have at least one or two DNS servers configured, such as</p>

<pre><code>DNS=1.1.1.1 1.0.0.1
</code></pre>

<p>After doing all that, restart the systemd services to put your changes into effect:</p>

<pre><code>systemctl restart systemd-networkd systemd-resolved
</code></pre>

<p>Then verify that <code>DNS=127.0.0.1</code> no longer appears in the <code>resolv.conf</code> file:</p>

<pre><code>cat /run/systemd/resolve/resolv.conf
</code></pre>

<p>Finally, trigger re-creation of the DNS pods:</p>

<pre><code>kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>

<p><strong>Summary:</strong> The solution involves getting rid of what looks like a DNS lookup loop from the host DNS configuration. Steps vary between different resolv.conf managers/implementations.</p>
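<p>As a further option on kubeadm-based clusters, you can point the kubelet at the real upstream <code>resolv.conf</code> instead of the systemd stub resolver. This is only a hedged sketch of that approach; the file location assumes the kubeadm packages (Debian/Ubuntu use <code>/etc/default/kubelet</code>, RPM-based systems use <code>/etc/sysconfig/kubelet</code>), so adapt it to your distribution:</p>

<pre><code># /etc/default/kubelet  (or /etc/sysconfig/kubelet on RPM-based systems)
KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf
</code></pre>

<pre><code># apply the change and recreate the DNS pods
systemctl daemon-reload &amp;&amp; systemctl restart kubelet
kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>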
<p>I am not able to see any log output when deploying a very simple Pod:</p> <p>myconfig.yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: counter spec: containers: - name: count image: busybox args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] </code></pre> <p>then</p> <pre><code>kubectl apply -f myconfig.yaml </code></pre> <p>This was taken from this official tutorial: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes</a> </p> <p>The pod appears to be running fine:</p> <pre><code>kubectl describe pod counter Name: counter Namespace: default Node: ip-10-0-0-43.ec2.internal/10.0.0.43 Start Time: Tue, 20 Nov 2018 12:05:07 -0500 Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0... Status: Running IP: 10.0.0.81 Containers: count: Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303 Image: busybox Image ID: docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: /bin/sh -c i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done State: Running Started: Tue, 20 Nov 2018 12:05:08 -0500 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-r6tr6: Type: Secret (a volume populated by a Secret) SecretName: default-token-r6tr6 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal Normal SuccessfulMountVolume 16m kubelet, ip-10-0-0-43.ec2.internal MountVolume.SetUp succeeded for volume "default-token-r6tr6" Normal Pulling 16m kubelet, ip-10-0-0-43.ec2.internal pulling image "busybox" Normal Pulled 16m kubelet, ip-10-0-0-43.ec2.internal Successfully pulled image "busybox" Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container Normal Started 16m kubelet, ip-10-0-0-43.ec2.internal Started container </code></pre> <p>Nothing appears when running:</p> <pre><code>kubectl logs counter --follow=true </code></pre>
<p>I found the issue. The AWS EKS getting-started tutorial (docs.aws.amazon.com/eks/latest/userguide/getting-started.html) references CloudFormation templates that do not open the security group ports required for <code>kubectl logs</code> to reach the worker nodes. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances), and things work now.</p>
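<p>If you would rather not open everything, the specific path that <code>kubectl logs</code> (and <code>kubectl exec</code>) needs is TCP 10250 from the control plane's security group to the worker nodes' security group, since the API server proxies those requests to the kubelet. Below is only a hedged sketch with the AWS CLI; the security group IDs are placeholders you would replace with your own:</p>

<pre><code># Placeholders: sg-NODES = worker node security group, sg-CONTROLPLANE = EKS control plane security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-NODES \
  --protocol tcp \
  --port 10250 \
  --source-group sg-CONTROLPLANE
</code></pre>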
<p>Connection between pods on the same cluster is failing.</p> <p>From what I understand, by default - the pods are exposed on the port specified in the yaml file. For example, I have configured my deployment file for redis as below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis labels: app: myapp spec: replicas: 1 template: metadata: labels: app: myapp spec: containers: - env: - name: REDIS_PASS value: '**None**' image: tutum/redis ports: - containerPort: 6379 name: redis restartPolicy: Always </code></pre> <p>Below is the deployment file for the pod where the container is trying to access redis:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: jks labels: app: myapp spec: replicas: 1 template: metadata: labels: app: myapp spec: imagePullSecrets: - name: myappsecret containers: - env: - name: JOBQUEUE value: vae_jobqueue - name: PORT value: "80" image: repo.url name: jks ports: - containerPort: 80 volumeMounts: - name: config-vol mountPath: /etc/sys0 volumes: - name: config-vol configMap: name: config restartPolicy: Always </code></pre> <p>I did not create any service yet. But is it required? The pod is going to be accessed by another pod which is part of the same helm chart. With this setup,there are errors in the second pod which tries to access redis:</p> <pre><code>2018-11-21T16:12:31.939Z - [33mwarn[39m: Error: Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379 at errnoException (dns.js:27:10) at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26) </code></pre> <p><em>How do I make sure that my pod is able to connect to the redis pod on port 6379?</em></p> <p>---- UPDATE ----</p> <p>This is how my charts look like now: </p> <pre><code># Source: mychartv2/templates/redis-service.yaml apiVersion: v1 kind: Service metadata: name: redis spec: selector: app: myapp-redis clusterIP: None ports: - name: redis port: 6379 targetPort: 6379 --- # Source: mychartv2/templates/redis-deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis labels: app: myapp-redis spec: replicas: 1 template: metadata: labels: app: myapp-redis spec: containers: - env: - name: REDIS_PASS value: '**None**' image: tutum/redis ports: - containerPort: 6379 name: redis restartPolicy: Always --- # Source: mychartv2/templates/jks-deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: jks labels: app: myapp-jks spec: replicas: 1 template: metadata: labels: app: myapp-jks spec: imagePullSecrets: - name: jkssecret containers: - env: - name: JOBQUEUE value: jks_jobqueue - name: PORT value: "80" image: repo.url name: jks ports: - containerPort: 80 volumeMounts: - name: config-vol mountPath: /etc/sys0 volumes: - name: config-vol configMap: name: jksconfig restartPolicy: Always </code></pre> <p><strong>Note</strong>: I am using minikube as my kubernetes cluster</p>
<p>You'd need a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer"><code>Service</code></a> to get access to the Redis pod. With your current resources <code>redis:6379</code> just does not exist; a Service with <code>metadata.name: redis</code> and the appropriate <code>spec.selector</code> would make it available.</p>

<p>Be aware that the 2 deployments you posted have the same <code>metadata.labels.app</code> value of <code>myapp</code>, so you'd have to change one to say <code>myapp-redis</code>, for example, so that the Service targets the right pods (the pods labeled <code>app: myapp-redis</code> in that example) and not the pods from your HTTP application.</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-redis
    spec:
      containers:
      - env:
        - name: REDIS_PASS
          value: '**None**'
        image: tutum/redis
        ports:
        - containerPort: 6379
          name: redis
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: myapp-redis
  ports:
  - protocol: TCP
    port: 6379
</code></pre>

<p>Also, you added the tag <code>kubernetes-helm</code> to your question, so if you are using Helm I'd highly recommend <a href="https://github.com/helm/charts/tree/master/stable/redis" rel="nofollow noreferrer">this stable chart</a>: just install it with <code>helm install stable/redis</code> and you'll be able to access your Redis master with <code>redis-master:6379</code> and any read-only slave with <code>redis-slave:6379</code>. You can avoid having slaves if you don't need/want them, just go through <a href="https://github.com/helm/charts/tree/master/stable/redis#configuration" rel="nofollow noreferrer">the configuration</a> to know how.</p>
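<p>Whichever route you take, once the Service exists you can sanity-check name resolution and endpoint wiring from inside the cluster. A minimal check, assuming the Service is called <code>redis</code>, using a throwaway busybox pod:</p>

<pre><code>kubectl get svc redis
kubectl get endpoints redis
# the ENDPOINTS column should list the Redis pod IP on port 6379
kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup redis
</code></pre>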
<p>I have created 2 services in Kubernetes with 2 internal load balancers in GCP. Things are working fine.</p>

<p>How can I point the 2 services to the same load balancer?</p>

<p>I have used the below YAML file, but the service is not working.</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sample-app
  labels:
    name: sample-app
    app: sample-app
spec:
  ports:
  - name: sampleapp
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    name: sample-app
    app: sample-app
  type: "LoadBalancer"
  loadBalancerIP: XX.XX.XX.XX
</code></pre>

<p>The loadBalancerIP field expects the actual load balancer IP.</p>

<p>Error creating load balancer (will retry): failed to ensure load balancer for service default/sampleapp: requested ip "XX.XX.XX.XX" is neither static nor assigned to the LB</p>
<p>I was able to create the NGINX ingress controller by Kubernetes using the below blogs.</p> <p><a href="http://rahmonov.me/posts/nginx-ingress-controller/" rel="nofollow noreferrer">http://rahmonov.me/posts/nginx-ingress-controller/</a>? <a href="https://imti.co/web-cluster-ingress/" rel="nofollow noreferrer">https://imti.co/web-cluster-ingress/</a></p> <p>And created an Ingress to point to my endpoints.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/rewrite-target: / name: cobalt-app namespace: default spec: rules: - http: paths: - backend: serviceName: sampleapp servicePort: 8080 path: /greeting - backend: serviceName: echoserver servicePort: 8080 path: /echo </code></pre>
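<p>To verify the setup end to end, you can check that the Ingress was admitted and then hit the controller from outside. A rough sketch, assuming the controller was installed into the <code>ingress-nginx</code> namespace as in the blog posts above (adjust the namespace, and use your controller's external IP or NodePort in place of CONTROLLER_IP):</p>

<pre><code>kubectl get ingress cobalt-app
kubectl -n ingress-nginx get svc
# then, replacing CONTROLLER_IP with the address shown above:
curl http://CONTROLLER_IP/greeting
curl http://CONTROLLER_IP/echo
</code></pre>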
<p>I am trying to move a currently docker based app to Kubernetes. My app inspects network traffic that passes through it, and because of that it needs an accessible External IP, and it needs to accept traffic on all ports, not just some.</p> <p>Right now, I am using docker with a macvlan network driver in order to attach docker containers to multiple interfaces and allow them to inspect traffic that way.</p> <p>After research, I've found that the only way to access pods in Kubernetes is using Services, but services only allow that through some specific ports, because it is mostly intended for "server" type applications, and not "forwarder"/"sniffer" type which is what I am looking for.</p> <p>Is Kubernetes a good fit for this type of application? Does it offer tools to cope with this problem?</p>
<blockquote>
  <p>Is Kubernetes a good fit for this type of application? Does it offer tools to cope with this problem?</p>
</blockquote>

<p>Whether it is a good fit is more of an opinion. The pods in Kubernetes have their own PodCidr that is not exposed to the outside world, and a sniffer doesn't quite fit into either a Service or a Job definition, which are the typical workloads in Kubernetes.</p>

<p>Having said that, it can be done if you use a custom <a href="https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan" rel="nofollow noreferrer">CNI plugin that supports macvlan</a>.</p>

<p>You can also use something like <a href="https://github.com/intel/multus-cni" rel="nofollow noreferrer">Multus</a>, which supports the macvlan plugin.</p>
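<p>For illustration, here is a rough sketch of what a Multus-based macvlan attachment could look like. This assumes Multus is already installed as the meta-CNI; the interface name <code>eth0</code>, the subnet, and the sniffer image are placeholders to adapt to your environment:</p>

<pre><code>apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-sniffer
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: sniffer
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-sniffer
spec:
  containers:
  - name: sniffer
    image: my-sniffer-image   # placeholder image
</code></pre>

<p>The pod then gets a second interface attached directly to the host network segment, in addition to its regular cluster interface.</p>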
<p>Just I am starting to learn Kubernetes. I've installed CentOS 7.5 with SELinux disabled kubectl, kubeadm and kubelet by Kubernetes YUM repository.</p> <p>However, when I want to start a <code>kubeadm init</code> command. I get this error message:</p> <pre><code>[init] using Kubernetes version: v1.12.2 [preflight] running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [51.75.201.75 127.0.0.1 ::1] [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [vps604805.ovh.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 51.75.201.75] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [certificates] Generated sa key and public key. 
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 26.003496 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster [markmaster] Marking the node vps604805.ovh.net as master by adding the label "node-role.kubernetes.io/master=''" [markmaster] Marking the node vps604805.ovh.net as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] error marking master: timed out waiting for the condition </code></pre> <p>According to Linux Foundation course, I don't need more command to execute to create my first start cluster into my VM.</p> <p>Wrong?</p> <p>Firewalld does have open ports into firewall. 6443/tcp and 10248-10252</p>
<p>I would recommend bootstrapping the Kubernetes cluster as guided in the official <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="noreferrer">documentation</a>. I've gone through the steps to build a cluster on the same CentOS version, <code>CentOS Linux release 7.5.1804 (Core)</code>, and will share them with you; hopefully they help you get rid of the issue during installation.</p>

<p>First wipe your current cluster installation:</p>

<pre><code># kubeadm reset -f &amp;&amp; rm -rf /etc/kubernetes/
</code></pre>

<p>Add the Kubernetes repo for the subsequent <code>kubeadm</code>, <code>kubelet</code>, <code>kubectl</code> installation:</p>

<pre><code># cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
</code></pre>

<p>Check whether <code>SELinux</code> is in permissive mode:</p>

<pre><code># getenforce
Permissive
</code></pre>

<p>Ensure <code>net.bridge.bridge-nf-call-iptables</code> is set to 1 in your sysctl:</p>

<pre><code># cat &lt;&lt;EOF &gt; /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
</code></pre>

<p>Install the required Kubernetes components and start the services:</p>

<pre><code># yum update &amp;&amp; yum upgrade &amp;&amp; yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes
# systemctl start docker kubelet &amp;&amp; systemctl enable docker kubelet
</code></pre>

<p>Deploy the cluster via <code>kubeadm</code>:</p>

<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>

<p>I prefer to install <code>Flannel</code> as the main <code>CNI</code> in my cluster. Since there are some prerequisites for a proper <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network" rel="noreferrer">Pod network</a> installation, I've passed the <code>--pod-network-cidr=10.244.0.0/16</code> flag to the <code>kubeadm init</code> command.</p>

<p>Create the Kubernetes home directory for your user and store the <code>config</code> file:</p>

<pre><code>$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>

<p>Install the Pod network; in my case it was <code>Flannel</code>:</p>

<p><code>$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</code></p>

<p>Finally check the Kubernetes core Pods status:</p>

<p><code>$ kubectl get pods --all-namespaces</code></p>

<pre><code>NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-4x7zq             1/1     Running   0          36m
kube-system   coredns-576cbf47c7-666jm             1/1     Running   0          36m
kube-system   etcd-centos-7-5                      1/1     Running   0          35m
kube-system   kube-apiserver-centos-7-5            1/1     Running   0          35m
kube-system   kube-controller-manager-centos-7-5   1/1     Running   0          35m
kube-system   kube-flannel-ds-amd64-2bmw9          1/1     Running   0          33m
kube-system   kube-proxy-pcgw8                     1/1     Running   0          36m
kube-system   kube-scheduler-centos-7-5            1/1     Running   0          35m
</code></pre>

<p>In case you still have any doubts, just leave a comment below this answer.</p>
<p>I'm seeing the "http" directive not allowed error from the logs. I have mounted "nginx-basic.conf" file in "conf.d" folder as a config mount in Kubernetes.</p> <p><strong>nginx-basic.conf-</strong></p> <pre><code>http { server { location / { proxy_pass 35.239.243.201:9200; proxy_redirect off; } } } </code></pre> <p>I'm not sure what is wrong with this. Could someone help me with pointing it out?</p>
<p>You probably have another <code>http</code> directive in a base <code>nginx.conf</code> that includes everything under <code>/etc/nginx/conf.d</code>, and the <code>http</code> context cannot be nested.</p>

<p>For example (<code>nginx.conf</code>):</p>

<pre><code>user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
</code></pre>

<p>You can try removing the <code>http</code> directive (note that <code>proxy_pass</code> also needs a scheme, e.g. <code>http://</code>):</p>

<pre><code>server {
    location / {
        proxy_pass http://35.239.243.201:9200;
        proxy_redirect off;
    }
}
</code></pre>
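<p>After updating the ConfigMap, you can ask nginx itself to validate the rendered configuration and reload it. A quick check from outside the pod (the pod name is a placeholder):</p>

<pre><code>kubectl exec -it NGINX_POD_NAME -- nginx -t
# if the syntax is OK, reload without restarting the pod
kubectl exec -it NGINX_POD_NAME -- nginx -s reload
</code></pre>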
<p>ASP.NET Core SPA with external config in wwwroot/config/config.json</p> <p>Contents of config.json:</p> <pre><code>{ "termsAndConditionsLink": "https://some-dev-url.with/legal/terms/" } </code></pre> <p>When the file is <strong>not being overwritten by a ConfigMap</strong>, it works fine and I am able to get the full content of file.</p> <pre><code>curl https://dev-app.com/config/config.json { "termsAndConditionsLink": "https://some-dev-url.with/legal/terms/" } </code></pre> <p>When ConfigMap data is mounted to this path with Volume, it is not returned entirely.</p> <pre><code>curl https://dev-app.com/config/config.json { "termsAndCon </code></pre> <p>The file is present in pod:</p> <pre><code>pwd /app/wwwroot/config ls -la total 12 drwxrwxrwx 3 root root 4096 Nov 20 08:48 . drwxr-xr-x 6 root root 4096 Nov 20 08:46 .. drwxr-xr-x 2 root root 4096 Nov 20 08:48 ..2018_11_20_08_48_02.390652870 lrwxrwxrwx 1 root root 31 Nov 20 08:48 ..data -&gt; ..2018_11_20_08_48_02.390652870 lrwxrwxrwx 1 root root 18 Nov 20 08:48 config.json -&gt; ..data/config.json cat config.json { "termsAndConditionsLink": "https://some-dev-url.with/legal/terms/" } </code></pre> <p><strong>ConfigMap.yaml</strong></p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: my-config data: config.json: |- { "termsAndConditionsLink": "https://some-dev-url.with/legal/terms/" } </code></pre> <p><strong>Deployment.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-app labels: ... spec: template: metadata: labels: ... spec: containers: ... volumeMounts: - name: my-volume mountPath: /app/wwwroot/config volumes: - name: my-volume configMap: name: my-config </code></pre>
<p>Do you mean that any other files in the <code>/app/wwwroot/config</code> directory are gone when mounting a volume at that location from the ConfigMap?</p>

<p>Have you tried a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="nofollow noreferrer">projected</a> volume:</p>

<pre><code>volumes:
  - name: my-volume
    projected:
      sources:
      - configMap:
          name: my-config
</code></pre>

<p><strong>EDIT:</strong> For those that may come across such a problem and don't read through the messages below the posts - after exchanging messages with @edbighead, another suggestion was to use <code>subPath</code> in the Deployment to leave the config directory writeable, as ConfigMap volume mounts are read-only:</p>

<pre><code>mountPath: /app/wwwroot/config/config.json
subPath: config.json
</code></pre>

<p>That apparently <em>solved the problem</em>.</p>
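<p>For context, here is a rough sketch of how the <code>subPath</code> variant fits into the Deployment from the question - only the single <code>config.json</code> file is mounted, so the rest of the <code>config</code> directory stays writeable (the volume and ConfigMap names are the ones from the question; the container name and image are placeholders):</p>

<pre><code>      containers:
      - name: my-app            # your application container
        image: your-image       # placeholder
        volumeMounts:
        - name: my-volume
          mountPath: /app/wwwroot/config/config.json
          subPath: config.json
      volumes:
      - name: my-volume
        configMap:
          name: my-config
</code></pre>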
<p>I'm having troubles with nginx-lego (I know it's deprecated) and node autoscaler. I had to scale up manually through an HPA and patching temporarily minReplicas to a high number. All scaled well, new nodes were added because of pod increase.</p> <p>After the traffic spike, I set the number back to normal (which is really low) and I can see a lot of bad gateway 502 errors. After I examined the nginx-lego pod's log, I was able to see that plenty of requests were going to pods that aren't there anymore (connection refused or No route to host). </p> <pre><code>2018/11/21 17:48:49 [error] 5546#5546: *6908265 connect() failed (113: No route to host) while connecting to upstream, client: 100.112.130.0, server: xxxx.com, request: "GET /public/images/social-instagram.png HTTP/1.1", upstream: "http://X.X.X.X:3000/public/images/social-instagram.png", host: "xxxx.com", referrer: "https://outlook.live.com/" 2018/11/21 17:48:49 [error] 5409#5409: *6908419 connect() failed (113: No route to host) while connecting to upstream, client: 10.5.143.204, server: xxxx.com, request: "GET /public/images/social-instagram.png HTTP/1.1", upstream: "http://X.X.X.X:3000/public/images/social-instagram.png", host: "xxxx.com" 2018/11/21 17:48:49 [error] 5546#5546: *6908420 connect() failed (111: Connection refused) while connecting to upstream, client: 10.5.143.204, server: xxxx.com, request: "GET /public/images/social-facebook.png HTTP/1.1", upstream: "http://X.X.X.X:3000/public/images/social-facebook.png", host: "xxxx.com" </code></pre> <p>Any idea on what could be wrong?</p> <p>I guess that patching minReplicas isn't probably the best way how to do it, but I knew that there will be a spike and I didn't have a better idea on how to pre-scale the whole cluster.</p>
<p>Looks like a problem with your nginx ingress (lego) controller not updating the <code>nginx.conf</code> when scaling down. I would examine the <code>nginx.conf</code> and see if it's pointing to backends that don't exist anymore.</p>

<pre><code>$ kubectl cp &lt;nginx-lego-pod&gt;:nginx.conf .
</code></pre>

<p>If something looks odd, you might have to delete the pod so that it gets re-created by the ReplicaSet managing your nginx ingress controller pods.</p>

<pre><code>$ kubectl delete pod &lt;nginx-controller-pod&gt;
</code></pre>

<p>Then examine the <code>nginx.conf</code> again.</p>

<p>Another issue could be the endpoints for your backend services not being updated by Kubernetes, but this would not be directly related to upscaling/downscaling your lego HPA.</p>

<pre><code>$ kubectl get ep
</code></pre>

<p>And see if there are any that point to pods that don't exist anymore.</p>
<p>I have a 4-core CPU, and I create a Kubernetes Pod with a CPU resource limit of 100m, which means it will occupy 1/10 of a core's power.</p> <p>I am wondering, in this case where 100m is not even a full core, if my app is a multithreaded app, will my app's threads run in parallel? Or will all the threads run on that fraction of a core (100 millicores) only?</p> <p>Can anyone further explain the mechanism behind this?</p>
<p>The closest answer I found so far is this <a href="https://github.com/kubernetes/kubernetes/issues/24925#issuecomment-216616733" rel="noreferrer">one</a>:</p>

<blockquote>
  <p>For a single-threaded program, a cpu usage of 0.1 means that if you could freeze the machine at a random moment in time, and look at what each core is doing, there is a 1 in 10 chance that your single thread is running at that instant. The number of cores on the machine does not affect the meaning of 0.1. For a container with multiple threads, the container's usage is the sum of its thread's usage (per previous definition.) <strong>There is no guarantee about which core you run on, and you might run on a different core at different points in your container's lifetime</strong>. A cpu limit of 0.1 means that your usage is not allowed to exceed 0.1 for a significant period of time. A cpu request of 0.1 means that the system will try to ensure that you are able to have a cpu usage of at least 0.1, if your thread is not blocking often.</p>
</blockquote>

<p>I think the above sounds quite logical. Applied to my question, the 100m of CPU power can be spread across all the CPU cores, which means multithreading should work in Kubernetes.</p>

<p>Update:</p>

<p>In addition, this <a href="https://stackoverflow.com/a/21650196/1542363">answer</a> explains quite well that, although the app may effectively get only part of a single core's power (100m as per the question), the operating system's scheduler will still try to run its threads in parallel on different cores, as long as the total usage does not exceed the specified limit (100m as per the question).</p>
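<p>To make the discussion concrete, this is a minimal sketch of a Pod spec with such a limit; the names and image are placeholders. The 100m budget is enforced (via the CFS quota in the default configuration) over all of the container's threads combined, so the threads can still be scheduled on different cores in parallel:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: multithreaded-app    # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest     # placeholder image
    resources:
      requests:
        cpu: 100m            # scheduler guarantees at least this much
      limits:
        cpu: 100m            # total CPU time of all threads is throttled to 0.1 core
</code></pre>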
<p>I am using kubernetes cluster to run dev environments for myself and other developers. I have written a few shell functions to help everyone deal with their pods without typing long kubectl commands by hand. For example, to get a prompt on one of the pods, my functions use the following </p> <pre><code>kubectl exec -it $(kubectl get pods --selector=run=${service} --field-selector=status.phase=Running -o jsonpath="{.items[*].metadata.name}") -- bash; </code></pre> <p>where $service is set to a service label I want to access, like postgres or redis or uwsgi.</p> <p>Since these are development environments there is always one of each types of pods. The problem I am having is that if I delete a pod to make it pull a fresh image (all pods are managed by deployments, so if I delete a pod it will create a new one), for a while there are two pods, one shows as terminating and the other as running in <code>kubectl get pods</code> output. I want to make sure that the command above selects the pod that is running and not the one terminating. I thought <code>--field-selector=status.phase=Running</code> flag would do it, but it does not. Apparently even if the pod is in the process of terminating it still reports Running status in status.phase field. What can I use to filter out terminating pods?</p>
<p>Use this one</p>

<pre><code>$ kubectl exec -it $(kubectl get pods --selector=run=${service} | grep "Running" | awk '{print $1}') -- bash;
</code></pre>

<p>or</p>

<pre><code>$ kubectl exec -it $(kubectl get pods --selector=run=${service} -o=jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}') -- bash;
</code></pre>

<p>Note that the <code>STATUS</code> column printed by <code>kubectl get pods</code> shows <code>Terminating</code> for pods being deleted, while <code>.status.phase</code> may still report <code>Running</code>, so the first (grep-based) variant is the one that actually filters out terminating pods.</p>

<p>Ref: <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a></p>
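<p>Since the question is about shell helpers, the one-liner could be wrapped in a small function. This is only a hedged sketch - the function name <code>kbash</code> is made up, and it assumes your pods carry the <code>run=&lt;service&gt;</code> label as in the question:</p>

<pre><code>kbash() {
  local service="$1"
  local pod
  # pick the first pod whose STATUS column is exactly "Running" (terminating pods show "Terminating")
  pod=$(kubectl get pods --selector=run="${service}" --no-headers | awk '$3 == "Running" {print $1; exit}')
  if [ -z "${pod}" ]; then
    echo "no running pod found for ${service}" &gt;&amp;2
    return 1
  fi
  kubectl exec -it "${pod}" -- bash
}

# usage
kbash postgres
</code></pre>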
<p>Minikube not starting with several error messages. kubectl version gives following message with port related message:</p> <pre><code>iqbal@ThinkPad:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre>
<p>You didn't give many details, but there are some issues I solved a few days ago concerning minikube with Kubernetes <strong>1.12</strong>.</p>

<p>Indeed, the compatibility matrix between Kubernetes and Docker recommends running Docker <strong>18.06</strong> with Kubernetes <strong>1.12</strong> (Docker 18.09 is not supported yet).</p>

<p>Thus, make sure your <code>docker version</code> is NOT above <strong>18.06</strong>. Then, run the following:</p>

<pre><code># clean up
minikube delete
minikube start --vm-driver="none"
kubectl get nodes
</code></pre>

<p>If you are still encountering issues, please give more details, namely the output of <code>minikube logs</code>.</p>
<p>I deployed my app using kubernetes and now Id like to add custom domain to the app. I am using <a href="https://hackernoon.com/expose-your-kubernetes-service-from-your-own-custom-domains-cc8a1d965fc" rel="nofollow noreferrer">this</a> tutorial and it uses ingress to set the custom domain.<br> I noticed that the app load balancer has an ip. Why shouldn't I use that ip? What is the reason I need ingress?</p>
<p>Using domains over IPs has the obvious advantage of not having to memorize something like 158.21.72.79 instead of mydomain.com.</p>

<p>Next, using mydomain.com, you can change your IP as many times as you like without having to change your calls to mydomain.com.</p>

<p><code>Ingress</code> comes in different flavors, is highly configurable, allows for traffic redirection using Kubernetes service names, and some implementations even have their own stats page so you can monitor your requests.</p>

<p>Furthermore, if you are using gcloud or the like, the <code>LoadBalancer</code> IP could change (unless configured otherwise), assigning you any available IP from your IP pool.</p>

<p>The real question is - why NOT use an <code>Ingress</code>?</p>
<p>I'd like to create an app with Kubernetes.</p> <ul> <li>I have an API server, a frontend server, a few scrapers and so on.</li> <li>I'd like to put the API and the frontend in the same pod, and all the scrapers in another pod.</li> <li>I'd like to be able to deploy a single project - only the API, for example, or a specific scraper.</li> <li>Each app has a Dockerfile and a CircleCI configuration for deployment.</li> </ul> <p>How should I structure my project?<br> Should each app be in its own repository or should I use a monorepo?<br> Where should I keep the k8s.yaml (since it relates to all projects)?<br> Where should I apply the k8s configurations? Should it happen on each deploy? How can I configure domain names for each service easily?</p>
<p>There are a few points here:</p>

<blockquote>
  <p>I'd like to put the API and the frontend in the same pod, and all the scrapers in another pod.</p>
</blockquote>

<p>That's OK, as long as they are in different containers in the same pod. Containers in a multi-container pod can reach each other via the localhost address, and pods can see each other via DNS. (And yes, a half-healthy pod - one container failing - counts as unhealthy.)</p>

<blockquote>
  <p>How should I structure my project?</p>
</blockquote>

<p>I use a different repo for each container, but one repo per pod is fine too. It's easier to maintain this way, and you have separate CI/CD builds, which enables you to update each app separately. Storing each Deployment manifest YAML file in the root of its repo is a good idea, because one Deployment means one app (one set of pods).</p>

<blockquote>
  <p>Where should I apply the k8s configurations? Should it happen on each deploy? How can I configure domain names for each service easily?</p>
</blockquote>

<p>In the Deployment manifest file you can store configs in ENVs or use a ConfigMap. You don't need to run <code>kubectl apply</code> for each deployment. In your CI you can do this for each deploy:</p>

<pre><code>- docker build -t {registry}:{version} .
- docker push {registry}:{version}
- kubectl set image deployment/{API_NAME} {containername}={registry}:{version} -n {NAMESPACE}
</code></pre>

<p>You need a reverse proxy in front of your APIs as a gateway; the best option is Ingress, which is easy to deploy and configure, and it knows your pods. An Ingress config can look like:</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: api.yourdomain.com
    http:
      paths:
      - path: /application1
        backend:
          serviceName: app1-production
          servicePort: 80
      - path: /application2
        backend:
          serviceName: app2-production
          servicePort: 80
</code></pre>

<p>Or just use one subdomain per API, as you prefer.</p>
<p>I am in the process of setting up a NFS server on my K8S cluster. I want it to act as a NFS server for external entities i.e. client will be from outside the K8S cluster such as VMs.</p> <p>The port requirements for the Docker image are :</p> <pre><code>================================================================== SERVER STARTUP COMPLETE ================================================================== ----&gt; list of enabled NFS protocol versions: 4.2, 4.1, 4 ----&gt; list of container exports: ----&gt; /exports *(rw,no_subtree_check) ----&gt; list of container ports that should be exposed: ----&gt; 111 (TCP and UDP) ----&gt; 2049 (TCP and UDP) ----&gt; 32765 (TCP and UDP) ----&gt; 32767 (TCP and UDP) </code></pre> <p>So I have created a Debian Stretch docker image. When I run it using <code>docker run</code>, I can successfully expose <code>/exports</code> and mount it from other systems.</p> <pre><code>docker run -v /data:/exports -v /tmp/exports.txt:/etc/exports:ro \ --cap-add SYS_ADMIN -p 2049:2049 -p 111:111 -p 32765:32765 \ -p 32767:32767 8113b6abeac </code></pre> <p>The above command spins up my docker container and when I do </p> <pre><code>mount.nfs4 &lt;DOKCER_HOST_IP&gt;:/exports /mount/ </code></pre> <p>from another VM, I can successfully mount the volume.</p> <p>So everything up until here is <strong>A OK</strong>!</p> <p>Now the task is to deploy this in K8S.</p> <p>My stateful-set definition is:</p> <pre><code>kind: StatefulSet apiVersion: apps/v1 metadata: name: nfs-provisioner spec: selector: matchLabels: app: nfs-provisioner serviceName: "nfs-provisioner" replicas: 1 template: metadata: labels: app: nfs-provisioner spec: serviceAccount: nfs-provisioner terminationGracePeriodSeconds: 10 imagePullSecrets: - name: artifactory containers: - name: nfs-provisioner image: repository.hybris.com:5005/test/nfs/nfs-server:1.2 ports: - name: nfs containerPort: 2049 - name: mountd containerPort: 20048 - name: rpcbind containerPort: 111 - name: rpcbind-udp containerPort: 111 protocol: UDP - name: filenet containerPort: 32767 - name: filenet-udp containerPort: 32767 protocol: UDP - name: unknown containerPort: 32765 - name: unknown-udp containerPort: 32765 protocol: UDP securityContext: privileged: true env: - name: SERVICE_NAME value: nfs-provisioner - name: NFS_EXPORT_0 value: '/exports *(rw,no_subtree_check)' imagePullPolicy: "IfNotPresent" volumeMounts: - name: export-volume mountPath: /exports volumes: - name: export-volume hostPath: path: /var/tmp </code></pre> <p>As you can see, I have specified all the ports (both TCP and UDP)</p> <p>And now to expose this to the outside world and not just inside the cluster, my <code>service.yaml</code> file deceleration is :</p> <pre><code>kind: Service apiVersion: v1 metadata: name: nfs-provisioner labels: app: nfs-provisioner spec: type: NodePort ports: - name: nfs port: 2049 - name: mountd port: 20048 - name: rpcbind port: 111 - name: rpcbind-udp port: 111 protocol: UDP - name: filenet port: 32767 - name: filenet-udp port: 32767 protocol: UDP - name: unknown port: 32765 - name: unknown-udp port: 32765 protocol: UDP selector: app: nfs-provisioner </code></pre> <p>This results in </p> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nfs-provisioner NodePort 10.233.43.135 &lt;none&gt; 2049:30382/TCP,20048:31316/TCP,111:32720/TCP,111:32720/UDP,32767:30173/TCP,32767:30173/UDP,32765:31215/TCP,32765:31215/UDP 32m </code></pre> <p>Now I try to mount <code>/exports</code> from another node/VM that is external to the 
K8S cluster.</p> <p>I've tried </p> <pre><code>mount.nfs4 &lt;K8S_Node_IP&gt;:/exports /mount/ </code></pre> <p>and I've tried</p> <pre><code>mount.nfs4 -o port=&lt;NodePort&gt; &lt;K8S_Node_IP&gt;:/exports /mount/ </code></pre> <p>Ive tried each NodePort one at a time. But none of them work. I get the error :</p> <pre><code>mount.nfs4 -o port=31316 &lt;K8S_Node_IP&gt;:/exports /mount/ mount.nfs4: mount to NFS server '&lt;K8S_Node_IP&gt;:/exports' failed: RPC Error: Unable to receive </code></pre> <p>I'm unsure as to what might be the issue here. Is it that I need to specify all the nodePorts? If so, how can I do that? </p>
<p>The issue here is that the NodePorts that got assigned are all different from the ports the NFS server expects to be reachable externally:</p>

<pre><code>----&gt; 111 (TCP and UDP)
----&gt; 2049 (TCP and UDP)
----&gt; 32765 (TCP and UDP)
----&gt; 32767 (TCP and UDP)
</code></pre>

<p>You can try an L4 load balancer that exposes exactly those ports on a given IP address (internal or external) and forwards them to the nodePorts (which is what <code>type=LoadBalancer</code> does too).</p>

<p>Another option is to hard code the NodePorts in your services to match exactly the ones of the containers:</p>

<pre><code>kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  type: NodePort
  ports:
    - name: nfs
      port: 2049
      nodePort: 2049
    - name: mountd
      port: 20048
      nodePort: 20048
    - name: rpcbind
      port: 111
      nodePort: 111
    - name: rpcbind-udp
      port: 111
      nodePort: 111
      protocol: UDP
    - name: filenet
      port: 32767
      nodePort: 32767
    - name: filenet-udp
      port: 32767
      nodePort: 32767
      protocol: UDP
    - name: unknown
      port: 32765
      nodePort: 32765
    - name: unknown-udp
      port: 32765
      nodePort: 32765
      protocol: UDP
  selector:
    app: nfs-provisioner
</code></pre>

<p>You will have to change the nodePort range <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">(<code>--service-node-port-range</code>)</a> on the kube-apiserver though. This is so that you can use <code>2049</code> and <code>111</code>.</p>

<p>You can also change the ports that your NFS server listens on instead of <code>2049</code> (nfs) and <code>111</code> (portmapper), for example; that way you don't have to change <code>--service-node-port-range</code>.</p>
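<p>If you do go the <code>--service-node-port-range</code> route on a kubeadm-based cluster, the flag lives in the API server's static pod manifest. This is only a sketch under that assumption (the path and range are examples; adjust them to your setup, and note that putting well-known ports such as 111 into the NodePort range can collide with other host services):</p>

<pre><code># edit /etc/kubernetes/manifests/kube-apiserver.yaml on the master and add the flag
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=111-32767
    # ...existing flags stay as they are
</code></pre>

<p>The kubelet notices the manifest change and restarts the API server pod automatically.</p>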
<p>I am trying to set up a Kubernetes cluster in AWS using kops. My requirement is to deploy the master nodes in a public subnet, with some workers in the public subnet and some workers in a private subnet.</p> <p>I need the network to look something like below: <a href="https://i.stack.imgur.com/9uNdM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9uNdM.png" alt="enter image description here"></a></p> <p>So, is it possible to create this network using kops?</p>
<p>Kubernetes nodes should never be directly connected to the internet.</p>

<p>I assume you want to expose services via NodePort, which is in general a bad idea, because NodePort services are exposed on <strong>ALL</strong> nodes, not just the ones where the pods are running.</p>

<p>You should place all nodes and masters in private subnets and manage the external access via elastic load balancers and an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a>. This way you can explicitly expose frontend services to the internet.</p>

<p>The relevant <code>kops-spec.yaml</code> snippet would be:</p>

<pre><code>topology:
  dns:
    type: Public
  masters: private
  nodes: private
</code></pre>
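<p>For completeness, the same private topology can be requested directly at cluster creation time. This is only a hedged sketch of a <code>kops create cluster</code> invocation - the cluster name, state store, and zones are placeholders, and a private topology requires a CNI networking option (e.g. calico) plus, optionally, a bastion host for SSH access:</p>

<pre><code>kops create cluster \
  --name=k8s.example.com \
  --state=s3://my-kops-state-store \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --topology=private \
  --networking=calico \
  --bastion \
  --yes
</code></pre>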
<p>Minikube not starting with several error messages. kubectl version gives following message with port related message:</p> <pre><code>iqbal@ThinkPad:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre>
<p>If you want to change the VM driver add the appropriate <code>--vm-driver=xxx</code> flag to <code>minikube start</code>. Minikube supports the following drivers:</p> <ul> <li>virtualbox</li> <li>vmwarefusion</li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver" rel="nofollow noreferrer">KVM2</a></li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm-driver" rel="nofollow noreferrer">KVM (deprecated in favor of KVM2)</a></li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver" rel="nofollow noreferrer">hyperkit</a></li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#xhyve-driver" rel="nofollow noreferrer">xhyve</a></li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperV-driver" rel="nofollow noreferrer">hyperv</a></li> <li><p>none (<strong>Linux-only</strong>) - this driver can be used to run the Kubernetes cluster components on the host instead of in a VM. This can be useful for CI workloads which do not support nested virtualization. For example, if your vm is virtualbox then use:</p> <pre><code>$ minikube delete $ minikube start --vm-driver=virtualbox </code></pre></li> </ul>
<p>I deployed my app using kubernetes and now Id like to add custom domain to the app. I am using <a href="https://hackernoon.com/expose-your-kubernetes-service-from-your-own-custom-domains-cc8a1d965fc" rel="nofollow noreferrer">this</a> tutorial and it uses ingress to set the custom domain.<br> I noticed that the app load balancer has an ip. Why shouldn't I use that ip? What is the reason I need ingress?</p>
<p>If you want to expose your app, you could just as easily use a service of type <code>NodePort</code> instead of an Ingress. You could also use the type <code>LoadBalancer</code>. <code>LoadBalancer</code> is a superset of <code>NodePort</code> and assigns a fixed IP. With the type <code>LoadBalancer</code> you could assign a domain to this fixed IP. How to do this depends on where you have registered your domain.</p>

<p>To answer your questions:</p>

<ul>
<li>You do not need an Ingress; you could use a <code>NodePort</code> service or a <code>LoadBalancer</code> service.</li>
<li>To assign a domain to your app, you do not need an Ingress; you could use a <code>LoadBalancer</code> service.</li>
<li>In any case, you could just use the IP, but as already pointed out, a domain is more convenient.</li>
</ul>

<p>If you just want to try out your app, you could just use the IP. A domain can be assigned later.</p>

<p>Here is an official Kubernetes tutorial on how to expose an app: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a></p>
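<p>As an illustration, a minimal <code>LoadBalancer</code> Service could look like the sketch below (the names, selector, and ports are placeholders). Once <code>kubectl get svc my-app</code> shows an <code>EXTERNAL-IP</code>, you point an A record for your domain at that address with your DNS provider:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # port exposed on the load balancer
    targetPort: 8080  # port your pods listen on
</code></pre>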
<p>Since K8S v1.11 the runtime was changed from dockerd to containerd. I'm using Jenkins on Kubernetes to build Docker images using Docker outside of Docker (DooD).</p> <p>When I tried to switch to using the socket file from containerd (containerd/containerd.sock was mapped as /var/run/docker.sock) with the regular Docker client, I got the following error: <code>Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x00\x00\x00\x04\x00\x00\x00\x00\x00".</code></p> <p>Can the Docker client be used with containerd?</p>
<p>Disclaimer: as of this writing containerd has not replaced Docker. You can <a href="https://github.com/containerd/cri/blob/master/docs/installation.md" rel="nofollow noreferrer">install containerd</a> separately from Docker, and you can point the <a href="https://kubernetes.io/docs/setup/cri/" rel="nofollow noreferrer">Kubernetes CRI</a> to talk directly to the containerd socket.</p>

<p>So, when you install Docker it does install containerd alongside, and the <em>Docker daemon</em> talks to it. You'll see a process like this:</p>

<pre><code>docker-containerd --config /var/run/docker/containerd/containerd.toml
</code></pre>

<p>However, the Docker client still speaks the Docker API, not the containerd API; that's why, when you run the Docker client in your container, you still need to talk directly to the Docker daemon (<code>/var/run/docker.sock</code>). So switch back to <code>/var/run/docker.sock</code> and I believe it should work.</p>
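<p>For completeness, a DooD-style Jenkins agent pod would then mount the Docker daemon's socket from the host rather than containerd's. A rough sketch (the pod name, image tag, and container command are placeholders/assumptions):</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: docker-build-agent      # placeholder name
spec:
  containers:
  - name: docker
    image: docker:18.06         # Docker CLI image; version is an assumption
    command: ["cat"]            # keep the container alive for the Jenkins agent
    tty: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
</code></pre>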
<p>I am stuck on a scenario where I have to get the log folder of the 1st container into the 2nd container. I have found a solution in which we create an emptyDir volume.</p> <pre><code>spec:
  containers:
  - name: app
    image: app-image
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: logs
      mountPath: /var/log/app/
  - name: uf
    image: splunk/splunkuniversalforwarder
    ...
    volumeMounts:
    - name: logs
      mountPath: /var/log/app/
  volumes:
  - name: logs
    emptyDir: {}
</code></pre> <p>But in my situation I want to share /usr/var/log/tomcat/ of the 1st container into /var/log/message. This is because the splunk UF image will monitor /var/log/app/. So I want to share the log folder of different apps, be it /var/log/app/tomcat or /var/log/messages, but at one and the same location in the splunk container, /var/log/app/.</p> <p>I can run a copy command to get the logs once, but how do I get the logs continuously?</p>
<p>I don't see an issue here. You can mount the same volume at a different location in each container.</p> <p>According to your description this should look something like this:</p> <pre><code>spec:
  containers:
  - name: app
    image: app-image
    ...
    volumeMounts:
    - name: logs
      mountPath: /usr/var/log/tomcat/
  - name: uf
    image: splunk/splunkuniversalforwarder
    ...
    volumeMounts:
    - name: logs
      mountPath: /var/log/app/
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>
<p>I have tried the minikube tool; it's a single node. I have also tried the kubeadm tool; it's multi-node but single master. I am looking for a tool which can configure a multi-master kubernetes cluster locally.</p>
<p>There's no tool to install a multi-master Kubernetes cluster locally as of this writing. Generally, a multi-master setup is meant for production environments and a local setup is generally far from what someone would describe as a production environment.</p> <p>You can probably piece together a local installation from <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">this</a> and <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes the Hard Way</a>.</p>
<p>Google has this cool tool <code>kubemci</code> - <code>Command line tool to configure L7 load balancers using multiple kubernetes clusters</code> with which you can basically have a HA multi-region Kubernetes setup. Which is kind of cool.</p> <p>But let's say we have a basic architecture like this:</p> <ul> <li>Front end is implemented as an SPA and uses a json API to talk to the backend</li> <li>Backend is a set of microservices which use PostgreSQL as a DB storage engine.</li> </ul> <p>So I can create two Kubernetes Clusters on GKE, put both backend and frontend on them (e.g. let's say in London and Belgium) and all looks fine.</p> <p>Until we think about the database. PostgreSQL is single master only, so it must be placed in one of the regions only. And if the backend from the London region starts to talk to PostgreSQL in the Belgium region, the performance will really be poor considering the 6ms+ latency between those regions.</p> <p>So that whole HA setup kind of doesn't make any sense? Or am I missing something? One option to slightly mitigate the issue would be to have a read-only replica in the "slave" region and direct read-only queries there (is that even possible with PostgreSQL?)</p>
<p>This is a classic architecture scenario that has no easy solution. Making data available in multiple regions is a challenging problem that major companies spend a lot of time and money to solve.</p> <ul> <li><p>PostgreSQL does not natively support multi-master writes. Your idea of a replica located in the other region with logic in your app to read and write to the correct database would work. This will give you fast local reads, but slower writes in one region. It's also more complicated code in your app and more work to handle failover of the master. Bandwidth and costs can also be problems with heavy updates.</p></li> <li><p>Use 3rd-party solutions for multi-master Postgres (like <a href="https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/" rel="nofollow noreferrer">Postgres-BDR by 2nd Quadrant</a>) to offload the work to the database layer. This can get expensive and your application still has to manage data conflicts from two regions overwriting the same data at the same time.</p></li> <li><p>Choose another database that supports multi-regional replication with multi-master writes. <a href="http://cassandra.apache.org/" rel="nofollow noreferrer">Cassandra</a> (or <a href="https://www.scylladb.com/" rel="nofollow noreferrer">ScyllaDB</a>) is a good choice, or hosted options like <a href="https://cloud.google.com/spanner/" rel="nofollow noreferrer">Google Spanner</a>, <a href="https://learn.microsoft.com/en-us/azure/cosmos-db/introduction" rel="nofollow noreferrer">Azure CosmosDB</a>, <a href="https://aws.amazon.com/dynamodb/global-tables/" rel="nofollow noreferrer">AWS DynamoDB Global Tables</a>, and others. An interesting option is <a href="https://www.cockroachlabs.com/" rel="nofollow noreferrer">CockroachDB</a> which supports the PostgreSQL protocol but is a scalable relational database and supports multiple regions.</p></li> <li><p>If none of these options work, you'll have to create your own replication system. Some companies do this with an event-sourced / CQRS architecture where every write is a message sent to a central log, then applied in every location. This is more work but provides the most flexibility. At this point you're also basically building your own database replication system.</p></li> </ul>
<p>I'm trying to inject an HTTP status 500 fault in the bookinfo example.</p> <p>I managed to inject a 500 error status when the traffic is coming from the Gateway with:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - bookinfo-gateway hosts: - '*' http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code>$ curl $(minikube ip):30890/api/v1/products fault filter abort </code></pre> <p>But, I fails to achieve this for traffic that is coming from other pods:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo namespace: default spec: gateways: - mesh hosts: - productpage http: - fault: abort: httpStatus: 500 percent: 100 match: - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 </code></pre> <p>Example:</p> <pre><code># jump into a random pod $ kubectl exec -ti details-v1-dasa231 -- bash root@details $ curl productpage:9080/api/v1/products [{"descriptionHtml": ... &lt;- actual product list, I expect a http 500 </code></pre> <ul> <li>I tried using the FQDN for the host <code>productpage.svc.default.cluster.local</code> but I get the same behavior.</li> <li><p>I checked the proxy status with <code>istioctl proxy-status</code> everything is synced.</p></li> <li><p>I tested if the istio-proxy is injected into the pods, it is:</p></li> </ul> <p>Pods:</p> <pre><code>NAME READY STATUS RESTARTS AGE details-v1-6764bbc7f7-bm9zq 2/2 Running 0 4h productpage-v1-54b8b9f55-72hfb 2/2 Running 0 4h ratings-v1-7bc85949-cfpj2 2/2 Running 0 4h reviews-v1-fdbf674bb-5sk5x 2/2 Running 0 4h reviews-v2-5bdc5877d6-cb86k 2/2 Running 0 4h reviews-v3-dd846cc78-lzb5t 2/2 Running 0 4h </code></pre> <p>I'm completely stuck and not sure what to check next. I feel like I am missing something very obvious.</p> <p>I would really appreciate any help on this topic.</p>
<p>The root cause of my issue was an improperly set up includeIPRanges in my minikube cluster. I had set up the 10.0.0.1/24 CIDR, but some services were listening on 10.35.x.x.</p>
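<p>For reference, a sketch of how this setting can be adjusted — assuming an Istio 1.0.x-era Helm install like the one in the question, where the value is exposed as <code>global.proxy.includeIPRanges</code>; the CIDRs below are placeholders that should match your cluster's actual service/pod ranges, and the comma must be escaped for <code>--set</code>:</p> <pre><code>helm upgrade istio -i install/kubernetes/helm/istio --namespace istio-system \
  --set global.proxy.includeIPRanges="10.0.0.0/16\,10.35.0.0/16"
</code></pre>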
<p>I'm using <a href="https://wiki.jenkins.io/display/JENKINS/Kubernetes+Continuous+Deploy+Plugin" rel="nofollow noreferrer">Kubernetes Continuous Deploy Plugin</a> to deploy and upgrade a Deployment on my Kubernetes Cluster. I'm using pipeline and this is the Jenkinsfile:</p> <pre><code>pipeline { environment { JOB_NAME = "${JOB_NAME}".replace("-deploy", "") REGISTRY = "my-docker-registry" } agent any stages { stage('Fetching kubernetes config files') { steps { git 'git_url_of_k8s_configurations' } } stage('Deploy on kubernetes') { steps { kubernetesDeploy( kubeconfigId: 'k8s-default-namespace-config-id', configs: 'deployment.yml', enableConfigSubstitution: true ) } } } } </code></pre> <p>Deployment.yml instead is:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ${JOB_NAME} spec: replicas: 1 template: metadata: labels: build_number: ${BUILD_NUMBER} app: ${JOB_NAME} role: rolling-update spec: containers: - name: ${JOB_NAME}-container image: ${REGISTRY}/${JOB_NAME}:latest ports: - containerPort: 8080 envFrom: - configMapRef: name: postgres imagePullSecrets: - name: regcred strategy: type: RollingUpdate </code></pre> <p>In order to let Kubernetes understand that Deployment is changed ( so to upgrade it and pods ) I used the Jenkins build number as annotation:</p> <pre><code>... metadata: labels: build_number: ${BUILD_NUMBER} ... </code></pre> <p><strong>The problem or my misunderstanding:</strong></p> <p>If Deployment does not exists on Kubernetes, all works good, creating one Deployment and one ReplicaSet.</p> <p>If Deployment still exists and an upgrade is applied, Kubernetes creates a new ReplicaSet:</p> <p><strong>Before first deploy</strong></p> <p><a href="https://i.stack.imgur.com/8Kky8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Kky8.png" alt="before first deploy"></a></p> <p><strong>First deploy</strong></p> <p><a href="https://i.stack.imgur.com/LJ2V7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LJ2V7.png" alt="first deploy"></a></p> <p><strong>Second deploy</strong></p> <p><a href="https://i.stack.imgur.com/K3w19.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K3w19.png" alt="second deploy"></a></p> <p><strong>Third deploy</strong></p> <p><a href="https://i.stack.imgur.com/ci0nI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ci0nI.png" alt="enter image description here"></a></p> <p>As you can see, each new Jenkins deploy will update corretly the deployment but creates a new ReplicaSet without removing the old one.</p> <p>What could be the issue?</p>
<p>This is expected behavior. Every time you update a Deployment, a new ReplicaSet will be created. But the old ReplicaSets will be kept so that you can roll back to a previous state in case of any problem in your updated Deployment.</p> <p>Ref: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="noreferrer">Updating a Deployment</a></p> <p>However, you can limit how many ReplicaSets should be kept through the <code>spec.revisionHistoryLimit</code> field. The default value is 10. Ref: <a href="https://github.com/kubernetes/api/blob/b7bd5f2d334ce968edc54f5fdb2ac67ce39c56d5/apps/v1/types.go#L299" rel="noreferrer">RevisionHistoryLimit</a></p>
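<p>For illustration, a minimal sketch of where that field sits in a Deployment — the value 2 is just an example and the names/image are placeholders:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  revisionHistoryLimit: 2   # keep only the 2 most recent old ReplicaSets for rollback
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest
</code></pre>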
<p>On a k8s cluster (GCP), during node auto-scaling my pods are rescheduled automatically. The main problem is that they perform computations and keep results in memory during auto-scaling. Because of rescheduling, pods lose all results and tasks.</p> <p>I want to disable rescheduling for specified pods. I know a few possible solutions:</p> <ul> <li>nodeSelector (not very flexible due to the dynamic nature of a cluster)</li> <li>pod disruption budget PDB</li> </ul> <p>I have tried PDB and set minAvailable = 1 but it didn't work. I found that you can also set maxUnavailable=0; will it be more effective? I didn't understand exactly the behaviour of maxUnavailable when it's set to 0. Could you explain it more? Thank you!</p> <p>Link for more details - <a href="https://github.com/dask/dask-kubernetes/issues/112" rel="nofollow noreferrer">https://github.com/dask/dask-kubernetes/issues/112</a></p>
<p>Setting maxUnavailable to 0 is a way to go, and using node pools can also be a good workaround.</p> <pre><code>gcloud container node-pools create &lt;nodepool&gt; --node-taints=app=dask-scheduler:NoSchedule
gcloud container node-pools create &lt;nodepool&gt; --node-labels app=dask-scheduler
</code></pre> <p>This will create the node pool with the label app=dask-scheduler. Afterwards, in the pod spec, you can do this:</p> <pre><code>nodeSelector:
  app: dask-scheduler
</code></pre> <p>And put the dask scheduler on a node pool that doesn't autoscale.</p> <p>There's an object called a PDB (PodDisruptionBudget) where in its spec you can set maxUnavailable. With maxUnavailable=1, for example, if you had 100 pods defined, the scheduler always makes sure that only one is removed/drained/re-scheduled at a time. If you have 2 pods and you set maxUnavailable to 0, it will never voluntarily remove your pods.</p> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: zookeeper
</code></pre>
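<p>For the scenario in the question, here is a sketch of a PDB that blocks voluntary evictions entirely — the label <code>app: dask-scheduler</code> is assumed to match the pods you want to protect, and note a PDB only guards against voluntary disruptions such as autoscaler drains, not node failures:</p> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: dask-pdb
spec:
  maxUnavailable: 0        # never allow a voluntary eviction of the matched pods
  selector:
    matchLabels:
      app: dask-scheduler
</code></pre>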
<p>I'm tired of writing all <code>kubectl</code> and <code>kubeadm</code> commands by hand. Is there any way of enabling autocomplete on these commands?</p>
<h3>Bash Solution</h3> <pre><code># Execute these commands $ echo &quot;source &lt;(kubectl completion bash)&quot; &gt;&gt; ~/.bashrc $ echo &quot;source &lt;(kubeadm completion bash)&quot; &gt;&gt; ~/.bashrc # Reload bash without logging out $ source ~/.bashrc </code></pre>
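<h3>Zsh</h3> <p>If you happen to use zsh instead, the equivalent (assuming zsh completion is initialized, e.g. via <code>compinit</code>) would be:</p> <pre><code>$ echo &quot;source &lt;(kubectl completion zsh)&quot; &gt;&gt; ~/.zshrc
$ echo &quot;source &lt;(kubeadm completion zsh)&quot; &gt;&gt; ~/.zshrc
$ source ~/.zshrc
</code></pre>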
<p>At present I am creating a configmap from the file config.json by executing:</p> <pre><code>kubectl create configmap jksconfig --from-file=config.json </code></pre> <p>I would want the ConfigMap to be <em>created</em> as part of the deployment and tried to do this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: jksconfig data: config.json: |- {{ .Files.Get "config.json" | indent 4 }} </code></pre> <p>But doesn't seem to work. What should be going into configmap.yaml so that the same configmap is created? </p> <p>---UPDATE---</p> <p>when I do a helm install dry run:</p> <pre><code># Source: mychartv2/templates/jks-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: jksconfig data: config.json: | </code></pre> <p><strong>Note</strong>: I am using minikube as my kubernetes cluster</p>
<p>Your <code>config.json</code> file should be inside your <strong>mychart/</strong> directory, not inside <strong>mychart/templates</strong></p> <p><a href="https://helm.sh/docs/chart_template_guide/accessing_files/#basic-example" rel="noreferrer">Chart Template Guide</a></p> <p><strong>configmap.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-configmap data: config.json: |- {{ .Files.Get &quot;config.json&quot; | indent 4}} </code></pre> <p><strong>config.json</strong></p> <pre><code>{ &quot;val&quot;: &quot;key&quot; } </code></pre> <p><code>helm install --dry-run --debug mychart</code></p> <pre><code>[debug] Created tunnel using local port: '52091' [debug] SERVER: &quot;127.0.0.1:52091&quot; ... NAME: dining-saola REVISION: 1 RELEASED: Fri Nov 23 15:06:17 2018 CHART: mychart-0.1.0 USER-SUPPLIED VALUES: {} ... --- # Source: mychart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: dining-saola-configmap data: config.json: |- { &quot;val&quot;: &quot;key&quot; } </code></pre> <p>EDIT:</p> <blockquote> <p>But I want it the values in the config.json file to be taken from values.yaml. Is that possible?</p> </blockquote> <p><strong>configmap.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-configmap data: config.json: |- { {{- range $key, $val := .Values.json }} {{ $key | quote | indent 6}}: {{ $val | quote }} {{- end}} } </code></pre> <p><strong>values.yaml</strong></p> <pre><code>json: key1: val1 key2: val2 key3: val3 </code></pre> <p><code>helm install --dry-run --debug mychart</code></p> <pre><code># Source: mychart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: mangy-hare-configmap data: config.json: |- { &quot;key1&quot;: &quot;val1&quot; &quot;key2&quot;: &quot;val2&quot; &quot;key3&quot;: &quot;val3&quot; } </code></pre>
<p>I have a self-made Kubernetes cluster consisting of VMs. My problem is that the coredns pods always go into CrashLoopBackOff state, and after a while they go back to Running as if nothing had happened. One solution that I found but could not try yet is changing the default memory limit from 170Mi to something higher. As I'm not an expert in this, I thought this would not be a hard thing, but I don't know how to change a running pod's configuration. It may be impossible, but there must be a way to recreate them with a new configuration. I tried kubectl patch, and looked up rolling-update too, but I just can't figure it out. How can I change the limit?</p> <p>Here is the relevant part of the pod's data:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 176.16.0.12/32
  creationTimestamp: 2018-11-18T10:29:53Z
  generateName: coredns-78fcdf6894-
  labels:
    k8s-app: kube-dns
    pod-template-hash: "3497892450"
  name: coredns-78fcdf6894-gnlqw
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: coredns-78fcdf6894
    uid: e3349719-eb1c-11e8-9000-080027bbdf83
  resourceVersion: "73564"
  selfLink: /api/v1/namespaces/kube-system/pods/coredns-78fcdf6894-gnlqw
  uid: e34930db-eb1c-11e8-9000-080027bbdf83
spec:
  containers:
  - args:
    - -conf
    - /etc/coredns/Corefile
    image: k8s.gcr.io/coredns:1.1.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /health
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 60
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: coredns
    ports:
    - containerPort: 53
      name: dns
      protocol: UDP
    - containerPort: 53
      name: dns-tcp
      protocol: TCP
    - containerPort: 9153
      name: metrics
      protocol: TCP
    resources:
      limits:
        memory: 170Mi
      requests:
        cpu: 100m
        memory: 70Mi
</code></pre> <p>EDIT: It turned out that on Ubuntu the Network Manager's dnsmasq drives the coredns pods crazy, so in /etc/NetworkManager/NetworkManager.conf I commented out the dnsmasq line, rebooted, and everything is okay.</p>
<p>You must edit the coredns pod's template in the coredns deployment definition:</p> <pre><code>kubectl edit deployment -n kube-system coredns
</code></pre> <p>Once your default editor is opened with the coredns deployment, in the pod template spec you will find the part which is responsible for setting the memory and cpu limits.</p>
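<p>For orientation — the 256Mi figure below is just an example value — the section to change under <code>spec.template.spec.containers</code> looks roughly like this after raising the limit; saving the edit makes the Deployment roll out new coredns pods with the new limit:</p> <pre><code>        resources:
          limits:
            memory: 256Mi   # raised from the default 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
</code></pre>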
<p>I use CentOs 7.4 virtual machine which has 32Gb memory.</p> <p>I have docker composer, it has following configurations:</p> <pre><code>version: "2" services: shiny-cas-server: image: shiny-cas command: puma -C config/puma.rb volumes: - ./cas-server/logs:/app/logs - ./cas-server/config:/app/config - ./cas-server/public:/app/public </code></pre> <p>With above docker level configuartions, I make kubernetes configuration:</p> <p>cas-server-depl.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cas-server-depl spec: replicas: 1 template: metadata: labels: app: cas-server-pod spec: containers: - name: cas-server-pod image: shiny-cas imagePullPolicy: Never command: ["puma -C /cas-server/config/puma.rb"] ports: - containerPort: 100 volumeMounts: - mountPath: /app/logs name: cas-server-logs - mountPath: /app/config name: cas-server-config - mountPath: /app/public name: cas-server-public volumes: - name: cas-server-logs hostPath: path: /cas-server/logs - name: cas-server-config hostPath: path: /cas-server/config - name: cas-server-public hostPath: path: /cas-server/public </code></pre> <p>In virtual machine, I copy <code>./cas-server</code> directory to <code>/cas-server</code>, and changed chown and chgrp as my login name <code>k8s</code>, when I do <code>sudo kubectl apply -f cas-server-depl.yaml</code>, it has following response:</p> <pre><code>[k8s@k8s config]$ sudo kubectl get po NAME READY STATUS RESTARTS AGE cas-server-depl-7f849bf94c-srg77 0/1 RunContainerError 1 5s </code></pre> <p>Then I use following command to see why:</p> <pre><code>[k8s@k8s config]$ sudo kubectl describe po cas-server-depl-7988d6b447-ffff5 Name: cas-server-depl-7988d6b447-ffff5 Namespace: default Priority: 0 IP: 100.68.142.72 Controlled By: ReplicaSet/cas-server-depl-7988d6b447 Containers: cas-server-pod: Command: puma -C /cas-server/config/puma.rb State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"puma -C /cas-server/config/puma.rb\": stat puma -C /cas-server/config/puma.rb: no such file or directory": unknown Exit Code: 128 ... 
Ready: False Restart Count: 2 Environment: &lt;none&gt; Mounts: /app/config from cas-server-config (rw) /app/logs from cas-server-logs (rw) /app/public from cas-server-public (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkdx (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: cas-server-logs: Type: HostPath (bare host directory volume) Path: /cas-server/logs HostPathType: HostPathType: default-token-mrkdx: Type: Secret (a volume populated by a Secret) SecretName: default-token-mrkdx Optional: false Normal Created 15s (x3 over 29s) kubelet, k8s.xxx.com.cn Created container Warning Failed 15s (x3 over 28s) kubelet, k8s.xxx.com.cn Error: failed to start container "cas-server-pod": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"puma -C /cas-server/config/puma.rb\": stat puma -C /cas-server/config/puma.rb: no such file or directory": unknown Warning BackOff 1s (x3 over 26s) kubelet, k8s.shinyinfo.com.cn Back-off restarting failed container </code></pre> <p>It says: </p> <pre><code>Message: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"puma -C /cas-server/config/puma.rb\": stat puma -C /cas-server/config/puma.rb: no such file or directory": unknown </code></pre> <p>I tried <code>/app/config/puma.rb</code> and <code>config/puma.rb</code> in command, both have same error message. which directory I shall write? I could see puma.rb do exists.</p> <p>My cas-server-svc.yaml is pasted as reference:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: cas-server-svc labels: name: cas-server-svc spec: selector: app: cas-server-pod type: NodePort ports: - port: 100 nodePort: 31101 name: tcp </code></pre>
<p>When you say</p> <pre><code>command: ["puma -C /cas-server/config/puma.rb"] </code></pre> <p>You are telling Kubernetes to tell Docker to look for a single executable named <code>puma -C ...</code>, where what you think are the command-line options are actually part of the filename. You need to split out the arguments into separate elements in the YAML list syntax, something like</p> <pre><code>command: ["puma", "-C", "/cas-server/config/puma.rb"] </code></pre> <p>or</p> <pre><code>command: - puma - -C - /cas-server/config/puma.rb </code></pre>
<p><strong>Background</strong></p> <p>Recently my lab invested in GPU computation infrastructure. More specifically: two TitanVs installed in a standard server machine. Currently the machine is running a not-at-all-configured Windows Server. Everyone from my lab can log in and do whatever they want. From time to time it happens that the machine is completely useless for others, because someone accidentally occupied all available memory.</p> <p>Since ML is growing here, I am looking for a better way to make use of our infrastructure.</p> <p><strong>Requirements</strong></p> <ul> <li>Multi-user. PhDs and students should be able to run their tasks.</li> <li>Job queue or scheduling (preferably something like time-sliced scheduling)</li> <li>Dynamic allocation of resources. If a single task is running it is ok to utilize the whole memory, but as soon as a second one is started they should share the resources.</li> <li>Easy / remote job submission: Maybe a webpage?</li> </ul> <p><strong>What I tried so far</strong></p> <p>I have a small test setup (consumer PC with GTX 1070) for experimenting. My internet research pointed me to SLURM and Kubernetes.</p> <p>First of all I like the idea of a cluster management system, since it offers the option to extend the infrastructure in the future.</p> <p>SLURM was fairly easy to set up, but I was not able to set up something like remote submission or time-slice scheduling.</p> <p>In the meanwhile I also tried to work with Kubernetes. To me it offers way more interesting features, above all the containerization. However, all these features make it more complicated to set up and understand. And again I was not able to build something like remote submission.</p> <p><strong>My question</strong></p> <p>Has someone faced the same problem and can report his/her solution? I have the feeling that Kubernetes is better prepared for the future.</p> <p>If you need more information, let me know.</p> <p>Thanks Tim!</p>
<p>As far as my knowledge goes, Kubernetes does not support sharing of GPUs, which was asked <a href="https://github.com/NVIDIA/k8s-device-plugin/issues/76" rel="nofollow noreferrer">here</a>.</p> <p>There is an ongoing discussion <a href="https://github.com/kubernetes/kubernetes/issues/52757" rel="nofollow noreferrer">Is sharing GPU to multiple containers feasible? #52757</a></p> <p>I was able to find a docker image with examples which <em>"support share GPUs unofficially"</em>, available here: <a href="https://hub.docker.com/r/cvaldit/nvidia-k8s-device-plugin/#enabling-gpu-support-in-kubernetes" rel="nofollow noreferrer">cvaldit/nvidia-k8s-device-plugin</a>.</p> <p>This can be used in the following way:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-devel
    resources:
      limits:
        nvidia.com/gpu: 2 # requesting 2 GPUs
  - name: digits-container
    image: nvidia/digits:6.0
    resources:
      limits:
        nvidia.com/gpu: 2 # requesting 2 GPUs
</code></pre> <p>That would expose 2 GPUs inside each container to run your job in, also locking those 2 GPUs from further use until the job ends.</p> <p>I'm not sure how you would scale this for multiple users, other than limiting the maximum number of GPUs used per job.</p> <p>Also you can read about <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">Schedule GPUs</a>, which is still experimental.</p>
<p>I see "workloads" but are workloads the same as "deployments"?</p> <p>I dont see any kubectl commands that can list ALL deployments just for describing a specific one.</p>
<p>Yes, you should use the command:</p> <pre><code>kubectl get deployments
</code></pre> <p>By default, you will only see the ones that are in the namespace default. If your deployments are in another namespace, you have to specify it:</p> <pre><code>kubectl get deployments -n your_namespace
</code></pre> <p>If you want to see all the deployments from all namespaces, use the following command:</p> <pre><code>kubectl get deployments --all-namespaces
</code></pre> <p>From your question, if what you want is to see all you have (not just deployments), use the following command:</p> <pre><code>kubectl get all --all-namespaces
</code></pre>
<p>I am trying to get keycloak up and running on my minikube.</p> <p>I am installing keycloak with</p> <p><code>helm upgrade -i -f kubernetes/keycloak/values.yaml keycloak stable/keycloak --set keycloak.persistence.dbHost=rolling-newt-postgresql</code></p> <p>I see an error in the dashboard that says:</p> <blockquote> <p>MountVolume.SetUp failed for volume "realm-secret" : secrets "realm-secret" not found</p> </blockquote> <p>In my <code>values.yaml</code> I have this configuration:</p> <pre><code>extraVolumes: |
  - name: realm-secret
    secret:
      secretName: realm-secret
  - name: theme
    emptyDir: {}
  - name: spi
    emptyDir: {}

extraVolumeMounts: |
  - name: realm-secret
    mountPath: "/realm/"
    readOnly: true
  - name: theme
    mountPath: /opt/jboss/keycloak/themes/mytheme
  - name: spi
    mountPath: /opt/jboss/keycloak/standalone/deployments
</code></pre> <p>I also have a <code>realm.json</code> file.</p> <p><strong>Question</strong></p> <p>What do I need to do with this <code>realm.json</code> file prior to installing keycloak? How do I do that?</p>
<p>The reason is that you are referencing a secret named <code>realm-secret</code> in <code>extraVolumes</code>, but a secret with the name <code>realm-secret</code> is created neither by the helm chart (named <code>stable/keycloak</code>) nor by you manually.</p> <p>You can easily find that chart at <a href="https://github.com/helm/charts/tree/master/stable/keycloak" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/keycloak</a>.</p> <h3>Solution</h3> <p>In <code>values.yaml</code>, the fields <code>extraVolumes</code> and <code>extraVolumeMounts</code> exist so that users can provide an extra <code>volume</code> and an extra <code>volumeMount</code> if they need them. They will be used in the keycloak pod.</p> <p>So if you provide <code>extraVolumes</code> that will mount a secret, then you have to create that secret yourself: create the secret <code>realm-secret</code> in the same namespace in which you install/upgrade your chart, and only then install/upgrade the chart.</p> <pre><code>$ kubectl create secret generic realm-secret --namespace=&lt;chart_namespace&gt; --from-file=path/to/realm.json
</code></pre>
<p>I am moving my application from docker to kubernetes \ helm - and so far I have been successful except for setting up incoming \ outgoing connections.</p> <p>One particular issue I am facing is that I am unable to connect to the rabbitmq instance running locally on my machine on another docker container. </p> <pre><code>app-deployment.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: jks labels: app: myapp spec: replicas: 1 template: metadata: labels: app: myapp spec: imagePullSecrets: - name: ivsecret containers: - env: - name: JOBQUEUE value: jks_jobqueue - name: PORT value: "80" image: repo.url name: jks ports: - containerPort: 80 volumeMounts: - name: config-vol mountPath: /etc/sys0 volumes: - name: config-vol configMap: name: config restartPolicy: Always ------------ app-service.yaml: apiVersion: v1 kind: Service metadata: name: jks spec: ports: - name: "80" port: 80 targetPort: 80 selector: app: myapp </code></pre> <p>I see errors on my container, complaining that it is not able to connect to my machine. I tried curl from inside the container:</p> <pre><code>curl 10.2.10.122:5672 curl: (7) Failed to connect to 10.20.11.11 port 5672: Connection timed out </code></pre> <p>But the same when I deploy as a docker container works fine - and I am able to connect to the rabbit mq instance running on my machine on port 5672.</p> <p>Is there something I would need to do to set up a connection from the pod to my local machine?</p>
<p>If I understood the setup:</p> <ul> <li>minikube is running on the local machine. </li> <li>rabbitmq is running on the local machine, too, and is listening on port 5672.</li> <li>the IP where rabbitmq is running is 10.2.10.122 .</li> <li>an application - jks - is running on minikube.</li> </ul> <p>The problem is that it is not possible to connect from the jks application to rabbitmq, correct?</p> <p>One way to make it work is to first create a <em>Service without selector</em> :</p> <pre><code>apiVersion: "v1" kind: "Service" metadata: name: "svc-external-rabbitmq" spec: ports: - name: "rabbitmq" protocol: "TCP" port: 5672 targetPort: 5672 nodePort: 0 selector: {} </code></pre> <p>...next, create Endpoints object for the Service:</p> <pre><code>apiVersion: "v1" kind: "Endpoints" metadata: name: "svc-external-rabbitmq" subsets: - addresses: - ip: "10.2.10.122" ports: - name: "rabbitmq" port: 5672 </code></pre> <p>...then use the service name - <code>svc-external-rabbitmq</code> - in the jks application to connect to rabbitmq.</p> <p>For an explanation, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Services without selectors</a> in the Kubernetes documentation. I've used this setup with a Cassandra cluster where the Cassandra nodes' IPs were all listed as <code>addresses</code>.</p> <p>EDIT: Notice that in some cases a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a> could work, too.</p>
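<p>For completeness, here is a minimal sketch of the <code>ExternalName</code> variant mentioned in the edit. Note that <code>ExternalName</code> returns a CNAME, so it only fits when the external rabbitmq is reachable by a DNS name (the hostname below is a placeholder), not by a bare IP:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc-external-rabbitmq
spec:
  type: ExternalName
  externalName: rabbitmq.example.com   # placeholder DNS name of the external broker
</code></pre>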
<p>I've followed the steps at <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a> to set up MySQL user accounts and service accounts. I've downloaded the JSON file containing my credentials.</p> <p>My issue is that in the code I copied from the site:</p> <pre><code>- name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=&lt;INSTANCE_CONNECTION_NAME&gt;=tcp:3306", "-credential_file=/secrets/cloudsql/credentials.json"] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true </code></pre> <p>the path /secrets/cloudsql/credentials.json is specified and I have no idea where it's coming from.</p> <p>I think I'm supposed to create the credentials as a secret via </p> <p><code>kubectl create secret generic cloudsql-instance-credentials --from-file=k8s\secrets\my-credentials.json</code></p> <p>But after that I have no idea what to do. How does this secret become the path <code>/secrets/cloudsql/credentials.json</code>?</p>
<p>You have to add a volumes entry under the pod spec like so:</p> <pre><code>      volumes:
      - name: cloudsql-instance-credentials
        secret:
          defaultMode: 420
          secretName: cloudsql-instance-credentials
</code></pre> <p><strong>Note:</strong> This belongs to the deployment's pod spec, not the container spec.</p> <p><em>Edit:</em> Further information can be found here: <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-a-volume" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-a-volume</a> — thanks shalvah for pointing that out.</p>
<p>I try to start RStudio in docker container via <code>kubernetes</code>. All objects are created, but when I try to open rstudio using such commands in Ubuntu 18: </p> <pre><code>kubectl create -f rstudio-ing.yml IP=$(minikube ip) xdg-open http://$IP/rstudio/ </code></pre> <p>there is error: <code>#RStudio initialization error: unable connect to service</code>. </p> <p>Usual docker command works fine:</p> <pre><code>docker run -d -p 8787:8787 -e PASSWORD=123 -v /home/aabor/r-projects:/home/rstudio aabor/rstudio </code></pre> <p>The same intended operation in <code>kubernetes</code> fails. </p> <p><code>rstudio-ing.yml</code> file creates all objects well. RStudio is accessible if I do not mount any folder. But if I add folder mounts it produces an error. Any suggestions?</p> <p>The content of the <code>rstudio-ing.yml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: r-ingress annotations: kubernetes.io/ingress.class: "nginx" ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /rstudio/ backend: serviceName: rstudio servicePort: 8787 --- apiVersion: apps/v1 kind: Deployment metadata: name: rstudio spec: replicas: 1 selector: matchLabels: service: rstudio template: metadata: labels: service: rstudio language: R spec: containers: - name: rstudio image: aabor/rstudio env: - name: PASSWORD value: "123" volumeMounts: - name: home-dir mountPath: /home/rstudio/ volumes: - name: home-dir hostPath: #RStudio initialization error: unable connect to service path: /home/aabor/r-projects --- apiVersion: v1 kind: Service metadata: name: rstudio spec: ports: - port: 8787 selector: service: rstudio </code></pre> <p>This is pod description:</p> <pre><code> Name: rstudio-689c4fd6c8-fgt7w Namespace: default Node: minikube/10.0.2.15 Start Time: Fri, 23 Nov 2018 21:42:35 +0300 Labels: language=R pod-template-hash=2457098274 service=rstudio Annotations: &lt;none&gt; Status: Running IP: 172.17.0.9 Controlled By: ReplicaSet/rstudio-689c4fd6c8 Containers: rstudio: Container ID: docker://a6bdcbfdf8dc5489a4c1fa6f23fb782bc3d58dd75d50823cd370c43bd3bffa3c Image: aabor/rstudio Image ID: docker-pullable://aabor/rstudio@sha256:2326e5daa3c4293da2909f7e8fd15fdcab88b4eb54f891b4a3cb536395e5572f Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Fri, 23 Nov 2018 21:42:39 +0300 Ready: True Restart Count: 0 Environment: PASSWORD: 123 Mounts: /home/rstudio/ from home-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkd8 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: home-dir: Type: HostPath (bare host directory volume) Path: /home/aabor/r-projects HostPathType: default-token-mrkd8: Type: Secret (a volume populated by a Secret) SecretName: default-token-mrkd8 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10s default-scheduler Successfully assigned rstudio-689c4fd6c8-fgt7w to minikube Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "home-dir" Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-mrkd8" Normal Pulling 9s kubelet, minikube pulling image "aabor/rstudio" Normal Pulled 7s kubelet, 
minikube Successfully pulled image "aabor/rstudio" Normal Created 7s kubelet, minikube Created container Normal Started 6s kubelet, minikube Started container </code></pre>
<p>You have created a service of type <code>ClusterIP</code>, which can only be accessed from within the cluster, not from outside. To make it available outside of the cluster, change the service type to <code>LoadBalancer</code>.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  ports:
  - port: 8787
  selector:
    service: rstudio
  type: LoadBalancer
</code></pre> <p>In that case, the LoadBalancer type service doesn't need the ingress, and you can get the URL with:</p> <pre><code>$ minikube service rstudio --url
</code></pre>
<p>I deploy <code>traefik ingress controller</code> pod and then two services, one of them a <code>LoadBalancer</code> type for reverse-proxy and the other a <code>ClusterIP</code> for dashboard. </p> <p>Also I create ingress for redirect all <code>&lt;elb-address&gt;/dashboard</code> to my traefik dashboard.</p> <p>but for some reason I get 404 error code when I trying to request my dashboard at <code>aws-ip/dashboard</code></p> <p>That is the manifest yamls that I use to set up traefik</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: traefik-ingress-controller namespace: kube-system --- kind: Deployment apiVersion: apps/v1 metadata: name: traefik-ingress-controller namespace: kube-system labels: k8s-app: traefik-ingress-lb spec: replicas: 1 selector: matchLabels: k8s-app: traefik-ingress-lb template: metadata: labels: k8s-app: traefik-ingress-lb name: traefik-ingress-lb spec: serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 60 containers: - image: traefik name: traefik-ingress-lb ports: - name: http containerPort: 80 - name: admin containerPort: 8080 args: - --api - --kubernetes - --logLevel=INFO --- kind: Service apiVersion: v1 metadata: name: traefik-ingress-service namespace: kube-system spec: selector: k8s-app: traefik-ingress-lb ports: - protocol: TCP targetPort: 80 port: 80 type: LoadBalancer --- kind: Service apiVersion: v1 metadata: name: traefik-web-ui namespace: kube-system spec: selector: k8s-app: traefik-ingress-lb ports: - name: web port: 80 targetPort: 8080 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: kube-system name: traefik-ingress annotations: kubernetes.io/ingress.class: traefik spec: rules: - http: paths: - path: /dashboard backend: serviceName: traefik-web-ui servicePort: web </code></pre> <hr> <p>Update</p> <p>I am watching the log and get a the follow errors with rbac activated and the ClusterRole, ServiceRole and ServiceAccount created:</p> <pre><code>E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default" E1124 18:56:23.648207 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list services in the namespace "default" E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default" </code></pre> <p>This are my serviceAccount, clusterRole and RoleBingind</p> <pre><code>kind: ServiceAccount apiVersion: v1 metadata: name: traefik-ingress --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: traefik-ingress rules: - apiGroups: - "" resources: - pods - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses/status verbs: - update --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: traefik-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: traefik-ingress subjects: - kind: ServiceAccount name: 
traefik-ingress namespace: default </code></pre>
<p>Solution</p> <p>I apply this</p> <pre><code>kubectl create serviceaccount --namespace kube-system tiller kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller kubectl patch deploy --namespace kube-system tiller-deploy -p '{&quot;spec&quot;:{&quot;template&quot;:{&quot;spec&quot;:{&quot;serviceAccount&quot;:&quot;tiller&quot;}}}}' </code></pre> <p>and then installed the stable/traefik template with helm</p> <pre><code>helm install stable/traefik --name=traefik-ingress-controller --values values.yaml </code></pre> <p><code>values.yaml</code> file is:</p> <pre><code> dashboard: enabled: true domain: traefik-ui.k8s.io rbac: enabled: true kubernetes: namespaces: - default - kube-system </code></pre> <p>Thanks for help</p>
<p>I've followed the steps at <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a> to set up MySQL user accounts and service accounts. I've downloaded the JSON file containing my credentials.</p> <p>My issue is that in the code I copied from the site:</p> <pre><code>- name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=&lt;INSTANCE_CONNECTION_NAME&gt;=tcp:3306", "-credential_file=/secrets/cloudsql/credentials.json"] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true </code></pre> <p>the path /secrets/cloudsql/credentials.json is specified and I have no idea where it's coming from.</p> <p>I think I'm supposed to create the credentials as a secret via </p> <p><code>kubectl create secret generic cloudsql-instance-credentials --from-file=k8s\secrets\my-credentials.json</code></p> <p>But after that I have no idea what to do. How does this secret become the path <code>/secrets/cloudsql/credentials.json</code>?</p>
<p>Actually, we can mount configmaps or secrets as files into the pod's containers, and then at runtime we can use them in whatever way we need. But to do that, we need to set them up properly:</p> <ul> <li>create the secret/configmap</li> <li>add a volume for the secret in <code>.spec.volumes</code> of the pod (if you deploy the pod using a deployment then add the volume in <code>.spec.template.spec.volumes</code>)</li> <li>mount the created volume in <code>.spec.containers[].volumeMounts</code></li> </ul> <p>Ref: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">official kubernetes doc</a></p> <p>Here is a sample for your use case:</p> <pre><code>      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=&lt;INSTANCE_CONNECTION_NAME&gt;=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          defaultMode: 511
          secretName: cloudsql-instance-credentials
</code></pre>
<p>I'm trying to create a dynamic Azure Disk volume to use in a pod that has specific permissions requirements.</p> <p>The container <a href="http://docs.grafana.org/installation/docker/#user-id-changes" rel="nofollow noreferrer">runs under the user id <code>472</code></a>, so I need to find a way to mount the volume with rw permissions for (at least) that user.</p> <p>With the following <code>StorageClass</code> defined</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass provisioner: kubernetes.io/azure-disk reclaimPolicy: Delete volumeBindingMode: Immediate metadata: name: foo-storage mountOptions: - rw parameters: cachingmode: None kind: Managed storageaccounttype: Standard_LRS </code></pre> <p>and this PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: foo-storage namespace: foo spec: accessModes: - ReadWriteOnce storageClassName: foo-storage resources: requests: storage: 1Gi </code></pre> <p>I can run the following in a pod:</p> <pre><code>containers: - image: ubuntu name: foo imagePullPolicy: IfNotPresent command: - ls - -l - /var/lib/foo volumeMounts: - name: foo-persistent-storage mountPath: /var/lib/foo volumes: - name: foo-persistent-storage persistentVolumeClaim: claimName: foo-storage </code></pre> <p>The pod will mount and start correctly, but <code>kubectl logs &lt;the-pod&gt;</code> will show</p> <pre><code>total 24 drwxr-xr-x 3 root root 4096 Nov 23 11:42 . drwxr-xr-x 1 root root 4096 Nov 13 12:32 .. drwx------ 2 root root 16384 Nov 23 11:42 lost+found </code></pre> <p>i.e. the current directory is mounted as owned by <code>root</code> and read-only for all other users.</p> <p>I've tried adding a <code>mountOptions</code> section to the <code>StorageClass</code>, but whatever I try (<code>uid=472</code>, <code>user=472</code> etc) I get mount errors on startup, e.g.</p> <pre><code>mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199 --scope -- mount -t ext4 -o group=472,rw,user=472,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199 Output: Running scope as unit run-r7165038756bf43e49db934e8968cca8b.scope. mount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so. </code></pre> <p>I've also tried to get some info from <a href="https://linux.die.net/man/8/mount" rel="nofollow noreferrer">man mount</a>, but I haven't found anything that worked.</p> <p><strong>How can I configure this storage class, persistent volume claim and volume mount so that the non-root user running the container process gets access to write (and create subdirectories) in the mounted path?</strong></p>
<p>You need to define the <code>securityContext</code> of your pod spec like the following, so it matches the new running user and group id:</p> <pre><code>securityContext: runAsUser: 472 fsGroup: 472 </code></pre> <p>The stable Grafana Helm Chart also does it in the same way. See <code>securityContext</code> under Configuration here: <a href="https://github.com/helm/charts/tree/master/stable/grafana#configuration" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/grafana#configuration</a></p>
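<p>For context, here is a sketch of how that block might sit at the pod level of the spec from the question — with <code>fsGroup</code>, volume types that support ownership management (Azure Disk among them) are made group-owned by GID 472, so the non-root user can write to the mount:</p> <pre><code>spec:
  securityContext:
    runAsUser: 472   # the non-root user the container runs as
    fsGroup: 472     # supplemental group applied to mounted volumes
  containers:
  - name: foo
    image: ubuntu
    command: ["ls", "-l", "/var/lib/foo"]
    volumeMounts:
    - name: foo-persistent-storage
      mountPath: /var/lib/foo
  volumes:
  - name: foo-persistent-storage
    persistentVolumeClaim:
      claimName: foo-storage
</code></pre>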
<p>I want to use istio with an existing jaeger tracing system in K8S. I began with installing the jaeger system following <a href="https://github.com/jaegertracing/jaeger-kubernetes" rel="noreferrer">the official link</a> with cassandra as backend storage. Then I installed istio <a href="https://istio.io/docs/setup/kubernetes/helm-install/" rel="noreferrer">the helm way</a>, but with only some selected components enabled:</p> <pre><code>helm upgrade istio -i install/kubernetes/helm/istio --namespace istio-system \
  --set security.enabled=true \
  --set ingress.enabled=false \
  --set gateways.istio-ingressgateway.enabled=true \
  --set gateways.istio-egressgateway.enabled=false \
  --set galley.enabled=false \
  --set sidecarInjectorWebhook.enabled=true \
  --set mixer.enabled=false \
  --set prometheus.enabled=false \
  --set global.proxy.envoyStatsd.enabled=false \
  --set pilot.sidecar=true \
  --set tracing.enabled=false
</code></pre> <p>Jaeger and istio are installed inside the same namespace <code>istio-system</code>; after everything is done, the pods inside it look like this:</p> <pre><code>kubectl -n istio-system get pods
NAME                                      READY     STATUS    RESTARTS   AGE
istio-citadel-5c9544c886-gr4db            1/1       Running   0          46m
istio-ingressgateway-8488676c6b-zq2dz     1/1       Running   0          51m
istio-pilot-987746df9-gwzxw               2/2       Running   1          51m
istio-sidecar-injector-6bd4d9487c-q9zvk   1/1       Running   0          45m
jaeger-collector-5cb88d449f-rrd7b         1/1       Running   0          59m
jaeger-query-5b5948f586-gxtk7             1/1       Running   0          59m
</code></pre> <p>Then I followed <a href="https://istio.io/docs/examples/bookinfo/" rel="noreferrer">the link</a> to deploy the bookinfo sample into another namespace <code>istio-play</code>, which has the label <code>istio-injection=enabled</code>, but no matter how many times I refresh the <code>productpage</code> page, no tracing data gets filled into jaeger.</p> <p>I guess maybe tracing spans are sent to jaeger by mixer, like the way istio does all other telemetry stuff, so I <code>-set mixer.enabled=true</code>, but unfortunately only some services like <code>istio-mixer</code> or <code>istio-telemetry</code> are displayed. Finally I cleaned up the whole installation above and followed <a href="https://istio.io/docs/tasks/telemetry/distributed-tracing/" rel="noreferrer">this task</a> step by step, but the tracing data of the bookinfo app is still not there.</p> <p>My question is: how exactly does istio send tracing data to jaeger? Does the sidecar proxy send it directly to the jaeger-collector (<code>zipkin.istio-system:9411</code>) like <a href="https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/jaeger_tracing" rel="noreferrer">how envoy does</a>, or does the data flow like this: <code>sidecar-proxy -&gt; mixer -&gt; jaeger-collector</code>? And how could I debug how the data flows between all kinds of components inside the istio mesh?</p> <p>Thanks for any help and info :-)</p> <hr> <p><strong>Update</strong>: I tried again by installing istio without helm: <code>kubectl -n istio-system apply -f install/kubernetes/istio-demo.yaml</code>. This time everything works just fine; there must be something different between the <code>kubectl way</code> and the <code>helm way</code>.</p>
<p>Based on my experience and reading online, I found this interesting line in the Istio <a href="https://istio.io/help/faq/mixer/" rel="nofollow noreferrer">mixer faq</a>:</p> <blockquote> <p>Mixer trace generation is controlled by command-line flags: trace_zipkin_url, trace_jaeger_url, and trace_log_spans. If any of those flag values are set, trace data will be written directly to those locations. If no tracing options are provided, Mixer will not generate any application-level trace information.</p> </blockquote> <p>Also, if you go deep into the mixer <a href="https://github.com/istio/istio/blob/d6c3ebcaaffd7e45772beefeeb71708ef1588cb4/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml" rel="nofollow noreferrer">helm chart</a>, you will find traces of Zipkin and Jaeger signifying that it's mixer that is passing trace info to Jaeger.</p> <p>I also got confused when reading this line in one of the articles:</p> <blockquote> <p>Istio injects a sidecar proxy (Envoy) in the pod in which your application container is running. This sidecar proxy transparently intercepts (iptables magic) all network traffic going in and out of your application. Because of this interception, the sidecar proxy is in a unique position to automatically trace all network requests (HTTP/1.1, HTTP/2.0 &amp; gRPC).</p> </blockquote> <p>According to the Istio mixer documentation, the Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry. The sidecar has local caching such that a large percentage of precondition checks can be performed from cache. Additionally, the sidecar buffers outgoing telemetry such that it only calls Mixer infrequently.</p> <p><strong>Update:</strong> You can enable tracing to understand what happens to a request in Istio and also the role of mixer and envoy. Read more information <a href="https://istio.io/help/faq/telemetry/#life-of-a-request" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Background</strong>: I'm trying to set up a Bitcoin Core regtest pod on Google Cloud Platform. I borrowed some code from <a href="https://gist.github.com/zquestz/0007d1ede543478d44556280fdf238c9" rel="noreferrer">https://gist.github.com/zquestz/0007d1ede543478d44556280fdf238c9</a>, editing it so that instead of using Bitcoin ABC (a different client implementation), it uses Bitcoin Core instead, and changed the RPC username and password to both be "test". I also added some command arguments for the docker-entrypoint.sh script to forward to bitcoind, the daemon for the nodes I am running. When attempting to deploy the following three YAML files, the dashboard in "workloads" shows bitcoin has not having minimum availability. Getting the pod to deploy correctly is important so I can send RPC commands to the Load Balancer. Attached below are my YAML files being used. I am not very familiar with Kubernetes, and I'm doing a research project on scalability which entails running RPC commands against this pod. Ask for relevant logs and I will provide them in seperate pastebins. Right now, I'm only running three machines on my cluster, as I'm am still setting this up. The zone is us-east1-d, machine type is n1-standard-2.</p> <p><strong>Question</strong>: Given these files below, what is causing GCP Kubernetes Engine to respond with "Does not have minimum availability", and how can this be fixed?</p> <hr> <p><strong>bitcoin-deployment.sh</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: namespace: default labels: service: bitcoin name: bitcoin spec: strategy: type: Recreate replicas: 1 template: metadata: labels: service: bitcoin spec: containers: - env: - name: BITCOIN_RPC_USER valueFrom: secretKeyRef: name: test key: test - name: BITCOIN_RPC_PASSWORD valueFrom: secretKeyRef: name: test key: test image: ruimarinho/bitcoin-core:0.17.0 name: bitcoin ports: - containerPort: 18443 protocol: TCP volumeMounts: - mountPath: /data name: bitcoin-data resources: requests: memory: "1.5Gi" command: ["./entrypoint.sh"] args: ["-server", "-daemon", "-regtest", "-rpcbind=127.0.0.1", "-rpcallowip=0.0.0.0/0", "-rpcport=18443", "-rpcuser=test", "-rpcpassport=test"] restartPolicy: Always volumes: - name: bitcoin-data gcePersistentDisk: pdName: disk-bitcoincore-1 fsType: ext4 </code></pre> <hr> <p><strong>bitcoin-secrets.yml</strong></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: bitcoin type: Opaque data: rpcuser: dGVzdAo= rpcpass: dGVzdAo= </code></pre> <hr> <p><strong>bitcoin-srv.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: bitcoin namespace: default spec: ports: - port: 18443 targetPort: 18443 selector: service: bitcoin type: LoadBalancer externalTrafficPolicy: Local </code></pre>
<p>I have run into this issue several times. The solutions that I used:</p>
<ol>
<li>Wait. Google Cloud does not have enough resources available in the Region/Zone that you are trying to launch into. In some cases this took an hour to an entire day.</li>
<li>Select a different Region/Zone.</li>
</ol>
<p>An example was earlier this month. I could not launch new resources in us-west1-a, so I just switched to us-east4-c and everything launched.</p>
<p>I really do not know why this happens under the covers with Google. I have personally experienced this problem three times in the last three months, and I have seen it several times on StackOverflow. The real answer might simply be that Google Cloud has started to grow faster than their infrastructure. This is a good thing for Google, as I know that they are investing in major new resources for the cloud. Personally, I really like working with their cloud.</p>
<p>As soon as I add,</p>
<pre><code>spec:
  containers:
  - args:
    - /bin/sh
    - '-c'
    - touch /tmp/healthy; touch /tmp/liveness
    env:
</code></pre>
<p>to the deployment file, the application does not come up, and there is no error in the describe output. The deployment succeeds, but there is no output. Both files are getting created in the container. Can I run docker build inside a Kubernetes deployment?</p>
<p>Below is the complete deployment yaml.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      version: prod
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
      labels:
        app: web
        version: prod
    spec:
      containers:
      - args:
        - /bin/sh
        - '-c'
        - &gt;-
          touch /tmp/healthy; touch /tmp/liveness; while true; do echo .; sleep 1; done
        env:
        - name: SUCCESS_RATE
          valueFrom:
            configMapKeyRef:
              key: SUCCESS_RATE
              name: web-config-prod
        image: busybox
        livenessProbe:
          exec:
            command:
            - cat
            - /tmp/liveness
          initialDelaySeconds: 5
        name: web
        ports:
        - containerPort: 8080
        - containerPort: 8000
</code></pre>
<p>The problem in your case is that the container is not found after finishing its task. You told your container to execute a shell script, and after doing that the container finished. That's why you can't see whether the files were created or not, and why it didn't emit any logs. So you need to keep the container alive after creating the files. You can do that with an infinite while loop. Here it comes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hi
  template:
    metadata:
      labels:
        app: hi
    spec:
      containers:
      - name: hi
        image: busybox
        args:
        - /bin/sh
        - "-c"
        - "touch /tmp/healthy; touch /tmp/liveness; while true; do echo .; sleep 1; done"
        ports:
        - containerPort: 80
</code></pre>
<p>Save it to hello-deployment.yaml and run,</p>
<pre><code>$ kubectl create -f hello-deployment.yaml
$ pod_name=$(kubectl get pods -l app=hi -o jsonpath='{.items[0].metadata.name}')
$ kubectl logs -f $pod_name
$ kubectl exec -it $pod_name -- ls /tmp
</code></pre>
<p>I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.</p> <p>I have one front-end service and a dozen back-end services.</p> <p>The front-end service is a pod running a container which is running on port 80. It is configured to be attached to an ELB which is only accepting traffic from 443 with an https certificate.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: service_name labels: app: service_name annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http spec: ports: - port: 443 # Exposed port targetPort: 80 # Container port selector: app: service_name type: LoadBalancer </code></pre> <p>The back-end services are pods running containers also running on port 80. None of them have been configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to <a href="http://service_name" rel="nofollow noreferrer">http://service_name</a> (NOT https) as I configured them with this template:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: service_name spec: ports: - port: 80 # Exposed port targetPort: 80 # Container port selector: app: service_name </code></pre> <p>It all works but is it sufficient?</p> <p>Should the front-end/back-end containers use certificate/443 too with a wildcard https certificate? Should this configuration be done inside the container or on the services' configurations?</p>
<p>I have done quite a bit of investigation now and here is what I came down to.</p>
<p>All my EKS EC2 instances are running on private subnets, which means they are not accessible from outside. Yes, by default Kubernetes does not encrypt traffic between pods, which means that a hacker who gained access to my VPC (could be a rogue AWS engineer, someone who manages to physically access AWS data centers, someone who managed to access my AWS account...) will be able to sniff the network traffic. At the same time, I feel that in that case the hacker will have access to much more! If he has access to my AWS account, he can download the https certificate himself, for instance. If he manages to walk into a (high-security) AWS data center and finds my server - it's good to compare the risk he has to take against the value of your data. If your data includes credit card/payments or sensitive personal data (date of birth, health details...), SSL encryption is a must. Anyway, to secure pod traffic, there are 2 options.</p>
<ol>
<li>Update all the pod source code and add the certificate there. It requires a lot of maintenance if you are running many pods, and certificates expire every other year.</li>
<li>Add an extra 'network layer' like <a href="https://istio.io/" rel="nofollow noreferrer">https://istio.io/</a>. This will add complexity to your cluster and in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support.</li>
</ol>
<p>For the load balancer, I decided to add an ingress to the cluster (Nginx, Traefik...) and set it up with https. That's critical as the ELB sits on the public subnets. A rough sketch of such an ingress is shown below.</p>
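<p>For reference, a minimal TLS-terminating ingress could look roughly like the following. This is only a sketch: it assumes an nginx ingress controller is already installed in the cluster, that a TLS secret named <code>app-tls</code> exists, and that the front-end Service is called <code>frontend-svc</code> - all of those names are placeholders, not values from the setup above.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          servicePort: 80
</code></pre>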
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overview" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overview</a></p> <p>Im a little confused, the GCP kubernetes web console has a "workloads" section that seems to just have k8s "deployments". and In k8s documentations "workloads" is a section (empty): <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/</a></p> <p>Are "workloads" an actual thing? Is there a "workloads" class? Or is workloads just used in the general sense of the term in the gke console and k8s documentation?</p> <p>edit: ===============</p> <p>Is there specific documentation for what google considers a GKE "workload" and a list of what will appear under the "Workloads" section of the GKE web console in gcp? Will the GCP "Workloads" section include only the following components? <a href="https://i.stack.imgur.com/4tT1x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4tT1x.png" alt="enter image description here"></a></p>
<p><a href="https://kubernetes.io/docs/concepts/workloads" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads</a> is not a page. It's a just path. You should take a look into the pages those are under this path link.</p> <p>Kubernetes workloads means all of the followings that has a <code>podspec</code> and can run containers. It includes:</p> <ul> <li>Deployment</li> <li>StatefulSet</li> <li>ReplicaSet</li> <li>ReplicationController (will be depricate in future)</li> <li>DaemonSet</li> <li>Job</li> <li>CronJob</li> <li>Pod</li> </ul>
<p>It's not clear to me how to do this.</p>
<p>I create a service for my cluster like this:</p>
<pre><code>kubectl expose deployment my-deployment --type=LoadBalancer --port 8888 --target-port 8888
</code></pre>
<p>And now my service is accessible from the internet on port 8888. But I don't want that; I only want to make my service accessible from a list of specific public IPs. How do I apply a GCP firewall rule to a specific service? It's not clear how this works, or why the service is publicly accessible from the internet by default.</p>
<p><code>loadBalancerSourceRanges</code> seems to work and also updates the dynamically created GCE firewall rules for the service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: na-server-service spec: type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 loadBalancerSourceRanges: - 50.1.1.1/32 </code></pre>
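<p>To confirm that the restriction was actually applied, you can inspect the firewall rules GKE created for the service; something along these lines (the filter pattern is just an assumption about how your rules are named):</p>
<pre><code>gcloud compute firewall-rules list --filter="name~k8s"
</code></pre>
<p>The rule for the service's load balancer should list only the CIDRs from <code>loadBalancerSourceRanges</code> as source ranges.</p>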
<p>I have a docker Image that basically runs a one time script. That scripts takes 3 arguments. My docker file is </p> <pre><code>FROM &lt;some image&gt; ARG URL ARG USER ARG PASSWORD RUN apt update &amp;&amp; apt install curl -y COPY register.sh . RUN chmod u+x register.sh CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"] </code></pre> <p>When I spin up the contianer using <code>docker run -e URL=someUrl -e USER=someUser -e PASSWORD=somePassword -itd &lt;IMAGE_ID&gt;</code> it works perfectly fine.</p> <p>Now I want to deploy this as a job.</p> <p>My basic Job looks like:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: register spec: template: spec: containers: - name: register image: registeration:1.0 args: ["someUrl", "someUser", "somePassword"] restartPolicy: Never backoffLimit: 4 </code></pre> <p>But this the pod errors out on </p> <pre><code>Error: failed to start container "register": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"someUrl\": executable file not found in $PATH" </code></pre> <p>Looks like it is taking my args as commands and trying to execute them. Is that correct ? What can I do to fix this ?</p>
<p>In the Dockerfile as you've written it, two things happen:</p> <ol> <li><p>The URL, username, and password are fixed in the image. Anyone who can get the image can run <code>docker history</code> and see them in plain text.</p></li> <li><p>The container startup doesn't take any arguments; it just runs the single command with its fixed set of arguments.</p></li> </ol> <p>Especially since you're planning to pass these arguments in at execution time, I wouldn't bother trying to include them in the image. I'd reduce the Dockerfile to:</p> <pre class="lang-sh prettyprint-override"><code>FROM ubuntu:18.04 RUN apt update \ &amp;&amp; DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes --no-install-recommends \ curl COPY register.sh /usr/bin RUN chmod u+x /usr/bin/register.sh ENTRYPOINT ["register.sh"] </code></pre> <p>When you launch it, the Kubernetes <code>args:</code> get passed as command-line parameters to the entrypoint. (It is the same thing as the Docker Compose <code>command:</code> and the free-form command at the end of a plain <code>docker run</code> command.) Making the script be the container entrypoint will make your Kubernetes YAML work the way you expect.</p> <p>In general I prefer using CMD to ENTRYPOINT. (Among other things, it makes it easier to <code>docker run --rm -it ... /bin/sh</code> to debug your image build.) If you do that, then the Kubernetes <code>args:</code> need to include the name of the script it's running:</p> <pre><code>args: ["./register.sh", "someUrl", "someUser", "somePassword"] </code></pre>
<p>I am having trouble pulling images from GCR ( pulled by my deployments ) I got <strong>ImagePullBackOff</strong> error.</p> <p>I have followed this tutorial already, step by step.</p> <p><a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="nofollow noreferrer">https://container-solutions.com/using-google-container-registry-with-kubernetes/</a></p> <p>However it doesn't seem to work for me. I have even tried using the <strong>Storage Admin</strong> role when creating the service account key but its still no use.</p> <p>When describing the pod, I got this error:</p> <pre><code> Warning Failed 14s (x2 over 30s) kubelet, docker-for-desktop Failed to pull image "gcr.io/&lt;project-name&gt;/&lt;image-name&gt;": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/&lt;project-name&gt;/&lt;image-name&gt;/manifests/latest: unknown: Unable to parse json key. Warning Failed 14s (x2 over 30s) kubelet, docker-for-desktop Error: ErrImagePull Normal BackOff 2s (x3 over 29s) kubelet, docker-for-desktop Back-off pulling image "gcr.io/&lt;project-name&gt;/&lt;image-name&gt;" Warning Failed 2s (x3 over 29s) kubelet, docker-for-desktop Error: ImagePullBackOff </code></pre> <p>When visiting the <strong><a href="https://gcr.io/v2/" rel="nofollow noreferrer">https://gcr.io/v2/</a><em>project-name</em>/<em>image-name</em>/manifests/latest</strong> url, I got this:</p> <pre><code>// 20181124152036 // https://gcr.io/v2/project-name/image-name/manifests/latest { "errors": Array[1][ { "code": "UNAUTHORIZED", "message": "You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication" } ] } </code></pre> <p><strong>Pod Definition:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: microservice-1-deployment spec: replicas: 3 selector: matchLabels: app: microservice-1 template: metadata: labels: app: microservice-1 spec: containers: - name: microservice-1 image: gcr.io/project-name/image-name ports: - containerPort: 80 </code></pre> <p><strong>Notes:</strong></p> <p>My deployments are able to pull images when they where hosted on docker hub, issue only occurs on pulling images in GCR.</p> <p><strong>Env</strong></p> <ul> <li>Windows 10 </li> <li>Docker Version 2.0.0.0-win78 (28905)</li> <li>Kubernetes 1.10.3 (Included on docker for desktop)</li> </ul> <p>I hope you can help me on this,</p> <p>Thanks in advance</p>
<p>OK, found the culprit: it has something to do with <strong>PowerShell</strong> and <strong>Command Prompt</strong>.</p>
<p>I switched to using <strong>Git Bash</strong>, followed the same instructions in this tutorial</p>
<p><a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="nofollow noreferrer">https://container-solutions.com/using-google-container-registry-with-kubernetes/</a></p>
<p>and it worked!</p>
<p>The problem probably occurred when creating <strong>imagePullSecrets</strong> from <strong>PowerShell</strong> and/or <strong>Command Prompt</strong>: something likely went wrong when reading the json key file, related to encoding or similar.</p>
<p>Hope this helps anyone.</p>
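<p>For reference, the secret that tutorial creates looks roughly like the following when run from Git Bash (the secret name, key file path and email are placeholders). Using <code>_json_key</code> as the username and the JSON key file content as the password is the standard way to authenticate to GCR with a service account key:</p>
<pre><code>kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-service-account.json)" \
  --docker-email=you@example.com
</code></pre>
<p>If the JSON ends up mangled (for example by quoting or encoding differences in PowerShell), you get exactly the "Unable to parse json key" error shown in the question.</p>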
<p>I'm fairly new to GCP and would like to ask a question:</p>
<p>I have two private clusters in the same region with internal LBs (all in one VPC); currently pods from both clusters are able to communicate with each other over HTTP.</p>
<p>As far as I understand from the documentation, an internal LB is a regional product, therefore if the private clusters were located in different regions the above scenario wouldn't be possible.</p>
<p>What do I need to do in order to make the pods of two private clusters located in different regions able to communicate with each other?</p>
<p>My guess is that I have to define an external LB for both of those clusters and, using firewall rules, allow communication only cluster to cluster via external IP and block all communication from the outside world.</p>
<p>Google's VPC is global. This means that all of your regions are part of the same network. Everything in your VPC that uses IP addresses in the VPC can talk to each other with appropriate rules in the VPC Firewall.</p>
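<p>So cross-region pod-to-pod traffic inside one VPC is mostly a firewall question. A sketch of such a rule with <code>gcloud</code> - the network name and the source ranges (the pod/node CIDRs of your two clusters) are assumptions you would replace with your own values:</p>
<pre><code>gcloud compute firewall-rules create allow-pods-between-clusters \
  --network=my-vpc \
  --allow=tcp,udp,icmp \
  --source-ranges=10.8.0.0/14,10.12.0.0/14
</code></pre>
<p>If the pod ranges are routed differently in your setup, adjust the source ranges accordingly.</p>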
<p>What happens if ReplicaSet_B and ReplicaSet_A update the same db? I hoped the pods in ReplicaSet_A would be stopped after taking a snapshot, but there is no explanation like this in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a>. I think it is assumed that the containers in the pods are running online applications. What if they are batch applications? I mean, the old pods belonging to the old ReplicaSet will keep updating the dbs in the old manner. This will also raise a data migration issue.</p>
<p>Yes. <code>ReplicaSets</code> (managed by <code>Deployments</code>) make two assumptions: 1. your workload is stateless, and 2. all pods are identical clones (other than their IP addresses). Now, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> address some aspects, for example, you can assign pods a certain identity (for example: leader or follower), but really only work for specific workloads. Also, the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> abstractions in Kubernetes won't really help you a lot concerning stateful workloads. What you likely are looking at is a custom controller or operator. We're collecting good practices and tooling via <a href="https://stateful.kubernetes.sh/" rel="nofollow noreferrer">stateful.kubernetes.sh</a>, maybe there's something there that can be of help?</p>
<p>I am trying to create a dynamic storage volume on Kubernetes in Alibaba Cloud. First I created a storage class.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-pv-class
provisioner: alicloud/disk
parameters:
  type: cloud_ssd
  regionid: cn-beijing
  zoneid: cn-beijing-b
</code></pre>
<p>Then I tried creating a persistent volume claim as below.</p>
<pre><code>apiVersion: v1
kind: List
items:
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: node-pv
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: alicloud-pv-class
    resources:
      requests:
        storage: 64Mi
</code></pre>
<p>Creation of the persistent volume fails with the following error.</p>
<blockquote>
  <p>Warning ProvisioningFailed 0s alicloud/disk alicloud-disk-controller-68dd8f98cc-z6ql5 5ef317c7-f110-11e8-96de-0a58ac100006 Failed to provision volume with StorageClass "alicloud-pv-class": Aliyun API Error: RequestId: 7B2CA409-3FDE-4BA1-85B9-80F15109824B Status Code: 400 Code: InvalidParameter Message: The specified parameter "Size" is not valid.</p>
</blockquote>
<p>I am not sure where this Size parameter is specified. Did anyone come across a similar problem?</p>
<p>As pointed out in <a href="https://www.alibabacloud.com/help/doc-detail/86612.htm" rel="noreferrer">the docs</a>, the minimum size for SSD is <code>20Gi</code>, so I'd suggest to change <code>storage: 64Mi</code> to <code>storage: 20Gi</code> to fix it.</p>
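<p>So the fix is just to bump the request in the claim from the question, for example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-pv
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: alicloud-pv-class
  resources:
    requests:
      storage: 20Gi
</code></pre>
<p>The provisioner passes the requested size through to the Aliyun disk API, which rejects anything below the minimum for the chosen disk type - hence the "Size is not valid" error.</p>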
<p>I am trying to add some extra flags to my kubernetes controller manager and I am updating the flags in the /etc/kubernetes/manifests/kube-controller-manager.yaml file. But the changes that I am adding are not taking effect. The kubelet is detecting changes to the file and is restarting the pods but once restarted they come back with the old flags.</p> <p>Any ideas?</p>
<p>So it seems that any file under /etc/kubernetes/manifests is loaded by the kubelet. When I was adding the new flags I was taking a backup of the existing file with a .bak extension, but the kubelet was still loading the .bak file instead of the new .yaml file. Seems to me that's a bug. Anyway, happy to have spotted the error.</p>
<p>I am looking at the following 2 examples. In the first <a href="https://github.com/aaronlevy/kube-controller-demo/blob/master/reboot-controller/main.go#L155" rel="nofollow noreferrer">example</a> a lister is used to retrieve the item.</p> <p>In the <a href="https://github.com/trstringer/k8s-controller-core-resource/blob/master/controller.go#L102" rel="nofollow noreferrer">second example</a>, an indexer is used.</p> <p>I am wondering which is the preferred way to retrieve an element from the local cache.</p>
<p>The examples you showed above, they both use <strong>indexer</strong>, if you go deeper you will see it.</p> <p>For <a href="https://github.com/aaronlevy/kube-controller-demo/blob/master/reboot-controller/main.go#L155" rel="nofollow noreferrer">First example</a> (see <a href="https://github.com/aaronlevy/kube-controller-demo/blob/7a6784a5953931c5b95896ad218760db893d2d29/vendor/k8s.io/client-go/listers/core/v1/node.go#L56" rel="nofollow noreferrer">here</a>)</p> <pre><code>// Get retrieves the Node from the index for a given name. func (s *nodeLister) Get(name string) (*v1.Node, error) { obj, exists, err := s.indexer.GetByKey(name) if err != nil { return nil, err } if !exists { return nil, errors.NewNotFound(v1.Resource("node"), name) } return obj.(*v1.Node), nil } </code></pre> <p>For <a href="https://github.com/trstringer/k8s-controller-core-resource/blob/master/controller.go#L102" rel="nofollow noreferrer">second example</a></p> <pre><code>item, exists, err := c.informer.GetIndexer().GetByKey(keyRaw) </code></pre>
<p>I have searched a bit but could not find much, as I am new to k8s. My pods are evicted and I get a message such as:</p>
<pre><code>&quot;Status: Failed
Reason: Evicted
Message: The node was low on resource: nodefs.&quot;
</code></pre>
<p>Any help on how I can figure out what is going on?</p>
<p>Run <code>kubectl describe pod &lt;pod name&gt;</code> and look for the node name of this pod. Followed by <code>kubectl describe node &lt;node-name&gt;</code> that will show what type of resource cap the node is hitting under <code>Conditions:</code> section. </p> <p>From my experience this happens when the host node runs out of disk space. </p>
<p>I'm trying to execute a <code>curl</code> command inside a container in <code>gke</code>.</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: app spec: schedule: "* * * * *" jobTemplate: spec: template: spec: containers: - name: app image: appropriate/curl env: - name: URL value: "https://app.com" - name: PASSWORD value: "pass" args: ["-vk", "-H", "\"Authorization: Bearer $(PASSWORD)\"", "$(URL)"] restartPolicy: OnFailure </code></pre> <p>Error:</p> <pre><code>curl: option -vk -H "Authorization: Bearer pass" https://app.com: is unknown </code></pre> <p>I just can't find out how to execute the <code>curl</code> with the <code>args</code> field using environment variables.</p> <p>This curl command works in my pc.<br> What am I doing wrong?<br> How can I integrate env vars with container curl command args?</p>
<p>You don't need to wrap the auth header in quotes, kubernetes will do that for you.</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: app spec: schedule: "* * * * *" jobTemplate: spec: template: spec: containers: - name: app image: appropriate/curl env: - name: URL value: "app.com" - name: PASSWORD value: "pass" args: ["-vk", "-H", "Authorization: Bearer $(PASSWORD)", "$(URL)"] restartPolicy: OnFailure </code></pre> <p>You can test the output yaml by doing:</p> <pre><code>kubectl apply -f job.yaml -o yaml --dry-run </code></pre> <p>which shows the final output is fine</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"app","namespace":"default"},"spec":{"jobTemplate":{"spec":{"template":{"spec":{"containers":[{"args":["-vk","-H","Authorization: Bearer $(PASSWORD)","$(URL)"],"env":[{"name":"URL","value":"https://app.com"},{"name":"PASSWORD","value":"pass"}],"image":"appropriate/curl","name":"app"}],"restartPolicy":"OnFailure"}}}},"schedule":"* * * * *"}} name: app namespace: default spec: jobTemplate: spec: template: spec: containers: - args: - -vk - -H - 'Authorization: Bearer $(PASSWORD)' - $(URL) env: - name: URL value: https://app.com - name: PASSWORD value: pass image: appropriate/curl name: app restartPolicy: OnFailure </code></pre> <p>I tested this with <a href="https://requestbin.fullcontact.com/" rel="noreferrer">https://requestbin.fullcontact.com/</a> and the bearer token was passed without issue</p>
<p>To use a docker container from a private docker repo, kubernetes recommends creating a secret of type 'docker-registry' and referencing it in your deployment. </p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>Then in your helm chart or kubernetes deployment file, use <code>imagePullSecrets</code></p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: foo spec: replicas: {{ .Values.replicaCount }} template: spec: imagePullSecrets: - name: regcred containers: - name: foo image: foo.example.com </code></pre> <p>This works, but requires that all containers be sourced from the same registry. </p> <p><strong>How would you pull 2 containers from 2 registries</strong> (e.g. when using a sidecar that is stored separate from the primary container) ? </p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: foo spec: replicas: {{ .Values.replicaCount }} template: spec: containers: - name: foo image: foo.example.com imagePullSecrets: - name: foo-secret - name: bar image: bar.example.com imagePullSecrets: - name: bar-secret </code></pre> <p>I've tried creating 2 secrets <code>foo-secret</code> and <code>bar-secret</code> and referencing each appropriately, but I find it fails to pull both containers. </p>
<p>You have to include <code>imagePullSecrets:</code> directly at the pod level, but you can have multiple secrets there.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: foo spec: replicas: {{ .Values.replicaCount }} template: spec: imagePullSecrets: - name: foo-secret - name: bar-secret containers: - name: foo image: foo.example.com/foo-image - name: bar image: bar.example.com/bar-image </code></pre> <p>The <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="noreferrer">Kubernetes documentation on this</a> notes:</p> <blockquote> <p>If you need access to multiple registries, you can create one secret for each registry. Kubelet will merge any <code>imagePullSecrets</code> into a single virtual <code>.docker/config.json</code> when pulling images for your Pods.</p> </blockquote>
<p>I need to provide a specific node name to my master node in kuberenetes. I am using kubeadm to setup my cluster and I know there is an option <code>--node-name master</code> which you can provide to kubeadm init and it works fine.</p> <p>Now, the issue is I am using the config file to initialise the cluster and I have tried various ways to provide that node-name to the cluster but it is not picking up the name. My config file of kubeadm init is:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha1 kind: MasterConfiguration api: advertiseAddress: 10.0.1.149 controlPlaneEndpoint: 10.0.1.149 etcd: endpoints: - http://10.0.1.149:2379 caFile: /etc/kubernetes/pki/etcd/ca.pem certFile: /etc/kubernetes/pki/etcd/client.pem keyFile: /etc/kubernetes/pki/etcd/client-key.pem networking: podSubnet: 192.168.13.0/24 kubernetesVersion: 1.10.3 apiServerCertSANs: - 10.0.1.149 apiServerExtraArgs: endpoint-reconciler-type: lease nodeRegistration: name: master </code></pre> <p>Now I run <code>kubeadm init --config=config.yaml</code> and it timeouts with following error:</p> <pre><code>[uploadconfig] Storing the configuration used in ConfigMap "kubeadm- config" in the "kube-system" Namespace [markmaster] Will mark node ip-x-x-x-x.ec2.internal as master by adding a label and a taint timed out waiting for the condition </code></pre> <p>PS: This issue also comes when you don't provide <code>--hostname-override</code> to kubelet along with <code>--node-name</code> to kubeadm init. I am providing both. Also, I am not facing any issues when I don't use <code>config.yaml</code> file and use command line to provide <code>--node-name</code> option to kubeadm init.</p> <p>I want to know how can we provide <code>--node-name</code> option in config.yaml file. Any pointers are appreciated.</p>
<p>I am able to resolve this issue using the following config file, Just updating if anyone encounters the same issue:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha1 kind: MasterConfiguration api: advertiseAddress: 10.0.1.149 controlPlaneEndpoint: 10.0.1.149 etcd: endpoints: - http://10.0.1.149:2379 caFile: /etc/kubernetes/pki/etcd/ca.pem certFile: /etc/kubernetes/pki/etcd/client.pem keyFile: /etc/kubernetes/pki/etcd/client-key.pem networking: podSubnet: 192.168.13.0/24 kubernetesVersion: 1.10.3 apiServerCertSANs: - 10.0.1.149 apiServerExtraArgs: endpoint-reconciler-type: lease nodeName: master </code></pre> <p>This is the way you can specify <code>--node-name</code> in config.yaml</p>
<p>I am very confused about why my pods are staying in pending status.</p> <p>Vitess seems have problem scheduling the vttablet pod on nodes. I built a 2-worker-node Kubernetes cluster (nodes A &amp; B), and started vttablets on the cluster, but only two vttablets start normally, the other three is stay in pending state. </p> <p>When I allow the master node to schedule pods, then the three pending vttablets all start on the master (first error, then running normally), and I create tables, two vttablet failed to execute.</p> <p>When I add two new nodes (nodes C &amp; D) to my kubernetes cluster, tear down vitess and restart vttablet, I find that the three vttablet pods still remain in pending state, also if I kick off node A or node B, I get <code>vttablet lost</code>, and it will not restart on new node. I tear down vitess, and also tear down k8s cluster, rebuild it, and this time I use nodes C &amp; D to build a 2-worker-node k8s cluster, and all vttablet now remain in pending status.</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE default etcd-global-5zh4k77slf 1/1 Running 0 46m 192.168.2.3 t-searchredis-a2 &lt;none&gt; default etcd-global-f7db9nnfq9 1/1 Running 0 45m 192.168.2.5 t-searchredis-a2 &lt;none&gt; default etcd-global-ksh5r9k45l 1/1 Running 0 45m 192.168.1.4 t-searchredis-a1 &lt;none&gt; default etcd-operator-6f44498865-t84l5 1/1 Running 0 50m 192.168.2.2 t-searchredis-a2 &lt;none&gt; default etcd-test-5g5lmcrl2x 1/1 Running 0 46m 192.168.2.4 t-searchredis-a2 &lt;none&gt; default etcd-test-g4xrkk7wgg 1/1 Running 0 45m 192.168.1.5 t-searchredis-a1 &lt;none&gt; default etcd-test-jkq4rjrwm8 1/1 Running 0 45m 192.168.2.6 t-searchredis-a2 &lt;none&gt; default vtctld-z5d46 1/1 Running 0 44m 192.168.1.6 t-searchredis-a1 &lt;none&gt; default vttablet-100 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-101 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-102 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-103 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-104 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; apiVersion: v1 kind: Pod metadata: creationTimestamp: 2018-11-27T07:25:19Z labels: app: vitess component: vttablet keyspace: test_keyspace shard: "0" tablet: test-0000000100 name: vttablet-100 namespace: default resourceVersion: "22304" selfLink: /api/v1/namespaces/default/pods/vttablet-100 uid: 98258046-f215-11e8-b6a1-fa163e0411d1 spec: containers: - command: - bash - -c - |- set -e mkdir -p $VTDATAROOT/tmp chown -R vitess /vt su -p -s /bin/bash -c "/vt/bin/vttablet -binlog_use_v3_resharding_mode -topo_implementation etcd2 -topo_global_server_address http://etcd-global-client:2379 -topo_global_root /global -log_dir $VTDATAROOT/tmp -alsologtostderr -port 15002 -grpc_port 16002 -service_map 'grpc-queryservice,grpc-tabletmanager,grpc-updatestream' -tablet-path test-0000000100 -tablet_hostname $(hostname -i) -init_keyspace test_keyspace -init_shard 0 -init_tablet_type replica -health_check_interval 5s -mysqlctl_socket $VTDATAROOT/mysqlctl.sock -enable_semi_sync -enable_replication_reporter -orc_api_url http://orchestrator/api -orc_discover_interval 5m -restore_from_backup -backup_storage_implementation file -file_backup_storage_root '/usr/local/MySQL_DB_Backup/test'" vitess env: - name: EXTRA_MY_CNF value: /vt/config/mycnf/master_mysql56.cnf image: vitess/lite imagePullPolicy: Always livenessProbe: failureThreshold: 3 httpGet: path: /debug/vars 
port: 15002 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 10 name: vttablet ports: - containerPort: 15002 name: web protocol: TCP - containerPort: 16002 name: grpc protocol: TCP resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dev/log name: syslog - mountPath: /vt/vtdataroot name: vtdataroot - mountPath: /etc/ssl/certs/ca-certificates.crt name: certs readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-7g2jb readOnly: true - command: - sh - -c - |- mkdir -p $VTDATAROOT/tmp &amp;&amp; chown -R vitess /vt su -p -c "/vt/bin/mysqlctld -log_dir $VTDATAROOT/tmp -alsologtostderr -tablet_uid 100 -socket_file $VTDATAROOT/mysqlctl.sock -init_db_sql_file $VTROOT/config/init_db.sql" vitess env: - name: EXTRA_MY_CNF value: /vt/config/mycnf/master_mysql56.cnf image: vitess/lite imagePullPolicy: Always name: mysql resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dev/log name: syslog - mountPath: /vt/vtdataroot name: vtdataroot - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-7g2jb readOnly: true dnsPolicy: ClusterFirst priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - hostPath: path: /dev/log type: "" name: syslog - emptyDir: {} name: vtdataroot - hostPath: path: /etc/ssl/certs/ca-certificates.crt type: "" name: certs - name: default-token-7g2jb secret: defaultMode: 420 secretName: default-token-7g2jb status: conditions: - lastProbeTime: null lastTransitionTime: 2018-11-27T07:25:19Z message: '0/3 nodes are available: 1 node(s) had taints that the pod didn''t tolerate, 2 Insufficient cpu.' reason: Unschedulable status: "False" type: PodScheduled phase: Pending qosClass: Guaranteed </code></pre>
<p>As you can see down at the bottom:</p> <pre><code>message: '0/3 nodes are available: 1 node(s) had taints that the pod didn''t tolerate, 2 Insufficient cpu.' </code></pre> <p>Meaning that your two worker nodes are out of resources based on the limits you specified in the pod. You will need more workers, or smaller CPU requests.</p>
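<p>For reference, each vttablet pod in the spec above asks for 500m CPU and 1Gi of memory <em>per container</em> (two containers per pod), so five pods need roughly 5 CPUs and 10Gi in requests alone. A sketch of a smaller request block - the numbers are arbitrary examples, tune them to your workload:</p>
<pre><code>resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi
</code></pre>
<p>Alternatively, add more or larger worker nodes; the taint mentioned in the event is just the master refusing regular workloads, which is expected.</p>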
<p>I'm trying to build a script that can follow(-f) <code>kubectl get pods</code> see a realtime update when I make any changes/delete pods on Ubuntu server.</p> <p>What could be the easiest/efficient way to do so?</p>
<p>You can just use</p> <pre><code>kubectl get pod &lt;your pod name&gt; -w </code></pre> <p>whenever any update/change/delete happen to the pod, you will see the update.</p> <p>You can also use</p> <pre><code>watch -n 1 kubectl get pod &lt;your pod name&gt; </code></pre> <p>This will continuously run <code>kubectl get pod ...</code> with 1 seconds interval. So, you will see latest state.</p>
<p>I am trying to deploy a set of k8s clusters on the cloud, and there are two options: the masters are entrusted to the cloud provider, or maintained by myself. So I wonder, if the masters are entrusted to the provider, could they leak the data on the workers? In short, will the master know the data on the workers/nodes?</p>
<p>The abstractions in Kubernetes are very well defined with clear boundaries. You have to understand the concept of Volumes first. As defined <a href="http://kubernetesbyexample.com/volumes/" rel="nofollow noreferrer">here</a>, </p> <blockquote> <p>A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts. </p> </blockquote> <p>Volumes are attached to the containers in a pod and There are several <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">types of volumes</a></p> <p>You can see the layers of abstraction <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/" rel="nofollow noreferrer">source</a><a href="https://i.stack.imgur.com/S7pgW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7pgW.png" alt="The layers of abstraction"></a></p> <p><strong>Master to Cluster communication</strong> </p> <p>There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.</p> <p>Also, you should check <a href="https://kubernetes.io/docs/concepts/architecture/cloud-controller/" rel="nofollow noreferrer">the CCM</a> - The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud specific vendor code and the Kubernetes core to evolve independent of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.</p> <p>Hope this answers all your questions related to Master accessing the data on Workers. </p> <p>If you are still looking for more secure ways, check <a href="https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/" rel="nofollow noreferrer">11 Ways (Not) to Get Hacked</a></p>
<p>When I try to edit the PVC, Kubernetes gives error saying:</p> <blockquote> <p>The StatefulSet "es-data" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.</p> </blockquote> <p>I am trying to increase the disk size of elasticsearch which is deployed as a statefulset on AKS.</p>
<p>The error is self-explanatory. You can only update the <code>replicas</code>, <code>template</code>, and <code>updateStrategy</code> parts of a StatefulSet spec. Also, you can't resize a PVC through the StatefulSet's volume claim templates. However, from Kubernetes 1.11 you can resize a PVC itself, but it is still an alpha feature.</p>
<p>Ref: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim" rel="nofollow noreferrer">Resizing an in-use PersistentVolumeClaim</a></p>
<blockquote>
  <p>Note: Alpha features are not enabled by default and you have to enable them manually while creating the cluster.</p>
</blockquote>
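<p>If your cluster version and the Azure disk provisioner support expansion, a rough sequence looks like this (the StorageClass and PVC names and the target size are placeholders; this also assumes the relevant feature gates are enabled on your cluster):</p>
<pre><code># allow expansion on the StorageClass backing the claims
kubectl patch storageclass default -p '{"allowVolumeExpansion": true}'

# then grow the claim itself (not the StatefulSet's volumeClaimTemplates)
kubectl patch pvc data-es-data-0 -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
</code></pre>
<p>Depending on the volume type, the pod may need to be restarted before the filesystem is actually resized.</p>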
<p>I'm currently working on deploying an elasticseacrh cluster in K8s. Can anyone help me understand what are the cons/pros of deploying the ES cluster inside our K8s cluster or outside? <strong>Thanks in advance!</strong></p>
<p>A big pro is data ingestion. If you have your ES cluster inside your k8s cluster, data ingestion will be faster.</p>
<p>However, a big con is resources. ES will eat away your resources worse than google-chrome eats your RAM. And I mean, a lot.</p>
<p>And maintaining it can be quite cumbersome. Not sure about your use case, but if it is logging (as in most cases), cloud providers usually have their own solution for that.</p>
<p>If not, then:</p>
<p>I would recommend having dedicated nodes for ES in your cluster, otherwise it might affect other pods if there are peaks and ES starts using a lot of node resources. A sketch of how to dedicate nodes follows below.</p>
<p>Also make sure to familiarize yourself with and optimize your cold-warm-hot data; it will save you a lot of time and resources.</p>
<p><strong>EDIT</strong></p>
<p>I haven't emphasized how important this faster data ingestion is, so it might not seem like a good enough reason to deploy it inside the cluster. The bottom line is pretty obvious: <strong>network latency and bandwidth</strong>.</p>
<p>These things can <em>really add up</em> (picking up all those logs from all those pods, then scaling those same pods, then expanding the cluster, then again...), so every unit counts. If your VMs do not suffer from those two (meaning, they have the same latency as any other node of the cluster), I think it <em>won't make a huge difference</em>.</p>
<p>On the other hand, I see no big benefit in separating ES from the cluster. It is a part of your infrastructure anyway.</p>
<p>What if tomorrow you decide to switch to AWS or GKE? You would have to change your deployments and set the whole thing up again. On the other hand, if it's already a part of your cluster, just <code>kubectl apply</code> and 🤷</p>
<p>I can also guess that you will try to set up an ELK stack. If time and goodwill allow, give <a href="https://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> a chance (it is 100% compatible with all logstash clients but much more lightweight).</p>
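<p>Regarding dedicated nodes: a common pattern is to label and taint a few nodes for Elasticsearch and make only the ES pods tolerate the taint. A sketch, with hypothetical node and label names:</p>
<pre><code>kubectl label nodes es-node-1 dedicated=elasticsearch
kubectl taint nodes es-node-1 dedicated=elasticsearch:NoSchedule
</code></pre>
<p>and in the Elasticsearch pod spec:</p>
<pre><code>nodeSelector:
  dedicated: elasticsearch
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "elasticsearch"
  effect: "NoSchedule"
</code></pre>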
<p>I am trying to submit resources to the cluster using the Kubernetes CLI.</p>
<p>Below is what I use to submit jobs.</p>
<pre><code>kubectl -n namespace create -f &lt;manifest file PATH&gt;
</code></pre>
<p>I am actually running this on a Linux server, which would meet the application team's requirement and provide this service for the users to use.</p>
<p>The challenge is that I don't want to store the application teams' configuration files (*.yml/*.json) on the Linux server and call them from a local path; instead I want to call the configuration file from a remote location.</p>
<p>I am thinking of Bitbucket. Can someone please assist me on how we can call the file from a remote location so that <code>kubectl</code> can accept it?</p>
<p>Appreciate your time and effort.</p>
<p>You can use pipe with <code>kubectl</code> command to achieve it.</p> <p>For example,</p> <pre><code>curl &lt;your file url&gt; | kubectl apply -f - </code></pre> <p>Here is a working example:</p> <pre><code>curl https://gist.githubusercontent.com/hossainemruz/7926eb2660cc8a1bb214019b623e72ea/raw/d9505d06aee33e0144d9f2f9107290f1aba62cd5/mysql-initialization-with-init-container.yaml | kubectl apply -f - </code></pre>
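<p>Since the question mentions Bitbucket specifically: the same pattern works with a raw file URL from the repository. A sketch, assuming Bitbucket Cloud and HTTP basic auth with an app password - the URL pattern and credentials are placeholders and may differ for Bitbucket Server:</p>
<pre><code>curl -s -u "$BITBUCKET_USER:$BITBUCKET_APP_PASSWORD" \
  "https://bitbucket.org/&lt;workspace&gt;/&lt;repo&gt;/raw/master/manifests/job.yaml" \
  | kubectl -n namespace create -f -
</code></pre>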
<p>I tried installing dgraph (single server) using Kubernetes.<br> I created pod using:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml </code></pre> <p>Now all I need to do is to delete the created pods.<br> I tried deleting the pod using:</p> <pre><code>kubectl delete pod pod-name </code></pre> <p>The result shows <code>pod deleted</code>, but the pod keeps recreating itself.<br> I need to remove those pods from my Kubernetes. What should I do now?</p>
<p>I did face the same issue. Run the command:</p>
<pre><code>kubectl get deployment
</code></pre>
<p>You will get the deployment corresponding to your pod. Copy its name and then run:</p>
<pre><code>kubectl delete deployment xyz
</code></pre>
<p>Then check: no new pods will be created.</p>
<p>I'm developing an application in ASP.NET Core 2.1, and running it on a Kubernetes cluster. I've implemented authentication using OpenIDConnect, using Auth0 as my provider.</p> <p>This all works fine. Actions or controllers marked with the <code>[Authorize]</code> attribute redirect anonymous user to the identity provider, they log in, redirects back, and Bob's your uncle. </p> <p>The problems start occurring when I scale my deployment to 2 or more containers. When a user visits the application, they log in, and depending on what container they get served during the callback, authentication either succeeds or fails. Even in the case of authentication succeeding, repeatedly F5-ing will eventually redirect to the identity provider when the user hits a container they aren't authorized on.</p> <p>My train of thought on this would be that, using cookie authentication, the user stores a cookie in their browser, that gets passed along with each request, the application decodes it and grabs the JWT, and subsequently the claims from it, and the user is authenticated. This makes the whole thing stateless, and therefore should work regardless of the container servicing the request. As described above however, it doesn't appear to actually work that way. </p> <p>My configuration in <code>Startup.cs</code> looks like this:</p> <pre><code>services.AddAuthentication(options =&gt; { options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = CookieAuthenticationDefaults.AuthenticationScheme; }) .AddCookie() .AddOpenIdConnect("Auth0", options =&gt; { options.Authority = $"https://{Configuration["Auth0:Domain"]}"; options.ClientId = Configuration["Auth0:ClientId"]; options.ClientSecret = Configuration["Auth0:ClientSecret"]; options.ResponseType = "code"; options.Scope.Clear(); options.Scope.Add("openid"); options.Scope.Add("profile"); options.Scope.Add("email"); options.TokenValidationParameters = new TokenValidationParameters { NameClaimType = "name" }; options.SaveTokens = true; options.CallbackPath = new PathString("/signin-auth0"); options.ClaimsIssuer = "Auth0"; options.Events = new OpenIdConnectEvents { OnRedirectToIdentityProviderForSignOut = context =&gt; { var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}"; var postLogoutUri = context.Properties.RedirectUri; if (!string.IsNullOrEmpty(postLogoutUri)) { if (postLogoutUri.StartsWith("/")) { var request = context.Request; postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri; } logoutUri += $"&amp;returnTo={Uri.EscapeDataString(postLogoutUri)}"; } context.Response.Redirect(logoutUri); context.HandleResponse(); return Task.CompletedTask; }, OnRedirectToIdentityProvider = context =&gt; { context.ProtocolMessage.SetParameter("audience", "https://api.myapp.com"); // Force the scheme to be HTTPS, otherwise we end up redirecting back to HTTP in production. // They should seriously make it easier to make Kestrel serve over TLS in the same way ngninx does... context.ProtocolMessage.RedirectUri = context.ProtocolMessage.RedirectUri.Replace("http://", "https://", StringComparison.OrdinalIgnoreCase); Debug.WriteLine($"RedirectURI: {context.ProtocolMessage.RedirectUri}"); return Task.FromResult(0); } }; }); </code></pre> <p>I've spent hours trying to address this issue, and came up empty. 
The only thing I can think of that could theoretically work now is using sticky load balancing, but that's more applying a band-aid than actually fixing the problem.</p> <p>One of the main reasons to use Kubernetes is its resilience and ability to handle scaling very well. As it stands, I can only scale my backing services, and my main application would have to run as a single pod. That's far from ideal.</p> <p>Perhaps there is some mechanism somewhere that creates affinity with a specific instance that I'm not aware of?</p> <p>I hope someone can point me in the right direction.</p> <p>Thanks!</p>
<p>The cookie issued by authentication is encrypted via Data Protection. Data Protection by default is scoped to a particular application, or instance thereof. If you need to share an auth cookie between instances, you need to ensure that the data protection keys are persisted to a common location and that the application name is the same.</p>
<pre><code>services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"))
    .SetApplicationName("MyApp");
</code></pre>
<p>You can find more info in the ASP.NET Core Data Protection docs on sharing authentication cookies between applications.</p>
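<p>In a Kubernetes deployment, one way to give every replica the same key ring is to point <code>PersistKeysToFileSystem</code> at a directory backed by a shared volume (for example a ReadWriteMany PersistentVolumeClaim) mounted into each pod at the same path. A sketch of the pod-spec side, assuming a PVC named <code>dpkeys</code> already exists and the app persists its keys to <code>/var/dpkeys</code>:</p>
<pre><code>spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: dpkeys
      mountPath: /var/dpkeys
  volumes:
  - name: dpkeys
    persistentVolumeClaim:
      claimName: dpkeys
</code></pre>
<p>Other key stores (Redis, a database, blob storage) work just as well, as long as all replicas share the same store and application name.</p>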
<p>I am trying to install istio in a minikube cluster</p> <p>I followed the tutorial on this page <a href="https://istio.io/docs/setup/kubernetes/quick-start/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/quick-start/</a></p> <p>I am trying to use Option 1 : <a href="https://istio.io/docs/setup/kubernetes/quick-start/#option-1-install-istio-without-mutual-tls-authentication-between-sidecars" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/quick-start/#option-1-install-istio-without-mutual-tls-authentication-between-sidecars</a></p> <p>I can see that the services have been created but the deployment seems to have failed.</p> <pre><code>kubectl get pods -n istio-system No resources found </code></pre> <p>How can i troubleshoot this ?</p> <p>Here are the results of get deployment</p> <pre><code>kubectl get deployment -n istio-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE grafana 1 0 0 0 4m istio-citadel 1 0 0 0 4m istio-egressgateway 1 0 0 0 4m istio-galley 1 0 0 0 4m istio-ingressgateway 1 0 0 0 4m istio-pilot 1 0 0 0 4m istio-policy 1 0 0 0 4m istio-sidecar-injector 1 0 0 0 4m istio-telemetry 1 0 0 0 4m istio-tracing 1 0 0 0 4m prometheus 1 0 0 0 4m servicegraph 1 0 0 0 4m </code></pre>
<p>This is what worked for me. Don't use the <code>--extra-config</code>s while starting minikube. This is crashing kube-controller-manager-minikube as its not able to find the file </p> <blockquote> <p>error starting controllers: failed to start certificate controller: error reading CA cert file "/var/lib/localkube/certs/ca.crt": open /var/lib/localkube/certs/ca.crt: no such file or directory</p> </blockquote> <p>Just start minikube with this command. I have minikube V0.30.0.</p> <pre><code>minikube start </code></pre> <p><strong>Output:</strong></p> <pre><code>Starting local Kubernetes v1.10.0 cluster... Starting VM... Downloading Minikube ISO 170.78 MB / 170.78 MB [============================================] 100.00% 0s Getting VM IP address... Moving files into cluster... Downloading kubelet v1.10.0 Downloading kubeadm v1.10.0 Finished Downloading kubeadm v1.10.0 Finished Downloading kubelet v1.10.0 Setting up certs... Connecting to cluster... Setting up kubeconfig... Starting cluster components... Kubectl is now configured to use the cluster. Loading cached images from config file. </code></pre> <p>Pointing to <code>istio-1.0.4 folder</code>, run this command </p> <pre><code>kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml </code></pre> <p>This should install all the required crds </p> <p>Run this command</p> <pre><code>kubectl apply -f install/kubernetes/istio-demo.yaml </code></pre> <p>After successful creation of rules, services, deployments etc., Run this command</p> <pre><code> kubectl get pods -n istio-system NAME READY STATUS RESTARTS AGE grafana-9cfc9d4c9-h2zn8 1/1 Running 0 5m istio-citadel-74df865579-d2pbq 1/1 Running 0 5m istio-cleanup-secrets-ghlbf 0/1 Completed 0 5m istio-egressgateway-58df7c4d8-4tg4p 1/1 Running 0 5m istio-galley-8487989b9b-jbp2d 1/1 Running 0 5m istio-grafana-post-install-dn6bw 0/1 Completed 0 5m istio-ingressgateway-6fc88db97f-49z88 1/1 Running 0 5m istio-pilot-74bb7dcdd-xjgvz 0/2 Pending 0 5m istio-policy-58878f57fb-t6fqt 2/2 Running 0 5m istio-security-post-install-vqbzw 0/1 Completed 0 5m istio-sidecar-injector-5cfcf6dd86-lr8ll 1/1 Running 0 5m istio-telemetry-bf5558589-8hzcc 2/2 Running 0 5m istio-tracing-ff94688bb-bwzfs 1/1 Running 0 5m prometheus-f556886b8-9z6vp 1/1 Running 0 5m servicegraph-55d57f69f5-fvqbg 1/1 Running 0 5m </code></pre>
<p>I am going to use K8S to orchestrate docker containers. In k8s, I need to copy a file from host directory (<code>/configs/nginx/cas-server.conf</code>) to pod container directory(<code>/etc/nginx/nginx.conf</code>), but the current k8s only allows mount a directory, not to mount/copy a file. How to solve this problem? </p> <p>Below is my nginx-cas-server-deply.yaml file. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-cas-server-depl spec: replicas: 1 template: metadata: labels: app: nginx-cas-server-pod spec: containers: - name: nginx-cas-server-pod image: nginx imagePullPolicy: Never ports: - containerPort: 100 volumeMounts: - mountPath: /etc/nginx/nginx.conf name: nginx-cas-server-conf - mountPath: /app/cas-server/public name: nginx-cas-server-public volumes: - name: nginx-cas-server-conf hostPath: path: /configs/nginx/cas-server.conf - name: nginx-cas-server-public hostPath: path: /cas-server/public </code></pre>
<p>In a configuration for your Deployment, you need to use <code>mountPath</code> with <strong>directory</strong> and <strong>file</strong> names and <code>subPath</code> field with <strong>file</strong> name. Also, what is important, you need to have file on a Node named exactly as you want it to be mounted, therefore if you want to mount to <code>/etc/nginx/nginx.conf</code>, file should be named <code>nginx.conf</code></p> <p>Here is the example:</p> <p>Content of the directory on the Node:</p> <pre><code># ls /config/ nginx.conf some_dir </code></pre> <p>Configuration file for Nginx Deployment</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: run: nginx name: nginx namespace: default spec: replicas: 1 selector: matchLabels: run: nginx template: metadata: labels: run: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /etc/nginx/nginx.conf name: test subPath: nginx.conf volumes: - hostPath: path: /config name: test </code></pre>
<p>With the instruction <a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/worker.html</a> it is possible to bring up Kube cluster worker nodes. I wanted the worker node not to have public ip. I don't see Amazon gives me that option as when running the cloudformation script. How can I have option not to have public ip on worker nodes</p>
<p>You would normally set this up ahead of time in the Subnet rather than doing it per machine. You can set <code>Auto-assign public IPv4 address</code> to false in the subnets you are using the for the worker instances.</p>
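<p>If you prefer the CLI, the same setting can be flipped per subnet with the AWS CLI (the subnet ID is a placeholder):</p>
<pre><code>aws ec2 modify-subnet-attribute \
  --subnet-id subnet-0123456789abcdef0 \
  --no-map-public-ip-on-launch
</code></pre>
<p>With that in place, worker instances launched by the CloudFormation template into those subnets will not get a public IPv4 address (they will still need a NAT gateway or similar to reach the EKS API and pull images).</p>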
<p>I'm using <code>activeDeadlineSeconds</code> in my <code>Job</code> definition but it doesn't appear to have any effect. I have a CronJob that kicks off a job every minute, and I'd like that job to automatically kill off all its pods before another one is created (so 50 seconds seems reasonable). I know there are other ways to do this but this is ideal for our circumstances.</p> <p>I'm noticing that the pods aren't being killed off, however. Are there any limitations with <code>activeDeadlineSeconds</code>? I don't see anything in the documentation for K8s 1.7 - <a href="https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch" rel="nofollow noreferrer">https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch</a> I've also checked more recent versions.</p> <p>Here is a condensed version of my CronJob definition - </p> <pre><code>apiVersion: batch/v2alpha1 kind: CronJob metadata: name: kafka-consumer-cron spec: schedule: "*/1 * * * *" jobTemplate: spec: # JobSpec activeDeadlineSeconds: 50 # This needs to be shorter than the cron interval ## TODO - NOT WORKING! parallelism: 1 ... </code></pre>
<p>You can use <code>concurrencyPolicy: "Replace"</code>. This will terminate previous running pod then start a new one.</p> <p>Check comments from here: <a href="https://github.com/kubernetes/api/blob/b7bd5f2d334ce968edc54f5fdb2ac67ce39c56d5/batch/v1beta1/types.go#L108" rel="nofollow noreferrer">ConcurrencyPolicy</a></p>
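<p>For completeness, <code>concurrencyPolicy</code> sits at the CronJob spec level (next to <code>schedule</code>), not inside the <code>jobTemplate</code>. Applied to the snippet from the question, the placement would look roughly like:</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: kafka-consumer-cron
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:            # JobSpec
      parallelism: 1
      ...
</code></pre>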
<p>So far we have been using GKE public cluster for all our workloads. We have created a second, private cluster (still GKE) with improved security and availability (old one is single zone, new one is regional cluster). We are using Gitlab.com for our code, but using self-hosted Gitlab CI runner in the clusters.</p> <p>The runner is working fine on the public cluster, all workloads complete successfully. However on the private cluster, all kubectl commands of thr CI fail with <code>Unable to connect to the server: dial tcp &lt;IP&gt;:443: i/o timeout error</code>. The CI configuration has not changed - same base image, still using gcloud SDK with a CI-specific service account to authenticate to the cluster.</p> <p>Both clusters have master authorized networks enabled and have only our office IPs are set. Master is accessible from a public IP. Authentication is successful, client certificate &amp; basic auth are disabled on both. Cloud NAT is configured, nodes have access to the Internet (can pull container images, Gitlab CI can connect etc).</p> <p>Am I missing some vital configuration? What else should I be looking at?</p>
<p>I have found the solution to my problem, but I am not fully sure of the cause.</p> <p>I used <code>gcloud container clusters get-credentials [CLUSTER_NAME]</code>, which gave the master's public endpoint. However that is inaccessible from within the cluster for some reason - so I assume it would require adding the public IP of the NAT (which is not statically provided) to the authorized networks.</p> <p>I added the --internal-ip flag, which gave the cluster's internal IP address. The CI is now able to connect to the master.</p> <p>Source: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#internal_ip" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#internal_ip</a></p> <p>tl;dr - <code>gcloud container clusters get-credentials --internal-ip [CLUSTER_NAME]</code> </p>
<p><a href="https://i.stack.imgur.com/ShZX2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShZX2.jpg" alt="enter image description here"></a></p> <p>I have an application running in multiple pods. You can imagine the app as a web application which connects to Postgres (so each container has both app and Postgres processes). I would like to mount the volume into each pod at <code>/var/lib/postgresql/data</code> so that every app can have the same state of the database. They can read/write at the same time.</p> <p>This is just an idea of how I will go. My question is: is there any concern I need to be aware of? Or is this the totally wrong way to go?</p> <p>Or will it be better to separate Postgres from the app container into a single pod and let the app containers connect to that one pod?</p> <p>If my questions show knowledge I lack, please provide links I should read, thank you!</p>
<p>This will absolutely fail to work, and PostgreSQL will try to prevent you from starting several postmasters against the same data directory as good as it can. If you still manage to do it, instant data corruption will ensue.</p> <p>The correct way to do this is to have a single database server and have all your &ldquo;pods&rdquo; connect to that one. If you have many of these &ldquo;pods&rdquo;, you should probably use a connection pooler like pgbouncer to fight the problems caused by too many database connections.</p>
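<p>As a rough sketch of that layout (the names, image tag and PVC below are made up for illustration; a StatefulSet is also a common choice for databases), Postgres runs in its own single-replica Deployment backed by one volume, and the app pods connect to it through a Service instead of mounting the data directory themselves:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1                      # exactly one postmaster touches the data directory
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: postgres-pvc   # hypothetical PVC name
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
</code></pre> <p>The application pods then use the Service name (<code>postgres:5432</code>) as their database host, optionally behind a pooler such as pgbouncer as mentioned above.</p>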
<p>I have a Kubernetes 1.9.3 cluster and deployed Istio 1.0.12 on it. I created a namespace with istio-injection=enabled and created a deployment in that namespace. I don't see the Envoy proxy getting automatically injected into the pods created by the deployment.</p>
<p>Istio calls kube-apiserver to inject envoy proxy into the pods. Two plugins need to be enabled in kube-apiserver for proxy injection to work. </p> <p>kube-apiserver runs as a static pod and the pod manifest is available at <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. Update the line as shown below to include <code>MutatingAdmissionWebhook</code> and <code>ValidatingAdmissionWebhook</code> plugins (available since Kubernetes 1.9).</p> <pre><code>- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota </code></pre> <p>The kubelet will detect the changes and re-create kube-apiserver pod automatically.</p>
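<p>Before restarting anything, you can check that the admission registration API (which both webhooks rely on) is actually served by the apiserver, and that your namespace carries the injection label; the output shown below is only illustrative:</p> <pre><code># The admissionregistration API group must be available
$ kubectl api-versions | grep admissionregistration
admissionregistration.k8s.io/v1beta1

# The namespace must be labeled for injection
$ kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    1h
my-namespace   Active    1h        enabled
</code></pre>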
<p>Team, I need to delete tens of pods on a k8s cluster that are in an error state. I am getting them as below:</p> <pre><code>kubectl get pods --all-namespaces | grep -i -e Evict -e Error | awk -F ' ' '{print $1, $2, $4}'

test-asdfasdf asdfasdf2215251 Error
test-asdfasdf asdfasdf2215252 Error
test-asdfasdf asdfasdf2215253 Error
test-asdfasdf asdfasdf2215254 Error
test-asdfasdf asdfasdf2215255 Error
test-asdfasdf asdfasdf2215256 Error
</code></pre> <p>Manually I am deleting them like this:</p> <pre><code>kubectl delete pod asdfasdf2215251 -n test-asdfasdf
</code></pre> <p>But can I write a script that just looks for errors on any pod and deletes all of them? I am working on a script myself, but I'm new to this, so it is taking a while.</p>
<p>Starting point:</p> <pre><code>kubectl get pods --all-namespaces | grep -i -e Evict -e Error | awk -F ' ' '{print $1, $2}' |
</code></pre> <p>will produce a stream of:</p> <pre><code>test-asdfasdf asdfasdf2215251
test-asdfasdf asdfasdf2215252
test-asdfasdf asdfasdf2215253
test-asdfasdf asdfasdf2215254
test-asdfasdf asdfasdf2215255
test-asdfasdf asdfasdf2215256
</code></pre> <p>We can pipe that stream into a <code>while</code> loop:</p> <pre><code>while IFS=' ' read -r arg1 arg2; do
  kubectl delete pod "$arg2" -n "$arg1"
done
</code></pre> <p>or into <code>xargs</code>:</p> <pre><code>xargs -l1 -- sh -c 'kubectl delete pod "$2" -n "$1"' --
</code></pre> <p>or use <code>parallel</code> or other similar tools to do the same.</p>
<p>Since this morning I've had a question in my mind: what is the best way to manage configuration files on Kubernetes?</p> <p>At the moment we use <code>PVC &amp; PV</code>.</p> <p>But I would like to update the config files <strong>automatically</strong> during the <strong>CI pipeline</strong>. For one application we have almost <strong>10 config files</strong>.</p> <p>How can I update them automatically with Kubernetes? Or do you think I will have to switch to a ConfigMap?</p> <p>I'm still reading the documentation on the internet but I can't find an answer to my question :(</p> <p>Thanks</p>
<p>If it is configuration then a ConfigMap fits. You can update the ConfigMap resource descriptor file as part of your CI pipeline and then apply that change with 'kubectl apply' and you can do a rolling update on your app. This moves you in the direction of every config change being a tracked and versioned change. </p> <p>You may also want to look at Secrets, depending on how sensitive your config is. </p> <p>I guess you will have the same number of files whether in a PV or a ConfigMap - the choice only affects how they are made available to the app in Kubernetes. But if you find your CI pipelines are doing a lot of replacements then a templating system could help. I'd suggest looking at helm so that you can pass parameters into your deployments at the time of deploy. </p>
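<p>As a minimal sketch (the file name, keys and deployment name below are placeholders), the CI pipeline would keep the ConfigMap manifest in the repository, edit or template the values it needs, apply it, and then roll the Deployment so the pods pick up the new files:</p> <pre><code># app-config.yaml - versioned in the repo and updated by the CI pipeline
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log.level=INFO
    feature.x.enabled=true
  logging.conf: |
    handlers=console
</code></pre> <pre><code>kubectl apply -f app-config.yaml
kubectl rollout restart deployment/my-app   # re-creates the pods so they mount the new content
</code></pre> <p><code>kubectl rollout restart</code> is only available in newer kubectl versions; with older ones, bumping an annotation in the pod template (for example a checksum of the ConfigMap, as Helm charts often do) triggers the same rolling update.</p>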
<p>I created a helm chart which derives the value of <code>app.kubernetes.io/instance</code> from a template value like this:</p> <pre><code>labels:
  app.kubernetes.io/name: {{ include "mychart.name" . }}
  helm.sh/chart: {{ include "mychart.chart" . }}
  app.kubernetes.io/instance: {{ .Release.Name }}
</code></pre> <p>I don't know in which file, and how, to override its value. I don't want to override it on the command line as described at <a href="https://stackoverflow.com/questions/51718202/helm-how-to-define-release-name-value">Helm how to define .Release.Name value</a></p>
<p>The intention is that you don't set the <code>.Release.Name</code> within the helm chart. It is either set to an automatically generated value by helm when the user runs <code>helm install</code> or is set by the user as a parameter with <code>helm install &lt;name&gt;</code>. Imagine if the chart were to set the value of <code>.Release.Name</code> - the user would still be able to set a different value for it with <code>helm install &lt;name&gt;</code> and there would then be a conflict where it wouldn't be clear which name would be used.</p>
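<p>For illustration (the chart path and release name here are made up), the value comes from the install command itself rather than from any file inside the chart:</p> <pre><code># Helm 2 syntax
$ helm install --name my-release ./mychart

# Helm 3 syntax
$ helm install my-release ./mychart
</code></pre> <p>With either command, <code>{{ .Release.Name }}</code> renders as <code>my-release</code>, so the label becomes <code>app.kubernetes.io/instance: my-release</code>. If you need a value you control from a file, define it in <code>values.yaml</code> and reference it with <code>{{ .Values.&lt;key&gt; }}</code> instead.</p>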
<p>I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?</p> <p>Also, when services are defined, where on the master node are they stored?</p> <p>Kind regards, Charles</p>
<ul> <li><p>If you define a Service for your app, you can access it from outside the cluster using that Service.</p></li> <li><p>Services come in several types, including <code>NodePort</code>, where the Service is exposed on a port of every cluster node, so you can reach it regardless of the actual location of the pod.</p></li> <li><p>You can also access the endpoints (the actual pod IPs and ports) from inside the cluster, but not from outside.</p></li> <li><p>All of the above uses Kubernetes service discovery.</p></li> <li><p>There are two types of service discovery:</p> <ul> <li>Internal service discovery</li> <li>External service discovery</li> </ul></li> </ul> <p><a href="https://i.stack.imgur.com/hrovm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hrovm.png" alt="enter image description here"></a></p>
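<p>For example (the pod name, IP and port below are only placeholders), you can look up a pod's cluster-internal IP and reach it directly from another pod, even without a Service:</p> <pre><code># Find the pod's IP address
$ kubectl get pod my-pod -o wide

# From any other pod inside the cluster, that IP is directly routable
$ curl http://10.244.1.23:8080/
</code></pre>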
<p>I have multiple Secrets in Kubernetes. All of them contain many values, for example:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: paypal-secret
type: Opaque
data:
  PAYPAL_CLIENT_ID: base64_PP_client_id
  PAYPAL_SECRET: base64_pp_secret
stringData:
  PAYPAL_API: https://api.paypal.com/v1
  PAYPAL_HOST: api.paypal.com
</code></pre> <p>I'm curious how to pass all of the values from all <code>Secrets</code> to a <code>ReplicaSet</code>, for example.</p> <p>I tried this approach:</p> <pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pp-debts
  labels:
    environment: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      environment: prod
  template:
    metadata:
      labels:
        environment: prod
    spec:
      containers:
      - name: integration-app
        image: my-container-image
        envFrom:
        - secretRef:
            name: intercom-secret
        envFrom:
        - secretRef:
            name: paypal-secret
        envFrom:
        - secretRef:
            name: postgres-secret
        envFrom:
        - secretRef:
            name: redis-secret
</code></pre> <p>But when I connected to the pod and looked at the env variables, I was only able to see values from the <code>redis-secret</code>.</p>
<p>Try using one <code>envFrom</code> with multiple entries under it, as below:</p> <pre><code>      - name: integration-app
        image: my-container-image
        envFrom:
        - secretRef:
            name: intercom-secret
        - secretRef:
            name: paypal-secret
        - secretRef:
            name: postgres-secret
        - secretRef:
            name: redis-secret
</code></pre> <p>There's an example at the bottom of <a href="https://web.archive.org/web/20211122093817/https://dchua.com/posts/2017-04-21-load-env-variables-from-configmaps-and-secrets-upon-pod-boot/" rel="noreferrer">this blog post by David Chua</a></p>
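<p>If it helps, you can confirm the result from inside a running pod (the pod name is a placeholder and the grep pattern depends on your key names); keys from all four Secrets should now be present:</p> <pre><code>$ kubectl exec -it &lt;pod-name&gt; -- env | grep PAYPAL
</code></pre>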
<p>Using an NGINX Ingress in Kubernetes, I can't see a way to redirect my traffic from non-www to www, or to another domain, etc., on a per-host basis.</p> <p>I've tried looking in the ConfigMap docs but can't see what I need. Maybe it can go in the Ingress itself?</p> <p>I've also seen an example using annotations, but this seems to be Ingress-wide, so I couldn't have specific redirects per host.</p>
<p>Indeed a redirect is possible with a simple annotation:</p> <ul> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#permanent-redirect" rel="noreferrer"><code>nginx.ingress.kubernetes.io/permanent-redirect: https://www.gothereinstead.com</code></a></li> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#redirect-fromto-www" rel="noreferrer"><code>nginx.ingress.kubernetes.io/from-to-www-redirect: "true"</code></a></li> </ul> <p>But as you mentioned, it's "Ingress" wide and not configurable per host, per domain or even per path. So you'll have to do it yourself through the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="noreferrer"><code>configuration-snippet</code></a> annotation (prefixed with <code>nginx.ingress.kubernetes.io/</code> on recent ingress-nginx releases), which gives you a great deal of power thanks to regular expressions:</p> <pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: self-made-redirect
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = 'blog.yourdomain.com') {
        return 301 https://yournewblogurl.com;
      }
      if ($host ~ ^(.+)\.yourdomain\.com$) {
        return 301 https://$1.anotherdomain.com$request_uri;
      }
spec:
  rules:
  - host: ...
</code></pre> <p>If you are not quite used to NGINX, you'll learn more about what's possible in the snippet, particularly what the <code>$host</code> variable is, right in <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#variables" rel="noreferrer">the NGINX documentation</a>.</p>
<p>I have multiple Node.js apps / services running on Google Kubernetes Engine (GKE); currently 8 pods are running. I did not set up resource limits when I created the pods, so now I'm getting a CPU Unscheduled error.</p> <p>I understand I have to set up resource limits. From what I know, 1 CPU / node = 1000Mi? My questions are:</p> <p>1) What's the ideal resource limit I should set up? Like the minimum? For a pod that's rarely used, can I set up 20Mi or 50Mi?</p> <p>2) How many pods are ideal to run on a single Kubernetes node? Right now I have 2 nodes set up, which I want to reduce to 1.</p> <p>3) What do people use in production? And for a development cluster?</p> <p>Here are my nodes.</p> <p>Node 1:</p> <pre><code>Namespace    Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
---------    ----                                                   ------------  ----------  ---------------  -------------
default      express-gateway-58dff8647-f2kft                        100m (10%)    0 (0%)      0 (0%)           0 (0%)
default      openidconnect-57c48dc448-9jmbn                         100m (10%)    0 (0%)      0 (0%)           0 (0%)
default      web-78d87bdb6b-4ldsv                                   100m (10%)    0 (0%)      0 (0%)           0 (0%)
kube-system  event-exporter-v0.1.9-5c8fb98cdb-tcd68                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
kube-system  fluentd-gcp-v2.0.17-mhpgb                              100m (10%)    0 (0%)      200Mi (7%)       300Mi (11%)
kube-system  kube-dns-5df78f75cd-6hdfv                              260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
kube-system  kube-dns-autoscaler-69c5cbdcdd-2v2dj                   20m (2%)      0 (0%)      10Mi (0%)        0 (0%)
kube-system  kube-proxy-gke-qp-cluster-default-pool-7b00cb40-6z79   100m (10%)    0 (0%)      0 (0%)           0 (0%)
kube-system  kubernetes-dashboard-7b89cff8-9xnsm                    50m (5%)      100m (10%)  100Mi (3%)       300Mi (11%)
kube-system  l7-default-backend-57856c5f55-k9wgh                    10m (1%)      10m (1%)    20Mi (0%)        20Mi (0%)
kube-system  metrics-server-v0.2.1-7f8dd98c8f-5z5zd                 53m (5%)      148m (15%)  154Mi (5%)       404Mi (15%)

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  893m (95%)    258m (27%)  594Mi (22%)      1194Mi (45%)
</code></pre> <p>Node 2:</p> <pre><code>Namespace    Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
---------    ----                                                   ------------  ----------  ---------------  -------------
default      kube-healthcheck-55bf58578d-p2tn6                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
default      pubsub-function-675585cfbf-2qgmh                       100m (10%)    0 (0%)      0 (0%)           0 (0%)
default      servicing-84787cfc75-kdbzf                             100m (10%)    0 (0%)      0 (0%)           0 (0%)
kube-system  fluentd-gcp-v2.0.17-ptnlg                              100m (10%)    0 (0%)      200Mi (7%)       300Mi (11%)
kube-system  heapster-v1.5.2-7dbb64c4f9-bpc48                       138m (14%)    138m (14%)  301656Ki (11%)   301656Ki (11%)
kube-system  kube-dns-5df78f75cd-89c5b                              260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
kube-system  kube-proxy-gke-qp-cluster-default-pool-7b00cb40-9n92   100m (10%)    0 (0%)      0 (0%)           0 (0%)

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  898m (95%)    138m (14%)  619096Ki (22%)   782936Ki (28%)
</code></pre> <p>My plan is to move all of this onto 1 node.</p>
<p>According to the official Kubernetes documentation:</p> <p>1) You can go quite low in terms of memory and CPU, but you need to give pods enough CPU and memory to function properly. I have gone as low as 100m CPU and 200Mi memory (it is highly dependent on the application you're running and also on the number of replicas).</p> <p>2) <code>There should not be more than 100 pods per node</code> (this is the extreme case).</p> <p>3) Production clusters are never single-node. This is a very good read about <a href="https://techbeacon.com/one-year-using-kubernetes-production-lessons-learned" rel="nofollow noreferrer">Kubernetes in production</a>.</p> <p>But keep in mind: if you increase the number of pods on a single node, you might need to increase the size (<code>in terms of resources</code>) of that node.</p> <p>Memory and CPU usage tends to grow proportionally with the size of, and load on, the cluster.</p> <p>Here is the official documentation stating the requirements:</p> <blockquote> <p><a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/cluster-large/</a></p> </blockquote>
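<p>For reference, here is a minimal sketch of what setting requests and limits looks like on one of the containers (the deployment name, image and numbers below are only illustrative placeholders, not recommendations for your workloads):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-gateway        # hypothetical name matching one of the pods above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: express-gateway
  template:
    metadata:
      labels:
        app: express-gateway
    spec:
      containers:
      - name: express-gateway
        image: my-registry/express-gateway:latest   # placeholder image
        resources:
          requests:            # what the scheduler reserves on the node
            cpu: 100m
            memory: 128Mi
          limits:              # hard ceiling enforced at runtime
            cpu: 250m
            memory: 256Mi
</code></pre> <p>Note that the scheduler only looks at the <code>requests</code> when deciding whether a pod fits on a node, so it is the requests (not the limits) that determine how many pods you can pack onto a single node.</p>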