<p><code>kubectl</code> exposes commands that can be used to create a Service for an application and assign it an IP address so it can be accessed from the internet.</p> <p>As far as I understand, to access any application within a Kubernetes cluster there should be a Service resource created, and that Service should have an IP address which is accessible from an external network.</p> <p>But in the case of <code>port-forward</code>, how does <code>kubectl</code> create a connection to the application without an externally accessible IP address?</p>
<p>To start, it's useful to note and remember that in Kubernetes, <em>every</em> pod gets its own IP address (typically from the 10.* range) that is usable only within the cluster. The <code>port-forward</code> feature of kubectl simply tunnels traffic from a specified port on your local host machine to the specified port on the specified pod. The API server then becomes, in a sense, a temporary gateway between your local port and the Kubernetes cluster.</p> <p><code>kubectl port-forward</code> forwards connections from a local port to a port on a pod. Compared to <code>kubectl proxy</code>, <code>kubectl port-forward</code> is more generic, as it can forward arbitrary TCP traffic while <code>kubectl proxy</code> can only forward HTTP traffic.</p> <p><code>kubectl port-forward</code> is useful for testing/debugging purposes, as it lets you access your service locally without exposing it.</p> <p>Below, the pod name is given and its port <code>6379</code> is forwarded to <code>localhost:6379</code>:</p> <pre><code>kubectl port-forward redis-master-765d459796-258hz 6379:6379 </code></pre> <p>which is the same as</p> <pre><code>kubectl port-forward pods/redis-master-765d459796-258hz 6379:6379 </code></pre> <p>or</p> <pre><code>kubectl port-forward deployment/redis-master 6379:6379 </code></pre> <p>or</p> <pre><code>kubectl port-forward rs/redis-master 6379:6379 </code></pre> <p>or</p> <pre><code>kubectl port-forward svc/redis-master 6379:6379 </code></pre>
<p>Having trouble getting a WordPress Kubernetes service to listen on my machine so that I can access it with my web browser. It just says "External IP" is pending. <strong>I'm using the Kubernetes configuration from Docker Edge v18.06 on Mac, with advanced Kube config enabled (not swarm).</strong></p> <p>Following this tutorial FROM: <a href="https://www.youtube.com/watch?time_continue=65&amp;v=jWupQjdjLN0" rel="nofollow noreferrer">https://www.youtube.com/watch?time_continue=65&amp;v=jWupQjdjLN0</a></p> <p>And using .yaml config files from <a href="https://github.com/kubernetes/examples/tree/master/mysql-wordpress-pd" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/mysql-wordpress-pd</a></p> <pre><code>MACPRO:mysql-wordpress-pd me$ kubectl get services
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP      10.96.0.1       &lt;none&gt;        443/TCP        48m
wordpress         LoadBalancer   10.99.205.222   &lt;pending&gt;     80:30875/TCP   19m
wordpress-mysql   ClusterIP      None            &lt;none&gt;        3306/TCP       19m
</code></pre> <p>The commands to get things running, to see for yourself:</p> <pre><code>kubectl create -f local-volumes.yaml
kubectl create secret generic mysql-pass --from-literal=password=DockerCon
kubectl create -f mysql-deployment.yaml
kubectl create -f wordpress-deployment.yaml
kubectl get pods
kubectl get services
</code></pre> <p>Start the admin console to see more detailed config in your web browser:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
</code></pre> <p>I'm hoping someone can clarify things for me here. Thank you.</p>
<p>For Docker for Mac, you should use your host's DNS name or IP address to access exposed services. The "external IP" field will never fill in here. (If you were in an environment like AWS or GCP where a LoadBalancer Kubernetes Service creates a cloud-hosted load balancer, the cloud provider integration would provide the load balancer's IP address here, but that doesn't make sense for single-host solutions.)</p> <p>Note that <a href="https://github.com/docker/for-mac/issues/2445" rel="nofollow noreferrer">I've had some trouble</a> figuring out which <em>port</em> is involved; answers to that issue suggest you need to use the service port (80) but you might need to try other things.</p>
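<p>One thing that may be worth trying, as a rough sketch: because the <code>wordpress</code> Service above is also given a node port (<code>30875</code> in that output; yours will differ), Docker for Mac will usually make that port reachable on <code>localhost</code>:</p> <pre><code># Find the node port in the PORT(S) column (80:30875/TCP in the output above)
kubectl get service wordpress

# Then try the node port on the host (30875 is just the example value from above)
curl http://localhost:30875/
</code></pre>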
<p>I deploy an <code>nginx ingress controller</code> to my cluster. This provisions a load balancer in my cloud provider (assume AWS or GCE). However, all traffic inside the cluster is routed by the controller based on my ingress rules and annotations.</p> <p>What is then the purpose of having a load balancer in the cloud sit in front of this controller? It seems like the controller is doing the actual load balancing anyway?</p> <p>I would like to understand how to have it so that the cloud load balancer is actually routing traffic towards machines inside the cluster while still following all my <code>nginx</code> configurations/annotations or even if that is possible/makes sense.</p>
<p>You may have a High Availability (HA) cluster with multiple masters, and a Load Balancer is an easy and practical way to "enter" your Kubernetes cluster, as your applications are supposed to be usable by your users (who are on a different network from your cluster). So you need to have an entry point to your K8S cluster.</p> <p>An LB is an easily configurable entry point.</p> <p>Take a look at this picture as an example: <a href="https://i.stack.imgur.com/982Fk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/982Fk.png" alt="enter image description here"></a></p> <p>Your API servers are load balanced. A call from outside your cluster will pass through the LB and will be handled by an API server. Only one master (the elected one) will be responsible for persisting the state of the cluster in the etcd database.</p> <p>When you have an <code>ingress controller</code> and <code>ingress rules</code>, in my opinion it's easier to configure and manage them inside K8S, instead of writing them in the LB configuration file (and reloading the configuration on each modification).</p> <p>I suggest you take a look at <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a> and work through the exercises in it. It's a good way to understand the flow.</p>
<p>I have deployed a Kubernetes cluster to GCP. For this cluster, I added some deployments. Those deployments use external resources that are protected by a security policy which rejects connections from unallowed IP addresses.</p> <p>So, in order for a pod to connect to the external resource, I need to manually allow the IP address of the node hosting the pod.</p> <p>It's also possible for me to allow a range of IP addresses in which my nodes are expected to be running.</p> <p>Until now, I have only found their internal IP address range. It looks like this:</p> <pre><code>Pod address range    10.16.0.0/14
</code></pre> <p>The question is how to find the <strong>range of external</strong> IP addresses for my nodes?</p>
<p>Let's begin with the IPs that are assigned to Nodes:</p> <ul> <li>When we create a Kubernetes cluster, GCP in the backend creates Compute Engine machines with specific internal and external IP addresses.</li> <li>In your case, just go to the Compute Engine section of the Google Cloud Console, capture all the external IPs of the VMs whose names start with gke-, and whitelist them.</li> <li>As for the range: in GCP only the internal IP ranges are known; external IP addresses are randomly assigned from a pool of IPs, hence you need to whitelist them one at a time.</li> </ul> <p>To get the pod descriptions and IPs, run <code>kubectl describe pods</code>.</p>
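<p>If you prefer the command line, a quick sketch of how you might list those external IPs with <code>gcloud</code> (this assumes the default GKE naming scheme, where node VM names start with <code>gke-</code>):</p> <pre><code># List GKE node VMs together with their external (NAT) IPs
gcloud compute instances list \
  --filter="name ~ ^gke-" \
  --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"
</code></pre>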
<p>On Kubernetes 1.5.2, all of a sudden <strong>kubectl logs</strong> is showing an error while other commands are working fine, so there is definitely no issue with the cluster setup, but possibly some sort of bug. Kindly advise if there is a workaround to get the logs working.</p> <pre><code>$ kubectl logs -f some-pod-name </code></pre> <p>The error is given below:</p> <pre><code>**Error from server: Get https://Minion-1-IP:10250/containerLogs/default/some-pod-name-3851540691-b18vp/some-pod-name?follow=true: net/http: TLS handshake timeout** </code></pre> <p>Please help.</p>
<p>In short, for me, the problem was caused by a misconfigured proxy.</p> <p>I came across this very same symptom last week. After poking around for some time, this <a href="https://github.com/kubernetes/kubeadm/issues/211" rel="nofollow noreferrer">ISSUE</a> showed up.</p> <p>For me, it's because I initialized the cluster with</p> <pre><code>HTTP_PROXY=http://10.196.109.214:8118 HTTPS_PROXY=http://10.196.109.214:8118 NO_PROXY=10.196.109.214,localhost,127.0.0.1 kubeadm init </code></pre> <p><code>10.196.109.214</code> is my master node, on which I set up an HTTP proxy. The proxy settings are automatically written into the kubernetes manifests. NO_PROXY here does not include any worker nodes, which is why everything else works fine but I can't retrieve any logs from the workers.</p> <p>I just hand-edited the env part of <code>/etc/kubernetes/manifests/kube-*.yaml</code> and added the worker nodes' IPs:</p> <pre><code>env:
- name: NO_PROXY
  value: 10.196.109.214,10.196.109.215,10.196.109.216,10.196.109.217,localhost,127.0.0.1
- name: HTTP_PROXY
  value: http://10.196.109.214:8118
- name: HTTPS_PROXY
  value: http://10.196.109.214:8118
</code></pre> <p>Then find the relevant pods with <code>kubectl -n kube-system get pods</code>, delete them with <code>kubectl -n kube-system delete pod &lt;pod-name&gt;</code>, and wait for them to be recreated by kubelet. Everything works fine now.</p>
<p>(Before I start, I'm using minikube v27 on Windows 10.)</p> <p>I have created a deployment with the nginx 'hello world' container with a desired count of 2:</p> <p><a href="https://i.stack.imgur.com/L46pe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L46pe.png" alt="pods before scaling up"></a></p> <p>I actually went into the '2 hours' old pod and edited the index.html file from the welcome message to "broken" - I want to play with k8s to see what it would look like if one pod was 'faulty'.</p> <p>If I scale this deployment up to more instances and then scale down again, I almost expected k8s to remove the oldest pods, but it consistently removes the newest:</p> <p><a href="https://i.stack.imgur.com/reAxW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/reAxW.png" alt="pods after scaling down"></a></p> <p>How do I make it remove the oldest pods first?</p> <p>(Ideally, I'd like to be able to just say "redeploy everything as the exact same version/image/desired count in a rolling deployment" if that is possible)</p>
<p>Pod deletion preference is based on an ordered series of checks, defined in code here:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/controller/controller_utils.go#L737" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/controller/controller_utils.go#L737</a></p> <p>Summarizing, precedence is given to deleting pods:</p> <ul> <li>that are unassigned to a node, vs assigned to a node</li> <li>that are in pending or not running state, vs running</li> <li>that are not-ready, vs ready</li> <li>that have been in ready state for fewer seconds</li> <li>that have higher restart counts</li> <li>that have newer vs older creation times</li> </ul> <p>These checks are not directly configurable.</p> <p>Given the rules, if you can make an old pod not ready, or cause an old pod to restart, it will be removed at scale-down time before a newer pod that is ready and has not restarted; a sketch of this is shown below.</p> <p>There is discussion around use cases for the ability to control deletion priority, which mostly involve workloads that are a mix of job and service, here:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/45509" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/45509</a></p>
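<p>A rough sketch of that trick (the deployment and pod names here are placeholders, not taken from the question): bump the old pod's restart count by killing its main process, then scale down, and the restarted pod is preferred for deletion over the newer, never-restarted ones.</p> <pre><code># Kill PID 1 inside the old pod so its container restarts (restart count goes up)
kubectl exec old-pod-name -- /bin/sh -c 'kill 1'

# Now scale down; the pod with the higher restart count is deleted first
kubectl scale deployment my-nginx --replicas=2
</code></pre>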
<p>I have a config file named "pod.yaml" for making a pod like below:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: comet-app
      image: gcr.io/my-project/my-app:v2
      ports:
        - containerPort: 5000
</code></pre> <p>and a config file named "service.yaml" for running a service in that "myapp" pod.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  selector:
    run: myapp
</code></pre> <p>When I run</p> <pre><code>kubectl apply -f pod.yaml
kubectl apply -f service.yaml
</code></pre> <p>the 'myapp' service is created, but I couldn't access my website by the internal IP and it returned ERR_CONNECTION_TIMED_OUT.</p> <pre><code>NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.xx.xxx.1     &lt;none&gt;          443/TCP        11d
myapp        LoadBalancer   10.xx.xxx.133   35.xxx.xx.172   80:30273/TCP   3s
</code></pre> <p>But when I deleted that service and re-created it by exposing a service with the below command, everything worked well and I could access my website by the external IP.</p> <pre><code>kubectl expose pod myapp --type=LoadBalancer --port=80 --target-port=5000
</code></pre> <p>Could anyone explain it for me and tell me what is wrong in my service.yaml?</p>
<p>The problem with <code>service.yaml</code> is that the selector is wrong. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">How it works</a> is that a service by default routes traffic to pods with a certain label. Your pod has the label <code>app: myapp</code> whereas in the service your selector is <code>run: myapp</code>. So, changing <code>service.yaml</code> to the following should solve the issue:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  selector:
    app: myapp
</code></pre>
<p>I am facing an error while deploying Airflow on Kubernetes (precisely this version of Airflow <a href="https://github.com/puckel/docker-airflow/blob/1.8.1/Dockerfile" rel="noreferrer">https://github.com/puckel/docker-airflow/blob/1.8.1/Dockerfile</a>) regarding writing permissions onto the filesystem.</p> <p>The error displayed on the logs of the pod is:</p> <pre><code>sed: couldn't open temporary file /usr/local/airflow/sed18bPUH: Read-only file system
sed: -e expression #1, char 131: unterminated `s' command
sed: -e expression #1, char 118: unterminated `s' command
Initialize database...
sed: couldn't open temporary file /usr/local/airflow/sedouxZBL: Read-only file system
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/airflow/configuration.py", line 769, in
  ....
    with open(TEST_CONFIG_FILE, 'w') as f:
IOError: [Errno 30] Read-only file system: '/usr/local/airflow/unittests.cfg'
</code></pre> <p>It seems that the filesystem is read-only but I do not understand why it is. I am not sure if it is a <strong>Kubernetes misconfiguration</strong> (do I need a special RBAC for pods ? No idea) or if it is a problem with the <strong>Dockerfile</strong>.</p> <p>The deployment file looks like the following:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: airflow
  namespace: test
spec:
  replicas: 1
  revisionHistoryLimit: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: airflow
    spec:
      restartPolicy: Always
      containers:
      - name: webserver
        image: davideberdin/docker-airflow:0.0.4
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 50m
            memory: 128Mi
        securityContext:       #does not have any effect
          runAsUser: 0         #does not have any effect
        ports:
        - name: airflow-web
          containerPort: 8080
        args: ["webserver"]
        volumeMounts:
        - name: airflow-config-volume
          mountPath: /usr/local/airflow
          readOnly: false      #does not have any effect
        - name: airflow-logs
          mountPath: /usr/local/logs
          readOnly: false      #does not have any effect
      volumes:
      - name: airflow-config-volume
        secret:
          secretName: airflow-config-secret
      - name: airflow-parameters-volume
        secret:
          secretName: airflow-parameters-secret
      - name: airflow-logs
        emptyDir: {}
</code></pre> <p>Any idea how I can make the filesystem writable? The container is running as <strong>USER airflow</strong> but I think that this user has root privileges.</p>
<p>Since kubernetes version 1.9 and forward, volumeMounts behavior on secret, configMap, downwardAPI and projected volumes has changed to read-only by default.</p> <p>A workaround for the problem is to create an <code>emptyDir</code> volume, copy the contents into it, and execute/write whatever you need.</p> <p>This is a small snippet to demonstrate:</p> <pre><code>initContainers:
  - name: copy-ro-scripts
    image: busybox
    command: ['sh', '-c', 'cp /scripts/* /etc/pre-install/']
    volumeMounts:
      - name: scripts
        mountPath: /scripts
      - name: pre-install
        mountPath: /etc/pre-install
volumes:
  - name: pre-install
    emptyDir: {}
  - name: scripts
    configMap:
      name: bla
</code></pre> <p>Merged PR which causes this break :( <a href="https://github.com/kubernetes/kubernetes/pull/58720" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/58720</a></p>
<p>I have a K8s 1.7 cluster using <strong>vSphere</strong> as the persistent storage provider. I have also deployed <strong>Prometheus</strong>, <strong>node_exporter</strong> and <strong>kube-state-metrics</strong>.</p> <p>I'm trying to find a way to monitor a persistent volume's usage using <strong>Prometheus</strong>. I have added custom labels to some PVs, e.g. <code>app=rabbitmq-0</code>, etc.</p> <p>How can I combine <code>kube_persistentvolume_labels</code> with <code>node_filesystem_size</code> metrics so that I can query PV usage using my custom label?</p> <p>PS.<br> I know that K8s 1.8 directly exposes these metrics from kubelet as mentioned in <a href="https://stackoverflow.com/questions/44718268/how-to-monitor-disk-usage-of-kubernetes-persistent-volumes">How to monitor disk usage of kubernetes persistent volumes?</a> but currently a cluster upgrade is not an option.</p>
<p>Starting from v1.3.0-rc.0 (2018-03-23), <strong>kube-state-metrics</strong> includes two metrics that convert <a href="https://github.com/kubernetes/kube-state-metrics/blob/641f44d26ccbad2272d5bf3e10e43efd41e7976d/pkg/collectors/persistentvolume.go#L128" rel="nofollow noreferrer">PersistentVolume</a> and <a href="https://github.com/kubernetes/kube-state-metrics/blob/641f44d26ccbad2272d5bf3e10e43efd41e7976d/pkg/collectors/persistentvolumeclaim.go#L151" rel="nofollow noreferrer">PersistentVolumeClaim</a> labels to Prometheus labels accordingly:</p> <pre><code>kube_persistentvolume_labels
kube_persistentvolumeclaim_labels
</code></pre> <p>To get more details about implementing aggregation of metrics based on labels, consider reading these articles:</p> <ul> <li><a href="https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/" rel="nofollow noreferrer">Aggregating Pod resource (CPU, memory) usage by arbitrary labels with Prometheus</a></li> <li><a href="https://www.robustperception.io/exposing-the-software-version-to-prometheus" rel="nofollow noreferrer">Exposing the software version to Prometheus</a></li> </ul>
<p>I am doing a test here and need to deliver a build, test and deploy pipeline using Jenkins and Kubernetes.</p> <p>I am using a Mac and created a VM using VirtualBox with Ubuntu 18 and installed Jenkins there.</p> <p>I installed Kubernetes (minikube) but when I try to start it I receive:</p> <blockquote> <p>"This computer doesn't have VT-x/AMD/v enabler. Enabling it in the BIOS is mandatory"</p> </blockquote> <p>Reading some blogs, they said that a VM inside a VM is not a good architecture.</p> <p>My question is: what is the best approach to do it and have something to deliver in the end - like a VM or a weblink?</p>
<p>I couldn't post a comment, because I don't have enough permissions to do so.</p> <p>This is not related to k8s; your issue is most likely with the virtualization settings of your PC. You may follow the error you got and try to enable the VT-x/AMD-V option in the BIOS.</p>
<p>I'm using a simple piece of <code>ballerina</code> code to build my program (a simple hello world) with <code>ballerinax/kubernetes</code> annotations. The service compiles successfully and is accessible via the specific bind port from localhost.</p> <p>When configuring a kubernetes deployment I'm specifying the image build and push flags:</p> <pre><code>@kubernetes:Deployment {
    replicas: 2,
    name: "hello-deployment",
    image: "gcr.io/&lt;gct-project-name&gt;/hello-ballerina:0.0.2",
    imagePullPolicy: "always",
    buildImage: true,
    push: true
}
</code></pre> <p>When building the source code:</p> <pre><code>ballerina build hello.bal </code></pre> <p>This is what I'm getting:</p> <pre><code>Compiling source
    hello.bal
Generating executable
    ./target/hello.balx
@docker - complete 3/3
Run following command to start docker container:
docker run -d -p 9090:9090 gcr.io/&lt;gcr-project-name&gt;/hello-ballerina:0.0.2
@kubernetes:Service - complete 1/1
@kubernetes:Deployment - complete 1/1
error [k8s plugin]: Unable to push docker image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
</code></pre> <p>Note that when pushing it manually via docker on my local machine it works fine and the new image gets pushed.</p> <p>What am I missing? Is there a way to tell ballerina about docker registry credentials via the <code>kubernetes</code> package?</p>
<p>Ballerina doesn't support the gcloud docker registry yet, but it supports Docker Hub. Please refer to <a href="https://github.com/ballerinax/kubernetes/tree/master/samples/sample6" rel="nofollow noreferrer">sample6</a> for more info.</p> <p>Basically, you can export the docker registry username and password as environment variables.</p> <p>Please create an issue at <a href="https://github.com/ballerinax/kubernetes/issues" rel="nofollow noreferrer">https://github.com/ballerinax/kubernetes/issues</a> to track this.</p>
<p>I need to install an IPv6 haproxy docker container in the IBM Cloud Kubernetes service. All the info I see is just about IPv4. I don't even know if IBM Kubernetes supports IPv6, so that I can serve IPv6 requests in my docker instance deployed on one of my cluster nodes. Thanks for any help on this.</p>
<p>IBM Cloud Kubernetes Service does not use IPV6 for cluster services.</p>
<p>I'm frequently installing multiple instances of an umbrella Helm chart across multiple namespaces for testing. I'd like to continue using the randomly generated names, but also be able to tear down multiple releases of the same chart in one command that doesn't need to change for each new release name.</p> <p>So for charts like this:</p> <pre><code>$ helm ls
NAME               REVISION   UPDATED                    STATUS     CHART                NAMESPACE
braided-chimp      1          Mon Jul 23 15:52:43 2018   DEPLOYED   foo-platform-0.2.1   foo-2
juiced-meerkat     1          Mon Jul  9 15:19:43 2018   DEPLOYED   postgresql-0.9.4     default
sweet-sabertooth   1          Mon Jul 23 15:52:34 2018   DEPLOYED   foo-platform-0.2.1   foo-1
</code></pre> <p>I can delete all releases of the <code>foo-platform-0.2.1</code> chart by typing the release names like:</p> <pre><code>$ helm delete braided-chimp sweet-sabertooth
</code></pre> <p>But every time I run the command, I have to update it with the new release names.</p> <p>Is it possible to run list / delete on all instances of a given chart across all namespaces based on the chart name? (I'm thinking something like what <code>kubectl</code> supports with the <code>-l</code> flag.)</p> <p>For instance, how can I achieve something equivalent to this?</p> <pre><code>$ helm delete -l 'chart=foo-platform-0.2.1'
</code></pre> <p>Is there a better way to do this?</p>
<p>You could try:</p> <p><code>helm delete $(helm ls | awk '$9 ~ /SEARCH/ { print $1 }')</code></p> <p>Replacing <code>SEARCH</code> with whatever chart name pattern you want to use</p> <p>It gets thrown off a little because awk is going to delimit on the spaces, which the timestamp has several of.</p> <p>So what would traditionally be tab delimited:</p> <p><code>1=NAME</code> <code>2=REVISION</code> <code>3=UPDATED</code> <code>4=STATUS</code> <code>5=CHART</code> <code>6=NAMESPACE</code></p> <p>becomes:</p> <p><code>1=mottled-whippet</code> <code>2=1</code> <code>3=Fri</code> <code>4=Jul</code> <code>5=20</code> <code>6=13:15:45</code> <code>7=2018</code> <code>8=DEPLOYED</code> <code>9=postgresql-0.15.0</code> <code>10=namespace</code></p>
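<p>A variant of the same idea, assuming the columns really are tab-separated as described above: tell awk to split on tabs so the spaces inside the timestamp no longer matter, and match on field 5 (the CHART column):</p> <pre><code># Delete every release whose chart matches the pattern (the pattern here is an example)
helm delete $(helm ls | awk -F'\t' '$5 ~ /foo-platform-0\.2\.1/ { print $1 }')
</code></pre>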
<p>I have a service and 4 pods with WordPress installed on each. This is my service configuration:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: apache
    protocol: TCP
  selector:
    app: wordpress
  type: NodePort
</code></pre> <p>Now all traffic is distributed randomly by the service. I want to change it so that it works non-randomly (I think it's called <code>round robin</code>). I have read <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">the official document</a> but I don't understand it.</p> <p>Is there any way to manage how the traffic is distributed? Could anybody please show me an example?</p>
<p>As @Meysam mentioned, a Kubernetes service distributes requests to pods using &quot;round robin&quot; by default.</p> <p>I would advise you (and anyone who reads this topic in the future) to read more about <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Services</a> and <a href="http://blog.wercker.com/how-does-kubernetes-handle-load-balancing" rel="nofollow noreferrer">How does kubernetes handle load balancing</a>. It will shed light on tons of questions.</p> <blockquote> <p>Kubernetes uses a feature called kube-proxy to handle the virtual IPs for services. Kubernetes allocates tasks to pods within a service by the round-robin method</p> <p>With round-robin allocation, the system maintains a list of destinations. When a request comes in, it assigns the request to the next destination on the list, then permutes the list (either by simple rotation, or a more complex method), so the next request goes to the following destination on the list.</p> </blockquote>
<p>I'm looking for the best possible Kubernetes configuration to be able to run this command in a <strong>local Kubernetes environment</strong> (via Minikube):</p> <pre><code>composer install </code></pre> <p>What I was hoping to do is something like the docker version: this will spin up a container, run the composer install command and then remove the container again.</p> <pre><code>docker run --rm -i -t -v $(pwd):/app composer:latest composer install </code></pre> <p>This doesn't work because kubectl doesn't have an option to mount a volume (this is required to point the container at the project's composer.json file).</p> <p>What I'm doing now is including a <strong>composer container configuration</strong> within a Kubernetes deployment.yml file.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-site
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        # The Application
        - name: api-app
          image: api_app:my_company
          volumeMounts:
            - name: shared-api-data
              mountPath: /var/www
          env:
            - name: DB_HOST
              value: "xx.xx.xx.xx"
            - name: DB_DATABASE
              value: "my_database"
            - name: DB_USER
              value: "my_database_user"
            - name: DB_PASSWORD
              value: "my_database_password"
        # The Web Server
        - name: api-web
          image: api_web:my_company
          volumeMounts:
            - name: shared-api-data
              mountPath: /var/www
          ports:
            - containerPort: 80
        # Composer
        - name: composer
          image: composer
          volumeMounts:
            - name: shared-api-data
              mountPath: /app
          args:
            - composer
            - install
      volumes:
        - name: shared-api-data
          hostPath:
            path: /var/www
</code></pre> <p>I create the actual deployment with this command:</p> <pre><code>kubectl create -f deployment.yml </code></pre> <p>I keep seeing this error (in my Minikube dashboard): 'back-off failed to restart container' (probably because the composer install command finished and therefore the container exited). I know there's a <strong>restartPolicy</strong> config parameter for pods, but I don't want to set this value for my other containers (web &amp; app). So it might be better to create a separate deployment.yml file just for composer?</p> <p>Another option (that I'm keeping for last, as I'm not really a fan of it) is to install composer in my php-fpm image.</p> <p>Cheers!</p>
<p>You should build a custom Docker image containing your application’s code. The <code>composer install</code> command should be in your <code>Dockerfile</code>. When you build a new version of your application that you want to deploy, you (or, better, your CI system) need to <code>docker build</code> a new image, <code>docker push</code> it to a registry, and then update the version tag in the Kubernetes Deployment object.</p> <p>The Docker development pattern of β€œrunning a Docker container” where the image only has the language runtime and the application code actually lives on a bind-mounted host directory doesn’t really work in Kubernetes, or any other clustered installation (people have had trouble trying to do it in Docker Swarm too, for similar reasons).</p>
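<p>As a rough sketch of what that could look like (the PHP version, paths and tags below are assumptions, not taken from your setup), a multi-stage <code>Dockerfile</code> keeps the composer tooling out of the final image while still running <code>composer install</code> at build time:</p> <pre><code># Stage 1: install PHP dependencies with the official composer image
FROM composer:latest AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Stage 2: the runtime image your Deployment actually runs
FROM php:7.2-fpm
WORKDIR /var/www
COPY --from=vendor /app/vendor/ ./vendor/
COPY . .
</code></pre> <p>You would then build, tag and push this image and reference that tag from the Deployment, instead of sharing code through a <code>hostPath</code> volume.</p>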
<p>I'm trying to start the standard SparkPi example on a kubernetes cluster. spark-submit creates the pod, which fails with the error "Error: Could not find or load main class org.apache.spark.examples.SparkPi".</p> <p>spark-submit:</p> <pre><code>spark-submit \
    --master k8s://https://k8s-cluster:6443 \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.kubernetes.namespace=ca-app \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=gcr.io/cloud-solutions-images/spark:v2.3.0-gcs \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=default \
    https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar
</code></pre> <p>Kubernetes creates 2 containers in the pod. The spark-init container logs that the examples jar is copied:</p> <pre><code>2018-07-22 15:13:35 INFO SparkPodInitContainer:54 - Downloading remote jars: Some(https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar,https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar)
2018-07-22 15:13:35 INFO SparkPodInitContainer:54 - Downloading remote files: None
2018-07-22 15:13:37 INFO Utils:54 - Fetching https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar to /var/spark-data/spark-jars/fetchFileTemp6219129583337519707.tmp
2018-07-22 15:13:37 INFO Utils:54 - Fetching https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar to /var/spark-data/spark-jars/fetchFileTemp8698641635325948552.tmp
2018-07-22 15:13:37 INFO SparkPodInitContainer:54 - Finished downloading application dependencies.
</code></pre> <p>And spark-kubernetes-driver throws me the error:</p> <pre><code>+ readarray -t SPARK_JAVA_OPTS
+ '[' -n /var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar'
+ '[' -n /var/spark-data/spark-files ']'
+ cp -R /var/spark-data/spark-files/. .
+ case "$SPARK_K8S_CMD" in
+ CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
+ exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.app.id=spark-e032bc91fc884e568b777f404bfbdeae -Dspark.kubernetes.container.image=gcr.io/cloud-solutions-images/spark:v2.3.0-gcs -Dspark.kubernetes.namespace=ca-app -Dspark.jars=https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar,https://github.com/JWebDev/spark/blob/master/spark-examples_2.11-2.3.1.jar -Dspark.driver.host=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-driver-svc.ca-app.svc -Dspark.master=k8s://https://k8s-cluster:6443 -Dspark.kubernetes.initContainer.configMapName=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-init-config -Dspark.kubernetes.authenticate.driver.serviceAccountName=default -Dspark.driver.port=7078 -Dspark.kubernetes.driver.pod.name=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0-driver -Dspark.app.name=spark-pi -Dspark.kubernetes.executor.podNamePrefix=spark-pi-11f2cd9133b33fc480a7b2f1d5c2fcc0 -Dspark.driver.blockManager.port=7079 -Dspark.submit.deployMode=cluster -Dspark.executor.instances=5 -Dspark.kubernetes.initContainer.configMapKey=spark-init.properties -cp ':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.1.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.233.71.5 org.apache.spark.examples.SparkPi
Error: Could not find or load main class org.apache.spark.examples.SparkPi
</code></pre> <p>What am I doing wrong? Thanks for the tips.</p>
<p>I would suggest using <code>https://github.com/JWebDev/spark/raw/master/spark-examples_2.11-2.3.1.jar</code> since <code>/blob/</code> is the HTML view of an asset, whereas <code>/raw/</code> will 302-redirect to the actual storage URL for it</p>
<p>I'm using Azure for my Continuous Deployment. My secret name is "<strong>cisecret</strong>", created using</p> <pre><code>kubectl create secret docker-registry cisecret --docker-username=XXXXX --docker-password=XXXXXXX [email protected] --docker-server=XXXXXXXXXX.azurecr.io </code></pre> <p>In my Visual Studio Online Release Task <strong>kubectl run</strong>, under the <strong>Secrets</strong> section:<br/> Type of secret: dockerRegistry<br/> Container Registry type: Azure Container Registry<br/> Secret name: <strong>cisecret</strong></p> <p>My Release succeeds, but when I proxy into kubernetes:</p> <blockquote> <p>Failed to pull image xxxxxxx unauthorized: authentication required.</p> </blockquote>
<p>I need to grant AKS access to ACR.</p> <p>Please refer to the link <a href="https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/container-registry/container-registry-auth-aks.md" rel="nofollow noreferrer">here</a></p>
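<p>As a sketch of what that looks like in practice (assuming the AKS cluster uses a service principal, and with the resource group / cluster / registry names as placeholders), you give the cluster's service principal pull rights on the registry:</p> <pre><code># Look up the AKS service principal and the ACR resource id (names are placeholders)
CLIENT_ID=$(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
ACR_ID=$(az acr show -g myResourceGroup -n myRegistry --query id -o tsv)

# Grant the service principal permission to pull images from the registry
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
</code></pre>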
<p>What is the recommended architecture for scheduled jobs in a Kubernetes cluster?</p> <p>Consider the following situation: you have some kind of job which you wish to run every ~24 hours and it takes around 2 hours to complete. Let it be, for example, a parser scraping info from some websites.</p> <p>You want it to run in your Kubernetes cluster so you enclose it in a Docker image.</p> <p>The docker convention proposes looking at a container as an executable, so you use your parser script as the default command in your Dockerfile:</p> <pre><code>CMD nodejs /src/parser.js </code></pre> <p>But now in Kubernetes, when the parser finishes, the container dies with it and will be restarted immediately.</p> <p>To get around this you can specify some other bash script as the <code>CMD</code>. This script will run indefinitely and will run your parser script every 24 hours. However this means you've lost this nice property of your image and can't just do</p> <pre><code>docker run my-parser-image </code></pre> <hr> <p>So is there a way in Kubernetes to run some container every xx hours and, if it fails, to run it again? More broadly, what is the proposed way of running scheduled containerized jobs in a Kubernetes cluster?</p>
<p>One way you can approach this is by creating a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> object in Kubernetes:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "0 */24 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-parser-cronjob
            image: my-parser-image
          restartPolicy: OnFailure
</code></pre> <p>Similar to this is to use the object called <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a>, but keep in mind that Job runs only once till completion.</p>
<p>I have a kubernetes cluster spread across two zones, A and B. I am using nfs volumes for persistent storage. I have nfs volumes in both the zones. I am creating a stateful set of 2 replicas which will be spread across these zones (I used pod anti-affinity to achieve this). Now I want the pods in zone A to use the volumes in zone A and ones in zone B to use the volumes in zone B.</p> <p>I can add labels to the persistent volumes and match the persistent volume claims with these labels. But how do I make sure that the pvc for a pod does not get bound to a pv in another zone?</p>
<p>You can try to bind <code>persistent volume claims (PVCs)</code> to <code>persistent volumes (PVs)</code> and split Kubernetes pods across your cluster between two zones using the special built-in label <code>failure-domain.beta.kubernetes.io/zone</code>. If you create volumes manually, it is possible to label them with the <code>failure-domain.beta.kubernetes.io/zone: zoneA</code> value, ensuring that a pod is only scheduled to nodes in the same zone as the zone of the persistent volume.</p> <p>For example, to set the <code>label</code> for a Node and a PV:</p> <pre><code>kubectl label node &lt;node-name&gt; failure-domain.beta.kubernetes.io/zone=zoneA
kubectl label pv &lt;pv-name&gt; failure-domain.beta.kubernetes.io/zone=zoneA
</code></pre> <p>Find some useful information in the official Kubernetes <a href="https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone" rel="nofollow noreferrer">documentation</a>.</p>
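<p>On the claim side (a sketch; the name and size below are made up), you can then add a <code>selector</code> to the PVC so it only binds to PVs carrying the matching zone label:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-zone-a
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Only bind to PVs labelled as living in zoneA
  selector:
    matchLabels:
      failure-domain.beta.kubernetes.io/zone: zoneA
</code></pre>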
<p>In our GKE we have one service called <code>php-services</code>. It is defined like so:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: php-services
  labels:
    name: php-services
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    name: php-services
</code></pre> <p>I can access this service from inside the cluster. If I run these commands on one of our pods (in the <code>Default</code> namespace), I get the expected results:</p> <pre><code>bash-4.4$ nslookup 'php-services'
Name:      php-services
Address 1: 10.15.250.136 php-services.default.svc.cluster.local
</code></pre> <p>and</p> <pre><code>bash-4.4$ wget -q -O- 'php-services/health'
{"status":"ok"}
</code></pre> <p>So the service is ready and responding correctly. I need to expose this service to foreign traffic. I'm trying to do it with an Ingress with the following config:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress"
    kubernetes.io/ingress.allow-http: "false"
    external-dns.alpha.kubernetes.io/hostname: "gke-ingress.goout.net"
  namespace: default
spec:
  tls:
  - hosts:
    - php.service.goout.net
    secretName: router-tls
  rules:
  - host: php.service.goout.net
    http:
      paths:
      - backend:
          serviceName: php-services
          servicePort: 80
        path: /*
</code></pre> <p>But then accessing <a href="http://php.service.goout.net/health" rel="noreferrer">http://php.service.goout.net/health</a> gives a 502 error:</p> <blockquote> <p>Error: Server Error. The server encountered a temporary error and could not complete your request.<br> Please try again in 30 seconds.</p> </blockquote> <p>We also have other services with the same config which run ok and are accessible from outside.</p> <p>I've found a <a href="https://stackoverflow.com/questions/49076629/ingress-backend-rest-error-the-server-encountered-a-temporary-error-and-could-n">similar question</a> but that doesn't bring any sufficient answer either.<br> I've been also following the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="noreferrer">Debug Service</a> article but that also didn't help, as the service itself is OK.</p> <p>Any help with this issue is highly appreciated.</p>
<h2>EDIT TLDR</h2> <p>GKE Loadbalancer <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts" rel="noreferrer">only accepts HTTP status 200</a> while Kubernetes health checks accept <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">any code greater than or equal to 200 and less than 400</a>.</p> <h2>Original answer</h2> <p>Ok, so we've figured out what was wrong.</p> <p>Take a look at the <code>yaml</code> definition of the deployment for the <code>php-services</code> service (shortened):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-services
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: php-services
  template:
    metadata:
      labels:
        name: php-services
    spec:
      containers:
      - name: php-services
        image: IMAGE_TAG
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 80
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 10
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 80
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 10
        ports:
        - containerPort: 80
</code></pre> <p>The Apache server inside the image was configured in a way that it redirected from paths without a trailing slash to paths with it. So when you requested <code>/health</code> you actually got HTTP status 301 pointing to <code>/health/</code>, which then responded with 200.</p> <p>In the scope of Kubernetes health checks this is OK, as "<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">Any code greater than or equal to 200 and less than 400 indicates success.</a>"</p> <p>However, the problem lay in the GKE Loadbalancer. It also has its own GKE healthchecks derived from the checks in the Deployment definition. The important difference is that <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts" rel="noreferrer">it only accepts HTTP status 200</a>. And if the loadbalancer doesn't find a backend service healthy it won't pass any foreign traffic to it.</p> <p>Therefore we had two options to fix this:</p> <ul> <li>Make the server inside the container respond with HTTP status 200 to both <code>/health</code> and <code>/health/</code> (or more precisely just to <code>/health</code>)</li> <li>or change the <code>readinessProbe</code> and <code>livenessProbe</code> path definition to <code>/health/</code>.</li> </ul> <p>We chose the latter and it fixed the problem.</p>
<p>I have pods deployed to a Kubernetes cluster (hosted with Google Cloud Kubernetes). Those pods use some <code>secret</code>s, which are plain-text files. I added the secrets to the <code>yaml</code> file and deployed the deployment. The application is working fine.</p> <p>Now, let's say that someone compromised my code and somehow got access to all my files on the container. In that case, the attacker can find the <code>secrets</code> directory and print all the secrets written there. It's plain text.</p> <h2>Question:</h2> <p>Why is it more secure to use kubernetes-secrets instead of just plain text?</p>
<p>There are different levels of security and, as @Vishal Biyani says in the comments, it sounds like you're looking for a level of security you'd get from a project like Sealed Secrets.</p> <p>As you say, out of the box Secrets don't give you encryption at the container level. But they do give you controls on access through kubectl and the Kubernetes APIs. For example, you could use role-based access control so that specific users could see that a secret exists without seeing (through the k8s APIs) what its value is.</p>
<p>I'm implementing a custom resource controller using the watch API. It needs to create/delete objects in aws when objects are created/deleted in kubernetes.</p> <p>When it starts up the watch, it receives a list of historic events. However, I noticed that if an object is created and then deleted, these events "cancel out" in the historic event stream. That is, when I start a watch, instead of seeing an ADDED event and a DELETED event for the given object, I see no events at all, as though it never existed. This means that if an object is deleted while the controller is down, it will completely miss this delete event when it starts back up.</p> <p>For controllers that need to take an action when a kubernetes object is deleted (for example, deleting an object in AWS), what is the recommended approach? Is there a way to make kubernetes keep DELETED events? Is it just expected that controllers work by polling the list of all objects in all namespaces rather than using the watch API?</p>
<p>In case you need to synchronize states or list of existing objects in one system with objects in another system, you should be able to get the lists of objects on both systems, compare them, and deal with the difference. </p> <p>If you rely only on watching instant events like CREATE and DELETE, you will end up with unsynchronized systems sooner or later.</p> <p>The only reliable source of information about Kubernetes apiserver events I can imagine is the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">Audit log</a>.</p>
<p>I have a Kubernetes cluster working fine. I have one <code>master</code> node and 5 <code>worker</code> nodes, and all of these are running pods. When all the nodes are on and the Kubernetes master goes down / is powered off, will the worker nodes keep working as normal?</p> <p>If the <code>master</code> node is down and one of the <code>worker</code> nodes also goes down and then comes back online after some time, will the pod automatically be started on that worker while the <code>master</code> node is still down?</p>
<blockquote> <p>When all the nodes are on and the Kubernetes master goes down / is powered off, will the worker nodes keep working as normal?</p> </blockquote> <p>Yes, they will keep working in their last state.</p> <blockquote> <p>If the <code>master</code> node is down and one of the <code>worker</code> nodes also goes down and then comes back online after some time, will the pod automatically be started on that worker while the <code>master</code> node is still down?</p> </blockquote> <p>No.</p> <p>As you can read in the <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">Kubernetes Components</a> section:</p> <p><code>Master components provide the cluster's control plane. Master components make global decisions about the cluster (for example, scheduling), and detecting and responding to cluster events (starting up a new pod when a replication controller's 'replicas' field is unsatisfied).</code></p>
<p>First of all, I'm not an expert, so bear with me. I managed to install and set up Rancher in my vCenter at home (got a bare-metal setup for free, a bit old, but still OK). I have 3 nodes running well and I can also provision VMs in VMware with it. On top of that, I also added Kubernetes within Rancher. Now, my plan is to deploy services which should get external endpoints (reachable from the internet) and SSL automatically. I have already bought mydomain.com from Namecheap, plus a wildcard certificate for it. Also, in my vCenter I have an nginx server running, and the Namecheap DNS is pointing to it, but I think I should run it in Kubernetes instead; I just don't want to manage the config files for nginx manually.</p> <p>What would be the best approach? I fail to understand how the ingress controllers work or how to set them up correctly. I followed many tutorials and no success so far. I also played around with Traefik, but no success. I always get nothing in the external endpoints section.</p> <p>I don't want a step-by-step guide on how to do it, but someone please point me in the right direction, at least. I was also thinking of using Let'sEncrypt, but I'm not sure if it's a good idea since I already have my domain and SSL certs.</p> <p>Thank you!</p>
<p>The reason you might be struggling is that when using bare metal, you don't have an external LoadBalancer provisioned. When using things like Traefik, you need to expose the ingress controller on a NodePort or something else.</p> <p>If you're using bare metal, you have a couple of options for ingress into the cluster.</p> <p><a href="https://metallb.universe.tf/" rel="noreferrer">MetalLB</a> is one such controller, which will use a layer2 or BGP configuration to advertise your <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Services</a> externally. Using metallb, you'll be able to define a service of type LoadBalancer, like so:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
</code></pre> <p>This will provision a LoadBalancer in metallb for you. At this point, you can then start to use an Ingress Controller, by deploying something like Traefik, defining a service, and then using the LoadBalancer type on the ingress controller.</p> <p>For TLS, you can have <a href="https://cert-manager.readthedocs.io/en/latest/" rel="noreferrer">cert-manager</a> provision certificates for you automatically, assuming DNS resolves to the ingresses you use.</p> <p>Finally, for automated DNS, consider <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a>.</p>
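<p>For completeness, a sketch of how MetalLB itself is told which addresses it may hand out in layer2 mode; the address range below is just an example and must be replaced with a free range on your own network:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>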
<p>Battling with a Kubernetes manifest on Azure. I have a simple API app running on port <code>443 (https)</code>. I simply want to run and replicate this app 3 times within a Kubernetes cluster with a load balancer.</p> <p><strong>Kubernetes cluster:</strong></p> <p><a href="https://i.stack.imgur.com/iEcVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iEcVT.png" alt="Kubernetes cluster figure"></a></p> <p>My manifest file:</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: apiApp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: apiApp
    spec:
      containers:
      - name: apiApp
        image: {image name on Registry}
        ports:
        - containerPort: 443
          hostPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: apiApp
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: apiApp
</code></pre> <p>In the above manifest the load balancer does not seem to find the app on port 443 within the container.</p> <p>1) How can I create this manifest to link the load balancer to port 443 of the containers and also expose the load balancer to the outside world on port 443?</p> <p>2) How would the manifest look in a multi-cluster environment (same conditions as above)?</p>
<p>For your issue, I did the test with the load balancer following the document <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">Deploy an Azure Kubernetes Service (AKS) cluster</a>.</p> <p>This example only has one pod, so I scaled the pods up to 3 with the command <code>kubectl scale --replicas=3 deployment/azure-vote-front</code>. The yaml about the scaling and the Load Balancer looks like the screenshots below.</p> <p><a href="https://i.stack.imgur.com/UdQku.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UdQku.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/c5JmF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c5JmF.png" alt="enter image description here"></a></p> <p>When the cluster finishes deploying, I can access the service from the Internet via a web browser. And you can use the command <a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-browse" rel="nofollow noreferrer"><code>az aks browse</code></a> to go into the Kubernetes dashboard to get an overview of the Kubernetes cluster.</p> <p><strong>Update</strong></p> <p>The Azure Kubernetes cluster is just a resource group, like below, and so is the load balancer: <a href="https://i.stack.imgur.com/tRC3x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tRC3x.png" alt="enter image description here"></a></p>
<p>I've recently started using kubernetes and am now looking at how I can configure centralised logging. For the majority of the pods, the application itself logs straight to the GELF endpoint (logstash), however there are a number of "management" pods which I need to get the logs from too. </p> <p>Previously when I was using Docker Swarm I would simply add the log driver (and relevant configuration) into the <a href="https://docs.docker.com/compose/compose-file/#logging" rel="nofollow noreferrer">compose file</a>. However there doesn't seem to be that option in Kubernetes.</p> <p>I looked at using Fluentd to read the logs straight from /var/log/containers, but I ran into a couple of issues here:</p> <ol> <li><p>There doesn't seem to be any easy way to specify which pods to log to logstash; I get that you can create filters etc but this doesn't seem very maintainable going forward, something using annotations on the individual pods seems more sensible.</p></li> <li><p>The logs in /var/log/containers are in the json-file log format, not GELF.</p></li> </ol> <p>Is there any way in kubernetes to use the built in Docker logging driver on a per-pod basis to easily log to the GELF endpoint?</p>
<p>Try using <strong>fluentd</strong> with the <a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter" rel="nofollow noreferrer">Kubernetes metadata plugin</a> to extract the local json-file docker logs and send them to Graylog2.</p> <p><code>tag_to_kubernetes_name_regexp</code> - the regular expression used to extract Kubernetes metadata (pod name, container name, namespace) from the current fluentd tag.</p>
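<p>A rough sketch of what that pipeline could look like, assuming fluent-plugin-kubernetes_metadata_filter plus a GELF output plugin (e.g. fluent-plugin-gelf) are installed; exact parameter names can vary between GELF plugin forks, so treat this only as a starting point:</p> <pre><code># Tail the per-container json-file logs written by Docker
&lt;source&gt;
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
&lt;/source&gt;

# Enrich each record with pod name, namespace, labels, annotations
&lt;filter kubernetes.**&gt;
  @type kubernetes_metadata
&lt;/filter&gt;

# Ship to the GELF endpoint (host/port are placeholders)
&lt;match kubernetes.**&gt;
  @type gelf
  host logstash.example.com
  port 12201
&lt;/match&gt;
</code></pre> <p>Filtering by pod can then be done on the added Kubernetes metadata fields (labels/annotations) rather than by maintaining per-pod filter rules.</p>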
<p>I'm currently building a containerized Kubernetes cluster.</p> <p>When trying to start the kubelet binary I get this failure:</p> <pre><code>server.go:233] failed to run Kubelet: could not detect clock speed from output: "" </code></pre> <p>I start the kubelet with</p> <pre class="lang-js prettyprint-override"><code>/bin/kubelet \
  --address 192.168.0.4 \
  --allow-privileged true \
  --anonymous-auth true \
  --authorization-mode AlwaysAllow \
  --cloud-provider "" \
  --cni-bin-dir /opt/cni/bin \
  --cni-conf-dir /etc/cni/net.d \
  --containerized true \
  --container-runtime remote \
  --container-runtime-endpoint unix:///var/run/containerd/containerd.sock \
  --image-pull-progress-deadline 2m \
  --kubeconfig /var/lib/kubelet/kubeconfig \
  --network-plugin cni \
  --register-node true \
  --root-dir /var/lib/kubelet \
  --v 2
</code></pre> <p>in an Alpine image. The image is run privileged.</p> <p>Checking the corresponding source code did not reveal the source of the issue. Does someone have a direction to point me in?</p>
<p>The solution was adding a volume to link to the cgroup, so cadvisor had the required rights to read the required file.</p> <pre><code>-v /cgroup:/sys/fs/cgroup:ro </code></pre>
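<p>For context, a sketch of how that mount fits into the <code>docker run</code> call for the containerized kubelet (the image name and the other mounts here are placeholders / typical values, not taken from the question):</p> <pre><code>docker run -d --privileged --net=host --pid=host \
  -v /cgroup:/sys/fs/cgroup:ro \
  -v /var/run:/var/run:rw \
  -v /var/lib/kubelet:/var/lib/kubelet:rw,shared \
  my-kubelet-image \
  /bin/kubelet --containerized true # ...remaining kubelet flags as in the question
</code></pre>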
<p>I need to run two "instances" of an application, but they should access the same DB, and I need it to be run on my Kubernetes cluster, so I can provide multi-AZ access to users.</p> <p>Is it possible to be achieved on Kubernetes? Do I need StatefulSets? And, more important, is it possible to manage the DB pod with Kubernetes?</p>
<blockquote> <p>I need to run two "instances" of an application, but they should access the same DB, and I need it to be run on my Kubernetes cluster, so I can provide multi-AZ access to users.</p> </blockquote> <p>This really depends on what you mean by <em>instances</em>. The recommended way is to create a deployment with <code>replicas: 2</code> like so:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my_app
  name: my_app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my_app
  template:
    metadata:
      labels:
        run: my_app
    spec:
      containers:
      - image: my_app:version
        name: my_app
</code></pre> <p>This will ensure you have 2 "instances" of the app.</p> <p>If you need to run 2 "instances" with differing configuration, you might choose to do two distinct deployments, and change the names and labels on them:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my_app_1
  name: my_app_1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my_app_1
  template:
    metadata:
      labels:
        run: my_app_1
    spec:
      containers:
      - image: my_app:version
        name: my_app_1
</code></pre> <p>Connecting these two instances to the database is fairly easy: you'd just pass your database connection string as a configuration option to the application. The database can live inside or outside the cluster.</p> <blockquote> <p>Do I need StatefulSets?</p> </blockquote> <p>You only need statefulsets if your app needs to have predictable names, and stores state in some manner.</p> <blockquote> <p>And, more important, is it possible to manage the DB pod with Kubernetes?</p> </blockquote> <p>It is entirely possible to run the database inside the cluster. Whether it's a good idea to do so is up to you.</p> <p>Databases are not traditionally very good at unexpected outages. With Kubernetes, it's possible the database pod could be moved at any time, and this could cause an issue for your app, or the database.</p> <p>You'll need to configure some kind of reattachable storage using a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent volume</a>, but even then, there's no guarantee your db will be resistant to Kubernetes restarting it and restarting the database.</p> <p>There are databases designed to run more successfully in Kubernetes, such as <a href="https://vitess.io/" rel="nofollow noreferrer">Vitess</a>, which may solve your problem.</p>
<p>I'm trying to connect one pod to another, but getting a connection refused error.</p> <p>I only run:</p> <ol> <li><p>RavenDB Server</p> <ul> <li>Deployment which has: <ul> <li>ports: <ul> <li>containerPort:8080, protocol: TCP</li> <li>containerPort:38888, protocol: TCP</li> </ul></li> </ul></li> <li>Service: <ul> <li>ravendb-cluster01-service</li> <li>clusterIP: None, ports: 8080 / 38888</li> </ul></li> </ul></li> <li><p>RavenDB Client</p> <ul> <li>Connects to ravendb-cluster01-service.staging.svc.cluster.local:8080 <ul> <li>Though fails with a connection refused error</li> </ul></li> </ul></li> </ol> <p>What doesn't work:</p> <ul> <li>Client cannot connect to server, connection refused</li> </ul> <p>What does work:</p> <ul> <li>when accessing the client pod using interactive shell: <code>docker -it ... -- bash</code>, <ul> <li>I can ping the service</li> <li>and telnet to it</li> </ul></li> <li>when using <code>kubectl ... port-forward 8080:8080</code>, I can locally enjoy the database server, so the server is running</li> </ul> <p>Strange enough, when accessing the docker I'm able to connect to it, though the running script itself refuses to connect to the target pod.</p> <p>It's connecting pod to pod, and tagged the target server (RavenDB) with a service, without service IP address, to resolve the domain name to the current IP address of the pod.</p> <p>Any idea what I'm doing wrong?</p> <p>Full config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: ravendb-cluster01 tier: backend name: ravendb-cluster01 namespace: staging spec: replicas: 1 selector: matchLabels: app: ravendb-cluster01 tier: backend template: metadata: labels: app: ravendb-cluster01 tier: backend name: ravendb-cluster01 namespace: staging spec: containers: - env: - name: RAVEN_ARGS value: --ServerUrl=http://ravendb-cluster01-service.staging.svc.cluster.local:8080 --ServerUrl.Tcp=tcp://ravendb-cluster01-service.staging.svc.cluster.local:38888 --PublicServerUrl=http://localhost:8080 --PublicServerUrl.Tcp=tcp://localhost:38888 --DataDir=/ravendb/ --Setup.Mode=None --License.Eula.Accepted=true image: ravendb/ravendb-nightly:4.0.6-nightly-20180720-0400-ubuntu.16.04-x64 name: ravendb ports: - containerPort: 8080 name: http protocol: TCP - containerPort: 38888 name: tcp protocol: TCP resources: limits: memory: 26000Mi requests: memory: 26000Mi volumeMounts: - mountPath: /ravendb/ name: ravendb-cluster01-storage volumes: - gcePersistentDisk: fsType: ext4 pdName: ravendb-cluster01-storage name: ravendb-cluster01-storage --- apiVersion: v1 kind: Service metadata: labels: app: ravendb-cluster01-service tier: backend name: ravendb-cluster01-service namespace: staging spec: clusterIP: None ports: - name: http port: 8080 protocol: TCP targetPort: 8080 - name: tcp port: 38888 protocol: TCP targetPort: 38888 selector: app: ravendb-cluster01 tier: backend sessionAffinity: None type: ClusterIP </code></pre>
<p>The issue appears to be your <code>PublicServerUrl</code> setting.</p>

<pre><code>--PublicServerUrl=http://localhost:8080 --PublicServerUrl.Tcp=tcp://localhost:38888
</code></pre>

<p>As per the RavenDB documentation:</p>

<blockquote>
  <p>Set the URL to be accessible by clients and other nodes, regardless of which IP is used to access the server internally. This is useful when using a secured connection via https URL, or behind a proxy server.</p>
</blockquote>

<p>You either need to configure this to be the service name, or remove the option entirely. After reviewing the docs for <a href="https://ravendb.net/docs/article-page/4.0/csharp/server/configuration/core-configuration#serverurl" rel="nofollow noreferrer">ServerUrl</a> I would personally recommend updating your args to be something like this:</p>

<pre><code>value: --ServerUrl=http://0.0.0.0:8080 --ServerUrl.Tcp=tcp://0.0.0.0:38888 --PublicServerUrl=http://ravendb-cluster01-service.staging.svc.cluster.local:8080 --PublicServerUrl.Tcp=tcp://ravendb-cluster01-service.staging.svc.cluster.local:38888 --DataDir=/ravendb/ --Setup.Mode=None --License.Eula.Accepted=true
</code></pre>

<p>Ideally you want the <code>ServerUrl</code> to listen on all interfaces, so binding it to <code>0.0.0.0</code> makes sense, while the <code>PublicServerUrl</code> should advertise the service DNS name that clients actually use.</p>

<p>The reason it works with both <code>port-forward</code> and from the local docker container is probably because RavenDB is listening on the loopback device, and both those methods of connection give you a local process inside the container, so the loopback device is accessible.</p>
<p>I am working on a problem statement around making Windows VMs work on Kubernetes, and I came across a VM orchestrator for Kubernetes, <a href="https://kubevirt.io" rel="nofollow noreferrer">https://kubevirt.io</a>. Their documentation does not clearly say whether it supports Windows. Any other solution or advice on this is appreciated.</p>
<p>Yes, you can create VMs using different operating systems, including Windows.</p> <h3>Reference:</h3> <p><a href="http://superuser.openstack.org/articles/kubevirt-kata-containers-vm-use-case/" rel="nofollow noreferrer">http://superuser.openstack.org/articles/kubevirt-kata-containers-vm-use-case/</a></p>
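<p>For orientation, a KubeVirt <code>VirtualMachine</code> manifest for a Windows guest looks roughly like the sketch below. Treat it purely as an illustration: the API version and exact field names vary between KubeVirt releases, and the VM, disk and PVC names here are hypothetical.</p>

<pre><code>apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows-vm                # hypothetical name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
          - name: windows-disk
            disk:
              bus: sata           # Windows guests typically need SATA or virtio drivers
      volumes:
      - name: windows-disk
        persistentVolumeClaim:
          claimName: windows-disk-pvc   # assumed pre-provisioned PVC holding the Windows image
</code></pre>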
<p>I tried to launch Istio on Google Kubernetes Engine using the Google Cloud Deployment Manager as described in the Istio <a href="https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/" rel="nofollow noreferrer">Quick Start Guide</a>. My goal is to have a cluster as small as possible for a few very lightweight microservices.</p> <p>Unfortunately, Istio pods in the cluster failed to boot up correctly when using a 1 node GKE</p> <ul> <li>g1-small or</li> <li>n1-standard-1</li> </ul> <p>cluster. For example, istio-pilot fails and the status is "0 of 1 updated replicas available - Unschedulable".</p> <p>I did not find any hints that the resources of my cluster are exceeded so I am wondering:</p> <p><strong>What is the minimum GKE cluster size to successfully run Istio (and a few lightweight microservices)?</strong></p> <p>What I found is the issue <a href="https://github.com/istio/istio/issues/216" rel="nofollow noreferrer">Istio#216</a> but it did not contain the answer. Also, of course, the cluster size depends on the microservices but I am basically interested in the minimum cluster to start with.</p>
<p>As per <a href="https://istio.io/docs/guides/bookinfo/" rel="nofollow noreferrer">this page</a></p> <blockquote> <p>If you use GKE, please ensure your cluster has at least 4 standard GKE nodes. If you use Minikube, please ensure you have at least 4GB RAM.</p> </blockquote>
<p>I remember DNS records being cached locally on various Linux distros in the past, but this appears to have changed over the years (<a href="https://stackoverflow.com/questions/11020027/dns-caching-in-linux">DNS caching in linux</a>).</p> <p>Within our (non-K8S) environment we found a noticeable delay (1-2ms) for each request due to DNS lookups because of this.</p> <p>I also noticed there is no local DNS cache within K8S by default (<a href="https://github.com/kubernetes/kubernetes/issues/45363" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/45363</a>) and the DNS cache within CoreOS is also disabled by default (<a href="https://coreos.com/os/docs/latest/configuring-dns.html" rel="nofollow noreferrer">https://coreos.com/os/docs/latest/configuring-dns.html</a>).</p> <p>Given we're considering migrating towards K8S, I was wondering: why is this not enabled for Kubernetes in particular?</p> <p>My only theory is that within kube-dns, records are updated pre-emptively to ensure high availability, but I'm not sure whether K8S actually does that.</p> <p>As a workaround, if I were to run dnsmasq on every node, would I break things? I noticed there have been attempts to make that setup the default within K8S, but those attempts/PRs appear to have gone stale and I'm not sure why.</p>
<p>Since <code>Kubernetes 1.9+</code>, <a href="https://coredns.io/" rel="nofollow noreferrer">CoreDNS</a> has been included in <code>kubeadm</code>, <code>minikube</code> and similar tools as a default DNS server, replacing the former <code>kube-dns</code> (which relied on <code>dnsmasq</code> for caching).</p>

<p>CoreDNS was built as a fork of the <a href="https://caddyserver.com/" rel="nofollow noreferrer">Caddy</a> web server and uses a chain of middleware (plugins), where each plugin implements one DNS feature. If you already use <code>kube-dns</code>, it is possible to migrate to <code>CoreDNS</code> using this <a href="https://github.com/coredns/deployment/tree/master/kubernetes" rel="nofollow noreferrer">Link</a>.</p>

<p>Caching and forwarding are built into CoreDNS itself rather than running as separate components, which removes the dependency on <code>dnsmasq</code>. A minimal Corefile looks like this:</p>

<pre><code>. {
    proxy . 8.8.8.8:53
    cache example.org
}
</code></pre>

<p>There are a lot of <a href="https://coredns.io/plugins/" rel="nofollow noreferrer">plugins</a> which you can use to extend the DNS functionality, like proxying requests, rewriting requests, doing health checks on endpoints, and publishing metrics to <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>.</p>
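<p>In a cluster, that configuration usually lives in the CoreDNS <code>ConfigMap</code> in <code>kube-system</code>. The sketch below is only illustrative (the exact plugin set and cache TTL your installer generates may differ); it shows where the <code>cache</code> plugin sits alongside the <code>kubernetes</code> plugin:</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Serve cluster.local records from the Kubernetes API
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # Forward everything else to the node's resolvers
        proxy . /etc/resolv.conf
        # Cache responses for up to 30 seconds
        cache 30
    }
</code></pre>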
<p>I'm trying to create an horizontal pod autoscaling after installing Kubernetes with kubeadm.</p> <p>The main symptom is that <code>kubectl get hpa</code> returns the CPU metric in the column <code>TARGETS</code> as "undefined":</p> <pre><code>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE fibonacci Deployment/fibonacci &lt;unknown&gt; / 50% 1 3 1 1h </code></pre> <p>On further investigation, it appears that <code>hpa</code> is trying to receive the CPU metric from Heapster - but on my configuration the cpu metric is being provided by cAdvisor.</p> <p>I am making this assumption based on the output of <code>kubectl describe hpa fibonacci</code>:</p> <pre><code>Name: fibonacci Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Sun, 14 May 2017 18:08:53 +0000 Reference: Deployment/fibonacci Metrics: ( current / target ) resource cpu on pods (as a percentage of request): &lt;unknown&gt; / 50% Min replicas: 1 Max replicas: 3 Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1h 3s 148 horizontal-pod-autoscaler Warning FailedGetResourceMetric unable to get metrics for resource cpu: no metrics returned from heapster 1h 3s 148 horizontal-pod-autoscaler Warning FailedComputeMetricsReplicas failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster </code></pre> <p>Why does <code>hpa</code> try to receive this metric from heapster instead of cAdvisor? </p> <p>How can I fix this?</p> <p>Please find below my deployment, along with the contents of <code>/var/log/container/kube-controller-manager.log</code> and the output of <code>kubectl get pods --namespace=kube-system</code> and <code>kubectl describe pods</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: fibonacci labels: app: fibonacci spec: template: metadata: labels: app: fibonacci spec: containers: - name: fibonacci image: oghma/fibonacci ports: - containerPort: 8088 resources: requests: memory: "64Mi" cpu: "75m" limits: memory: "128Mi" cpu: "100m" --- kind: Service apiVersion: v1 metadata: name: fibonacci spec: selector: app: fibonacci ports: - protocol: TCP port: 8088 targetPort: 8088 externalIPs: - 192.168.66.103 --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: fibonacci spec: scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: fibonacci minReplicas: 1 maxReplicas: 3 targetCPUUtilizationPercentage: 50 </code></pre> <hr> <pre><code>$ kubectl describe pods Name: fibonacci-1503002127-3k755 Namespace: default Node: kubernetesnode1/192.168.66.101 Start Time: Sun, 14 May 2017 17:47:08 +0000 Labels: app=fibonacci pod-template-hash=1503002127 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"fibonacci-1503002127","uid":"59ea64bb-38cd-11e7-b345-fa163edb1ca... 
Status: Running IP: 192.168.202.1 Controllers: ReplicaSet/fibonacci-1503002127 Containers: fibonacci: Container ID: docker://315375c6a978fd689f4ba61919c15f15035deb9139982844cefcd46092fbec14 Image: oghma/fibonacci Image ID: docker://sha256:26f9b6b2c0073c766b472ec476fbcd2599969b6e5e7f564c3c0a03f8355ba9f6 Port: 8088/TCP State: Running Started: Sun, 14 May 2017 17:47:16 +0000 Ready: True Restart Count: 0 Limits: cpu: 100m memory: 128Mi Requests: cpu: 75m memory: 64Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-45kp8 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-45kp8: Type: Secret (a volume populated by a Secret) SecretName: default-token-45kp8 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s Events: &lt;none&gt; </code></pre> <hr> <pre><code>$ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE calico-etcd-k1g53 1/1 Running 0 2h calico-node-6n4gp 2/2 Running 1 2h calico-node-nhmz7 2/2 Running 0 2h calico-policy-controller-1324707180-65m78 1/1 Running 0 2h etcd-kubernetesmaster 1/1 Running 0 2h heapster-1428305041-zjzd1 1/1 Running 0 1h kube-apiserver-kubernetesmaster 1/1 Running 0 2h kube-controller-manager-kubernetesmaster 1/1 Running 0 2h kube-dns-3913472980-gbg5h 3/3 Running 0 2h kube-proxy-1dt3c 1/1 Running 0 2h kube-proxy-tfhr9 1/1 Running 0 2h kube-scheduler-kubernetesmaster 1/1 Running 0 2h monitoring-grafana-3975459543-9q189 1/1 Running 0 1h monitoring-influxdb-3480804314-7bvr3 1/1 Running 0 1h </code></pre> <hr> <pre><code>$ cat /var/log/container/kube-controller-manager.log "log":"I0514 17:47:08.631314 1 event.go:217] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"default\", Name:\"fibonacci\", UID:\"59e980d9-38cd-11e7-b345-fa163edb1ca6\", APIVersion:\"extensions\", ResourceVersion:\"1303\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set fibonacci-1503002127 to 1\n","stream":"stderr","time":"2017-05-14T17:47:08.63177467Z"} {"log":"I0514 17:47:08.650662 1 event.go:217] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"default\", Name:\"fibonacci-1503002127\", UID:\"59ea64bb-38cd-11e7-b345-fa163edb1ca6\", APIVersion:\"extensions\", ResourceVersion:\"1304\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: fibonacci-1503002127-3k755\n","stream":"stderr","time":"2017-05-14T17:47:08.650826398Z"} {"log":"E0514 17:49:00.873703 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:49:00.874034952Z"} {"log":"E0514 17:49:30.884078 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:49:30.884546461Z"} {"log":"E0514 17:50:00.896563 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for 
Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:50:00.89688734Z"} {"log":"E0514 17:50:30.906293 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:50:30.906825794Z"} {"log":"E0514 17:51:00.915996 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:51:00.916348218Z"} {"log":"E0514 17:51:30.926043 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:51:30.926367623Z"} {"log":"E0514 17:52:00.936574 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:52:00.936903072Z"} {"log":"E0514 17:52:30.944724 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:52:30.945120508Z"} {"log":"E0514 17:53:00.954785 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:53:00.955126309Z"} {"log":"E0514 17:53:30.970454 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:53:30.972996568Z"} {"log":"E0514 17:54:00.980735 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:54:00.981098832Z"} {"log":"E0514 17:54:30.993176 1 horizontal.go:201] failed to 
compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:54:30.993538841Z"} {"log":"E0514 17:55:01.002941 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:55:01.003265908Z"} {"log":"W0514 17:55:06.511756 1 reflector.go:323] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:192: watch of \u003cnil\u003e ended with: etcdserver: mvcc: required revision has been compacted\n","stream":"stderr","time":"2017-05-14T17:55:06.511957851Z"} {"log":"E0514 17:55:31.013415 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:55:31.013776243Z"} {"log":"E0514 17:56:01.024507 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:56:01.0248332Z"} {"log":"E0514 17:56:31.036191 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:56:31.036606698Z"} {"log":"E0514 17:57:01.049277 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:57:01.049616359Z"} {"log":"E0514 17:57:31.064104 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:57:31.064489485Z"} {"log":"E0514 17:58:01.073988 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:58:01.074339488Z"} {"log":"E0514 17:58:31.084511 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for 
Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:58:31.084839352Z"} {"log":"E0514 17:59:01.096507 1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:59:01.096896254Z"} </code></pre>
<p>There is an option to enable autoscaling on the cluster node pool; make sure to turn it on first.</p> <p>Then apply your HPA, and don't forget to set CPU and memory requests/limits on your Kubernetes controllers (the deployment's pod template in this case).</p> <p>One thing to note: if you have multiple containers in your pod, you should specify CPU and memory requests/limits for each container, as in the sketch below.</p>
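<p>A minimal pod template fragment with per-container requests and limits (container names, images and values are placeholders):</p>

<pre><code>spec:
  containers:
  - name: app                 # hypothetical first container
    image: my-app:1.0
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
  - name: sidecar             # hypothetical second container
    image: my-sidecar:1.0
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
</code></pre>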
<p>I'm doing the kubernetes tutorial locally with minikube on osx. In <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/</a> step 3, I get the error</p> <pre><code>% curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/ Error: 'dial tcp 172.17.0.4:8080: getsockopt: connection refused' Trying to reach: 'http://172.17.0.4:8080/'% </code></pre> <p>any idea why this doesn't work locally? the simpler request does work</p> <pre><code>% curl http://localhost:8001/version { "major": "1", "minor": "10", "gitVersion": "v1.10.0", "gitCommit": "fc32d2f3698e36b93322a3465f63a14e9f0eaead", "gitTreeState": "clean", "buildDate": "2018-03-26T16:44:10Z", "goVersion": "go1.9.3", "compiler": "gc", "platform": "linux/amd64" </code></pre> <p>info</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE kubernetes-bootcamp-74f58d6b87-ntn5r 0/1 ImagePullBackOff 0 21h </code></pre> <p>logs</p> <pre><code>$ kubectl logs $POD_NAME Error from server (BadRequest): container "kubernetes-bootcamp" in pod "kubernetes-bootcamp-74f58d6b87-w4zh8" is waiting to start: trying and failing to pull image </code></pre> <p>so then the run command is starting the node but the pod crashes? why?</p> <pre><code>$ kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 </code></pre> <p>I can pull the image without a problem</p> <pre><code>$ docker pull gcr.io/google-samples/kubernetes-bootcamp:v1 v1: Pulling from google-samples/kubernetes-bootcamp 5c90d4a2d1a8: Pull complete ab30c63719b1: Pull complete 29d0bc1e8c52: Pull complete d4fe0dc68927: Pull complete dfa9e924f957: Pull complete Digest: sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af Status: Downloaded newer image for gcr.io/google-samples/kubernetes-bootcamp:v1 </code></pre> <p>describe</p> <pre><code>$ kubectl describe pods Name: kubernetes-bootcamp-74f58d6b87-w4zh8 Namespace: default Node: minikube/10.0.2.15 Start Time: Tue, 24 Jul 2018 15:05:00 -0400 Labels: pod-template-hash=3091482643 run=kubernetes-bootcamp Annotations: &lt;none&gt; Status: Pending IP: 172.17.0.3 Controlled By: ReplicaSet/kubernetes-bootcamp-74f58d6b87 Containers: kubernetes-bootcamp: Container ID: Image: gci.io/google-samples/kubernetes-bootcamp:v1 Image ID: Port: 8080/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-wp28q (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-wp28q: Type: Secret (a volume populated by a Secret) SecretName: default-token-wp28q Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BackOff 23m (x281 over 1h) kubelet, minikube Back-off pulling image "gci.io/google-samples/kubernetes-bootcamp:v1" Warning Failed 4m (x366 over 1h) kubelet, minikube Error: ImagePullBackOff </code></pre>
<p><a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">Minikube</a> is a tool that makes it easy to run <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> locally. </p> <p>Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.</p> <p>Back to your issue. Have you checked if you provided enough resources to run Minikube environment?</p> <p>You may try to run minikube and force allocate more memory:</p> <pre><code>minikube start --memory 4096 </code></pre> <p>For further analysis, please provide information about resources dedicated to this installation and type of hypervisor you use.</p>
<p>I have a running Kubernetes (v1.11.1) cluster consisting of three nodes. I need to remove a node from the cluster properly. What is the proper way to do that? I used kubeadm to create the cluster.</p>
<p>Always drain the node before removing it:</p>

<pre><code>kubectl drain $NODE
</code></pre>

<p>Draining evicts every pod on the node and cordons it, so no new pods will be scheduled on it.</p>

<p>You can use these parameters to 'force' draining, overriding some restrictions:</p>

<pre><code>kubectl drain $NODE --force=true --delete-local-data=true --ignore-daemonsets=true
</code></pre>

<p>Once the node is drained, you can remove it from the cluster with <code>kubectl delete node $NODE</code>.</p>

<p>Find further info here:</p>

<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/</a></p>
<p>Can somebody explain why the following command shows that there have been no restarts but the age is 2 hours when it was started 17 days ago</p> <pre><code>kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE api-depl-nm-xxx 1/1 Running 0 17d xxx.xxx.xxx.xxx ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal ei-depl-nm-xxx 1/1 Running 0 2h xxx.xxx.xxx.xxx ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal jenkins-depl-nm-xxx 1/1 Running 0 2h xxx.xxx.xxx.xxx ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal </code></pre> <p>The deployments have been running for 17 days:</p> <pre><code>kubectl get deploy -o wide NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINER(S) IMAGE(S) SELECTOR api-depl-nm 1 1 1 1 17d api-depl-nm xxx name=api-depl-nm ei-depl-nm 1 1 1 1 17d ei-depl-nm xxx name=ei-depl-nm jenkins-depl-nm 1 1 1 1 17d jenkins-depl-nm xxx name=jenkins-depl-nm </code></pre> <p>The start time was 2 hours ago:</p> <pre><code>kubectl describe po ei-depl-nm-xxx | grep Start Start Time: Tue, 24 Jul 2018 09:07:05 +0100 Started: Tue, 24 Jul 2018 09:10:33 +0100 </code></pre> <p>The application logs show it restarted. So why is the restarts 0?</p> <p><strong>Updated with more information as a response to answer.</strong></p> <p>I may be wrong but I don't think the deployment was updated or scaled it certainly was not done be me and no one else has access to the system.</p> <pre><code> kubectl describe deployment ei-depl-nm ... CreationTimestamp: Fri, 06 Jul 2018 17:06:24 +0100 Labels: name=ei-depl-nm ... Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate ... Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: ei-depl-nm-xxx (1/1 replicas created) Events: &lt;none&gt; </code></pre> <p>I may be wrong but I don't think the worker node was restarted or shut down</p> <pre><code>kubectl describe nodes ip-xxx.eu-west-1.compute.internal Taints: &lt;none&gt; CreationTimestamp: Fri, 06 Jul 2018 16:39:40 +0100 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Fri, 06 Jul 2018 16:39:45 +0100 Fri, 06 Jul 2018 16:39:45 +0100 RouteCreated RouteController created a route OutOfDisk False Wed, 25 Jul 2018 16:30:36 +0100 Fri, 06 Jul 2018 16:39:40 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 25 Jul 2018 16:30:36 +0100 Wed, 25 Jul 2018 02:23:01 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 25 Jul 2018 16:30:36 +0100 Wed, 25 Jul 2018 02:23:01 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure Ready True Wed, 25 Jul 2018 16:30:36 +0100 Wed, 25 Jul 2018 02:23:11 +0100 KubeletReady kubelet is posting ready status ...... Non-terminated Pods: (4 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default ei-depl-nm-xxx 100m (5%) 0 (0%) 0 (0%) 0 (0%) default jenkins-depl-nm-xxx 100m (5%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-dns-xxx 260m (13%) 0 (0%) 110Mi (1%) 170Mi (2%) kube-system kube-proxy-ip-xxx.eu-west-1.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 560m (28%) 0 (0%) 110Mi (1%) 170Mi (2%) Events: &lt;none&gt; </code></pre>
<p>There are two things that might happen:</p> <ol> <li><p>The deployment was updated or scaled:</p> <ul> <li>age of deployment does not change</li> <li><p>new ReplicaSet is created, old ReplicaSet is deleted. You can check it by running </p> <pre><code>$ kubectl describe deployment &lt;deployment_name&gt; ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 1m deployment-controller Scaled up replica set testdep1-75488876f6 to 1 Normal ScalingReplicaSet 1m deployment-controller Scaled down replica set testdep1-d4884df5f to 0 </code></pre></li> <li><p>pods created by old ReplicaSet are terminated, new ReplicaSet created brand new pod with restarts 0 and age 0 sec.</p></li> </ul></li> <li><p>Worker node was restarted or shut down.</p> <ul> <li>Pod on old worker node disappears</li> <li>Scheduler creates a brand new pod on the first available node (it can be the same node after reboot) with restarts 0 and age 0 sec.</li> <li><p>You can check the node start events by running</p> <pre><code>kubectl describe nodes &lt;node_name&gt; ... Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 32s kubelet, &lt;node-name&gt; Starting kubelet. Normal NodeHasSufficientPID 31s (x5 over 32s) kubelet, &lt;node-name&gt; Node &lt;node-name&gt; status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 31s kubelet, &lt;node-name&gt; Updated Node Allocatable limit across pods Normal NodeHasSufficientDisk 30s (x6 over 32s) kubelet, &lt;node-name&gt; Node &lt;node-name&gt; status is now: NodeHasSufficientDisk Normal NodeHasSufficientMemory 30s (x6 over 32s) kubelet, &lt;node-name&gt; Node &lt;node-name&gt; status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 30s (x6 over 32s) kubelet, &lt;node-name&gt; Node &lt;node-name&gt; status is now: NodeHasNoDiskPressure Normal Starting 10s kube-proxy, &lt;node-name&gt; Starting kube-proxy. </code></pre></li> </ul></li> </ol>
<p>I have a volume with a secret called config-volume. I want to have that file in the /home/code/config folder, which is where the rest of the configuration files are. For that, I mount it like this:</p>

<pre><code>volumeMounts:
- name: config-volumes
  mountPath: /home/code/config
</code></pre>

<p>The issue is that, after deploying, /home/code/config only contains the secret file and the rest of the files are gone.</p>

<p>So /home/code/config is an existing (non-empty) folder, and I suspect that the volumeMount overwrites the folder.</p>

<p>Is there a way this can be done without overwriting everything?</p>
<p>You can do the following, taken from <a href="https://github.com/kubernetes/kubernetes/issues/44815#issuecomment-297077509" rel="noreferrer">this GitHub issue</a>:</p>

<pre><code>containers:
- volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config
    subPath: config
volumes:
- name: config-volumes
  configMap:
    name: my-config
</code></pre>

<p>This assumes that your <code>ConfigMap</code> is called <em>my-config</em> and that you have a key <em>config</em> in it.</p>
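<p>Since the question mounts a Secret rather than a ConfigMap, an equivalent sketch using a secret volume would look like the fragment below. The secret name and key are hypothetical; the idea is to mount the single file at a path inside the existing folder so the other files there are left untouched:</p>

<pre><code>containers:
- volumeMounts:
  - name: config-volumes
    # Mount only this one file into the existing directory
    mountPath: /home/code/config/my-secret.conf
    subPath: my-secret.conf
volumes:
- name: config-volumes
  secret:
    secretName: my-secret          # assumed Secret containing the key my-secret.conf
</code></pre>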
<p>We need to enable some sysctl parameters in kubernetes. This should be achievable with the below annotation in the Deployment. </p> <pre><code>annotations: security.alpha.kubernetes.io/unsafe-sysctls: net.ipv4.ip_local_port_range="10240 65535" </code></pre> <p>When doing so the container fails to start with the error:</p> <pre><code>Warning FailedCreatePodSandBox 8s (x12 over 19s) kubelet, &lt;node&gt; Failed create pod sandbox. </code></pre> <p>The solution looks to be to add this flag to the kublet:</p> <pre><code>--experimental-allowed-unsafe-sysctls </code></pre> <p>Which for other flags can be done under <em>kubelet</em> in</p> <pre><code>kops edit cluster </code></pre> <p>Does anyone know the correct way to do this as it refuses to pick up the setting when entering the flag there.</p> <p>Thanks, Alex</p>
<p>A fix for this was merged back in May, you can see the PR here: <a href="https://github.com/kubernetes/kops/pull/5104/files" rel="nofollow noreferrer">https://github.com/kubernetes/kops/pull/5104/files</a></p> <p>You'd enable it with:</p> <pre><code>spec: kubelet: ExperimentalAllowedUnsafeSysctls: - 'net.ipv4.ip_local_port_range="10240 65535"' </code></pre> <p>It seems the flag takes a stringSlice, so you'd need to pass an array.</p> <p>If that doesn't work, ensure you're using the right version of kops</p>
<p><strong>Problem</strong></p> <hr> <p>I am trying to make a client in Java using <code>gRPC</code>. I have been given access to a <code>kubernetes</code> namespace to test out the client. However, all I have is the certificate authority for the cluster and a bearer token.</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: /etc/ssl/certs/devwat-dal13-cruiser15-ca-bundle.pem server: https://&lt;host-ip&gt;:&lt;port&gt; name: devwat-dal13-cruiser15 contexts: - context: cluster: devwat-dal13-cruiser15 namespace: interns user: devwat-dal13-cruiser15-sa-interns-editor name: devwat-dal13-cruiser15-interns current-context: devwat-dal13-cruiser15-interns kind: Config preferences: {} users: - name: devwat-dal13-cruiser15-sa-interns-editor user: token: &lt;token&gt; </code></pre> <p><strong>Code</strong></p> <hr> <p>I don't know much about <code>SSL</code> and certificates but I tried to follow the documentation online on using <code>SSL/TLS</code> with <code>gRPC</code> with Java and came up with the following:</p> <pre><code>public class TrainerClient { private ManagedChannel channel; private TrainerGrpc.TrainerBlockingStub stub; //private final String OVERRIDE_AUTHORITY = "24164dfe5c7842c98de431e53b6111d9-kubernetes-ca"; private final String CERT_FILE_PATH = Paths.get("/etc", "ssl", "certs", "devwat-dal13-cruiser15-ca-bundle.pem").toString(); private static final Logger logger = Logger.getLogger(TrainerClient.class.getName()); public TrainerClient(URL serviceUrl) { File certFile = new File(CERT_FILE_PATH); try { logger.info("Initializing channel using SSL..."); this.channel = NettyChannelBuilder.forAddress(serviceUrl.getHost(), serviceUrl.getPort()) //.overrideAuthority(OVERRIDE_AUTHORITY) .sslContext(getSslContext(certFile)) .build(); logger.info("Initializing new blocking stub..."); this.stub = TrainerGrpc.newBlockingStub(channel); } catch (Exception ex) { logger.log(Level.SEVERE, "Channel build failed: {0}", ex.toString()); System.exit(1); } } public static void main(String[] args) { TrainerClient client = null; URL url = null; String fullUrl = "http://localhost:8443"; try { logger.info("Forming URL..."); url = new URL(fullUrl); logger.info("Initializing client..."); client = new TrainerClient(url); // Client Function Calls TrainerOuterClass.GetAllRequest request = TrainerOuterClass.GetAllRequest.newBuilder().setUserId("").build(); TrainerOuterClass.GetAllResponse response = client.getAllTrainingsJobs(request); } catch (Exception ex) { if (ex instanceof MalformedURLException) { logger.log(Level.SEVERE, "URL is malformed."); } else { logger.log(Level.SEVERE, "Exception has occurred: {0}", ex.getStackTrace()); ex.printStackTrace(); } } finally { if (client != null) { try { logger.info("Shutting down client..."); client.shutdown(); } catch (InterruptedException ex) { logger.log(Level.WARNING, "Channel shutdown was interrupted."); } } } } public SslContext getSslContext(File certFile) throws SSLException { return GrpcSslContexts.forClient() .trustManager(certFile) .build(); } private void shutdown() throws InterruptedException { channel.shutdown().awaitTermination(5, TimeUnit.SECONDS); } } </code></pre> <p>The pod type is <code>ClusterIP</code> and is being port-forwarded to <code>localhost</code> with port <code>8443</code>.</p> <p><strong>Error</strong></p> <hr> <p>When I run this, I get the following stack trace:</p> <pre><code>SEVERE: Exception has occurred: io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:210) io.grpc.StatusRuntimeException: 
UNAVAILABLE at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:210) at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:191) at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:124) at grpc.trainer.v2.TrainerGrpc$TrainerBlockingStub.getAllTrainingsJobs(TrainerGrpc.java:695) at me.mikeygulati.grpc.TrainerClient.getAllTrainingsJobs(TrainerClient.java:70) at me.mikeygulati.grpc.TrainerClient.main(TrainerClient.java:138) Caused by: javax.net.ssl.SSLHandshakeException: General OpenSslEngine problem at io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:648) at io.netty.internal.tcnative.SSL.readFromSSL(Native Method) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:482) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1020) at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1127) at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:210) at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1215) at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1127) at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) Caused by: java.security.cert.CertificateException: No name matching localhost found at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:231) at sun.security.util.HostnameChecker.match(HostnameChecker.java:96) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:221) at io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:644) ... 26 more Jul 24, 2018 10:52:05 AM me.mikeygulati.grpc.TrainerClient main </code></pre> <p>From what I have read online, this happens because the <code>Common Name</code> on the <code>CA</code> does not match the hostname, in my case, <code>localhost</code>. I have tried using an <code>Override Authority</code> so that it would match the <code>Common Name</code> in the <code>CA</code> but I got the same error.</p> <p>So, I am fairly sure this is not the correct way to do it. I feel like I should have been supplied a client certificate and a client key with the <code>kubernetes</code> cluster but I didn't so, I want to ask if maybe there's something wrong with what I am doing.</p>
<p>Figured it out.</p> <p>My company had a client certificate (<code>client.crt</code>) lying around that I was supposed to use instead of the <code>CA</code>. When I used that certificate instead with the proper override authority, the error went away.</p>
<p>I can get the ReplicaSet if given the <code>replica-set-name</code> using the api as below:</p> <pre><code>GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name} </code></pre> <p>But how can I get the ReplicaSet based on the deployment?</p> <p>Any help is appreciated.</p> <p>Thank you</p>
<blockquote>
  <p>But how can I get the ReplicaSet based on the deployment?</p>
</blockquote>

<p>With quite some gymnastics... If you inspect how <code>kubectl</code> does it (by executing <code>kubectl -n my-namespace describe deploy my-deployment --v=9</code>) you will see that it does the following:</p>

<ul>
<li>it first gets the deployment details with <code>/apis/extensions/v1beta1/namespaces/my-namespace/deployments/my-deployment</code>, and from there it reads the labels used for ReplicaSet selection;</li>
<li>it then gets the ReplicaSet details using the labels from the previous step (say the labels were <code>my-key1:my-value1</code> and <code>my-key2:my-value2</code>) like so: <code>/apis/extensions/v1beta1/namespaces/my-namespace/replicasets?labelSelector=my-key1%3Dmy-value1%2Cmy-key2%3Dmy-value2</code></li>
</ul>

<p>The fiddly part is extracting the multiple labels from the deployment output and formatting them for the ReplicaSet call; that is a job for grep, awk, jq or even Python, depending on your actual use case (bash, Python, some client library, or whatever...).</p>
<p>I'm writing a custom controller for our Kubernetes cluster that will listen to node events and perform some operation on the node. I'm using the Kubernetes client-go library and I am able to capture Kubernetes events whenever a node is attached to or removed from the cluster. But is it possible to get the AWS instance details of the Kubernetes node that has been created, like instance id, tags etc.? Thanks in advance.</p> <p>PS: I have installed the Kubernetes cluster using kops</p>
<p>On a Kubernetes node in AWS, you'll have some things populated as part of the node labels and various other parts of the node's metadata:</p> <pre><code>kubectl get nodes -o json | jq '.items[].metadata.labels' { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/instance-type": "c5.large", "beta.kubernetes.io/os": "linux", "failure-domain.beta.kubernetes.io/region": "us-east-1", "failure-domain.beta.kubernetes.io/zone": "us-east-1b", "kubernetes.io/hostname": "&lt;hostname&gt;", "kubernetes.io/role": "master", "manufacturer": "amazon_ec2", "node-role.kubernetes.io/master": "", "operatingsystem": "centos", "tier": "production", "virtual": "kvm" } </code></pre> <p>The node information is in <code>client-go</code> in the <a href="https://github.com/kubernetes/client-go/blob/master/kubernetes/typed/core/v1/node.go" rel="nofollow noreferrer">node package here</a> using the <code>Get</code> method. Here's an example:</p> <pre><code> client := kubernetes.NewForConfigOrDie(config) list, err := client.CoreV1().Nodes().List(metav1.ListOptions{}) if err != nil { fmt.Fprintf(os.Stderr, "error listing nodes: %v", err) os.Exit(1) } for _, node := range list.Items { fmt.Printf("Node: %s\n", node.Name) node, err := client.CoreV1().Nodes().Get(node.Name, metav1.GetOptions{}) if err != nil { fmt.Fprintf(os.Stderr, "error getting node: %v", err) os.Exit(1) } fmt.Println(node) } </code></pre> <p><em>However</em> this is really probably not the way you want to go about it. If you're running this on a kops cluster in AWS, the node your workload is running on already has access to the AWS API and also the <a href="https://aws.amazon.com/iam/faqs/" rel="nofollow noreferrer">IAM role</a> needed to query node data.</p> <p>With that in mind, please consider using the <a href="https://github.com/aws/aws-sdk-go" rel="nofollow noreferrer">AWS Go SDK</a> instead. You can query EC2 quite easily, here's an <a href="https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/go/example_code/ec2/describing_instances.go" rel="nofollow noreferrer">adapted example</a>:</p> <pre><code>package main import ( "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" ) func main() { // Load session from shared config sess := session.Must(session.NewSessionWithOptions(session.Options{ SharedConfigState: session.SharedConfigEnable, })) // Create new EC2 client ec2Svc := ec2.New(sess) // Call to get detailed information on each instance result, err := ec2Svc.DescribeInstances(nil) if err != nil { fmt.Println("Error", err) } else { fmt.Println("Success", result) } } </code></pre>
<p>Trying to move from Flink 1.3.2 to 1.5. We have a cluster deployed with Kubernetes. Everything works fine with 1.3.2 but I cannot submit a job with 1.5. When I try to do that I just see a spinner spin around infinitely, and the same happens via the REST API. I can't even submit the wordcount example job. It seems my taskmanagers cannot connect to the jobmanager; I can see them in the Flink UI, but in the logs I see</p>

<blockquote>
  <p>level=WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with org.apache.flink.shaded.akka.org.jboss.netty.channel.ConnectTimeoutException: connection timed out: flink-jobmanager-nonprod-2.rpds.svc.cluster.local/25.0.84.226:6123</p>
  <p>level=WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://[email protected]:6123] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://[email protected]:6123]] Caused by: [No response from remote for outbound association. Associate timed out after [20000 ms].]</p>
  <p>level=WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with org.apache.flink.shaded.akka.org.jboss.netty.channel.ConnectTimeoutException: connection timed out: flink-jobmanager-nonprod-2.rpds.svc.cluster.local/25.0.84.226:6123</p>
</blockquote>

<p>But I can telnet from the taskmanager to the jobmanager.</p>

<p>Moreover, everything works locally if I start Flink in cluster mode (jobmanager + taskmanager). In the 1.5 documentation I found the <strong>mode</strong> option which flips the mode between flip6 and legacy (default flip6), but if I set mode: legacy I don't see my taskmanagers registered at all.</p>

<p>Is there something specific about the k8s deployment and 1.5 that I need to do? I checked the 1.5 k8s config and it looks pretty much the same as ours, but we are using a customized Docker image for Flink (security, HA, checkpointing).</p>

<p>Thank you.</p>
<p>The issue is with jobmanager connectivity: the jobmanager Docker image cannot connect to the "flink-jobmanager" (${JOB_MANAGER_RPC_ADDRESS}) address.</p>

<p><strong>Just use the afilichkin/flink-k8s Docker image instead of flink:latest</strong></p>

<p>I fixed it by adding a new host entry to the jobmanager Docker image. You can see it in my GitHub project:</p>

<p><a href="https://github.com/Aleksandr-Filichkin/flink-k8s/tree/master" rel="nofollow noreferrer">https://github.com/Aleksandr-Filichkin/flink-k8s/tree/master</a></p>
<p>I have at least 4 config files for my service.</p>

<p>For example: application.properties, a log config file, a query file, ...</p>

<p>These 4 config files serve different purposes and I am storing them in Kubernetes ConfigMaps. Currently, I am creating 4 ConfigMaps for these 4 files, and it becomes more work to configure them in the deployment file.</p>

<p>We basically keep all these files in Git and drive changes from Git. So, if we need to modify something in a ConfigMap, we first update the file in Git and then recreate the ConfigMap.</p>

<p>Is there a better way to push out small changes?</p>

<p>Does it make sense to keep all these 4 files in a single ConfigMap?</p>

<p>Any advice please.</p>
<p>This is my personal opinion based on what I have learned so far, and there may be different, easier or even better ways out there, so please take this answer with a pinch of salt.</p>

<p>Say you have multiple services or projects, and each of these projects has its own configuration files or environment variables which are needed by the service to function as expected.</p>

<p>What I would do is:</p>

<ol>
<li>Decide which configuration options are secrets and which are normal env variables.</li>
<li>Create two files for these, named, say, secret-config.yml and env-config.yml.</li>
<li><p>Make sure you set the appropriate kind on each document. Here is a sample config file; a similar secret file can be stored alongside it.</p>

<pre><code>apiVersion: v1
data:
  ENV_1: test
  ENV_2: test2
  ENV_3: test3
kind: ConfigMap
metadata:
  annotations:
    field.cattle.io/creatorId: user-8rtg2
  name: my-service-configs
  namespace: my-namespace
</code></pre></li>
<li><p>Now, assuming you have a CI/CD environment set up, add a few steps to your pipeline or deployment config, after the service deploy step, to update these secrets/configmaps with the updated values (the script names below are placeholders).</p>

<pre><code>sh deploy_to_kubernetes
sh update_config_maps
sh update_secrets
</code></pre></li>
</ol>

<p>Now to explain myself:</p>

<ul>
<li><p>Why are we storing the Kubernetes config YAML in the repo?</p>

<p>Because we want to make sure all the related config is accessible and stored in a place where it is relevant. And it makes sense, as we are going to use it in the CI process.</p></li>
<li><p>What about creating or updating the config &amp; secrets from CI?</p>

<p>Now for me, since I am using the Rancher API on top of Kubernetes, I can use that API to access my cluster and create or update resources. I am sure you can find something similar to perform these actions on your cluster.</p></li>
</ul>

<p>So, concluding: store the config alongside your services, as the config is part of the service and is necessary for the code to work (and figure out a way to encode or hide secret values so they are not exposed in the code). Make sure you create Secrets and ConfigMaps based on the type of variables you are using. With this, you will be able to automate the creation of ConfigMaps, and each of these ConfigMaps will be in its proper context.</p>
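<p>On the question of whether to keep all four files in a single ConfigMap: one ConfigMap can hold several files as separate keys, which you can then mount as a directory. A minimal sketch, with placeholder file names and contents standing in for your actual files:</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-files          # hypothetical name
  namespace: my-namespace
data:
  application.properties: |
    server.port=8080
  logback.xml: |
    &lt;configuration/&gt;
  queries.sql: |
    -- query file contents
</code></pre>

<p>Mounted as a volume, each key becomes a file in the mount directory, so the deployment only has to reference one ConfigMap; the trade-off is that changing any one file means updating (and rolling out) the whole ConfigMap.</p>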
<p>If a node loses communication with the master, will it continue to run its workload in a self-healing way?</p> <p>For instance, if the master is unavailable and a pod exceeds its cpu limit and is killed, will the node independently restart the pod because that pod has already been scheduled on the node?</p>
<p>Yes. The local (node) supervisor looking after your pods is the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a>, and while you can't change anything while the connection to the API server is unavailable, the pods already scheduled on the node will continue to be supervised (and restarted if needed) by the kubelet. In this context, an interesting but not super useful thing to know (for end-users) is that there are also so-called <a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="nofollow noreferrer">static pods</a>, which you can launch manually on the node.</p>
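<p>For illustration, a static pod is just an ordinary pod manifest dropped into the kubelet's manifest directory (commonly /etc/kubernetes/manifests, though the path depends on how the kubelet is configured); the kubelet runs and restarts it without any involvement from the API server. A minimal sketch with a hypothetical name:</p>

<pre><code># Saved on the node, e.g. as /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: static-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
</code></pre>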
<p>Our aim is to horizontally scale a .NET Core 2.0 Web API using Kubernetes. The Web API application will be served by Kestrel.</p> <p>It looks like we can gracefully handle the termination of pods by configuring Kestrel's shutdown timeout so now we are looking into how to probe the application to determine readiness and liveness.</p> <p>Would it be enough to simply probe the Web API with a HTTP request? If so, would it be a good idea to create a new healthcheck controller to handle these probing requests or would it make more sense to probe an actual endpoint that would be consumed in normal use?</p> <p>What should we consider when differentiating between the liveness and readiness probes?</p>
<p>I would recommend to perform health checks through separate endpoints. In general, there are a number of good reasons for doing so, like:</p> <ol> <li>Checking that the application is live/ready or, more in general, in a healthy status is not necessarily the same as sending a user request to your web service. When performing health checks you should define what makes your web service healthy: this could be e.g. checking access to external resources, like database.</li> <li>It is easier to control who can actually perform health checks through your endpoints.</li> <li>More in general, you do not want to mess up with the actual service functionalities: you would otherwise need to re-think the way you do health checks when maintaining your service's functionalities. E.g. if your service interacts with a database, in a health checks context you want to verify the connection to the database is fine, but you do not actually care much about the data being manipulated internally by your service.</li> <li>Things get even more complicated if your web service is not stateless: in such case, you will need to make sure data remain consistent independently from your health checks.</li> </ol> <p>As you pointed out, a good way to avoid any of the above could be setting up a separate Controller to handle health checks.</p> <p>As an alternative option, there is a standard library available in ASP.NET Core for enabling <a href="https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health" rel="noreferrer">Health Checks</a> on your web service: at the time of writing this answer, it is not officially part of ASP.NET Core and no NuGet packages are available yet, but there is a plan for this to happen on future releases. For now, you can easily pull the code from the <a href="https://github.com/dotnet-architecture/HealthChecks" rel="noreferrer">Official Repository</a> and include it in your solution as explained in the <a href="https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health" rel="noreferrer">Microsoft documentation</a>. This is currently planned to be included in ASP.NET Core 2.2 as described in the <a href="https://github.com/aspnet/Announcements/issues/307" rel="noreferrer">ASP.NET Core 2.2 Roadmap</a>.</p> <p>I personally find it very elegant, as you will configure everything through the <code>Startup.cs</code> and <code>Program.cs</code> and won't need to explicitly create a new endpoint as the library already handles that for you.</p> <p>I have been using it in a few projects and I would definitely recommend it. 
The repository includes an <a href="https://github.com/dotnet-architecture/HealthChecks/tree/dev/samples/SampleHealthChecker.AspNetCore" rel="noreferrer">example</a> specific to ASP.NET Core projects that you can use to get up to speed quickly.</p> <h1>Liveness vs Readiness</h1> <p>In Kubernetes, you may then set up liveness and readiness probes through HTTP: as explained in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">Kubernetes documentation</a>, while the setup for both is almost identical, Kubernetes takes different actions depending on the probe:</p> <p><strong>Liveness probe</strong> from the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">Kubernetes documentation</a>:</p> <blockquote> <p>Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.</p> </blockquote> <p><strong>Readiness probe</strong> from the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">Kubernetes documentation</a>:</p> <blockquote> <p>Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.</p> </blockquote> <p>So, while an unhealthy response to a liveness probe will cause the Pod (and so, the application) to be killed, an unhealthy response to a readiness probe will simply cause the Pod to receive no traffic until it gets back to a healthy status.</p> <p><strong>What to consider when differentiating liveness and readiness probes?</strong></p> <p>For the liveness probe: I would recommend defining what makes your application healthy, i.e. the minimum requirements for user consumption, and implementing health checks based on that. This typically involves external resources or applications running as separate processes, e.g. databases, web services, etc. You may define health checks by using the ASP.NET Core Health Checks library or manually with a separate Controller.</p> <p>For the readiness probe: you simply want to hit your service to verify that it actually responds in time, so that Kubernetes can balance traffic accordingly. Trivially (and in most cases, as suggested by Lukas in another answer), you may use the exact same endpoint you would use for liveness but with different timeouts; this really depends on your needs and requirements.</p>
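<p>To make the Kubernetes side concrete, the probes could be wired into the pod template roughly as follows. This is only a sketch: the container port and the <code>/health/live</code> and <code>/health/ready</code> paths are assumptions, so substitute whatever endpoints your health-check Controller (or the Health Checks library) actually exposes:</p> <pre><code># fragment of a Deployment's pod spec
containers:
- name: webapi
  image: myregistry/webapi:1.0   # hypothetical image
  ports:
  - containerPort: 80
  livenessProbe:
    httpGet:
      path: /health/live         # assumed liveness endpoint
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 15
    timeoutSeconds: 2
  readinessProbe:
    httpGet:
      path: /health/ready        # assumed readiness endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 2
</code></pre> <p>With this shape, a failing <code>/health/live</code> eventually restarts the container, while a failing <code>/health/ready</code> only removes the Pod from the Service endpoints until it recovers.</p>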
<p>I have a microservice that needs to build a cache on a Redis server upon initialization, fetching data from a MongoDB instance outside the cluster.</p> <p>Is it a bad practice to put the service and the Redis server in two separate containers on the same pod? Or should they be on different pods?</p>
<p>They should be on different pods (and different deployments).</p> <p>Two reasons for this: you probably want to share a single Redis cache across multiple copies of the service (their scaling properties are different); and you can deploy a prebuilt Redis system (via Helm) without needing to tightly integrate it with your application deployment.</p>
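<p>As a quick sketch of the second point, with Helm 2 you could install the community Redis chart and let your service reach it through its Service DNS name; the release name <code>my-redis</code> is just an example, and the exact Service name created depends on the chart's conventions:</p> <pre><code>helm install --name my-redis stable/redis
# the chart creates a Service you can reference from your app, e.g. something like
# redis://my-redis-master.default.svc.cluster.local:6379 passed in as an env var
</code></pre>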
<p>We have a SpringBoot Web app docker container deployed in Kubernetes with 3 replicas. When the controller redirects to a different URL within the same controller, we pass an object via the flashAttributes. When we run 1 pod, everything works. But when I scale to 3 pods, the object arrives with all internal attributes set to null. Has anyone come across this issue? If so, could you prescribe a solution?</p> <p>Thanks,</p> <p>SR</p>
<p>Kubernetes can send requests from the same session to different pods within a deployment. That's why the data is lost: the flash attributes might be in memory on one pod, but another pod will not have that data at all. </p> <p>To avoid this, you can either maintain the session data in an external store like Redis or use sticky sessions so that requests for a given session are always sent to the same pod.</p> <p>Some pointers to the solution:</p> <ul> <li><p><a href="https://www.jeroenreijn.com/2015/09/testing-session-replication-with-docker-compose-redis-spring-session.html" rel="nofollow noreferrer">Using Redis for external session data</a></p></li> <li><p><a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/session-persistence" rel="nofollow noreferrer">Sticky sessions</a> - this approach needs Nginx as an ingress controller (see the sketch below)</p></li> </ul>
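<p>For the sticky-sessions route with the NGINX ingress controller, the cookie-affinity annotations look roughly like the following. Treat it as a sketch: host, service and ingress names are placeholders, and the exact annotation set can vary between ingress controller versions:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress                      # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: webapp.example.com                # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-svc           # your Spring Boot service
          servicePort: 80
</code></pre>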
<p>I am looking for a way to delete PersistentVolumeClaims assigned to pods of a StatefulSet automatically when I scale down the number of instances. Is there a way to do this within k8s? I haven't found anything in the docs, yet.</p>
<p>I suspect that a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#lifecycle-v1-core" rel="nofollow noreferrer"><code>preStop</code> Lifecycle Handler</a> could submit a <code>Job</code> to clean up the PVC, assuming the Pod's <code>ServiceAccount</code> had the <code>Role</code> to do so. Unfortunately, the Lifecycle Handler docs say that the <code>exec</code> blocks the Pod deletion, so that's why whatever happened would need to be asynchronous from the Pod's perspective.</p> <p>Another approach might be to unconditionally scan the cluster or namespace with a <code>CronJob</code> and delete unassigned PVCs, or those that match a certain selector.</p> <p>But I don't think there is any <em>inherent</em> ability to do that, given that (at least in my own usage) it's reasonable to scale a <code>StatefulSet</code> up and down, and when scaling it back up then one would actually desire that the <code>Pod</code> regain its identity in the <code>StatefulSet</code>, which typically includes any persisted data.</p>
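<p>To make the CronJob idea slightly more concrete, the job's container could simply run <code>kubectl</code> against claims matching a label selector. This is only a sketch (the namespace, label and required RBAC are assumptions), and in practice you would add logic so that only claims left behind by a scale-down are removed, e.g. those whose ordinal suffix is above the current replica count:</p> <pre><code># run from a CronJob whose ServiceAccount may list/delete persistentvolumeclaims
kubectl get pvc -n my-namespace -l app=my-statefulset \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
  | while read pvc; do
      # insert a check here that the claim is no longer mounted by a running Pod
      kubectl delete pvc -n my-namespace "$pvc"
    done
</code></pre>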
<p>From the Kubernetes dashboard, I can see that my deployment, pods &amp; replication are all green and were successfully deployed from the VSTS Continuous Deployment pipeline.</p> <p>My issue is that I am unable to view the site. <a href="https://i.stack.imgur.com/xfw64.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xfw64.png" alt="enter image description here"></a></p> <p>Can anyone help me with how to start troubleshooting?</p>
<p>You should be able to access the identityapi with its public IP through a browser. If you cannot reach it on the web, first check the configuration of the identityapi, then SSH into the Kubernetes node to check whether the identityapi service is working properly. Also verify that the configuration of the service matches what the Kubernetes dashboard shows.</p>
<p>I am trying to migrate a dockerized project to Kubernetes, and I have used Kompose to convert the project:</p> <p><code>kompose --file docker-compose.yml convert</code></p> <p>When I run <code>kompose up</code> after migrating the files I get this error:</p> <p><code>$ kompose up WARN Unsupported env_file key - ignoring<br> FATA Error while deploying application: k.Transform failed: image key required within build parameters in order to build and push service 'drkiq' </code></p> <p>.env file:</p> <p><code>SECRET_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx WORKER_PROCESSES=1 LISTEN_ON=0.0.0.0:8000 DATABASE_URL=postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8&amp;pool=5&amp;timeout=5000 CACHE_URL=redis://redis:6379/0 JOB_WORKER_URL=redis://redis:6379/0 </code></p> <p>The dockerized project is linked <a href="https://semaphoreci.com/community/tutorials/dockerizing-a-ruby-on-rails-application" rel="nofollow noreferrer" title="this text appears when you mouse over">here</a>.</p> <p>Any idea how to convert the .env file to a format that can be used with Kubernetes/Kompose?</p> <p>Docker Compose file:</p> <pre><code> postgres: image: postgres:9.4.5 environment: POSTGRES_USER: drkiq POSTGRES_PASSWORD: yourpassword ports: - '5432:5432' volumes: - drkiq-postgres:/var/lib/postgresql/data redis: image: redis:3.0.5 ports: - '6379:6379' volumes: - drkiq-redis:/var/lib/redis/data drkiq: build: . links: - postgres - redis volumes: - .:/drkiq ports: - '8000:8000' env_file: - .drkiq.env sidekiq: build: . command: bundle exec sidekiq -C config/sidekiq.yml links: - postgres - redis volumes: - .:/drkiq env_file: - .drkiq.env </code></pre>
<p>Kubernetes kompose supports <code>env_file</code> conversion from <a href="https://docs.docker.com/compose/compose-file/" rel="nofollow noreferrer">Docker Compose 3.x</a> version as it's described in <a href="https://github.com/kubernetes/kompose/blob/master/docs/conversion.md" rel="nofollow noreferrer">Conversion matrix</a>.</p> <p>In Kubernetes you can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> to store your environment variables from <code>env_file</code>. For <code>SECRET_TOKEN</code> variable, you can use <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secrets</a> to hold your private and sensitive data.</p> <p>You can also check other tools for conversion purpose like <a href="https://github.com/kelseyhightower/compose2kube" rel="nofollow noreferrer">compose2kube</a> or <a href="https://github.com/dhoer/k8s-env-gen" rel="nofollow noreferrer">k8s-env-gen</a>.</p> <p>According to the attached <code>Docker-composer</code> file and the error during the conversion process, I can assume that you missed image key value for <code>drkiq</code> and <code>sidekiq</code> services:</p> <p><strong>Update:</strong> docker-compose.yml file</p> <pre><code>version: '2' services: postgres: image: postgres:9.4.5 environment: POSTGRES_USER: drkiq POSTGRES_PASSWORD: yourpassword ports: - '5432:5432' volumes: - drkiq-postgres:/var/lib/postgresql/data redis: image: redis:3.0.5 ports: - '6379:6379' volumes: - drkiq-redis:/var/lib/redis/data drkiq: build: . image: drkiq:tag links: - postgres - redis volumes: - .:/drkiq ports: - '8000:8000' env_file: - .drkiq.env sidekiq: build: . command: bundle exec sidekiq -C config/sidekiq.yml image: sidekiq:tag links: - postgres - redis volumes: - .:/drkiq env_file: - .drkiq.env </code></pre>
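<p>If you prefer to wire the environment in directly with <code>kubectl</code> instead of relying on the conversion, something along these lines should work; the object names are just examples, and the token value is a placeholder:</p> <pre><code># non-sensitive settings straight from the env file
kubectl create configmap drkiq-env --from-env-file=.drkiq.env

# keep the secret token out of the ConfigMap
kubectl create secret generic drkiq-secrets \
  --from-literal=SECRET_TOKEN=xxxxxxxxxxxxxxxxxxxx
</code></pre> <p>In the resulting Deployment you can then pull everything in at once with <code>envFrom</code>, referencing the ConfigMap via <code>configMapRef</code> and the Secret via <code>secretRef</code>.</p>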
<p>We’ve been planning for a while now to introduce <code>securityContext: runAsNonRoot: true</code> as a requirement in our pod configurations.</p> <p>Testing this today I’ve learnt that since <code>v1.8.4</code> (I think) you also have to specify a particular UID for the user running the container, e.g. <code>runAsUser: 333</code>.</p> <p>This means we not only have to tell developers to ensure their containers don’t run as root, but also to specify a specific UID that they should run as, which makes this significantly more problematic for us to introduce.</p> <p>Have I understood this correctly? What are others doing in this area? To leverage <code>runAsNonRoot</code>, is it now required that Docker containers run with a specific and known UID?</p>
<p>The Kubernetes Pod SecurityContext provides two options <code>runAsNonRoot</code> and <code>runAsUser</code> to enforce non root users. You can use both options separate from each other because they test for different configurations. </p> <p>When you set <code>runAsNonRoot: true</code> you require that the container will run with a user with any UID other than 0. No matter which UID your user has.<br> When you set <code>runAsUser: 333</code> you require that the container will run with a user with UID 333. </p>
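<p>For reference, the two options sit side by side under <code>securityContext</code>; a minimal sketch (image and names are hypothetical):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: non-root-example
spec:
  securityContext:
    runAsNonRoot: true      # any non-zero UID is accepted
    # runAsUser: 333        # only needed if you must pin a specific UID
  containers:
  - name: app
    image: myorg/myapp:1.0
</code></pre> <p>One caveat worth noting: if the image only declares its user by name (a non-numeric <code>USER</code> in the Dockerfile), the kubelet cannot verify that it is non-root and will refuse to start the container, which is usually what pushes people towards adding <code>runAsUser</code> or a numeric <code>USER</code> in the image.</p>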
<p>There are two kinds of provisioner in a Kubernetes StorageClass. One is:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hdd1
provisioner: kubernetes.io/cinder
parameters:
  type: HDD1 # change for your cloud volume type
  availability: nova
</code></pre> <p>and the other is:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: cinder-standard-iops
provisioner: openstack.org/standalone-cinder
parameters:
  type: standard-iops
</code></pre> <p>I'm wondering what the difference between them is. Thanks!</p>
<p><code>provisioner: kubernetes.io/cinder</code> is the built-in driver, described in the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#openstack-cinder" rel="nofollow noreferrer">official documentation</a>.</p> <p><code>provisioner: openstack.org/standalone-cinder</code> is a beta feature which was created for use with external Cinder storage.</p> <p><a href="https://github.com/kubernetes-incubator/external-storage/issues/317" rel="nofollow noreferrer">Here</a> you can find the main discussion about adding an additional provisioner for standalone Cinder.</p> <blockquote> <p>The builtin kubernetes cinder support expects that nodes are deployed on nova instances. In order to use cinder as a standalone storage service I'd like to add an external provisioner. This provisioner creates volumes in cinder and retrieves connection information. It then translates this connection information into a native k8s PV (ie. iscsi or rbd are already implemented).</p> </blockquote>
<p>I have a Kubernetes cluster that runs a number of independent, discrete services. I want to use helm to deploy them, and I have made a helm chart for every individual resource.</p> <p>However, now I want to be able to deploy the cluster as a single entity, but it is not clear to me how helm supports stitching together multiple charts.</p> <p>When I look at example repos, they simply have every single template file in the template folder of a single chart, and then a giant, sprawling Values.yaml file.</p> <p>To me, that seems unwieldy, especially crawling around a 2000 line Values.yaml looking for settings.</p> <p>Is there any way to take a folder structure that looks like this:</p> <pre><code>helm |____ Service1 |_______ values.yaml |_______ templates Service2 |_______ values.yaml |_______ templates Service3 |_______ values.yaml |_______ templates </code></pre> <p>And package it into one deployment without manually merging and de-duping the files and values?</p>
<p>We also have similar scenarios, where we have independent applications that we either need to deploy together, to address features that span across them, or individually, to address bugs. We ended up using helmfile (<a href="https://github.com/roboll/helmfile" rel="noreferrer">https://github.com/roboll/helmfile</a>). Each application still maintains its own chart, and using helmfile we can deploy them all together when we need to (see the sketch below).</p>
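<p>A minimal <code>helmfile.yaml</code> for the layout in the question might look like this; paths, names and namespaces are illustrative. Running <code>helmfile sync</code> then applies every release in one go, while <code>helm</code> can still be used on each chart individually:</p> <pre><code># helmfile.yaml
releases:
  - name: service1
    namespace: default
    chart: ./Service1          # local chart directory
    values:
      - ./Service1/values.yaml
  - name: service2
    namespace: default
    chart: ./Service2
    values:
      - ./Service2/values.yaml
  - name: service3
    namespace: default
    chart: ./Service3
    values:
      - ./Service3/values.yaml
</code></pre>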
<p>Currently, per Microsoft documentation, you can set a static IP address in the resource group of the Kubernetes service. The problem with this is that if you delete the resource group / cluster then the static IP address is also gone.</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p> <p>Is there a way to link a reserved IP address in Azure to AKS so that the IP address is guaranteed?</p> <p><a href="https://i.stack.imgur.com/J3hxs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3hxs.png" alt="enter image description here"></a></p>
<p>As far as I know, there are two types of public IP in Azure: dynamic and static. But whether it is static or dynamic, we cannot choose the specific address ourselves; the IP is assigned by Azure. The types just describe the lifetime of the Public IP.</p> <p>Only an IP inside a VNet that we designed ourselves can be assigned a specific address of our choosing with the static type.</p> <p>Reference document: <a href="https://learn.microsoft.com/en-au/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-au/azure/aks/static-ip</a></p>
<p>I have a Kubernetes cluster that runs a number of independent, discrete services. I want to use helm to deploy them, and I have made a helm chart for every individual resource.</p> <p>However, now I want to be able to deploy the cluster as a single entity, but it is not clear to me how helm supports stitching together multiple charts.</p> <p>When I look at example repos, they simply have every single template file in the template folder of a single chart, and then a giant, sprawling Values.yaml file.</p> <p>To me, that seems unwieldy, especially crawling around a 2000 line Values.yaml looking for settings.</p> <p>Is there any way to take a folder structure that looks like this:</p> <pre><code>helm |____ Service1 |_______ values.yaml |_______ templates Service2 |_______ values.yaml |_______ templates Service3 |_______ values.yaml |_______ templates </code></pre> <p>And package it into one deployment without manually merging and de-duping the files and values?</p>
<p>Use <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">helm subcharts</a></p> <p>You'd need to have something like a meta-chart, <code>myapps</code>. Then you'd add a <code>requirements.yaml</code> file like so:</p> <pre><code># myapps/requirements.yaml dependencies: - name: Service1 repository: http://localhost:10191 version: 0.1.0 - name: Service2 repository: http://localhost:10191 version: 0.1.0 - name: Service3 repository: http://localhost:10191 version: 0.1.0 </code></pre>
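<p>After declaring the dependencies you would fetch them and install the umbrella chart in one go; this assumes the three subcharts are actually published to the repository referenced above:</p> <pre><code>helm dependency update myapps   # pulls the subcharts into myapps/charts/
helm install ./myapps
</code></pre> <p>Values for each subchart can then be overridden from the parent chart's <code>values.yaml</code> under a key named after the subchart, which avoids one giant flat values file.</p>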
<p>When I am trying to set up the pod network using the following</p> <pre><code>sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml </code></pre> <p>I get this error; please help:</p> <pre><code>unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused </code></pre> <p><strong>UPDATE:</strong> It doesn't seem to be a permission issue, unlike the other question.</p>
<p>Found that it's an issue with kubectl not being configured properly.</p> <p>Fixed the issue by using the following commands for the Calico network (change accordingly for your network addon plugin):</p> <pre><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <p>and then run </p> <pre><code>sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml </code></pre> <p>and follow the rest accordingly</p>
<p>I have a cloud-native application, which is implemented using <code>Spring Cloud Netflix</code>.</p> <p>So, in my application, I'm using <code>Eureka</code> service discovery to manage all instances of the different services of the application. When each service instance wants to talk to another one, it uses <code>Eureka</code> to fetch the required information about the target service (IP and port, for example).</p> <p>The service orchestration can also be achieved using tools like <code>Docker Swarm</code> and <code>Kubernetes</code>, and it looks like there are some overlaps between what <code>Eureka</code> does and what <code>Docker Swarm</code> and <code>Kubernetes</code> can do.</p> <p>For example, imagine I create a service in <code>Docker Swarm</code> with 5 instances. Swarm ensures that those 5 instances are always up and running. Additionally, each service of the application sends a periodic heartbeat to <code>Eureka</code> internally, to show that it's still alive. It seems we have two layers of health check here, one for <code>Docker</code> and another inside <code>Spring Cloud</code> itself.</p> <p>Or for example, you can expose a port for a service across the entire swarm, which eliminates some of the need for service discovery (the ports are always apparent). Another example could be load balancing performed by the <code>routing mesh</code> inside Docker, and the load balancing happening internally by the <code>Ribbon</code> component or <code>Eureka</code> itself. In this case, having a hardware load balancer leads us to a 3-layered load balancing functionality.</p> <p>So, I want to know: is it rational to use these tools together? It seems using a combination of these technologies increases the complexity of the application very much and may be redundant.</p> <p>Thank you for reading!</p>
<p>If you already have the application working then there's presumably more effort and risk in removing the netflix components than keeping them. There's an argument that if you could remove e.g. eureka then you wouldn't need to maintain it and it would be one less thing to upgrade. But that might not justify the effort and it also depends on whether you are using it for anything that might not be fulfilled by the orchestration tool. </p> <p>For example, if you're connecting to services that are not set up as load-balanced (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">'headless services'</a>) then you might want ribbon within your services. (You could do this using tools in the <a href="https://github.com/spring-cloud-incubator/spring-cloud-kubernetes" rel="nofollow noreferrer">spring cloud kubernetes incubator project</a> or its <a href="https://github.com/fabric8io/spring-cloud-kubernetes" rel="nofollow noreferrer">fabric8 equivalent</a>.) Another situation to be mindful of is when you're connecting to external services (i.e. services outside the kubernetes cluster) - then you might want to add load-balancing or rate limiting and ribbon/hystrix would be an option. It will depend on how nuanced your requirements for load-balancing or rate-limiting are. </p> <p>You've asked specifically about netflix but it's worth stating clearly that spring cloud includes other components and not just netflix ones. <a href="https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes" rel="nofollow noreferrer">And that there's other areas of overlap where you would need to make choices.</a></p> <p>I've focused on Kubernetes rather than docker swarm partly because that's what I know best and partly because that's what I believe to be the current direction of <a href="https://dzone.com/articles/why-did-kubernetes-win" rel="nofollow noreferrer">travel for the industry</a> - on this you should note that <a href="https://www.theregister.co.uk/2017/10/17/docker_ee_kubernetes_support/" rel="nofollow noreferrer">kubernetes is available within docker EE</a>. I guess you've read many comparison articles but <a href="https://hackernoon.com/a-kubernetes-guide-for-docker-swarm-users-c14c8aa266cc" rel="nofollow noreferrer">https://hackernoon.com/a-kubernetes-guide-for-docker-swarm-users-c14c8aa266cc</a> might be particularly interesting to you. </p>
<p>I'm running a service in GKE and have an Ingress for setting up TLS. I have tried with my self-signed certificates and I could access my site through the https protocol. It looks good. Please note that I already have a static IP for the Ingress and a domain name for it.</p> <p>But now I'm going to create a real certificate, so I'm trying to create a CSR and send it to a CA, but I'm confused after reading these posts:</p> <p><a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#create-a-certificate-signing-request" rel="nofollow noreferrer">Manage TLS Certificates in a Cluster</a></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">Certificates</a></p> <p>I have some questions:</p> <ul> <li>What are the Pod's DNS, the Pod's IP and the Service's IP?</li> <li>Do I have to create DNS for the pod and service?</li> <li>Can I generate the *.csr file from my local PC?</li> <li>Can I create a server certificate authentication if I follow the steps in this link <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#tls</a></li> </ul> <p>All I want is to make *.crt and *.key files for setting up https on my service. (I have read some blog posts telling about Let's Encrypt but I don't want to use it).</p> <p>Thank you for reading.</p>
<p>Let's go over each of your questions first:</p> <blockquote> <p>What are the Pod's DNS, the Pod's IP and the Service's IP?</p> </blockquote> <p>Within the cluster, each pod has its own internal IP address and DNS record. The same goes for the services. You can read up on DNS within Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">here</a> and you can read more about IP addresses <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">here</a>.</p> <blockquote> <p>Do I have to create DNS for the pod and service?</p> </blockquote> <p>For use within the cluster, that's automatically taken care of for you. If you want to expose a pod/service and have it externally accessible via a DNS record, you'll have to create it somewhere, just like you would for any other server/service/whatever.</p> <blockquote> <p>Can I generate the *.csr file from my local PC?</p> <p>Can I create a server certificate authentication if I follow the steps...</p> </blockquote> <p>When it comes to GKE and Ingress, handling certificates can be done in two different ways. You can just add a certificate to your project and tell the Ingress controller to use it. <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl" rel="nofollow noreferrer">Here in this page</a> you can find a fantastic description on how to do this, and <a href="https://console.cloud.google.com/net-services/loadbalancing/advanced/sslCertificates/list" rel="nofollow noreferrer">here</a> is the page to create a certificate in the Console. This page also shows you how to do it by using secrets, though I personally prefer the added visibility of using a certificate that's part of my project.</p>
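<p>If you go the Secrets route, the flow once your CA returns the signed certificate is roughly the following; the secret name, file names and host are placeholders:</p> <pre><code># create a TLS secret from the signed certificate and your private key
kubectl create secret tls my-tls-secret --cert=server.crt --key=server.key
</code></pre> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - hosts:
    - www.example.com          # your domain name
    secretName: my-tls-secret
  backend:
    serviceName: my-service    # the Service in front of your pods
    servicePort: 80
</code></pre>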
<p>I have a gRPC service which listens on a port using a tcp listener. This service is Dockerized and eventually I want to run it in a Kubernetes cluster.</p> <p>I was wondering what is the best way to implement <strong>liveness</strong> and <strong>readiness</strong> probes for checking the health of my service?</p> <ol> <li>Should I run a separate http server in another goroutine and respond to <code>/health</code> and <code>/ready</code> paths?</li> <li>Or, should I also have gRPC calls for liveness and readiness of my service and use a gRPC client to query these endpoints?!</li> </ol>
<p>Previously I've run a separate http server inside the app, just for healthchecks (this was because AWS application load balancers only have http checking; I don't know about kube).</p> <p>If you run the http server as a separate goroutine and the grpc server on the main goroutine, then you should avoid the situation where the grpc server goes down and http is still 200 - OK (assuming you don't yet have a means for http to healthcheck your grpc).</p> <p>You could also use a heartbeat pattern of goroutines, controlled by the http server, that accept heartbeats from the grpc server to make sure that it's all OK.</p> <p>If you run 2 servers, they will need to be running on different ports; this can be an issue for some schedulers (like ECS) that expect 1 port for a service. There are examples and packages that will allow you to multiplex multiple protocols onto the same port. I guess kube supports multi-port services so this might not be a problem.</p> <p>Link to example of multiplexing:</p> <p><a href="https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go" rel="nofollow noreferrer">https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go</a></p>
<p>Currently, per Microsoft documentation, you can set a static IP address in the resource group of the Kubernetes service. The problem with this is that if you delete the resource group / cluster then the static IP address is also gone.</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p> <p>Is there a way to link a reserved IP address in Azure to AKS so that the IP address is guaranteed?</p> <p><a href="https://i.stack.imgur.com/J3hxs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3hxs.png" alt="enter image description here"></a></p>
<p>In the ARM deployment model, a Public IP Address (PIP) is an entity/resource all of its own. PIPs are available as either dynamic or static. Dynamic is cheaper; static is more expensive, as there is a finite number of them in IPv4. </p> <p>Yes, you can assign a PIP to AKS - see reference <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip#use-a-static-ip-address-outside-of-the-node-resource-group" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip#use-a-static-ip-address-outside-of-the-node-resource-group</a></p> <p>To ensure you don't lose your PIP, keep it in a separate Resource Group from the Resource Group that contains the volatile resources that may be deleted often.</p>
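<p>In practice that looks something like the following; the resource group and names are examples, and the address in the Service manifest is whatever the <code>az</code> command returns:</p> <pre><code># create the static PIP in a resource group you control (not the auto-created MC_* group)
az network public-ip create \
  --resource-group myIPResourceGroup \
  --name myAKSPublicIP \
  --allocation-method static
</code></pre> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # lets AKS find the IP outside of the node resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myIPResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 40.121.183.52   # the address returned by the az command above
  ports:
  - port: 80
  selector:
    app: web
</code></pre>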
<p>I am fairly new to this, and I don't know if I am heading in the right direction or not. I have a custom nginx.conf that works fine, and I am now trying to build a docker image with it so that I can run it as a container in Kubernetes. </p> <p>Here is my nginx.conf</p> <pre><code>user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } stream { #define the various upstreams upstream app_1 { server 192.168.122.206:5678; } upstream app_2 { server 192.168.122.206:9000; } #map source port to upstream map $remote_port $backend_svr { 1234 "app_1"; 1235 "app_2"; } #all udp traffic received on 8000, depending on source it will be redirected server { listen 8000 udp; proxy_pass $backend_svr; } } </code></pre> <p>On my virtual machine I have /home/asvilla/docker-files/nginx-udp/Dockerfile which contains </p> <pre><code>FROM nginx RUN chmod +w /etc/nginx/nginx.conf COPY nginx.conf /etc/nginx/nginx.conf RUN cat /etc/nginx/nginx.conf </code></pre> <p>I build it using </p> <pre><code>docker build -t "custom_nginx:dockerfile" . </code></pre> <p>The nginx container should redirect udp traffic incoming on port 8000 to either port 5678 or port 9000 depending on the source of the udp packet. </p> <p>I run it with docker run 'image-id' but it doesn't function as expected. Running docker ps shows "PORTS 80/tcp" and "COMMAND nginx -g daemon off ...".</p> <p>Any pointers on what these mean? nginx by default binds tcp to port 80, but I have changed the nginx.conf, and the cat command I run shows that the file is updated. </p> <p>I am assuming that I have to expose the ports in the nginx.conf somehow. Any help much appreciated.</p>
<p>You must publish the port at runtime, and because your server block listens on UDP you need the <code>/udp</code> suffix, like this: <code>docker run -p 8000:8000/udp image-id</code>.</p>
<p>LimitRange allows us to configure these properties per resource (memory, CPU):</p> <ul> <li><strong>Limit</strong>: default maximum amount of the resource that will be provisioned.</li> <li><strong>Request</strong>: default initial amount of the resource that will be provisioned.</li> </ul> <p>However, I just realized there are two other options, min and max. Since min/max seem to overlap with request/limit, what is the difference between all these properties? </p>
<p>I found the answer digging in <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/#motivation-for-minimum-and-maximum-memory-constraints" rel="noreferrer">the docs</a>. Limit and Request params are overridable by the pod configurations. Min and Max enforce the values configured in the LimitRange:</p> <blockquote> <p>Motivation for minimum and maximum memory constraints</p> <p>As a cluster administrator, you might want to impose restrictions on the amount of memory that Pods can use. For example:</p> <p>Each Node in a cluster has 2 GB of memory. You do not want to accept any Pod that requests more than 2 GB of memory, because no Node in the cluster can support the request.</p> <p>A cluster is shared by your production and development departments. You want to allow production workloads to consume up to 8 GB of memory, but you want development workloads to be limited to 512 MB. You create separate namespaces for production and development, and you apply memory constraints to each namespace.</p> </blockquote>
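<p>To make that concrete, a single LimitRange can carry all four knobs at once; a sketch for memory in one namespace (the numbers are arbitrary examples):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: dev               # hypothetical namespace
spec:
  limits:
  - type: Container
    max:
      memory: 1Gi              # no container may declare a limit above this
    min:
      memory: 100Mi            # no container may request less than this
    default:
      memory: 512Mi            # limit applied when the pod specifies none
    defaultRequest:
      memory: 256Mi            # request applied when the pod specifies none
</code></pre>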
<p>The members should be able to communicate, and updates should be visible to each other; I mainly mean syncing.</p> <pre><code>DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);

// strategyConfig.addProperty("service-dns", "my-serice-name.my-namespace.svc.cluster.local");
// strategyConfig.addProperty("service-dns-timeout", "300");

strategyConfig.addProperty("service-name", "my-service-name");
strategyConfig.addProperty("service-label-name", "my-service-label");
strategyConfig.addProperty("service-label-value", true);
strategyConfig.addProperty("namespace", "my-namespace");
</code></pre> <p>I have followed <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes</a>. I have used the first approach and was able to see an instance per pod (but not in one members list), but they were not communicating (if I do CRUD in one Hazelcast instance, it's not reflected in the others). I want to use the DNS strategy but was not able to even create the instance.</p>
<p>Please check the following:</p> <h2>1. Discovery Strategy</h2> <p>For Kubernetes you need to use the <code>HazelcastKubernetesDiscoveryStrategy</code> class. It can be defined in the XML configuration or in the code (as in your case).</p> <h2>2. Labels</h2> <p>Check that the service for your Hazelcast cluster has the labels you specified. The same goes for the service name and namespace.</p> <h2>3. Configuration</h2> <p>There are two ways to configure the discovery: <strong>DNS Lookup</strong> and <strong>REST API</strong>. Each has special requirements. You mentioned DNS Lookup, but the configuration you've sent actually uses the REST API.</p> <h3>DNS Lookup</h3> <p>Your Hazelcast cluster service must be a <strong>headless ClusterIP</strong>.</p> <pre><code> spec: type: ClusterIP clusterIP: None </code></pre> <h3>REST API</h3> <p>You need to grant your app access to the Kubernetes API. Please check: <a href="https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/rbac.yaml" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/rbac.yaml</a></p> <h2>Other helpful resources</h2> <ul> <li><a href="https://github.com/hazelcast/hazelcast-code-samples/blob/master/hazelcast-integration/kubernetes/" rel="nofollow noreferrer">Hazelcast Kubernetes Code Sample</a></li> <li><a href="https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/openshift/client-apps/ocp-demo-frontend" rel="nofollow noreferrer">Hazelcast OpenShift Client app</a> (should also work in Kubernetes)</li> </ul>
<p>I have a k8s service of type ClusterIP. I need to change the below configuration via the CLI:</p> <ol> <li>the http port to https port</li> <li>the port number</li> <li>the type to Load Balancer</li> </ol> <p>Is there a way to do it?</p>
<p>You can't remove the existing port, but you <em>can</em> add the HTTPS port and also change the type using <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">kubectl patch</a></p> <p>Example:</p> <pre><code>kubectl patch svc &lt;my_service&gt; -p '{"spec": {"ports": [{"port": 443,"targetPort": 443,"name": "https"},{"port": 80,"targetPort": 80,"name": "http"}],"type": "LoadBalancer"}}' </code></pre> <p>If you don't want to create JSON on the command line, create a yaml file (e.g. <code>patch.yaml</code>) like so (note that the patch has to start from <code>spec</code>, just like the JSON version):</p> <pre><code>spec:
  ports:
  - port: 443
    targetPort: 443
    name: "https"
  - port: 80
    targetPort: 80
    name: "http"
  type: LoadBalancer
</code></pre> <p>And then do:</p> <pre><code>kubectl patch svc &lt;my_service&gt; --patch "$(cat patch.yaml)" </code></pre>
<h1>The situation</h1> <p>I have a kubernetes pod stuck in "Terminating" state that resists pod deletions</p> <pre><code>NAME READY STATUS RESTARTS AGE ... funny-turtle-myservice-xxx-yyy 1/1 Terminating 1 11d ... </code></pre> <p>Where <code>funny-turtle</code> is the name of the helm release that have since been deleted.</p> <h1>What I have tried</h1> <h3>try to delete the pod.</h3> <p>Output: <code>pod "funny-turtle-myservice-xxx-yyy" deleted </code> Outcome: it still shows up in the same state. - also tried with <code>--force --grace-period=0</code>, same outcome with extra warning</p> <blockquote> <p>warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.</p> </blockquote> <h3>try to read the logs (kubectl logs ...).</h3> <p>Outcome: <code>Error from server (NotFound): nodes "ip-xxx.yyy.compute.internal" not found</code></p> <h3>try to delete the kubernetes deployment.</h3> <p>but it does not exist.</p> <p>So I assume this pod somehow got "disconnected" from the aws API, reasoning from the error message that <code>kubectl logs</code> printed.</p> <p>I'll take any suggestions or guidance to explain what happened here and how I can get rid of it.</p> <h3>EDIT 1</h3> <p>Tried to see if the "ghost" node was still there (<code>kubectl delete node ip-xxx.yyy.compute.internal</code>) but it does not exist.</p>
<p>Try removing the finalizers from the pod:</p> <pre><code>kubectl patch pod funny-turtle-myservice-xxx-yyy -p '{"metadata":{"finalizers":null}}' </code></pre>
<p>I am fairly new to this, and I don't know if I am heading in the right direction or not. I have a custom nginx.conf that works fine, and I am now trying to build a docker image with it so that I can run it as a container in Kubernetes. </p> <p>Here is my nginx.conf</p> <pre><code>user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } stream { #define the various upstreams upstream app_1 { server 192.168.122.206:5678; } upstream app_2 { server 192.168.122.206:9000; } #map source port to upstream map $remote_port $backend_svr { 1234 "app_1"; 1235 "app_2"; } #all udp traffic received on 8000, depending on source it will be redirected server { listen 8000 udp; proxy_pass $backend_svr; } } </code></pre> <p>On my virtual machine I have /home/asvilla/docker-files/nginx-udp/Dockerfile which contains </p> <pre><code>FROM nginx RUN chmod +w /etc/nginx/nginx.conf COPY nginx.conf /etc/nginx/nginx.conf RUN cat /etc/nginx/nginx.conf </code></pre> <p>I build it using </p> <pre><code>docker build -t "custom_nginx:dockerfile" . </code></pre> <p>The nginx container should redirect udp traffic incoming on port 8000 to either port 5678 or port 9000 depending on the source of the udp packet. </p> <p>I run it with docker run 'image-id' but it doesn't function as expected. Running docker ps shows "PORTS 80/tcp" and "COMMAND nginx -g daemon off ...".</p> <p>Any pointers on what these mean? nginx by default binds tcp to port 80, but I have changed the nginx.conf, and the cat command I run shows that the file is updated. </p> <p>I am assuming that I have to expose the ports in the nginx.conf somehow. Any help much appreciated.</p>
<p>If your end goal is to run this in Kubernetes, your easiest path will be to put this config file into a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> and just configure your Deployment to run the standard nginx image. (In plain Docker, you can use <code>docker run -v</code> to inject the config file into the container at runtime to similar effect.)</p> <p>It doesn't really matter what port nginx listens on inside the container. If the stock nginx container expects to listen on the standard HTTP port 80 (and it looks like its <code>Dockerfile</code> has an <code>EXPOSE 80</code> directive) then you can embrace that and <code>listen 80</code> in your nginx config (over TCP, not UDP). Then in your Kubernetes deployment you can specify that as a container port, and if you want to map it to something else, you can do that in the Service that wraps this. (In plain Docker, if you want host port 8000 to avoid conflicting with other things, <code>docker run -p8000:80</code>.)</p> <p>In terms of best practices I'd discourage directly writing IP addresses into config files. If it's a persistent server outside your cluster, you can set up a DNS server in your network to resolve its hostname, or get a cloud service like Amazon's Route 53 to do it for you. If it's in Kubernetes, use the service's DNS name, <code>backend.default.svc.cluster.local</code>. Even if you really have only an IP address, creating an <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName service</a> will help you if the service ever moves.</p> <p>Assuming you have the config file in a ConfigMap, your Deployment would look very much like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">the sample Deployment in the Kubernetes documentation</a> (it even runs an <code>nginx:1.7.9</code> container publishing port 80).</p>
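<p>A minimal sketch of that approach, assuming the nginx.conf from the question is sitting in your working directory (object names are illustrative, and the UDP listener from your original config is kept):</p> <pre><code>kubectl create configmap nginx-udp-conf --from-file=nginx.conf
</code></pre> <pre><code># relevant fragment of the Deployment's pod template
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8000
      protocol: UDP
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf        # mount just this one file over the default config
  volumes:
  - name: conf
    configMap:
      name: nginx-udp-conf
</code></pre>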
<p>What is a Load Balancer? </p> <blockquote> <p>Load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives</p> </blockquote> <h3>The NodePort</h3> <p>NodePort is not a load balancer. (I know that <code>kube-proxy</code> load balances the traffic among the pods once the traffic is inside the cluster.) I mean, the end user hits a <code>http://NODEIP:30111</code> (for example) URL to access the application. Even though the traffic is load balanced among the pods, users still hit a single node, i.e. the "Node" which is a K8s minion rather than a real Load Balancer, right?</p> <h3>The Ingress Service</h3> <p>Same here: imagine the ingress-controller is deployed and the ingress-service too. The sub-domain that we specify in the ingress-service should point to "a" node in the K8s cluster, and then the ingress-controller load balances the traffic among the pods. Here also end users are hitting a single node which is a K8s minion rather than a real Load Balancer, right?</p> <h3>Load Balancer From Cloud Provider (for example AWS ELB)</h3> <p>I'm having a doubt about how the cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate Nodes on which the PODs are deployed, or does it just forward the traffic to a master node or minion?</p> <p>If the above point is true, where is the traffic truly load balanced among the pods/appropriate nodes?</p> <p>Can I implement true load balancing in K8s? I asked a related <a href="https://stackoverflow.com/questions/51531312/how-to-access-k8ss-flannel-network-from-outside">question here</a></p>
<blockquote> <p>NodePort is not a load balancer. </p> </blockquote> <p>You're right about this in one way, yes it's not designed to be a load balancer.</p> <blockquote> <p>users still hit a single node, i.e. the "Node" which is a K8s minion rather than a real Load Balancer, right?</p> </blockquote> <p>With NodePort, you <em>have</em> to hit a single node at any one time, but you have to remember that <code>kube-proxy</code> is running on ALL nodes. So you can hit the NodePort on any node in the cluster (even a node the workload isn't running on) and you'll still hit the endpoint you want to hit. This becomes important later.</p> <blockquote> <p>The sub-domain that we specify in the ingress-service should point to "a" node in the K8s cluster</p> </blockquote> <p>No, this isn't how it works.</p> <p>Your ingress controller still needs to be exposed externally. If you're using a cloud provider, a commonly used pattern is to expose your ingress controller with a Service of <code>Type=LoadBalancer</code>. The LoadBalancing still happens with Services, but Ingress allows you to use that Service in a more user friendly way. Don't confuse ingress with load balancing.</p> <blockquote> <p>I'm having a doubt about how the cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate Nodes on which the PODs are deployed, or does it just forward the traffic to a master node or minion?</p> </blockquote> <p>If you look at a provisioned service in Kubernetes, you'll see why it makes sense.</p> <p>Here's a Service of Type LoadBalancer:</p> <pre><code>kubectl get svc nginx-ingress-controller -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller LoadBalancer &lt;redacted&gt; internal-a4c8... 80:32394/TCP,443:31281/TCP 147d </code></pre> <p>You can see I've deployed an ingress controller with type LoadBalancer. 
This has created an AWS ELB, but also notice, like <code>NodePort</code> it's mapped port 80 on the ingress controller pod to port <code>32394</code>.</p> <p>So, let's look at the actual LoadBalancer in AWS:</p> <pre><code>aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 { "LoadBalancerDescriptions": [ { "LoadBalancerName": "a4c80f4eb1d7c11e886d80652b702125", "DNSName": "internal-a4c8&lt;redacted&gt;", "CanonicalHostedZoneNameID": "&lt;redacted&gt;", "ListenerDescriptions": [ { "Listener": { "Protocol": "TCP", "LoadBalancerPort": 443, "InstanceProtocol": "TCP", "InstancePort": 31281 }, "PolicyNames": [] }, { "Listener": { "Protocol": "HTTP", "LoadBalancerPort": 80, "InstanceProtocol": "HTTP", "InstancePort": 32394 }, "PolicyNames": [] } ], "Policies": { "AppCookieStickinessPolicies": [], "LBCookieStickinessPolicies": [], "OtherPolicies": [] }, "BackendServerDescriptions": [], "AvailabilityZones": [ "us-west-2a", "us-west-2b", "us-west-2c" ], "Subnets": [ "&lt;redacted&gt;", "&lt;redacted&gt;", "&lt;redacted&gt;" ], "VPCId": "&lt;redacted&gt;", "Instances": [ { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" }, { "InstanceId": "&lt;redacted&gt;" } ], "HealthCheck": { "Target": "TCP:32394", "Interval": 10, "Timeout": 5, "UnhealthyThreshold": 6, "HealthyThreshold": 2 }, "SourceSecurityGroup": { "OwnerAlias": "337287630927", "GroupName": "k8s-elb-a4c80f4eb1d7c11e886d80652b702125" }, "SecurityGroups": [ "sg-8e0749f1" ], "CreatedTime": "2018-03-01T18:13:53.990Z", "Scheme": "internal" } ] } </code></pre> <p>The most important things to note here are:</p> <p>The LoadBalancer is mapping port 80 in ELB to the NodePort:</p> <pre><code>{ "Listener": { "Protocol": "HTTP", "LoadBalancerPort": 80, "InstanceProtocol": "HTTP", "InstancePort": 32394 }, "PolicyNames": [] } </code></pre> <p>You'll also see that there are multiple target <code>Instances</code>, not one:</p> <pre><code>aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 | jq '.LoadBalancerDescriptions[].Instances | length' 8 </code></pre> <p>And finally, if you look at the number of nodes in my cluster, you'll see it's actually <em>all</em> the nodes that have been added to the LoadBalancer:</p> <pre><code>kubectl get nodes -l "node-role.kubernetes.io/node=" --no-headers=true | wc -l 8 </code></pre> <p>So, in summary - Kubernetes <em>does</em> implement true LoadBalancing with services (whether that be NodePort or LoadBalancer types) and the ingress just makes that service more accessible to the outside world</p>
<p>When I provision a Kubernetes cluster using kubeadm, I get my nodes tagged as &quot;none&quot;. It's a known bug in Kubernetes and currently a PR is in progress.</p> <p>However, I would like to know if there is an option to add a Role name manually for the node.</p> <pre><code>root@ip-172-31-14-133:~# kubectl get nodes NAME STATUS ROLES AGE VERSION ip-172-31-14-133 Ready master 19m v1.9.3 ip-172-31-6-147 Ready &lt;none&gt; 16m v1.9.3 </code></pre>
<p>This worked for me:</p> <p><code>kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker</code></p> <pre><code>NAME STATUS ROLES AGE VERSION cb2.4xyz.couchbase.com Ready custom,worker 35m v1.11.1 cb3.5xyz.couchbase.com Ready worker 29m v1.11.1 </code></pre> <p>I could not delete/update the old label, but I can live with it.</p>
<p>Where are these stored? Is there a recommended way to export them for analysis purposes?</p> <p>I'm referring to the data from <code>kubectl get events</code>.</p>
<blockquote> <p>Where are these stored?</p> </blockquote> <p>If you run <code>kubectl get events --v=9</code> you will observe that there is an actual API call behind it:</p> <pre><code>GET /api/v1/namespaces/default/events?limit=500 </code></pre> <p>You can use the API to extract details as described in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#-strong-read-operations-strong--289" rel="nofollow noreferrer">the official documentation</a>.</p> <p>As for storage, they are kept in the etcd cluster. As an excerpt from the <a href="https://github.com/kubernetes/kubernetes/issues/47532" rel="nofollow noreferrer">discussion about events</a>, here is the part relevant to your question:</p> <blockquote> <p>Kubernetes only use etcd's lease API for creating event objects. Event objects' lease lasts for 1 hour and doesn't need good precision.</p> </blockquote> <p>You now have two paths around this:</p> <ul> <li>pull events using the API, as sketched below (I'd probably do this for analysis purposes since this is what actually gets relayed by the <code>kubectl</code> command)</li> <li>query the etcd cluster (if you want more granular control over data processing)</li> </ul>
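<p>For the first path you don't even need to craft the HTTP request yourself; from a machine with a working kubeconfig something like the following works, where the <code>jq</code> filter is just one example of shaping the data for analysis:</p> <pre><code># raw API call through kubectl's authenticated transport
kubectl get --raw "/api/v1/namespaces/default/events?limit=500" \
  | jq '.items[] | {reason, message, count, lastTimestamp}'

# or simply export everything as JSON for offline analysis
kubectl get events --all-namespaces -o json &gt; events.json
</code></pre>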
<p>I am running my docker containers with the help of a Kubernetes cluster on AWS EKS. Two of my docker containers use a shared volume, and both of these containers run inside two different pods. So I want a common volume which can be used by both pods on AWS.</p> <p>I created an EFS volume and mounted it. I am following a link to create a <code>PersistentVolumeClaim</code>, but I am getting a timeout error when the <code>efs-provider</code> pod tries to attach the mounted EFS volume space. The <code>VolumeId</code> and region are correct. </p> <p>Detailed error message from the Pod describe: </p> <blockquote> <p>timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-55dcf9f58d-r547q". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume default-token-lccdw] <br> MountVolume.SetUp failed for volume "pv-volume" : mount failed: exit status 32</p> </blockquote>
<p>AWS EFS uses NFS type volume plugin, and As per <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Kubernetes Storage Classes</a> NFS volume plugin does not come with internal Provisioner like EBS.</p> <p>So the steps will be:</p> <ol> <li>Create an external Provisioner for NFS volume plugin.</li> <li>Create a storage class.</li> <li>Create one volume claim.</li> <li><p>Use volume claim in Deployment.</p> <ul> <li><p>In the configmap section change the file.system.id: and aws.region: to match the details of the EFS you created.</p></li> <li><p>In the deployment section change the server: to the DNS endpoint of the EFS you created.</p></li> </ul></li> </ol> <hr> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: efs-provisioner data: file.system.id: yourEFSsystemid aws.region: regionyourEFSisin provisioner.name: example.com/aws-efs --- kind: Deployment apiVersion: extensions/v1beta1 metadata: name: efs-provisioner spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: efs-provisioner spec: containers: - name: efs-provisioner image: quay.io/external_storage/efs-provisioner:latest env: - name: FILE_SYSTEM_ID valueFrom: configMapKeyRef: name: efs-provisioner key: file.system.id - name: AWS_REGION valueFrom: configMapKeyRef: name: efs-provisioner key: aws.region - name: PROVISIONER_NAME valueFrom: configMapKeyRef: name: efs-provisioner key: provisioner.name volumeMounts: - name: pv-volume mountPath: /persistentvolumes volumes: - name: pv-volume nfs: server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com path: / --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: aws-efs provisioner: example.com/aws-efs --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: efs annotations: volume.beta.kubernetes.io/storage-class: "aws-efs" spec: accessModes: - ReadWriteMany resources: requests: storage: 1Mi </code></pre> <p>For more explanation and details go to <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs</a></p>
<p>I created alert rules for pod memory utilisation in Prometheus. Alerts show up perfectly in my Slack channel, but they do not contain the name of the pod, so it is difficult to understand which pod is having the issue. </p> <p>It just shows <code>[FIRING:35] (POD_MEMORY_HIGH_UTILIZATION default/k8s warning)</code>. But when I look into the "Alert" section in the Prometheus UI, I can see the fired rules with their pod names. Can anyone help?</p> <p>My alert notification template is as follows: </p> <pre><code>alertname: TargetDown alertname: POD_CPU_HIGH_UTILIZATION alertname: POD_MEMORY_HIGH_UTILIZATION receivers: - name: 'slack-notifications' slack_configs: - channel: '#devops' title: '{{ .CommonAnnotations.summary }}' text: '{{ .CommonAnnotations.description }}' send_resolved: true </code></pre> <p>I have added the options <code>title: '{{ .CommonAnnotations.summary }}' text: '{{ .CommonAnnotations.description }}'</code> in my alert notification template and now it shows the description. My description is <code>description: pod {{$labels.pod}} is using high memory</code>, but it only shows <code>is using high memory</code>, without the pod name.</p>
<p>As mentioned in the <a href="https://engineering.infinityworks.com/slack-prometheus-alertmanager/" rel="nofollow noreferrer">article</a>, you should check the alert rules and update them if necessary. See an example:</p> <pre><code>ALERT ElasticacheCPUUtilisation IF aws_elasticache_cpuutilization_average &gt; 80 FOR 10m LABELS { severity = "warning" } ANNOTATIONS { summary = "ElastiCache CPU Utilisation Alert", description = "Elasticache CPU Usage has breach the threshold set (80%) on cluster id {{ $labels.cache_cluster_id }}, now at {{ $value }}%", runbook = "https://mywiki.com/ElasticacheCPUUtilisation", } </code></pre> <p>To provide external URL for your prometheus GUI, apply CLI argument to your prometheus server and restart it:</p> <pre><code>-web.external-url=http://externally-available-url:9090/ </code></pre> <p>After that, you can put the values into your alertmanager configuration. See an example:</p> <pre><code>receivers: - name: 'iw-team-slack' slack_configs: - channel: alert-events send_resolved: true api_url: https://hooks.slack.com/services/&lt;your_token&gt; title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification' text: &gt;- {{ range .Alerts }} *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}` *Description:* {{ .Annotations.description }} *Graph:* &lt;{{ .GeneratorURL }}|:chart_with_upwards_trend:&gt; *Runbook:* &lt;{{ .Annotations.runbook }}|:spiral_note_pad:&gt; *Details:* {{ range .Labels.SortedPairs }} β€’ *{{ .Name }}:* `{{ .Value }}` {{ end }} {{ end }} </code></pre>
<p>I have a <strong>deployment.yaml</strong> containing a deployment of <em>3 containers</em> + <em>LB service</em> and the <strong>cloudbuild.yaml</strong> containing <em>steps to build container images every time there's a new commit to a certain branch on the Bitbucket git repo</em>.</p> <p>All is working fine except the fact that my deployment isn't updated whenever there's a new image version (<em>I used the :latest tag in the deployment</em>), and to change this I understood that my deployment images should use something unique, other than :latest, such as a git commit SHA.</p> <p>Problem: <strong>I'm not sure how to update the image declarations during the GCB CI process to contain the new commit SHA.</strong></p> <p>YAML's: <a href="https://paste.ee/p/CsETr" rel="nofollow noreferrer">https://paste.ee/p/CsETr</a></p>
<p>Found a solution by using image tag or URI variables in deployment fine and substituting them with sed during build-time.</p> <p><strong>deplyment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: namespace: dev name: app labels: app: app spec: replicas: 3 selector: matchLabels: app: app template: metadata: labels: app: app spec: initContainers: - name: init image: INIT_IMAGE_NAME imagePullPolicy: Always command: ['sh', '-c', 'cp -r /app /srv; chown -R 82:82 /srv/app'] volumeMounts: - name: code mountPath: /srv containers: - name: nginx image: NGINX_IMAGE_NAME imagePullPolicy: Always ports: - containerPort: 80 volumeMounts: - name: code mountPath: /srv - name: php-socket mountPath: /var/run livenessProbe: httpGet: path: /health.html port: 80 httpHeaders: - name: X-Healthcheck value: Checked initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15 readinessProbe: httpGet: path: /health.html port: 80 httpHeaders: - name: X-Healthcheck value: Checked initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15 - name: php image: PHP_IMAGE_NAME imagePullPolicy: Always volumeMounts: - name: code mountPath: /srv - name: php-socket mountPath: /var/run livenessProbe: httpGet: path: /health.html port: 80 httpHeaders: - name: X-Healthcheck value: Checked initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15 readinessProbe: httpGet: path: /health.html port: 80 httpHeaders: - name: X-Healthcheck value: Checked initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15 volumes: - name: code emptyDir: {} - name: php-socket emptyDir: {} --- apiVersion: v1 kind: Service metadata: namespace: dev name: app-service spec: type: LoadBalancer ports: - port: 80 targetPort: 80 protocol: TCP selector: app: app </code></pre> <p><strong>cloudbuild.yaml</strong></p> <pre><code>steps: # Build Images - id: Building Init Image name: gcr.io/cloud-builders/docker args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA', '-f', 'init.dockerfile', '.'] - id: Building Nginx Image name: gcr.io/cloud-builders/docker args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA', '-f', 'nginx.dockerfile', '.'] waitFor: ['-'] - id: Building PHP-FPM Image name: gcr.io/cloud-builders/docker args: ['build','-t', 'eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA', '-f', 'php.dockerfile', '.'] waitFor: ['-'] # Push Images - id: Pushing Init Image name: gcr.io/cloud-builders/docker args: ['push','eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA'] - id: Pushing Nginx Image name: gcr.io/cloud-builders/docker args: ['push','eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA'] - id: Pushing PHP-FPM Image name: gcr.io/cloud-builders/docker args: ['push','eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA'] # Update Image Tags - id: 'Setting Init Image Tag' name: ubuntu args: ['bash','-c','sed -i "s,INIT_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA," deployment.yaml'] - id: 'Setting Nginx Image Tag' name: ubuntu args: ['bash','-c','sed -i "s,NGINX_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA," deployment.yaml'] - id: 'Setting PHP Image Tag' name: ubuntu args: ['bash','-c','sed -i "s,PHP_IMAGE_NAME,eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA," deployment.yaml'] # Update Deployment - id: Updating Deployment name: gcr.io/cloud-builders/kubectl args: ['apply','-f','deployment.yaml'] env: - CLOUDSDK_COMPUTE_ZONE=europe-west2-b - CLOUDSDK_CONTAINER_CLUSTER=clusterx # Images images: - eu.gcr.io/$PROJECT_ID/init:$SHORT_SHA - eu.gcr.io/$PROJECT_ID/nginx:$SHORT_SHA - eu.gcr.io/$PROJECT_ID/php:$SHORT_SHA # Tags tags: - master - dev - init </code></pre>
<p>I am learning a bit about ingress. It consists of 2 parts:</p> <ul> <li>Ingress resource: <em>"How do you want to route requests to the backing services?"</em></li> <li>Ingress controller: <em>"Listens to Kubernetes API for Ingress resources."</em></li> </ul> <p>I've deployed an ingress controller in a namespace called ingress. I've also deployed a default backend. Now I want to deploy an application (svc, deployment of pods etc).</p> <p>Do I also need to deploy this application in the namespace <code>ingress</code>? It seems logical to me that I have to deploy a new ingress resource for every new application, but how does this work for the ingress controller?</p> <p>Do I have to create one ingress controller for every namespace, or is it recommended to use one ingress controller over multiple namespaces?</p>
<p>The ingress resource must be in the same namespace as the backend service it routes to. This can be any namespace; it does not need to be the same namespace as the ingress controller.</p>
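<p>For illustration, here is a minimal sketch of an Ingress resource living in a hypothetical <code>my-app</code> namespace next to the Service it routes to (all names and the host are placeholders, not taken from the question):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app          # same namespace as the backend Service
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service   # must exist in the my-app namespace
          servicePort: 80
</code></pre> <p>A single ingress controller (running in its own namespace) typically watches Ingress resources across all namespaces, so you normally don't need one controller per namespace.</p>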
<p>I have a pod that needs to connect to a database located on GCE VMs in the same subnet as the GKE nodes. I currently have a k8s Service and a k8s Endpoints object that the pod successfully connects to, but 10.128.0.2 cannot be routed. I'm sure this pertains to a GCP firewall rule/route but I haven't had much luck.</p> <p>subnet -> 10.128.0.0/9</p> <p>cbr0 -> 10.8.15.0/20</p> <p>eth0 -> 10.128.0.1</p> <p>k8s services -> 10.11.224/14</p> <p>Master Version: 1.9.7-gke.3 </p> <pre><code>kind: Endpoints apiVersion: v1 metadata: name: externalDB namespace: default subsets: - addresses: - ip: 10.128.0.2 ports: - port: 7199 name: interface </code></pre>
<p>At this point in time, services and endpoints are not routable; however pods are as explained in <a href="https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod#communicating_within_the_same_cluster" rel="noreferrer">this article</a>. As @cohenjo mentioned, you should directly connect from the pod.</p> <p>Edit: I believe that this issue is due to a firewall change on Clusters that are running 1.9.x as described in <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#autofirewall" rel="noreferrer">this article</a>. You can follow the steps provided in the article to allow communication from the GKE cluster to all VM instances on the network or attach the network tag assigned on the node to the VM instance you would like the pod to communicate with.</p>
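<p>Not from the original answer, but as a rough sketch of the firewall fix described in the troubleshooting article (the rule name is made up, and the source range must be replaced with your cluster's actual pod CIDR, which you can read with <code>gcloud container clusters describe</code>):</p> <pre><code># look up the pod range of the cluster (assumption: zonal cluster named "my-cluster")
gcloud container clusters describe my-cluster --zone us-central1-a --format='value(clusterIpv4Cidr)'

# allow traffic from the pod range to the database VM on its port
gcloud compute firewall-rules create allow-gke-pods-to-db \
  --network=default \
  --source-ranges=10.8.0.0/14 \
  --allow=tcp:7199
</code></pre> <p>Alternatively, add <code>--target-tags=&lt;db-vm-tag&gt;</code> to scope the rule to just the database VM instead of the whole network.</p>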
<p>We use local persistent storage as storage backend for SOLR pods. The pods are redundantly scheduled to multiple kubernetes nodes. If one of the nodes go down there are always enough instances on other nodes.</p> <p>How can we drain these nodes (without "migrating" the SOLR pods to other nodes) in case we want to do a maintenance on a node? The most important thing for us would be that kube-proxy would no longer send new requests to the pods on the node in question so that after some time we could do the maintenance without interrupting service for running requests.</p> <p>We tried cordon but cordon will only make sure no new pods are scheduled to a node. Drain does not seem to work with pods with local persistent volumes.</p>
<p>You can check out pod anti-affinity. </p> <p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity</a></p> <p>These constructs allow you to repel or attract pods when certain conditions are met.</p> <p>In your case the pod anti-affinity <code>requiredDuringSchedulingIgnoredDuringExecution</code> may be your best bet. I haven't personally used it yet; I hope it can point you in the right direction. A minimal sketch of what that could look like is shown below.</p>
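<p>Purely as an illustration (assuming the SOLR pods carry an <code>app: solr</code> label; adjust to your actual labels), the relevant snippet in the pod template would look something like:</p> <pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - solr
        topologyKey: kubernetes.io/hostname
</code></pre> <p>This tells the scheduler never to place two SOLR pods on the same node, so draining or losing one node leaves the replicas on the other nodes untouched.</p>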
<p>I have a Go server that is currently running with Kubernetes on AWS. The website sits under a route-53 and an ELB that manages the SSL termination. Now, I want to support HTTP/2 in my web-server in order to push resources to the clients, and I saw that HTTP/2 requires that the web-server will use HTTPS. I have a few questions according to that.</p> <ul> <li><p>HTTP/2 requires HTTPS - In my case the HTTPS logic is in the ELB and it manages for me the SSL termination. My application gets the decrypted data as a simple HTTP request. Do I need to remove the ELB in order to enable HTTP/2 in my web-server? Is there any way to leave the ELB there and enable HTTP/2 in my web-server? </p></li> <li><p>In my local development I use openssl to generate certificate. If I deploy the web-server I need to get the CA certificate from AWS and store it somewhere in the Kubernetes certificate-manager and inject to my web-server in the initialization. What is the recommended way to do this? </p></li> </ul> <p>I feel like I miss something, so I'll appreciate any help. Thanks</p>
<p>The new ELB supports HTTP/2 (<a href="https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/</a>) but not the Push attribute (<a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration</a>): "You can't use the server-push feature of HTTP/2"</p> <p>If you want to use Push you can use the ELB as a layer four TCP LoadBalancer and enable this at your webserver. For HAProxy it is also possible to still offload SSL/TLS with this set up (<a href="https://stackoverflow.com/questions/38730281/http-2-behind-reverse-proxy">HTTP/2 behind reverse proxy</a>) but I'm not sure if something similar is possible under ELB (probably not). This is because while HTTP/2 requires HTTPS from all the major browsers, it is not a requirement of the protocol itself, so load balancer -> server can be over HTTP/2 without HTTPS (called h2c).</p> <p>However I would say that HTTP/2 Push is very complicated to get right - read this excellent post by Jake Archibald of Google on this: <a href="https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/" rel="nofollow noreferrer">https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/</a>. It's generally been found to benefit in a few cases, cause no change in most, and even cause degradation in performance in others. Ultimately it's a bit of a let down among HTTP/2 features, though personally I don't think it's been explored enough, so there may be some positives to come out of it yet.</p> <p>So if you don't want Push, is there still a point in upgrading to HTTP/2 on the front end? Yes in my opinion, as detailed in my answer here: <a href="https://stackoverflow.com/questions/41637076/http2-with-node-js-behind-nginx-proxy">HTTP2 with node.js behind nginx proxy</a>. This also shows that there is no real need to have HTTP/2 on the backend from LB to webserver, meaning you could leave it as an HTTPS offloading load balancer.</p> <p>It should be noted that there are some use cases where HTTP/2 is slower:</p> <ol> <li>Under heavy packet loss (i.e. a very bad Internet connection). Here the single TCP connection used by HTTP/2 and its TCP Head of Line Blocking mean the connection suffers more than 6 individual HTTP/1 connections. QUIC, which is an even newer protocol than HTTP/2 (so new it's not even out yet, so not really available except on Google servers), addresses this.</li> <li>For large packets due to AWS's specific implementation. Interesting post here on that: <a href="https://medium.com/@ptforsberg/on-the-aws-application-load-balancer-http-2-support-fad4bc67b21a" rel="nofollow noreferrer">https://medium.com/@ptforsberg/on-the-aws-application-load-balancer-http-2-support-fad4bc67b21a</a>. This is only really an issue for truly large downloads, most likely for APIs, and shouldn't be an issue for most websites (and if it is, then you should optimise your website because HTTP/2 won't be able to help much anyway!). It could easily be fixed by increasing the HTTP/2 window size setting, but it looks like the ELB does not allow you to set this.</li> </ol>
<p>I'm running into DNS issues on a GKE 1.10 kubernetes cluster. Occasionally pods start without any network connectivity. Restarting the pod tends to fix the issue.</p> <p>Here's the result of the same few commands inside a container without network, and one with.</p> <h2>BROKEN:</h2> <pre><code>kc exec -it -n iotest app1-b67598997-p9lqk -c userapp sh /app $ nslookup www.google.com nslookup: can't resolve '(null)': Name does not resolve /app $ cat /etc/resolv.conf nameserver 10.63.240.10 search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal options ndots:5 /app $ curl -I 10.63.240.10 curl: (7) Failed to connect to 10.63.240.10 port 80: Connection refused /app $ netstat -antp Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python tcp 0 0 ::1:50051 :::* LISTEN 1/python tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python </code></pre> <h2>WORKING:</h2> <pre><code>kc exec -it -n iotest app1-7d985bfd7b-h5dbr -c userapp sh /app $ nslookup www.google.com nslookup: can't resolve '(null)': Name does not resolve Name: www.google.com Address 1: 74.125.206.147 wk-in-f147.1e100.net Address 2: 74.125.206.105 wk-in-f105.1e100.net Address 3: 74.125.206.99 wk-in-f99.1e100.net Address 4: 74.125.206.104 wk-in-f104.1e100.net Address 5: 74.125.206.106 wk-in-f106.1e100.net Address 6: 74.125.206.103 wk-in-f103.1e100.net Address 7: 2a00:1450:400c:c04::68 wk-in-x68.1e100.net /app $ cat /etc/resolv.conf nameserver 10.63.240.10 search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal options ndots:5 /app $ curl -I 10.63.240.10 HTTP/1.1 404 Not Found date: Sun, 29 Jul 2018 15:13:47 GMT server: envoy content-length: 0 /app $ netstat -antp Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN - tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python tcp 0 0 10.60.2.6:56508 10.60.48.22:9091 ESTABLISHED - tcp 0 0 127.0.0.1:57768 127.0.0.1:50051 ESTABLISHED - tcp 0 0 10.60.2.6:43334 10.63.255.44:15011 ESTABLISHED - tcp 0 0 10.60.2.6:15001 10.60.45.26:57160 ESTABLISHED - tcp 0 0 10.60.2.6:48946 10.60.45.28:9091 ESTABLISHED - tcp 0 0 127.0.0.1:49804 127.0.0.1:50051 ESTABLISHED - tcp 0 0 ::1:50051 :::* LISTEN 1/python tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:49804 ESTABLISHED 1/python tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:57768 ESTABLISHED 1/python </code></pre> <p>These pods are identical, just one was restarted. </p> <p>Does anyone have advice about how to analyse and fix this issue?</p>
<p>Some steps to try:</p> <p>1) <code>ifconfig eth0</code> or whatever the primary interface is. Is the interface up? Are the tx and rx packet counts increasing?</p> <p>2) If the interface is up, you can run tcpdump while you are running the nslookup command that you posted. See if the DNS request packets are getting sent out. </p> <p>3) See which node the pod is scheduled on when network connectivity is broken. Maybe it is on the same node every time? If yes, are other pods on that node running into a similar problem?</p>
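<p>As a concrete starting point (assuming <code>tcpdump</code> is available in the container image or on the node), steps 2 and 3 could look like this:</p> <pre><code># step 2: inside the broken pod, capture DNS traffic while running nslookup in another shell
tcpdump -ni eth0 udp port 53

# step 3: see which node the broken pod was scheduled on
kubectl get pod app1-b67598997-p9lqk -n iotest -o wide
</code></pre>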
<p>Is it possible to run commands on host from within a pod in kubernetes.</p> <p>So for example, I have a pod running a python image which calculates the size of the os. But the command it uses runs inside the pod, not in the host. Is it possible to run the command on the host from pod.</p>
<p>Actually a command run inside a pod is run on the host. It's a container (Docker), not a virtual machine. That means when you execute something in the pod like retrieving the size of your RAM it will usually return the RAM of the whole machine. If you want to get the "size of the os" and you mean the hard drive with it, you need to mount the hard drive to count it.</p> <p>If your actual problem is that you want to do something, which a normal container isn't allowed to, you can run a pod in <em>privileged</em> mode or configure whatever you exactly need. You need to add a <em>security context</em> to your pod like this (taken from the docs):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 fsGroup: 2000 volumes: - name: sec-ctx-vol emptyDir: {} containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 volumeMounts: - name: sec-ctx-vol mountPath: /data/demo securityContext: privileged: true </code></pre> <p>Sources:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></li> <li><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged" rel="noreferrer">https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged</a></li> </ul>
<p>I want to set up a pod and there are two containers running inside the pod, which try to access a mounted file /var/run/udspath. In container serviceC, I need to change the file and group owner of /var/run/udspath, so I add a command into the yaml file. But it does not work. </p> <p>kubectl apply does not complain, but container serviceC is not created. Without this "command: ['/bin/sh', '-c', 'sudo chown 1337:1337 /var/run/udspath']", the container could be created.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: clitool labels: app: httpbin spec: ports: - name: http port: 8000 selector: app: httpbin --- apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null name: clitool spec: replicas: 1 strategy: {} template: metadata: annotations: sidecar.istio.io/status: '{"version":"1c09c07e5751560367349d807c164267eaf5aea4018b4588d884f7d265cf14a4","initContainers":["istio-init"],"containers":["serviceC"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}' creationTimestamp: null labels: app: httpbin version: v1 spec: containers: - image: name: serviceA imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /var/run/udspath name: sdsudspath - image: imagePullPolicy: IfNotPresent name: serviceB ports: - containerPort: 8000 resources: {} - args: - proxy - sidecar - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - httpbin - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot.istio-system:15007 - --discoveryRefreshDelay - 1s - --zipkinAddress - zipkin.istio-system:9411 - --connectTimeout - 10s - --statsdUdpAddress - istio-statsd-prom-bridge.istio-system:9125 - --proxyAdminPort - "15000" - --controlPlaneAuthPolicy - NONE env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: ISTIO_META_INTERCEPTION_MODE value: REDIRECT image: imagePullPolicy: IfNotPresent command: ["/bin/sh"] args: ["-c", "sudo chown 1337:1337 /var/run/udspath"] name: serviceC resources: requests: cpu: 10m securityContext: privileged: false readOnlyRootFilesystem: true runAsUser: 1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true - mountPath: /var/run/udspath name: sdsudspath initContainers: - args: - -p - "15001" - -u - "1337" - -m - REDIRECT - -i - '*' - -x - "" - -b - 8000, - -d - "" image: docker.io/quanlin/proxy_init:180712-1038 imagePullPolicy: IfNotPresent name: istio-init resources: {} securityContext: capabilities: add: - NET_ADMIN privileged: true volumes: - name: sdsudspath hostPath: path: /var/run/udspath - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio.default status: {} ---</code></pre> </div> </div> </p> <p>kubectl describe pod xxx shows that </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code> serviceC: Container ID: Image: Image ID: Port: &lt;none&gt; Command: /bin/sh Args: -c sudo chown 1337:1337 
/var/run/udspath State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 30 Jul 2018 10:30:04 -0700 Finished: Mon, 30 Jul 2018 10:30:04 -0700 Ready: False Restart Count: 2 Requests: cpu: 10m Environment: POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name) POD_NAMESPACE: default (v1:metadata.namespace) INSTANCE_IP: (v1:status.podIP) ISTIO_META_POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name) ISTIO_META_INTERCEPTION_MODE: REDIRECT Mounts: /etc/certs/ from certs (ro) /etc/istio/proxy from envoy (rw) /var/run/udspath from sdsudspath (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-g2zzv (ro)</code></pre> </div> </div> </p>
<p>More information would be helpful, for example which exact error you are getting.</p> <p>Nevertheless, it really depends on what is defined in serviceC's Dockerfile ENTRYPOINT or CMD.</p> <p>Mapping between Docker and Kubernetes:</p> <ul> <li>Docker ENTRYPOINT --> Pod <code>command</code> (the command run by the container)</li> <li>Docker CMD --> Pod <code>args</code> (the arguments passed to the command)</li> </ul> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a></p>
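<p>A small illustrative sketch (not the asker's actual manifest) of how the two fields interact; note that setting <code>command</code> replaces the image's ENTRYPOINT entirely, so if the overriding command exits (like a one-off <code>chown</code>), the container terminates and gets restarted:</p> <pre><code>containers:
- name: example
  image: busybox
  command: ["/bin/sh"]            # overrides the image ENTRYPOINT
  args: ["-c", "chown 1337:1337 /var/run/udspath &amp;&amp; sleep 3600"]   # overrides the image CMD
</code></pre>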
<p>I am deploying my Microservices on Kubernetes Cluster. Every Application has 4 replicas or PODS. For a REST API in 1 application, I want to track which POD addressed my request. e.g. my <code>/app/deploy(body contains app_id)</code> request is handled by <code>POD1</code>.</p> <p>For the same, I have imported Kubernetes jar in my application. In my code, I want to check the current POD on which this code is running. I want an API like <code>kubernetesDiscoveryClient.getCurrentPOD()</code>, something of this sort.</p>
<p>You do not need the Kubernetes jar in your Java application. A simple <code>System.getenv("HOSTNAME")</code> will give you the name of your pod. This works on all platforms, and since at least Kubernetes version 1.6.</p> <p>More formally, you could use the following in your Kube spec (<a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="noreferrer">detailed reference</a>), and then read the environment using <code>System.getenv("MY_POD_NAME")</code> in Java.</p> <pre><code> env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name </code></pre>
<p>I'm using <code>Kubernetes</code> v1.8.14 on custom built <code>CoreOS</code> cluster:</p> <pre><code>$ kubectl version --short Client Version: v1.10.5 Server Version: v1.8.14+coreos.0 </code></pre> <p>When trying to create the following <code>ClusterRole</code>:</p> <pre><code>$ cat ClusterRole.yml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch </code></pre> <p>I get the following error:</p> <pre><code>$ kubectl create -f ClusterRole.yml Error from server (Forbidden): error when creating "ClusterRole.yml": clusterroles.rbac.authorization.k8s.io "system:coredns" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]}] user=&amp;{cluster-admin [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[] </code></pre> <p>As far as I can tell I'm connecting as <code>cluster-admin</code>, therefore should have sufficient permissions for what I'm trying to achieve. 
Below are relevant <code>cluster-admin</code> config:</p> <pre><code>$ cat ~/.kube/config apiVersion: v1 kind: Config current-context: dev preferences: colors: true clusters: - cluster: certificate-authority: cluster-ca.pem server: https://k8s.loc:4430 name: dev contexts: - context: cluster: dev namespace: kube-system user: cluster-admin name: dev users: - name: cluster-admin user: client-certificate: cluster.pem client-key: cluster-key.pem $ kubectl get clusterrole cluster-admin -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" creationTimestamp: 2018-07-30T14:44:44Z labels: kubernetes.io/bootstrapping: rbac-defaults name: cluster-admin resourceVersion: "1164791" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin uid: 196ffecc-9407-11e8-bd67-525400ac0b7d rules: - apiGroups: - '*' resources: - '*' verbs: - '*' - nonResourceURLs: - '*' verbs: - '*' $ kubectl get clusterrolebinding cluster-admin -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" creationTimestamp: 2018-07-30T14:44:45Z labels: kubernetes.io/bootstrapping: rbac-defaults name: cluster-admin resourceVersion: "1164832" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin uid: 19e516a6-9407-11e8-bd67-525400ac0b7d roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:masters $ kubectl get serviceaccount cluster-admin -o yaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: 2018-07-30T13:32:13Z name: cluster-admin namespace: kube-system resourceVersion: "1158783" selfLink: /api/v1/namespaces/kube-system/serviceaccounts/cluster-admin uid: f809e079-93fc-11e8-8b85-525400546bcd secrets: - name: cluster-admin-token-t7s4c </code></pre> <p>I understand this is RBAC problem, but have no idea how further debug this.</p> <h3>Edit-1.</h3> <p>I tried the suggested, no joy unfortunately...</p> <pre><code>$ kubectl get clusterrolebinding cluster-admin-binding -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: 2018-07-31T09:21:34Z name: cluster-admin-binding resourceVersion: "1252260" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding uid: 1e1c0647-94a3-11e8-9f9b-525400ac0b7d roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cluster-admin namespace: default $ kubectl describe secret $(kubectl get secret | awk '/cluster-admin/{print $1}') Name: cluster-admin-token-t7s4c Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=cluster-admin kubernetes.io/service-account.uid=f809e079-93fc-11e8-8b85-525400546bcd Type: kubernetes.io/service-account-token Data ==== ca.crt: 1785 bytes namespace: 11 bytes token: 
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVyLWFkbWluLXRva2VuLXQ3czRjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmODA5ZTA3OS05M2ZjLTExZTgtOGI4NS01MjU0MDA1NDZiY2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3Rlci1hZG1pbiJ9.rC1x9Or8GArkhC3P0s-l_Pc0e6TEUwfbJtXAN2w-cOaRUCNCo6r4WxXKu32ngOg86TXqCho2wBopXtbJ2CparIb7FWDXzri6O6LPFzHWNzZo3b-TON2yxHMWECGjpbbqjDgkPKDEldkdxJehDBJM_GFAaUdNyYpFFsP1_t3vVIsf2DpCjeMlOBSprYRcEKmDiE6ehF4RSn1JqB7TVpvTZ_WAL4CRZoTJtZDVoF75AtKIADtVXTxVv_ewznDCKUWDupg5Jk44QSMJ0YiG30QYYM699L5iFLirzD5pj0EEPAoMeOqSjdp7KvDzIM2tBiu8YYl6Fj7pG_53WjZrvlSk5pgPLS-jPKOkixFM9FfB2eeuP0eWwLO5wvU5s--a2ekkEhaqHTXgigeedudDA_5JVIJTS0m6V9gcbE4_kYRpU7_QD_0TR68C5yxUL83KfOzj6A_S6idOZ-p7Ni6ffE_KlGqqcgUUR2MTakJgimjn0gYHNaIqmHIu4YhrT-jffP0-5ZClbI5srj-aB4YqGtCH9w5_KBYD4S2y6Rjv4kO00nZyvi0jAHlZ6el63TQPWYkjyPL2moF_P8xcPeoDrF6o8bXDzFqlXLqda2Nqyo8LMhLxjpe_wFeGuwzIUxwwtH1RUR6BISRUf86041aa2PeJMqjTfaU0u_SvO-yHMGxZt3o </code></pre> <p>Then amended <code>~/.kube/config</code>:</p> <pre><code>$ cat ~/.kube/config apiVersion: v1 kind: Config current-context: dev preferences: colors: true clusters: - cluster: certificate-authority: cluster-ca.pem server: https://k8s.loc:4430 name: dev contexts: - context: cluster: dev namespace: kube-system user: cluster-admin-2 name: dev users: - name: cluster-admin user: client-certificate: cluster.pem client-key: cluster-key.pem - name: cluster-admin-2 user: token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVyLWFkbWluLXRva2VuLXQ3czRjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmODA5ZTA3OS05M2ZjLTExZTgtOGI4NS01MjU0MDA1NDZiY2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3Rlci1hZG1pbiJ9.rC1x9Or8GArkhC3P0s-l_Pc0e6TEUwfbJtXAN2w-cOaRUCNCo6r4WxXKu32ngOg86TXqCho2wBopXtbJ2CparIb7FWDXzri6O6LPFzHWNzZo3b-TON2yxHMWECGjpbbqjDgkPKDEldkdxJehDBJM_GFAaUdNyYpFFsP1_t3vVIsf2DpCjeMlOBSprYRcEKmDiE6ehF4RSn1JqB7TVpvTZ_WAL4CRZoTJtZDVoF75AtKIADtVXTxVv_ewznDCKUWDupg5Jk44QSMJ0YiG30QYYM699L5iFLirzD5pj0EEPAoMeOqSjdp7KvDzIM2tBiu8YYl6Fj7pG_53WjZrvlSk5pgPLS-jPKOkixFM9FfB2eeuP0eWwLO5wvU5s--a2ekkEhaqHTXgigeedudDA_5JVIJTS0m6V9gcbE4_kYRpU7_QD_0TR68C5yxUL83KfOzj6A_S6idOZ-p7Ni6ffE_KlGqqcgUUR2MTakJgimjn0gYHNaIqmHIu4YhrT-jffP0-5ZClbI5srj-aB4YqGtCH9w5_KBYD4S2y6Rjv4kO00nZyvi0jAHlZ6el63TQPWYkjyPL2moF_P8xcPeoDrF6o8bXDzFqlXLqda2Nqyo8LMhLxjpe_wFeGuwzIUxwwtH1RUR6BISRUf86041aa2PeJMqjTfaU0u_SvO-yHMGxZt3o </code></pre> <p>And then tried to apply the same <code>ClusterRole</code>, which rendered the same error:</p> <pre><code>$ kubectl apply -f ClusterRole.yml Error from server (Forbidden): error when creating "ClusterRole.yml": clusterroles.rbac.authorization.k8s.io "system:coredns" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} 
PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]}] user=&amp;{system:serviceaccount:kube-system:cluster-admin f809e079-93fc-11e8-8b85-525400546bcd [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[] </code></pre> <p>Below are the flags which I use to start <code>apiserver</code>:</p> <pre><code> containers: - name: kube-apiserver image: quay.io/coreos/hyperkube:${K8S_VER} command: - /hyperkube - apiserver - --bind-address=0.0.0.0 - --etcd-servers=${ETCD_ENDPOINTS} - --allow-privileged=true - --service-cluster-ip-range=${SERVICE_IP_RANGE} - --secure-port=443 - --advertise-address=${ADVERTISE_IP} - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --client-ca-file=/etc/kubernetes/ssl/ca.pem - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --runtime-config=extensions/v1beta1/networkpolicies=true - --anonymous-auth=false - --authorization-mode=AlwaysAllow,RBAC,Node </code></pre> <p>And here are the scripts, which I use to generate my <code>tls</code> certs:</p> <p><strong>root ca</strong>:</p> <pre><code>openssl genrsa -out ca-key.pem 4096 openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca" </code></pre> <p><strong>apiserver</strong>:</p> <pre><code>cat &gt; openssl.cnf &lt;&lt;EOF [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [v3_req] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = kubernetes DNS.2 = kubernetes.default DNS.3 = kubernetes.default.svc DNS.4 = kubernetes.default.svc.cluster.local DNS.5 = ${MASTER_LB_DNS} IP.1 = ${K8S_SERVICE_IP} IP.2 = ${MASTER_HOST} EOF openssl genrsa -out apiserver-key.pem 4096 openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf </code></pre> <p><strong>cluster-admin</strong>:</p> <pre><code>openssl genrsa -out cluster-admin-key.pem 4096 openssl req -new -key cluster-admin-key.pem -out cluster-admin.csr -subj "/CN=cluster-admin" openssl x509 -req -in cluster-admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cluster-admin.pem -days 3650 </code></pre> <p>I hope this gives you more insight, what's wrong with my system.</p> <h3>Edit-2.</h3> <p>I noted a slight discrepancy between my system configuration and what @MarcinRomaszewicz suggested, thus the <em>namespace</em> of the <em>cluster-admin</em> <code>ServiceAccount</code>, in my case it is in the <code>kube-system</code> as opposed to the <code>default</code> <em>namespace</em>:</p> <pre><code>$ kubectl delete clusterrolebinding cluster-admin-binding 
clusterrolebinding.rbac.authorization.k8s.io "cluster-admin-binding" deleted $ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole=cluster-admin --serviceaccount=kube-system:cluster-admin clusterrolebinding.rbac.authorization.k8s.io "cluster-admin-binding" created $ kubectl apply -f ClusterRole.yml clusterrole.rbac.authorization.k8s.io "system:coredns" created </code></pre> <p>However it still doesn't work with my certificates... </p> <h3>Edit-3.</h3> <p>As suggested in the comments, in order for <code>apiserver</code> to recognize the user as the <code>cluster-admin</code>, Subject line in the certificate of that user must contain the following items: <code>Subject: CN = cluster-admin, O = system:masters</code>. One way to generate such a certificate is as follows:</p> <pre><code>openssl genrsa -out cluster-admin-key.pem 4096 openssl req -new -key cluster-admin-key.pem -out cluster-admin.csr -subj "/CN=cluster-admin/O=system:masters" openssl x509 -req -in cluster-admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cluster-admin.pem -days 3650 </code></pre>
<p>There isn't enough information here to answer your question.</p> <p>It sounds like you are running into privilege escalation prevention: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping</a></p> <p>This would mean you aren't actually running as cluster-admin. Check your kubectl config. You might be running as "admin" constrained to a particular name space, for example.</p> <p>(edit based on comment below)</p> <p>Your identity to k8s is established by the contents of your cluster.pem certificate, not the user name from kubeconfig, since that user name is only valid inside the kubeconfig file. Your actual user is determined by that certificate. </p> <p>I see that you have a service account named cluster-admin, but it is not a member of "system:masters", since groups are something that are a property of the authentication system which authenticates users - you need to create an explicit cluster role binding to bind your cluster-admin service account to the cluster-admin clusterrole.</p> <pre><code>kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:cluster-admin </code></pre> <p>You should see the clusterrole now bound with your service account.</p> <pre><code>$ kubectl get clusterrolebinding cluster-admin-binding -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: 2018-07-30T22:02:33Z name: cluster-admin-binding resourceVersion: "71152" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding uid: 42a2862c-9444-11e8-8b71-080027de17da roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cluster-admin namespace: default </code></pre> <p>Note at the bottom, that the binding applies to "ServiceAccount", not group.</p> <p>Your service account has an access token, use that to authenticate instead of your certificate. I made myself a cluster-admin service account, and this is how I get the token:</p> <pre><code>$ kubectl describe secret $(kubectl get secret | grep cluster-admin | awk '{print $1}') Name: cluster-admin-token-96vdz Namespace: default Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=cluster-admin kubernetes.io/service-account.uid=f872f08b-9442-11e8-8b71-080027de17da Type: kubernetes.io/service-account-token Data ==== token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tdG9rZW4tOTZ2ZHoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2x1c3Rlci1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY4NzJmMDhiLTk0NDItMTFlOC04YjcxLTA4MDAyN2RlMTdkYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNsdXN0ZXItYWRtaW4ifQ.&lt;signature snipped&gt; ca.crt: 1066 bytes namespace: 7 bytes </code></pre> <p>Update kubeconfig to authenticate yourself using that token, instead of the certificate you are currently using, and you should be successfully authenticated as that cluster-admin service account.</p> <p>(edit 2) It turned out that the certificate being used to authenticate into Kubernetes did not have any identity claims about the user. 
Kubernetes relies on authentication modules to authenticate users, in this case, based on certificates. It was expecting the certificate to contain a claim which put the user into the "system:masters" group, by setting the Organization to "system:masters".</p> <p>There are many moving pieces here. The problem had nothing to do with service accounts or roles, but rather in user authentication, which is very opaque.</p>
<p>The <code>User Microservice</code> is deployed on Kubernetes. </p> <p>The <code>Order Microservice</code> is not deployed on Kubernetes, but is registered with Eureka.</p> <p><strong><em>My question:</em></strong></p> <p>How can the <code>Order Microservice</code> discover and access the <code>User Microservice</code> through Eureka?</p>
<p>First let's take a look at the problem itself: if you use an overlay network as the Kubernetes CNI (e.g. Flannel), the problem is that it creates an isolated network that's not reachable from the outside. If you have a network like that, one solution would be to move the Eureka server into Kubernetes, so Eureka can reach both the service inside Kubernetes and the service outside of Kubernetes.</p> <p>Another solution would be to tell Eureka where it can find the service instead of relying on auto discovery, but for that you also need to make the service externally available with a <em>Service</em> of type NodePort, HostPort or LoadBalancer, or with an <em>Ingress</em> (see the sketch below). I'm not sure it's possible, but section 11.2 in the following doc could be worth a look: <a href="https://cloud.spring.io/spring-cloud-static/Edgware.SR3/multi/multi__service_discovery_eureka_clients.html" rel="nofollow noreferrer">Eureka Client Discovery</a>.</p> <p>The third solution would be to use a CNI that doesn't use an overlay network, like <a href="https://github.com/romana/romana" rel="nofollow noreferrer">Romana</a>, which makes the service externally routable by default.</p>
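<p>Not part of the original answer, but a minimal sketch of the NodePort variant (service name, labels and ports are assumptions) could look like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  type: NodePort
  selector:
    app: user-service
  ports:
  - port: 8080        # port inside the cluster
    targetPort: 8080  # container port of the User Microservice
    nodePort: 30080   # reachable on every node's IP from outside the cluster
</code></pre> <p>The Order Microservice (or a statically configured Eureka entry) could then reach the User Microservice on <code>&lt;any-node-ip&gt;:30080</code>.</p>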
<p>We are trying to create an ExternalName service for Kubernetes to hide URL linking to our Firebase:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: firebase namespace: devel spec: type: ExternalName externalName: firebase-project-123456.firebaseio.com </code></pre> <p>The service is created correctly, and we can ping to <code>http://firebase</code>. However connecting to the firebase endpoint doesn't work:</p> <pre><code>curl -v http://firebase/activity.json &lt; HTTP/1.1 404 Not Found &lt; Content-Type: text/html; charset=UTF-8 &lt; Referrer-Policy: no-referrer </code></pre> <p>One idea is that there is an issue with https (as the target service runs on https), however then we wouldn't probably get 404, but some other error. I have no idea what might be wrong on the way.</p>
<p>You might be running into a virtual host issue. firebase-project-123456.firebaseio.com is a virtual host name which is then used to route your request to the correct backend. A Kubernetes external service is essentially a DNS CNAME, which forces a second DNS lookup for the actual host name.</p> <p>See if this works for you:</p> <pre><code>curl -v -H "Host: firebase-project-123456.firebaseio.com" http://firebase/activity.json </code></pre> <p>If it does, that's what you're running into. You might have to make a trivially simple service instead, which proxies your requests to the correct URL at firebase.</p>
<p>I am using the <a href="https://console.bluemix.net/docs/cli/index.html#overview" rel="nofollow noreferrer">CLI for IBM Cloud</a> in my bash environment. Is there support for autocomplete, i.e., automatic completion of commands and their options?</p> <p>I am especially interested in the commands for the Kubernetes service and the container registry.</p>
<p>Autocompletion is supported, but not that well-advertised. The instructions can be found in the <a href="https://console.bluemix.net/docs/cli/reference/bluemix_cli/enable_cli_autocompletion.html#enabling-shell-autocompletion-for-ibm-cloud-cli-linux-macos-only-" rel="nofollow noreferrer">IBM Cloud developer tools documentation</a>. The following source needs to be added to the bash / zsh resource file or profile:</p> <p>bash: <code>source /usr/local/ibmcloud/autocomplete/bash_autocomplete</code><br> zsh: <code>source /usr/local/ibmcloud/autocomplete/zsh_autocomplete</code></p>
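<p>To make the completion persistent for bash, one option (assuming the default install location from above) is to append the line to your shell profile and reload it:</p> <pre><code>echo "source /usr/local/ibmcloud/autocomplete/bash_autocomplete" &gt;&gt; ~/.bashrc
source ~/.bashrc
</code></pre>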
<p>Is there a way to set a Galera root password with this script?</p> <p><a href="https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/statefulset/mysql-galera" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/statefulset/mysql-galera</a></p> <p>Everything works great but I can't figure out how to set password.</p> <p>Thank you.</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mysql spec: serviceName: "galera" replicas: 3 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: initContainers: - name: install image: k8s.gcr.io/galera-install:0.1 imagePullPolicy: Always args: - "--work-dir=/work-dir" volumeMounts: - name: workdir mountPath: "/work-dir" - name: config mountPath: "/etc/mysql" - name: bootstrap image: debian:jessie command: - "/work-dir/peer-finder" args: - -on-start="/work-dir/on-start.sh" - "-service=galera" env: - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace volumeMounts: - name: workdir mountPath: "/work-dir" - name: config mountPath: "/etc/mysql" containers: - name: mysql image: k8s.gcr.io/mysql-galera:e2e ports: - containerPort: 3306 name: mysql - containerPort: 4444 name: sst - containerPort: 4567 name: replication - containerPort: 4568 name: ist args: - --defaults-file=/etc/mysql/my-galera.cnf - --user=root readinessProbe: # TODO: If docker exec is buggy just use k8s.gcr.io/mysql-healthz:1.0 exec: command: - sh - -c - "mysql -u root -e 'show databases;'" initialDelaySeconds: 15 timeoutSeconds: 5 successThreshold: 2 volumeMounts: - name: datadir mountPath: /var/lib/ - name: config mountPath: /etc/mysql volumes: - name: config emptyDir: {} - name: workdir emptyDir: {} volumeClaimTemplates: - metadata: name: datadir spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi </code></pre> <p>Where would I add:</p> <pre><code>env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password </code></pre> <p>Then do I need to add it to the args?</p> <pre><code>args: - --defaults-file=/etc/mysql/my-galera.cnf - --user=root </code></pre> <p>And readiness probe?</p> <pre><code> readinessProbe: # TODO: If docker exec is buggy just use k8s.gcr.io/mysql-healthz:1.0 exec: command: - sh - -c - "mysql -u root -e 'show databases;'" </code></pre>
<p>You need to use a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a>, for example (note that the literal key name <code>password</code> has to match the <code>key</code> referenced in the pod spec below):</p> <pre><code>kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD </code></pre> <p>and then use it as an env variable:</p> <pre><code> spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password </code></pre> <p>or mount it as a volume:</p> <pre><code>spec: containers: - name: mycontainer image: mysql volumeMounts: - name: foo mountPath: /etc/foo readOnly: true volumes: - name: foo secret: secretName: mysql-pass items: - key: password path: my-group/my-password </code></pre> <p>Reference: <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#create-a-secret-for-mysql-password" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#create-a-secret-for-mysql-password</a></p>
<p>I am trying to implement CI/CD pipeline by using Kubernetes and Jenkins. I am planning to use Kubernetes HA Cluster having 3 master and 5 worker machine / node. </p> <p>Now I am exploring about the implementation tutorials for CI/CD Pipeline. And also exploring about the Jenkins usage with Kubernetes HA Cluster. When I am reading , I felt little bit confusions about Jenkins. That I am adding here.</p> <p><strong>1.</strong> I have total 8 VMs - 3 Master and 5 Worker machines / nodes (Kubernetes cluster). If I installing Jenkins in any one worker machines , then is there any problem while integrating with CI/CD pipeline for deployment ?</p> <p><strong>2.</strong> I am previously readed the following link for understanding the implementations,</p> <p><a href="https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he" rel="nofollow noreferrer">https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he</a></p> <p>Is this mandatory to use Jenkins master and slave ?. In this tutorial showing that If kubectl,helm and docker is installed then don't need to use Jenkins slave. What is the idea about master and slave here?</p> <p><strong>3.</strong> If I am installing both jenkins master and slave in kubernetes cluster worker machine / node , then Need to install master and slave in separate separate VMs? I have still confusion about where to install Jenkins?</p> <p>I am just started on CI/CD pipeline - Kubernetes and Jenkins.</p>
<p>Jenkins has two parts. There's the master, which manages all the jobs, and the workers, which perform the jobs.</p> <p>The Jenkins master supports many kinds of workers (slaves) via plugins - you can have stand alone nodes, Docker based slaves, Kubernetes scheduled Docker slaves, etc.</p> <p>Where you run the Jenkins master doesn't really matter very much, what is important is how you configure it to run your jobs.</p> <p>Since you are on Kubernetes, I would suggest checking out the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">Kubernetes plugin for Jenkins</a>. When you configure the master to use this plugin, it will create a new Kubernetes pod for each job, and this pod will run the Docker based <a href="https://hub.docker.com/r/jenkinsci/slave/" rel="nofollow noreferrer">Jenkins slave image</a>. The way this works is that the plugin watches for a job in the job queue, notices there isn't a slave to run it, starts the Jenkins slave docker image, which registers itself with the master, then it does the job, and gets deleted. So you do not need to directly create slave nodes in this setup.</p> <p>When you are in a Kubernetes cluster in a container based workflow, you don't need to worry about where to run the containers, let Kubernetes figure that out for you. Just use Helm to launch the Jenkins master, then connect to the Jenkins master and configure it to use Kubernetes slaves.</p>
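<p>As a rough sketch (assuming Helm 2 with the <code>stable</code> chart repository configured; adjust values for your environment), launching the Jenkins master on the cluster can be as simple as:</p> <pre><code>helm install --name jenkins --namespace jenkins stable/jenkins
</code></pre> <p>The stable/jenkins chart typically ships with the Kubernetes plugin enabled, so once the master is up you mainly verify the generated cloud configuration rather than setting it up from scratch.</p>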
<p>I have a kubernetes setup in which one is <code>master</code> node and two <code>worker</code> nodes. After the deployment, which is a <code>daemonset</code>, it starts pods on both the <code>worker</code> nodes. These pods contain 2 containers. These containers have a python script running in them. The python scripts runs normally but at a certain point, after some time, it needs to send a <code>shutdown</code> command to the host. I can directly issue command <code>shutdown -h now</code> but this will run on the container not on the host and gives below error:</p> <pre><code>Failed to connect to bus: No such file or directory Failed to talk to init daemon. </code></pre> <p>To resolve this, I can get the <code>ip address</code> of the host and then I can ssh into it and then run the command to safely shutdown the host.</p> <p>But is there any other way I can issue command to the host in kubernetes/dockers.?</p>
<p>You can access your cluster using the Kubernetes API.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/</a></p> <blockquote> <p>Accessing the API from a Pod: When accessing the API from a pod, locating and authenticating to the apiserver are somewhat different.</p> <p>The recommended way to locate the apiserver within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.</p> <p>The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.</p> </blockquote> <p>For draining the node you can use this:</p> <blockquote> <p>The Eviction API</p> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/</a></p> <p>But I'm not really sure whether a pod can drain its own node. A workaround could be to control it from another pod running on a different node.</p>
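<p>A minimal sketch of calling the API from inside a pod using the mounted service account token (the service account still needs RBAC permissions for whatever you call, e.g. evictions) looks like this:</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/nodes
</code></pre>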
<p>How does Kubernetes knows what external cloud provider on it is running?</p> <p>Is there any specific service running in Master which finds out if the Kubernetes Cluster running in AWS or Google Cloud?</p> <p>Even if it is able to find out it is AWS or Google, from where does it take the credentials to create the external AWS/Google Load Balancers? Do we have to configure the credentials somewhere so that it picks it from there and creates the external load balancer?</p>
<p>When installing Kubernetes with cloud provider integration, you must specify the <code>--cloud-provider=aws</code> flag on a variety of components (see the example at the end of this answer).</p> <p><strong><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options" rel="nofollow noreferrer">kube-controller-manager</a></strong> - this is the component which interacts with the cloud API when cloud-specific requests are made. It runs "loops" which ensure that any cloud provider request is completed. So when you request a Service of Type=LoadBalancer, the controller-manager is the thing that checks and ensures this was provisioned.</p> <p><strong><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options" rel="nofollow noreferrer">kube-apiserver</a></strong> - this simply ensures the cloud APIs are exposed, for example for persistent volumes.</p> <p><strong><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options" rel="nofollow noreferrer">kubelet</a></strong> - ensures that workloads are provisioned correctly on nodes. This is especially the case for things like persistent storage, e.g. EBS volumes.</p> <blockquote> <p>Do we have to configure the credentials somewhere so that it picks it from there and creates the external load balancer?</p> </blockquote> <p>All the above components should be able to query the required cloud provider APIs. Generally this is done using <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="nofollow noreferrer">IAM roles</a> which ensure the actual node itself has the permissions. If you take a look at the <a href="https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md" rel="nofollow noreferrer">kops</a> documentation, you'll see examples of the IAM roles assigned to masters and workers to give those nodes permissions to query and make API calls.</p> <p>It should be noted that this model is changing shortly, moving all cloud provider logic into a dedicated <a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="nofollow noreferrer">cloud-controller-manager</a> which will have to be pre-configured when installing the cluster.</p>
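<p>As a sketch of where the flag goes (exact placement depends on how the cluster was installed; tools like kops or kubeadm wire this up for you):</p> <pre><code># kubelet on every node
kubelet --cloud-provider=aws ...

# controller manager on the masters
kube-controller-manager --cloud-provider=aws ...
</code></pre>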
<p>I'm looking for a command like "gcloud config get-value project" that retrieves a project's name, but for a pod (it can retrieve any pod name that is running). I know you can get multiple pods with "kubectl get pods", but I would just like one pod name as the result.</p> <p>I'm having to do this all the time: </p> <pre><code>kubectl get pods # add one of the pod names in next line kubectl logs -f some-pod-frontend-3931629792-g589c some-app </code></pre> <p>I'm thinking along the lines of "gcloud config get-value pod". Is there a command to do that correctly?</p>
<p>There are many ways, here are some examples of solutions:</p> <p><code>kubectl get pods -o name --no-headers=true </code></p> <p><code>kubectl get pods -o=name --all-namespaces | grep kube-proxy</code></p> <p><code>kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}'</code></p> <p>For additional reading, please take a look to these links:</p> <p><a href="https://stackoverflow.com/questions/35797906/kubernetes-list-all-running-pods-name">kubernetes list all running pods name</a></p> <p><a href="https://stackoverflow.com/questions/35773731/kubernetes-list-all-container-id?noredirect=1&amp;lq=1">Kubernetes list all container id</a></p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/</a></p>
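<p>Combining these with the log-tailing use case from the question, a possible one-liner (the <code>app=frontend</code> label is an assumption; use whatever label your pods actually carry) would be:</p> <pre><code>POD=$(kubectl get pods -l app=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl logs -f "$POD" some-app
</code></pre>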
<p>I'm trying to write a go template that extracts the value of the load balancer. Using <code>--go-template={{status.loadBalancer.ingress}}</code> returns <code>[map[hostname:GUID.us-west-2.elb.amazonaws.com]]%</code> When I add <code>.hostname</code> to the template I get an error saying, "can't evaluate field hostname in type interface {}". I've tried using the <code>range</code> keyword, but I can't seem to get the syntax right. </p> <pre><code>{ "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2018-07-30T17:22:12Z", "labels": { "run": "nginx" }, "name": "nginx-http", "namespace": "jx", "resourceVersion": "495789", "selfLink": "/api/v1/namespaces/jx/services/nginx-http", "uid": "18aea6e2-941d-11e8-9c8a-0aae2cf24842" }, "spec": { "clusterIP": "10.100.92.49", "externalTrafficPolicy": "Cluster", "ports": [ { "nodePort": 31032, "port": 80, "protocol": "TCP", "targetPort": 8080 } ], "selector": { "run": "nginx" }, "sessionAffinity": "None", "type": "LoadBalancer" }, "status": { "loadBalancer": { "ingress": [ { "hostname": "GUID.us-west-2.elb.amazonaws.com" } ] } } } </code></pre>
<p>As you can see from the JSON, the ingress element is an array. You can use the <a href="https://golang.org/pkg/text/template/#hdr-Functions" rel="nofollow noreferrer">template function index</a> to grab this array element. </p> <p>Try:</p> <pre><code>kubectl get svc &lt;name&gt; -o=go-template --template='{{(index .status.loadBalancer.ingress 0 ).hostname}}' </code></pre> <p>This assumes, of course, that you're only provisioning a single load balancer; if there can be multiple ingress entries, you'll have to use <code>range</code>, as shown below.</p>
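<p>A sketch of the <code>range</code> form, which prints every ingress hostname on its own line (same command, just a different template):</p> <pre><code>kubectl get svc &lt;name&gt; -o=go-template --template='{{range .status.loadBalancer.ingress}}{{.hostname}}{{"\n"}}{{end}}'
</code></pre>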
<p>Hi, I am working on Kubernetes using AWS EKS. I have a problem with kompose: when I convert my docker-compose file, the host volume mount points are not supported, and when I run <code>kompose up</code> it asks for a username - which credentials should I provide?</p> <p>This is my docker-compose.yml:</p> <pre><code>services: cms-db: image: mysql:5.6 volumes: - "./shared/db:/var/lib/mysql" restart: always environment: - MYSQL_DATABASE=cms - MYSQL_USER=cms - MYSQL_RANDOM_ROOT_PASSWORD=yes mem_limit: 1g env_file: config.env cms-xmr: image: xibosignage/xibo-xmr:release-0.7 ports: - "9505:9505" restart: always mem_limit: 256m env_file: config.env cms-web: image: xibosignage/xibo-cms:release-1.8.10 volumes: - "./shared/cms/custom:/var/www/cms/custom" - "./shared/backup:/var/www/backup" - "./shared/cms/web/theme/custom:/var/www/cms/web/theme/custom" - "./shared/cms/library:/var/www/cms/library" - "./shared/cms/web/userscripts:/var/www/cms/web/userscripts" restart: always links: - cms-db:mysql - cms-xmr:50001 environment: - XMR_HOST=cms-xmr env_file: config.env ports: - "80:80" mem_limit: 1g [root@my-ip xibo-docker-1.8.10]# kompose up WARN Unsupported env_file key - ignoring WARN Unsupported links key - ignoring WARN Volume mount on the host "./shared/db" isn't supported - ignoring path on the host WARN Volume mount on the host "./shared/cms/custom" isn't supported - ignoring path on the host WARN Volume mount on the host "./shared/backup" isn't supported - ignoring path on the host WARN Volume mount on the host "./shared/cms/web/theme/custom" isn't supported - ignoring path on the host WARN Volume mount on the host "./shared/cms/library" isn't supported - ignoring path on the host WARN Volume mount on the host "./shared/cms/web/userscripts" isn't supported - ignoring path on the host INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. Please enter Username: </code></pre>
<p>The better way to apply your configuration to a Kubernetes cluster is to convert it first, check the YAML files, adjust them if necessary and then apply them using <code>kubectl</code>.</p> <p>I tested the conversion using <code>kompose v1.16.0</code> on Mac and I had to remove the <code>mem_limit</code> option from the <code>docker-compose.yml</code> file to complete it successfully.</p> <pre><code>$ mkdir export $ kompose -v convert -f docker-compose.yml -o export </code></pre> <p>14 files will be created in the <code>export</code> directory.</p> <p>Local paths are not supported - Persistent Volume Claims will be created instead (a warning will appear). Persistent Volume Claims are <code>100Mi</code> by default. Edit the claim YAML files and increase the size if necessary (see the example below).</p> <p>Now your configuration can be deployed to the Kubernetes cluster using:</p> <pre><code>kubectl create -f export/ </code></pre>
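<p>To make the claim-size edit mentioned above concrete: kompose usually names the generated claims after the service (e.g. <code>cms-db-claim0</code>), so the exact file and claim names on your machine may differ, but the change itself is just the <code>storage</code> request:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-db-claim0      # name as generated by kompose - adjust to match your files
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # raised from the generated 100Mi default
</code></pre>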
<p>I am trying to learn how to implement a CI/CD pipeline using Jenkins and Kubernetes for my Spring Boot microservice deployment.</p> <p>I am following these links for reference:</p> <ol> <li><a href="https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he" rel="nofollow noreferrer">https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he</a></li> <li><a href="https://medium.com/jfrogplatform/easily-automate-your-ci-cd-pipeline-with-jenkins-helm-and-kubernetes-c96283c25701" rel="nofollow noreferrer">https://medium.com/jfrogplatform/easily-automate-your-ci-cd-pipeline-with-jenkins-helm-and-kubernetes-c96283c25701</a></li> </ol> <p>They show how to use a Kubernetes Helm chart to simplify application deployment, and they use a Helm repository alongside a Docker registry (I am planning to use Docker Hub).</p> <p><strong>Confusion</strong></p> <p>My confusion is this: if we are already using a Helm chart with Kubernetes, why do we also need a Helm repository in the CI/CD pipeline?</p>
<p>You could work with your Helm charts unpackaged, effectively deploying them from source. You don't <em>necessarily</em> have to package them and host them in a repo. You probably will want to package the charts and host them in a repo if multiple apps/teams consume the charts (perhaps building their own charts on top of them), or if you want to be able to run different versions of a chart in different places. A rough comparison of the two workflows is sketched below.</p>
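<p>A minimal sketch of the two approaches (Helm v2 syntax; the chart directory name and repository URL below are placeholders):</p> <pre><code># deploy straight from the chart source directory - no repository involved
helm install ./my-app-chart --name my-app

# or: package the chart and publish it to a chart repository
helm package ./my-app-chart                           # produces my-app-chart-&lt;version&gt;.tgz
helm repo index . --url https://charts.example.com    # generates/updates index.yaml
# upload the .tgz and index.yaml to your repo host, then consumers can do:
helm repo add myrepo https://charts.example.com
helm install myrepo/my-app-chart --name my-app
</code></pre>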
<p>I'm trying to enable the dashboard via a NodePort service. I have 3 VMs:</p> <ul> <li>192.168.100.31 - master</li> <li>192.168.100.32 - minion</li> <li>192.168.100.33 - minion (dashboard here)</li> </ul> <p>After applying:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>The dashboard became accessible via kube-proxy. So I've edited the service to become NodePort:</p> <pre><code>kubectl edit services kubernetes-dashboard -n kube-system </code></pre> <p>Then I tried to access the dashboard via the HTTPS NodePort, and it fails. When I try to visit 192.168.100.31 or 192.168.100.32 like:</p> <pre><code>https://192.168.100.31:32443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default </code></pre> <p>The browser hangs &amp; eventually fires a timeout error. Meanwhile the same URL for 192.168.100.33 allows me to add the site to the browser exceptions because of the self-signed cert and then ... fails.</p> <pre><code>This site can’t be reached The webpage at https://192.168.100.33:32443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default might be temporarily down or it may have moved permanently to a new web address. ERR_INVALID_RESPONSE </code></pre> <p>Kubernetes version: v1.11.1</p> <p><strong>UPD:</strong></p> <p>kubectl get svc kubernetes-dashboard -n kube-system --export -o yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kubernetes-dashboard"},"spec":{"ports":[{"nodePort":32443,"port":443,"protocol":"TCP","targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"},"sessionAffinity":"None","type":"NodePort"},"status":{"loadBalancer":{}}} creationTimestamp: null labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard spec: externalTrafficPolicy: Cluster ports: - port: 443 protocol: TCP targetPort: 8443 selector: k8s-app: kubernetes-dashboard sessionAffinity: None type: NodePort status: loadBalancer: {} </code></pre> <p>kubectl get svc kubernetes-dashboard -n kube-system</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.96.174.242 &lt;none&gt; 443:32443/TCP 52m </code></pre> <p>kubectl describe svc kubernetes-dashboard -n kube-system</p> <pre><code>Name: kubernetes-dashboard Namespace: kube-system Labels: k8s-app=kubernetes-dashboard Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernete... Selector: k8s-app=kubernetes-dashboard Type: NodePort IP: 10.96.174.242 Port: &lt;unset&gt; 443/TCP TargetPort: 8443/TCP NodePort: &lt;unset&gt; 32443/TCP Endpoints: 192.0.2.2:8443 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre>
<p>It looks like you are using an <code>apiserver</code>-proxy style of URL while accessing the <code>dashboard</code> service directly via its NodePort. When you go through the NodePort you should hit the service root, not the <code>/api/v1/.../proxy/</code> path.</p> <p>Could you check the following links in your browser to access the Kubernetes dashboard (using the NodePort <code>32443</code> from your service output):</p> <p><a href="https://192.168.100.31:32443/" rel="nofollow noreferrer">https://192.168.100.31:32443/</a></p> <p><a href="https://192.168.100.32:32443/" rel="nofollow noreferrer">https://192.168.100.32:32443/</a></p> <p><a href="https://192.168.100.33:32443/" rel="nofollow noreferrer">https://192.168.100.33:32443/</a></p>
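<p>You can also sanity-check this from a shell first; if the NodePort is reachable, a request to the service root should at least return a TLS-wrapped response from the dashboard rather than timing out:</p> <pre><code>curl -k https://192.168.100.33:32443/
</code></pre>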
<p>In Kubernetes we try to have an immutable deployment of our PHP code by deploying the PHP code in a prepackaged container.</p> <p>By nature a Kubernetes volume replaces the directory with an empty volume, but I would like to keep the data of the PHP container so we can share it with the nginx container, which has a vhost configured to connect to the PHP container.</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: serviceability spec: replicas: 1 template: metadata: spec: containers: - name: my-stuff-php image: our-php-service-in-fpm-container:latest ports: - containerPort: 9000 name: transport protocol: TCP volumeMounts: - name: my-volume mountPath: /var/www/html - name: my-stuff-nginx image: nginx:latest ports: - containerPort: 80 name: http protocol: TCP volumeMounts: - name: my-volume mountPath: /var/www/html volumes: - name: my-volume emptyDir: {} </code></pre> <p>A similar setup on docker-compose works because docker-compose behaves differently with regard to volumes.</p> <p>How can I share the existing data in /var/www/html from my PHP container with the nginx container?</p>
<p>Kubernetes doesn't have a docker-compose-style mechanism to share an existing folder as a volume. But you can create a Persistent Volume (PV) and a Persistent Volume Claim (PVC) and share your data between containers. It is described in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">documentation</a>. Examples from the docs:</p> <p>YAML for creating the PV:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" </code></pre> <p>Then you make a PVC from this volume.</p> <p>YAML:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>Your deployment YAML will look like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: serviceability spec: replicas: 1 template: metadata: spec: containers: - name: my-stuff-php image: our-php-service-in-fpm-container:latest ports: - containerPort: 9000 name: transport protocol: TCP volumeMounts: - name: task-pv-storage mountPath: /var/www/html - name: my-stuff-nginx image: nginx:latest ports: - containerPort: 80 name: http protocol: TCP volumeMounts: - name: task-pv-storage mountPath: /var/www/html volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim </code></pre> <p>As a result you will have a volume with data which is shared between the two containers in the pod.</p>
<p>I am trying to install Kubernetes on Mesosphere DC/OS with only one private agent, using <a href="https://github.com/dcos/dcos-vagrant" rel="nofollow noreferrer">dcos-vagrant</a>.</p> <p>However, the step for "kube-node-0" gets stuck on "PREPARED":</p> <pre><code># dcos kubernetes plan show deploy deploy (serial strategy) (IN_PROGRESS) ├─ etcd (serial strategy) (COMPLETE) │ └─ etcd-0:[peer] (COMPLETE) ├─ apiserver (dependency strategy) (COMPLETE) │ └─ kube-apiserver-0:[instance] (COMPLETE) ├─ mandatory-addons (serial strategy) (COMPLETE) │ ├─ mandatory-addons-0:[additional-cluster-role-bindings] (COMPLETE) │ ├─ mandatory-addons-0:[kubelet-tls-bootstrapping] (COMPLETE) │ ├─ mandatory-addons-0:[kube-dns] (COMPLETE) │ ├─ mandatory-addons-0:[metrics-server] (COMPLETE) │ ├─ mandatory-addons-0:[dashboard] (COMPLETE) │ └─ mandatory-addons-0:[ark] (COMPLETE) ├─ kubernetes-api-proxy (dependency strategy) (COMPLETE) │ └─ kubernetes-api-proxy-0:[install] (COMPLETE) ├─ controller-manager (dependency strategy) (COMPLETE) │ └─ kube-controller-manager-0:[instance] (COMPLETE) ├─ scheduler (dependency strategy) (COMPLETE) │ └─ kube-scheduler-0:[instance] (COMPLETE) ├─ node (dependency strategy) (IN_PROGRESS) │ └─ kube-node-0:[kube-proxy, coredns, kubelet] (PREPARED) └─ public-node (dependency strategy) (COMPLETE) </code></pre> <p>I don't understand the problem because there are enough resources left, as we can see on the DC/OS Dashboard: <a href="https://i.stack.imgur.com/Uez2N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uez2N.png" alt="enter image description here"></a></p> <p>Here is the Kubernetes configuration in options.js:</p> <pre><code>{ "kubernetes": { "node_count": 1, "reserved_resources": { "kube_cpus": 10, "kube_mem": 10000, "kube_disk": 15000 } } } </code></pre> <p>And below, VagrantConfig.yaml:</p> <pre><code>m1: ip: 192.168.65.90 cpus: 2 memory: 2048 type: master a1: ip: 192.168.65.111 cpus: 14 memory: 13144 memory-reserved: 512 type: agent-private p1: ip: 192.168.65.60 cpus: 2 memory: 1536 memory-reserved: 512 type: agent-public aliases: - spring.acme.org - oinker.acme.org boot: ip: 192.168.65.50 cpus: 2 memory: 1024 type: boot </code></pre>
<p>The problem seems to be how much RAM you're requesting for the Kubernetes node. Apparently, the cluster has less than 10000MB of available RAM (as per screenshot, there's 4GiB used out of 10GiB). As a test, reduce that to 8000 and it should work.</p>
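<p>Concretely, that would mean something like this in the options file from the question (same values, just the memory reservation lowered):</p> <pre><code>{
  "kubernetes": {
    "node_count": 1,
    "reserved_resources": {
      "kube_cpus": 10,
      "kube_mem": 8000,
      "kube_disk": 15000
    }
  }
}
</code></pre>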
<p>I have a working Redis pod with a GCE (gcloud) persistent disk. Sometimes when deleting the pod the following error is thrown:</p> <pre><code>AttachVolume.Attach failed for volume "redis-volume" : GCE persistent disk not found: diskName="redis-volume" zone="europe-west3-c" </code></pre> <p>When this happens, deleting the pod once again resolves the issue - but this is not a solution.</p> <p>I am using the following Kubernetes configuration:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 1 template: metadata: labels: app: redis spec: containers: - name: redis image: redis:3.2-alpine imagePullPolicy: Always args: ["--requirepass", "password", "--appendonly", "yes", "--save", "900", "1", "--save", "30", "1"] ports: - containerPort: 6379 name: redis env: volumeMounts: - name: redis-volume mountPath: /data volumes: - name: redis-volume gcePersistentDisk: pdName: redis-volume fsType: ext4 </code></pre> <p>Has anyone encountered this issue?</p>
<p>I had the same problem; it was solved after separating the PersistentVolume from the PersistentVolumeClaim.</p> <p>The Deployment should use the claim instead of referencing the disk directly. <a href="https://www.youtube.com/watch?v=n06kKYS6LZE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=n06kKYS6LZE</a></p>
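<p>A rough sketch of what that separation could look like for the disk from the question (the PV/PVC names, size and access mode here are assumptions; <code>storageClassName: ""</code> is set on the claim so it binds to this pre-existing PV instead of dynamically provisioning a new disk):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv              # hypothetical name
spec:
  capacity:
    storage: 10Gi             # assumed size of the existing GCE disk
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: redis-volume      # the existing disk from the question
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc             # hypothetical name
spec:
  storageClassName: ""        # match the PV above rather than the default StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Then, in the Deployment, replace the <code>gcePersistentDisk</code> volume with a reference to the claim:</p> <pre><code>      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: redis-pvc
</code></pre>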
<p>I've installed Kubernetes with docker-for-desktop. Now I want to create a user (following RBAC principles). I'm using private certificates and want to sign them against the <code>ca.crt</code> of the cluster.</p> <p>For minikube this <code>ca.crt</code> was in <code>.minikube/ca.crt</code>, but where can I find it in the docker-for-desktop installation?</p>
<p>By default, your HyperKit VM doesn't mount volumes locally in docker-for-desktop.</p> <p>Your best bet is to copy the ca.crt manually to your machine using <code>kubectl cp</code>.</p> <p>Example:</p> <pre><code>kubectl cp kube-apiserver-docker-desktop:run/config/pki/ca.crt -n kube-system /tmp/ca.crt </code></pre>
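<p>To actually sign user certificates you will also need the CA key; assuming it lives next to the cert in the same apiserver pod (this path is an assumption mirroring the <code>ca.crt</code> one above), the signing itself is plain OpenSSL. The user name <code>jane</code> and group <code>dev-team</code> below are placeholders:</p> <pre><code># copy the CA key as well (path assumed to mirror ca.crt)
kubectl cp kube-apiserver-docker-desktop:run/config/pki/ca.key -n kube-system /tmp/ca.key

# create a key and CSR for the new user (CN = username, O = group)
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"

# sign the CSR with the cluster CA
openssl x509 -req -in jane.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -out jane.crt -days 365
</code></pre>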
<p>We're using <a href="https://www.spinnaker.io/reference/providers/kubernetes/" rel="noreferrer">Kubernetes 1.9</a> as our cloud provider for Spinnaker v1.6. </p> <p>In this mode, <code>halyard</code> deploys all of the Spinnaker components - <code>orca</code>, <code>rosco</code>, <code>igor</code>, etc. - as Kubernetes deployments in the <code>spinnaker</code> namespace. </p> <p>We want to add custom Kubernetes annotations to these specific Spinnaker pods owing to the way our logging solution for containers is defined. </p> <p>While we can edit these pods by hand, I was wondering if there was a way to configure Halyard to attach custom annotations on all the pods it creates. </p>
<p>While it's not documented <a href="https://www.spinnaker.io/reference/halyard/custom/#kubernetes" rel="noreferrer">here</a>, it does look like there is a <a href="https://github.com/spinnaker/halyard/blob/master/halyard-deploy/src/main/java/com/netflix/spinnaker/halyard/deploy/spinnaker/v1/service/KubernetesSettings.java#L32" rel="noreferrer">podAnnotations option</a>.</p> <p>I was able to add the file <code>~/.hal/default/service-settings/front50.yml</code> with the following config to get kube2iam annotations on the front50 pods:</p> <pre><code>kubernetes: podAnnotations: iam.amazonaws.com/role: myawsrole </code></pre>
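<p>Presumably the same pattern works for the other Spinnaker services by dropping one file per service under <code>~/.hal/default/service-settings/</code> (e.g. <code>orca.yml</code>, <code>rosco.yml</code>); the annotation key/value below is just a placeholder for whatever your logging solution expects:</p> <pre><code>kubernetes:
  podAnnotations:
    logging.example.com/scrape: "true"   # placeholder annotation for your logging agent
</code></pre> <p>After adding the file(s), re-run <code>hal deploy apply</code> so Halyard redeploys the services with the new pod annotations.</p>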
<p>Using Ubuntu 18.04.</p> <p>I am trying to install a Kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):</p> <p><a href="https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin</a></p> <p>When I run:</p> <pre><code>conjure-up kubernetes </code></pre> <p>I select the following installation:</p> <p><a href="https://i.stack.imgur.com/feNXd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/feNXd.png" alt="enter image description here"></a></p> <p>and select <code>localhost</code> for "Choose a cloud" and use the defaults for the rest of the install wizard. It then starts to install and after 30-40 minutes it completes with this error:</p> <p><a href="https://i.stack.imgur.com/cYl0A.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cYl0A.png" alt="enter image description here"></a></p> <p>Here is the log: <a href="https://pastebin.com/raw/re1UvrUU" rel="noreferrer">https://pastebin.com/raw/re1UvrUU</a></p> <p>Where one error says:</p> <pre><code>2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in &lt;Task finished coro=&lt;BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15&gt; exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)&gt; </code></pre> <p>but that does not really help much.</p> <p>Any suggestions as to why the install wizard/conjure-up fails?</p> <p>Also, based on this post:</p> <p><a href="https://github.com/conjure-up/conjure-up/issues/1308" rel="noreferrer">https://github.com/conjure-up/conjure-up/issues/1308</a></p> <p>I have tried to first disable the firewall:</p> <pre><code>sudo ufw disable </code></pre> <p>and then re-run the installation/conjure-up install wizard, but I get the same error.</p> <p>Some more details on how I installed and configured LXD/conjure-up below:</p> <pre><code>$ snap install lxd lxd 3.2 from 'canonical' installed $ /snap/bin/lxd init Would you like to use LXD clustering? (yes/no) [default=no]: Do you want to configure a new storage pool? (yes/no) [default=yes]: Name of the new storage pool [default=default]: Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]: Create a new BTRFS pool? (yes/no) [default=yes]: Would you like to use an existing block device? (yes/no) [default=no]: Size in GB of the new loop device (1GB minimum) [default=26GB]: Would you like to connect to a MAAS server? (yes/no) [default=no]: Would you like to create a new local network bridge? (yes/no) [default=yes]: What should the new bridge be called? [default=lxdbr0]: What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: Would you like LXD to be available over the network? (yes/no) [default=no]: Would you like stale cached images to be updated automatically? (yes/no) [default=yes] Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: </code></pre> <p>I configured group membership:</p> <pre><code>sudo usermod -a -G lxd $USER newgrp lxd </code></pre> <p>Next, I installed:</p> <pre><code>sudo snap install conjure-up --classic </code></pre> <p>And then ran the installation:</p> <pre><code>conjure-up kubernetes </code></pre>
<p>I wasn't able to reproduce your exact problem, but I got <code>conjure-up</code> + <code>lxd</code> installed and, in the end, Kubernetes on my newly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer could help you somehow!</p> <p>I looked through the kubernetes.io documentation page and that one lacked tiny bits of information; it does mention <code>lxd</code> but not the part with <code>lxd init</code>, which I assume you picked up in the <a href="https://docs.conjure-up.io/stable/en/user-manual" rel="nofollow noreferrer">conjure-up user manual</a>. </p> <p>So with that said, I followed the <code>conjure-up</code> user manual with some minor changes on the way. I'm assuming that it's OK for you to use the edge version of <code>conjure-up</code>; I started off with the stable one but changed to edge when testing different combinations. </p> <p>Also please ensure that you have the recommended resources available as stated by the <a href="https://docs.conjure-up.io/stable/en/user-manual" rel="nofollow noreferrer">user manual</a>; <code>conjure-up</code> and the <em>Canonical Distribution of Kubernetes</em> launch a number of containers for you. You might not need 3 x <em>etcd</em>, 3 x <em>worker</em> nodes and 2 x <em>Master</em>, and if you don't, just tune the number of containers down in the <code>conjure-up</code> wizard.</p> <p>These are the steps I performed (as my local user):</p> <ol> <li>Make sure your Ubuntu box is updated: <code>sudo apt update &amp;&amp; sudo apt upgrade</code></li> <li>Install <code>conjure-up</code> by running: <code>sudo snap install conjure-up --classic --edge</code></li> <li>Install <code>lxd</code> by running: <code>sudo snap install lxd</code></li> <li>With <code>lxd</code> comes the client part, which is <code>lxc</code>; if you run e.g. <code>lxc list</code> you should get an empty table (no containers started yet). I got a permission error at this time, so I ran the following: <code>sudo chown -R lxd:lxd /var/snap/lxd/</code> to change owner and group of the <code>lxd</code> directory containing the socket you'll be communicating with using <code>lxc</code>.</li> <li>Add your user to the <code>lxd</code> group: <code>sudo usermod -a -G lxd $USER &amp;&amp; newgrp lxd</code>, then log off and on to make this permanent and not only active in your current shell.</li> <li>Now create an <code>lxd</code> bridge manually with the following command: <code>lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false</code></li> <li>Now let's run the init part of <code>lxd</code> with <code>lxd init</code>. Remember to answer <code>no</code> when being asked to <em>create a new local network bridge?</em>; in the next prompt provide your newly created network bridge instead (<code>lxdbr1</code>). The rest of the answers to the questions can be left as default.</li> <li>Now continue with running <code>conjure-up kubernetes</code> and choose <code>localhost</code> as your type. For me the <code>localhost</code> choice was greyed out from the beginning; it worked when I created the network bridge manually and not via the <code>lxd init</code> step.</li> <li>Skip the additional components you can install like Rancher, Prometheus etc.</li> <li>Choose your new network bridge and the default storage pool, and proceed to the next step.</li> <li>In the next step customize your Kubernetes cluster if needed and then hit Deploy.
And now you wait!</li> </ol> <p>You can always troubleshoot and list all containers created with the <code>lxc</code> tool. If you've ever used Docker, the <code>lxc</code> tool feels a lot like the <code>docker</code> client.</p> <p>And finally some thoughts and observations: there are <em>a lot</em> of moving parts to <code>conjure-up</code>, as you might have seen. It's actually described as: <em>conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD.</em> </p> <p>For reference, I ended up having the following versions installed:</p> <ul> <li><code>lxd</code> version 3.3</li> <li><code>conjure-up</code> version 2.6.1</li> </ul>
<p>I have installed Apache Superset from its Helm chart in a Google Cloud Kubernetes cluster. I need to <code>pip install</code> a package that is not installed when installing the Helm chart. If I connect to the Kubernetes bash shell like this:</p> <p><code>kubectl exec -it superset-4934njn23-nsnjd /bin/bash</code></p> <p>Inside, there's no python available, no pip, and apt-get doesn't find most of the packages.</p> <p>I understand that the installed packages are listed in the image's Dockerfile. I suppose that I need to fork the Docker image, modify the Dockerfile, push the image to a container registry and make a new Helm chart that runs this image.</p> <p>But all this seems too complicated for a simple <code>pip install</code>; is there a simpler way to do this?</p> <p>Links:</p> <p>Docker - <a href="https://hub.docker.com/r/amancevice/superset/" rel="nofollow noreferrer">https://hub.docker.com/r/amancevice/superset/</a></p> <p>Helm Chart - <a href="https://github.com/helm/charts/tree/master/stable/superset" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/superset</a></p>
<p>The Dockerfile seems to install the <code>python3</code> package. Try <code>python3</code> or <code>pip3</code> inside the container instead of <code>python</code>/<code>pip</code>.</p>
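<p>If you do end up needing a package that isn't in the image, extending it is usually only a few lines rather than a full fork; the package name, image tag and registry below are placeholders:</p> <pre><code># Dockerfile
FROM amancevice/superset:latest    # consider pinning the same tag the chart uses
RUN pip3 install my-extra-package  # placeholder package name (pip3, per the note above)
</code></pre> <pre><code>docker build -t myregistry/superset-custom:0.1 .
docker push myregistry/superset-custom:0.1
</code></pre> <p>Then point the Helm release at your image - the chart's <code>values.yaml</code> should expose the image repository and tag, so an override along the lines of <code>--set image.repository=myregistry/superset-custom --set image.tag=0.1</code> during <code>helm upgrade</code> should be enough; check the chart's values for the exact keys.</p>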