<p>I'm trying to figure out how to create an array with some CIDR IP addresses in order to have the same array in my pipeline. So here is an example var file:</p> <pre><code>whitelist-ip-ranges: |- - 10.0.0.0/24 - 11.0.0.0/24 - 12.0.0.0/24 </code></pre> <p>My pipeline is like:</p> <pre><code>.... .... .... params: variables: loadBalancerSourceRanges: {{whitelist-ip-ranges}} </code></pre> <p>And I want it to be:</p> <pre><code>.... .... .... params: variables: loadBalancerSourceRanges: - 10.0.0.0/24 - 11.0.0.0/24 - 12.0.0.0/24 </code></pre> <p>or</p> <pre><code>.... .... .... params: variables: loadBalancerSourceRanges: [10.0.0.0/24,11.0.0.0/24,12.0.0.0/24] </code></pre> <p>Inside my helm template, my values.yaml file of course has:</p> <pre><code>loadBalancerSourceRanges: null </code></pre> <p>which will be overridden by the pipeline. And finally, in my service file I'm making a loop:</p> <pre><code>{{if .Values.loadBalancerSourceRanges}} loadBalancerSourceRanges: {{range $rangeList := .Values.loadBalancerSourceRanges}} - {{ $rangeList }} {{end}} {{end}} </code></pre> <p>Has any of you been able to do something like that?</p>
<p>I'm sorry, I cannot speak to anything <code>helm</code> based. I can speak for a concourse pipeline, though.</p> <p>Concourse does not support providing <code>params</code> to tasks that are an array. <code>params</code> are passed in as environment variables to a running task, so they are transformed from YAML to simple string key-value pairs.</p> <p>If you want to pass more complex information, there are two options:</p> <ul> <li>encode the param as JSON/YAML so that it can be parsed as a string from your task's environment</li> <li>provide the task an input from a resource, where a file can be provided -- for example an s3 resource with the contents of the <code>loadBalancerSourceRanges</code></li> </ul> <p>These are approaches I've used before to pass more complex data (i.e. arrays) to a task.</p>
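<p>For the first option, a minimal sketch (the variable and key names below are made up for illustration, not taken from your pipeline): keep the list as a JSON-encoded string in the var file, pass it through as a plain string param, and decode it inside the task, for example with <code>jq</code>:</p> <pre><code># var file: a plain string holding JSON
whitelist-ip-ranges: '["10.0.0.0/24","11.0.0.0/24","12.0.0.0/24"]'

# inside the task script: turn the env var back into a list
echo "$WHITELIST_IP_RANGES" | jq -r '.[]'
</code></pre>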
<p><strong>When the client tries to connect our ingress defined endpoint via a <code>wss://</code> request, the app returns 400 bad request, which according to socket.io docs is due to missing headers removed by load balancing proxies like nginx.</strong> </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress annotations: nginx.org/websocket-services: service-name spec: tls: - hosts: - www.myhost.com rules: - host: www.myhost.com http: paths: - path: / backend: serviceName: service-name servicePort: 80 </code></pre> <p>From the logs in the IBM provided ingress controller it seems to be a fork of this <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">nginx ingress controller</a>. Which says that the annotation <code>nginx.org/websocket-services</code> adds support for websockets by adding directives to the generated nginx conf to pass the required headers. We have tried this as per above but to no avail. </p> <p>Has anyone had any success making this annotation work? Any workarounds for adding to the generated nginx conf? </p> <p>Any IBM people know if this functionality was intentionally removed from the fork? And if there is any way to add support for websockets in the IBM version of Kubernetes? </p>
<p>Websockets are not currently supported. We are working on adding support and I will update here when it is available.</p> <p>Thank you</p> <p>Edit: Websocket support is now available in all regions; the annotation for it is:</p> <pre><code> annotations: ingress.bluemix.net/websocket-services: service-name </code></pre>
<p>I am new to kubernetes. I have an issue with one of the pods. When I run the command</p> <pre><code> kubectl get pods </code></pre> <p>Result:</p> <pre><code>NAME READY STATUS RESTARTS AGE mysql-apim-db-1viwg 1/1 Running 1 20h mysql-govdb-qioee 1/1 Running 1 20h mysql-userdb-l8q8c 1/1 Running 0 20h wso2am-default-813fy 0/1 ImagePullBackOff 0 20h </code></pre> <p>Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions? </p>
<p>In case you do not have the yaml file:</p> <p><code>kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -</code></p>
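<p>Alternatively, if the pod is managed by a replication controller or deployment (which the generated-looking suffixes such as <code>-813fy</code> suggest), simply deleting it will make the controller schedule a fresh replacement:</p> <pre><code>kubectl delete pod wso2am-default-813fy
# the controller notices the missing replica and creates a new pod
kubectl get pods
</code></pre>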
<p>When creating/adding a node to kubernetes, we also have to create a Canal pod.</p> <p>Currently, kubernetes does not wait for the Canal pod to be ready before trying to schedule pods, resulting in failures (error below)</p> <pre><code>Error syncing pod, skipping: failed to "CreatePodSandbox" for "nginx-2883150634-fh5s2_default(385d61d6-6662-11e7-8989-000d3af349de)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-2883150634-fh5s2_default(385d61d6-6662-11e7-8989-000d3af349de)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"nginx-2883150634-fh5s2_default\" network: failed to find plugin \"loopback\" in path [/opt/loopback/bin /opt/cni/bin]" </code></pre> <p>Once the Canal pod is up-and-running, simply deleting the failing pod(s) will fix the issue.</p> <p>My question is: what would be the right way to tell kubernetes to wait for the network pod to be ready before trying to schedule pods on the node?</p> <ul> <li>Should I taint the node to only allow Canal, and untaint once it is ready?</li> <li>Should I script the deleting of failed pods once Canal is ready?</li> <li>Is there a configuration or a way to do it that eliminate the issue?</li> </ul>
<p>This is a common issue, so I'll post the answer anyway.</p> <p>The behaviour is normal, especially in a self-hosted k8s cluster. In a self-hosted environment, all deployments, including the control plane elements (e.g. kube-apiserver, canal), are scheduled at the same time.</p> <p>The failed pods should eventually start properly once the control plane is running. k8s will keep restarting failed pods until they come up properly.</p> <p>To make Canal start first, the manifest can be deployed on the k8s node together with the other control plane manifests (e.g. kube-apiserver, kube-controller-manager). It's usually found in <code>/etc/kubernetes/manifests</code> but the path is completely arbitrary. However, if Canal takes too long to be ready, the same error will appear.</p>
<p>My Dockerfile contains default environment variables for development and testing scenarios:</p> <pre><code>ENV mysql_host=mysql \ mysql_user=app \ mysql_password=password \ </code></pre> <p>and my k8s yaml contains an env directive: </p> <pre><code>spec: containers: env: - name: "mysql_password" value: "someotherpassword" name: "mysql_host" value: "someotherhost" name: "mysql_user" value: "someotheruser" </code></pre> <p>but when I exec into my running container with </p> <pre><code>kubctl exec -it service -- /bin/bash </code></pre> <p>I'm still seeing "password" as mysql_password .</p> <p>edit: added more from the k8s yaml for completeness.</p>
<p>You need to declare all the variables in the correct yaml format. In this example, <code>mysql_password</code> will override the value baked into the image, while <code>mysql_password2</code> is not declared in the image already, so it will simply be added:</p> <pre><code>spec: containers: env: - name: mysql_password value: &quot;someotherpassword&quot; - name: mysql_password2 value: &quot;password&quot; - name: override_password value: password3 </code></pre> <p>In other words, each entry must be its own list item starting with a &quot;-&quot;; if the variable already exists in the image it is overridden, otherwise it is simply added to the container.</p>
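<p>Once the yaml is fixed and re-applied, a quick way to double-check what actually ends up in the container (the pod name is a placeholder here):</p> <pre><code>kubectl exec -it &lt;pod-name&gt; -- env | grep -i mysql
</code></pre>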
<pre><code>restTemplate.postForEntity(url,entity, String.class); ResponseEntity&lt;String&gt; response = restTemplate.exchange(url, HttpMethod.POST, entity, String.class); </code></pre> <p>This throws a null pointer exception when trying to create an object.</p> <p>I checked <code>entity</code> and <code>url</code>; both are printed in the logger message. But at this line it throws a null pointer exception, even though the object still gets created.</p> <p>If the object is getting created, how can this throw a null pointer exception?</p> <p>I am using kubernetes; when I check the command line in kubernetes it says the object got created, but in the logs it shows a null pointer exception.</p>
<p>The problem seems to be that you are executing the request <strong>two times</strong>.</p> <pre><code>restTemplate.postForEntity(url, entity, String.class); ResponseEntity&lt;String&gt; response = restTemplate.exchange(url, HttpMethod.POST, entity, String.class); </code></pre> <p>Both <code>postForEntity</code> and <code>exchange</code> send a POST request to your <code>url</code>. <code>postForEntity</code> can be seen as a specific case of the <code>exchange</code> method. See the <a href="https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html#postForEntity-java.net.URI-java.lang.Object-java.lang.Class-" rel="nofollow noreferrer">documentation</a>.</p> <p>Please use only one of them, for example: </p> <pre><code>ResponseEntity&lt;String&gt; response = restTemplate.postForEntity(url, entity, String.class); </code></pre>
<p>I want to find out the Linux flavor running on the VM created using minikube-kubernetes. I log in to the VM and run <code>cat /proc/version</code>, which gives <code>Linux version 4.9.13 gcc version 5.4.0 (Buildroot 2017.02)</code>. Can someone tell me which flavor this is? It's obviously not Ubuntu, as none of the usual commands (man, apt-get) work. </p>
<pre><code>cat /etc/*release </code></pre> <p>This works across most distributions.</p>
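<p>On minikube you can check this from inside the VM; on the minikube ISO the release file should report Buildroot (an assumption about the image, but it matches the kernel string above):</p> <pre><code>minikube ssh
cat /etc/os-release
</code></pre>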
<p>I'm trying to create a Kubernetes scheduled job, however, I noticed that:</p> <ul> <li>On Kubernetes versions >= v1.4 it's called <em>ScheduledJob</em> (<a href="http://janetkuo.github.io/docs/user-guide/scheduled-jobs/" rel="nofollow noreferrer">http://janetkuo.github.io/docs/user-guide/scheduled-jobs/</a>)</li> <li>On Kubernetes versions >= v1.5 it's called <em>CronJob</em> (<a href="http://kubernetes.io/docs/user-guide/cron-jobs/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/cron-jobs/</a>)</li> </ul> <p>The default Kubernetes version running on Google Container Engine is v1.4.6, which means I should use <em>ScheduledJob</em> objects.</p> <p>The problem is that <em>ScheduledJob</em> uses the <em>batch/v2alpha1</em> API version, which isn't enabled on my cluster, so the job creation fails; on the other hand, a new cluster created with Alpha Features enabled will only last for 30 days (Google automatically deletes it afterward).</p> <p>Is there any production-ready solution to schedule jobs on Google Container Engine?</p> <p>Thanks,</p> <p>Idan</p> <hr> <p><strong>edit:</strong></p> <p>Below is the official response from Google Support:</p> <blockquote> <p>As you’ve noticed, the scheduled jobs/cron jobs feature is currently in alpha.</p> <p>We realize this is a much-requested feature and are working to get it production-ready in the future. Until then, there is unfortunately no supported feature I can recommend for production.</p> </blockquote>
<p>There is a milestone to migrate CronJobs to Beta in v 1.8 that can be tracked <a href="https://github.com/kubernetes/kubernetes/issues/41039" rel="nofollow noreferrer">here</a>.</p>
<p>I want to find out the Linux flavor running on the VM created using minikube-kubernetes. I log in to the VM and run <code>cat /proc/version</code>, which gives <code>Linux version 4.9.13 gcc version 5.4.0 (Buildroot 2017.02)</code>. Can someone tell me which flavor this is? It's obviously not Ubuntu, as none of the usual commands (man, apt-get) work. </p>
<p>The minikube distro is custom built using buildroot. It is meant to be a minimal distro and does not include a package manager or package repository. </p> <ul> <li><a href="https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso" rel="noreferrer">https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso</a></li> <li><a href="https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md" rel="noreferrer">https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md</a></li> </ul>
<p>I am quite confused about the roles of Ingress and Load Balancer in Kubernetes.</p> <p>As far as I understand Ingress is used to map incoming traffic from the internet to the services running in the cluster.</p> <p>The role of load balancer is to forward traffic to a host. In that regard how does ingress differ from load balancer? Also what is the concept of load balancer inside kubernetes as compared to Amazon ELB and ALB?</p>
<p><strong>Load Balancer:</strong> A kubernetes LoadBalancer service is a service that points to external load balancers that are NOT in your kubernetes cluster, but exist elsewhere. They can work with your pods, assuming that your pods are externally routable. Google and AWS provide this capability natively. In terms of Amazon, this maps directly with ELB and kubernetes when running in AWS can automatically provision and configure an ELB instance for each LoadBalancer service deployed.</p> <p><strong>Ingress:</strong> An ingress is really just a set of rules to pass to a controller that is listening for them. You can deploy a bunch of ingress rules, but nothing will happen unless you have a controller that can process them. A LoadBalancer service could listen for ingress rules, if it is configured to do so.</p> <p>You can also create a <strong>NodePort</strong> service, which has an externally routable IP outside the cluster, but points to a pod that exists within your cluster. This could be an Ingress Controller.</p> <p>An Ingress Controller is simply a pod that is configured to interpret ingress rules. One of the most popular ingress controllers supported by kubernetes is nginx. In terms of Amazon, ALB <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx" rel="noreferrer">can be used</a> as an ingress controller.</p> <p>For an example, <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx" rel="noreferrer">this</a> nginx controller is able to ingest ingress rules you have defined and translate them to an nginx.conf file that it loads and starts in its pod.</p> <p>Let's for instance say you defined an ingress as follows:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/rewrite-target: / name: web-ingress spec: rules: - host: kubernetes.foo.bar http: paths: - backend: serviceName: appsvc servicePort: 80 path: /app </code></pre> <p>If you then inspect your nginx controller pod you'll see the following rule defined in <code>/etc/nginx.conf</code>:</p> <pre><code>server { server_name kubernetes.foo.bar; listen 80; listen [::]:80; set $proxy_upstream_name &quot;-&quot;; location ~* ^/web2\/?(?&lt;baseuri&gt;.*) { set $proxy_upstream_name &quot;apps-web2svc-8080&quot;; port_in_redirect off; client_max_body_size &quot;1m&quot;; proxy_set_header Host $best_http_host; # Pass the extracted client certificate to the backend # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Real-IP $the_real_ip; proxy_set_header X-Forwarded-For $the_x_forwarded_for; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Original-URI $request_uri; proxy_set_header X-Scheme $pass_access_scheme; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy &quot;&quot;; # Custom headers proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_buffer_size &quot;4k&quot;; proxy_buffers 4 &quot;4k&quot;; proxy_http_version 1.1; proxy_cookie_domain off; proxy_cookie_path off; rewrite /app/(.*) /$1 break; rewrite /app / break; proxy_pass http://apps-appsvc-8080; } </code></pre> <p>Nginx has just created a rule to route <code>http://kubernetes.foo.bar/app</code> to point to the service 
<code>appsvc</code> in your cluster.</p> <p>Here is <a href="https://crondev.com/kubernetes-nginx-ingress-controller/" rel="noreferrer">an example</a> of how to implement a kubernetes cluster with an nginx ingress controller.</p>
<p>I have a Kubernetes cluster running in AWS. I used <code>kops</code> to set up and start the cluster. </p> <p>I defined a minimum and maximum number of nodes in the nodes instance group: </p> <pre><code>apiVersion: kops/v1alpha2 kind: InstanceGroup metadata: creationTimestamp: 2017-07-03T15:37:59Z labels: kops.k8s.io/cluster: k8s.tst.test-cluster.com name: nodes spec: image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02 machineType: t2.large maxSize: 7 minSize: 5 role: Node subnets: - eu-central-1b </code></pre> <p>Currently the cluster has 5 nodes running. After some deployments in the cluster, pods/containers cannot start because there are no nodes available with enough resources. </p> <p>So I thought that, when there is a resource problem, k8s would automatically scale the cluster and start more nodes, since the maximum number of nodes is 7.</p> <p>Am I missing any configuration? </p> <p><strong>UPDATE</strong></p> <p>As @kichik mentioned, the autoscaler addon is already installed. Nevertheless, it doesn't work. Kube-dns is also often restarting because of resource problems. </p>
<p>Someone opened a <a href="https://github.com/kubernetes/kops/issues/341" rel="nofollow noreferrer">ticket for this on GitHub</a> and it suggests you have to install the <a href="https://github.com/kubernetes/kops/tree/master/addons/cluster-autoscaler" rel="nofollow noreferrer">autoscaler addon</a>. Check if it's already installed with:</p> <pre><code>kubectl get deployments --namespace kube-system | grep autoscaler </code></pre> <p>If it's not, you can install it with the following script. Make sure <code>AWS_REGION</code>, <code>GROUP_NAME</code>, <code>MIN_NODES</code> and <code>MAX_NODES</code> have the right values.</p> <pre><code>CLOUD_PROVIDER=aws IMAGE=gcr.io/google_containers/cluster-autoscaler:v0.5.4 MIN_NODES=5 MAX_NODES=7 AWS_REGION=us-east-1 GROUP_NAME="nodes.k8s.example.com" SSL_CERT_PATH="/etc/ssl/certs/ca-certificates.crt" # (/etc/ssl/certs for gce) addon=cluster-autoscaler.yml wget -O ${addon} https://raw.githubusercontent.com/kubernetes/kops/master/addons/cluster-autoscaler/v1.6.0.yaml sed -i -e "s@{{CLOUD_PROVIDER}}@${CLOUD_PROVIDER}@g" "${addon}" sed -i -e "s@{{IMAGE}}@${IMAGE}@g" "${addon}" sed -i -e "s@{{MIN_NODES}}@${MIN_NODES}@g" "${addon}" sed -i -e "s@{{MAX_NODES}}@${MAX_NODES}@g" "${addon}" sed -i -e "s@{{GROUP_NAME}}@${GROUP_NAME}@g" "${addon}" sed -i -e "s@{{AWS_REGION}}@${AWS_REGION}@g" "${addon}" sed -i -e "s@{{SSL_CERT_PATH}}@${SSL_CERT_PATH}@g" "${addon}" kubectl apply -f ${addon} </code></pre>
<p>I am trying to run Vitess on Minikube and I'm going through the 'Getting Started' steps found here: <a href="http://vitess.io/getting-started/#set-up-google-compute-engine-container-engine-and-cloud-tools" rel="nofollow noreferrer">http://vitess.io/getting-started/#set-up-google-compute-engine-container-engine-and-cloud-tools</a></p> <p>I have installed everything I need to including 'vtctlclient'. I have verified that all the correct directories were created when I did this.</p> <p>However, there is a script in my directory '/go/src/github.com/youtube/vitess/examples/kubernetes' called 'kvtctl.sh' which uses kubectl to discover the pod name and set up the tunnel and then runs 'vtctlclient'. When I run this script, this is what is returned:</p> <p>'Starting port forwarding to vtctld...<br> ./kvtctl.sh: line 29: vtctlclient: command not found'</p> <p>I am totally lost as to why the vtctlclient command is not found because I just installed it using Go. Any help on this matter would be much appreciated.</p>
<p>Maybe the go install directory is not in your path. Have you tried running vtctlclient manually (just like kvtctl.sh does)?</p> <p>PS: You may want to join our Vitess Slack channel where you may get more prompt answers for your questions. Let me know if you need an invite.</p>
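<p>If the missing PATH entry is indeed the problem, a typical fix (assuming a default Go workspace layout) is to put the Go binary directory on your PATH and re-run the script:</p> <pre><code># add Go-installed binaries to the PATH (adjust if you use a custom GOPATH)
export PATH=$PATH:$(go env GOPATH)/bin
which vtctlclient
./kvtctl.sh help
</code></pre>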
<p>I have installed kubernetes trial version with minikube on my desktop running ubuntu. However there seem to be some issue with bringing up the pods. Kubectl get pods --all-namespaces shows all the pods in ContainerCreating state and it doesn't shift to Ready.</p> <p>Even when i do a kubernetes-dahboard, i get</p> <blockquote> <p>Waiting, endpoint for service is not ready yet.</p> </blockquote> <p>Minikube version : v0.20.0</p> <p>Environment:</p> <ul> <li><p>OS (e.g. from /etc/os-release): Ubuntu 12.04.5 LTS</p> <p>VM Driver "DriverName": "virtualbox"</p> <p>ISO version "Boot2DockerURL": "file:///home/nszig/.minikube/cache/iso/minikube-v0.20.0.iso"</p></li> </ul> <p>I have installed minikube and kubectl on Ubuntu. However i cannot access the dashboard both through the CLI and through the GUI.</p> <p><a href="http://127.0.0.1:8001/ui" rel="nofollow noreferrer">http://127.0.0.1:8001/ui</a> give the below error </p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "no endpoints available for service "kubernetes-dashboard"", "reason": "ServiceUnavailable", "code": 503 } </code></pre> <p>And minikube dashboard on the CLI does not open the dashboard: Output </p> <pre><code>Waiting, endpoint for service is not ready yet... Waiting, endpoint for service is not ready yet... Waiting, endpoint for service is not ready yet... Waiting, endpoint for service is not ready yet... ....... Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet Temporary Error: Endpoint for service is not ready yet Temporary Error: Endpoint for service is not ready yet Temporary Error: Endpoint for service is not ready yet </code></pre> <p>kubectl version: <code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"dirty", BuildDate:"2017-06-22T04:31:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}</code></p> <p>minikube logs also reports the errors below: .....</p> <pre><code>Jul 10 08:46:12 minikube localkube[3237]: I0710 08:46:12.901880 3237 kuberuntime_manager.go:458] Container {Name:php-redis Image:gcr.io/google-samples/gb-frontend:v4 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:GET_HOSTS_FROM Value:dns ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:} s:100m Format:DecimalSI} memory:{i:{value:104857600 scale:0} d:{Dec:} s:100Mi Format:BinarySI}]} VolumeMounts:[{Name:default-token-gqtvf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Jul 10 08:46:14 minikube localkube[3237]: E0710 08:46:14.139555 3237 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority .... </code></pre> <blockquote> <p>Name: kubernetes-dashboard-2039414953-czptd Namespace: kube-system Node: minikube/192.168.99.102 Start Time: Fri, 14 Jul 2017 09:31:58 +0530 Labels: k8s-app=kubernetes-dashboard pod-template-hash=2039414953 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-2039414953","uid":"2eb39682-6849-11e7-8... Status: Pending IP: Created By: ReplicaSet/kubernetes-dashboard-2039414953 Controlled By: ReplicaSet/kubernetes-dashboard-2039414953 Containers:<br> kubernetes-dashboard: Container ID: Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 Image ID:<br> Port: 9090/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Liveness: http-get <a href="http://:9090/" rel="nofollow noreferrer">http://:9090/</a> delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-12gdj (ro) Conditions: Type Status<br> Initialized True Ready False PodScheduled True Volumes:<br> kubernetes-dashboard-token-12gdj: Type: Secret (a volume populated by a Secret) SecretName: kubernetes-dashboard-token-12gdj Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node-role.kubernetes.io/master:NoSchedule Events:<br> FirstSeen LastSeen Count From SubObjectPath Type Reason Message<br> --------- -------- ----- ---- ------------- -------- ------ ------- 1h 11s 443 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-2039414953-czptd_kube-system(2eb57d9b-6849-11e7-8a56-080027206461)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-2039414953-czptd_kube-system(2eb57d9b-6849-11e7-8a56-080027206461)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get <a href="https://gcr.io/v1/_ping" rel="nofollow noreferrer">https://gcr.io/v1/_ping</a>: x509: certificate signed by unknown authority"</p> </blockquote>
<p>It's quite possible that the Pod container images are being downloaded. The images are not very large so the images should get downloaded pretty quickly on a decent internet connection.</p> <p>You can use <code>kubectl describe pod --namespace kube-system &lt;pod-name&gt;</code> to know more details on the pod bring up status. Take a look at the <code>Events</code> section of the output.</p> <p>Until all the kubernetes components in the <code>kube-system</code> namespace are in <code>READY</code> state, you will not be able to access the dashboard.</p> <p>You can also try <code>SSH</code>'ing into the minikube vm with <code>minikube ssh</code> to debug the issue.</p>
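<p>Given the <code>x509: certificate signed by unknown authority</code> errors in your logs, it is also worth checking whether the VM can reach <code>gcr.io</code> at all; a TLS-intercepting corporate proxy, for example, would produce exactly that error. A rough check from inside the VM:</p> <pre><code>minikube ssh
# inside the VM, try pulling the image the kubelet is failing on
docker pull gcr.io/google_containers/pause-amd64:3.0
</code></pre>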
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="noreferrer">Kubernetes documentation</a> on setting environment variables of a container only include examples of new environment variables.</p> <p>This approach does not work when I try to extend an existing environment variable PATH:</p> <pre><code>kind: Pod apiVersion: v1 spec: containers: - name: blah image: blah env: - name: PATH value: "$PATH:/usr/local/nvidia/bin" </code></pre> <p>The created pod keeps crashing with</p> <pre><code>BackOff Back-off restarting failed container FailedSync Error syncing pod </code></pre> <p>Any recommendations as to how I could extend the PATH environment variable?</p>
<p>If you only need this path declaration for the command you are running with, you can add it to <code>containers</code> section, under <code>args</code></p> <p>Example:</p> <pre><code>spec: containers: - name: blah image: blah args: - PATH="$PATH:/usr/local/nvidia/bin" blah </code></pre> <p>If you do not have args specified in your yaml, you probably have a CMD specified in your Dockerfile that will just run your container with the command automatically. Thus you can add the following to your Dockerfile.</p> <pre><code>CMD ["PATH=$PATH:/usr/local/nvidia/bin", "blah"] </code></pre> <p>If you want this to be in your container in general, you would have to add to the .profile or .bashrc file of the user within the container you are using. This will probably involve creating a new image with these new files baked in.</p>
<p>I have two machines within my network which I want to communicate with from the pod. </p> <p>IPs are as follows : </p> <pre><code>10.0.1.23 - Lets call it X 13.0.1.12 - Lets call it Y </code></pre> <p>When I ssh into the master node or agent node and then do a ping to X or Y, the ping is successful. Therefore the machines are reachable. </p> <p>Now I create a deployment and log into the shell of the pod using (<code>kubectl exec -it POD_NAME -- /bin/sh</code>). </p> <p>Ping to Y is successful. But ping to X fails. </p> <p>CIDR details : </p> <pre><code>Master Node : 14.1.255.0/24 Agent Node: 14.2.0.0/16 Pod CIDR: Agent : 10.244.1.0/24 Master: 10.244.0.0/24 </code></pre> <p>My understanding on what could be the issue : </p> <blockquote> <p>acs-engine has kube-proxy set up the service network with 10.0.0.0/16. If this is the problem, how do I change the kube-proxy CIDR?</p> </blockquote> <p>Additional Info: </p> <p>I am using <em>acs-engine</em> for my deployment of the cluster.</p> <p>Output for <code>ip route</code> </p> <p><code>default via 10.244.1.1 dev eth0 10.244.1.0/24 dev eth0 src 10.244.1.13</code></p> <p>Another suspect: On running <code>iptables-save</code> I see </p> <p><code>-A POSTROUTING ! -d 10.0.0.0/8 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE </code></p>
<p>Based on your question, it sounds like you've added another subnet to the k8 Virtual Network that gets deployed with the ACS Kubernetes cluster.</p> <p>As it turns out, I ran into this <em>exact</em> same problem in our project. Azure Container Services uses very specific routing rules for the agent nodes. When the k8 cluster is deployed, they create a Route Table resource in the same resource group as all your cluster entities. So, if you...</p> <ol> <li>Open the k8 Route Table in the Azure Portal</li> <li>Go to the Subnets section</li> <li>+Associate with the subnet that your other VMs/PaaS services are in</li> </ol> <p>...this will create the routes that the k8 agents are looking for when routing the outbound Pod container traffic.</p>
<p>I'm learning Kubernetes. I'm trying to map equivalent concepts between Docker and Kubernetes. For example, I have the following <code>docker-compose.yml</code>:</p> <pre><code>db: container_name: db image: postgres:latest ports: - "5432:5432" environment: POSTGRES_PASSWORD: 1234 app: container_name: app build: . dockerfile: Dockerfile ports: - "3000:3000" volumes: - ".:/webapp" env_file: - ./.env.docker_compose links: - db </code></pre> <p>This <code>docker-compose.yml</code> has two components: db and app. There are two ways I can understand this, and I don't know which one is correct.</p> <ul> <li>First understanding: each component runs in its own pod, i.e. db in one pod and app in another pod.</li> <li>Second understanding: both db and app are in one Pod.</li> </ul> <p>Please tell me which one is correct. The same question applies to <code>Deployment</code> and <code>Service</code>: how do they map to Docker concepts?</p> <p>Thanks</p>
<p>A pod being a group of one or more containers, your docker-compose file would by default mimic one pod (so your second interpretation).</p> <p>But with <a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">docker swarm mode</a>, you can make sure those two containers are each in their own "pod" (as a group of one container) with <a href="https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint" rel="nofollow noreferrer">constraints</a>.<br> With compose file version 3, you have for instance <a href="https://docs.docker.com/compose/compose-file/#placement" rel="nofollow noreferrer"><strong>placement</strong></a> (also seen in <a href="https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref" rel="nofollow noreferrer"><code>docker service create</code></a>)</p> <pre><code>version: '3' services: db: image: postgres deploy: placement: constraints: - node.role == manager - engine.labels.operatingsystem == ubuntu 14.04 </code></pre>
<p>I have some previously run pods that I think were killed by Kubernetes for OOM or DEADLINE EXCEEDED, what's the most reliable way to confirm that? Especially if the pods weren't recent. </p>
<p>If the pods are still showing up when you type <code>kubectl get pods -a</code> then you can type the following <code>kubectl describe pod PODNAME</code> and look at the reason for termination. The output will look similar to the following (I have extracted the parts of the output that are relevant to this discussion):</p> <pre><code>Containers: somename: Container ID: docker://5f0d9e4c8e0510189f5f209cb09de27b7b114032cc94db0130a9edca59560c11 Image: ubuntu:latest ... State: Terminated Reason: Completed Exit Code: 0 </code></pre> <p>In the sample output above, my pod's termination reason is <code>Completed</code>, but you will see other reasons there, such as <code>OOMKilled</code>. </p>
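<p>If you only want the reason itself, a <code>jsonpath</code> one-liner can pull it straight out of the pod status (the pod name and container index are placeholders; depending on whether the container was restarted, the reason may live under <code>.state.terminated</code> instead of <code>.lastState.terminated</code>):</p> <pre><code>kubectl get pod PODNAME -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
</code></pre>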
<p>Without any knows changes in our Kubernetes 1.6 cluster all new or restarted pods are not scheduled anymore. The error I get is:</p> <pre><code>No nodes are available that match all of the following predicates:: MatchInterPodAffinity (10), PodToleratesNodeTaints (2). </code></pre> <p>Our cluster was working perfectly before and I really cannot see any configuration changes that have been made before that occured.</p> <p>Things I already tried:</p> <ul> <li>restarting the master node</li> <li>restarting kube-scheduler</li> <li>deleting affected pods, deployments, stateful sets</li> </ul> <p>Some of the pods do have anti-affinity settings that worked before, but most pods do not have any affinity settings.</p> <p>Cluster Infos:</p> <ul> <li>Kubernetes 1.6.2</li> <li>Kops on AWS</li> <li>1 master, 8 main-nodes, 1 tainted data processing node</li> </ul> <p>Is there any known cause to this? </p> <p>What are settings and logs I could check that could give more insight? </p> <p>Is there any possibility to debug the scheduler?</p>
<p>The problem was that a Pod got stuck in deletion. That caused kube-controller-manager to stop working.</p> <p>Deletion didn't work because the Pod/RS/Deployment in question had limits that conflicted with the maxLimitRequestRatio that we had set after the creation. A bug report is on the way.</p> <p>The solution was to increase maxLimitRequestRatio and eventually restart kube-controller-manager.</p>
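<p>For anyone hitting something similar: a quick way to spot pods stuck in deletion, and to force them out as a last resort (standard kubectl flags, use with care), is:</p> <pre><code># list pods stuck in Terminating across all namespaces
kubectl get pods --all-namespaces | grep Terminating

# force removal of a stuck pod
kubectl delete pod &lt;pod-name&gt; -n &lt;namespace&gt; --grace-period=0 --force
</code></pre>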
<p>Kubernetes by default adds a <code>kubernetes</code> service in the default namesapce. This allows access to the kubernetes API from any pod in that namespace.</p> <p>For example, I can </p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) kubectl exec -it $SOME_POD -- bash curl -v https://kubernetes/api/v1/ \ -H "Authorization: Bearer $TOKEN" \ -k -v </code></pre> <p>And get something like:</p> <pre><code>&lt; HTTP/1.1 200 OK &lt; Content-Type: application/json &lt; Date: Sat, 15 Jul 2017 22:16:09 GMT &lt; Transfer-Encoding: chunked &lt; { "kind": "APIResourceList", "groupVersion": "v1", "resources": [ { "name": "bindings", "namespaced": true, ... </code></pre> <p>If I create a new namespace, there is no <code>kubernetes</code> service by default. I'm trying to create one by using the following resource service definition (basically copied from the default namespace): </p> <pre><code>apiVersion: v1 kind: Service metadata: labels: component: apiserver provider: kubernetes name: kubernetes resourceVersion: "12" spec: ports: - name: https port: 443 protocol: TCP targetPort: 443 sessionAffinity: ClientIP type: ClusterIP </code></pre> <p>But this doesn't seem to work since there seems to be tied to any pod.</p> <p>Basically, I want the same behavior as in the <code>default</code> namespace in a newly created namespace.</p> <p>Current Version:</p> <pre><code>{ "major": "1", "minor": "5", "gitVersion": "v1.5.7", "gitCommit": "8eb75a5810cba92ccad845ca360cf924f2385881", "gitTreeState": "clean", "buildDate": "2017-04-27T09:42:05Z", "goVersion": "go1.7.5", "compiler": "gc", "platform": "linux/amd64" } </code></pre>
<p>You can access the <code>kubernetes</code> service from other namespaces by qualifying the hostname:</p> <p>The hostnames <code>kubernetes.default.svc</code>, <code>kubernetes.default.svc.cluster.local</code>, and the IP contained in <code>$KUBERNETES_SERVICE_HOST</code> will all resolve to the kubernetes API service from any namespace.</p> <p>Following your example, you could do this from any namespace:</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) kubectl exec -it $SOME_POD -- bash curl -v https://kubernetes.default.svc/api/v1/ \ -H "Authorization: Bearer $TOKEN" -k -v </code></pre>
<p>In most examples about using secrets in Kubernetes, you can find similar examples:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: username: User password: ********** </code></pre> <p>What is the purpose of <code>type: Opaque</code> in the definition above? What other types (and for which use cases) are possible to specify there?</p>
<p><code>type: Opaque</code> means that from kubernetes's point of view the contents of this Secret are unstructured; it can contain arbitrary key-value pairs.</p> <p>In contrast, there are Secrets storing <code>ServiceAccount</code> credentials, or the ones used as an <code>ImagePullSecret</code>. These have constrained contents.</p>
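<p>A couple of the built-in, non-Opaque types you are likely to run into, as a rough sketch (the values under <code>data</code> are placeholders and must be base64-encoded):</p> <pre><code># pull credentials for a private registry
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-creds
type: kubernetes.io/dockercfg
data:
  .dockercfg: &lt;base64-encoded docker config&gt;
---
# TLS key pair, e.g. referenced from an Ingress
apiVersion: v1
kind: Secret
metadata:
  name: my-tls
type: kubernetes.io/tls
data:
  tls.crt: &lt;base64-encoded certificate&gt;
  tls.key: &lt;base64-encoded key&gt;
</code></pre>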
<p>I am using docker service and kubernetes for container orchestration.</p> <p>I would like to have all the volumes mounted from the host with the option nosuid.</p> <p>Emptydir volumes could also live without suid. The only question is whether kubernetes supports specifying such mount options or if they can be handled somehow else.</p> <pre><code> findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/vda1 ext4 rw,noatime,seclabel,data=ordered β”œβ”€/sys sysfs sysfs rw,relatime,seclabel β”‚ β”œβ”€/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime β”‚ β”œβ”€/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,seclabel,mode=755 β”‚ β”‚ β”œβ”€/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/sy ... β”œβ”€/var/lib/kubelet/pods/05f79fe8-3fab-11e7-8c7b-d00d8969ec73/volumes/kubernetes.io~secret/default-token-lnpbh tmpfs tmpfs rw,relatime,seclabel β”œβ”€/var/lib/kubelet/pods/0911e563-3fab-11e7-8c7b-d00d8969ec73/volumes/kubernetes.io~secret/default-token-lnpbh tmpfs tmpfs rw,relatime,seclabel β”œβ”€/var/lib/kubelet/pods/b550adbd-3fbf-11e7-8c7b-d00d8969ec73/volumes/kubernetes.io~empty-dir/data tmpfs tmpfs rw,relatime,seclabel </code></pre> <p>Related issue from kubernetes: <a href="https://github.com/kubernetes/kubernetes/issues/48912" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/48912</a></p>
<p>You are correct that <code>EmptyDir</code> volumes can live without <code>suid</code>, but, as of now, there is no way to specify <code>nosuid</code>-style mount options in Kubernetes Volume manifests.</p>
<p>I've been struggling with this for quite a while now. My effort so far is shown below. The env variable, <code>CASSANDRA_AUTHENTICATOR</code>, in my opinion, is supposed to enable password authentication. However, I'm still able to logon without a password after redeploying with this config. Any ideas on how to enable password authentication in a Kubernetes deployment file?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cassandra spec: replicas: 1 template: metadata: labels: app: cassandra spec: containers: - name: cassandra image: cassandra env: - name: CASSANDRA_CLUSTER_NAME value: Cassandra - name: CASSANDRA_AUTHENTICATOR value: PasswordAuthenticator ports: - containerPort: 7000 name: intra-node - containerPort: 7001 name: tls-intra-node - containerPort: 7199 name: jmx - containerPort: 9042 name: cql volumeMounts: - mountPath: /var/lib/cassandra/data name: data volumes: - name: data emptyDir: {} </code></pre> <p>The environment is Google Cloud Platform.</p>
<p>So I made few changes to the artifact you have mentioned:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cassandra spec: replicas: 1 template: metadata: labels: app: cassandra spec: containers: - name: cassandra image: bitnami/cassandra:latest env: - name: CASSANDRA_CLUSTER_NAME value: Cassandra - name: CASSANDRA_PASSWORD value: pass123 ports: - containerPort: 7000 name: intra-node - containerPort: 7001 name: tls-intra-node - containerPort: 7199 name: jmx - containerPort: 9042 name: cql volumeMounts: - mountPath: /var/lib/cassandra/data name: data volumes: - name: data emptyDir: {} </code></pre> <p>The changes I made were:</p> <p><code>image</code> name has been changed to <code>bitnami/cassandra:latest</code> and then replaced the <code>env</code> <code>CASSANDRA_AUTHENTICATOR</code> with <code>CASSANDRA_PASSWORD</code>.</p> <p>After you deploy the above artifact then I could authenticate as shown below</p> <ul> <li><p>Trying to exec into pod</p> <pre><code>fedora@dhcp35-42:~/tmp/cassandra$ oc exec -it cassandra-2750650372-g8l9s bash root@cassandra-2750650372-g8l9s:/# </code></pre></li> <li><p>Once inside the pod trying to authenticate with the server</p> <pre><code>root@cassandra-2750650372-g8l9s:/# cqlsh 127.0.0.1 9042 -p pass123 -u cassandra Connected to Cassandra at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4] Use HELP for help. cassandra@cqlsh&gt; </code></pre></li> </ul> <p>This image documentation can be found at <a href="https://hub.docker.com/r/bitnami/cassandra/" rel="nofollow noreferrer">https://hub.docker.com/r/bitnami/cassandra/</a></p> <p>If you are not comfortable using the third party image and wanna use the image that upstream community manages then look for following solution, which is more DIY but also is more flexible.</p> <hr> <p>To setup the password you were trying to use the <code>env</code> <code>CASSANDRA_AUTHENTICATOR</code> but this is not merged proposal yet for the image <code>cassandra</code>. 
You can see the open PRs <a href="https://github.com/docker-library/cassandra/pull/41#issuecomment-174004501" rel="nofollow noreferrer">here</a>.</p> <p>Right now the upstream suggest doing the mount of file <a href="https://raw.githubusercontent.com/apache/cassandra/trunk/conf/cassandra.yaml" rel="nofollow noreferrer"><code>cassandra.yaml</code></a> at <code>/etc/cassandra/cassandra.yaml</code>, so that people can set whatever settings they want.</p> <p>So follow the steps to do it:</p> <ul> <li>Download the <a href="https://raw.githubusercontent.com/apache/cassandra/trunk/conf/cassandra.yaml" rel="nofollow noreferrer">cassandra.yaml</a></li> </ul> <p>I have made following changes to the file:</p> <pre><code>$ diff cassandra.yaml mycassandra.yaml 103c103 &lt; authenticator: AllowAllAuthenticator --- &gt; authenticator: PasswordAuthenticator </code></pre> <ul> <li>Create configmap with that file</li> </ul> <p>We have to create <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configmap/" rel="nofollow noreferrer">Kubernetes Configmap</a> which then we will mount inside the container, we cannot do host mount similar to docker.</p> <pre><code> $ cp mycassandra.yaml cassandra.yaml $ k create configmap cassandraconfig --from-file ./cassandra.yaml </code></pre> <p>The name of configmap is <code>cassandraconfig</code>.</p> <ul> <li><p>Now edit the <code>deployment</code> to use this config and mount it in right place</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cassandra spec: replicas: 1 template: metadata: labels: app: cassandra spec: containers: - name: cassandra image: cassandra env: - name: CASSANDRA_CLUSTER_NAME value: Cassandra ports: - containerPort: 7000 name: intra-node - containerPort: 7001 name: tls-intra-node - containerPort: 7199 name: jmx - containerPort: 9042 name: cql volumeMounts: - mountPath: /var/lib/cassandra/data name: data - mountPath: /etc/cassandra/ name: cassandraconfig volumes: - name: data emptyDir: {} - name: cassandraconfig configMap: name: cassandraconfig </code></pre></li> </ul> <p>Once you create this deployment.</p> <ul> <li><p>Now exec in the pod </p> <pre><code>$ k exec -it cassandra-1663662957-6tcj6 bash root@cassandra-1663662957-6tcj6:/# </code></pre></li> <li><p>Try using the client</p> <pre><code>root@cassandra-1663662957-6tcj6:/# cqlsh 127.0.0.1 9042 Connection error: ('Unable to connect to any servers', {'127.0.0.1': AuthenticationFailed('Remote end requires authentication.',)}) </code></pre></li> </ul> <p>For more information on creating <code>configMap</code> and using it by mounting inside container you can read <a href="https://docs.openshift.org/latest/dev_guide/configmaps.html" rel="nofollow noreferrer">this doc</a>, which helped me for this answer.</p>
<p>I'm running Kong API gateway on GKE and trying to add my own service. </p> <p>I have 3 pods</p> <ul> <li><code>cassandra</code></li> <li><code>kong</code> </li> <li><code>apiindex</code></li> </ul> <p>and 2 services(node ports) </p> <ul> <li><code>apiindex</code> (80/443/8080 ports are open)</li> <li><code>kong-proxy</code>(8000/8001/8443)</li> </ul> <p>I'm trying to add <code>apiindex</code> api to API gateway using</p> <blockquote> <p>curl -i -X POST <a href="http://kong-proxy:8001/apis" rel="nofollow noreferrer">http://kong-proxy:8001/apis</a> -d 'name=test' -d 'uris=/' -d 'upstream_url=<a href="http://apiindex/" rel="nofollow noreferrer">http://apiindex/</a>'</p> </blockquote> <p>But then <code>http://kong-proxy:8000/</code> returns </p> <blockquote> <p>503 {"message": "Service unavailable"}</p> </blockquote> <p>It works fine when I add some public website inside <code>curl -i -X POST http://kong-proxy:8001/apis -d 'name=test' -d 'uris=/' -d 'upstream_url=http://httpbin.org/'</code></p> <p><code>curl http://apiindex/</code> returns 200 from <code>kong</code> pod. </p> <p>Is it possible to add API using kong without exposing <code>apiindex</code> service? </p>
<p>You need to use the fully qualified name of the service (FQDN) in kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p> <p>So instead of <code>apiindex</code> need to use <code>apiindex.default.svc.cluster.local</code></p> <blockquote> <p>curl -i -X POST <a href="http://kong-proxy:8001/apis" rel="nofollow noreferrer">http://kong-proxy:8001/apis</a> -d 'name=testapi' -d 'uris=/' -d 'upstream_url=<a href="http://apiindex.default.svc.cluster.local/" rel="nofollow noreferrer">http://apiindex.default.svc.cluster.local/</a>'</p> </blockquote>
<p>I am attempting to have a kubernetes nginx deployment with zero downtime. Part of that process has been to initiate a rollingUpdate, which ensures that at least one pod is running nginx at all times. This works perfectly well.</p> <p>I am running into errors when the old nginx pod is terminating. According to the kubernetes docs on <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">termination</a>, kubernetes will:</p> <ol> <li>remove the pod from the endpoints list for the service, so it is not receiving any new traffic when termination begins </li> <li>invoke a pre-stop hook if it is defined, and wait for it to complete </li> <li>send SIGTERM to all remaining processes</li> <li>send SIGKILL to any remaining processes after the grace period expires.</li> </ol> <p>I understand that the command <code>nginx -s quit</code> is supposed to gracefully terminate nginx by waiting for all workers to complete requests before the master terminates. It responds gracefully to the SIGQUIT command, while SIGTERM results in violent termination. Other forums say that it is as easy as adding the following preStop hook to your deployment:</p> <pre><code>lifecycle: preStop: exec: command: ["/usr/sbin/nginx", "-s", "quit"] </code></pre> <p>However, from testing this command I have found that <code>nginx -s quit</code> returns immediately, instead of waiting for the workers to complete. It also does not return the PID of the master process, which is what I was hoping for D:</p> <p>What happens is, kubernetes invokes <code>nginx -s quit</code>, which will send a proper SIGQUIT to the worker children, but not wait for them to complete. Instead it will jump right to step 3 and SIGTERM those processes instead, resulting in violent termination, and thus, lost connections.</p> <p><strong>QUESTION</strong>: Has anyone figured out a good way to gracefully shut down their nginx controller during a rolling deployment and have zero downtime? A <code>sleep</code> workaround isn't good enough, I'm looking for something more robust.</p> <p>Below is the full deployment yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 0 template: metadata: labels: app: nginx-ingress-lb spec: terminationGracePeriodSeconds: 60 serviceAccount: nginx containers: - name: nginx-ingress-controller image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.8 imagePullPolicy: Always readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 5 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-backend - --v=2 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - containerPort: 80 lifecycle: preStop: exec: command: ["/usr/sbin/nginx", "-s", "quit"] </code></pre>
<p>I hate answering my own questions, but after noodling a bit this is what i have so far. </p> <p>I created a bash script that is semi-blocking, called <code>killer</code>:</p> <pre><code>#!/bin/bash sleep 3 PID=$(cat /run/nginx.pid) nginx -s quit while [ -d /proc/$PID ]; do sleep 0.1 done </code></pre> <p>I found that inside the nginx pod there is a file <code>/run/nginx.pid</code> which has the PID of the master process. If you call <code>nginx -s quit</code> and initiate a wait until the process disappears, you have essentially made the quit command "blocking".</p> <p>Note that there is a <code>sleep 3</code> before anything happens. This is due to a race condition where Kubernetes marks a pod as terminating, but takes a little time (&lt; 1s) to remove this pod from the service that points traffic toward it. </p> <p>I have mounted this script into my pod, and called it via the <code>preStop</code> directive. It mostly works, but during testing there are still occasional blips where i get a curl error that the connection was "reset by peer." But this is a step in the right direction.</p>
<p>I have installed Docker v17.06-ce on 2 minion nodes plus a master node and Kubernetes with Kubeadm v1.7.0. Then I deployed Web UI (Dashboard) with <code>kubectl create -f https://git.io/kube-dashboard</code> and changed type to <em>NodePort</em> using <code>kubectl edit service kubernetes-dashboard -n kube-system</code>.</p> <p>I can access it but its missing CPU/Memory usage graphs. So I've followed the instructions from <a href="https://stackoverflow.com/questions/41832273/kuberenets-web-ui-dashboard-missing-graphs">Kuberenets Web UI (Dashboard) missing graphs</a> to deploy heapster and influxdb, but I still can't see the graps...</p> <p>What's going wrong?</p> <p>UPDATE: checking logs <code>kubectl logs heapster-2994581613-m28hh --namespace=kube-system</code> I've found these errors repeatedly:</p> <pre><code>E0717 09:14:05.000881 7 kubelet.go:271] No nodes received from APIserver. E0717 09:14:05.947260 7 reflector.go:203] k8s.io/heapster/metrics/processors/node_autoscaling_enricher.go:100: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:05.959150 7 reflector.go:203] k8s.io/heapster/metrics/heapster.go:319: Failed to list *api.Pod: the server does not allow access to the requested resource (get pods) E0717 09:14:05.959254 7 reflector.go:203] k8s.io/heapster/metrics/heapster.go:327: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:05.959888 7 reflector.go:203] k8s.io/heapster/metrics/sources/kubelet/kubelet.go:342: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:05.959995 7 reflector.go:203] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84: Failed to list *api.Namespace: the server does not allow access to the requested resource (get namespaces) E0717 09:14:06.957399 7 reflector.go:203] k8s.io/heapster/metrics/processors/node_autoscaling_enricher.go:100: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:06.965155 7 reflector.go:203] k8s.io/heapster/metrics/sources/kubelet/kubelet.go:342: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:06.965166 7 reflector.go:203] k8s.io/heapster/metrics/heapster.go:327: Failed to list *api.Node: the server does not allow access to the requested resource (get nodes) E0717 09:14:06.966403 7 reflector.go:203] k8s.io/heapster/metrics/heapster.go:319: Failed to list *api.Pod: the server does not allow access to the requested resource (get pods) E0717 09:14:06.966964 7 reflector.go:203] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84: Failed to list *api.Namespace: the server does not allow access to the requested resource (get namespaces) </code></pre> <p>Any idea?</p>
<p>You need to install the heapster pod. Try installing the following and check again.</p> <p>Install the heapster RBAC resources as well:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml </code></pre>
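<p>After applying those, it can take a few minutes for the first metrics to reach the dashboard. To confirm heapster is actually running and no longer logging the access-denied errors (the pod name is a placeholder):</p> <pre><code>kubectl get pods --namespace=kube-system | grep heapster
kubectl logs --namespace=kube-system &lt;heapster-pod-name&gt;
</code></pre>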
<h1>Problem</h1> <p>I have a monitoring application that I want to deploy inside of a DaemonSet. In the application's configuration, a unique user agent is specified to separate the node from other nodes. I created a ConfigMap for the application, but this only works for synchronizing the other settings in the environment.</p> <h1>Ideal solution?</h1> <p>I want to specify a unique value, like the node's hostname or another locally-inferred value, to use as the user agent string. Is there a way I can call this information from the system and Kubernetes will populate the desired key with a value (like the hostname)?</p> <p>Does this make sense, or is there a better way to do it? I was looking through the documentation, but I could not find an answer anywhere for this specific question.</p> <p>As an example, here's the string in the app config that I have now, versus what I want to use.</p> <p><code>user_agent = "app-k8s-test"</code></p> <p>But I'd prefer…</p> <p><code>user_agent = $HOSTNAME</code></p> <p>Is something like this possible?</p>
<p>You can use an init container to preprocess a config template from a config map. The preprocessing step can inject local variables into the config files. The expanded config is written to an emptyDir shared between the init container and the main application container. Here is an example of how to do it.</p> <p>First, make a config map with a placeholder for whatever fields you want to expand. I used <code>sed</code> and and ad-hoc name to replace. You can also get fancy and use jinja2 or whatever you like. Just put whatever pre-processor you want into the init container image. You can use whatever file format for the config file(s) you want. I just used TOML here to show it doesn't have to be YAML. I called it ".tpl" because it is not ready to use: it has a string, <code>_HOSTNAME_</code>, that needs to be expanded.</p> <pre><code>$ cat config.toml.tpl [blah] blah=_HOSTNAME_ otherkey=othervalue $ kubectl create configmap cm --from-file=config.toml.tpl configmap "cm" created </code></pre> <p>Now write a pod with an init container that mounts the config map in a volume, and expands it and writes to another volume, shared with the main container:</p> <pre><code>$ cat personalized-pod.yaml apiVersion: v1 kind: Pod metadata: name: myapp-pod-5 labels: app: myapp annotations: spec: containers: - name: myapp-container image: busybox command: ['sh', '-c', 'echo The app is running and my config-map is &amp;&amp; cat /etc/config/config.toml &amp;&amp; sleep 3600'] volumeMounts: - name: config-volume mountPath: /etc/config initContainers: - name: expander image: busybox command: ['sh', '-c', 'cat /etc/config-templates/config.toml.tpl | sed "s/_HOSTNAME_/$MY_NODE_NAME/" &gt; /etc/config/config.toml'] volumeMounts: - name: config-tpl-volume mountPath: /etc/config-templates - name: config-volume mountPath: /etc/config env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName volumes: - name: config-tpl-volume configMap: name: cm - name: config-volume emptyDir: $ kubctl create -f personalized-pod.yaml $ sleep 10 $ kubectl logs myapp-pod The app is running and my config-map is [blah] blah=gke-k0-default-pool-93916cec-p1p6 otherkey=othervalue </code></pre> <p>I made this a bare pod for an example. You can embed this type of pod in a DaemonSet's pod template.</p> <p>Here, the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Downward API is used to set the MY_NODE_NAME Environment Variable</a>, since the Node Name is not otherwise readily available from within a container.</p> <p>Note that for some reason, you can't get the <code>spec.nodeName</code> into a file, just an env var.</p> <p>If you just need the hostname in an Env Var, then you can skip the init container. </p> <p>Since the Init Container only runs once, you should not update the configMap and expect it to be reexpanded. If you need updates, you can do one of two things:</p> <ul> <li><p>Instead of an init container, run a sidecar that watches the config map volume and re-expands when it changes (or just does it periodically). This requires that the main container also know how to watch for config file updates.</p></li> <li><p>You can just make a new config map each time the config template changes, and edit the daemonSet to change the one line to point to a new config map. And then do a rolling update to use the new config.</p></li> </ul>
<p>I have been having problems trying to deploy my web app in <a href="https://kubernetes.io/" rel="nofollow noreferrer">kubernetes</a>. </p> <p>I wanted to mimic old deploy with nginx working as reverse proxy in front of my back and front end services.</p> <p>I have 3 pieces in my system, nginx, front and back. I built 3 deploys, 3 services and exposed only my nginx service using <code>nodePort: 30050</code>.</p> <p>Without further delays, this is my nginx.conf:</p> <pre><code>upstream my-server { server myserver:3000; } upstream my-front { server myfront:4200; } server { listen 80; server_name my-server.com; location /api/v1 { proxy_pass http://my-server; } location / { proxy_pass http://my-front; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $remote_addr; } } </code></pre> <p>I tried to install curl and nslookup inside one of the pods and tried to do manual request on cluster internal endpoints... tears came to my eyes, everything is working...i am almost a developer worthy of the cloud.</p> <p>Everything is working smoothly...everything but nginx DNS resolution.</p> <p>If i do <code>kubectl exec -it my-nginx-pod -- /bin/bash</code> and try to curl 1 of the other 2 services: <code>curl myfront:4200</code> it works properly.</p> <p>If i try to nslookup one of them it works as well.</p> <p>After this i tried to replace, in nginx.conf, the service names with the pods IPs. After restarting the nginx service everything was working.</p> <p>Why doesn't nginx resolve the upstream names properly? I am going nuts over this.</p>
<p>Nginx caches the resolved IPs. To force Nginx to resolve DNS at request time, you can introduce a variable:</p>

<pre><code>location /api/v1 {
    set $url "http://my-server";
    proxy_pass $url;
}
</code></pre>

<p>More details can be found in <a href="https://serverfault.com/a/593003/151221">this related answer</a>.</p>

<p>Since what you describe is most likely DNS caching in Nginx, it would also explain why restarting (or reloading) Nginx fixes the problem, at least for a while, until the DNS entry changes again.</p>

<p>I think it is not related to Kubernetes. I had the same problem a while ago when Nginx cached the DNS entries of AWS ELBs, which frequently change IPs.</p>
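<p>One caveat with the variable trick: when <code>proxy_pass</code> contains a variable, Nginx needs an explicit <code>resolver</code> directive to know which DNS server to ask at runtime. A minimal sketch for a cluster setup like yours (the resolver IP is an assumption, it must be the ClusterIP of kube-dns in your cluster, check with <code>kubectl -n kube-system get svc kube-dns</code>; the <code>default</code> namespace is also assumed):</p>

<pre><code># resolve names through the cluster DNS and re-resolve every 10s
resolver 10.96.0.10 valid=10s;

location /api/v1 {
    # using a variable forces per-request resolution instead of startup-time caching
    set $backend "http://myserver.default.svc.cluster.local:3000";
    proxy_pass $backend;
}
</code></pre>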
<p>When launching my Kubernetes deployment for Flower (Celery monitoring service), the following environment variables are generated in the Flower pod by Kubernetes:</p> <pre><code>FLOWER_PORT=tcp://10.67.97.89:5555 FLOWER_PORT_5555_TCP=tcp://10.67.97.89:5555 FLOWER_PORT_5555_TCP_ADDR=10.67.97.89 FLOWER_PORT_5555_TCP_PORT=5555 FLOWER_PORT_5555_TCP_PROTO=tcp FLOWER_SERVICE_HOST=10.67.97.89 FLOWER_SERVICE_PORT=5555 FLOWER_SERVICE_PORT_5555=5555 </code></pre> <p>This is due to the Flower service which is started shortly before the deployment. However, Flower expects an integer in <code>FLOWER_PORT</code> and aborts.</p> <p>How can I prevent these environment variables from being created?</p>
<p>You cannot prevent the creation of these, but you can overwrite them with your own values by setting them explicitly in your deployment's pod template. So, if you expect the default value of FLOWER_PORT to be, say, <code>80</code> instead of <code>tcp://...</code>, all you need to do is put</p>

<pre><code>env:
  - name: FLOWER_PORT
    value: "80"
</code></pre>

<p>and that's it.</p>
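<p>For context, this is roughly where the override lands in the Deployment's pod template (image and labels are placeholders, and the value <code>"5555"</code> assumes you keep Flower on its default port):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flower
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flower
    spec:
      containers:
      - name: flower
        image: your-flower-image   # placeholder
        env:
        - name: FLOWER_PORT        # shadows the service-link variable injected by Kubernetes
          value: "5555"
</code></pre>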
<p>I'm intending to have a CD pipeline with Jenkins which takes my application and publishes a docker image to my private docker repository. I think I know how to do that.</p>

<p>What I'm unsure about is the Kubernetes part. I want to take that image and deploy it to my private Kubernetes cluster (currently 1 Master &amp; 1 Slave).</p>

<p>Question: Does the Jenkins slave, which has kubectl and docker installed, need to be part of the Kubernetes cluster in order to trigger a deployment? How can I trigger that deployment?</p>
<p>Assuming that you have the following deployment in your cluster:</p>

<pre><code>apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: foobar-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: foobar-app
    spec:
      containers:
      - name: foobar
        image: foobar-image:v1
        ports:
        - containerPort: 80
</code></pre>

<p>You would have to somehow have Jenkins tell your Kubernetes master the following command:</p>

<pre><code>kubectl set image deployment/foobar-deployment foobar=foobar-image:version
</code></pre>

<p>where <code>version</code> is the new version you just created with Jenkins. This will automatically trigger a redeploy with this version.</p>

<p>As long as you have access to your Kubernetes master that has your cluster on it (via ssh or similar), you can just pass the above command. Don't forget to keep track of <code>version</code> when you pass this command.</p>
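<p>To answer the other half of the question: the Jenkins slave does not need to be part of the cluster. If the slave has <code>kubectl</code> plus a kubeconfig with credentials for the cluster's API server, it can run the command remotely. A sketch of what the Jenkins build step could execute (the registry name, kubeconfig path and tag variable are placeholders):</p>

<pre><code># push the freshly built image
docker push my-registry/foobar-image:${BUILD_NUMBER}

# roll the deployment to the new tag, talking to the API server over the network
kubectl --kubeconfig /var/jenkins_home/.kube/config \
  set image deployment/foobar-deployment foobar=my-registry/foobar-image:${BUILD_NUMBER}
</code></pre>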
<p>I have created a cluster using the google cloud platform (container engine) and deployed a pod using the following YAML file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: deployment-name spec: replicas: 1 template: metadata: name: pod-name labels: app: app-label spec: containers: - name: container-name image: gcr.io/project-id/image-name resources: requests: cpu: 1 ports: - name: port80 containerPort: 80 - name: port443 containerPort: 443 - name: port6001 containerPort: 6001 </code></pre> <p>Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-name spec: ports: - port: 80 targetPort: 80 selector: app: app-label type: LoadBalancer </code></pre> <p>However when I want the pod to listen on multiple ports like this, it doesn't work:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-name spec: ports: - port: 80 targetPort: 80 - port: 443 targetPort: 443 - port: 6001 targetPort: 6001 selector: app: app-label type: LoadBalancer </code></pre> <p>How can I make my pod listen to multiple ports?</p>
<p>You have two options:</p> <ol> <li>You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address</li> <li>You could have a single service with multiple ports. In this particular case, you must give all ports a name.</li> </ol> <p>In your case, the service becomes:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: service-name spec: ports: - name: http port: 80 targetPort: 80 - name: https port: 443 targetPort: 443 - name: something port: 6001 targetPort: 6001 selector: app: app-label type: LoadBalancer </code></pre> <p>This is necessary so that endpoints can be disambiguated.</p>
<p>I need to provide access to the file /var/docker.sock on the Kubernetes host (actually, a GKE instance) to a container running on that host. </p> <p>To do this I'd like to mount the directory into the container, by configuring the mount in the deployment.yaml for the container deployment.</p> <p>How would I specify this in the deployment configuration?</p> <p>Here is the current configuration, I have for the deployment:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: appd-sa-agent spec: replicas: 1 template: metadata: labels: run: appd-sa-agent spec: containers: - name: appd-sa-agent image: docker.io/archbungle/appd-sa-agent:latest ports: - containerPort: 443 env: - name: APPD_HOST value: "https://graffiti201707132327203.saas.appdynamics.com" </code></pre> <p>How would I specify mounting the localhost file path to a directory mountpoint on the container?</p> <p>Thanks! T.</p>
<p>You need to define a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: appd-sa-agent spec: replicas: 1 template: metadata: labels: run: appd-sa-agent spec: volumes: - name: docker-socket hostPath: path: /var/run/docker.sock containers: - name: appd-sa-agent image: docker.io/archbungle/appd-sa-agent:latest volumeMounts: - name: docker-socket mountPath: /var/run/docker.sock ports: - containerPort: 443 env: - name: APPD_HOST value: "https://graffiti201707132327203.saas.appdynamics.com" </code></pre>
<p>I'm following the example found <a href="http://kubecloud.io/minikube-workflows/" rel="nofollow noreferrer">here</a>.</p>

<p>I'm simply trying to understand how volumes work with Kubernetes. I'm testing locally so I need to contend with minikube. I'm trying to make this as simple as possible. I'm using nginx and would like to have it display content that is mounted from a folder on my localhost.</p>

<p>Environment: macOS 10.12.5, minikube 0.20.0 + xhyve VM</p>

<p>I'm using the latest <a href="https://hub.docker.com/_/nginx/" rel="nofollow noreferrer">nginx image from Docker Hub</a> with no modifications. </p>

<p>This works perfectly when I run the docker image outside of minikube.</p>

<pre><code>docker run --name flow-4 \
  -v $(pwd)/website:/usr/share/nginx/html:ro \
  -P -d nginx
</code></pre>

<p>But <strong>when I try to run it in minikube I get a 404 response when I visit the hosted page - always. Why?</strong></p>

<p>Here are my kubernetes config files...<br>
kubernetes/deploy/deployment.yaml</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /Users/myuser/website
</code></pre>

<p>kubernetes/deploy/svc.yaml</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: flow-4
  type: NodePort
</code></pre>

<p>Finally, I run it like this:</p>

<pre><code>kubectl create -f kubernetes/deploy/
minikube service flow-4
</code></pre>

<p>When it opens in my browser, instead of seeing my index.html page in the website folder, I just get a '404 Not Found' message (above a nginx/1.13.3 footer)</p>

<p><strong>Why am I getting 404? Is nginx not able to see the contents of my mounted folder?</strong> Does the VM hosting kubernetes not have access to my 'website' folder?</p>

<p>I suspect this is the problem. I ssh into the kubernetes pod</p>

<pre><code>kubectl exec -it flow-4-1856897391-m0jh1 /bin/bash
</code></pre>

<p>When I look in the /usr/share/nginx/html folder, it is empty. If I manually add an index.html file, then I can see it in my browser. But why won't Kubernetes mount my local drive to this folder?</p>

<p><strong>Update</strong> </p>

<p>There seems to be something wrong with mounting full paths from my /Users/** folder. Instead, I used the 'minikube mount' command to mount a local folder containing index.html into the minikube VM. Then in a separate terminal I started my deployment and it could see the index.html file just fine. 
</p>

<p>Here is my updated deployment.yaml file which has clearer file names to better explain the different folders and where they are mounted...</p>

<p>Here are my kubernetes config files...<br>
kubernetes/deploy/deployment.yaml</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /kube-website
</code></pre>

<p>It's using the same svc.yaml file from earlier in the post.</p>

<p>I then ran the whole thing like this from my current directory.<br>
1. <code>mkdir local-website</code><br>
2. <code>echo 'Hello from local storage' &gt; local-website/index.html</code><br>
3. <code>minikube mount local-website:/kube-website</code><br>
Let this run.... </p>

<p>In a new terminal, same folder...<br>
4. <code>kubectl create -f kubernetes/deploy/</code> </p>

<p>Once all the pods are running...<br>
5. <code>minikube service flow-4</code> </p>

<p>You should see the 'Hello from local storage' message greet you in your browser. You can edit the local index.html file and then refresh your browser to see the contents change.</p>

<p>You can tear it all down with this... <code>kubectl delete deployments,services flow-4</code></p>
<p>Probably the folder you created does not exist on the Kubernetes node (which is the minikube VM).</p>

<p>Try creating the folder inside the VM and try again:</p>

<pre><code>ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
mkdir /Users/myuser/website
</code></pre>

<p>Also take a look at the <a href="https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md" rel="nofollow noreferrer">minikube host folder mount</a> feature</p>
<p>Today I recreated my cluster with v1.7.1. When I run the <code>kubeadm join --token 189518.c21306e71082d6ec</code> command, it gives the error below. This used to work in previous versions of kubernetes. Has something changed in this version? How do we resolve this?</p>

<pre><code>[root@k8s17-02 ~]# kubeadm join --token 189518.c21306e71082d6ec 192.168.15.91:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
        hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
</code></pre>

<p><strong>update on 7/21/17</strong></p>

<p>Tested this with v1.7.2; same issue still.</p>

<pre><code># ./kubeadm version
kubeadm version: &amp;version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

# ./kubeadm join --token 189518.c21306e71082d6ec 192.168.15.91:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
        hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
</code></pre>

<p>Thanks SR</p>
<p>Looks like it's trying to look up the hostname and can't because it's not in DNS. There are two ways around this:</p> <ol> <li>Kubernetes works better with named nodes. While this is annoying, it provides benefits in the long run, such as when you have to use different IP addresses on a reboot. You could edit <code>/etc/hosts</code> on each machine to give names to all the boxes in your cluster, or start up a local DNS, adding the names to that.</li> <li>Or, you could try skipping the preflight checks... <code>kubeadm join --skip-preflight-checks --token TOKEN HOST:PORT</code></li> </ol>
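<p>For the first option, the <code>/etc/hosts</code> entries are just the usual format, added on every machine in the cluster (the IPs and names below are made up, use your own):</p>

<pre><code># /etc/hosts on every master and node
192.168.15.91  k8s-master-01
192.168.15.92  k8s17-02
</code></pre>

<p>Also make sure each machine actually has its own hostname set (for example with <code>hostnamectl set-hostname k8s17-02</code>), since the preflight output shows kubeadm picking up an empty hostname.</p>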
<p>I created a cluster using Kubernetes and I want to make requests to a server using different IPs. Does each of the nodes have a different one, so that I can parallelize the requests? Thanks</p>
<p>To make calls to your pods you should use Kubernetes services, which effectively load-balance requests between your pods, so that you do not need to worry about the particular IPs of pods at all.</p>

<p>That said, each pod has its own unique IP address, but these are internal addresses; in most implementations they come from the overlay network and are, in a way, internal to the kube cluster (they can't be directly called from outside, which is not exactly the full truth, but close enough).</p>

<p>Depending on your goals, Ingress might be interesting for you.</p>

<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>For local development I have a working minikube. There we have different services deployed. Now I want to connect the Frontend with the Backend.</p>

<p>The Frontend is an Angular application and lives in its own service. The Backend is a Node.js application, also using a separate service, and it uses DNS to connect to other internal services like MongoDB.</p>

<p>Now I want to communicate from the Frontend with the Backend. DNS is not working because the Frontend does not know how to resolve the named route. The problem is how to tell the Frontend which backend URL and port it should use to send requests to.</p>

<p>The only working state was reached when I first started the Backend service with type NodePort and copied the URL and port into the Frontend's target URL. This feels very unclean to me. Is there another approach to get the URL for backend requests into the Frontend?</p>

<p>I know that when we deploy a service on a production system with type="LoadBalancer", the service is exposed by an external IP and I can access the service from there, and that the external IP will stay the same across pod updates and so on. The problem I also see is that the backend IP needs to be injected into the docker container by an additional commit.</p>

<p>Edit(1): The backend service</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: some-app
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: some-app
    tier: backend
</code></pre>

<p>Edit(2): I also get this response when I request from the client with a fqn:</p>

<pre><code>OPTIONS http://backend.default.svc.cluster.local:3000/signup/ net::ERR_NAME_NOT_RESOLVED
</code></pre>
<p>First I will try to address your specific questions.</p>

<blockquote>
  <p>The only working state was approached when I first started the Backend service with type NodePort and copied the url and port to the Frontends target URL. I think this is very unclean to me. Is there another approach to get the url for backend requests into the Frontend?</p>
</blockquote>

<p>You have a couple of options here: 1) as you said, use type="LoadBalancer", OR 2) proxy all your backend calls through your front end server.</p>

<blockquote>
  <p>I know when we deploy a service on a production system with type="LoadBalancer" that the service is exposed by an external IP and I can access the service then from there. And that the external IP will be permanent at pod updates and so on. The problem I also see is that the backend IP needs to be injected into the docker container by an additional commit.</p>
</blockquote>

<ol>
<li>Make it a 12-factor app (or one step closer to a 12-factor app :)) by moving the config out of your code and onto the platform (say into a k8s ConfigMap or an external KV registry like Consul/Eureka).</li>
<li>Even if it's left in code, as you said, the external IP will remain referable and is not going to change unless you change it. I don't see why you would need another deployment.</li>
</ol>

<h1>Proxy all your backend calls through your front end server</h1>

<p>If you are routing (or willing to route) all your microservice/backend calls through the server side of your front end, and if you are deploying both your frontend and backend in the same k8s cluster in the same namespace, then you can use the KubeDNS add-on (if it is not available in your k8s cluster yet, you can check with the k8s admin) to resolve the backend service name to its IP. From your frontend <strong>server</strong>, your backend service will always be resolvable by its name.</p>

<p>Since you have KubeDNS in your k8s cluster, and both the frontend and backend services reside in the same k8s cluster and the same namespace, we can make use of k8s' built-in service discovery mechanism. The backend service and frontend service will be discoverable to each other by name. That means you can simply use the DNS name "backend" to reach your backend service from your frontend <strong>pods</strong>. So, just proxy all the backend requests through your frontend nginx to your upstream backend service. In the frontend nginx <strong>pods</strong>, the backend service's IP will be resolvable for the domain name "backend". This will save you the CORS headache too. This setup is portable, meaning it doesn't matter whether you are deploying in dev, stage or prod; the name "backend" will always resolve to the corresponding backend.</p>

<p>A potential pitfall of this approach is that your backend may not be able to scale independently of the frontend, which is not a big deal in my humble opinion; in a k8s environment it is just a matter of spinning up more pods if needed.</p>

<p>Just curious: what is serving your front end (which server technology is delivering your index.html to the user's browser)? Is it a static server like nginx or Apache httpd, or are you using Node.js here?</p>
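<p>To make the proxying concrete, a minimal sketch of the nginx config that would sit in the frontend pod (the service name <code>backend</code> and port <code>3000</code> are taken from your Edit(1); the <code>/api/</code> prefix is an assumption, use whatever prefix your backend expects):</p>

<pre><code>server {
    listen 80;

    # serve the Angular static files
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # proxy API calls to the backend Service, resolved via KubeDNS
    location /api/ {
        proxy_pass http://backend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
</code></pre>

<p>The browser then only ever talks to the frontend's own origin, so no backend URL has to be baked into the Angular build at all.</p>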
<p>Does windows minikube support a persistent volume with a hostpath? If so what is the syntax?</p> <p>I tried:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: kbmongo002 labels: type: local spec: storageClassName: mongostorageclass capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/temp/mongo" persistentVolumeReclaimPolicy: Retain --- </code></pre> <p>This passed validation and created the PV and a PVC claimed it, but nothing was written to my expected location of C:\temp\mongo</p> <p>I also tried:</p> <pre><code> hostPath: path: "c:/temp/mongo" persistentVolumeReclaimPolicy: Retain --- </code></pre> <p>That resulted in:</p> <pre><code>Error: Error response from daemon: Invalid bind mount spec "c:/temp/mongo:/data/db": invalid mode: /data/db Error syncing pod </code></pre>
<p>If you use VirtualBox on Windows, only <code>C:\Users</code> is mapped into the VM (as <code>/c/Users</code>), and that is the only host path the Kubernetes system can access. This is a feature of VirtualBox.</p>

<p><a href="https://i.stack.imgur.com/6TWOC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6TWOC.png" alt="snapshot"></a></p>

<p>Minikube uses this VM to run Kubernetes.</p>

<p>Minikube provides a mount feature as well, though it is not so user-friendly for persistence.</p>

<p>You can try one of the solutions below:</p>

<ul>
<li>use folders under <code>/c/Users</code> for your yaml file</li>
<li>map extra folders into the VirtualBox VM, like <code>C:\Users</code></li>
<li>use <code>minikube mount</code>, see <a href="https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md" rel="nofollow noreferrer">host folder mount</a></li>
</ul>
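<p>For the first option, the PersistentVolume from the question would point at a path under the mapped share, for example (the user folder below is a placeholder, adjust it to your own):</p>

<pre><code>  hostPath:
    path: "/c/Users/yourname/temp/mongo"   # corresponds to C:\Users\yourname\temp\mongo on the host
</code></pre>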
<p>I installed minikube for local kubernetes development according to the article <a href="http://www.bogotobogo.com/DevOps/DevOps-Kubernetes-1-Running-Kubernetes-Locally-via-Minikube.php" rel="nofollow noreferrer">DevOps-Kubernetes-1-Running-Kubernetes-Locally-via-Minikube</a></p>

<ul>
<li>Ubuntu 16.04 LTS</li>
<li>minikube 0.20.0 </li>
</ul>

<p>The default kubernetes version for minikube <code>0.20.0</code> is <code>v1.6.4</code> and I use the following command to use the new release <code>v1.7.0</code>:</p>

<pre><code>minikube start --kubernetes-version v1.7.0
</code></pre>

<p>How can I set this as the default in the minikube configuration?</p>

<p>So far, if I run <code>minikube start</code>, it always starts the default <code>v1.6.4</code>, even though the server VM has been upgraded to <code>v1.7.0</code>:</p>

<pre><code>$ minikube start
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
...

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.42.96:8443 was refused - did you specify the right host or port?
</code></pre>
<p>You can set its default value with:</p> <pre><code>minikube config set kubernetes-version v1.7.0 </code></pre> <p>It edits <code>~/.minikube/config/config.json</code> and adds:</p> <pre><code>{ &quot;kubernetes-version&quot;: &quot;v1.7.0&quot; } </code></pre> <p>Check out <a href="https://minikube.sigs.k8s.io/docs/handbook/config/#selecting-a-kubernetes-version" rel="nofollow noreferrer">Selecting a Kubernetes version</a> in the documentation. Check source code <a href="https://github.com/kubernetes/minikube/blob/master/cmd/minikube/cmd/config/config.go" rel="nofollow noreferrer">config.go</a> for reference.</p>
<p>ERROR: The template version is invalid: Unknown version (heat_template_version: 2016-10-14). Should be one of: 2012-12-12, 2013-05-23, 2010-09-09</p>
<p>The best way of creating a Kubernetes cluster on OpenStack is by using <strong>Magnum</strong>.</p>

<p><strong>Magnum</strong> is an OpenStack project which allows you to create a COE (Container Orchestration Engine) cluster on the fly.</p>

<p>Please refer to this link for further details:</p>

<p><a href="https://wiki.openstack.org/wiki/Magnum" rel="nofollow noreferrer">https://wiki.openstack.org/wiki/Magnum</a></p>
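<p>For illustration, once Magnum is available, creating a Kubernetes cluster looks roughly like this (the template name, image, flavor and node count below are placeholders, and the exact required flags vary between Magnum releases, so treat this as a sketch rather than a copy-paste recipe):</p>

<pre><code># define a cluster template for the Kubernetes COE
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --flavor m1.small

# spin up a cluster from that template
openstack coe cluster create my-k8s \
  --cluster-template k8s-template \
  --node-count 2
</code></pre>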
<p>How do you get logs from kube-system pods? Running <code>kubectl logs pod_name_of_system_pod</code> does not work:</p>

<pre><code>λ kubectl logs kube-dns-1301475494-91vzs
Error from server (NotFound): pods "kube-dns-1301475494-91vzs" not found
</code></pre>

<p>Here is the output from <code>get pods</code>:</p>

<pre><code>λ kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE
default       alternating-platypus-rabbitmq-3309937619-ddl6b   1/1       Running   1          1d
kube-system   kube-addon-manager-minikube                      1/1       Running   1          1d
kube-system   kube-dns-1301475494-91vzs                        3/3       Running   3          1d
kube-system   kubernetes-dashboard-rvm78                       1/1       Running   1          1d
kube-system   tiller-deploy-3703072393-x7xgb                   1/1       Running   1          1d
</code></pre>
<p>Use the namespace param to kubectl : <code>kubectl --namespace kube-system logs kubernetes-dashboard-rvm78</code></p>
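<p>Note that the <code>kube-dns</code> pod in your listing has three containers (READY 3/3), so for that one you may also have to pick a container with <code>-c</code>. The container names below are the usual ones for kube-dns but may differ in your cluster; you can list them with <code>kubectl --namespace kube-system describe pod kube-dns-1301475494-91vzs</code>:</p>

<pre><code>kubectl --namespace kube-system logs kube-dns-1301475494-91vzs -c kubedns
kubectl --namespace kube-system logs kube-dns-1301475494-91vzs -c dnsmasq
</code></pre>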
<p>I am trying to follow this tutorial to create a Galera cluster using kubernetes on AWS, but I see that the Dynamic provisioning of the volume fails and this is the error I see in kube-controller-manager.log</p> <pre><code>0711 06:12:21.065942 6 aws_ebs.go:434] Provision failed: claim.Spec.Selector is not supported for dynamic provisioning on AWS I0711 06:12:21.066003 6 pv_controller.go:1312] failed to provision volume for claim "default/mysql-datadir-galera-ss-1" with StorageClass "gp2": claim.Spec.Selector is not supported for dynamic provisioning on AWS E0711 06:12:21.065942 6 aws_ebs.go:434] Provision failed: claim.Spec.Selector is not supported for dynamic provisioning on AWS I0711 06:12:21.066227 6 pv_controller.go:1312] failed to provision volume for claim "default/mysql-datadir-galera-ss-0" with StorageClass "gp2": claim.Spec.Selector is not supported for dynamic provisioning on AWS E0711 06:12:21.066006 6 aws_ebs.go:434] Provision failed: claim.Spec.Selector is not supported for dynamic provisioning on AWS I0711 06:12:21.066303 6 pv_controller.go:1312] failed to provision volume for claim "default/mysql-datadir-galera-ss-2" with StorageClass "gp2": claim.Spec.Selector is not supported for dynamic provisioning on AWS </code></pre> <p>I am not sure what is wrong, it is using a default storage class of type 'gp2'. Any suggestions on this ?</p>
<p>This looks like a configuration issue: <code>claim.Spec.Selector</code> (the <code>selector</code> field in the PersistentVolumeClaim) should not be set when using dynamic provisioning on AWS. Remove the selector from the claim YAML so that the <code>gp2</code> StorageClass can provision the volume.</p>
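<p>In other words, the claim should only reference the storage class and the size, for example (the name and size here are placeholders; with a StatefulSet this goes into <code>volumeClaimTemplates</code>):</p>

<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-datadir
spec:
  storageClassName: gp2
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # note: no "selector:" block here - that is what triggers the error on AWS
</code></pre>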
<p>I am a newbie to kubernetes. I am wondering whether kubernetes will automatically move pods to another node if that node's resources become critical.</p>

<p>For example, if Pod A, Pod B and Pod C are running on Node A and Pod D is running on Node B, and the resources of Node A used by the pods get high, will kubernetes migrate any of the pods running on Node A to Node B?</p>

<p>I have learnt about node affinity and node selectors, which are used to run pods on certain nodes. It would be helpful if kubernetes offered a feature to migrate pods to another node automatically when resources are heavily used.</p>

<p>Can anyone tell me how we can achieve this in kubernetes?</p>

<p>Thanks -S</p>
<p>Yes, Kubernetes can migrate the pods to another node automatically if resources are used highly. The pod would be killed and a new pod would be started on another node. You would probably want to learn about <a href="https://medium.com/google-cloud/quality-of-service-class-qos-in-kubernetes-bb76a89eb2c6" rel="noreferrer">Quality of Service Classes</a>, to understand which pod would be killed first. </p> <p>That said, you may want to read about <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">Automatic Horizontal Pod Autoscaling</a>. This may give you more control. </p> <blockquote> <p>With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with alpha support, on some other, application-provided metrics).</p> </blockquote>
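<p>For completeness, a minimal HorizontalPodAutoscaler sketch (the deployment name, replica bounds and CPU target are placeholders):</p>

<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
</code></pre>

<p>Keep in mind that neither eviction nor autoscaling literally moves a running pod: the pod is killed and a replacement is scheduled, possibly on another node.</p>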
<p>I'm using helm to generate kubernetes yamls. </p> <p>My values.yaml looks like this:</p> <pre><code>... jobs: - nme: job1 command: [sh, -c, "/app/deployment/start.sh job1"] activeDeadlineSeconds: 600 - name: job2 command: [sh, -c, "/app/deployment/start.sh job2"] activeDeadlineSeconds: 600 ... </code></pre> <p><code>templates/jobs.yaml</code></p> <pre><code>{{ range $i, $job := .Values.jobs -}} apiVersion: batch/v1 kind: Job metadata: name: {{ template "name" . }}-{{ $job.name }} labels: chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}" spec: activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }} template: metadata: labels: app: {{ template "name" . }}-{{ $job.name }} spec: containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} command: {{ $job.command }} env: {{ toYaml .Values.service.env | indent 10 }} ports: - containerPort: {{ .Values.service.internalPort }} {{- end }} </code></pre> <p>Helm is failing with this error:</p> <pre><code>Error: UPGRADE FAILED: render error in "app1/templates/jobs.yaml": template: app1/templates/_helpers.tpl:6:18: executing "name" at &lt;.Chart.Name&gt;: can't evaluate field Name in type interface {} </code></pre> <p>When I look at _helpers.tpl:</p> <pre><code>{{- define "name" -}} {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} {{- end -}} </code></pre> <p>If I remove the range loop and references to $job in my <code>jobs.yaml</code>, the _helpers.tpl <code>name</code> template works fine. When I add in the loop, it fails. </p> <p>It seems like within the loop, all dot <code>.</code> pipeline, which contains the scope for <code>.Chart</code> and <code>.Values</code>, is reassigned to something else.</p> <p>What am I doing wrong? </p>
<p>Inside the loop the value of the <code>.</code> is set to the current element and you have to use <code>$.Chart.Name</code> to access your data.</p> <p>I asked a similar question and I think the answer <a href="https://stackoverflow.com/a/44734585/8131948">https://stackoverflow.com/a/44734585/8131948</a> will answer your question too.</p>
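<p>Applied to the template from the question, the corrected loop would look roughly like this (<code>$</code> always refers to the root context, so <code>$.Chart</code> and <code>$.Values</code> keep working inside the <code>range</code>, and passing <code>$</code> to the named template lets the helper resolve <code>.Chart.Name</code>):</p>

<pre><code>{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $ }}-{{ $job.name }}
  labels:
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" $ }}-{{ $job.name }}
    spec:
      containers:
      - name: {{ $.Chart.Name }}
        image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag }}"
        imagePullPolicy: {{ $.Values.image.pullPolicy }}
        command: {{ $job.command }}
        env:
{{ toYaml $.Values.service.env | indent 10 }}
        ports:
        - containerPort: {{ $.Values.service.internalPort }}
{{- end }}
</code></pre>

<p>Since the loop renders several Job documents into one file, you will also want a <code>---</code> separator between iterations.</p>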
<p>I have some containers that will be running users' code in them. In order to strengthen security, I want to prevent them from having access to the kubernetes api via the service account mechanism, but I don't want to turn it off globally. The documentation says you can switch the service account name, but only to another valid name. Are there alternatives that I missed? Can you restrict the account to have 0 permissions? Can you overmount the volume with a different one that's empty? Any other ideas?</p>
<p>In Kubernetes 1.6+, you can disable service account mounting on a Pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: automountServiceAccountToken: false ... </code></pre> <p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
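<p>The same flag can also be set on the ServiceAccount itself, so every pod that uses that account skips the mount unless the pod spec explicitly opts back in:</p>

<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: untrusted-workloads
automountServiceAccountToken: false
</code></pre>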
<p>I'm working on a mysql users operator and I'm somewhat stuck on what's the proper way to report any issues.</p>

<p>The plan is to watch the CRD for <code>MysqlUser</code> and create <code>Secret</code>s and mysql users in the specified DB. Obviously, either of those can go wrong, at which point I need to report an error.</p>

<p>Some k8s objects track events in their <code>status.conditions</code>. There's also the <code>Event</code> object, but so far I've only seen that used by kubelet / controllermanager.</p>

<p>If, say, I have a problem creating a mysql user because my operator cannot talk to mysql, but otherwise the CRD is valid, should it go to an event or to the CRD's status?</p>
<p>This sounds similar to events reported from the volume plugin (kubelet) where, for example, kubelet is unable to mount a volume from an NFS server because the server address is invalid, and thus cannot talk to it.</p>

<p>Tracking events in <code>status.conditions</code> is less useful in this scenario since users typically have no control over how kubelet (or the operator in your case) interacts with the underlying resources. In general, <code>status.conditions</code> only signals the status of the object, not why it is in this condition.</p>

<p>This is just my understanding of how to make the choice. I don't know if there are any rules around it.</p>
<p>I have been struggling for a few hours on this one. I have a very simple 2 tier dotnet core skeleton app (mvc and webapi) hosted on Azure using Kubernetes with Windows as the orchestrator. The deployment works fine and I can pass basic environment variables over. The challenge I have is that I cannot determine how to pass the backend service IP address over to the frontend variables. if I stage the deployments, I can manually pass the exposed IP of the backend into the frontend. Ideally, this needs to be deployed as a service.</p> <p>Any help will be greatly appreciated.</p> <p><strong>Deployment commands:</strong></p> <p>1 - kubectl create -f backend-deploy.yaml</p> <p>2 - kubectl create -f backend-service.yaml</p> <p>3 - kubectl create -f frontend-deploy.yaml</p> <p>4 - kubectl create -f frontend-service.yaml</p> <p><strong>backend-deploy.yaml</strong></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: acme spec: replicas: 5 template: metadata: labels: app: acme-app tier: backend spec: containers: - name: backend-container image: "some/image" imagePullSecrets: - name: supersecretkey env: - name: Config__AppName value: "Acme App" - name: Config__AppDescription value: "Just a backend application" - name: Config__AppVersion value: "1.0" - name: Config__CompanyName value: "Acme Trading Limited"</code></pre> </div> </div> </p> <p><strong>backend-service.yaml</strong></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Service apiVersion: v1 metadata: name: acme-app spec: selector: app: acme-app tier: backend ports: - protocol: TCP port: 80 targetPort: 80 type: LoadBalancer</code></pre> </div> </div> </p> <p><strong>frontend-deploy.yaml</strong></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 2 template: metadata: labels: app: acme-app tier: frontend spec: containers: - name: frontend-container image: "some/image" imagePullSecrets: - name: supersecretkey env: - name: Config__AppName value: "Acme App" - name: Config__AppDescription value: "Just a frontend application" - name: Config__AppVersion value: "1.0" - name: Config__AppTheme value: "fx-theme-black" - name: Config__ApiUri value: ***THIS IS WHERE I NEED THE BACKEND SERVICE IP*** - name: Config__CompanyName value: "Acme Trading Limited"</code></pre> </div> </div> </p> <p><strong>frontend-service.yaml</strong></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Service apiVersion: v1 metadata: name: frontend spec: selector: app: acme tier: frontend ports: - protocol: "TCP" port: 80 targetPort: 80 type: LoadBalancer</code></pre> </div> </div> </p>
<p>If your backend service was created BEFORE the frontend pods, you should have the environment variables <code>ACME_APP_SERVICE_HOST</code> and <code>ACME_APP_SERVICE_PORT</code> inside the pods.</p>

<p>If your backend service was created AFTER the frontend pods, then delete the pods and wait for them to be restarted. The new pods should have those variables.</p>

<p>To check the environment variables do:</p>

<pre><code>$ kubectl exec podName env
</code></pre>
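<p>If you want to keep the value in the manifest, Kubernetes can also expand variables inside <code>env</code> values with the <code>$(VAR)</code> syntax, so the frontend deployment could build the URL from the injected service variables. A sketch (this assumes the backend service is named <code>acme-app</code> as in your yaml and exists before the frontend pods start):</p>

<pre><code>        env:
        - name: Config__ApiUri
          value: "http://$(ACME_APP_SERVICE_HOST):$(ACME_APP_SERVICE_PORT)"
</code></pre>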
<p><a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">Official documentation</a> on enabling the GPU support states:</p> <blockquote> <p>A special alpha feature gate Accelerators has to be set to true across the system: --feature-gates="Accelerators=true".</p> </blockquote> <p>I am having trouble decoding the "set to true across the system" part.</p> <p>I have discovered that <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">kubelet</a>, <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a>, and <a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a> all have the --feature-gates runtime parameter. The specification states that they all listen on modifications to config file.</p> <p>Any help with where those config files are how I can enable the --feature-gates="Accelerators=true" option in them?</p> <p>I did try adding the option to /etc/kubernetes/manifests/kube-apiserver.yaml: spec:</p> <pre><code> containers: - command: - kube-apiserver - -- &lt;...&gt; - --feature-gates=Accelerators=true </code></pre> <p>However, that causes kube-apiserver to stop and never come back.</p> <p>In the end I found the following workaround <a href="https://github.com/kubernetes/kops/pull/2257/files" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>3.I Add GPU support to the Kubeadm configuration, while cluster is not initialized. This has to be done for every node across your cluster, even if some of them don't have any GPUs.</p> <p>sudo vim /etc/systemd/system/kubelet.service.d/&lt;>-kubeadm.conf Therefore, append ExecStart with the flag --feature-gates="Accelerators=true", so it will look like this:</p> <p>ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS [...] --feature-gates="Accelerators=true" 3.II Restart kubelet</p> <p>sudo systemctl daemon-reload sudo systemctl restart kubelet</p> </blockquote> <p>However, I believe that the above approach is not how Kubernetes developers intended for this feature to be enabled. Any help would be appreciated.</p> <hr> <p>[Edit] I was able to turn on the option on both api-server and controller-manager - neither gave the desired result of gpu becoming visible.</p> <p>So it's the kubelet service that needs to get this option.</p> <p>The question becomes: how can the option be set via the kubelet config file?</p>
<p>I use Ubuntu 16.04.</p>

<p>Adding <code>--feature-gates="Accelerators=true"</code> to <code>KUBELET_ARGS</code> in the file <code>/etc/kubernetes/kubelet</code> should be fine.</p>
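<p>For example, the relevant line in <code>/etc/kubernetes/kubelet</code> ends up looking something like this (the other flags are just whatever your installation already has there):</p>

<pre><code>KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=cluster.local --feature-gates=Accelerators=true"
</code></pre>

<p>Then restart the kubelet, e.g. with <code>sudo systemctl restart kubelet</code>.</p>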
<p>I'm trying to launch Google Container Engine (GKE) in a private GCP network subnet.</p>

<p>I have created a custom Google Cloud VPC, and then I created a custom private-access subnet under that VPC.</p>

<p>1) When I create a GKE cluster in the private subnet, my Kubernetes nodes are still assigned public IPs. Why is that? As per the Google documentation, a private instance should get a private IP.</p>

<p>2) If I create the cluster in the private subnet, can I connect my container application to a Google SQL instance?</p>

<p>3) Is there any recommendation that a GKE cluster should be launched in a public subnet only, and not in a private subnet?</p>
<p><strong>With lots of R&amp;D and some replies from the forum:</strong></p>

<p><strong><em>GKE only allows you to create a cluster in a network that has a default route to the internet</em></strong>. We can launch a cluster in a private subnet, but that GKE cluster will be treated as if it were in a public subnet.</p>

<p>This is because GKE relies on public IPs to access the hosted master, for now.</p>

<p>Considering the security aspects of a GKE cluster, we can deny all ports in the firewall to block access to the cluster from the internet.</p>
<p>My workflow is something on the lines of:</p> <ol> <li>Create a static Public IP on Azure and map it to a DNS name.</li> <li>Then start a service in Kubernetes which spins up a an LB to which we attached the pre-reserved public IP.</li> </ol> <p>Approach 1:</p> <pre><code>externalName: &lt;FQDN&gt; </code></pre> <p>Approach 2:</p> <pre><code>type: LoadBalancer externalIPs: - 52.232.30.160 </code></pre> <p>Approach 3:</p> <pre><code>type: LoadBalancer loadBalancerIP: 52.232.30.160 </code></pre> <p>Approach 4:</p> <pre><code>type: LoadBalancer clusterIP: 52.166.121.161 </code></pre> <p>But none of them seems to work. The LB always gets 2 public IPs - one statically assigned and the other dynamically assigned.</p> <p>I was wondering what is the right way to do this and if Azure supports assignment of public IPs to the LB.</p>
<p>try this:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  clusterIP: $clusterip
  loadBalancerIP: $externalip
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  sessionAffinity: None
  type: LoadBalancer
</code></pre>

<p>You can check the available external addresses in the frontend IP configuration of the Azure load balancer (but not the masters' one).</p>
<p>I am using <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">prometheus-operator</a> to manage a <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> deployment on my <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> cluster. The setup is working fine to extract metrics from a number of my application pods, using several ServiceMonitors which select the Kubernetes endpoints giving the network address at which metrics are published. As seems to be typical (required?) with prometheus-operator, the Prometheus configuration is generated for me based on the Kubernetes endpoints discovered based on the ServiceMonitors.</p> <p>I would also like my Prometheus deployment to retrieve <a href="http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html" rel="nofollow noreferrer">the cAdvisor metrics published by kubelet</a> on each cluster node. I've verified that kubelet on my cluster has cAdvisor and that it is enabled (by visiting port 4194 and observing the native cAdvisor web interface). However, what I'm missing is how to tell prometheus-operator to configure my Prometheus deployment with targets including each of these kubelet/cAdvisor servers.</p> <p>The only "documentation" I've found on this is <a href="https://github.com/coreos/prometheus-operator/issues/261" rel="nofollow noreferrer">a prometheus-operator github issue</a> asking why some cAdvisor metrics <em>are</em> being discovered on the poster's cluster. The explanation suggests that Kubernetes endpoints for kubelet/cAdvisor gets created by prometheus-operator somehow and then an additional ServiceMonitor finds them and causes Prometheus to be configured with additional targets. However, these Kubernetes endpoints do not exist on my Kubernetes cluster and I haven't found any information about why they ever would.</p> <p>What do I need to configure so that my prometheus-operator-configured Prometheus deployment can get these metrics?</p>
<p>There turned out to be two problems preventing the collection of the cAdvisor metrics.</p> <p>First, there is <a href="https://coreos.com/operators/prometheus/docs/latest/user-guides/cluster-monitoring.html#preparing-kubernetes-components" rel="nofollow noreferrer">an option in prometheus-operator</a> that must be enabled to turn on a feature of the operator which creates and maintains a kubelet service and endpoints (since kubelet does not have these normally). After adding <code>--kubelet-service=kube-system/kubelet --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1</code> to my operator configuration, the desired kubelet endpoints appeared (I'm not sure what the second option does or if it's necessary; both are just copied from the linked docs).</p> <p>Next, the ServiceMonitor has to be selected by the Prometheus configuration. The ServiceMonitor from the prometheus-operator docs that matches the kubelet endpoints has some labels but nothing that's guaranteed to match an already-existing Prometheus resource definition. After updating the ServiceMonitor's labels so they cause it to be selected by the existing Prometheus, the cAdvisor stats quickly become available to the Prometheus deployment.</p>
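<p>For reference, a sketch of what such a ServiceMonitor can look like. The label under <code>metadata.labels</code> must match the <code>serviceMonitorSelector</code> of your Prometheus resource, and the port names are assumptions based on the kubelet service the operator creates, so check both first (e.g. <code>kubectl -n kube-system get service kubelet -o yaml</code>); on older operator versions the apiVersion may still be <code>v1alpha1</code>:</p>

<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet
  labels:
    team: mine            # must be selected by your Prometheus resource
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      k8s-app: kubelet    # label on the operator-managed kubelet service
  namespaceSelector:
    matchNames:
    - kube-system
  endpoints:
  - port: http-metrics
    interval: 30s
  - port: cadvisor        # the cAdvisor metrics endpoint
    interval: 30s
    honorLabels: true
</code></pre>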
<p>I currently have a service that looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: httpd spec: ports: - port: 80 targetPort: 80 name: http protocol: TCP - port: 443 targetPort: 443 name: https protocol: TCP selector: app: httpd externalIPs: - 10.128.0.2 # VM's internal IP </code></pre> <p>I can receive traffic fine from the external IP bound to the VM, but all of the requests are received by the HTTP with the source IP <code>10.104.0.1</code>, which is most definitely an internal IP – even when I connect to the VM's external IP from outside the cluster.</p> <p>How can I get the real source IP for the request without having to set up a load balancer or ingress?</p>
<p>If you only have exactly one pod, you can use <code>hostNetwork: true</code> to achieve this:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: caddy spec: replicas: 1 template: metadata: labels: app: caddy spec: hostNetwork: true # &lt;--------- containers: - name: caddy image: your_image env: - name: STATIC_BACKEND # example env in my custom image value: $(STATIC_SERVICE_HOST):80 </code></pre> <p>Note that by doing this <strong>your pod will inherit the host's DNS resolver</strong> and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the <code>static</code> service at <a href="http://static" rel="nofollow noreferrer">http://static</a>. You still can access services by their cluster IP, which are injected by <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">environment variables</a>.</p>
<p>How do I interpret the memory usage returned by "kubectl top node". E.g. if it returns:</p> <pre> NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-XXX.ec2.internal 222m 11% 3237Mi 41% ip-YYY.ec2.internal 91m 9% 2217Mi 60% </pre> <p>By comparison, if I look in the Kubernetes dashboard for the same node, I get: Memory Requests: 410M / 7.799 Gi</p> <p><hr> <strong>kubernetes dashboard</strong></p> <p><img src="https://i.stack.imgur.com/uz0ZS.png" alt="[1]"></p> <hr> <p>How do I reconcile the difference?</p>
<p><code>kubectl top node</code> is reflecting the actual usage to the VM(nodes), and k8s dashboard is showing the percentage of limit/request you configured.</p> <p>E.g. Your EC2 instance has 8G memory and you actually use 3237MB so it's 41%. In k8s, you only request 410MB(5.13%), and have a limit of 470MB memory. This doesn't mean you only consume 5.13% memory, but the amount configured. </p> <pre><code> Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default kube-lego 20m (2%) 0 (0%) 0 (0%) 0 (0%) default mongo-0 100m (10%) 0 (0%) 0 (0%) 0 (0%) default web 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system event-exporter- 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-v2.0-z6xh9 100m (10%) 0 (0%) 200Mi (11%) 300Mi (17%) kube-system heapster-v1.4.0-3405140848-k6cm9 138m (13%) 138m (13%) 301456Ki (17%) 301456Ki (17%) kube-system kube-dns-3809445927-hn5xk 260m (26%) 0 (0%) 110Mi (6%) 170Mi (9%) kube-system kube-dns-autoscaler-38801 20m (2%) 0 (0%) 10Mi (0%) 0 (0%) kube-system kube-proxy-gke-staging-default- 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system kubernetes-dashboard-1962351 100m (10%) 100m (10%) 100Mi (5%) 300Mi (17%) kube-system l7-default-backend-295440977 10m (1%) 10m (1%) 20Mi (1%) 20Mi (1%) </code></pre> <p>Here you see many pods with 0 request/limit means <strong>unlimited</strong>, which didn't count in k8s dashboard but <strong>definitely</strong> consume memory.</p> <p>Sum up the memory request/limit you'll find they match k8s dashboard. <a href="https://i.stack.imgur.com/pNjNS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pNjNS.png" alt="enter image description here"></a></p>
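<p>Putting the numbers from the question side by side: <code>kubectl top node</code> reports 3237Mi in use out of roughly 7986Mi total (7.799Gi × 1024), and 3237 / 7986 ≈ 0.41, i.e. the 41% shown. The dashboard's 410M / 7.799Gi ≈ 5% is only the sum of the containers' memory <em>requests</em>, not actual consumption, which is why the two percentages differ so much.</p>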
<p>After triggering the following from my terminal, I have been unable to find a way to list Cronjobs running, but dormant and prior to their first run:</p> <pre><code>oc run pi --image=perl --schedule='*/1 * * * *' \ --restart=OnFailure --labels parent="cronjobpi" \ --command -- perl -Mbignum=bpi -wle 'print bpi(2000)' </code></pre> <p><a href="https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html</a></p> <p>For cronjobs that are once a month, I would like to see them. I was expecting something like:</p> <pre><code>oc get cronjobs </code></pre> <p>But have found nothing like this. Is there anything I can do to list jobs either through the CLI or web interface to list cronjobs specifically?</p>
<p>In OpenShift 3.6, I have no issues with running:</p> <pre><code>$ oc get cronjobs NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE hello */2 * * * * False 0 Fri, 21 Jul 2017 14:02:00 +1000 </code></pre> <p>When the cronjob time arrives a job is created from the job template.</p> <pre><code>$ oc get jobs NAME DESIRED SUCCESSFUL AGE hello-1500609360 1 1 38m hello-1500609420 1 1 37m hello-1500609600 1 1 34m hello-1500609660 1 1 32m </code></pre> <p>Don't know why at this point, but <code>oc describe cronjob</code> doesn't work in 3.6-rc.0 though. Have already asked why that is.</p>
<p>I have kubernetes installed on bare metal ubuntu server, below is the output of kubectl version command</p> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T15:38:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}</p> </blockquote> <p>I am running google shellinabox inside a docker container. It all works well with the docker container, but when I put the same image inside a kubernetes pod and expose it with a kubernetes service, my browser session to shellinabox times out after ~60 secs. Since this works fine with standalone docker container, I think this is caused by kubernetes. Is there any timeout on the kubernetes and how do I configure that.</p> <p>Any help?</p>
<p>Enable session affinity so that traffic from a given client session is directed to the same pod. Here is a sample deployment:</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: shellinabox
  labels:
    k8s-app: shellinabox
    tier: frontend
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        k8s-app: shellinabox
    spec:
      containers:
      - name: shellinabox
        image: sspreitzer/shellinabox:latest
        env:
        - name: SIAB_PASSWORD
          value: abc123
        - name: SIAB_SUDO
          value: 'true'
        ports:
        - containerPort: 4200

---
apiVersion: v1
kind: Service
metadata:
  name: shellinabox-svc
  labels:
    app: shellinabox-svc
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 4200
    targetPort: 4200
  selector:
    k8s-app: shellinabox
  sessionAffinity: ClientIP
</code></pre>
<p>Currently, I am getting the <code>ECONNREFUSED</code> where node can't connect to the database. I am not sure where the database host is supposed to end up. Should I be using <code>localhost:7474</code>, <code>127.0.0.1:7474</code>, <code>0.0.0.0:7474</code>, or am I supposed to use some host that gets generated when I get my Kubernetes pods up that I can somehow pass in? e.g. like <code>$(minikube ip) = 192.168.90.100:7474</code>, but for my database? Is there an ENV containing the database host that I am supposed to be pulling from somewhere?</p> <pre><code>const neo4jDatabase = connect({ server: 'http://&lt;what goes here!?&gt;:7474', user: process.env.DB_USER, pass: process.env.DB_PASS, }); </code></pre> <p>I seem to have gotten the pods running with the below .yml configs, but am not sure if the neo4j one is correct.</p> <pre><code>NAME READY STATUS RESTARTS AGE neo4j-4166717986-8qbwq 1/1 Running 0 41m node-481322091-g27md 1/1 Running 0 11m </code></pre> <p><strong>node.yml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: node spec: replicas: 1 template: metadata: labels: app: node tier: backend track: stable spec: containers: - name: node image: "myapp" ports: - name: nodeport containerPort: 8080 env: - name: DB_USER valueFrom: configMapKeyRef: name: config key: db_user - name: DB_PASS valueFrom: configMapKeyRef: name: config key: db_pass --- apiVersion: v1 kind: Service metadata: name: node spec: selector: app: node tier: backend ports: - protocol: TCP port: 80 targetPort: type: LoadBalancer </code></pre> <p><strong>neo4j.yml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: neo4j spec: template: metadata: labels: run: neo4j spec: containers: - name: neo4j image: "my-neo4j" ports: - containerPort: 7474 --- apiVersion: v1 kind: Service metadata: name: neo4j labels: run: neo4j spec: selector: run: neo4j ports: - port: 7474 targetPort: 7474 protocol: TCP </code></pre>
<p>Perhaps a less known feature of Kubernetes is the fact that some <em>magic</em> environment variables are injected into the running pods.</p>

<p>In your particular case, pods get an environment variable for each service in the namespace. The format is the following (the service name is upper-cased with dashes turned into underscores; the port variable only gets an extra suffix if the service port has a name):</p>

<pre><code>&lt;YOUR_SERVICE_NAME&gt;_SERVICE_HOST
&lt;YOUR_SERVICE_NAME&gt;_SERVICE_PORT
</code></pre>

<p>You can verify this is true by attaching to a running pod with <code>kubectl exec -ti &lt;your pod id&gt; sh</code> and issuing a <code>printenv</code> command.</p>

<p>Please note that if the service was created AFTER the pod, you have to delete the pod with <code>kubectl delete pod &lt;your pod id&gt;</code> to force recreation (and injection) of the environment variables.</p>

<p>In your case, the final code will look like this:</p>

<pre><code>const serviceHost = process.env.NEO4J_SERVICE_HOST;
const servicePort = process.env.NEO4J_SERVICE_PORT;

const neo4jDatabase = connect({
  server: `http://${serviceHost}:${servicePort}`,
  user: process.env.DB_USER,
  pass: process.env.DB_PASS,
});
</code></pre>
<p>I am setting up a minimal Kubernetes cluster on localhost on a Linux machine (starting with hack/local-up-cluster from the checked out repo). In my deployment file I defined an ingress, which should make the services, which are deployed in the cluster, accessible from the outside. Deployment.yml:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: foo-service-deployment spec: replicas: 2 template: metadata: labels: app: foo-service spec: containers: - name: foo-service image: images/fooservice imagePullPolicy: IfNotPresent ports: - containerPort: 7778 --- apiVersion: v1 kind: Service metadata: name: foo-service-service spec: ports: - port: 7778 selector: app: foo-service type: NodePort --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: api-gateway-ingress spec: rules: - host: http: paths: - path: /foo backend: serviceName: foo-service-service servicePort: 7779 - path: /bar backend: serviceName: bar-service-service servicePort: 7776 </code></pre> <p>I can not access the services. kubectl describe shows the following for my ingress:</p> <pre><code>Name: api-gateway-ingress Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- * /foo foo-service-service:7779 (&lt;none&gt;) /bar bar-service-service:7776 (&lt;none&gt;) Annotations: Events: &lt;none&gt; </code></pre> <p>Is it because there is not address set for my ingress, that it is not visible to outside world yet?</p>
<p>An <code>Ingress</code> resource is just a definition for your cluster how to handle ingress traffic. It needs an <em>Ingress Controller</em> to actually process these definitions; creating an Ingress resource without having deployed an Ingress controller will not have any effect.</p> <p>From <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">the documentation</a>:</p> <blockquote> <p>In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the <code>kube-controller-manager</code> binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one.</p> </blockquote> <p>There are several Ingress controllers available that you can deploy by yourself (typically, via a Deployment resource), like for example the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">NGINX ingress controller</a> (which is part of the Kubernetes project) or third-party ingress controllers like <a href="https://docs.traefik.io/providers/kubernetes-ingress/" rel="noreferrer">Traefik</a>, <a href="https://www.envoyproxy.io/" rel="noreferrer">Envoy</a> or <a href="https://github.com/appscode/voyager" rel="noreferrer">Voyager</a>.</p>
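<p>As a rough sketch (the chart name and labels may differ in your setup), deploying the NGINX ingress controller and checking that it picked up your Ingress could look like this:</p>
<pre><code># deploy an ingress controller, e.g. via its Helm chart
helm install stable/nginx-ingress --name ingress

# once its pod is Running, the Ingress should be processed and get an address
kubectl get pods | grep nginx-ingress
kubectl describe ingress api-gateway-ingress
</code></pre>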
<p>So far, I have been using Spring Boot apps (with Spring Cloud Stream) and Kafka running without any supporting infrastructure (PaaS).</p>
<p>Since our corporate platform is running on Kubernetes we need to move those Spring Boot apps into K8s to allow the apps to scale and so on. Obviously there will be more than one instance of every application, so we will define a consumer group per application to ensure the unique delivery and processing of every message.</p>
<p><strong>Kafka will be running outside Kubernetes.</strong></p>
<p>Now my concern is: since the apps deployed on k8s are accessed through the k8s service that abstracts the underlying pods, and individual application pods can't be accessed directly outside of the k8s cluster, Kafka won't know how to call individual instances of the consumer group to deliver the messages, will it?</p>
<p>How can I make them work together?</p>
<p>Kafka brokers do not push data to clients. Rather clients poll() and pull data from the brokers. As long as the consumers can connect to the bootstrap servers and you set the Kafka brokers to advertise an IP and port that the clients can connect to and poll() then it will all work fine.</p>
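<p>A minimal sketch of the broker side (hostnames are placeholders): each broker outside the cluster only has to advertise an address the pods can resolve and reach, e.g. in <code>server.properties</code>:</p>
<pre><code># what the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# what is handed out to clients; must be reachable from inside the pods
advertised.listeners=PLAINTEXT://kafka-broker-1.example.com:9092
</code></pre>
<p>The Spring Cloud Stream apps then only need their bootstrap servers pointed at those advertised addresses; consumer-group balancing across your pod replicas is handled by Kafka itself.</p>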
<p>As I'm going through Docker/Kubernetes tutorials, I notice a lot of people put nginx into a pod.</p> <p>Even after reading about nginx, I am not sure what they are using it for. Doesn't Kubernetes serve your app and handle things like load balancing and whatnot?</p> <p>Isn't something like Node.js a "web server" that can "serve static assets, do caching, and TLS/SSL"?</p> <p>So with your Node.js app on Kubernetes you have your app being served, static assets, caching especially with things like Redis, and load balancing, etc., why nginx?</p>
<p>Kubernetes is load balancing the requests to your app. Now it is up to you whether you want to expose your app directly or use a web server in front of it.</p>
<p>Putting Nginx in front of it would allow you to have things like access logs, error logs, caching, serving static files etc. There are cases where you may want to have your app exposed directly (this is usually the case when you build applications in Go).</p>
<p>So Nginx is not a must inside a pod; it depends on your architecture design whether you want it or not.</p>
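<p>For example, a minimal (illustrative) nginx config in such a pod might serve static assets itself and proxy everything else to the Node.js container listening on another port:</p>
<pre><code>server {
    listen 80;

    # serve static assets directly from nginx
    location /static/ {
        root /usr/share/nginx/html;
    }

    # everything else goes to the app container in the same pod
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
</code></pre>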
<p>I followed the instructions for installing Bash completion as given by <code>kubectl completion -h</code>:</p> <ol> <li>I installed <code>bash-completion</code> via Homebrew</li> <li>In my <code>~/.bashrc</code>, I first source <code>bash-completion</code> then output from the completion kubectl subcommand: <ul> <li><code>source $(brew --prefix)/etc/bash_completion</code></li> <li><code>source &lt;(kubectl completion bash)</code></li> </ul></li> </ol> <p>With these in place, I start up a new shell but the completion doesn't work. How do I get it working?</p>
<p>Once <code>bash-completion</code> is installed by Homebrew, it appears that its completions need to reside in <code>$(brew --prefix)/etc/bash_completion.d</code>. There you'll find a lot of other completions that come bundled. To add the completion for kubectl:</p> <pre><code>$ kubectl completion bash &gt; $(brew --prefix)/etc/bash_completion.d/kubectl </code></pre> <p>That did the trick for me.</p>
<p>I have a Node.js app that provides a service which cannot be interrupted.</p> <p>However the load to the app varies overtime and to save cost I would like the vm instance machine type to autoscale in function of the load (ie when over 80% CPU utilisation, scale up from 1 vCPU(3.75 GB memory, n1-standard-1) to 2vCPU(7.5 GB memory, n1-standard-2))</p> <p>Is this possible?</p> <p>PS: I have looked at using the container engine and kubernetes but due to how the app operates, the app cannot be replicated to multiple pods and continue working</p>
<p>You can only change the machine type of a stopped instance and an instance is considered stopped only when the instance is in the TERMINATED state. It is not possible to change the machine type of a running instance.</p> <p><a href="https://cloud.google.com/compute/docs/instances/changing-machine-type-of-stopped-instance" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/instances/changing-machine-type-of-stopped-instance</a></p>
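<p>If a short maintenance window is acceptable, the resize can be scripted roughly like this (instance name and zone are placeholders):</p>
<pre><code>gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-machine-type my-instance \
    --machine-type n1-standard-2 --zone us-central1-a
gcloud compute instances start my-instance --zone us-central1-a
</code></pre>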
<p>I'm working on a mysql users operator and I'm somewhat stuck on what's the proper way to report any issues.</p> <p>The plan is to watch on CRD for <code>MysqlUser</code> and create <code>Secret</code>s and mysql users in the specified DB. Obviously, either of that can go wrong, at which point I need to report an error.</p> <p>Some k8s object track events in the <code>status.conditions</code>. There's also the <code>Event</code> object, but I've only seen that used by kubelet / controllermanager insofar.</p> <p>If say, I have a problem creating mysql user because my operator cannot talk to mysql, but otherwise the CRD is valid, should it go to event or to CRD's status?</p>
<p>CRDs do not have a status part yet (1.7). Notifying via events is perfectly fine, that's the reason for having them in the first place.</p>
<p>I am using the ELK stack (elasticsearch, logstash, kibana) for log processing and analysis in a Kubernetes (minikube) environment. To capture logs I am using filebeat. Logs are propagated successfully from filebeat through to elasticsearch and are viewable in Kibana.</p>
<p>My problem is that I am unable to get the pod name of the actual pod issuing log records. Rather I only get the filebeat pod name, which is gathering log files, and not the name of the pod that is originating the log records.</p>
<p>The information I can get from filebeat is (as viewed in Kibana):</p>
<ul>
<li>beat.hostname: the value of this field is the filebeat pod name</li>
<li>beat.name: value is the filebeat pod name</li>
<li>host: value is the filebeat pod name</li>
</ul>
<p>I can also see/discern container information in Kibana which flows through from filebeat / logstash / elasticsearch:</p>
<ul>
<li>app: value is {log-container-id}-json.log</li>
<li>source: value is /hostfs/var/lib/docker/containers/{log-container-id}-json.log</li>
</ul>
<p>As shown above, I seem to be able to get the container ID but not the pod name.</p>
<p>To mitigate the situation, I could probably embed the pod name in the actual log message and parse it from there, but I am hoping there is a solution in which I can configure filebeat to emit actual pod names.</p>
<p>Does anyone know how to configure filebeat (or other components) to capture kubernetes (minikube) pod names in their logs?</p>
<p>My current filebeat configuration is listed below:</p>
<p>ConfigMap is shown below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat
  namespace: logging
  labels:
    component: filebeat
data:
  filebeat.yml: |
    filebeat.prospectors:
    - input_type: log
      tags:
      - host
      paths:
        - "/hostfs/var/log"
        - "/hostfs/var/log/*"
        - "/hostfs/var/log/*/*"
      exclude_files:
        - '\.[0-9]$'
        - '\.[0-9]\.gz$'
    - input_type: log
      tags:
      - docker
      paths:
        - /hostfs/var/lib/docker/containers/*/*-json.log
      json:
        keys_under_root: false
        message_key: log
        add_error_key: true
      multiline:
        pattern: '^[[:space:]]+|^Caused by:'
        negate: false
        match: after
    output.logstash:
      hosts: ["logstash:5044"]
    logging.level: info
</code></pre>
<p>DaemonSet is shown below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  template:
    metadata:
      labels:
        component: filebeat
    spec:
      containers:
      - name: filebeat
        image: giantswarm/filebeat:5.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: hostfs-var-lib-docker-containers
          mountPath: /hostfs/var/lib/docker/containers
          readOnly: true
        - name: hostfs-var-log
          mountPath: /hostfs/var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat
      - name: hostfs-var-log
        hostPath:
          path: /var/log
      - name: hostfs-var-lib-docker-containers
        hostPath:
          path: /var/lib/docker/containers
</code></pre>
<p>I have achieved what you are looking for by assigning the relevant pods to a dedicated namespace. I can now query the logs I am interested in using a combination of namespace, pod name and container name, which are all included in the generated log entries shipped by Filebeat, without any extra effort, as you can see here: <img src="https://i.stack.imgur.com/230U3.png" alt="image"> </p>
<p>I'm trying to create a persistent volume using the azureFile however I keep getting the following error.</p> <pre><code>MountVolume.SetUp failed for volume "kubernetes.io/azure-file/2882f900-d7de-11e6-affc-000d3a26076e-pv0001" (spec.Name: "pv0001") pod "2882f900-d7de-11e6-affc-000d3a26076e" (UID: "2882f900-d7de-11e6-affc-000d3a26076e") with: mount failed: exit status 32 Mounting arguments: //xxx.file.core.windows.net/test /var/lib/kubelet/pods/2882f900-d7de-11e6-affc-000d3a26076e/volumes/kubernetes.io~azure-file/pv0001 cifs [vers=3.0,username=xxx,password=xxx ,dir_mode=0777,file_mode=0777] Output: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) </code></pre> <p>I also tried mounting the share in one of the VM's on which kubernetes is running which does work.</p> <p>I've used the following configuration to create the pv/pvc/pod.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: azure-secret type: Opaque data: azurestorageaccountkey: [base64 key] azurestorageaccountname: [base64 accountname] apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce azureFile: secretName: azure-secret shareName: test readOnly: false kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: mypod image: nginx volumeMounts: - mountPath: "/mnt" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc0001 </code></pre> <p>This the version of kubernetes I'm using, which was build using the azure container service.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:16:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>I wrote <a href="http://harrytechnotes.blogspot.com/2017/08/kubernetes-3-failed-to-mount-azure-files.html" rel="nofollow noreferrer">a blog post</a> discussing the errors when mounting Azure files. The <code>permission denied</code> error might be due to the following reasons:</p>
<ol>
<li>The Azure storage account name and/or key were not encoded with the base64 algorithm;</li>
<li>The Azure storage account name and/or key were encoded with the command <code>echo</code> rather than <code>echo -n</code>;</li>
<li>The location of the Azure storage account was different from the location of the container host.</li>
</ol>
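<p>For example, the values for the secret can be generated like this (the account name and key are placeholders; <code>echo -n</code> avoids encoding a trailing newline):</p>
<pre><code>echo -n 'mystorageaccount' | base64
echo -n 'myaccountkey==' | base64
</code></pre>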
<p>I have followed the <a href="https://github.com/IdentityServer/IdentityServer4.Samples/tree/release/Quickstarts/7_JavaScriptClient" rel="noreferrer">IdentityServer4 quickstarts</a> and am able to authenticate my javascript web page (almost the same as that provided in the quickstart) with a localhosted instance of IdentityServer using the Implicit grant. Again, my IdentityServer is almost exactly the same as that provided in the quickstart mentioned above - it just has some custom user details.</p> <p>I then moved my application (C# .NET Core) into a docker container and have hosted one instance of this within a Kubernetes cluster (single instance) and created a Kubernetes service (facade over one or more 'real' services) which lets me access the identity server from outside the cluster. I can modify my JavaScript web page and point it at my Kubernetes service and it will still quite happily show the login page and it seems to work as expected.</p> <p>When I then scale the IdentityServer to three instances (all served behind a single Kubernetes service), I start running into problems. The Kubernetes service round-robins requests to each identity server, so the first will display the login page, but the second will try and handle the authentication after I press the login button. This results in the following error:</p> <blockquote> <p>System.InvalidOperationException: The antiforgery token could not be decrypted. ---> System.Security.Cryptography.CryptographicException: The key {19742e88-9dc6-44a0-9e89-e7b09db83329} was not found in the key ring. at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus&amp; status) at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean&amp; requiresMigration, Boolean&amp; wasRevoked) at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData) at Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken) --- End of inner exception stack trace --- ... 
And lots more......</p> </blockquote> <p>So - I understand that I am getting this error because the expectation is that the same IdentityServer should service the request for a page that it has shown (otherwise how would the anti-forgery token work, right?), but what I am trying to understand is how I can make this work in a replicated environment.</p> <p>I don't want to host multiple identity servers on different IP's/ports; I'm trying to build a HA configuration where if one IdentityServer dies, nothing calling the endpoint should care (because requests should be serviced by other working instances).</p> <p>I said i was using the quickstart code - this means that in the startup of the IdentityServer, there is code that looks like this...</p> <pre><code> public void ConfigureServices(IServiceCollection services) { services.AddMvc(); services.AddIdentityServer(options =&gt; { options.Events.RaiseSuccessEvents = true; options.Events.RaiseFailureEvents = true; options.Events.RaiseErrorEvents = true; }) .AddTemporarySigningCredential() .AddInMemoryIdentityResources(Config.GetIdentityResources()) .AddInMemoryApiResources(Config.GetApiResources()) .AddInMemoryClients(Config.GetClients()) </code></pre> <p>I am assuming that I need to replace the <code>.AddTemporarySigningCredential()</code> logic with a certificate that can be used by all instances of the IdentityServer which are running in my Kubernetes cluster. Not knowing how MVC really works (MVC6 is used to generate the login pages in the IdentityServer service, which I got from the example code - link above) - I want to know if just changing the code to use a proper certificate which is shared between all services will be sufficient to get a prototype HA IdentityServer cluster working?</p> <p>By working, I mean that my expectation is that I can have n number of IdentityServer instances running in a Kubernetes cluster, have a Kubernetes service to act as a facade over however many IdentityServer's I have running, and be able to authenticate using multiple IdentityServer instances which can share data to the extent that they all provide exactly the same authority to my calling web applications, and can handle each other's requests in the event that one or more instances die.</p> <p>Any help or insight would be appreciated.</p>
<p>I think I have worked this out. To resolve my issue, I have done two things:</p> <ol> <li><p>Create my own X509 certificate and shared this certificate between each of my IdentityServer's. There are lots of examples of how to create valid certificates on the net; I just used</p> <pre class="lang-cs prettyprint-override"><code>services.AddIdentityServer(...).AddSigningCredential(new X509Certificate2(bytes, &quot;password&quot;) </code></pre> <p>in my startup class.</p> </li> <li><p>Dug into the MVC framework code and worked out that I needed to implement a <a href="https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/implementation/key-storage-providers" rel="nofollow noreferrer">Key storage provider</a> in order to share state between different instances of the MVC part of Identity Server which serves up the login page.</p> </li> </ol> <p>It turns out that there is a <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.DataProtection.Redis/" rel="nofollow noreferrer">Redis backed KSP available from NuGet</a>, which means that I just need to spin up a private redis instance in my Kube cluster (which isn't accessible outside of my cluster) to share decryption secrets.</p> <pre class="lang-cs prettyprint-override"><code>/* Note: Use an IP, or resolve from DNS prior to adding redis based key store as direct DNS resolution doesn't work for this inside a K8s cluster, though it works quite happily in a Windows environment. */ var redis = ConnectionMultiplexer.Connect(&quot;1.2.3.4:6379&quot;); services.AddDataProtection() .PersistKeysToRedis(redis, &quot;DataProtection-Keys&quot;); </code></pre> <p>I can now scale my identity service to 3 instances and have a Kube service acting as a facade over all the available instances. I can watch the logs as Kubernetes round-robin's requests between the identity service, and my authentication happens just as I expect.</p> <p>Thanks to those who commented on the question prior to this post.</p>
<p>I want to debug the pod in a simple way, therefore I want to start the pod without deployment.</p> <p>But it will automatically create a deployment</p> <pre><code>$ kubectl run nginx --image=nginx --port=80 deployment &quot;nginx&quot; created </code></pre> <p>So I have to create the <code>nginx.yaml</code> file</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>And create the pod like below, then it creates pod only</p> <pre><code>kubectl create -f nginx.yaml pod &quot;nginx&quot; created </code></pre> <p>How can I specify in the command line the <code>kind:Pod</code> to avoid <code>deployment</code>?</p> <p>// I run under minikue 0.20.0 and kubernetes 1.7.0 under Windows 7</p>
<pre><code>kubectl run nginx --image=nginx --port=80 --restart=Never </code></pre> <blockquote> <p><code>--restart=Always</code>: The restart policy for this Pod. Legal values [<code>Always</code>, <code>OnFailure</code>, <code>Never</code>]. If set to <code>Always</code> a deployment is created, if set to <code>OnFailure</code> a job is created, if set to <code>Never</code>, a regular pod is created. For the latter two <code>--replicas</code> must be <code>1</code>. Default <code>Always</code> [...]</p> </blockquote> <p>see official document <a href="https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators" rel="noreferrer">https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators</a></p>
<p>I'm using minikube, starting it with</p> <pre><code>minikube start --memory 8192 </code></pre> <p>For 8Gb of RAM for the node. I'm allocating pods with the resource constraints</p> <pre><code> resources: limits: memory: 256Mi requests: memory: 256Mi </code></pre> <p>So 256Mb of RAM for each node which would give me, I would assume, 32 pods until 8Gb memory limit has been reached but the problem is that whenever I reach the 8th pod to be deployed, the 9th will never run because it's constantly OOMKilled.</p> <p>For context, each pod is a Java application with a frolvlad/alpine-oraclejdk8:slim Docker container ran with -Xmx512m -Xms128m (even if JVM was indeed using the full 512Mb instead of 256Mb I would still be far from the 16 pod limit to hit the 8Gb cap). </p> <p>What am I missing here? Why are pods being OOMKilled with apparently so much free allocatable memory left?</p> <p>Thanks in advance</p>
<p>You must understand the way requests and limits work.</p>
<p>Requests are the requirements for the amount of allocatable resources required on the node for a pod to get scheduled on it. These will not cause OOMs; they will cause the pod not to get scheduled.</p>
<p>Limits, on the other hand, are hard limits for a given pod. The pod will be capped at this level. So, even if you have 16GB RAM free, but have a 256MiB limit on it, as soon as your pod reaches this level, it will experience an OOM kill.</p>
<p>If you want, you can define only requests. Then, your pods will be able to grow to full node capacity, without being capped.</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a></p>
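<p>For example, a requests-only spec (illustrative values) lets the container burst above 256Mi instead of being OOM killed at that level:</p>
<pre><code>resources:
  requests:
    memory: 256Mi
</code></pre>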
<p>In order to check status, I started <code>busybox</code> in kubernetes using an interactive shell.</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
/ # exit

$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Error from server (AlreadyExists): pods "busybox" already exists
</code></pre>
<p>When I exit from the shell, I expect the pod to be deleted as well, but it remains there in Completed status.</p>
<pre><code>$ kubectl get pods -a
NAME      READY     STATUS      RESTARTS   AGE
busybox   0/1       Completed   0          58m
</code></pre>
<p>I have to delete the pod manually, which is annoying.</p>
<p>Do we have a simple parameter I can use to ask k8s to delete the pod for this one-task job?</p>
<p>Just add <code>--rm</code>:</p> <pre><code>$ kubectl run busybox -i --tty --image=busybox --restart=Never --rm -- sh If you don't see a command prompt, try pressing enter. / # exit $ kubectl get pod busybox Error from server (NotFound): pods "busybox" not found </code></pre> <blockquote> <p><code>--rm=false</code>: If true, delete resources created in this command for attached containers.</p> </blockquote>
<p>I deploy Redis container via Kubernetes and get the following warning:</p> <blockquote> <p>WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled</p> </blockquote> <p>Is it possible to disable THP via Kubernetes? Perhaps via init-containers?</p>
<p>Yes, with init-containers it's quite straightforward:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: thp-test spec: restartPolicy: Never terminationGracePeriodSeconds: 1 volumes: - name: host-sys hostPath: path: /sys initContainers: - name: disable-thp image: busybox volumeMounts: - name: host-sys mountPath: /host-sys command: ["sh", "-c", "echo never &gt;/host-sys/kernel/mm/transparent_hugepage/enabled"] containers: - name: busybox image: busybox command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"] </code></pre> <blockquote> <p>Demo (notice that this is a system wide setting):</p> <pre><code>$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled always [madvise] never $ kubectl create -f thp-test.yaml pod "thp-test" created $ kubectl logs thp-test always madvise [never] $ kubectl delete pod thp-test pod "thp-test" deleted $ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled always madvise [never] </code></pre> </blockquote>
<p>As host-gw uses IP routes to subnets via remote machine IPs, it looks like a pure L3 network solution.</p>
<p>Why, then, does it need direct L2 connectivity between hosts?</p>
<p><code>host-gw</code> adds route table entries on hosts, so that hosts know how to route container network packets.</p>
<p>This works on L2 because it only concerns <code>hosts</code>, <code>switches</code> and <code>containers</code>: <code>switches</code> do not care about IPs and routes, <code>hosts</code> know the <code>containers</code> exist and how to route to them, and <code>containers</code> just send and receive data.</p>
<p>If <code>hosts</code> are on different networks, L3 is introduced and <code>routers</code> are involved. <code>routers</code> have no idea that <code>containers</code> exist, so any container packet will be dropped, making communication impossible.</p>
<p>Of course, you can add route table entries on <code>routers</code>, but that is out of <code>flannel</code>'s control.</p>
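<p>For illustration only (addresses made up), the entries host-gw installs are plain next-hop routes, which only work if the next hop is directly reachable on the same L2 segment:</p>
<pre><code># on host 192.168.0.11
$ ip route
10.244.1.0/24 via 192.168.0.12 dev eth0   # pod subnet of host .12
10.244.2.0/24 via 192.168.0.13 dev eth0   # pod subnet of host .13
</code></pre>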
<p>We are having problems on kube-proxy loading iptables. It locks docker when there's a large number of services. Is there a way to tune this with its parameters?</p> <p>From its documentation, I can only find --iptables-min-sync-period and --iptables-sync-period might be related? What's the recommended values for these in a large network?</p>
<p>We spent the last few weeks looking at this, too. I assume you are also seeing big CPU spikes (or even iptables constantly at 100%) in networks with large amounts of ingress rules/routes.</p>
<p>That was identified a few releases ago and in the 1.5 cycle we got a few patches in that reduce the number of iptables calls being made. In addition to that, we have introduced the min-sync-period flag, which guarantees iptables will only run every X period.</p>
<p>Our tests set --iptables-min-sync-period=30s, but we haven't decided yet what to do by default in OpenShift. Hope to have a more formal position soon.</p>
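<p>For reference, a hedged example of how those flags are passed (the values are illustrative, and where you set them depends on how kube-proxy is launched in your cluster):</p>
<pre><code>kube-proxy --iptables-sync-period=1m --iptables-min-sync-period=30s ...
</code></pre>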
<p>That slave-agent pod always seems to die and go away very quickly after an error in my Jenkinsfile. Is there a way to exec into it and keep it alive while I'm in it? I am running Jenkins on Kubernetes using Helm</p>
<p>If the pod is already dead you can't <code>kubectl exec</code> into the container.</p>
<p>However, you can ssh directly into the node that ran your pod and inspect the (now stopped) container directly. (You can't <code>docker exec</code> into it once it has stopped.)</p>
<p>Something like this:</p>
<pre><code># this pod will die pretty quickly
$ kubectl run --restart=Never --image=busybox deadpod -- sh -c "echo quick death | tee /artifact"
pod "deadpod" created

$ kubectl describe pod deadpod
Name:        deadpod
Namespace:   default
Node:        nodexxx/10.240.0.yyy
Containers:
  deadpod:
    Container ID:  docker://zzzzzzzzz
[...]

$ ssh nodexxx
</code></pre>
<p><strong>Once you have ssh'd into the node</strong> you have several debugging options.</p>
<p>Get the output:</p>
<pre><code>nodexxx:~# docker logs zzzz
quick death
</code></pre>
<p>Examine the filesystem:</p>
<pre><code>nodexxx:~# mkdir debug; cd debug
nodexxx:~/debug# docker export zzzz | tar xv
[...]
nodexxx:~/debug# ls -l; cat artifact
[...]
quick death
</code></pre>
<p>Create an image from the container, create a new container from that image and get a shell:</p>
<pre><code>nodexxx:~# docker commit zzzz debug
nodexxx:~# docker run -it debug sh
/ # cat /artifact
quick death
</code></pre>
<p>I have deployed kubernetes cluster with kops....</p> <pre><code>kops create cluster --state=${STATE_STORE} --cloud=aws --zones=eu-west-2a,eu-west-2b --node-count=2 --node-size=t2.small --master-size=t2.small ${NAME} </code></pre> <p>Is any way to change node-size after deployment? without deleting cluster...</p>
<p>Yes, this is possible.</p>
<p>You need to run the command: <code>kops edit ig --name=CHANGE_TO_CLUSTER_NAME nodes</code></p>
<p>This will bring up an editor screen similar to:</p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-07-01T12:06:22Z"
  labels:
    kops.k8s.io/cluster: URL_OF_CLUSTER
  name: nodes
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-
  machineType: m3.large
  maxSize: 7
  minSize: 3
  role: Node
  subnets:
  - eu-west-1a
</code></pre>
<p>You can then make your edits to the machine type and the min/max nodes required.</p>
<p>Once you're done, exit out of the editor like you normally would. You will then need to run the command:</p>
<p><code>kops update cluster CHANGE_TO_CLUSTER_NAME --yes</code></p>
<p>That'll begin the update process - bear in mind your instances are going to disappear and any pods running on those instances will terminate. The scheduler should put them on another node if it can fit them on.</p>
<p>I want to create a loadBalancer service on kubernetes that exposes a large range of ports. As you can't do that on kubernetes yet (<a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/23864</a>).</p> <p>I have manually entered a range of port by having a yaml file in the following format:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: service spec: ports: - name: port10000 port: 10000 protocol: UDP . . . - name: port40000 port: 40000 protocol: UDP selector: app: app-label type: LoadBalancer </code></pre> <p>I get the following error:</p> <blockquote> <p>Error from server (InternalError): error when creating "service-udp.yml": Internal error occurred: failed to allocate a nodePort: range is full</p> </blockquote> <p>Is it possible to increase the range of ports available for a service? And if so, how?</p>
<p>This is controlled by the <code>--service-node-port-range portRange</code> argument to <code>kube-apiserver</code> - the way to change that depends on your environment.</p> <p>Keep in mind that nodePorts are meant to be used by load balancers as building blocks. So what you are trying to do is most likely not the best practice.</p> <p>Hope this helps..</p>
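<p>For completeness, the flag itself looks like this (the range is illustrative); where you set it depends on how kube-apiserver is started in your cluster, e.g. a systemd unit file or a static pod manifest:</p>
<pre><code>kube-apiserver ... --service-node-port-range=10000-40000
</code></pre>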
<p>I’m trying to change the value of a variable if another variable it set by combining the two with a dash in the middle, I’m not sure of the syntax to do this, I’m thinking of somethings like:</p> <pre><code>{{- $serviceNamespace := .Values.serviceNamespace -}} {{- $serviceTag := .Values.serviceTag -}} {{- if $serviceTag}} {{- $serviceNamespace := .Values.serviceNamespace "-" .Values.serviceTag -}} {{- end}} </code></pre> <p>Is this correct? if <code>serviceNamespace</code> was <code>hello</code> and <code>serviceTag</code> was <code>1.0.0</code> would I end up with <code>serviceNamespace</code> being <code>hello-1.0.0</code>?</p>
<p>For concatenation just use printf:</p> <pre><code>{{- $serviceNamespace := printf "%s-%s" .Values.serviceNamespace .Values.serviceTag -}} </code></pre>
<p>I can sort my Kubernetes pods by name using:</p> <pre><code>kubectl get pods --sort-by=.metadata.name </code></pre> <p>How can I sort them (or other resoures) by age using <code>kubectl</code>?</p>
<p>Pods have status, which you can use to find out startTime.</p> <p>I guess something like <code>kubectl get po --sort-by=.status.startTime</code> should work.</p> <p>You could also try:</p> <ol> <li><code>kubectl get po --sort-by='{.firstTimestamp}'</code>.</li> <li><code>kubectl get pods --sort-by=.metadata.creationTimestamp</code> Thanks @chris</li> </ol> <p>Also apparently in Kubernetes 1.7 release, sort-by is broken.</p> <p><a href="https://github.com/kubernetes/kubectl/issues/43" rel="noreferrer">https://github.com/kubernetes/kubectl/issues/43</a></p> <p>Here's the bug report : <a href="https://github.com/kubernetes/kubernetes/issues/48602" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/48602</a></p> <p>Here's the PR: <a href="https://github.com/kubernetes/kubernetes/pull/48659/files" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/48659/files</a></p>
<p>I have installed kubernetes using minikube on an Ubuntu 16.04 machine. I have also installed kubernetes-dashboard. When I try accessing the dashboard I get</p>
<pre><code>Waiting, endpoint for service is not registered yet
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
.....
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
</code></pre>
<p>However, when I try a <code>kubectl get pods --all-namespaces</code> I get the below output</p>
<pre><code>kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube             1/1       Running   0          11m
kube-system   kube-dns-1301475494-xtb3b               3/3       Running   0          8m
kube-system   kubernetes-dashboard-2039414953-dvv3m   1/1       Running   0          9m
kube-system   kubernetes-dashboard-2crsk              1/1       Running   0          8m

kubectl get endpoints --all-namespaces
NAMESPACE     NAME                      ENDPOINTS                     AGE
default       kubernetes                10.0.2.15:8443                11m
kube-system   kube-controller-manager   &lt;none&gt;                        6m
kube-system   kube-dns                  172.17.0.4:53,172.17.0.4:53   8m
kube-system   kube-scheduler            &lt;none&gt;                        6m
kube-system   kubernetes-dashboard      &lt;none&gt;                        9m
</code></pre>
<p>How can I fix this issue? I don't seem to understand what is wrong. I am completely new to kubernetes.</p>
<p>You need to run <code>minikube dashboard</code>. You shouldn't install dashboard separately; it comes with minikube.</p>
<p>I am getting a couple of errors with Helm that I can not find explanations for elsewhere. The two errors are below.</p> <pre><code>Error: no available release name found Error: the server does not allow access to the requested resource (get configmaps) </code></pre> <p>Further details of the two errors are in the code block further below.</p> <p>I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 &amp; K8SN02).</p> <p>This was created using kubeadm using Weave network for 1.6+.</p> <p>Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).</p> <p>Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.</p> <pre><code>root@K8SMST01:/home/blah/charts# helm version Client: &amp;version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"} root@K8SMST01:/home/blah/charts# helm install ./mychart Error: no available release name found root@K8SMST01:/home/blah/charts# helm ls Error: the server does not allow access to the requested resource (get configmaps) </code></pre> <p>Here are the running pods:</p> <pre><code>root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01 kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01 kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01 kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01 kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01 kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01 kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02 kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01 tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01 weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02 weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01 weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01 </code></pre>
<p><a href="https://github.com/helm/helm/issues/2224#issuecomment-356344286" rel="noreferrer">The solution given by kujenga from the GitHub issue</a> worked without any other modifications:</p> <pre><code>kubectl create serviceaccount --namespace kube-system tiller kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' </code></pre>
<p>I am installing the dashboard service using helm. It creates pods and a service named like this: <strong><em>kubernetes-dashboard-kubernetes-dashboard</em></strong>. How can I remove the duplicate <strong><em>kubernetes-dashboard</em></strong> word from the name?</p>
<pre><code>helm install stable/kubernetes-dashboard --name kubernetes-dashboard --namespace kube-system
</code></pre>
<p>output</p>
<pre><code>k get svc -n kube-system |grep dashboard
kubernetes-dashboard-kubernetes-dashboard   10.96.114.17   &lt;none&gt;   80/TCP   26m
</code></pre>
<p>thanks SR</p>
<p>I'm afraid you cannot achieve this without changes in the kubernetes-dashboard helm chart.</p>
<p>According to this code,</p>
<pre><code>_helpers.tpl
...
{{ define "fullname" }}
{{- $name := default "kubernetes-dashboard" .Values.nameOverride -}}
{{ printf "%s-%s" .Release.Name $name | trunc 63 -}}
{{ end }}
...
svc.yaml
...
metadata:
  name: {{ template "fullname" . }}
...
</code></pre>
<p>the service/pod name is concatenated from the release name and "kubernetes-dashboard".</p>
<p>You can adjust the "fullname" template to fix this.</p>
<p>Kubernetes - is there a repository for centos that works? I don't want to use git clone, I would prefer to use rpm packages. Is this package compatible with the new docker 17-0XX if it exists?</p>
<p>I use below repo to install the kubernetes rpm in <strong>CentOS Linux release 7.3.1611 (Core)</strong></p> <p><strong>add this repo</strong></p> <pre><code>cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF </code></pre> <p>Use this command to install</p> <pre><code>yum install -y kubelet kubectl kubernetes-cni kubeadm </code></pre> <p>my docker version.</p> <pre><code>Server: Version: 1.12.6 API version: 1.24 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64 </code></pre>
<p>I’ve 3 servers: 1. kubernetes Master, 2. kubernetes Minion1, 3. kubernetes Minion2</p>
<p>A replication controller (with an http service) is running on the kubernetes master with 4 replicas (pods) and a cluster IP 10.254.x.x.</p>
<p>The cluster IP can be accessed via a busybox pod created with a kubectl command.</p>
<p>Now I’ve installed docker on the kubernetes Master server and started a container using the docker run command. So now my <strong>question is: how can this docker container communicate with the kubernetes cluster IP?</strong></p>
<p>The actual goal is: the docker container will act as a reverse proxy for the kubernetes cluster IP.</p>
<pre><code>Docker container IP : 172.17.x.x
Kubernetes Pods IP : 172.17.x.x
Kubernetes cluster IP : 10.254.x.x
</code></pre>
<p>Thanks.</p>
<p>As @Grimmy stated, I also think that is accomplished by the use of an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress resource and an ingress controller</a>.</p> <p>For example, a pod with nginx and an ingress controller, can be used as a load balancer between the internet and your pods. </p>
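<p>A minimal sketch of such an Ingress (the host and service names are placeholders, API version matching that Kubernetes era):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-http-service
          servicePort: 80
</code></pre>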
<p>I have set a kubernetes (version 1.6.1) cluster with three servers in control plane. Apiserver is running with the following config:</p> <pre><code>/usr/bin/kube-apiserver \ --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --advertise-address=x.x.x.x \ --allow-privileged=true \ --audit-log-path=/var/lib/k8saudit.log \ --authorization-mode=ABAC \ --authorization-policy-file=/var/lib/kubernetes/authorization-policy.jsonl \ --bind-address=0.0.0.0 \ --etcd-servers=https://kube1:2379,https://kube2:2379,https://kube3:2379 \ --etcd-cafile=/etc/etcd/ca.pem \ --event-ttl=1h \ --insecure-bind-address=0.0.0.0 \ --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \ --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \ --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \ --kubelet-https=true \ --service-account-key-file=/var/lib/kubernetes/ca-key.pem \ --service-cluster-ip-range=10.32.0.0/24 \ --service-node-port-range=30000-32767 \ --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \ --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \ --token-auth-file=/var/lib/kubernetes/token.csv \ --v=2 \ --apiserver-count=3 \ --storage-backend=etcd2 </code></pre> <p>Now I am running kubelet with following config:</p> <pre><code>/usr/bin/kubelet \ --api-servers=https://kube1:6443,https://kube2:6443,https://kube3:6443 \ --allow-privileged=true \ --cluster-dns=10.32.0.10 \ --cluster-domain=cluster.local \ --container-runtime=docker \ --network-plugin=kubenet \ --kubeconfig=/var/lib/kubelet/kubeconfig \ --serialize-image-pulls=false \ --register-node=true \ --cert-dir=/var/lib/kubelet \ --tls-cert-file=/var/lib/kubernetes/kubelet.pem \ --tls-private-key-file=/var/lib/kubernetes/kubelet-key.pem \ --hostname-override=node1 \ --v=2 </code></pre> <p>This works great as long as kube1 is running. If I take kube1 down, the node does not communicate with kube2 or kube3. It always takes up the first apiserver passed to the <code>--api-servers</code> flag and does not failover in case the first apiserver crashes. What is the correct way to do a failover in case one of the apiserver fails?</p>
<p>The <code>--api-servers</code> flag is deprecated. It's no longer in the <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">documentation</a>. kubeconfig is the brand new way to point kubelet to kube-apiserver.</p> <p>The kosher way to do this today is to deploy a Pod with nginx on each worker node (ie. the ones running kubelet) that load-balances between the 3 kube-apiservers. nginx will know when one master goes down and not route traffic to it; that's its job. The kubespray project uses this method.</p> <p>The 2nd, not so good way, is to use DNS RR. Create a DNS "A" record for the IPs of the 3 masters. Point kubelet to this RR hostname instead of the 3x IPs. Each time kubelet contacts a master, it will be routed to the IP in the RR list. This technique isn't robust because traffic will still be routed to the downed node, so the cluster will experience intermittent outage.</p> <p>The 3rd, and more complex method imho, is to use keepalived. keepalived uses VRRP to ensure that at least one node owns the Virtual IP (VIP). If a master goes down, another master will hijack the VIP to ensure continuity. The bad thing about this method is that load-balancing doesn't come as a default. All traffic will be routed to 1 master (ie. the primary VRRP node) until it goes down. Then the secondary VRRP node will take over. You can see the <a href="https://github.com/kubernetes/contrib/tree/master/keepalived-vip" rel="nofollow noreferrer">nice write-up I contributed at this page</a> :)</p> <p>More details about kube-apiserver HA <a href="https://github.com/kubernetes/kubernetes/issues/18174" rel="nofollow noreferrer">here</a>. Good luck!</p>
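<p>For the nginx option, a minimal sketch of the TCP load-balancing config (requires nginx built with the stream module; hostnames are placeholders), with kubelet then pointed at <code>https://127.0.0.1:6443</code>:</p>
<pre><code>stream {
    upstream kube_apiserver {
        server kube1:6443;
        server kube2:6443;
        server kube3:6443;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass kube_apiserver;
    }
}
</code></pre>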
<p>I would like to create review apps for my open source application which already has a docker file.Is there any way where I could write an app.json file to deploy a Docker container for new pull requests for review apps? or is there any way to use heroku container registry and runtime in review apps?</p>
<p>Using heroku's new <a href="https://devcenter.heroku.com/articles/heroku-yml-build-manifest" rel="nofollow noreferrer">build manifest</a>, you can use the Build API to deploy your docker apps. This means you can use <code>git push</code> to build a docker app. You can also use GitHub Sync, and Review Apps.</p>
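<p>As a hedged sketch (double-check the current Heroku docs for the exact schema), you add a <code>heroku.yml</code> at the repo root that points at your existing Dockerfile:</p>
<pre><code>build:
  docker:
    web: Dockerfile
</code></pre>
<p>and, for review apps to pick it up, declare the container stack in <code>app.json</code>:</p>
<pre><code>{
  "stack": "container"
}
</code></pre>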
<p>How to know the kubernetes version that is installed by minikube? I am aware of the kubectl version and the minikube version.</p>
<p>Once your minikube is running, you can use <code>kubectl version</code> command to see the version of kubernetes server. </p> <p>Also, when you start minikube using <code>minikube start</code>, kubernetes version is shown in stdout.</p> <pre><code>$ minikube start Starting local Kubernetes v1.6.0 cluster... </code></pre> <p>You can supply the kubernetes version you want minikube to start by mentioning the <code>--kubernetes-version (The kubernetes version that the minikube VM will use (ex: v1.2.3))</code> flag.</p> <pre><code>$ minikube start --kubernetes-version v1.7.0 </code></pre>
<p>Would the following directory structure work?</p> <p>The goal is to have Jenkins trigger off GitHub commits and run Multi-branch Pipelines that build and test containers. (I have everything running on Kubernetes, including Jenkins)</p> <pre><code>/project .git README.md Jenkinsfile /docker_image_1 Dockerfile app1.py requirements.txt /unit_tests unit_test1.py unit_test2.py /docker_image_2 Dockerfile app2.py requirements.txt /unit_tests unit_test1.py unit_test2.py /k8s /dev deployment.yaml /production deployment.yaml /component_tests component_tests.py </code></pre> <ol> <li>Is the k8s folder that has the deployment.yamls in the right place?</li> <li>Are the test folders in good locations? The tests in "component_tests" will ideally be doing more end-to-end integrated testing that involve multiple containers</li> <li>I see a lot of repos have Jenkins file and Dockerfile in the same directory level. What are the pros and cons of that?</li> </ol>
<p>There's no good answer to this question currently. </p> <p>Kubernetes provides a standard API for deployment, but as a technology it relies on additional 3rd party tooling manage the build part of the ALM workflow. There a lots of options available for turning your source code into a container running on Kubernetes. Each has it's own consequences for how your source code is organised and how a deployment might be invoked from a CI/CD server like Jenkins.</p> <p>I provide the following collection of options for your consideration, roughly categorized. Represents my current evaluation list.</p> <h2>"Platform as a service" tools</h2> <p>Tooling the manages the entire ALM lifecycle of your code. Powerful but more complex and opinionated.</p> <ul> <li><a href="https://deis.com/workflow/" rel="nofollow noreferrer">Deis workflow</a></li> <li><a href="https://www.openshift.org/" rel="nofollow noreferrer">Openshift</a></li> <li><a href="https://fabric8.io/" rel="nofollow noreferrer">Fabric8 (See also Openshift.io)</a></li> </ul> <h2>Build and deploy tools</h2> <p>Tools useful for the code/test/code/retest workflow common during development. Can also be invoked from Jenkins to abstract your build process.</p> <ul> <li><a href="https://github.com/Azure/draft" rel="nofollow noreferrer">Draft</a></li> <li><a href="http://forge.sh/" rel="nofollow noreferrer">Forge</a></li> <li><a href="http://kompose.io/" rel="nofollow noreferrer">Kcompose</a></li> <li><a href="https://maven.fabric8.io/" rel="nofollow noreferrer">Fabric8 Maven plugin (Java)</a></li> <li><a href="https://github.com/commercialtribe/psykube" rel="nofollow noreferrer">Psykube</a></li> </ul> <h2>YAML templating tools</h2> <p>The kubernetes YAML was never designed to be used by human beings. Several initatives to make this process simpler and more standardized.</p> <ul> <li><a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a></li> <li><a href="http://ksonnet.heptio.com/" rel="nofollow noreferrer">Ksonnet</a></li> </ul> <h2>Deployment monitoring tools</h2> <p>These tools have conventions where they expect to find Kubernetes manifest files (or helm charts) located in your source code repository.</p> <ul> <li><a href="https://keel.sh/" rel="nofollow noreferrer">Keel</a></li> <li><a href="https://github.com/box/kube-applier" rel="nofollow noreferrer">Kube-applier</a></li> <li><a href="https://github.com/weaveworks/kubediff" rel="nofollow noreferrer">Kubediff</a></li> <li><a href="https://github.com/Eneco/landscaper" rel="nofollow noreferrer">Landscaper</a></li> <li><a href="https://invisionapp.github.io/kit/" rel="nofollow noreferrer">Kit</a> </li> </ul> <h2>CI/CD tools with k8s support</h2> <ul> <li><a href="https://www.spinnaker.io/" rel="nofollow noreferrer">Spinnaker</a></li> <li><a href="https://about.gitlab.com/" rel="nofollow noreferrer">Gitlab</a></li> <li><a href="https://plugins.jenkins.io/kubernetes-ci" rel="nofollow noreferrer">Jenkins + Kubernetes CI plugin</a></li> <li><a href="https://wiki.jenkins.io/display/JENKINS/Kubernetes+Plugin" rel="nofollow noreferrer">Jenkins + Kubernetes plugin</a></li> </ul>
<p>I created a sample setup of a Kubernetes cluster on Azure using Azure Container Service, and it did its job just fine. I set up several containers and services within Kubernetes, no problem with that.</p>
<p>What makes me fuzzy is that if, say, I run <strong>several nginx containers and want to expose them via different external IPs</strong>, I can't do that as far as I know and understand.</p>
<p>The <a href="https://learn.microsoft.com/ru-ru/azure/container-service/kubernetes/container-service-kubernetes-load-balancing" rel="nofollow noreferrer">Azure approach</a> is that I can set up a Service with <code>type: LoadBalancer</code> and as I create it Azure will "connect" a LoadBalancer attached to the client nodes to my service.</p>
<p>But this way I can only attach <strong>one</strong> external IP to <strong>all</strong> of my services, which is not something I need. In my example, when I run several nginx containers I'd like to expose their 80/tcp ports on different IPs so I can use these IPs in DNS, not on different ports of a single IP.</p>
<p>How can I overcome that? <strong>Please help!</strong></p>
<p>In Azure container service, to expose a kubernetes service to the Internet, we should use the Azure Load Balancer. As Radek said, you can run several containers in one pod and use the same load balancer to keep HA.</p>
<p>If you want to expose several containers to the Internet with different public IP addresses, we can create several pods and expose each of them to the Internet; in this way, the containers get different public IP addresses.</p>
<p>The relationship between pods, containers and nodes looks like this: <a href="https://i.stack.imgur.com/zMvUV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zMvUV.png" alt="enter image description here"></a></p>
<p>We create several containers in one pod, several pods in one node (host), and several pods work for one service. A service works as a cluster: one service with one public IP address.</p>
<p>So, if you want to create several nginx containers with different public IP addresses, we can create several services to achieve this:</p>
<p>Create one or two nginx containers in one service, and expose several services to the Internet.</p>
<pre><code>root@k8s-master-7273A780-0:~# kubectl run jasonnginx --replicas=1 --image nginx
root@k8s-master-7273A780-0:~# kubectl run mynginx --replicas=2 --image nginx
root@k8s-master-7273A780-0:~# kubectl expose deployments mynginx --port=80 --type=LoadBalancer
root@k8s-master-7273A780-0:~# kubectl expose deployments jasonnginx --port=80 --type=LoadBalancer

root@k8s-master-7273A780-0:~# kubectl get svc
NAME         CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
jasonnginx   10.0.114.116   52.168.176.18   80:30153/TCP   5m
kubernetes   10.0.0.1       &lt;none&gt;          443/TCP        15m
mynginx      10.0.205.127   13.82.102.171   80:31906/TCP   6m

root@k8s-master-7273A780-0:~# kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
jasonnginx-1417538491-v79mw   1/1       Running   0          20m
mynginx-1396894033-78njj      1/1       Running   0          21m
mynginx-1396894033-pmhjh      1/1       Running   0          21m
</code></pre>
<p>We can find the load balancer frontend IP settings (two public IP addresses) via the Azure portal: <a href="https://i.stack.imgur.com/plpL7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/plpL7.png" alt="enter image description here"></a></p>
<p>I am using the experimental checkpoint feature to start up my app in the container and save its state. I do so because tests on the app cannot be run in parallel and startup takes long. I want to migrate to kubernetes to manage the test containers:</p>
<ul>
<li>Build and start up an app in the container</li>
<li>Save state</li>
<li>Spin up X instances from the saved container</li>
<li>Run one test on each container</li>
</ul>
<p>How do I use Kubernetes to do that? I use GCP.</p>
<p>Container state migration (CRIU) is a feature that Docker has in a experimental state. According to Kubernetes devs (<a href="https://github.com/kubernetes/kubernetes/issues/3949" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/3949</a>), looks like it is not something Kubernetes will support in the short term. Therefore, you currently cannot migrate pods with checkpoints (i.e. it will need to start again). Not sure if creating a container image of your started application could help, that would depend on how the container image was created.</p>
<p>If I have installed K8s using minikube, where will the master node components be installed (e.g. the API server, replication controller, etcd, etc.)? On the host, or in the VM? I understand the worker node is the VM configured by minikube.</p>
<p>Everything is installed in the Virtual Machine. Based on the localkube project, it is creating an All-in-one single-node cluster. </p> <p>More information here: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md</a></p>
<p>I was testing Hyperledger Fabric v1.0.0 with kubernetes. The fabric contains 2 orgs, 4 peers, 1 orderer, and a cli. Things go well until I instantiate the chaincode in the cli. The peer's error message is in the picture: it says the image is missing, but the image was just created successfully. What's the problem and how can I solve it? <a href="https://i.stack.imgur.com/CSbIm.png" rel="nofollow noreferrer">peer's error message</a></p>
<p>The answer can be found in the Hyperledger RocketChat on the <a href="https://chat.hyperledger.org/channel/fabric-kubernetes?msg=v27L9igqJZRDW9wQz" rel="nofollow noreferrer">#fabric-kubernetes channel</a>.</p>
<p>"you basically need the peer to surface its dynamic IP (that's what <code>AUTOADDRESS</code> does) and then tell the chaincode to basically ignore the x509 CN (that's what <code>SERVERHOSTOVERRIDE</code> does), and the other part is you need the peer pod to be privileged so it has the rights to drive the docker-api".</p>
<p>Basically, there's lots to be learned from following the discussion from that point.</p>
<p>I have tried two different applications, both consisting of a web application frontend that needs to connect to a relational database. </p> <p>In both cases the frontend application is unable to connect to the database. In both instances the database is also running as a container (pod) in OpenShift. And the web application uses the service name as the url. Both applications have worked in other OpenShift environments.</p> <p>Version</p> <ul> <li>OpenShift Master: v1.5.1+7b451fc </li> <li>Kubernetes Master: v1.5.2+43a9be4</li> <li>Installed using Ansible Openshift</li> <li>Single node, with master on this node</li> <li>Host OS: CentOS 7 Minimal</li> </ul> <p>I am not sure where to look in OpenShift to debug this issue. The only way I was able to reach the db pod from the web pod was using the cluster ip address.</p>
<p>In order for internal DNS resolution to work, you need to ensure that <code>dnsmasq.service</code> is running and that <code>/etc/resolv.conf</code> contains the IP address of the OCP node itself instead of other DNS servers (those should be in <code>/etc/dnsmasq.d/origin-upstream-dns.conf</code>).</p> <p>Example:</p> <pre><code># ip a s eth0
...
    inet 10.0.0.1/24

# cat /etc/resolv.conf
...
nameserver 10.0.0.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
</code></pre> <p>^^ note the dispatcher script in the /etc/resolv.conf</p> <pre><code># systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
   Active: active (running)

# cat /etc/dnsmasq.d/origin-dns.conf
no-resolv
domain-needed
server=/cluster.local/172.18.0.1
</code></pre> <p>^^ this IP should be the kubernetes service IP (<code>oc get svc -n default</code>)</p> <pre><code># cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=&lt;dns ip 1&gt;
server=&lt;dns ip 2&gt;
</code></pre> <p>If OpenShift is running on some kind of OpenStack instance, AWS or similar, it might happen that <code>cloud-init</code> does not trigger the NetworkManager dispatcher script, and therefore <code>resolv.conf</code> is not modified to point to <code>dnsmasq</code>. Try restarting the whole network, e.g.:</p> <pre><code># systemctl restart network.service
</code></pre> <p>I hope this helps.</p>
<p>As "docker" means many things:</p> <ul> <li>the Docker images</li> <li>the tools to run the images (the Docker engine)</li> </ul> <p>I am confused and it's not clear to me: is the Docker engine mandatory in order for Kubernetes to run Docker images?</p> <p>If not, is it still recommended to run Kubernetes with the Docker engine?</p> <p>I found in the Kubernetes docs that rkt would be the alternative to avoid dockerd, but does it hold up as a sound configuration for a production cluster?</p> <p>Moreover, since the licensing change in Docker, what are people using in production environments: RHEL + Docker EE + Kubernetes?</p> <p>EDIT: clarified the question to target production environments</p>
<p>Yes, Kubernetes often comes bundled with the things necessary to run containers, but it is itself a container orchestration system, so you're going to need whatever base container engine is required to run the kind of containers you want (Docker or otherwise).</p> <p>I would personally recommend Docker unless you're using some other container solution, simply because it is currently the most fleshed out, with the most support.</p>
<p>I can provide an Ubuntu machine with ssh via Docker, and a user can set up their environment in it. For example, they apt-get install something and modify their bashrc, vimrc and so on. Once I restart the computer, the user still has the same environment after the restart finishes.</p> <p>How can I provide the same service with k8s? Once I restart the node, it will create another pod on another computer. But that pod's environment is based on the init image, not the latest environment from the user.</p> <p>The naive way would be to mount every volume on shared storage (PV + PVC), such as /bin /lib /opt /usr /etc /lib64 /root /var /home and so on (each possible directory may be affected by an installation). What is the best practice, or is there another way to do this?</p>
<p>@Saket is correct.</p> <p>If a docker container needs to persist its state (in this case the user changing something inside the container), then that state must be saved somewhere... How would you do this with a VM? Answer: save it to disk.</p> <p>In k8s, storage is represented as a persistent volume. Something called a PVC (persistent volume claim) is used to maintain the relationship between the POD (your code) and the actual storage volume (whose implementation details you are abstracted from). The latest version of k8s supports dynamic creation of persistent volumes, so all you have to do is create a unique PVC specific to each user when deploying their container (I assume here you have a "Deployment" and "Service" for each user as well) — see the sketch below.</p> <p>In conclusion... it is unusual to run SSH within a container. Have you considered giving each user their own k8s environment instead? For example <a href="https://www.openshift.org/" rel="nofollow noreferrer">Openshift</a> is multi-tenanted. Indeed Red Hat are integrating OpenShift as a backend for <a href="http://www.eclipse.org/che/" rel="nofollow noreferrer">Eclipse Che</a>, thereby running the entire IDE on k8s. See:</p> <p><a href="https://openshift.io/" rel="nofollow noreferrer">https://openshift.io/</a></p>
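<p>Going back to the PVC approach: as a concrete illustration, here is a minimal sketch of a per-user claim and how a pod would mount it over the user's home directory. The names, size and image are made up, and dynamic provisioning assumes your cluster has a default StorageClass:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-alice            # one claim per user, name is illustrative
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: workspace-alice
spec:
  containers:
  - name: workspace
    image: ubuntu:16.04
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: home
      mountPath: /home/alice   # only this path survives pod rescheduling
  volumes:
  - name: home
    persistentVolumeClaim:
      claimName: home-alice
</code></pre> <p>Anything installed outside the mounted path (e.g. via apt-get into /usr) would still be lost on reschedule, which is why baking such changes into the image is usually the better fit for containers.</p>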
<p>I'm trying to delete a failed pod from the <code>Pods</code> page on the Kubernetes Web UI, but it is not getting deleted. </p> <p>I understand what the error itself is, and I believe I have resolved it by using <code>secrets</code>, since this is a private repo. That said, I cannot re-add the pod again correctly, since it already exists.</p> <p>Here is what I am seeing on the <code>Pods</code> page in the Kubernetes UI:</p> <p><strong>Pod Status:</strong> <code>Waiting: ContainerCreating</code></p> <p><strong>Error</strong>:</p> <pre><code>Failed to pull image "&lt;USERNAME&gt;/&lt;REPO&gt;:&lt;TAG&gt;": failed to run [fetch --no-store docker://&lt;USERNAME&gt;/&lt;REPO&gt;:&lt;TAG&gt;]: exit status 254 stdout: stderr: Flag --no-store has been deprecated, please use --pull-policy=update fetch: Unexpected HTTP code: 401, URL: https://xxxxxx.docker.io/v2/&lt;USERNAME&gt;/&lt;NAMESPACE&gt;/manifests/&lt;TAG&gt; Error syncing pod </code></pre> <p>I have also tried deleting the pod with <code>kubectl</code>, but <code>kubectl</code> can't even see the failed pod!</p> <pre><code>$ kubectl get pods No resources found $ kubectl get pods --show-all No resources found </code></pre> <p>Is there any other way that I can delete this pod?</p>
<p>I just found a solution to my own problem. Go to the <code>Workloads</code> page in the Kubernetes Web UI, and delete the associated <code>Deployment</code>, and the <code>Pod</code> will be deleted as well.</p> <p>If the pod does not get deleted after this, you will need to force a delete from the command line.</p> <p><code>kubectl delete pod &lt;POD_NAME&gt; --grace-period=0 --force</code></p>
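<p>For completeness, the same thing can be done from the command line — assuming the Deployment lives in the default namespace and its name matches what the Web UI shows:</p> <pre><code># deleting the Deployment also removes the pods it owns
kubectl delete deployment &lt;DEPLOYMENT_NAME&gt;
</code></pre>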
<p>I am trying to set up a Redis cluster on Kubernetes. One of my requirements is that my Redis cluster should be resilient to a Kubernetes cluster restart (due to issues like power failure).</p> <p>I have tried a Kubernetes StatefulSet and a Deployment. <br>In the StatefulSet case, on reboot a new set of IP addresses is assigned to the Pods, and since the Redis cluster works with IP addresses, the instances are not able to connect to each other and form the cluster again. <br>In the case of services with static IPs over individual Redis instance Deployments, Redis again stores the IP of the Pod even when I created the cluster using the static service IP addresses, so on reboot the instances are not able to connect to each other and form the cluster again.</p> <p><a href="https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md" rel="nofollow noreferrer">My redis-cluster statefulset config</a> <br><a href="https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-deployment.md" rel="nofollow noreferrer">My redis-cluster deployment config</a></p>
<p><code>Redis 4.0.0</code> solved this problem by adding support for <a href="https://github.com/antirez/redis/issues/2527" rel="nofollow noreferrer">announcing a cluster node's IP and port</a>.</p> <p>Set <code>cluster-announce-ip</code> to the static IP of the service in front of each Redis instance Deployment.</p> <p>Link to setup instructions: <a href="https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md" rel="nofollow noreferrer">https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md</a></p>
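<p>For illustration, the relevant settings in the Redis configuration (or passed as arguments to redis-server) look roughly like this — the IP below is a placeholder for the static ClusterIP of the service in front of that instance:</p> <pre><code># redis.conf snippet (Redis 4.0+)
cluster-enabled yes
cluster-announce-ip 10.96.0.100      # static service IP, not the pod IP
cluster-announce-port 6379
cluster-announce-bus-port 16379
</code></pre>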
<p>I have a kubernetes cluster that is running in out network and have setup an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running <code>sudo mount -t nfs 10.17.10.190:/export/test /mnt</code> but whenever my test pod tries to use an nfs persistent volume that points at that server it fails with this message:</p> <pre><code>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19s 19s 1 default-scheduler Normal Scheduled Successfully assigned nfs-web-58z83 to wal-vm-newt02 19s 3s 6 kubelet, wal-vm-newt02 Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs [] Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test </code></pre> <p>Does anyone know how I can fix this and make it so that I can mount from the external NFS server?</p> <p>The nodes of the cluster are running on <code>10.17.10.185 - 10.17.10.189</code> and all of the pods run with ips that start with <code>10.0.x.x</code>. All of the nodes on the cluster and the NFS server are running Ubuntu. The NFS server is running on <code>10.17.10.190</code> with this <code>/etc/exports</code>:</p> <pre><code>/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check) </code></pre> <p>I set up a persistent volume and persistent volume claim and they both create successfully showing this output from running <code>kubectl get pv,pvc</code>:</p> <pre><code>NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pv/test-nfs 1Mi RWX Retain Bound staging/test-nfs 15m NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE pvc/test-nfs Bound test-nfs 1Mi RWX 15m </code></pre> <p>They were created like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: test-nfs spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: # FIXME: use the right IP server: 10.17.10.190 path: "/exports/test" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-nfs spec: accessModes: - ReadWriteMany resources: requests: storage: 1Mi </code></pre> <p>My test pod is using this configuration:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: nfs-web spec: replicas: 1 selector: role: web-frontend template: metadata: labels: role: web-frontend spec: containers: - name: web image: nginx ports: - name: web containerPort: 80 volumeMounts: # name must match the volume name below - name: test-nfs mountPath: "/usr/share/nginx/html" volumes: - name: test-nfs persistentVolumeClaim: claimName: test-nfs </code></pre>
<p>You have to set a <strong>securityContext</strong> as <strong>privileged: true</strong>. Take a look at this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container" rel="nofollow noreferrer">link</a></p>
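<p>If it helps, this is roughly where that would go in the ReplicationController from the question — a sketch only, everything else stays as you had it:</p> <pre><code>containers:
- name: web
  image: nginx
  securityContext:
    privileged: true
  ports:
  - name: web
    containerPort: 80
  volumeMounts:
  - name: test-nfs
    mountPath: "/usr/share/nginx/html"
</code></pre>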
<p>I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system.</p> <p>Following is my setup: two VMs with the cluster deployed, one master and one node; the dashboard is running without any issue and kube-dns is also working as expected. The Kubernetes version is 1.7.</p> <p>Issue: when trying to access the dashboard externally through kubectl proxy, I get an unauthorized response.</p> <p>This is with RBAC roles and rolebindings enabled. How do I configure the cluster for HTTP browser access to the dashboard from an external system?</p> <p>Any hints/suggestions are most welcome.</p>
<p><code>kubectl proxy</code> does not work for dashboard versions &gt; 1.7.</p> <p>Try this instead: copy the ~/.kube/config file to your desktop, then run kubectl like this:</p> <pre><code> export POD_NAME=$(kubectl --kubeconfig=config get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
 echo http://127.0.0.1:9090/
 kubectl --kubeconfig=config -n kube-system port-forward $POD_NAME 9090:9090
</code></pre> <p>Then access the UI like this: <a href="http://127.0.0.1:9090" rel="nofollow noreferrer">http://127.0.0.1:9090</a></p> <p>Hope this helps.</p>
<p>In order to do some PoCs with Mesos, Kubernetes, DC/OS and more, I would like to build a small cluster of 3-5 nodes. I started to build a cluster on AWS, but it quickly became expensive. So I was wondering if there is a good way to build such a cluster without spending too much money on it.</p>
<p>There is a blog where the author gives a good explanation and open source code for a cheap 3-node HA production cluster. It's on DigitalOcean but could be replicated with any similar hosting provider. It's too long to post everything here, so check this <a href="https://5pi.de/2016/11/20/15-producation-grade-kubernetes-cluster/" rel="nofollow noreferrer">link</a>.</p>
<p>I'm trying to install kubernetes cluster (v1.7.2) with 2 nodes. And using weave as cni. When joining the other node, kubeadm complains of hostname</p> <pre><code>[root@ctdpc001572 ~]# kubeadm join --token c5ba8a.6bcb25f017648271 10.41.30.50:6443 [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. [preflight] Running pre-flight checks [preflight] WARNING: hostname "" could not be reached [preflight] WARNING: hostname "" lookup : no such host [preflight] Some fatal errors occurred: hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') [preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks` </code></pre> <p>I'm using centos 7.3</p> <pre><code>Linux ctdpc001572.ctd.internal.com 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <p>Installed yum packages</p> <pre><code>Loaded plugins: fastestmirror, langpacks, versionlock Loading mirror speeds from cached hostfile Installed Packages kubeadm.x86_64 1.7.2-0 @kubernetes kubectl.x86_64 1.7.2-0 @kubernetes kubelet.x86_64 1.7.2-0 @kubernetes kubernetes-cni.x86_64 0.5.1-0 @kubernetes Available Packages kubernetes.x86_64 1.5.2-0.7.git269f928.el7 extras kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 extras kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 extras kubernetes-node.x86_64 1.5.2-0.7.git269f928.el7 extras kubernetes-unit-test.x86_64 1.5.2-0.7.git269f928.el7 extras </code></pre> <p>Steps:</p> <pre><code>$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni $ systemctl enable docker &amp;&amp; systemctl start docker $ systemctl enable kubelet &amp;&amp; systemctl start kubelet $ systemctl stop firewalld; systemctl disable firewalld $ kubeadm init --apiserver-advertise-address=10.41.30.50 $ mkdir $HOME/.kube $ cp /etc/kubernetes/admin.conf $HOME/.kube/config #set IPALLOC_RANGE to 172.40.0.0/16 in https://git.io/weave-kube-1.6 $ kubectl apply -f weave-kube-1.6.yaml #schedule pods on master $ kubectl taint nodes --all node-role.kubernetes.io/master- #disable access control $ kubectl create clusterrolebinding permissive-binding \ --clusterrole=cluster-admin \ --user=admin \ --user=kubelet \ --group=system:serviceaccounts # joining other node $ kubeadm join --token c5ba8a.6bcb25f017648271 10.41.30.50:6443 </code></pre> <p>When running</p> <pre><code>kubeadm join --token c5ba8a.6bcb25f017648271 10.41.30.50:6443 --skip-preflight-checks </code></pre> <p>I see following error in weave-kube pod:</p> <pre><code>2017/07/29 16:36:39 error contacting APIServer: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout; trying with fallback: http://localhost:8080 2017/07/29 16:36:39 Could not get peers: Get http://localhost:8080/api/v1/nodes: dial tcp [::1]:8080: getsockopt: connection refused Failed to get peers </code></pre>
<p>Adding an IP route manually on the other node resolved the issue. :sweat-smile:</p> <pre><code>route add 10.96.0.1 gw &lt;your real master IP&gt;
</code></pre>
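<p>Note that a route added with <code>route add</code> is lost on reboot; on CentOS 7 you would typically persist it in a <code>route-&lt;interface&gt;</code> file. A sketch, assuming the node's primary interface is eth0:</p> <pre><code># /etc/sysconfig/network-scripts/route-eth0
10.96.0.1/32 via &lt;your real master IP&gt; dev eth0
</code></pre>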
<p>I'm using this <code>Dockerfile</code> to deploy PostgreSQL on OpenShift: <a href="https://github.com/sclorg/postgresql-container/tree/master/9.5" rel="nofollow noreferrer">https://github.com/sclorg/postgresql-container/tree/master/9.5</a></p> <p>It works fine until I enable <code>ssl=on</code> and inject the <code>server.crt</code> and <code>server.key</code> files into the postgres pod via the volume mount option.</p> <p>The secret is created like this:</p> <pre><code>$ oc secret new postgres-secrets \
  server.key=postgres/server.key \
  server.crt=postgres/server.crt \
  root-ca.crt=ca-cert
</code></pre> <p>The volume is created as below and attached to the postgres <code>DeploymentConfig</code>:</p> <pre><code>$ oc volume dc/postgres \
  --add --type=secret \
  --secret-name=postgres-secrets \
  --default-mode=0600 \
  -m /var/lib/pgdata/data/secrets/secrets/
</code></pre> <p>The problem is that the mounted <code>server.crt</code> and <code>server.key</code> files are owned by the <code>root</code> user, but <code>postgres</code> expects them to be owned by the <code>postgres</code> user. Because of that the postgres server won't come up and reports this error:</p> <blockquote> <p>waiting for server to start....FATAL: could not load server certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt": Permission denied stopped waiting pg_ctl: could not start server</p> </blockquote> <p>How can we mount a volume and update the <code>uid:gid</code> of the files in it?</p>
<p>It looks like this is not trivial, as it requires setting a volume security context so all the containers in the pod run as a certain user: <a href="https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html" rel="nofollow noreferrer">https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html</a></p> <p>In the Kubernetes project this is still under discussion (<a href="https://github.com/kubernetes/kubernetes/issues/2630" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/2630</a>), but it seems that you may have to use security contexts and PodSecurityPolicies in order to make it work.</p> <p>I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case).</p>
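<p>As an illustration of that last option, here is a minimal wrapper script you could bake into a derived image. This is a sketch, not part of the sclorg image: it assumes the container starts with enough privileges to chown, that the secret is mounted where the question shows, and that the image's normal start command is <code>run-postgresql</code> (adjust to whatever your image actually uses):</p> <pre><code>#!/bin/bash
set -e

SECRET_DIR=/var/lib/pgdata/data/secrets/secrets
DATA_DIR=/var/lib/pgdata/data

# Secret volumes are mounted read-only and owned by root, so copy the
# files to a writable location before fixing ownership and permissions.
cp "$SECRET_DIR/server.crt" "$SECRET_DIR/server.key" "$DATA_DIR/"
chown postgres:postgres "$DATA_DIR/server.crt" "$DATA_DIR/server.key"
chmod 0600 "$DATA_DIR/server.key"

# Hand over to the image's normal entrypoint.
exec run-postgresql "$@"
</code></pre> <p>You would then point <code>ssl_cert_file</code>/<code>ssl_key_file</code> at the copied locations instead of the secret mount.</p>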
<p>We have a Kubernetes cluster which has a Dropwizard-based web application running as a service. This application has a REST URI to upload files, and it cannot upload files larger than 1MB. I get the following error:</p> <pre><code>ERROR [2017-07-27 13:32:47,629] io.dropwizard.jersey.errors.LoggingExceptionMapper: Error handling a request: ea812501b414f0d9
! com.fasterxml.jackson.core.JsonParseException: Unexpected character ('&lt;' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
! at [Source: &lt;html&gt;
! &lt;head&gt;&lt;title&gt;413 Request Entity Too Large&lt;/title&gt;&lt;/head&gt;
! &lt;body bgcolor="white"&gt;
! &lt;center&gt;&lt;h1&gt;413 Request Entity Too Large&lt;/h1&gt;&lt;/center&gt;
! &lt;hr&gt;&lt;center&gt;nginx/1.11.3&lt;/center&gt;
! &lt;/body&gt;
! &lt;/html&gt;
</code></pre> <p>I have tried the suggestions given in <a href="https://github.com/nginxinc/kubernetes-ingress/issues/21" rel="noreferrer">https://github.com/nginxinc/kubernetes-ingress/issues/21</a>. I have edited the Ingress to set the proxy-body-size annotation. I have also tried using the ConfigMap, without any success. We are using Kubernetes version 1.5. Please let me know if you need additional information.</p>
<p>Had this on my setup as well. Two pieces of advice here:</p> <p>1: switch to the official Kubernetes nginx ingress controller, it's awesome (<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>)</p> <p>2: with the above ingress controller, you can add an annotation to your Ingresses to control the body size limit on a per-Ingress basis, like this:</p> <pre><code>annotations:
  ingress.kubernetes.io/proxy-body-size: 10m
</code></pre> <p>Works great.</p>
<p>I have two services, <code>tea</code> and <code>coffee</code>, each is in their own namespace, I would like <code>domain.com</code> to go to the <code>tea</code> service and <code>domain.com/coffee</code> to go to <code>coffee</code>.</p> <p>As each is in a namespace I have had to make two pieces of ingress, but when I try to apply them I get the error <code>MAPPING Path '/coffee' already defined in another Ingress rule</code>.</p> <p>My two pieces of ingress look like the following:</p> <p>Tea:</p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 spec: tls: - hosts: - domain.com secretName: tea-tls rules: - host: domain.com http: paths: - path: / backend: serviceName: tea servicePort: 80 </code></pre> <p>and Coffee:</p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: coffee namespace: coffee spec: tls: - hosts: - domain.com secretName: coffee}-tls rules: - host: domain.com http: paths: - path: /coffee backend: serviceName: coffee servicePort: 80 - path: /coffee/* backend: serviceName: coffee servicePort: 80 </code></pre>
<p>I guess the problem isn't having tea and coffee in separate namespaces, but defining the <code>/coffee</code> path twice in the coffee ingress. According to <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout</a> I would assume that you only need the <code>/coffee</code> path and can delete the <code>/coffee/*</code> path.</p>
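<p>In other words, the coffee Ingress would be trimmed to something like this — a sketch based on your manifest, with everything else left unchanged:</p> <pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: coffee
  namespace: coffee
spec:
  tls:
  - hosts:
    - domain.com
    secretName: coffee-tls
  rules:
  - host: domain.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee
          servicePort: 80
</code></pre>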
<p>I'm using Kubernetes and changed all my replication controllers to Deployments (the replacement for RCs in k8s).</p> <p>I used to expose pod information to containers through environment variables, as described here:</p> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</a></p> <p>So I used it like this:</p> <pre><code> - name: MY_POD_NAME
   valueFrom:
     fieldRef:
       fieldPath: metadata.name
</code></pre> <p>and it was working. After changing to Deployments, it looks like <code>metadata.name</code> is no longer defined and I cannot use it as an environment variable anymore.</p> <p>Does anyone know if this functionality still works with Deployments?</p>
<p>This deployment works for me. Can you test it in your cluster?</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    k8s-app: nginx
    tier: network-tools
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: SSHD
          value: "TRUE"
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 80
</code></pre>